Chicken or egg: splitting IaC



Which came first - the chicken or the egg? Quite a strange start for an article on Infrastructure-as-Code, isn't it?



What is an egg?



Most often, Infrastructure-as-Code (IaC) is a declarative way of representing infrastructure. In it we describe the state we want to end up with, from the hardware all the way up to the software configuration. Accordingly, IaC is used for:



  1. Resource provisioning: VMs, S3 buckets, VPCs, etc. Basic tools: Terraform and CloudFormation.
  2. Software configuration. Basic tools: Ansible, Chef, etc.


All code lives in Git repositories. And sooner or later the team lead will decide to put them in order. And he will refactor. And create some structure. And he will see that it is good.



It's also convenient that there are already GitLab and GitHub providers for Terraform (and that is software configuration). With them you can manage the entire project: team members, CI/CD, git flow, etc.



Where did the egg come from?



So we gradually come to the main question.



First of all, you need to start with a repository that describes the structure of the other repositories, including itself. And of course, in the spirit of GitOps, you need to add CI so that changes are applied automatically.



But what if Git hasn't been set up yet?



  1. How do I store it in Git?
  2. How do I bolt on CI?
  3. What if we also deploy GitLab itself with IaC, and in Kubernetes at that?
  4. And GitLab Runner in Kubernetes too?
  5. And Kubernetes itself at a cloud provider?


Which came first: the GitLab where I will push my code, or the code describing which GitLab I need?


Chicken with eggs



" Oyakodon 3 dinosaur» [ src ]



Let's try to cook this dish, using Selectel Managed Kubernetes as the cloud provider.



TL;DR



Can it all be done right away, with a single command?



$ export MY_SELECTEL_TOKEN=<token>
$ curl https://gitlab.com/chicken-or-egg/mks/make/-/snippets/2002106/raw | bash




Ingredients:



  • An account at my.selectel.ru;
  • A token for that account;
  • Kubernetes skills;
  • Helm skills;
  • Terraform skills;
  • The GitLab Helm chart;
  • The GitLab Runner Helm chart.


Recipe:



  1. Get MY_SELECTEL_TOKEN from the my.selectel.ru panel.
  2. Create a Kubernetes cluster, passing the account token to it.
  3. Get the KUBECONFIG for the created cluster.
  4. Install GitLab on Kubernetes.
  5. Get a GitLab token for the root user from the freshly installed GitLab.
  6. Create the project structure in GitLab using the GitLab token.
  7. Push the existing code to GitLab.
  8. ???
  9. Profit!


Step 1. The token can be obtained in the API Keys section.



Step 2. We prepare our Terraform for baking a two-node cluster. If you are sure you have enough resources for everything, you can enable auto-quotas:



provider "selectel" {
 token = var.my_selectel_token
}

variable "my_selectel_token" {}
variable "username" {}
variable "region" {}


resource "selectel_vpc_project_v2" "my-k8s" {
 name = "my-k8s-cluster"
 theme = {
   color = "269926"
 }
 quotas {
   resource_name = "compute_cores"
   resource_quotas {
     region = var.region
     zone = "${var.region}a"
     value = 16
   }
 }
 quotas {
   resource_name = "network_floatingips"
   resource_quotas {
     region = var.region
     value = 1
   }
 }
 quotas {
   resource_name = "load_balancers"
   resource_quotas {
     region = var.region
     value = 1
   }
 }
 quotas {
   resource_name = "compute_ram"
   resource_quotas {
     region = var.region
     zone = "${var.region}a"
     value = 32768
   }
 }
 quotas {
   resource_name = "volume_gigabytes_fast"
   resource_quotas {
     region = var.region
     zone = "${var.region}a"
     # (20 * 2) + 50 + (8 * 3 + 10)
     value = 130
   }
 }
}

resource "selectel_mks_cluster_v1" "k8s-cluster" {
 name         = "k8s-cluster"
 project_id   = selectel_vpc_project_v2.my-k8s.id
 region       = var.region
 kube_version = "1.17.9"
}

resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
 cluster_id        = selectel_mks_cluster_v1.k8s-cluster.id
 project_id        = selectel_mks_cluster_v1.k8s-cluster.project_id
 region            = selectel_mks_cluster_v1.k8s-cluster.region
 availability_zone = "${var.region}a"
 nodes_count       = 2
 cpus              = 8
 ram_mb            = 16384
 volume_gb         = 15
 volume_type       = "fast.${var.region}a"
 labels            = {
   "project": "my",
 }
}


Add a user to the project:



resource "random_password" "my-k8s-user-pass" {
 length = 16
 special = true
 override_special = "_%@"
}

resource "selectel_vpc_user_v2" "my-k8s-user" {
 password = random_password.my-k8s-user-pass.result
 name = var.username
 enabled  = true
}

resource "selectel_vpc_keypair_v2" "my-k8s-user-ssh" {
 public_key = file("~/.ssh/id_rsa.pub")
 user_id    = selectel_vpc_user_v2.my-k8s-user.id
 name = var.username
}

resource "selectel_vpc_role_v2" "my-k8s-role" {
 project_id = selectel_vpc_project_v2.my-k8s.id
 user_id    = selectel_vpc_user_v2.my-k8s-user.id
}


Output:



output "project_id" {
 value = selectel_vpc_project_v2.my-k8s.id
}

output "k8s_id" {
 value = selectel_mks_cluster_v1.k8s-cluster.id
}

output "user_name" {
 value = selectel_vpc_user_v2.my-k8s-user.name
}

output "user_pass" {
 value = selectel_vpc_user_v2.my-k8s-user.password
}


Launch (after the usual terraform init):



$ env \
TF_VAR_region=ru-3 \
TF_VAR_username=diamon \
TF_VAR_my_selectel_token=<token> \
terraform plan -out planfile

$ terraform apply -input=false -auto-approve planfile
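

After apply, it is convenient to save the outputs we will need below (a sketch; terraform output -raw requires Terraform 0.14+, on older versions drop the flag):

$ terraform output -raw k8s_id > k8s_id
$ terraform output -raw project_id > project_id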




Step 3. Get the kubeconfig.



To programmatically download KUBECONFIG, you need to get a token from OpenStack:



openstack token issue -c id -f value > token
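

For this command to work, the OpenStack CLI first needs credentials. A sketch of the assumed environment - the auth URL and the account number as user domain are assumptions here, check the Selectel documentation:

export OS_AUTH_URL=https://api.selvpc.ru/identity/v3   # assumed Selectel identity endpoint
export OS_USERNAME=$(terraform output -raw user_name)  # from the Terraform outputs above
export OS_PASSWORD=$(terraform output -raw user_pass)
export OS_PROJECT_ID=$(cat project_id)
export OS_USER_DOMAIN_NAME=<account_id>                # assumption: your my.selectel.ru account number
export OS_REGION_NAME=ru-3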


And with this token, make a request to the Selectel Managed Kubernetes API. The k8s_id comes from the Terraform output:



curl -XGET -H "x-auth-token: $(cat token)" "https://ru-3.mks.selcloud.ru/v1/clusters/$(cat k8s_id)/kubeconfig" -o kubeConfig.yaml


The kubeconfig can also be downloaded through the panel.
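

Either way, point kubectl at the downloaded file and check that the cluster answers:

export KUBECONFIG=$PWD/kubeConfig.yaml
kubectl get nodes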





Step 4. Once the cluster is baked and we have access to it, we can layer YAML on top to taste.



I prefer to add:



  • a namespace,
  • a storage class,
  • a pod security policy, and more.


The StorageClass for Selectel can be taken from the official repository.



Since I originally created the cluster in the ru-3a zone, I also need a StorageClass for that zone.



kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: fast.ru-3a
 annotations:
   storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
 type: fast.ru-3a
 availability: ru-3a
allowVolumeExpansion: true
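

Apply it with kubectl (assuming the manifest was saved as storage-class.yaml - the file name is arbitrary):

kubectl apply -f storage-class.yaml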


Step 5. We install the load balancer.



We will use nginx-ingress, the standard choice for many. There are already plenty of installation guides for it, so we won't dwell on that.



$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm upgrade nginx-ingress nginx-stable/nginx-ingress -n ingress --install -f ../internal/K8S-cluster/ingress/values.yml


We wait about 3-4 minutes for it to receive an external IP:





Received external IP:
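

To capture it for the steps below (a sketch - the service name depends on the Helm release, check kubectl get svc -n ingress):

EXTERNAL_IP=$(kubectl get svc nginx-ingress-nginx-ingress -n ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $EXTERNAL_IP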





Step 6. Install GitLab.



$ helm repo add gitlab https://charts.gitlab.io
$ helm upgrade gitlab gitlab/gitlab -n gitlab  --install -f gitlab/values.yml --set "global.hosts.domain=gitlab.$EXTERNAL_IP.nip.io"


Again we wait for all the pods to come up.



kubectl get po -n gitlab
NAME                                      	READY   STATUS  	RESTARTS   AGE
gitlab-gitaly-0                           	0/1 	Pending 	0      	0s
gitlab-gitlab-exporter-88f6cc8c4-fl52d    	0/1 	Pending 	0      	0s
gitlab-gitlab-runner-6b6867c5cf-hd9dp     	0/1 	Pending 	0      	0s
gitlab-gitlab-shell-55cb6ccdb-h5g8x       	0/1 	Init:0/2	0      	0s
gitlab-migrations.1-2cg6n                 	0/1 	Pending 	0      	0s
gitlab-minio-6dd7d96ddb-zd9j6             	0/1 	Pending 	0      	0s
gitlab-minio-create-buckets.1-bncdp       	0/1 	Pending 	0      	0s
gitlab-postgresql-0                       	0/2 	Pending 	0      	0s
gitlab-prometheus-server-6cfb57f575-v8k6j 	0/2 	Pending 	0      	0s
gitlab-redis-master-0                     	0/2 	Pending 	0      	0s
gitlab-registry-6bd77b4b8c-pb9v9          	0/1 	Pending 	0      	0s
gitlab-registry-6bd77b4b8c-zgb6r          	0/1 	Init:0/2	0      	0s
gitlab-shared-secrets.1-pc7-5jgq4         	0/1 	Completed   0      	20s
gitlab-sidekiq-all-in-1-v1-54dbcf7f5f-qbq67   0/1 	Pending 	0      	0s
gitlab-task-runner-6fd6857db7-9x567       	0/1 	Pending 	0      	0s
gitlab-webservice-d9d4fcff8-hp8wl         	0/2 	Pending 	0      	0s
Waiting gitlab
./wait_gitlab.sh ../internal/gitlab/gitlab/.pods
waiting for pod...
waiting for pod...
waiting for pod...


The pods are up:





Step 7. Get the GitLab token.



First, we find out the root password for logging in:



kubectl get secret -n gitlab gitlab-gitlab-initial-root-password -o jsonpath='{.data.password}' | base64 --decode
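

For the next step it is convenient to capture it into the GITLAB_PASSWORD variable used below:

GITLAB_PASSWORD=$(kubectl get secret -n gitlab gitlab-gitlab-initial-root-password -o jsonpath='{.data.password}' | base64 --decode)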


Now let's log in and get the token:



python3 get_gitlab_token.py root $GITLAB_PASSWORD http://gitlab.gitlab.$EXTERNAL_IP.nip.io
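

Under the hood this boils down to a single HTTP call. A rough curl equivalent (a sketch: it assumes the OAuth resource owner password grant, which self-managed GitLab allows by default, and that jq is installed):

curl -s -X POST "http://gitlab.gitlab.$EXTERNAL_IP.nip.io/oauth/token" \
  -d "grant_type=password&username=root&password=$GITLAB_PASSWORD" | jq -r '.access_token'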


Step 8. Arrange the Git repositories into the proper hierarchy using the GitLab provider.



cd ../internal/gitlab/hierarchy && terraform apply -input=false -auto-approve planfile


Unfortunately, the Terraform GitLab provider has a flaky bug. When it strikes, you have to remove the conflicting projects from tf.state by hand so that the state is repaired, and then rerun `$ make all`, as shown in the sketch below.
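

A hypothetical repair session (the resource address is made up - take the real one from the error message or from terraform state list):

$ terraform state rm 'gitlab_project.my_project'
$ make all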


Step 9. Push the local repositories to the server.



$ make push

[master (root-commit) b61d977]  Initial commit
 3 files changed, 46 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 values.yml
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 770 bytes | 770.00 KiB/s, done.
Total 5 (delta 0), reused 0 (delta 0)


Done:











Conclusion



We have achieved the ability to manage everything declaratively from a local machine. Now I want to move all of these tasks into CI and just push buttons. To do that, we need to hand our local Terraform state over to CI. How to do that - in the next part.
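

As a teaser, one possible direction (a sketch, not necessarily what the next part does): since version 13.0, GitLab can store Terraform state itself via Terraform's http backend. Assuming a backend "http" {} block is declared in the configuration, and with $PROJECT_ID and the state name my-k8s as placeholders:

terraform init \
  -backend-config="address=http://gitlab.gitlab.$EXTERNAL_IP.nip.io/api/v4/projects/$PROJECT_ID/terraform/state/my-k8s" \
  -backend-config="username=root" \
  -backend-config="password=$GITLAB_TOKEN"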



Subscribe to our blog so you don't miss new articles!


