Deploying to GKE without plugins, SMS, or registration: peeking under Jenkins' hood with one eye

It all started when the team lead of one of our development teams asked us to try out their new application, which had been containerized the day before. I deployed it. About 20 minutes later, a request came in to update the application, because a very necessary piece had just been finished. I updated it. After another couple of hours... well, you



already guess what happened next... I must admit, I'm rather lazy (did I admit this earlier? No?). And, given that team leads have access to Jenkins, where all our CI/CD lives, I thought: let him deploy as much as he pleases! I remembered a joke: give a man a fish and he will be fed for a day; call a man Sated and he will be Sated all his life. So I went off to play with a job that could deploy a container with any successfully built version of the application into Kubernetes and pass it any ENV values (my grandfather, a philologist and former English teacher, would twist his finger at his temple and look at me very expressively after reading that sentence).



So, in this post I will talk about how I learned to:



  1. Dynamically update jobs in Jenkins from the job itself or from other jobs;
  2. Connect to the cloud console (Cloud shell) from the node with the Jenkins agent installed;
  3. Deploy a workload to the Google Kubernetes Engine.


In fact, I am being a little sly here, of course. It is assumed that at least part of your infrastructure lives in Google Cloud, which means you are its user and, of course, have a GCP account. But this note is not about that.



This is another of my cheat sheets. I write such notes in only one case: I faced a problem, did not initially know how to solve it, the solution could not be googled ready-made, so I googled it in pieces and eventually put them together. And so that in the future, when I forget how I did it, I do not have to google everything piece by piece again, I write myself a cheat sheet like this.

Disclaimer: what is described below is not claimed to be the "one true way" or best practice; it is simply "how I did it".

Dynamically updating jobs in Jenkins



I foresee your question: what does dynamic job updating have to do with anything? Just type the value into a string parameter by hand and off you go!



The answer is: I really am lazy, and I don't like hearing complaints like "Misha, the deployment is crashing, everything is lost!" You start digging, and it turns out there is a typo in the value of some job launch parameter. That is why I prefer to lock everything down as much as possible. If it is possible to keep the user from typing data by hand and instead give them a list of values to pick from, then I organize a selection.



The plan is as follows: create a job in Jenkins in which, before launch, you can select a version from a list and specify values for the parameters passed to the container via ENV; the job then builds the container and pushes it to the Container Registry. From there, the container is launched in Kubernetes as a workload with the parameters specified in the job.



We will not cover creating and configuring a job in Jenkins itself; that is off-topic here. We will assume the job already exists. To implement an updatable version list, we need two things: an existing source list with a priori valid version numbers and a variable of type Choice Parameter in the job. In our example, let the variable be named BUILD_VERSION; we will not dwell on it in detail. But let's take a closer look at the source list.



There are not that many options. Two immediately came to mind:



  • Use the Remote access API that Jenkins offers to its users;
  • Query the contents of the remote repository folder (in our case, this is JFrog Artifactory, which is not important).


Jenkins Remote access API



According to the established fine tradition, I prefer to avoid lengthy explanations.

I will allow myself only a free translation of a piece of the first paragraph of the first page of the API documentation:

Jenkins provides an API for remote, machine-readable access to its functionality. <...> Remote access is offered in a REST-like style. This means there is no single entry point for all capabilities; instead, a URL like ".../api/" is used, where "..." is the object to which the API capabilities are applied.
In other words, if the deployment job we are currently talking about is available at http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_build, then the API goodies for this job are available at http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_build/api/

Next, we choose the form in which to receive the output. Let's settle on XML, since the API only allows filtering in that case. Let's just try to get a list of all job runs. We are interested only in the build name (displayName) and its result (result):











http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_build/api/xml?tree=allBuilds[displayName,result]


Did it work?



Now let's filter out only the runs that finished with a SUCCESS result. We use the &exclude argument and pass it, as a parameter, the path to values not equal to SUCCESS. Yes, yes: a double negative is an affirmative. We exclude everything that does not interest us:



http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_build/api/xml?tree=allBuilds[displayName,result]&exclude=freeStyleProject/allBuild[result!='SUCCESS']


Screenshot of the list of successful builds




Well, just for fun, let's make sure the filter did not deceive us (filters never lie!) and display the list of "unsuccessful" ones:



http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_build/api/xml?tree=allBuilds[displayName,result]&exclude=freeStyleProject/allBuild[result='SUCCESS']


Screenshot of the list of unsuccessful builds




List of versions from a folder on a remote server



There is a second way to get the version list, and I like it even more than calling the Jenkins API. Because if the application was built successfully, then it was packaged and put into the repository, into the appropriate folder. The repository is, after all, the default storage of working application versions. So let's just ask it which versions are in storage. We will curl, grep and awk the remote folder. If anyone is interested in the one-liner, it is under the spoiler.



One line command
Note: the command assumes artifact names include the month and year; it keeps only versions from the current and the previous month:



curl -H "X-JFrog-Art-Api:VeryLongAPIKey" -s http://arts.myre.po/artifactory/awesomeapp/ | sed 's/a href=//' | grep "$(date +%b)-$(date +%Y)\|$(date +%b --date='-1 month')-$(date +%Y)" | awk '{print $1}' | grep -oP '>\K[^/]+'




Job setup and job configuration file in Jenkins



So, the source of the version list is sorted out. Now let's wire the resulting list into the job. The obvious solution for me was to add a step to the application build job — a step that runs only if the build result is "success".



Open the build job settings and scroll to the bottom. Click: Add build step -> Conditional step (single). In the step settings, select the condition Current build status, set the value to SUCCESS, and set the action to run on success to Run shell command.



And now the fun part. Jenkins stores job configurations in files, in XML format, at <job URL>/config.xml. Accordingly, you can download the configuration file, edit it as needed, and put it back where you took it from.



Remember, above we agreed to create a BUILD_VERSION parameter for the version list?



Let's download the config file and take a look inside, just to make sure the parameter is in place and really of the right type.



Screenshot under the spoiler.



Your config.xml snippet should look the same, except that the choices element has no content yet.
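For reference, the fragment in question, reconstructed from the XPath expressions used in the scripts later in this note, might look roughly like this (element names may differ slightly between Jenkins versions):

```xml
<hudson.model.ParametersDefinitionProperty>
  <parameterDefinitions>
    <hudson.model.ChoiceParameterDefinition>
      <name>BUILD_VERSION</name>
      <choices class="java.util.Arrays$ArrayList">
        <!-- the version list will be written here as <string> children -->
        <a class="string-array"/>
      </choices>
    </hudson.model.ChoiceParameterDefinition>
  </parameterDefinitions>
</hudson.model.ParametersDefinitionProperty>
```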




Convinced? All right, let's write the script that will run on a successful build.

The script will fetch the list of versions, download the configuration file, write the version list into it at the place we need, and then put it back. Yes, that's right: write a list of versions, in XML, into the place where there is already a list of versions (well, there will be, after the first run of the script). I know there are still fierce lovers of regular expressions out there; I am not one of them. Please install xmlstarlet on the machine where the config will be edited. It seems to me a small price to pay to avoid editing XML with sed.



Under the spoiler, I cite the code that performs the entire sequence described above.



We write the list of versions from the folder on the remote server into the config
#!/bin/bash
############## Download the job config
curl -X GET -u username:apiKey http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_k8s/config.xml -o appConfig.xml

############## Remove the old version list from the XML config and recreate an empty one
xmlstarlet ed --inplace -d '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]/a[@class="string-array"]' appConfig.xml

xmlstarlet ed --inplace --subnode '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]' --type elem -n a appConfig.xml

xmlstarlet ed --inplace --insert '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]/a' --type attr -n class -v string-array appConfig.xml

############## Get the version list from the folder on the remote server
readarray -t vers < <( curl -H "X-JFrog-Art-Api:VeryLongAPIKey" -s http://arts.myre.po/artifactory/awesomeapp/ | sed 's/a href=//' | grep "$(date +%b)-$(date +%Y)\|$(date +%b --date='-1 month')-$(date +%Y)" | awk '{print $1}' | grep -oP '>\K[^/]+' )

############## Write the version list into the config
printf '%s\n' "${vers[@]}" | sort -r | \
                while IFS= read -r line
                do
                    xmlstarlet ed --inplace --subnode '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]/a[@class="string-array"]' --type elem -n string -v "$line" appConfig.xml
                done

############## Upload the config back
curl -X POST -u username:apiKey http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_k8s/config.xml --data-binary @appConfig.xml

############## Remove the temporary file
rm -f appConfig.xml




If you liked the option of getting versions from Jenkins more, and you are as lazy as I am, then under the spoiler is the same code, but with the list taken from Jenkins:



We write the list of versions from Jenkins into the config
Note: the build display names here look like AppName:version, which is why awk splits on the colon and keeps the second field.



#!/bin/bash
############## Download the job config
curl -X GET -u username:apiKey http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_k8s/config.xml -o appConfig.xml

############## Remove the old version list from the XML config and recreate an empty one
xmlstarlet ed --inplace -d '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]/a[@class="string-array"]' appConfig.xml

xmlstarlet ed --inplace --subnode '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]' --type elem -n a appConfig.xml

xmlstarlet ed --inplace --insert '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]/a' --type attr -n class -v string-array appConfig.xml

############## Get the list of successful builds from Jenkins
curl -g -X GET -u username:apiKey 'http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_build/api/xml?tree=allBuilds[displayName,result]&exclude=freeStyleProject/allBuild[result!=%22SUCCESS%22]&pretty=true' -o builds.xml

############## Extract the version list from the XML
readarray -t vers < <(xmlstarlet sel -t -v "freeStyleProject/allBuild/displayName" builds.xml | awk -F":" '{print $2}')

############## Write the version list into the config
printf '%s\n' "${vers[@]}" | sort -r | \
                while IFS= read -r line
                do
                    xmlstarlet ed --inplace --subnode '/project/properties/hudson.model.ParametersDefinitionProperty/parameterDefinitions/hudson.model.ChoiceParameterDefinition[name="BUILD_VERSION"]/choices[@class="java.util.Arrays$ArrayList"]/a[@class="string-array"]' --type elem -n string -v "$line" appConfig.xml
                done

############## Upload the config back
curl -X POST -u username:apiKey http://jenkins.mybuild.er/view/AweSomeApp/job/AweSomeApp_k8s/config.xml --data-binary @appConfig.xml

############## Remove the temporary files
rm -f appConfig.xml builds.xml




In theory, if you have tested code written on the basis of the examples above, the deployment job should already have a drop-down list of versions — something like the screenshot under the spoiler.



Correctly populated version list




If everything works, copy the script into the Run shell command step and save the changes.



Cloud shell connection



Our builds live in containers. We use Ansible as our application delivery and configuration manager. Accordingly, when it comes to building containers, three options come to mind: install Docker in Docker, install Docker on the machine with Ansible, or build containers in the cloud console. We agreed to keep silent about Jenkins plugins in this article, remember?



I decided: well, since containers can be built "out of the box" in the cloud console, why overcomplicate things? Keep it clean, right? I want to build containers in the cloud console from Jenkins and then shoot them into Kubernetes from there. Moreover, Google has very fat pipes inside its infrastructure, which benefits deployment speed.



Two things are needed to connect to the cloud console: gcloud and Google Cloud API access rights for the VM instance from which the connection will be made.



For those who plan to connect not from the Google cloud at all
In short: install the Google Cloud SDK (gcloud) on your *nix machine and authenticate with it; after that, everything described below works the same way as from a VM inside Google Cloud.



The easiest way to grant permissions is through the web interface.



  1. Stop the VM instance from which you will later connect to the cloud console.
  2. Open the instance details and click Edit.
  3. At the very bottom of the page, set the instance access scope to Full access to all Cloud APIs.



    Screenshot


  4. Save your changes and start the instance.
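If you prefer the CLI to the web interface, the same scope change can be sketched with gcloud (the instance name jenkins-agent and the zone are placeholders for illustration, not names from this note):

```shell
# Scopes can only be changed while the instance is stopped.
gcloud compute instances stop jenkins-agent --zone=us-central1-c

# Grant full access to all Cloud APIs (the cloud-platform scope).
gcloud compute instances set-service-account jenkins-agent \
    --zone=us-central1-c \
    --scopes=https://www.googleapis.com/auth/cloud-platform

gcloud compute instances start jenkins-agent --zone=us-central1-c
```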


After the VM has finished booting, connect to it via SSH and make sure the connection to the cloud console succeeds. Use the command:



gcloud alpha cloud-shell ssh


A successful connection looks something like this




Deploy to GKE



Since we strive in every possible way to move fully to IaC (Infrastructure as Code), we keep our dockerfiles in git. That is one side. On the other, a deployment in Kubernetes is described by a yaml file used only by this job, which is itself also code of a sort. In general, the plan is as follows:



  1. Take the value of the BUILD_VERSION variable and, optionally, the values of the variables to be passed via ENV;
  2. Download the dockerfile from git;
  3. Generate a yaml file for the deployment;
  4. Upload both files to the cloud console via scp;
  5. Build the container there and push it to the Container Registry;
  6. Apply the deployment file to the Kubernetes cluster.


Let's get more specific. Since we started talking about ENV, suppose we need to pass the values of two parameters: PARAM1 and PARAM2. Add them to the deployment job as parameters of type String Parameter.



Screenshot




We will generate the yaml by simply redirecting echo to a file. It is assumed, of course, that your dockerfile uses PARAM1 and PARAM2, that the workload will be named awesomeapp, and that the built container with the specified application version sits in the Container Registry at gcr.io/awesomeapp/awesomeapp-$BUILD_VERSION, where $BUILD_VERSION was just selected from the drop-down list.



Listing commands
echo "apiVersion: apps/v1" > deploy.yaml
echo "kind: Deployment" >> deploy.yaml
echo "metadata:" >> deploy.yaml
echo "  name: awesomeapp" >> deploy.yaml
echo "spec:" >> deploy.yaml
echo "  replicas: 1" >> deploy.yaml
echo "  selector:" >> deploy.yaml
echo "    matchLabels:" >> deploy.yaml
echo "      run: awesomeapp" >> deploy.yaml
echo "  template:" >> deploy.yaml
echo "    metadata:" >> deploy.yaml
echo "      labels:" >> deploy.yaml
echo "        run: awesomeapp" >> deploy.yaml
echo "    spec:" >> deploy.yaml
echo "      containers:" >> deploy.yaml
echo "      - name: awesomeapp" >> deploy.yaml
echo "        image: gcr.io/awesomeapp/awesomeapp-$BUILD_VERSION:latest" >> deploy.yaml
echo "        env:" >> deploy.yaml
echo "        - name: PARAM1" >> deploy.yaml
echo "          value: $PARAM1" >> deploy.yaml
echo "        - name: PARAM2" >> deploy.yaml
echo "          value: $PARAM2" >> deploy.yaml
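Incidentally, the same file can be produced with a single heredoc instead of a chain of echo calls. A sketch, with placeholder values standing in for the Jenkins job parameters:

```shell
# Placeholder values: in Jenkins these come from the job parameters.
BUILD_VERSION="0.1.0"
PARAM1="value1"
PARAM2="value2"

# Generate the whole manifest in one shot; variables are expanded by the shell.
cat > deploy.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awesomeapp
spec:
  replicas: 1
  selector:
    matchLabels:
      run: awesomeapp
  template:
    metadata:
      labels:
        run: awesomeapp
    spec:
      containers:
      - name: awesomeapp
        image: gcr.io/awesomeapp/awesomeapp-$BUILD_VERSION:latest
        env:
        - name: PARAM1
          value: $PARAM1
        - name: PARAM2
          value: $PARAM2
EOF
```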




When connecting from the node with the Jenkins agent via gcloud alpha cloud-shell ssh, interactive mode is not available, so we send commands to the cloud console using the --command parameter.



We clean the home folder in the cloud console from the old dockerfile:



gcloud alpha cloud-shell ssh --command="rm -f Dockerfile"


We put the freshly downloaded dockerfile into the home folder of the cloud console using scp:



gcloud alpha cloud-shell scp localhost:./Dockerfile cloudshell:~


We collect, tag and push the container to the Container registry:



gcloud alpha cloud-shell ssh --command="docker build -t awesomeapp-$BUILD_VERSION ./ --build-arg BUILD_VERSION=$BUILD_VERSION --no-cache"
gcloud alpha cloud-shell ssh --command="docker tag awesomeapp-$BUILD_VERSION gcr.io/awesomeapp/awesomeapp-$BUILD_VERSION"
gcloud alpha cloud-shell ssh --command="docker push gcr.io/awesomeapp/awesomeapp-$BUILD_VERSION"


We do the same with the deployment file. Note that the commands below use fictitious names for the cluster where the deployment takes place (awsm-cluster) and for the project (awesome-project) where the cluster lives.



gcloud alpha cloud-shell ssh --command="rm -f deploy.yaml"
gcloud alpha cloud-shell scp localhost:./deploy.yaml cloudshell:~
gcloud alpha cloud-shell ssh --command="gcloud container clusters get-credentials awsm-cluster --zone us-central1-c --project awesome-project && \
kubectl apply -f deploy.yaml"
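Put together, the cloud-console part of the deployment job can be sketched as a single Run shell command body (a sketch using the same fictitious names — awesomeapp, awsm-cluster, awesome-project — as above):

```shell
#!/bin/bash
set -e  # abort the step on the first failed command

# Refresh the build context in the cloud console's home folder
gcloud alpha cloud-shell ssh --command="rm -f Dockerfile deploy.yaml"
gcloud alpha cloud-shell scp localhost:./Dockerfile cloudshell:~
gcloud alpha cloud-shell scp localhost:./deploy.yaml cloudshell:~

# Build, tag and push the container to the Container Registry
gcloud alpha cloud-shell ssh --command="docker build -t awesomeapp-$BUILD_VERSION ./ --build-arg BUILD_VERSION=$BUILD_VERSION --no-cache && \
docker tag awesomeapp-$BUILD_VERSION gcr.io/awesomeapp/awesomeapp-$BUILD_VERSION && \
docker push gcr.io/awesomeapp/awesomeapp-$BUILD_VERSION"

# Point kubectl at the cluster and apply the deployment
gcloud alpha cloud-shell ssh --command="gcloud container clusters get-credentials awsm-cluster --zone us-central1-c --project awesome-project && \
kubectl apply -f deploy.yaml"
```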


We run the job, open the console output, and hope to see a successful container build.



Screenshot




And then a successful deployment of the built container



Screenshot




I have deliberately left out the Ingress setup, for one simple reason: once configured for a workload with a given name, it keeps working no matter how many deployments of that name you perform. And anyway, that is a bit beyond the scope of this story.



Instead of conclusions



All of the steps above could probably have been avoided by simply installing one of the myriad Jenkins plugins. But somehow I don't like plugins. Or rather, I resort to them only out of despair.



And I simply like picking up a topic that is new to me. The text above is also a way to share the findings I made while solving the problem described at the very beginning — with those who are not yet seasoned DevOps wolves. If my findings help at least someone, I will be happy.


