Modern OpenShift Applications, Part 3: OpenShift as a Development Environment and OpenShift Pipelines

Hello everyone! This is the third post in a series in which we show you how to deploy modern web applications on Red Hat OpenShift.







In the previous two posts, we covered how to deploy modern web applications in just a few steps, and how to use a new S2I image together with a ready-made HTTP server image, such as NGINX, in a chained build for production deployments.



Today we will show you how to run a development server for your application on the OpenShift platform and synchronize it with the local file system, and we will also talk about what OpenShift Pipelines is and how to use it as an alternative to chained builds.



OpenShift as a development environment



Development workflow



As discussed in the first post, a typical development workflow for modern web applications is simply a "development server" that watches local files for changes. When a change happens, the application is rebuilt and the updated version is pushed to the browser.



In most modern frameworks, this "development server" is built into the corresponding command-line tools.



Local example



First, let's see how this works when running applications locally. Let's take the React app from the previous articles as an example, although most of the same workflow concepts apply to all other modern frameworks.

So, to start the "development server" in our React example, we issue the following command:



$ npm run start


Then in the terminal window we will see something like the following:
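For a create-react-app project, the output typically looks something like this (the app name and network address depend on your project and machine):

Compiled successfully!

You can now view react-web-app in the browser.

  Local:            http://localhost:3000
  On Your Network:  http://10.0.0.101:3000

Note that the development build is not optimized.
To create a production build, use npm run build.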







Our application will then open in the default browser, showing the familiar create-react-app landing page.







Now, if we make changes to a file, the application should update in the browser.



OK, local development is clear enough, but how do we achieve the same on OpenShift?



Development server on OpenShift



If you remember, in the previous post we analyzed the so-called run phase of the S2I image and saw that, by default, the serve module is responsible for serving our web application.



However, if you take a closer look at the run script from that example, you will see that it references the $NPM_RUN environment variable, which lets you execute your own command instead.
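We do not need the exact contents of that script to see the idea; a minimal sketch of the logic, assuming a simplified form of the image's run script, might look like this:

#!/bin/bash
# Illustrative sketch only; not the actual run script from ubi8-s2i-web-app
if [ -z "$NPM_RUN" ]; then
  # Default behavior: serve the built static files with the serve module
  npx serve -s build
else
  # A custom command was supplied via NPM_RUN, so run that instead
  exec sh -c "$NPM_RUN"
fi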



For example, we can use the nodeshift module to deploy our application:



$ npx nodeshift --deploy.env NPM_RUN="yarn start" --dockerImage=nodeshift/ubi8-s2i-web-app


Note: The example above is abbreviated to illustrate the general idea.



Here we add the NPM_RUN environment variable to our deployment; it tells the runtime to execute yarn start, which starts the React development server inside our OpenShift pod.



If you look at the log of a running pod, then there will be something like the following:
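With NPM_RUN="yarn start" in place, it is essentially the same development-server banner we saw locally, along these lines (versions and the app name will vary):

yarn run v1.21.1
$ react-scripts start
Starting the development server...

Compiled successfully!

You can now view react-web-app in the browser.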







Of course, all of this is of little use until we can synchronize our local code with the code that lives, and is watched for changes, on the remote server.



Synchronizing remote and local code



Fortunately, nodeshift can help with synchronization: its watch command tracks changes for us.



So after we have executed the command to deploy the development server for our application, we can safely use the following command:



$ npx nodeshift watch


This command connects to the running pod that we created a little earlier, activates synchronization of our local files with the remote cluster, and watches our local file system for changes.



So, if we now update the src/App.js file, the change is detected and copied to the remote cluster, where the development server rebuilds and updates our application in the browser.



For completeness, let's show how these commands look in their entirety:



$ npx nodeshift --strictSSL=false --dockerImage=nodeshift/ubi8-s2i-web-app --build.env YARN_ENABLED=true --expose --deploy.env NPM_RUN="yarn start" --deploy.port 3000

$ npx nodeshift watch --strictSSL=false


The watch command is an abstraction on top of the oc rsync command; you can learn more about how oc rsync works in the OpenShift documentation.
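Under the hood, a watch session boils down to something like the following command (the pod name and remote directory here are illustrative):

$ oc rsync ./ react-web-app-1-abcde:/opt/app-root/src --watch --no-perms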



This was an example for React, but the exact same method works with other frameworks; just set the NPM_RUN environment variable as needed, as shown below.
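For example, a deployment for an Ember application could look like this (illustrative; check your framework's dev-server options, such as which address it binds to inside the pod):

$ npx nodeshift --deploy.env NPM_RUN="ember serve" --dockerImage=nodeshift/ubi8-s2i-web-app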

 

OpenShift Pipelines









Next, let's talk about OpenShift Pipelines and how it can be used as an alternative to chained builds.



What is OpenShift Pipelines



OpenShift Pipelines is a cloud-native continuous integration and delivery (CI/CD) system for organizing pipelines using Tekton. Tekton is a flexible, open source, Kubernetes-native CI/CD framework that automates deployments across platforms (Kubernetes, serverless, virtual machines, and so on) by abstracting away the underlying details.



Some knowledge of Pipelines is required to understand this article, so we strongly advise you to read the official tutorial first.



Setting up the working environment



To play around with the examples in this article, you first need to prepare your working environment:



  1. An OpenShift 4 cluster. You can use CodeReady Containers (CRC) to run one locally.
  2. The OpenShift Pipelines Operator installed on the cluster (it can be added from the OperatorHub in the web console).
  3. The Tekton CLI (tkn).
  4. create-react-app, the tool we used to create the application that we will then deploy (a simple React application).
  5. (Optional) a local clone of the application repository, so you can run the application locally with npm install followed by npm start.


The application repository also has a k8s folder containing the Kubernetes/OpenShift YAML manifests used to deploy the application. The Tasks, ClusterTasks, Resources, and Pipelines that we will create can also be found in this repository.



Let's get started



The first step for our example is to create a new project in the OpenShift cluster. Let's call this project webapp-pipeline and create it with the following command:



$ oc new-project webapp-pipeline


This project name will appear later in the code, so if you decide to name yours something else, remember to edit the examples accordingly. From this point on, we will work not top-down but bottom-up: that is, first we will create all the components of the pipeline, and only then the pipeline itself.



So, first of all ...



Tasks



Let's create a couple of tasks that will later help us deploy the application within our pipeline. The first task, apply_manifests_task, applies the YAML manifests for the Kubernetes resources (service, deployment, and route) located in the k8s folder of our application. The second task, update_deployment_task, updates an already deployed image to the one newly built by our pipeline.



Don't worry if it's not clear yet. In fact, these tasks are something like utilities, and we will discuss them in more detail a little later. For now, let's just create them:



$ oc create -f https://raw.githubusercontent.com/nodeshift/webapp-pipeline-tutorial/master/tasks/update_deployment_task.yaml
$ oc create -f https://raw.githubusercontent.com/nodeshift/webapp-pipeline-tutorial/master/tasks/apply_manifests_task.yaml
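To give a sense of what such a task looks like, here is a simplified sketch of apply-manifests (see the repository for the real definition; the step image and paths here are illustrative):

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: apply-manifests
spec:
  inputs:
    resources:
      - name: source
        type: git
    params:
      - name: manifest_dir
        description: The directory in the source repo that contains the YAML manifests
        default: "k8s"
  steps:
    - name: apply
      image: quay.io/openshift/origin-cli:latest
      workingDir: /workspace/source
      command: ["/bin/bash", "-c"]
      args:
        - |
          oc apply -f $(inputs.params.manifest_dir)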


Then, using the tkn CLI command, check that the tasks have been created:



$ tkn task ls

NAME                AGE
apply-manifests     1 minute ago
update-deployment   1 minute ago


Note: these are local tasks of your current project.



Cluster tasks



Cluster tasks are basically the same as regular tasks: a reusable collection of steps that are combined in one way or another when a specific task is run. The difference is that a cluster task is available everywhere within the cluster. To see the list of cluster tasks that are created automatically when the Pipeline Operator is added, again use the tkn CLI:



$ tkn clustertask ls

NAME                       AGE
buildah                    1 day ago
buildah-v0-10-0            1 day ago
jib-maven                  1 day ago
kn                         1 day ago
maven                      1 day ago
openshift-client           1 day ago
openshift-client-v0-10-0   1 day ago
s2i                        1 day ago
s2i-go                     1 day ago
s2i-go-v0-10-0             1 day ago
s2i-java-11                1 day ago
s2i-java-11-v0-10-0        1 day ago
s2i-java-8                 1 day ago
s2i-java-8-v0-10-0         1 day ago
s2i-nodejs                 1 day ago
s2i-nodejs-v0-10-0         1 day ago
s2i-perl                   1 day ago
s2i-perl-v0-10-0           1 day ago
s2i-php                    1 day ago
s2i-php-v0-10-0            1 day ago
s2i-python-3               1 day ago
s2i-python-3-v0-10-0       1 day ago
s2i-ruby                   1 day ago
s2i-ruby-v0-10-0           1 day ago
s2i-v0-10-0                1 day ago


Now let's create two cluster tasks. The first will generate an S2I image and push it to the internal OpenShift registry; the second will build our NGINX-based image, using the application we have already built as its content.



Building and pushing the image



When creating the first task, we repeat what we already did in the previous article about chained builds. Recall that we used the S2I image (ubi8-s2i-web-app) to "build" our application, ending up with an image stored in the internal OpenShift registry. Now we will use that web-app S2I image to generate a Dockerfile for our application, and then use Buildah to do the actual build and push the resulting image to the internal OpenShift registry, since this is exactly what OpenShift does when you deploy your applications using NodeShift.
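Conceptually, the steps of this cluster task look something like the following (heavily abridged; the step images and flags here are illustrative, see the cluster task YAML for the real definition):

steps:
    - name: generate
      image: quay.io/openshift-pipeline/s2i
      workingDir: /workspace/source
      command: ['s2i', 'build', '.', 'nodeshift/ubi8-s2i-web-app', '--as-dockerfile', '/gen-source/Dockerfile.gen']
    - name: build
      image: quay.io/buildah/stable
      workingDir: /gen-source
      command: ['buildah', 'bud', '--tls-verify=$(inputs.params.TLSVERIFY)', '-f', '/gen-source/Dockerfile.gen', '-t', '$(outputs.resources.image.url)', '.']
    - name: push
      image: quay.io/buildah/stable
      command: ['buildah', 'push', '--tls-verify=$(inputs.params.TLSVERIFY)', '$(outputs.resources.image.url)', 'docker://$(outputs.resources.image.url)']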



How did we know all this, you ask? We simply took the official Node.js S2I task, copied it, and adapted it for our needs.



So, now we create the s2i-web-app cluster task:



$ oc create -f https://raw.githubusercontent.com/nodeshift/webapp-pipeline-tutorial/master/clustertasks/s2i-web-app-task.yaml


We will not go into detail about this, but just dwell on the OUTPUT_DIR parameter:



params:
      - name: OUTPUT_DIR
        description: The location of the build output directory
        default: build


By default, this parameter is set to build, which is where React puts its compiled output. Other frameworks use different paths; in Ember, for example, it is dist (see the override example below). The output of our first cluster task will be an image containing the built HTML, JavaScript, and CSS.
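If your framework writes its build output somewhere else, you can override this parameter where the task is invoked in the pipeline, for example (illustrative):

params:
  - name: OUTPUT_DIR
    value: dist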



Building an image based on NGINX



As for our second cluster task, it builds an NGINX-based image for us, using the content of the application we have already built. This is essentially the part of the previous article where we looked at chained builds.



To do this, in the same way as above, we create the webapp-build-runtime cluster task:



$ oc create -f https://raw.githubusercontent.com/nodeshift/webapp-pipeline-tutorial/master/clustertasks/webapp-build-runtime-task.yaml


If you look at the code of these cluster tasks, you will notice that neither the Git repository we work with nor the names of the images we create are specified there. We only declare that the task takes some Git repository as input and produces some image as output; the concrete values are supplied later. That is why these cluster tasks can be reused when working with other applications.



And here we gracefully move on to the next point ...



Resources



Since, as we just said, cluster tasks should be as generic as possible, we need to create resources that will serve as input (the Git repository) and output (the final images). The first resource we need is the Git repository where our application resides, something like this:



# This resource is the location of the git repo with the web application source
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: web-application-repo
spec:
  type: git
  params:
    - name: url
      value: https://github.com/nodeshift-starters/react-pipeline-example
    - name: revision
      value: master


Here the PipelineResource is of type git. The url parameter in the params section points to our repository, and the revision parameter pins the master branch (this is optional, but we include it for completeness).



Next we need to create a resource for the image where the result of the s2i-web-app task will be stored; this is done like this:



# This resource is the result of running "npm run build",  the resulting built files will be located in /opt/app-root/output
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: built-web-application-image
spec:
  type: image
  params:
    - name: url
      value: image-registry.openshift-image-registry.svc:5000/webapp-pipeline/built-web-application:latest


Here the PipelineResource is of type image, and its url parameter points to the internal OpenShift image registry, specifically to an image in the webapp-pipeline namespace. Remember to change this parameter if you are using a different namespace.



And finally, the last resource we need is also of the image type: the final NGINX image, which will then be used during deployment:



# This resource is the image that will be just the static html, css, js files being run with nginx
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: runtime-web-application-image
spec:
  type: image
  params:
    - name: url
      value: image-registry.openshift-image-registry.svc:5000/webapp-pipeline/runtime-web-application:latest


Again, notice that this resource stores the image in the internal OpenShift registry in the webapp-pipeline namespace.



To create all these resources at once, use the create command:



$ oc create -f https://raw.githubusercontent.com/nodeshift/webapp-pipeline-tutorial/master/resources/resource.yaml


You can make sure that the resources have been created like this:



$ tkn resource ls
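The output should look something like this:

NAME                            TYPE    DETAILS
web-application-repo            git     url: https://github.com/nodeshift-starters/react-pipeline-example
built-web-application-image     image   url: image-registry.openshift-image-registry.svc:5000/webapp-pipeline/built-web-application:latest
runtime-web-application-image   image   url: image-registry.openshift-image-registry.svc:5000/webapp-pipeline/runtime-web-application:latest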


Pipeline



Now that we have all the necessary components, we will assemble a pipeline from them, creating it with the following command:



$ oc create -f https://raw.githubusercontent.com/nodeshift/webapp-pipeline-tutorial/master/pipelines/build-and-deploy-react.yaml


But before running this command, let's take a look at its components. The first is the name:



apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-deploy-react


Then, in the spec section, we see a declaration of the resources that we created earlier:



spec:
  resources:
    - name: web-application-repo
      type: git
    - name: built-web-application-image
      type: image
    - name: runtime-web-application-image
      type: image


Next we define the tasks for our pipeline to execute. First of all, it must run the s2i-web-app cluster task we created earlier:



tasks:
    - name: build-web-application
      taskRef:
        name: s2i-web-app
        kind: ClusterTask


This task takes an input (the web-application-repo git resource) and an output (the built-web-application-image resource). We also pass it a special parameter that disables TLS verification, since we are using self-signed certificates:



resources:
        inputs:
          - name: source
            resource: web-application-repo
        outputs:
          - name: image
            resource: built-web-application-image
      params:
        - name: TLSVERIFY
          value: "false"


The next task is almost the same, except that it calls the webapp-build-runtime cluster task we created earlier:



    - name: build-runtime-image
      taskRef:
        name: webapp-build-runtime
        kind: ClusterTask


As with the previous task, we pass in a resource, but this time it is built-web-application-image (the output of the previous task). As the output we again declare an image. And since this task must run after the previous one, we add the runAfter field:



resources:
        inputs:
          - name: image
            resource: built-web-application-image
        outputs:
          - name: image
            resource: runtime-web-application-image
      params:
        - name: TLSVERIFY
          value: "false"
      runAfter:
        - build-web-application


The next two tasks are responsible for applying the YAML files for the service, route, and deployment that live in the k8s directory of our web application, and for updating that deployment when new images are created. These are the two tasks we created at the beginning of the article.
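A rough sketch of how these two entries might look in the pipeline (illustrative; see build-and-deploy-react.yaml in the repository for the actual definitions):

    - name: apply-manifests
      taskRef:
        name: apply-manifests
      resources:
        inputs:
          - name: source
            resource: web-application-repo
      runAfter:
        - build-runtime-image
    - name: update-deployment
      taskRef:
        name: update-deployment
      resources:
        inputs:
          - name: image
            resource: runtime-web-application-image
      runAfter:
        - apply-manifests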



Running the pipeline



So, all the parts of our pipeline are now created, and we can start it with the following command:



$ tkn pipeline start build-and-deploy-react


At this stage, the command line is interactive, and you need to select the appropriate resource in response to each prompt: for the git resource, choose web-application-repo; for the first image resource, choose built-web-application-image; and finally, for the second image resource, choose runtime-web-application-image:



? Choose the git resource to use for web-application-repo: web-application-repo (https://github.com/nodeshift-starters/react-pipeline-example)
? Choose the image resource to use for built-web-application-image: built-web-application-image (image-registry.openshift-image-registry.svc:5000/webapp-pipeline/built-web-application:latest)
? Choose the image resource to use for runtime-web-application-image: runtime-web-application-image (image-registry.openshift-image-registry.svc:5000/webapp-pipeline/runtime-web-application:latest)
Pipelinerun started: build-and-deploy-react-run-4xwsr


Now let's watch the pipeline logs with the following command:



$ tkn pipeline logs -f


Once the pipeline has run and the application is deployed, we can request the published route with the following command:



$ oc get route react-pipeline-example --template='http://{{.spec.host}}'


For more visibility, you can view the pipeline in the Developer perspective of the web console, in the Pipelines section, as shown in Figure 1.







Figure 1. Overview of running pipelines.



Clicking on a running pipeline displays additional information, as shown in Figure 2.







Figure 2. Detailed information about the pipeline.



From there, you can also see your running application in the Topology view, as shown in Figure 3.







Figure 3. The running pod.



Clicking the circle in the upper right corner of the icon opens our application, as shown in Figure 4.







Figure 4. The running React application.



Conclusion



So, we have shown how to run a development server for your application on OpenShift and synchronize it with the local file system. We have also looked at how to imitate the chained-build template using OpenShift Pipelines. All of the sample code from this article can be found in the webapp-pipeline-tutorial repository.


