Typically, there are four steps in a pipeline:
Step Type | Description |
---|---|
Build | Creates or builds artifacts used by other steps. |
Promotion | Deploys artifacts created in a build step to a specified environment. |
Validation | Validates actions performed in a previous step, typically build or deploy. If the validation step fails, the pipeline process might be aborted. |
Compliance | These steps are sometimes needed to make the pipeline compliant with enterprise release management standards. |
Note | Because the primary task of a pipeline is to promote software through various Software Development Lifecycle (SDLC) environments to production, an MVP (Minimum Viable Product) pipeline may only require build and promotion steps, with validation and compliance checking being performed by humans. |
The recommended approach is to create an MVP pipeline, and only add validation and compliance steps after the MVP pipeline is verified.
Build Stage
Build steps include everything needed to build a container image. Although version control is always in effect for container images, it might also be required for other deployment artifacts (for example, jar files) for compliance reasons. The pipeline owner must work with the release policies owner to ensure compliance requirements are met.
There are three ways to build a docker image:
Build type | Description |
---|---|
External Docker build | Performs a standard Dockerfile build external to the OpenShift cluster. In this case, OpenShift is independent of the build. |
Internal Docker build | Performs a standard Dockerfile build executed as a pod inside of OpenShift. |
Source-to-Image (S2I) build | Performs a source-to-image build that produces intermediate artifacts and a docker image, executed as one process inside a pod. |
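As a concrete sketch, an Internal Docker Build can be expressed as a BuildConfig resource. The names and Git URL below are placeholders, not taken from the original text:

```yaml
# Hypothetical BuildConfig for an Internal Docker Build.
# The build runs as a pod inside OpenShift and pushes to an ImageStreamTag.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-docker        # placeholder name
spec:
  source:
    type: Git
    git:
      uri: https://example.com/org/myapp.git   # placeholder repository
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```

Because the output goes to an ImageStreamTag, downstream deployment triggers and promotion steps can react to the new image.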
Each of these build types produces the desired output, but each has different advantages and disadvantages. Because builds are spiky workloads, Internal Docker Builds are well suited to container platforms, in which capacity is returned to the pool at the end of the build. Running builds inside of OpenShift can increase overall environment efficiency. However, running builds internally in OpenShift creates garbage that needs to be removed (see the section of this doc on garbage collection).
The main advantage of S2I builds is that they don't require learning Dockerfile syntax. Building good docker images is an acquired skill, and it may not be worth the cost of training all the developers in an organization. Another advantage is that the build steps in an S2I build do not run with root privileges; instead, they run as the user defined in the base image. This gives an organization the ability to control the permission scope of the build, while still allowing developers full ability to self-serve.
On the other hand, because S2I builds are not part of the Kubernetes API, S2I builds are not portable across Kubernetes-based platforms. In addition, although it is possible to customize the S2I process, learning to customize S2I builds can be as time consuming and complex as learning how to properly build docker images.
More information for these build types can be found in our OpenShift Builds deep dive.
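The S2I variant differs only in the build strategy: it references a builder image, and the build steps run as the user defined in that image. A minimal sketch, with placeholder names and a hypothetical builder image:

```yaml
# Hypothetical S2I BuildConfig; names and the builder image are placeholders.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-s2i
spec:
  source:
    type: Git
    git:
      uri: https://example.com/org/myapp.git   # placeholder repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: java:latest        # placeholder builder image
        namespace: openshift     # build steps run as the user defined in this image
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```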
Binary Builds
If a build process already exists and the objective is to customize it to deploy to OpenShift, the binary build is a good option. The binary build uses artifacts produced in a previous step as input to an OpenShift build (either docker or S2I).
Check out our Binary Build tutorial to see this in action.
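A binary build declares its input as `Binary` rather than pointing at a Git repository; the pipeline streams previously built artifacts into the build. A minimal sketch with a placeholder name:

```yaml
# Hypothetical binary BuildConfig: input comes from the client, not from Git.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-binary
spec:
  source:
    type: Binary          # artifacts are streamed in by the caller
  strategy:
    type: Docker          # could equally be an S2I (Source) strategy
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```

A pipeline step would then start the build from a local directory of artifacts, for example `oc start-build myapp-binary --from-dir=target/ --follow`.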
Jenkins Pipeline Build Strategy
A recent addition to the set of available OpenShift build types is the Jenkins pipeline build strategy. A Jenkins pipeline build is a build strategy that includes a mechanism to trigger a Jenkins build from within OpenShift via an API call. The build logic resides in Jenkins.
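A minimal sketch of such a BuildConfig, with a placeholder name and an illustrative inline Jenkinsfile (the stage names and shell steps are assumptions, not from the original text):

```yaml
# Hypothetical JenkinsPipeline-strategy BuildConfig. Triggering this build via
# the OpenShift API starts the corresponding Jenkins pipeline job.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps { sh 'oc start-build myapp-docker --wait' }
            }
            stage('Promote') {
              steps { sh 'oc tag myapp:latest myapp-test/myapp:latest' }
            }
          }
        }
```

The Jenkinsfile can also be read from the source repository instead of being embedded inline.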
Promotion Stage
Promotion steps are used to deploy solutions (comprised of multiple components) to SDLC environments. In OpenShift, an environment is a project residing in a cluster. Assuming there is an existing OpenShift template that represents the solution, the deployment is performed with the following command:
oc process <template> ... | oc apply -f -
This command instantiates the template for the target environment using the environment-dependent template parameters (not to be confused with the environment-dependent properties) and applies the result to OpenShift.
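For illustration, a minimal template with one environment-dependent parameter might look like the following. All names and the `REPLICAS` parameter are placeholders, not taken from the original text:

```yaml
# Hypothetical template: REPLICAS varies per SDLC environment.
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: myapp-template
parameters:
- name: REPLICAS            # environment-dependent template parameter
  value: "1"
objects:
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    name: myapp
  spec:
    replicas: ${{REPLICAS}} # non-string parameter substitution
    template:
      metadata:
        labels:
          app: myapp
      spec:
        containers:
        - name: myapp
          image: myapp:latest
```

A promotion step for, say, a test environment would then run `oc process myapp-template -p REPLICAS=3 | oc apply -f -` against the target project.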
The apply command determines whether there are differences between the current state and the new, desired state and, if so, applies the differences as patches to the existing API objects. Because existing objects are not destroyed, the related applications do not incur outages.
If triggers are active at this point, a new deployment commences. In general, automatic deployment triggers should be carefully considered. If there is a pipeline orchestrator (such as Jenkins), the logic to trigger a rollout should reside in the orchestrator.
Check out our guide on OpenShift Templates for more details on these concepts.
Rollout Strategies
The following rollout strategies are possible:
Rollout strategy | Managed by | Description |
---|---|---|
Recreate | OpenShift | Destroys and recreates pods, causing outages. |
Rolling | OpenShift | Gradually creates new pods and destroys old ones in such a way to avoid outages. |
Blue/Green | Pipeline | Maintains two sets of pods (old and new) and splits traffic between the two sets, gradually moving more and more traffic to the new pods. |
Canary | Pipeline | Sends a small amount of traffic to the new pods. Once the new version is validated, works as blue/green. |
A/B (*) | Pipeline + Service Mesh | Sends traffic to one version or the other based on some content in the request. |
Shadow | Pipeline + Service Mesh | Sends traffic to both old and new pods; traffic to the new pods is handled so as to avoid side effects. Metrics are collected, analyzed, and compared with those from the old pods. |
(*) Note: this definition of A/B testing differs from the one that you can find in the OpenShift documentation.
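For the two strategies managed by OpenShift, the rollout behavior is configured on the DeploymentConfig itself. A minimal sketch of the Rolling strategy, with placeholder names and illustrative parameter values:

```yaml
# Hypothetical DeploymentConfig fragment showing a Rolling rollout.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: Rolling           # Recreate would destroy all pods first, causing an outage
    rollingParams:
      maxUnavailable: 25%   # at most one of the four old pods down at a time
      maxSurge: 25%         # at most one extra new pod above the replica count
```

Blue/Green, Canary, A/B, and Shadow are not expressed this way; they are orchestrated by the pipeline (and, where noted, a service mesh) on top of the platform primitives.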
Deployment strategy characteristics
Rollout strategy | Zero downtime | Real traffic testing | Targeted user | Infrastructure cost | Complexity |
---|---|---|---|---|---|
Recreate | no | no | no | low | low |
Rolling | yes | no | no | low | low |
Blue/Green | yes | no | no | high | medium |
Canary | yes | no | yes | low | medium |
A/B | yes | yes | yes | low | high |
Shadow | yes | yes | no | high | high |
Validation Stage
Validation steps tend to be organization specific and, in part, reflect the development methodology adopted by that organization.
Validation steps can be classified as environment independent and environment dependent. To catch errors as quickly as possible, environment independent steps should be executed before the first deployment. Conversely, environment dependent steps can only be executed in an environment after the solution has been deployed.
Examples of environment independent and environment dependent validation steps are provided in the following tables.
Environment independent validation steps:
Validation step | Common tool/technology implementations |
---|---|
Code static analysis | |
Code static security scanning | |
Unit test | |
Image security scanning | CloudForms (with atomic scan), Twistlock, BlackDuck, Aquasec. |
Image security scanning is a new step introduced by container technology. It ensures the quality of images in terms of known security vulnerabilities (CVEs).
This process has two sides: on one side, images must be scanned to determine whether they contain known vulnerabilities; on the other, images that don't pass the quality check must be prevented from running as containers. This enforcement resides outside of the pipeline and can be configured in OpenShift via image admission control.
An example image scanning pipeline step can be found here.
Environment dependent validation steps:
Validation step | Common tool / technology implementations |
---|---|
 | Various tools and techniques. BDD seems to be emerging for web and mobile apps. |
 | Pipeline orchestrated and tested with integration testing tools. |
Behavior-Driven Development (BDD) testing is a valuable option and should be used for testing web and mobile applications.
BDD test cases are created using a pseudo-natural language (Gherkin), allowing them to be created quickly and easily by non-technical users. The test cases are translated into machine instructions at test runtime by a test framework (a popular one is Cucumber).
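To give a flavor of the Gherkin syntax, here is a hypothetical feature file; the feature, scenario, and steps are illustrative only:

```gherkin
# Illustrative Gherkin scenario; a framework such as Cucumber maps each
# Given/When/Then step to executable test code at runtime.
Feature: Login
  Scenario: Successful login
    Given the user is on the login page
    When the user submits valid credentials
    Then the home page is displayed
```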
The general architecture of BDD tests is illustrated in the following diagram:
The above architecture can be fully deployed and contained in OpenShift. Multiple combinations of browser/operating system and mobile device/operating system tests can be executed in parallel. The advent of Windows containers will provide additional opportunities for testing applications running on Internet Explorer and Edge.
Compliance Stage
Compliance steps may be needed when pipelines must be compliant with an organization’s release process and change management.
Examples of compliance steps are provided in the following table:
Step | Description |
---|---|
Manual approval | Suspends pipeline execution until manual approval to continue is provided. |
Container image signing | Cryptographically signs images. |
Deployment manifest upload | Uploads the manifest of what is deployed to a centralized change management tool. |
Container image signing provides a method for making an assertion cryptographically verifiable. When an image is signed (a fact to which any meaning can be assigned, for example, that the image is approved for execution), anyone can verify that assertion. Having assertions is only one piece of the process; the other piece, which resides outside of the pipeline, is having a way to verify and enforce them. Here you can find an example of a pipeline with a container image signing step.
Google is currently working on a more general-purpose assertion framework for deployment pipelines (see Grafeas and Kritis).