Sparktree - You Don't Know Jenkins - Part 3 (2023)

With the release of Jenkins 2.x, support for Pipeline jobs is built in. This is important for multiple reasons, but mostly because Pipeline jobs are now the de facto standard for creating complex jobs and custom deployment workflows without additional plugins. The best part is that pipelines are basically just Groovy scripts with some Jenkins-specific additions.

While Pipeline jobs can be used to build artifacts just like a regular Freestyle job, their true power only becomes apparent when you start using Pipelines for orchestration.

Before Pipelines were released you had to make use of post-build triggers and artifact archiving to create a useful orchestration workflow. With Pipelines, this concept is now a first-class citizen. You can clone multiple repositories, trigger downstream jobs, run stages in parallel, and make decisions about which stages to run based on parameters. You have the power to build a Pipeline that suits your needs.
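As a sketch of what that orchestration can look like, here's a Scripted Pipeline that clones two repositories, runs test stages in parallel, and conditionally triggers a downstream job. The repo URLs, make targets, downstream job name and DEPLOY_ENV parameter are all made-up placeholders:

```groovy
node {
    stage('Checkout') {
        // clone multiple repositories into separate directories
        dir('app')    { git url: 'https://example.com/app.git' }
        dir('deploy') { git url: 'https://example.com/deploy-scripts.git' }
    }
    stage('Test') {
        // run both test suites at the same time
        parallel(
            unit:        { sh 'make unit-test' },
            integration: { sh 'make integration-test' }
        )
    }
    // decide which stages to run based on a job parameter
    if (params.DEPLOY_ENV == 'production') {
        stage('Deploy') {
            build job: 'deploy-production', wait: true
        }
    }
}
```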

This post is part of a series that is all about solving common problems using new Jenkins features, modern automation & configuration-as-code practices.

  • Part 1 - Automated Jenkins Install using Chef
  • Part 2 - Maintainable Jenkins Jobs using Job DSL
  • Part 3 - Leveraging Pipelines for Continuous Deployment/Orchestration
  • Part 4 - Advanced DSL & Pipeline Techniques (Coming soon)

This is Part 3 - Leveraging Pipelines for Continuous Deployment/Orchestration. If you haven’t read Part 1, you might want to start there.

Declarative vs Scripted Pipeline

The first thing you need to know is that there are actually two significantly different types of pipelines.

The first type is called a Declarative Pipeline. If you're familiar with a Jenkinsfile, then you're already familiar with the Declarative Pipeline syntax. It's simple and structured, making it easy to understand.

The second type is called a Scripted Pipeline. It is a fully featured programming environment, offering a tremendousamount of flexibility and extensibility to Jenkins users.

The two are both fundamentally the same Pipeline sub-system underneath. They are both durable implementations of “Pipeline as code.” They are both able to use steps built into Pipeline or provided by plugins. Both are able to utilize Shared Libraries (a topic we’ll dive into in a future post).

Where they differ, however, is in syntax and flexibility. Declarative limits what is available to the user with a stricter, pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits; the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power users and those with more complex requirements.

For the most part the issues and solutions I talk about in the following sections are relevant to both types of Jenkins Pipeline; however, some only apply to Scripted.
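To make the difference concrete, here's the same trivial two-stage build expressed in both styles (the make targets are placeholders; each variant would live in its own Jenkinsfile):

```groovy
// Declarative: a strict, pre-defined structure
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
}

// Scripted equivalent: plain Groovy control flow around pipeline steps
node {
    stage('Build') { sh 'make build' }
    stage('Test')  { sh 'make test' }
}
```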

Serialization woes

If you’ve worked with Jenkins Pipelines for anything more than simple/toy examples, you’ll have run into exceptions.

These exceptions are confusing until you begin to understand the truth about Pipelines & Jenkinsfiles: you’re not writing a Groovy script, you’re writing a list of Groovy scripts.

I could dive deep into the Abstract Syntax Tree (AST), the Groovy-CPS engine and the continuation-passing style transformation, but as a developer writing Jenkinsfiles and pipeline scripts you probably just want to get your script working.

Here’s what you need to know: after each pipeline step Jenkins will take a snapshot of the current execution state.

This is because Jenkins pipelines are supposed to be robust against restarts (they can continue where they left off, rather than requiring your pipeline to start over from the beginning). While this sounds great, the way Jenkins does this is by serializing the current pipeline state. If you’re using classes that do not serialize nicely (i.e. do not implement Serializable) then Jenkins will throw an error.
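For example, java.util.regex.Matcher is not Serializable, so holding one across a step boundary is enough to break a pipeline:

```groovy
node {
    // Matcher objects are not Serializable
    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
    // the next step forces Jenkins to snapshot the pipeline state,
    // including `matcher` -- this throws java.io.NotSerializableException
    sh 'mvn -B verify'
}
```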


There are a couple of solutions for this:

  • @NonCPS-decorated methods may safely use non-Serializable objects as local variables, though they should not accept non-serializable parameters or return or store non-serializable values.

     @NonCPS
     def version(text) {
       def matcher = text =~ '<version>(.+)</version>'
       matcher ? matcher[0][1] : null
     }
  • All non-serializable variables should be set to null before the next Jenkins pipeline step is called.

     def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
     if (matcher) {
       echo "Building version ${matcher[0][1]}"
     }
     matcher = null
     sh "${mvnHome}/bin/mvn -B -Dmaven.test.failure.ignore verify"
  • Use implements Serializable for any classes that you define yourself. This is only really applicable in Shared Libraries (detailed in a future post).

     class Utilities implements Serializable {
       def steps
       Utilities(steps) { this.steps = steps }
       def mvn(args) {
         "${steps.tool 'Maven'}/bin/mvn -o ${args}"
       }
     }

Script Approval & Groovy Sandbox

Pipelines also introduce another annoyingly common exception: org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException.

Like the serialization error above, this is related to the magic that makes Jenkins Pipeline Groovy different from regular Groovy scripts. Since Groovy is a full programming language, with all the functionality and potential destructiveness that entails, the Jenkins developers decided to mitigate that potential for harm by only allowing certain whitelisted methods to be used in Pipeline scripts.

Unfortunately a large number of common, legitimate Groovy methods are not whitelisted by default, which can make Pipeline development frustrating. Even more frustrating is the fact that RejectedAccessExceptions are only thrown at runtime, potentially 2 hours into a 3-hour pipeline script. Yeah, not fun.


There are a couple of ways to mitigate these issues:

  • Disable the Jenkins Pipeline sandbox. While this may be OK while developing a new script, this shouldn’t be your default for finished scripts. The Pipeline Groovy runtime has access to all the Jenkins internals, meaning you can retrieve encrypted credentials, trigger deployments, delete build artifacts and cause havoc in any number of ways.
  • Whitelist each and every method that you use. If you make heavy use of Groovy shortcut methods in DefaultGroovyMethods (like .any, .each, .find) you’ll want to take a look at my Jenkins init.d script that automatically whitelists them all.
  • Global Shared Libraries. I’ll talk about this more in a future post, but Global Pipeline Libraries are assumed to be trusted, and as such any methods (no matter how dangerous) are not subject to the Jenkins security sandbox.
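To give you a feel for it, both of the lines below will typically throw RejectedAccessException in a sandboxed script until an admin approves the signatures under Manage Jenkins → In-process Script Approval. The exact whitelist contents vary by Jenkins and plugin version, so treat this as an illustration:

```groovy
// constructing java.io.File is not whitelisted in the sandbox
def contents = new File('/etc/passwd').text

// some DefaultGroovyMethods shortcuts were historically not whitelisted either
def total = [1, 2, 3].inject(0) { acc, n -> acc + n }
```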


Documentation

There’s a lot of documentation about Pipelines; however, it is spread out between various GitHub repos, the Jenkins blog and the official documentation. I’m going to list links and sources here that you’ll find useful for various topics.


Documentation can be a bit hard to find, especially if you want an updated list of all the available pipeline steps.

Your best bet is to check the master list: Pipeline Steps Reference. It contains documentation for all the known pipeline steps provided by plugins.

If, however, you’re only interested in the steps that are actually usable on your Jenkins server, you’ll want to go to http://&lt;your-jenkins-host&gt;/pipeline-syntax/html. While that page is fully featured, the documentation can be a bit terse, so you’ll also want to check out the Snippet Generator: http://&lt;your-jenkins-host&gt;/pipeline-syntax


Examples

While you might already be familiar with Pipelines, sometimes looking at actual code is more useful than reading about an abstract concept.

The Jenkins team maintains the jenkinsci/pipeline-examples repository with working code for Pipelines, Jenkinsfiles and Shared Libraries. You should definitely check it out.

If you’ve already written a couple of Pipeline scripts and you’re starting to get comfortable, then it may be time to start reading about Pipeline Best Practices.

Reusing Code

Pipelines are powerful, but to really see them shine, you’ll want to start importing third-party jars and reusing code.

Importing jars from the public Maven repo is as easy as including @Grab at the top of your Pipeline script:

@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml
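Once grabbed, the class is usable like in any Groovy script. For instance, SnakeYAML's Yaml.load() can turn an inline YAML document into a Map (the config keys here are made up):

```groovy
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

def config = new Yaml().load('''
environment: staging
replicas: 3
''')
echo "Deploying ${config.replicas} replicas to ${config.environment}"
```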

Reusing Pipeline functions is easy too: just move your code into a Shared Library, configure it as a Library on the Jenkins Manage page, and then import it in your Pipeline script:

@Library('somelib')
import com.mycorp.pipeline.somelib.UsefulClass
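From there the class can be used directly in your pipeline. UsefulClass and its methods are hypothetical, but a common convention is to pass the script object (this) into library classes so they can call pipeline steps, as in the Utilities example earlier:

```groovy
@Library('somelib')
import com.mycorp.pipeline.somelib.UsefulClass

node {
    // hypothetical: pass `this` so the class can invoke steps like sh/echo
    def util = new UsefulClass(this)
    util.doSomethingUseful()
}
```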

I’ll be talking about Shared Libraries in much more detail in a future post of this series.

String Interpolation & Multiline Strings

While this is mostly just about Groovy syntax, and not really Jenkins Pipeline specific, I’ve found that there are a lot of questions around string manipulation and multiline strings.

String interpolation is pretty easy. All you need to know is that single quotes (') create literal strings, while double-quoted strings support interpolation and escape characters.

def myString = 'hello'
// single quotes: no interpolation, the string stays literal
assert '${myString} world' != 'hello world'
// double quotes: ${myString} is interpolated
assert "${myString} world" == 'hello world'

Multiline strings are easy to create as well: just use three single or double quotes to open and close the string. As before, single quotes create literal multiline strings, while double quotes are used for interpolated multiline strings.

def myString = 'hello'

// triple single quotes: literal, no interpolation
assert '''\
${myString} world
foo bar
''' == '${myString} world\nfoo bar\n'

// triple double quotes: interpolated
assert """\
${myString} world
foo bar
""" == 'hello world\nfoo bar\n'

Shell Output Parsing

A little-known but incredibly useful feature of the pipeline shell step sh is that you can capture its STDOUT in a Groovy variable.

def gitCommit = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
echo "Git commit sha: ${gitCommit}"
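A sibling option worth knowing: returnStatus captures the exit code instead of failing the build on a non-zero exit, which is handy for shell-based checks:

```groovy
// returnStatus: true means a non-zero exit code will not fail the build
def status = sh(returnStatus: true, script: 'git diff --quiet')
if (status != 0) {
    echo 'Working tree has uncommitted changes'
}
```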

Build Name & Description

Occasionally you’ll wish that you could include more contextual data in your build history, instead of having to identify a specific build by build number.

Pipelines have you covered.


At any point in your pipeline script you can add or update the build’s display name & description using the global variable currentBuild.

// this will replace the build number in the Jenkins build history
currentBuild.displayName = "short string"
// this will show up as a grey text block below the build number
currentBuild.description = "my new description"

Pipelines are completely customizable and extensible, making it hard to give you an out-of-the-box solution like I’ve done in previous guides.

Instead, the goal here was to answer the common questions I’ve seen about Pipelines and throw in some links and resources so you can build a Pipeline that works for you.

Having said that, Pipeline scripts are only one half of the solution.

Part 4 - Advanced Techniques - Pipeline Testing, Shared Libraries (Coming soon)

In a future post we’ll talk about how you can actually start testing your Pipeline scripts. As you start writing more orchestration code you’ll find that, unlike application code, orchestration code is incredibly difficult to write and test effectively.

In addition, any discussion about Pipelines wouldn’t be complete without mentioning Shared Libraries. I’ve touched on them a couple of times in this guide, but in a future post I’ll be writing a complex & testable Shared Library, step by step, so you can follow along.

Article information

Author: Cheryll Lueilwitz

Last Updated: 02/11/2023

