Let's Start Continuous Integration with Jenkins Pipeline

Jenkins Pipeline
Img: jenkins.io

This time I will expand on the topic of continuous integration with Jenkins and dive into the details of Jenkins Pipelines. Here you will find everything you wanted to know about continuous integration with Jenkins Pipeline!

OK, you may know all of this already, but revisiting the basic terms never hurts, so we'll start with them.

See More: How to Write a Jenkinsfile

What is Jenkins

Jenkins is an open-source automation server written in Java. It helps to automate the non-human part of the software development process with continuous integration, and it facilitates the technical aspects of continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat. It can execute Apache Ant, Apache Maven, and sbt based projects, as well as arbitrary shell scripts and Windows batch commands.

What is a Jenkins Pipeline

According to Jenkins, it's:

Jenkins Pipeline (or simply “Pipeline” with a capital “P”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

A continuous delivery pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as the progression of the built software (called a “build”) through multiple stages of testing and deployment.

Benefits

One huge benefit of using a pipeline is that the job itself is durable. A Pipeline job is able to survive planned or even unplanned restarts of the Jenkins master. If you need to survive slave failures as well, you’ll have to use checkpoints.

Unfortunately, the checkpoints plugin is only available for the enterprise edition of Jenkins. Pipelines are also pausable. You can use an “input” step to wait for human input or approval before continuing the job.
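As a sketch of that pausing behaviour, here is a minimal declarative pipeline with an input step; the stage name and the message text are illustrative:

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy approval') {
            steps {
                // the build pauses here until someone clicks Proceed or Abort in the Jenkins UI
                input message: 'Deploy to production?', ok: 'Deploy'
            }
        }
    }
}
```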

They’re also versatile and extensible. You can set up pipelines that fork, join, loop, and even execute items in parallel. You can also use custom Groovy code to extend the Pipeline DSL.
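For example, a hedged sketch of parallel execution in a declarative pipeline — the test suites and shell scripts here are placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Tests') {
            // both child stages run at the same time on any available executors
            parallel {
                stage('Unit tests') {
                    steps { sh './run-unit-tests.sh' }
                }
                stage('Integration tests') {
                    steps { sh './run-integration-tests.sh' }
                }
            }
        }
    }
}
```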

Pipeline Vocabulary

Alright, it’s time to cover some pipeline vocabulary. Each pipeline generally consists of three things: Steps, Nodes, and Stages.

A step, also known as a “build step”, is a single task that we want Jenkins to execute.

A “node”, within the context of a pipeline, refers to a step that does two things. First, it schedules the defined steps so that they’ll run as soon as an executor is available. Second, it creates a temporary workspace which is removed once all steps have completed.

And lastly, we have “Stages”. Stages are for setting up logical divisions within pipelines. The Jenkins Pipeline visualization plugin will display each stage as a separate segment. Because of this, teams tend to name stages for each phase of the development process, such as “Dev, Test, Stage, and Production”.
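These three terms can be seen together in a minimal scripted pipeline; the stage names follow the naming convention above and the commands are placeholders:

```groovy
// Scripted pipeline: the node step grabs an executor and a temporary workspace
node {
    stage('Dev') {
        // each sh or echo call is a single build step
        sh 'make build'
    }
    stage('Test') {
        sh 'make test'
    }
}
```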

The Jenkinsfile

The core of the pipeline is a file called the Jenkinsfile. It is the list of jobs which the pipeline will perform, and it can be held on the Jenkins server itself as part of the pipeline or at the root of a linked git/Bitbucket repository.

An example Jenkinsfile looks like this:

pipeline {
    // single-line comment
    /*
     * Multi-line comment
     */
    agent { ... }
    environment { ... }
    options {
        buildDiscarder(...)
        disableConcurrentBuilds(...)
        skipDefaultCheckout(...)
        timeout(...)
        retry(...)
        timestamps()
    }
    parameters {
        string(...)
        booleanParam(...)
        choice(...)
    }
    tools {
        maven '...'
        jdk '...'
        gradle '...'
    }
    triggers {
        cron(...)
        pollSCM(...)
    }
    stages {
        stage('...') {
            agent { ... }
            environment { ... }
            tools { ... }
            when { ... }
            steps {
                // see the Pipeline Steps Reference
                echo 'help'
                sh 'ls'
                script {
                    // any Groovy script
                }
            }
        }
    }
    post {
        always { ... }
        success { ... }
        failure { ... }
        ...
    }
}

This pipeline file

  • sets up environment variables
  • pulls data down from a git repo
  • sets it up in a Jenkins workspace
  • runs a script under scripts/
  • finishes by cleaning up the workspace (whether successful or not)

In this example we break this information down into the following chunks, with an explanation for each.

Pipeline

We define the pipeline using the pipeline {} section.

Our first block is optional but recommended: we set up environment variables using an environment {} section.

In this example I’m not restricting the agent the pipeline can run on, so I’ve included agent any.

pipeline {
    agent any
    environment {
        ...
    }
    options {
        buildDiscarder(...)
        disableConcurrentBuilds(...)
        skipDefaultCheckout(...)
        timeout(...)
        retry(...)
        timestamps()
    }

Stages

The core of a pipeline is its stages {} and steps {} sections; a stage can contain multiple steps.

In this example our first stage is called Checkout: code. This label will also be shown in the Jenkins interface, as we will see later, so making it relevant is useful.

Our steps create a workspace on the Jenkins slave and then check out the data from the Git repo we linked the pipeline to when we set it up (see later).

checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    extensions: scm.extensions + [
        [
            $class: 'RelativeTargetDirectory',
            relativeTargetDir: "src/v2"
        ]
    ],
    userRemoteConfigs: scm.userRemoteConfigs
])

Build

Our second stage is an example of using the data pulled down from the git repo, which includes a script.

You can see we have to do some Linux maintenance by making the script executable and running it as sudo.

steps {
    sh 'chmod +x ./make.bash'
    sh 'sudo ./make.bash -t linux_amd64'
}

Post

You can use the Archive Artifacts plugin in the post section.

post {
    success {
        archiveArtifacts artifacts: 'bin/*', fingerprint: true
    }
}
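The cleanup behaviour described earlier (removing the workspace whether the build succeeded or not) can be sketched with an always block; cleanWs() comes from the Workspace Cleanup plugin, and the failure message is illustrative:

```groovy
post {
    always {
        // runs regardless of the build result
        cleanWs()
    }
    failure {
        // hypothetical notification step
        echo "Build failed: see ${env.BUILD_URL}"
    }
}
```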

Note about brackets

Every open bracket { must have a matching close }, and vice versa; if there are too many of either, the pipeline will fail with various errors.

The Jenkinsfile in a git/Bitbucket project

The above Jenkinsfile, together with the git-maven-junit-docker-pipepline example, will help you set up Git, Maven, and JUnit with Docker.

The Pipeline plugin

The Pipeline plugin needs to be installed in Jenkins for this to work. It isn’t always installed by default and can be added via the Manage Jenkins interface.

Creating a new pipeline

With our Jenkinsfile and script in git/Bitbucket the pipeline needs to be setup in Jenkins. We do this using the Jenkins server web interface.

Log in to Jenkins

Click on New Item

We need to create a pipeline, so we

  • Give the pipeline a name
  • Select pipeline in the list
  • Click on OK

Next we scroll down to Advanced Project Options.

  • Select — Pipeline script from SCM (if you don’t see this option on your Jenkins server see the plugins note below)
  • Add your git/Bitbucket repo link
  • Add the credentials for connecting to your git repo

Note: because we have put the Jenkinsfile in the root of the Git repo path, it will be picked up and used by Jenkins automatically.

Note Additional Jenkins plugins

On a default installation of Jenkins, most of the plugins you need are already installed; however, check that you have at least the following plugins:

Manage Jenkins -> Manage Plugins

  • Bitbucket Plugin
  • Pipeline
  • Git
  • Pipeline SCM Step

Running a Pipeline build

Once the pipeline has been created, we can run a build. On the first build you will see a screen showing a breakdown of the pipeline.

Note that the headings we used in the stages of the above Jenkinsfile are now displayed within the pipeline view in Jenkins.

As each stage of the pipeline runs, the Jenkins interface will display a simple red (failed) or green (worked) status for the stage.

Error checking

If the pipeline fails (or even appears to work in the interface but has unexpected results), the Jenkins interface provides logs for each stage of the pipeline.

Clicking on the log file will show what happened during the run of that stage.

These instructions were written using a docker image of Bitbucket as the git repo and of Jenkins.

Ensuring a robust Jenkins deployment

The following practices are important if you want to run a robust Jenkins integration server that won’t break down.

Secure your Jenkins servers

Jenkins does not perform any security checks as part of its default configuration, so always ensure that you authenticate users and enforce access control on your Jenkins servers. Due to Jenkins’ important role, a breach of your Jenkins servers can mean a loss of access credentials to your most valuable resources and other potential risks such as unauthorized users kicking off malicious builds and jobs.
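As one hedged example, a Groovy init script (dropped into $JENKINS_HOME/init.groovy.d/) can enforce a local user database and deny anonymous access; the username and password here are placeholders that you should replace:

```groovy
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy

def instance = Jenkins.getInstance()

// local user database with sign-up disabled; credentials are placeholders
def realm = new HudsonPrivateSecurityRealm(false)
realm.createAccount('admin', 'change-me')
instance.setSecurityRealm(realm)

// only authenticated users may do anything; no anonymous read access
def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
strategy.setAllowAnonymousRead(false)
instance.setAuthorizationStrategy(strategy)

instance.save()
```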

Be cautious with the master(s)

Jenkins supports a master/slave mode where the workload for building projects is delegated to multiple slave nodes, allowing one Jenkins installation to host a large number of projects or provide different environments needed for builds/tests.

In a large, complex integration environment that includes multiple users that configure jobs, you should ensure that they are not running builds on the master with unrestricted access into the JENKINS_HOME directory. This is where Jenkins stores all of its important data.

Instead, you can use the Job Restrictions Plugin to limit which jobs can be executed on the master. You can also use multiple masters and restrict each to a specific team.
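A common complementary measure, sketched here as another init.groovy.d script, is to set the master’s executor count to zero so that no job can build on it at all:

```groovy
import jenkins.model.Jenkins

// zero executors on the master: all builds must run on slave nodes
Jenkins.getInstance().setNumExecutors(0)
Jenkins.getInstance().save()
```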

Backup configuration

In order to make sure all configurations and activity logs will be available when needed, you can use the thinBackup plugin. This plugin backs up Jenkins’ global and job-specific configurations and can be scheduled for automated backup of only the most critical configurations.

You must regularly test that you can restore your system from the backup; otherwise the backup serves no purpose.

Free up enough disk space

Jenkins needs disk space to perform builds, store data logs, and keep archives. To keep Jenkins up and running, make sure that you reserve 10 percent or more of the total disk space for Jenkins in order to prevent fragmentation (which may result in operating system performance degradation, for example). In most cases, mounting network drives and sharing the specific folders is sufficient to fix such issues.
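The buildDiscarder option from the Jenkinsfile skeleton above is one way to cap per-job disk usage; the retention numbers here are illustrative:

```groovy
options {
    // keep only the 10 most recent builds, and artifacts for the last 5
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
}
```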

You can use the Disk Usage Plugin to monitor your project usage trends and the remaining allocated disk space. Also, large and complex architectures always include monitoring tools in the integration and delivery systems. Installing lightweight tools such as Nagios or Zabbix can help with logging and monitoring Jenkins, and they will not harm its performance.

Version 2.0 will make CI even easier

In the new 2.0 version, Jenkins offers pipeline as code, a new setup experience, and several UI improvements. The Pipeline plugin introduces a domain-specific language (DSL) that helps users model their software delivery pipeline as code. Jenkins 2.0 will also help you choose the plugins that match your needs. Other improvements and redesign enhancements can be found on the official Jenkins 2.0 website.

Using Jenkins to implement your CI is simple, and it is the most common tool for providing CI servers in modern R&D environments. We encourage you to get involved and join the great Jenkins open-source community. You can contribute your own plugins, fix bugs, and share the fixes with the community, or stay tuned to Jenkins mailing lists and the IRC Channel.

 
