Running Jenkins job with different maven opts in parallel - maven

I would like to run the same Jenkins job in parallel, but with different Maven opts values for each run. How can I achieve that? I have tried several Jenkins plugins, with no luck.
I have also tried to configure pipelines using Groovy scripts, but I am too much of a beginner to figure out how to achieve what I want. The goal is to run the same Jenkins job in parallel, where the only difference is the environment in which my tests run.
Maybe there is already a solution you could point me to.

You should be able to use a parallel block for this. Here is a sample:
pipeline {
    agent none
    stages {
        stage('Run Tests') {
            parallel {
                stage('Test On Dev') {
                    agent {
                        label "IfYouwantToChangeAgent"
                    }
                    steps {
                        sh "mvn clean test -Dsomething=dev"
                    }
                    post {
                        always {
                            junit "**/TEST-*.xml"
                        }
                    }
                }
                stage('Test On QA') {
                    agent {
                        label "QA"
                    }
                    steps {
                        sh "mvn clean test -Dsomething=qa"
                    }
                    post {
                        always {
                            junit "**/TEST-*.xml"
                        }
                    }
                }
            }
        }
    }
}
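If what actually needs to differ is MAVEN_OPTS rather than a -D property, each parallel stage can set its own environment block. A minimal sketch of one such stage (the agent label and the JVM options are placeholders, not values from the question):

stage('Test On Dev') {
    agent {
        label "dev"
    }
    environment {
        // hypothetical value: give this run its own JVM settings
        MAVEN_OPTS = '-Xmx1024m'
    }
    steps {
        sh "mvn clean test"
    }
}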

Related

How to execute a jenkins declarative script from another declarative script?

I want to execute nested declarative scripts that already exist. Say I have this declarative script in my workspace, called test.DS:
pipeline {
    agent any
    stages {
        // in declarative syntax, parallel must be nested inside a stage
        stage('parallel-stages') {
            parallel {
                stage('stage-1') {
                    steps {
                        sh "echo this is stage-1"
                    }
                }
                stage('stage-2') {
                    steps {
                        sh "echo this is stage-2"
                    }
                }
            }
        }
    }
}
What would a declarative script look like that runs this test.DS script?
Below is one possible solution:
node {
    load './test.DS'
}
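If you want the outer script to stay declarative as well, the same load step can be wrapped in a script block. A sketch, assuming test.DS is already present in the workspace:

pipeline {
    agent any
    stages {
        stage('Run nested script') {
            steps {
                script {
                    // load and run the pre-existing script from the workspace
                    load './test.DS'
                }
            }
        }
    }
}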

Pipeline created by Job-DSL fails to run maven

I am trying to set up Jenkins using the Blue Ocean Docker image. I am creating a Jenkins pipeline with a Job DSL. When I create the pipeline myself and run it, it works. But when I run the pipeline created by the Job DSL, it fails because of Maven.
I looked on the internet, but I couldn't find a solution for my case.
Here is the Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
and this is the Job DSL:
job('PROJ-unit-tests') {
    scm {
        git('git://github.com/Jouda-Hidri/Transistics.git') { node ->
            node / gitConfigName('DSL User')
            node / gitConfigEmail('hxxxa#gmail.com')
        }
    }
    triggers {
        scm('*/15 * * * *')
    }
    steps {
        maven('-e clean test')
    }
}
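Note that job(...) in Job DSL creates a freestyle job, and its maven(...) step relies on a Maven installation configured in Jenkins, while the Jenkinsfile above gets Maven from the Docker image. One possible direction, not from the original question, is to generate a pipeline job that reads the Jenkinsfile from SCM instead. A sketch, where the script path is an assumption:

pipelineJob('PROJ-unit-tests') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('git://github.com/Jouda-Hidri/Transistics.git')
                    }
                }
            }
            // assumes the Jenkinsfile sits at the repository root
            scriptPath('Jenkinsfile')
        }
    }
}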

Locking resource in Jekinsfile pipeline for parallel and sequential stages at one time

I'm trying to run the following process in my Jenkinsfile:
1. Build the app
2. Trigger deploys of two components to the test environment, in parallel:
   - foo deploy
   - bar deploy
3. Run tests on the deployed app
Steps 2 and 3 require locking a resource, because I have only one test environment available.
There is no problem achieving this without running step 2 in parallel; however, when I configure the Jenkinsfile to run the deploys together, I get the following error from Jenkins:
WorkflowScript: 19: Parallel stages or branches can only be included in a top-level stage. # line 19, column 7.
stage('Deploy Foo') {
^
Here's the full Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                powershell(script: '.\\ci\\build.ps1 -Script .\\ci\\build\\build.cake')
            }
        }
        stage('Deploy and run tests') {
            when {
                branch('develop')
            }
            options {
                lock('test-env')
            }
            stages {
                stage('Deploy') {
                    parallel {
                        stage('Deploy Foo') {
                            steps {
                                build(job: 'Deploy_Foo')
                            }
                        }
                        stage('Deploy Bar') {
                            steps {
                                build(job: 'Deploy_Bar')
                            }
                        }
                    }
                }
                stage('Run tests') {
                    steps {
                        powershell(script: '.\\ci\\build.ps1 -Script .\\ci\\test\\build.cake')
                    }
                }
            }
        }
    }
}
I have also tried locking the test-env resource separately for the Deploy and Test stages, but that increases the risk of a race condition: another running job waiting for the resource could "jump in" between the Deploy and Test stages of the current job.
Is there any way to express this mix of sequential and parallel stages in a Jenkinsfile?
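One possible direction, not from the original thread, is to drop to scripted syntax for the locked portion, since the lock step can then wrap both the parallel deploys and the sequential test run. A sketch, assuming the Lockable Resources plugin and omitting the develop-branch condition:

node {
    stage('Build') {
        powershell(script: '.\\ci\\build.ps1 -Script .\\ci\\build\\build.cake')
    }
    // hold the single test environment across both deploy and test
    lock('test-env') {
        stage('Deploy') {
            parallel(
                'Deploy Foo': { build(job: 'Deploy_Foo') },
                'Deploy Bar': { build(job: 'Deploy_Bar') }
            )
        }
        stage('Run tests') {
            powershell(script: '.\\ci\\build.ps1 -Script .\\ci\\test\\build.cake')
        }
    }
}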

Running a script post successful build in jenkins

I am trying to run a bash script after a successful build in Jenkins:
stages {
    stage("test") {
        steps {
            ...
        }
        post {
            success {
                steps {
                    sh "./myscript"
                }
            }
        }
    }
}
I am getting an error saying that method "steps" does not exist. How can I run a script after a successful build?
You need to remove the "steps" block inside the "success" block and call the script directly inside "success".
According to the docs (which are admittedly a bit confusing), "success" is itself a container for steps, so there is no need to nest another "steps" block:
https://jenkins.io/doc/book/pipeline/syntax/#post
stages {
    stage("test") {
        steps {
            ...
        }
        post {
            success {
                sh "./myscript"
            }
        }
    }
}
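If the script should run only after the whole build succeeds, rather than after this one stage, the post block can also sit at the pipeline level. A sketch, with the test step left as a placeholder:

pipeline {
    agent any
    stages {
        stage("test") {
            steps {
                // placeholder for the actual test steps
                sh "echo run the build and tests here"
            }
        }
    }
    post {
        success {
            // runs once, only if every stage succeeded
            sh "./myscript"
        }
    }
}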

How to run pipeline stage only when there are no test failures?

I'm using Jenkins with a multi-module Maven project, and I have three stages in my pipeline: build, test and deploy. I want to run all unit tests during the test stage and, when there are test failures, skip the deploy stage.
For now I have a workaround (which works as I want it to), but I had to explicitly approve the use of some methods that Jenkins flags as security-sensitive.
Is there a cleaner way to achieve this?
import hudson.tasks.test.AbstractTestResultAction

pipeline {
    agent {
        docker {
            image 'maven:3-jdk-8-alpine'
            args '--name maven3 -v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -DskipTests'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test --fail-never'
            }
            post {
                always {
                    junit '**/target/surefire-reports/*.xml'
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    def AbstractTestResultAction testResultAction = currentBuild.rawBuild.getAction(AbstractTestResultAction.class)
                    if (testResultAction == null) {
                        error("Could not read test results")
                    }
                    def numberOfFailedTests = testResultAction.failCount
                    testResultAction = null
                    if (numberOfFailedTests == null || numberOfFailedTests != 0) {
                        error("Did not deploy. Number of failed tests: " + numberOfFailedTests)
                    }
                    sh 'mvn deploy -DskipTests'
                }
            }
        }
    }
}
In your test stage you execute mvn test --fail-never. With that flag, Maven returns exit code 0 even when there are test failures.
Jenkins checks the exit code; if it is 0, it continues with the next command.
So get rid of --fail-never.
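With --fail-never removed, mvn test fails the build when tests fail, so Jenkins skips the remaining stages on its own and the Deploy stage no longer needs the script block. A sketch based on the stages above:

stage('Test') {
    steps {
        // a non-zero Maven exit code now fails this stage
        sh 'mvn test'
    }
    post {
        always {
            // test results are still recorded even when the stage fails
            junit '**/target/surefire-reports/*.xml'
        }
    }
}
stage('Deploy') {
    steps {
        // only reached when the Test stage succeeded
        sh 'mvn deploy -DskipTests'
    }
}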
