Logic or automation that will carry out JUnit XML test naming, stashing and unstashing - jenkins-pipeline

Within a Jenkinsfile, I am running regression tests in pytest and using the --junitxml flag to produce test output.
Each regression test runs in its own stage and captures an .xml file with the test output. I then stash these .xml files (as each stage runs on a different agent). After all the regression tests have run, the stashed files are recovered for reporting.
Please see below:
stage ('Regression 01') {
    agent {
        label 'rhel1'
    }
    steps {
        sh "cd /directory1/appServer && /home/appServer/py/venvs/*/bin/python -m " +
           "pytest -m fast test-dir/regression_test.py -c conf.cfg --junitxml /share/01.xml"
        stash includes: '01.xml', name: 'test01'
    }
}
stage ('Regression 02') {
    agent {
        label 'rhel2'
    }
    steps {
        sh "cd /directory1/appServer && /home/appServer/py/venvs/*/bin/python -m " +
           "pytest -m fast test-dir/regression_test.py -c conf.cfg --junitxml /share/02.xml"
        stash includes: '02.xml', name: 'test02'
    }
}
post {
    always {
        unstash 'test01'
        unstash 'test02'
        junit "*.xml"
        ...
    }
}
I have a total of 10 regression tests running, each of which stashes a unique .xml file, and I am looking to add more, so hardcoding the XML file names is not feasible.
How can I add some sort of automation or logic to my Jenkinsfile that will handle the XML naming, stashing and unstashing?
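One possible direction, as a minimal sketch rather than a definitive answer: generate the per-test branches in a loop from a single list, so the file names, stash names and unstash calls all derive from one index. This uses scripted-pipeline syntax (a declarative pipeline would need a script block or a matrix for something similar); the testAgents map, the 'reporting' label and writing the report into the workspace are assumptions for illustration, not from the original question.
// Sketch only: one entry per regression test, keyed by a zero-padded id.
def testAgents = ['01': 'rhel1', '02': 'rhel2' /* , '03': 'rhel3', ... */]

def branches = [:]
for (entry in testAgents) {
    def id = entry.key
    def agentLabel = entry.value
    branches["Regression ${id}"] = {
        node(agentLabel) {
            // Write the report into the workspace so stash can find it.
            sh "cd /directory1/appServer && /home/appServer/py/venvs/*/bin/python -m " +
               "pytest -m fast test-dir/regression_test.py -c conf.cfg --junitxml \$WORKSPACE/${id}.xml"
            stash includes: "${id}.xml", name: "test${id}"
        }
    }
}
parallel branches

// Later, on the reporting agent ('reporting' label is an assumption):
node('reporting') {
    for (id in testAgents.keySet()) {
        unstash "test${id}"
    }
    junit '*.xml'
}
Adding an eleventh test is then just one more entry in testAgents.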

Related

Jenkinsfile Creating Scratch Org Failing

I am trying to set up Continuous Integration between Bitbucket and Salesforce using Jenkins, and I am having trouble with the Scratch Org creation. The Jenkinsfile, I BELIEVE, is set up correctly. Here it is:
node {
    def SF_JENKINSUSER = env.SF_JENKINS_USER
    def SF_USERNAME = env.SF_JENKINS_USER + '.' + env.SF_DEV
    def SF_URL = env.SF_TESTURL
    def SF_PROD = env.SF_PRODURL
    def SF_DEV_HUB = env.SF_DEVHUB
    stage('Checkout Source') {
        checkout scm
    }
    withEnv(["HOME=${env.WORKSPACE}"]) {
        withCredentials([string(credentialsId: 'SF_CONSUMER_KEY_BIND', variable: 'SF_CONSUMER_KEY'), file(credentialsId: 'SERVER_KEY_CREDENTALS_ID', variable: 'server_key_file')]) {
            stage('Authorize DevHub Org') {
                try {
                    rc = command "sfdx force:auth:jwt:grant -r ${SF_PROD} -i ${SF_CONSUMER_KEY} -u ${SF_JENKINSUSER} -f ${server_key_file} --setdefaultdevhubusername -a ${SF_DEV_HUB}"
                    if ( rc != 0 ) {
                        echo '========== ERROR: ' + rc
                        error 'Salesforce org authorization failed.'
                    }
                    else {
                        command "sfdx force:org:list"
                        echo '========== LOGGED IN =========='
                    }
                }
                catch (err) {
                    echo "========== DEVHUB AUTHORIZATION FAILURE: ${err} =========="
                }
            }
            // Create a new scratch org to test the repo
            stage('Create Test Scratch Org') {
                try {
                    rc = command "sfdx force:org:create -s -f config\\project-scratch-def.json -a TestScratch -w 10 -d 1"
                    if (rc != 0) {
                        error 'Salesforce test scratch org creation failed.'
                    }
                }
                catch (err) {
                    echo "========== SCRATCH ORG CREATION FAILURE: ${err} =========="
                }
            }
        }
    }
}
def command(script) {
    if ( isUnix() ) {
        return sh(returnStatus: true, script: script);
    }
    else {
        return bat(returnStatus: true, script: script);
    }
}
Apologies about the formatting there. Now, I cannot figure out the results of this. It says the connected status of the org is JwtGrantFailure, and it is looking for a server.key file instead of the scratch-org JSON file on the command line. Here are the pertinent parts of the output from this job:
E:\DevOps_Root\JENKINS\workspace\TestingCIPipeline2>sfdx force:org:list
=== Orgs
ALIAS USERNAME ORG ID CONNECTED STATUS
(D) DevHub sa.jenkins#[...].com 00D300000000UicEAE JwtGrantError
No active scratch orgs found. Specify --all to see all scratch orgs
[Pipeline] echo
========== LOGGED IN ==========
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Create Test Scratch Org)
[Pipeline] isUnix
[Pipeline] bat
E:\DevOps_Root\JENKINS\workspace\TestingCIPipeline2>sfdx force:org:create -s -f config\project-scratch-def.json -a TestScratch -w 10 -d 1
ERROR running force:org:create: ENOENT: no such file or directory, open
'E:\DevOps_Root\JENKINS\workspace\Pipe#tmp\secretFiles\e0ab232f-1958-42d1-b3bb-aed5e00a562f\server.key'
[Pipeline] echo
========== SCRATCH ORG COMMAND FAILURE: 1
Why would the job be looking for the server.key file when I have already run the withCredentials successfully? What am I missing here?
Any insights would be greatly appreciated.
OK, so this one confounded me for a while, but I finally got the script to create a scratch org.
I logged into the Jenkins virtual server through Remote Desktop Connection, opened Windows Explorer, navigated to the Jenkins user's .sfdx folder and deleted the following files:
alias.json
key.json
user#domain.json
stash.json
After I did that, I made some updates in the Jenkinsfile and pushed the changes up to the repository. The job ran, and the Scratch Org was created.
My new issue is figuring out how to have the same job run again, because we will have multiple repos using this single job.
Anyway, I hope this helps some of you out who are facing the same issue.
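A minimal sketch of how that manual cleanup could be folded into the pipeline itself, so the cached JWT auth files do not block the next run. This is not from the original answer: the stage name, the paths and the decision to delete every cached .json under .sfdx are assumptions, so adjust the file list to what you actually need to clear.
// Sketch only: clear the cached sfdx alias/auth files (the ones deleted by hand
// in the answer above) before 'Authorize DevHub Org' runs again. Note this wipes
// all cached org auths, which is broader than the four files listed.
stage('Clean SFDX Cache') {
    if (isUnix()) {
        sh 'rm -f "$HOME/.sfdx/"*.json'
    } else {
        bat 'if exist "%USERPROFILE%\\.sfdx\\*.json" del /q "%USERPROFILE%\\.sfdx\\*.json"'
    }
}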

How to save the command history of a Jenkins build run?

I wrote a declarative Jenkins pipeline and would like to track the CLI commands executed by Jenkins. To do this, I added a stage with the step sh 'history -a' in it:
pipeline {
    options {
        ...
    }
    agent {
        node {
            ...
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'hostname'
                sh 'pwd'
                ...
            }
        }
        ...
        stage('History') {
            steps {
                sh 'history -a'
            }
        }
    }
    post {
        ...
    }
}
But that is not working:
Console Output
...
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Tear Down)
[Pipeline] sh
+ history -a
/path/to/project-root#tmp/durable-66ba15cc/script.sh: 1: history: not found
[Pipeline] }
...
Other Linux commands like hostname, ls, or pwd are working fine.
Why does history run into an error? How can I store the shell commands called by Jenkins in the context of a pipeline?
I think that specific error is only because the shell the sh step runs in on that agent does not have the history command available, hence history: not found.
As for storing the sh commands: if you want only the sh commands, I think you need to write them to a file that you create at the beginning, appending to it every time you have a sh step; or you can just use the pipeline log file (the console output).
You can find here a thread about the location of the pipeline or build logs, in case it helps.
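A minimal sketch of that "write to a file" idea, not from the original answer: a small wrapper that appends each command to a history file in the workspace before running it. The helper name loggedSh and the file name command-history.log are assumptions; in a declarative pipeline the helper would be defined outside the pipeline block and called inside script blocks.
// Sketch only: record every command run through this wrapper, then run it.
def loggedSh(String cmd) {
    def history = fileExists('command-history.log') ? readFile('command-history.log') : ''
    writeFile file: 'command-history.log', text: history + cmd + '\n'
    sh cmd
}

// Usage inside a script block, e.g.:
//   loggedSh 'hostname'
//   loggedSh 'pwd'
// and in post { always { archiveArtifacts artifacts: 'command-history.log' } }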

Jenkins pipeline not executing next stage after failure in one stage of running bash script

I am running a shell script inside a Docker container via a Jenkins Groovy pipeline script. The bash script sets some environment variables and then executes unit tests. The stdout of the unit test execution is dumped to a text file.
I later copy this text file out of the container for use.
Here is the shell script:
#!/bin/bash
source /root/venv/bin/activate
export PYTHONPATH=/foo/bar
cd unit_tests
rm -f results.txt
python tests.py >> results.txt
My pipeline script is as follows:
stage('Run Unit Tests') {
    steps {
        sh '''
            docker-compose -f ./dir1/docker-compose-test.yml up -d
            docker cp /supporting_files/run_unit_tests.sh container_1:/foo/bar/
            docker exec container_1 /bin/bash run_unit_tests.sh
            docker cp container_1:/foo/bar/unit_tests/results.txt .
        '''
    }
}
stage('Reporting') {
    steps {
        //steps for reporting
    }
}
The problem is that whenever any test fails, results.txt has the appropriate text about the failures and their stack traces, but the pipeline stops executing, saying
[Pipeline] }
ERROR: script returned exit code 1
Because of this I am not able to execute the next steps of parsing the results.txt file and reporting the results.
How do I make the pipeline execute the next stage?
I tried some things like:
1. Using catchError:
stage('Run Unit Tests') {
    steps {
        catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
            sh '''
                //Running the commands above
            '''
        }
    }
}
2. Using try:
try {
    stage('Run Unit Tests') {
        sh '''
            //Executing tests
        '''
    }
} catch(e) {
    echo e.toString()
}
But neither of them helps.
Also, the shell script simply dumps the stdout of the test run into a text file, so I don't understand why exit code 1 is returned when the operation itself does not fail. I checked the text file afterwards; it had the correct failure and error counts with stack traces.
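For what it's worth, the exit code 1 most likely comes from the test runner itself: python tests.py exits non-zero when tests fail, redirecting its stdout does not change that, and since it is the last command in the script its status becomes the script's status, which docker exec and then sh propagate. A minimal sketch of one way around it, not from the original question: capture the status with returnStatus so the stage keeps going, then mark the build unstable. Splitting the final docker cp into its own sh step is an assumption for illustration.
// Sketch only: run the tests without letting a non-zero exit abort the stage,
// still copy results.txt out, and record the failure as UNSTABLE.
stage('Run Unit Tests') {
    steps {
        script {
            def rc = sh(returnStatus: true, script: '''
                docker-compose -f ./dir1/docker-compose-test.yml up -d
                docker cp /supporting_files/run_unit_tests.sh container_1:/foo/bar/
                docker exec container_1 /bin/bash run_unit_tests.sh
            ''')
            // Copy the results out regardless of the test outcome.
            sh 'docker cp container_1:/foo/bar/unit_tests/results.txt .'
            if (rc != 0) {
                currentBuild.result = 'UNSTABLE'
            }
        }
    }
}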

Get the cause of a Maven build failure inside a Jenkins pipeline

I have a Jenkins scripted pipeline set up where I execute a number of Maven builds. I want to treat one of them as non-fatal if the root cause is a known one.
I have tried to achieve that by inspecting the Exception's message, e.g.
try {
    sh "mvn -U clean verify sonar:sonar ${sonarcloudParams}"
} catch ( Exception e ) {
    if ( e.getMessage().contains("not authorized to run analysis") ) {
        echo "Marking build unstable due to missing SonarCloud onboarding. See https://cwiki.apache.org/confluence/display/SLING/SonarCloud+analysis for steps to fix."
        currentBuild.result = 'UNSTABLE'
    }
}
The problem is that the exception's message is not the one from Maven, but instead "script returned exit code 1".
There is no further information in e.getCause().
How can I access the cause of the Maven build failure inside my scripted pipeline?
You can capture the command output, then parse it for the specific message.
// Option 1: capture the command output directly
def output = sh(
    script: "mvn -U clean verify sonar:sonar ${sonarcloudParams}",
    returnStdout: true
).trim()
echo "mvn cmd output: ${output}"
if (output.contains('not authorized to run analysis')) {
    currentBuild.result = 'UNSTABLE'
}

// Option 2: parse the Jenkins job build log instead
def logUrl = env.BUILD_URL + 'consoleText'
def cmd = "curl -u \${JENKINS_AUTH} -k ${logUrl} | tail -n 50"
def logTail = sh(returnStdout: true, script: cmd).trim()
echo "job build log: ${logTail}"
if (logTail.contains('not authorized to run analysis')) {
    currentBuild.result = 'UNSTABLE'
}
One option is to inspect the last log lines using
def sonarCloudNotEnabled = currentBuild.rawBuild.getLog(50).find {
line -> line.contains("not authorized to run analysis")
}
However, this does not work by default. On the Jenkins instance I'm using it errors out with
Scripts not permitted to use method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild. Administrators can decide whether to approve or reject this signature.
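Another direction, a minimal sketch not taken from the answers above: send the Maven output to a file and inspect that on failure, which sidesteps both the "script returned exit code 1" exception message and the getRawBuild script-approval issue. The file name mvn-output.log is an assumption.
// Sketch only: capture Maven's output in a file, keep it visible in the console,
// and downgrade only the known SonarCloud onboarding failure to UNSTABLE.
def rc = sh(returnStatus: true,
            script: "mvn -U clean verify sonar:sonar ${sonarcloudParams} > mvn-output.log 2>&1")
sh 'cat mvn-output.log'   // still show the Maven output in the build log
if (rc != 0) {
    def mvnLog = readFile('mvn-output.log')
    if (mvnLog.contains('not authorized to run analysis')) {
        echo 'Marking build unstable due to missing SonarCloud onboarding.'
        currentBuild.result = 'UNSTABLE'
    } else {
        error "mvn exited with status ${rc}"
    }
}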

Store the console output of a build step execution in Jenkins pipeline

In my Jenkins pipeline I use the "Execute shell command" step to run my Gradle build script.
Now I want to check whether the build has failed, in which case I would like to read the console output, store it in a string, and publish it to a Slack channel.
The code that I have tried is as follows:
try {
    for (int i = 0; i < noOfComponents; i++) {
        component = compileProjectsWithPriority[i]
        node {
            out = sh script: "cd /home/jenkins/projects/${component} && ${gradleHome}/bin/gradle build", returnStdout: true
        }
    }
}
catch (e) {
    def errorSummary = 'Build failed due to compilation error in ' + "${component}" + '\n' + "${out}"
    slackSend (channel: '#my_channel', color: '#FF0000', message: errorSummary)
}
However, it does not even execute the shell script, and the console output is null. What is the right approach to do this?
Thanks in advance.
The sh command in Jenkins pipelines may not work with shell built-ins like cd. Perhaps try using dir, as below:
node {
    dir("/home/jenkins/projects/${component}") {
        out = sh script: "${gradleHome}/bin/gradle build", returnStdout: true
    }
}
All commands within { and } for a dir will execute with the specified directory as their working directory. This will overcome any problems that may exist with the cd shell built-in.
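One more caveat worth sketching (an assumption on my part, not from the answer above): with returnStdout, a failing Gradle build makes sh throw before anything is returned, so out stays null in the catch block. Redirecting the build output to a file and reading it back keeps the log available even on failure. The file name build-output.log is illustrative; the other variables come from the question.
// Sketch only: keep the build output in a file so it can still be read and
// sent to Slack when the Gradle build fails.
for (int i = 0; i < noOfComponents; i++) {
    def component = compileProjectsWithPriority[i]
    node {
        dir("/home/jenkins/projects/${component}") {
            try {
                sh "${gradleHome}/bin/gradle build > build-output.log 2>&1"
            } catch (e) {
                def out = readFile('build-output.log')
                def errorSummary = "Build failed due to compilation error in ${component}\n${out}"
                slackSend(channel: '#my_channel', color: '#FF0000', message: errorSummary)
                throw e   // re-throw so the build still fails after notifying
            }
        }
    }
}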
