I am trying to set up continuous integration between Bitbucket and Salesforce using Jenkins, and I am having trouble with the scratch org creation. The Jenkinsfile, I believe, is set up correctly. Here it is:
node {
    def SF_JENKINSUSER = env.SF_JENKINS_USER
    def SF_USERNAME = env.SF_JENKINS_USER + '.' + env.SF_DEV
    def SF_URL = env.SF_TESTURL
    def SF_PROD = env.SF_PRODURL
    def SF_DEV_HUB = env.SF_DEVHUB

    stage('Checkout Source') {
        checkout scm
    }

    withEnv(["HOME=${env.WORKSPACE}"]) {
        withCredentials([string(credentialsId: 'SF_CONSUMER_KEY_BIND', variable: 'SF_CONSUMER_KEY'), file(credentialsId: 'SERVER_KEY_CREDENTALS_ID', variable: 'server_key_file')]) {

            stage('Authorize DevHub Org') {
                try {
                    rc = command "sfdx force:auth:jwt:grant -r ${SF_PROD} -i ${SF_CONSUMER_KEY} -u ${SF_JENKINSUSER} -f ${server_key_file} --setdefaultdevhubusername -a ${SF_DEV_HUB}"
                    if (rc != 0) {
                        echo '========== ERROR: ' + rc
                        error 'Salesforce org authorization failed.'
                    }
                    else {
                        command "sfdx force:org:list"
                        echo '========== LOGGED IN =========='
                    }
                }
                catch (err) {
                    echo "========== DEVHUB AUTHORIZATION FAILURE: ${err} =========="
                }
            }

            // Create a new scratch org to test the repo
            stage('Create Test Scratch Org') {
                try {
                    rc = command "sfdx force:org:create -s -f config\\project-scratch-def.json -a TestScratch -w 10 -d 1"
                    if (rc != 0) {
                        error 'Salesforce test scratch org creation failed.'
                    }
                }
                catch (err) {
                    echo "========== SCRATCH ORG CREATION FAILURE: ${err} =========="
                }
            }
        }
    }
}
def command(script) {
    if (isUnix()) {
        return sh(returnStatus: true, script: script)
    }
    else {
        return bat(returnStatus: true, script: script)
    }
}
Now, the results of this I cannot figure out. The connected status of the DevHub org shows as JwtGrantError, and at the command line the scratch org creation is looking for the server.key file instead of the scratch definition JSON file. Here are the pertinent parts of the output from this job:
E:\DevOps_Root\JENKINS\workspace\TestingCIPipeline2>sfdx force:org:list
=== Orgs
ALIAS USERNAME ORG ID CONNECTED STATUS
(D) DevHub sa.jenkins@[...].com 00D300000000UicEAE JwtGrantError
No active scratch orgs found. Specify --all to see all scratch orgs
[Pipeline] echo
========== LOGGED IN ==========
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Create Test Scratch Org)
[Pipeline] isUnix
[Pipeline] bat
E:\DevOps_Root\JENKINS\workspace\TestingCIPipeline2>sfdx force:org:create -s -f config\project-scratch-def.json -a TestScratch -w 10 -d 1
ERROR running force:org:create: ENOENT: no such file or directory, open
'E:\DevOps_Root\JENKINS\workspace\Pipe@tmp\secretFiles\e0ab232f-1958-42d1-b3bb-aed5e00a562f\server.key'
[Pipeline] echo
========== SCRATCH ORG COMMAND FAILURE: 1
Why would the job be looking for the server.key file when I have already run the withCredentials successfully? What am I missing here?
Any insights would be greatly appreciated.
OK, so this one confounded me for a while, but I finally got the script to create a scratch org.
I logged into the Jenkins virtual server through Remote Desktop Connection, opened Windows Explorer, navigated to the Jenkins user's .sfdx folder, and deleted the following files:
alias.json
key.json
user@domain.json
stash.json
After I did that, I made some updates in the Jenkinsfile and pushed the changes up to the repository. The job ran, and the Scratch Org was created.
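As a side note, the same cleanup could probably be scripted so it does not require logging into the build server by hand. The stage below is only a rough sketch of that idea, not a tested step: it assumes a Windows agent and that the stale cache lives under the Jenkins user's .sfdx folder (the per-user auth file is named after the org username, so adjust the file names to your setup; also, if HOME is pointed at the workspace as in the Jenkinsfile above, the cache may live under the workspace instead).
stage('Clear Stale SFDX Cache') {
    // Hypothetical cleanup stage: remove the cached sfdx alias/key/stash files
    // that held the stale JWT state. File names mirror the ones deleted manually above.
    bat '''
        if exist "%USERPROFILE%\\.sfdx\\alias.json" del /q "%USERPROFILE%\\.sfdx\\alias.json"
        if exist "%USERPROFILE%\\.sfdx\\key.json" del /q "%USERPROFILE%\\.sfdx\\key.json"
        if exist "%USERPROFILE%\\.sfdx\\stash.json" del /q "%USERPROFILE%\\.sfdx\\stash.json"
        exit /b 0
    '''
}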
My new issue is figuring out how to have this same job run again, because we will have multiple repos using this single job.
Anyway, I hope this helps some of you out who are facing the same issue.
Related
While executing the sh command in a Jenkins pipeline project, we are getting an error like the one below, and the sh command is not working.
Error message:
[Pipeline] sh
Warning: JENKINS-41339 probably bogus PATH=/bin/sh:/usr/atria/bin:/usr/atria/bin:$PATH; perhaps you meant to use ‘PATH+EXTRA=/something/bin’?
process apparently never started in /var/lib/jenkins/workspace/QSearch_pipelineTest@tmp/durable-6d5deef7
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
Below is the Jenkins pipeline script code:
pipeline {
    agent any
    environment {
        DATE = "December 17th"
    }
    stages {
        stage("Env Variables") {
            environment {
                NAME = "Alex"
            }
            steps {
                echo "Today is ${env.DATE}"
                echo "My name ${env.NAME}"
                echo "My path is ${env.PATH}"
                script {
                    env.WEBSITE = "phoenixNAP KB"
                    env.PATH = "/bin/sh:$PATH"
                }
                echo "This is an example for ${env.WEBSITE}"
                echo "My path ${env.PATH}"
                sh 'env'
                withEnv(["TEST_VARIABLE=TEST_VALUE"]) {
                    echo "The value of TEST_VARIABLE is ${env.TEST_VARIABLE}"
                }
            }
        }
    }
}
Below is the output of the Jenkins build job:
...
[Pipeline] echo
My path /bin/sh:/usr/atria/bin:/usr/atria/bin:$PATH
[Pipeline] sh
Warning: JENKINS-41339 probably bogus PATH=/bin/sh:/usr/atria/bin:/usr/atria/bin:$PATH; perhaps you meant to use ‘PATH+EXTRA=/something/bin’?
process apparently never started in /var/lib/jenkins/workspace/QSearch_pipelineTest@tmp/durable-6d5deef7
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code -2
Finished: FAILURE
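The warning in that log is itself a strong hint: JENKINS-41339 is about a PATH that no longer contains a usable shell, which is what happens once env.PATH is overwritten with a value starting with /bin/sh. As a hedged sketch only (the directory is just the placeholder from the warning text), the PATH+ syntax of withEnv prepends a directory instead of replacing the whole variable:
script {
    // Sketch, not a verified fix: "PATH+EXTRA" prepends /something/bin to PATH
    // for the enclosed steps, leaving the rest of PATH (and the shell) intact.
    withEnv(['PATH+EXTRA=/something/bin']) {
        sh 'env'
    }
}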
I wrote a declarative Jenkins pipeline and would like to track the CLI commands executed by Jenkins. To do this, I added a stage and the step sh 'history -a' in it:
pipeline {
    options {
        ...
    }
    agent {
        node {
            ...
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'hostname'
                sh 'pwd'
                ...
            }
        }
        ...
        stage('History') {
            steps {
                sh 'history -a'
            }
        }
    }
    post {
        ...
    }
}
But that is not working:
Console Output
...
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Tear Down)
[Pipeline] sh
+ history -a
/path/to/project-root@tmp/durable-66ba15cc/script.sh: 1: history: not found
[Pipeline] }
...
Other Linux commands like hostname, ls, or pwd are working fine.
Why does history run into an error? How can I store the shell commands called by Jenkins in the context of a pipeline?
I think you are getting that specific error only because the agent where you are running sh does not have the history command available, hence history: not found.
As for storing the sh commands: if you want only the sh commands, I think you need to write them to a file that you create at the beginning and append to every time you use an sh step, or you can just use the pipeline log file (the console output).
You can find here a thread about the location of the pipeline or build logs, in case it helps.
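If you do go the file route, a minimal sketch of the idea (scripted-pipeline style; the names loggedSh and commands.log are made up for illustration) could look like this:
// Illustrative wrapper only: record each command in a workspace file, then run it.
def loggedSh(String cmd) {
    def previous = fileExists('commands.log') ? readFile('commands.log') : ''
    writeFile file: 'commands.log', text: previous + cmd + '\n'
    sh cmd
}

node {
    loggedSh 'hostname'
    loggedSh 'pwd'
    // keep the recorded command list with the build
    archiveArtifacts artifacts: 'commands.log'
}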
I have Jenkins pipeline jobs which run shell scripts internally. Even though the shell script fails, the job shows as passed.
My Pipeline:
stage('Code Checkout') {
timestamps {
step([$class: 'WsCleanup'])
echo "check out======GIT =========== on ${env.gitlabBranch}"
checkout scm
}
}
stage("build") {
sh 'sh script.sh'
}
}
catch(err){
currentBuild.result = 'FAILURE'
emailExtraMsg = "Build Failure:"+ err.getMessage()
throw err
}
}
}
LOG:
+ sh script.sh
$RELEASE_BRANCH is empty
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
It looks like your script returns with a zero status code. Otherwise it would throw an exception, as described in the sh step documentation. The problem may be that the exit status of sh script.sh is the exit status of the last executed command, and your script may do something after the error happens (e.g. echo something before exit). The simplest and most brutal way to make sure every error is returned is to put set -e at the top of your script.
You don't need any catch to get this behavior (i.e. failing on script error) unless you want to do some extra operations in case of error. But if you do, then you should enclose the script execution in a try clause:
stage("build") {
try {
sh 'sh script.sh'
}
catch (err) {
currentBuild.result = 'FAILURE'
emailExtraMsg = "Build Failure:"+ err.getMessage()
throw err
}
}
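For the set -e route, a hedged sketch: passing -e to the shell that runs the script has the same effect as putting set -e at its top, so an error inside script.sh propagates to the sh step and fails the build (assuming script.sh does not deliberately rely on continuing past errors):
stage("build") {
    // Sketch only: run script.sh with errexit enabled so the first failing
    // command makes the step return non-zero instead of masking the error.
    sh 'sh -e script.sh'
}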
I have a Jenkins scripted pipeline set up where I execute a number of Maven builds. I want to treat one of them as non-fatal if the root cause is a known one.
I have tried to achieve that by inspecting the Exception's message, e.g.
try {
    sh "mvn -U clean verify sonar:sonar ${sonarcloudParams}"
} catch (Exception e) {
    if (e.getMessage().contains("not authorized to run analysis")) {
        echo "Marking build unstable due to missing SonarCloud onboarding. See https://cwiki.apache.org/confluence/display/SLING/SonarCloud+analysis for steps to fix."
        currentBuild.result = 'UNSTABLE'
    }
}
The problem is that the exception's message is not the one from Maven, but instead "script returned exit code 1".
There is no further information in e.getCause().
How can I access the cause of the Maven build failure inside my scripted pipeline?
You can get the command output, then parse it for the specific message.
def output = sh(
script: "mvn -U clean verify sonar:sonar ${sonarcloudParams}",
returnStdout: true
).trim()
echo "mvn cmd output: ${output}"
if(output.contains('not authorized to run analysis')) {
currentBuild.result = 'UNSTABLE'
}
// parse jenkins job build log
def logUrl = env.BUILD_URL + 'consoleText'
def cmd = "curl -u \${JENKINS_AUTH} -k ${logUrl} | tail -n 50"
def output = sh(returnStdout: true, script: cmd).trim()
echo "job build log: ${output}"
if(output.contains('not authorized to run analysis')) {
currentBuild.result = 'UNSTABLE'
}
One option is to inspect the last log lines using
def sonarCloudNotEnabled = currentBuild.rawBuild.getLog(50).find {
line -> line.contains("not authorized to run analysis")
}
However, this does not work by default. On the Jenkins instance I'm using it errors out with
Scripts not permitted to use method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild. Administrators can decide whether to approve or reject this signature.
In my Jenkins pipeline I use the "Execute shell command" step to run my Gradle build script.
Now I want to check if the build has failed, in which case I would like to read the console output, store it in a string, and publish it to a Slack channel.
The code that I have tried goes as follows:
try {
    for (int i = 0; i < noOfComponents; i++) {
        component = compileProjectsWithPriority[i]
        node {
            out = sh script: "cd /home/jenkins/projects/${component} && ${gradleHome}/bin/gradle build", returnStdout: true
        }
    }
}
catch (e) {
    def errorSummary = 'Build failed due to compilation error in ' + "${component}" + '\n' + "${out}"
    slackSend (channel: '#my_channel', color: '#FF0000', message: errorSummary)
}
However, it does not even execute the shell script, and the console output is null. What is the right approach to do this?
Thanks in advance
The sh command in Jenkins pipelines may not work with shell built-ins like cd. Perhaps try using dir, as below:
node {
    dir("/home/jenkins/projects/${component}") {
        out = sh script: "${gradleHome}/bin/gradle build", returnStdout: true
    }
}
All commands within { and } for a dir will execute with the specified directory as their working directory. This will overcome any problems that may exist with the cd shell built-in.
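To connect this back to the original goal of posting the output to Slack on failure, one possible sketch (untested; it assumes redirecting the Gradle output to a file is acceptable, and the file name gradle-build.log is arbitrary) keeps the log readable even when the build fails and sh would otherwise throw:
node {
    dir("/home/jenkins/projects/${component}") {
        // Capture the exit status rather than letting sh throw, and keep the
        // build output in a file so it can be read even after a failure.
        def status = sh(script: "${gradleHome}/bin/gradle build > gradle-build.log 2>&1", returnStatus: true)
        def out = readFile 'gradle-build.log'
        if (status != 0) {
            slackSend(channel: '#my_channel', color: '#FF0000',
                      message: "Build failed due to compilation error in ${component}\n${out}")
            error "Gradle build failed in ${component}"
        }
    }
}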