How can I display build timestamps for each line of a multi-branch pipeline project? Is it a supported feature? If yes, does it need to be enabled in the Jenkinsfile or is there a GUI option?
Adding options to the declarative pipeline
pipeline {
    agent any
    options { timestamps() }
    // stages and rest of pipeline.
}
Credit goes to the comment above Jenkins Pipeline: Enable timestamps in build log console
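If you only need timestamps for part of the build, the same option can also be scoped to a single stage. A minimal sketch (the stage name and shell step are just placeholders):
pipeline {
    agent any
    stages {
        stage('Build') {
            // timestamps only for this stage's log output
            options { timestamps() }
            steps {
                sh 'make'   // placeholder step
            }
        }
    }
}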
For a scripted pipeline, just wrap your script in timestamps { }, e.g.:
timestamps {
    // do your job
}
Note: You must have the timestamper plugin installed: wiki.jenkins.io/display/JENKINS/Timestamper
I'm wondering why @roomsg's comment on the accepted answer didn't become an answer.
I just noticed that (at least in our setup) you can configure this
globally: check the "Enabled for all Pipeline builds" in the
"Timestamper" section in Jenkins configuration
I think this is the best answer to the question. So, in case you have admin access, you can set it for all pipeline jobs through the GUI.
I am trying to use the below:
pipeline {
    agent { label 'slave1 || slave2' }
    stages {
    }
}
When I use the above format, the job executes on slave1. But when I reverse the order, i.e.
agent { label 'slave2 || slave1' }, it still executes on slave1.
Could you please clarify whether this is the expected behaviour? Isn't the label written first supposed to be given precedence?
This is a feature of Jenkins, not a bug. It tries to be consistent in choosing a slave, as this can potentially save some time. For example, on a slave that was used previously, the results of a checkout may still be in the workspace.
Since slave1 fits both the requirements of 'slave1 || slave2' and 'slave2 || slave1', Jenkins will use it. If it's unavailable or busy, some other slave will be used instead.
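If you really need the job to run on a specific node, the usual approach is to request exactly that label instead of relying on the order inside the expression. A minimal sketch (the label 'slave2' is just a placeholder):
pipeline {
    // a single label means Jenkins can only schedule the build on nodes carrying it
    agent { label 'slave2' }
    stages {
        stage('Build') {
            steps {
                echo "Running on ${env.NODE_NAME}"
            }
        }
    }
}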
From version 0.33.1 onwards, aws-sam-cli supports colored output. I'm trying to run the sam deploy command from a Jenkins pipeline, and the output is not displayed in color. I've installed the ANSIColor Jenkins plugin and wrapped the sam deploy command with ansiColor('xterm') {}. The command works as expected and the CloudFormation stack is created. The concern is that the output is not colored.
node {
    stage('Example') {
        ansiColor('xterm') {
            sh "sam deploy --parameter-overrides ${someparameter} --template-file ${templatefile} --stack-name ${stackname} --capabilities CAPABILITY_NAMED_IAM --no-fail-on-empty-changeset --no-execute-changeset"
        }
    }
}
To verify my Jenkins setup, I tried a test snippet in Jenkins and it displayed colored output.
ansiColor('xterm') {
    stage "\u001B[31mI'm Red\u001B[0m Now not"
}
So Jenkins is able to display ANSI color, but the aws-sam-cli output is not in colored format.
Any ideas or pointers would be helpful.
aws-sam-cli uses the click library to format its output, including color handling.
The documentation for click explains why you're seeing what you're seeing:
Starting with Click 2.0, the echo() function gained extra
functionality to deal with ANSI colors and styles. [...]
Primarily this means that:
Click's echo() function will automatically strip ANSI color codes if the stream is not connected to a terminal.
This is typical behavior for most programs; however, some programs allow overriding it, usually with a --color parameter.
In your case, I'd suggest asking for an enhancement on click's issue tracker.
Edit: There's already been one.
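As a workaround (not specific to sam, and untested here), you can sometimes convince click to keep its colors by running the command under a pseudo-terminal, e.g. with the util-linux script command. A sketch, with the template file and stack name as placeholders:
node {
    stage('Example') {
        ansiColor('xterm') {
            // 'script' allocates a pseudo-TTY, so click thinks it writes to a terminal and keeps its ANSI codes
            // -q: quiet, -e: return the child's exit code, -c: command to run
            sh "script -qec 'sam deploy --template-file template.yaml --stack-name my-stack' /dev/null"
        }
    }
}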
I have set up Jenkins Project Piper (https://sap.github.io/jenkins-library/). I then set up a basic SAP Cloud Application Programming Model app with integration for the SAP Cloud SDK pipeline using the default configuration, uncommented the 'productionDeployment' stage, and completed the Cloud Foundry endpoints/orgs/spaces etc. I have committed the application to the master branch of the git repo.
The pipeline executes successfully but is skipping the production deployment step.
Pipeline execution results
When checking the logs I see:
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Production Deployment)
Stage "Production Deployment" skipped due to when conditional
When I look at the script (https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/s4sdk-pipeline.groovy) I see:
stage('Production Deployment') {
    when { expression { commonPipelineEnvironment.configuration.runStage.PRODUCTION_DEPLOYMENT } }
    //milestone 80 is set in stageProductionDeployment
    steps { stageProductionDeployment script: this }
}
Can anyone explain what is required to pass the commonPipelineEnvironment.configuration.runStage.PRODUCTION_DEPLOYMENT check in order to execute the stageProductionDeployment script?
My pipeline_config.yml file (anonymized) is:
###
# This file configures the SAP Cloud SDK Continuous Delivery pipeline of your project.
# For a reference of the configuration concept and available options, please have a look into its documentation.
#
# The documentation for the most recent pipeline version can always be found at:
# https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/configuration.md
# If you are using a fixed version of the pipeline, please make sure to view the corresponding version from the tag
# list of GitHub (e.g. "v15" when you configured pipelineVersion = "v15" in the Jenkinsfile).
#
# For general information on how to get started with Continuous Delivery, visit:
# https://blogs.sap.com/2017/09/20/continuous-integration-and-delivery
#
# We aim to keep the pipeline configuration as stable as possible. However, major changes might also imply breaking
# changes in the configuration. Before doing an update, please check the release notes of all intermediate releases
# and adapt this file if necessary.
#
# This is a YAML-file. YAML is an indentation-sensitive file format. Please make sure to properly indent changes to it.
###
### General project setup
general:
  productiveBranch: 'master'

### Step-specific configuration
steps:
  setupCommonPipelineEnvironment:
    collectTelemetryData: true
  cloudFoundryDeploy:
    dockerImage: 'ppiper/cf-cli'
    smokeTestStatusCode: '200'
    cloudFoundry:
      org: 'XXXXXX'
      space: 'XXXXXX'
      appName: 'MTBookshopNode'
      manifest: 'mta.yaml'
      credentialsId: 'CF_CREDENTIALSID'
      apiEndpoint: 'https://api.cf.XX10.hana.ondemand.com'

### Stage-specific configuration
stages:
  # This exclude is required for the example project to be successful in the pipeline
  # Remove it when you have added your first test
  s4SdkQualityChecks:
    jacocoExcludes:
      - '**/OrdersService.class'
  # integrationTests:
  #   credentials:
  #     - alias: 'mySystemAlias'
  #       credentialId: 'mySystemCredentialsId'
  # s4SdkQualityChecks:
  #   nonErpDestinations:
  #     - 'myCustomDestination'
  productionDeployment:
    cfTargets:
      - org: 'XXXXXX'
        space: 'XXXXXX'
        apiEndpoint: 'https://api.cf.XX10.hana.ondemand.com'
        appName: 'myAppName'
        manifest: 'mta.yaml'
        credentialsId: 'CF_CREDENTIALSID'
My Jenkinsfile is unchanged:
#!/usr/bin/env groovy
/*
* This file bootstraps the codified Continuous Delivery pipeline for extensions of SAP solutions, such as SAP S/4HANA.
* The pipeline helps you to deliver software changes quickly and in a reliable manner.
* A suitable Jenkins instance is required to run the pipeline.
* The Jenkins can easily be bootstrapped using the life-cycle script located inside the 'cx-server' directory.
*
* More information on getting started with Continuous Delivery can be found in the following places:
* - GitHub repository: https://github.com/SAP/cloud-s4-sdk-pipeline
* - Blog Post: https://blogs.sap.com/2017/09/20/continuous-integration-and-delivery
*/
/*
* Set pipelineVersion to a fixed released version (e.g. "v15") when running in a productive environment.
* To find out about available versions and release notes, visit: https://github.com/SAP/cloud-s4-sdk-pipeline/releases
*/
String pipelineVersion = "master"
node {
    deleteDir()
    sh "git clone --depth 1 https://github.com/SAP/cloud-s4-sdk-pipeline.git -b ${pipelineVersion} pipelines"
    load './pipelines/s4sdk-pipeline.groovy'
}
Any ideas what I am missing for a production deployment, and how I can get past this check in the script?
Regards
Neil
The pipeline was built for multi-branch pipelines and will not work correctly in a single-branch pipeline job. There is no problem with running a project that has a single branch in a multi-branch pipeline job. To avoid confusion, we added a check to the pipeline in a recent version, as documented here: https://blogs.sap.com/2019/11/21/new-versions-of-sap-cloud-sdk-3.8.0-for-java-1.13.1-for-javascript-and-v26-of-continuous-delivery-toolkit/#cd-toolkit
Kind regards
Florian
I'm using Jenkins ver. 2.150.1 and have some freestyle jobs and some pipeline jobs.
In both job types I am using the emailext plugin, with template and pre-send scripts.
It seems that the build variable, which is available in the freestyle projects, is null in the pipeline projects.
The pre-send script is the following (just an example, my script is more complex):
msg.setSubject(msg.getSubject() + " [" + build.getUrl() + "]")
There is no problem with the msg variable.
In the freestyle job, this script adds the build url to the mail subject.
In the pipeline job, the following is given in the job console:
java.lang.NullPointerException: Cannot invoke method getUrl() on null object
The invocation of emailext in the pipeline job is:
emailext body: '${SCRIPT, template="groovy-html.custom.pipeline.sandbox.template"}',
         presendScript: '${SCRIPT, template="presend.sandbox.groovy"}',
         subject: '$DEFAULT_SUBJECT',
         to: 'user@domain.com'
I would rather find a general solution to this problem (i.e. Access the build variable in a pipeline pre-send script), but would also appreciate any workarounds to my current needs:
Access job name, job number, and workspace folder in a pipeline pre-send script.
I have finally found the answer.
Apparently, for the pre-send script in pipeline jobs, the build object does not exist; instead, the run object does. At the time I posted this question, this was still undocumented!
Found the answer in this thread
Which got the author to update the description in the wiki:
run - the build this message belongs to (may be used with FreeStyle or Pipeline jobs)
build - the build this message belongs to (only use with FreeStyle jobs)
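So the pipeline-safe version of the pre-send script from the question simply uses run instead of build (a sketch based on that documentation; Run.getUrl() returns the URL relative to the Jenkins root):
// presend.sandbox.groovy - in Pipeline jobs 'build' is null, but 'run' is available
msg.setSubject(msg.getSubject() + " [" + run.getUrl() + "]")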
You can access the build in a script like this:
// findUrl.groovy
def call(script) {
    println script.currentBuild.rawBuild.url
    // or if you just need the build url
    println script.env.BUILD_URL
}
and would call the script like this from the pipeline:
stage('Get build URL') {
    steps {
        findUrl this
    }
}
The currentBuild gives you a RunWrapper object and the rawBuild a Run. Hope this helps.
I have been trying to resolve this issue, searching forums etc. and experimenting myself, without success.
We have a Jenkins job where we use the Release Plugin (with a standard configuration).
In the job we then have "Perform Maven Release" on the left side to generate a version (tag, change poms, etc.). This works perfectly.
We want to send an email to the team when the release has been done.
I tried the environment variable that the release plugin sets (IS_M2RELEASEBUILD by default) and combined it with the email-ext plugin, where I can attach a groovy script (advanced => trigger => script trigger).
I tried a lot of scripts to trigger the email, and none worked; my last attempt was:
def env = System.getenv()
env['IS_M2RELEASEBUILD'] == 'true'
but when I perform the release, the email is not sent (so this script evaluates the condition to false, or something like that).
Anyone has this setup in his Jenkins?
Thanks a lot!
You need to use "Editable Email Notification" as a "Post-build Action" and paste
def env = build.getEnvironment();
String isRelease = env['IS_M2RELEASEBUILD'];
logger.println "IS_M2RELEASEBUILD=" + isRelease;
if (isRelease == null || isRelease.equals('false')) {
    // not a release build, so suppress sending the email
    logger.println "cancel=true;";
    cancel = true;
}
as the Pre-send Script, fill in your e-mail address(es) in "Project Recipient List", and add a "Success" trigger.
(The precondition is that you have not changed the default "Release environment variable" in "Maven release build".)
https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin
This plugin allows you to configure every aspect of email notifications. You can customize when an email is sent, who should receive it, and what the email says.
This is not an answer, just a suggestion (I can't add comments). Have you tried echoing that environment variable in a post-build and pre-build step?
Have you tried having another job run when the release build completes successfully and having that job send the email, perhaps by running a shell script?