How to use the console log parser in a Jenkins pipeline project

I am new to Jenkins and have built a pipeline project. In one stage I build a Docker image, and in the next stage I run container-structure-test against that image. The test results can be viewed in the console output.
What I want is a link on the build summary page from which I can view the test results directly, without going through the complete console output. Since these are not JUnit test cases, I could not find any out-of-the-box Jenkins plugin for this.
I came across the Log Parser plugin, but I'm not sure how to use it in a Jenkins declarative pipeline project. I see this option in freestyle projects under post-build actions, but no such option is available in pipeline projects.
Could someone suggest how I can use this plugin in pipeline builds to address my use case?

You can write the container's run log out to a file, then publish that file as a report (the publishHTML step below comes from the HTML Publisher plugin).
stage('Test') {
    steps {
        script {
            out = sh(returnStdout: true,
                script: '''
                    docker run ......
                '''
            )
            writeFile text: out, file: 'test.log'
            publishHTML([
                allowMissing: true, alwaysLinkToLastBuild: false,
                includes: 'test.log', keepAll: false,
                reportDir: '.', reportFiles: 'test.log',
                reportName: 'HTML Report'
            ])
        }
    }
}
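If you also want the test output to stay visible in the console log, a variant is to tee the output inside the shell step. This is a sketch, assuming bash is available on the agent; the docker command stays elided as in the original:
stage('Test') {
    steps {
        // The shebang forces bash so that pipefail is available, keeping
        // the step failing when docker fails even though output is piped.
        sh '''#!/bin/bash
            set -o pipefail
            docker run ...... 2>&1 | tee test.log
        '''
        // test.log can then be published with publishHTML exactly as above.
    }
}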

Here is the official documentation describing the step you should use:
https://www.jenkins.io/doc/pipeline/steps/log-parser/
You can use it anywhere, but you would probably want it at the end of the pipeline, in the post section.
Example:
pipeline {
    // some stages here
    post {
        always {
            logParser([
                projectRulePath: 'path/to/rules/file/on/the/node',
                parsingRulesPath: '',
                showGraphs: true,
                unstableOnWarning: true,
                useProjectRule: true
            ])
        }
    }
}
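The rules file referenced by projectRulePath maps regular expressions to log levels. A minimal sketch, assuming the plugin's level /regex/ rule syntax; the patterns themselves are made up for illustration:
# format: one rule per line, a level followed by a regular expression
error /(?i)\bFAIL/
warning /(?i)\bWARN/
info /=== RUN/
With unstableOnWarning: true above, matched warnings mark the build unstable, and the plugin's report page links directly to the matched lines, which addresses the original use case.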

Related

Get Artifactory URL from Jenkinsfile and email it

I'm using a Jenkinsfile to upload some artifacts to Artifactory. Once this is complete, I want to be able to send an email with the download link for the artifacts. Currently, the best I can find is to send the directory where the file is and then navigate to it. Is there a way to capture the full download URL without having to go into the build log or having to find it to download it? I've included my Jenkinsfile stage below.
stage('Artifactory') {
    when {
        anyOf {
            branch 'UploadBranch'
        }
    }
    steps {
        rtUpload (
            serverId: 'Artifactory_Server',
            spec: '''{
                "files": [
                    {
                        "pattern": "path/to/file/*",
                        "target": "Project/${BUILD_TIMESTAMP}/folder1/"
                    },
                    {
                        "pattern": "path/to/other/file/*",
                        "target": "Project/${BUILD_TIMESTAMP}/folder2/"
                    }
                ]}
            '''
        )
    }
    post {
        always {
            emailext attachLog: true, body: '''A new file has been uploaded into Artifactory.
Please find link below:
https://fake.artifactory.com/Project/${BUILD_TIMESTAMP}''', subject: 'New file in Artifactory', recipientProviders: [[$class: 'DevelopersRecipientProvider']]
            cleanWs()
        }
    }
}
As you can see from the snippet above, I won't know the file name or which folder it will end up in ahead of time, so I'd need a way of capturing the upload log. Is this even something I can do in a Jenkinsfile?
I have a similar job set up, and it has been working fine so far. Once you have uploaded the artifact, follow these steps (sketched below):
1. Run an Artifactory search for the recently uploaded item in the particular location, e.g. in your case https://fake.artifactory.com/Project/${BUILD_TIMESTAMP}/folder2/
2. Save the output in a file or an environment variable.
3. Use the value of that environment variable in your message.
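A minimal sketch of those steps, assuming Artifactory's File List REST endpoint (GET /api/storage/<repo>/<folder>?list), the Pipeline Utility Steps plugin for readJSON, and a hypothetical credentials ID artifactory-creds; the host and repo layout are taken from the question:
stage('Find upload URL') {
    steps {
        script {
            withCredentials([usernamePassword(credentialsId: 'artifactory-creds',
                                              usernameVariable: 'ART_USER',
                                              passwordVariable: 'ART_PW')]) {
                // List every file under the timestamped folder.
                def json = sh(returnStdout: true, script:
                    'curl -s -u "$ART_USER:$ART_PW" "https://fake.artifactory.com/api/storage/Project/$BUILD_TIMESTAMP?list&deep=1"'
                ).trim()
                // Take the first file's relative uri and build its download link.
                def files = readJSON(text: json).files
                env.DOWNLOAD_URL = "https://fake.artifactory.com/Project/${env.BUILD_TIMESTAMP}${files[0].uri}"
            }
        }
    }
}
The emailext body can then reference ${DOWNLOAD_URL} instead of the bare folder URL.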

Jenkins Pipeline script from SCM shares Perforce workspace with Sync inside of script (different stream/depot)

I am looking for help with our Jenkins Pipeline setup. I had a Jenkins pipeline job working just fine, where the Groovy script was checked out from a Perforce stream (in the stage "Declarative: Checkout SCM") and then run. The script itself performs, at its core, a p4 sync and a p4 reconcile.
pipeline {
    agent {
        node {
            customWorkspace "workspaces/MY_WORKSPACE"
        }
    }
    stages {
        stage('Sync') {
            steps {
                script {
                    p4sync(
                        charset: 'none',
                        credential: '1',
                        format: "jenkins-${NODE_NAME}-MY_WORKSPACE",
                        populate: syncOnly(force: false, have: true, modtime: false, parallel: [enable: false, minbytes: '1024', minfiles: '1', threads: '4'], pin: '', quiet: true, revert: true),
                        source: streamSource('//depot/STREAM')
                    )
                }
            }
        }
        stage('Reconcile') {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: '1', passwordVariable: 'SVC_USER_PW', usernameVariable: 'SVC_USER_NAME')]) {
                        bat label: 'P4 reconcile', script:
                            """
                            p4 -c "%P4_CLIENT%" -p "%P4_PORT%" -u ${SVC_USER_NAME} -P ${SVC_USER_PW} -s reconcile -e -a -d -f "//depot/STREAM/some/folder/location/*.file"
                            """
                    }
                }
            }
        }
    }
}
Due to an external requirement, we decided to move all our pipeline script files to a separate depot on the same Perforce server and changed the pipeline script checkout accordingly.
Now, the pipeline script checkout step ("Declarative: Checkout SCM") will create a new workspace called jenkins-NODE_NAME-buildsystems (for the pipeline script depot //buildsystems) which will use the same local workspace root directory D:\some\path\workspaces\MY_WORKSPACE on the build node as the actual workspace jenkins-NODE_NAME-MY_WORKSPACE, created and synced in the first pipeline step by p4sync. This means that Perforce creates two workspaces with the same local workspace root directory (which can cause all sorts of problems in itself). In addition, in the pipeline script, the P4 environment variable P4_CLIENT points to the wrong workspace jenkins-NODE_NAME-buildsystems (so the reconcile won't work), which should only have been used by the pipeline script checkout, not by the pipeline itself.
Which brings me to my question. How can I separate the workspaces of the pipeline script checkout and of the p4sync in the pipeline script? In the pipeline I can specify a customWorkspace, but not in the Jenkins configuration for the pipeline script checkout, and the latter weirdly seems to follow that customWorkspace statement, maybe because jenkins-NODE_NAME-MY_WORKSPACE had already been opened by Perforce on the node...?
Any hints are much appreciated.
Thanks,
Stefan

Jenkins Pipeline emailext: How to access build object in pre-send script

I'm using Jenkins ver. 2.150.1 and have some freestyle jobs and some pipeline jobs.
In both job types I am using the emailext plugin, with template and pre-send scripts.
It seems that the build variable, which is available in the freestyle projects, is null in the pipeline projects.
The pre-send script is the following (just an example, my script is more complex):
msg.setSubject(msg.getSubject() + " [" + build.getUrl() + "]")
There is no problem with the msg variable.
In the freestyle job, this script adds the build url to the mail subject.
In the pipeline job, the following is given in the job console:
java.lang.NullPointerException: Cannot invoke method getUrl() on null object
The invocation of emailext in the pipeline job is:
emailext body: '${SCRIPT, template="groovy-html.custom.pipeline.sandbox.template"}',
    presendScript: '${SCRIPT, template="presend.sandbox.groovy"}',
    subject: '$DEFAULT_SUBJECT',
    to: 'user@domain.com'
I would rather find a general solution to this problem (i.e. access the build variable in a pipeline pre-send script), but would also appreciate any workaround for my current needs:
Access job name, job number, and workspace folder in a pipeline pre-send script.
I have finally found the answer.
Apparently, for the pre-send script in pipeline jobs, the build object does not exist; the run object exists instead. At the time I posted this question, this was still undocumented!
I found the answer in this thread, which got the author to update the description in the wiki:
run - the build this message belongs to (may be used with FreeStyle or Pipeline jobs)
build - the build this message belongs to (only use with FreeStyle jobs)
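Applied to the pre-send script from the question, that means swapping build for run; nothing else changes:
msg.setSubject(msg.getSubject() + " [" + run.getUrl() + "]")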
You can access the build in a script like this:
// findUrl.groovy
def call(script) {
    println script.currentBuild.rawBuild.url
    // or if you just need the build url
    println script.env.BUILD_URL
}
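(Assuming this uses the shared-library mechanism: as a global variable, findUrl.groovy would live in the vars/ directory of a Jenkins shared library.)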
and would call the script like this from the pipeline:
stage('Get build URL') {
    steps {
        findUrl this
    }
}
The currentBuild gives you a RunWrapper object and the rawBuild a Run. Hope this helps.
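For the workaround items listed in the question, the same run object exposes the job name and build number through the standard Run API; a sketch for the pre-send script (the workspace folder is not directly reachable from a Run in Pipeline jobs):
// run is a hudson.model.Run in Pipeline pre-send scripts
def jobName = run.getParent().getFullName()   // e.g. 'folder/my-pipeline'
def buildNumber = run.getNumber()
msg.setSubject(msg.getSubject() + ' ' + jobName + ' #' + buildNumber)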

How to have full CI/CD in OpenShift?

I know it's possible to have that when using Jenkins inside of OpenShift, but when using pure build images, full CI/CD seems to be missing.
Our perfect scenario for each push to 'master' branch would be:
build app
run unit tests
notify team if failed to build
deploy image
notify if failed to start
A simple OpenShift build setup covers only the build and deploy steps of that list.
Can we have full CI/CD inside of OpenShift, or should we do the checks outside?
Also, notifications on failures are still missing in OpenShift as far as I know.
Personally, I think you'd be better off using the OpenShift Pipeline Jenkins Plugin for your use case.
Your own CI/CD can be implemented in various ways, so this is just a sample; expect some trial and error while finding your own CI/CD configuration.
For example, here is a simple build-and-deploy description using the OpenShift Pipeline Jenkins Plugin; for more details, refer to the plugin documentation.
Notification of the job result is configured as described in the Jenkins documentation under "Cleaning up and notifications".
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    name: your-pipeline
  name: your-pipeline
spec:
  runPolicy: Serial
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        // Scripted pipeline: post { } is declarative-only syntax, so the
        // cleanup/notification steps are expressed with try/catch/finally here.
        node('') {
          try {
            stage('some unit tests') {
              sh 'git clone https://github.com/yourproject/yourrepo'
              sh 'python -m unittest tests/unittest_start_and_result_mailing.py'
            }
            stage('Build using your-buildconfig') {
              openshiftBuild(namespace: 'your-project', bldCfg: 'your-buildconfig', showBuildLogs: 'true')
            }
            stage('Deployment using your-deploymentconfig') {
              openshiftDeploy(namespace: 'your-project', depCfg: 'your-deploymentconfig')
            }
            stage('Verify Deployment status') {
              openshiftVerifyDeployment(namespace: 'your-project', depCfg: 'your-deploymentconfig', verifyReplicaCount: 'true')
            }
            echo 'I succeeded!'
          } catch (err) {
            echo 'I failed :('
            throw err
          } finally {
            echo 'One way or another, I have finished'
            deleteDir() /* clean up our workspace */
          }
        }
    type: JenkinsPipeline
  triggers:
  - github:
      secret: gitsecret
    type: GitHub
  - generic:
      secret: genericsecret
    type: Generic
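To actually fire these triggers, the Git host (or any HTTP client, for the generic trigger) posts to the BuildConfig's webhook URL. A sketch, assuming an OpenShift 3.x API server at a hypothetical address (the URL pattern differs on newer versions):
# generic webhook; the secret segment must match the BuildConfig
curl -X POST -k \
  https://openshift.example.com:8443/oapi/v1/namespaces/your-project/buildconfigs/your-pipeline/webhooks/genericsecret/generic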
I hope it helps.

Jenkins Pipeline: Enable timestamps in build log console

How can I display build timestamps for each line of a multi-branch pipeline project? Is it a supported feature? If yes, does it need to be enabled in the Jenkinsfile or is there a GUI option?
Add the timestamps() option to your declarative pipeline:
pipeline {
    agent any
    options { timestamps() }
    // stages and rest of pipeline.
}
Credit goes to the comment above on Jenkins Pipeline: Enable timestamps in build log console.
For a scripted pipeline, just wrap your script in timestamps { }.
E.g.:
timestamps {
    // do your job
}
Note: You must have the timestamper plugin installed: wiki.jenkins.io/display/JENKINS/Timestamper
I'm wondering why @roomsg's comment on the accepted answer didn't become an answer.
I just noticed that (at least in our setup) you can configure this globally: check "Enabled for all Pipeline builds" in the "Timestamper" section of the Jenkins configuration.
I think this is the best answer to the question, so if you have admin access you can set it for all pipeline jobs through the GUI.
