Console output in a Jenkins pipeline - shell

I have created a complex pipeline. In each stage I call a job. I want to see the console output for each job in a stage in Jenkins. How can I get it?

The object returned from a build step can be used to query the log like this:
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                echo 'Building anotherJob and getting the log'
                script {
                    def bRun = build 'anotherJob'
                    echo 'last 100 lines of BuildB'
                    for (String line : bRun.getRawBuild().getLog(100)) {
                        echo line
                    }
                }
            }
        }
    }
}
The object returned from the build step is a RunWrapper object. The getRawBuild() call returns a Run object, and from the looks of that class there may be better options than reading the log line by line. For this to work you need to either disable the pipeline sandbox or get script approvals for these methods:
method hudson.model.Run getLog int
method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild
If you are doing this for many builds, it would be worth putting the code in a pipeline shared library, or defining a function in the pipeline.
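For example, a minimal helper defined directly in the Jenkinsfile might look like the sketch below (the name echoDownstreamLog and the default line count are just illustrative, and it needs the same script approvals listed above):
def echoDownstreamLog(String jobName, int lineCount = 100) {
    // Trigger the downstream job and keep the RunWrapper it returns
    def run = build job: jobName
    echo "Last ${lineCount} lines of ${jobName}:"
    // Requires approvals for RunWrapper.getRawBuild and Run.getLog
    for (String line : run.getRawBuild().getLog(lineCount)) {
        echo line
    }
    return run
}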

Related

Keep workspace when switching stages in combination with agent none

I have a Jenkins pipeline where I want to first build my project (Stage A) and trigger an asynchronous, long-running external test process with the built artifacts. The external test process then resumes the job using a callback. Afterwards (Stage B) some validations of the test results are performed and attached to the job. I don't want to block an executor while the external test process is running, so I came up with the following Jenkinsfile, which mostly suits my needs:
#!/usr/bin/env groovy
pipeline {
    agent none
    stages {
        stage('Stage A') {
            agent { docker { image 'my-maven:0.0.17' } }
            steps {
                script {
                    sh "rm testfile.txt"
                    sh "echo ABCD > testfile.txt"
                    sh "cat testfile.txt"
                }
            }
        }
        stage('ContinueJob') {
            agent none
            input { message "The job will continue once the asynchronous operation has finished" }
            steps { echo "Job has been continued" }
        }
        stage('Stage B') {
            agent { docker { image 'my-maven:0.0.17' } }
            steps {
                script {
                    sh "cat testfile.txt"
                    def data = readFile(file: 'testfile.txt')
                    if (!data.contains("ABCD")) {
                        error("ABCD not found in testfile.txt")
                    }
                }
            }
        }
    }
}
However, depending on the load on Jenkins, the time passed, or some other unknown conditions, sometimes the files that I create in "Stage A" are no longer available in "Stage B". It seems that Jenkins switches to a different Docker node, which causes the loss of workspace data; e.g. in the logs I can see:
[Pipeline] { (Stage A)
[Pipeline] node
Running on Docker3 in /var/opt/jenkins/workspace/TestJob
.....
[Pipeline] stage
[Pipeline] { (Stage B)
[Pipeline] node
Running on Docker2 in /var/opt/jenkins/workspace/TestJob
Whereas with a successful run, it keeps using e.g. node "Docker2" for both stages.
Note that I have also tried reuseNode true within the two docker sections but that didn't help either.
How can I tell Jenkins to keep my workspace files available?
As pointed out in the comment from @Patrice M., if the files are not that big (which is the case for me), stash/unstash are very useful for solving this problem. I have been using this combination for a couple of months now and it has solved my issue.
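For reference, here is a minimal sketch of that approach applied to the pipeline above (the stash name 'test-data' is just illustrative): the files are stashed at the end of Stage A and unstashed at the start of Stage B, so they no longer depend on which node or workspace each stage gets:
pipeline {
    agent none
    stages {
        stage('Stage A') {
            agent { docker { image 'my-maven:0.0.17' } }
            steps {
                sh "echo ABCD > testfile.txt"
                // Save the small result files on the controller so later stages
                // do not depend on this node's workspace still being around
                stash name: 'test-data', includes: 'testfile.txt'
            }
        }
        stage('ContinueJob') {
            agent none
            input { message "The job will continue once the asynchronous operation has finished" }
            steps { echo "Job has been continued" }
        }
        stage('Stage B') {
            agent { docker { image 'my-maven:0.0.17' } }
            steps {
                // Restore the stashed files into whatever workspace this stage got
                unstash 'test-data'
                sh "cat testfile.txt"
            }
        }
    }
}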

Parallel execution of 'mvn test' in Jenkins

I am trying to create a Jenkinsfile for parallel execution of the command mvn test with different arguments. In the first stage of the Jenkinsfile I create a *.csv file containing the future arguments for the mvn test command. I also don't know the number of parallel stages in advance (it depends on the first stage, where I get data from the DB). So, to summarize the logic:
The first stage gets data from the DB via mvn test (with args). During this test I save the data into a csv file.
In a loop in the Jenkinsfile I read every line, parse it, and get the args for parallel execution of mvn test (with args based on the parsed data).
Right now it looks like this (only the necessary fragments of the Jenkinsfile):
def buildProject = { a, b, c ->
    node {
        stage(a) {
            catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                sh "mvn -Dtest=test2 test -Darg1=${b} -Darg2=${c}"
            }
        }
    }
}
stages {
    stage('Preparation of file.csv') {
        steps {
            sh 'mvn -Dtest=test1 test'
        }
    }
    stage('Parallel stage') {
        steps {
            script {
                file = readFile "file.csv"
                lines = file.readLines()
                def branches = [:]
                for (i = 0; i < lines.size(); i++) {
                    values = lines[i].split(';')
                    branches["${values[0]}"] = { buildProject(values[0], values[1], values[2]) }
                }
                parallel branches
            }
        }
    }
}
So, which problems am I facing now?
In the log I see the following error:
[ERROR] The goal you specified requires a project to execute but there is no POM in this directory (/Data/jenkins/workspace//#2)
Looking at the Jenkins workspaces, I see that several empty(!) directories were created (as many as there are parallel stages). The mvn command therefore could not be executed, because pom.xml and the other files are absent.
In branches the same data are saved on every iteration of the loop, and in stage(a) I see the same title every time (even though every iteration of the loop has a unique values[0]).
Can you help me with this issue?
Thank you in advance!
So, regarding the Jenkins issue issues.jenkins.io/browse/JENKINS-50307 and the workaround that can be found there, this task can be closed!
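For the second problem (every branch seeing the same values), the usual fix is to copy the loop data into variables declared with def inside the loop before creating the closure, so each closure captures its own copy rather than the shared loop variables. A sketch of the parallel stage with that change:
stage('Parallel stage') {
    steps {
        script {
            def lines = readFile('file.csv').readLines()
            def branches = [:]
            for (int i = 0; i < lines.size(); i++) {
                // 'def' makes these local to this iteration, so each closure
                // captures its own copy instead of the shared loop variables
                def values = lines[i].split(';')
                def name = values[0]
                def arg1 = values[1]
                def arg2 = values[2]
                branches[name] = { buildProject(name, arg1, arg2) }
            }
            parallel branches
        }
    }
}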

Fail Gradle build when anything on standard error

How can I configure Gradle to fail the build at the end (not fail fast) if anything is printed to standard error by any task or plugin?
I haven't found a way to do it in the official API.
Here’s a sample build.gradle that shows how this can work:
// create a listener which collects stderr output:
def errMsgs = []
StandardOutputListener errListener = { errMsgs << it }

// add the listener to both the project *and* all tasks:
project.logging.addStandardErrorListener errListener
project.tasks.all { it.logging.addStandardErrorListener errListener }

// evaluate the collected stderr output at the end of the build:
gradle.buildFinished {
    if (errMsgs) {
        // (or fail in whatever other way makes sense for you)
        throw new RuntimeException(errMsgs.toString())
    }
}

// example showing that the project-level capturing of stderr logs works:
if (project.hasProperty('projErr'))
    System.err.print('proj stderr msg')

// example showing that the task-level capturing of stderr logs works:
task foo {
    doLast {
        System.err.print('task stderr msg')
    }
}

// example showing that stdout logs are not captured:
task bar {
    doLast {
        System.out.print('task stdout msg')
    }
}
The examples in the second half are only there to show that it works as expected. Try the build with various command line args/options:
# doesn’t fail:
./gradlew bar
# fails due to project error:
./gradlew -PprojErr bar
# fails due to task error:
./gradlew foo
# fails due to both task and project error:
./gradlew -PprojErr foo

Jenkins pipeline function fails with no error message

I am using a Jenkins pipeline script to run product tests on our machines. The parent of all tests looks like this:
node('nightly-master') {
    stage 'run'
    println PRODUCTS
    oliTest('win7.nightly.test', 'checkAndWaitForInstalledProduct.py', 'esxi', 'opsi-local-image-prepare', 'opsi-local-image-win7', PRODUCTS)
}
PRODUCTS is a textbox variable, entered at the start of the build. The function oliTest() is this:
def call(SERVERID, CHECKSCRIPT, VIRTUALIZATION, OLIPREPARE, OLINETBOOT, PRODUCTS) {
    try {
        timeout(time: 5, unit: 'HOURS') {
            println SERVERID
            println CHECKSCRIPT
            println VIRTUALIZATION
            println OLIPREPARE
            println OLINETBOOT
            println PRODUCTS
            //oliPrepare(SERVERID, CHECKSCRIPT, VIRTUALIZATION, OLIPREPARE, OLINETBOOT)
            oliProd(SERVERID, CHECKSCRIPT, VIRTUALIZATION, PRODUCTS)
            oliBackup(SERVERID, CHECKSCRIPT, VIRTUALIZATION)
            oliRestore(SERVERID, CHECKSCRIPT, VIRTUALIZATION)
        }
    } catch(error) {
        sh "fab -f /home/adminuser/scripts/${VIRTUALIZATION}fab.py powerOffVm:vmName=${SERVERID}"
        sh 'return 1'
    }
}
The println values are printed correctly into the Jenkins log. As soon as the function oliProd() is called, the test fails without any error message at the for loop in the following block:
def call(SERVERID, CHECKSCRIPT, VIRTUALIZATION, PRODUCTS) {
    stage 'install Products'
    println SERVERID
    println CHECKSCRIPT
    println VIRTUALIZATION
    println PRODUCTS
    sh " echo ${PRODUCTS}"
    sh "echo ${SERVERID}"
    sh "for i in ${PRODUCTS}; do opsi-admin -d method setProductActionRequestWithDependencies $i ${SERVERID} setup;done"
}
Writing it multi-line with '''COMMAND''' exits with an error, because ${SERVERID} is not expanded and is left empty.
Any suggestions on how to make things work?
Cheers
Note that you could have used triple double-quotes instead of triple single-quotes; that would have fixed that particular problem.
However, you really should be doing the iteration in the script code itself, instead of trying to do it in the shell.
Jon S suggested resolving script methods such as echo against a reference to the pipeline object, as in oliTest(this, ...), where oliTest declares a Script parameter and passes it along to other methods/instances so that echo can be resolved as scriptObj.echo.
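As suggested above, the iteration can instead be done in Groovy. A sketch of the oliProd() call method with the loop moved into the script (assuming PRODUCTS is a whitespace-separated list, as the original shell for loop implies):
def call(SERVERID, CHECKSCRIPT, VIRTUALIZATION, PRODUCTS) {
    stage 'install Products'
    // Split the textbox parameter in Groovy instead of looping in the shell,
    // so every value in the sh string is plainly interpolated by Groovy
    for (String product : PRODUCTS.split(/\s+/)) {
        sh "opsi-admin -d method setProductActionRequestWithDependencies ${product} ${SERVERID} setup"
    }
}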

Jenkins: java.io.NotSerializableException: groovy.util.slurpersupport.NodeChild

I have code that reads in a pom.xml file then attempts to re-serialize and write it back out:
// Get the file raw text
def pomXMLText = readFile(pomFile)
// Parse the pom.xml file
def project = new XmlSlurper(false, false).parseText(pomXMLText)
... do some useful stuff ...
def pomFileOut = "$WORKSPACE/pomtest.xml"
def pomXMLTextOut = groovy.xml.XmlUtil.serialize(project)
println "pomXMLTextOut = $pomXMLTextOut" // <-- This line prints to updated XML
writeFile file: pomFileOut, text: pomXMLTextOut // <-- This line crashes with the error listed in the posting title: java.io.NotSerializableException: groovy.util.slurpersupport.NodeChild
I've tried casting the pomXMLTextOut variable to a String. I tried applying the .text() method, which gets a jenkins sandbox security error. Has anyone else been able to successfully write an XML file from a groovy script running in a Jenkins pipeline?
BTW, I've also tried using a File object, but that isn't remotable across jenkins nodes. It works as long as the job always runs on master.
You could try a @NonCPS annotation and enclose those non-serializable objects in a function like this:
@NonCPS
def writeToFile(String text) {
    ...
}
Here's the explanation from the Pipeline Groovy plugin:
@NonCPS methods may safely use non-Serializable objects as local variables
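Applied to the example above, a minimal sketch would keep the XmlSlurper result inside a @NonCPS method that only returns a plain String, so the non-serializable NodeChild objects never cross a CPS step boundary (the method name transformPom is just illustrative):
@NonCPS
String transformPom(String pomXMLText) {
    // NodeChild and friends stay local to this method and are never
    // stored in pipeline state that Jenkins would need to serialize
    def project = new XmlSlurper(false, false).parseText(pomXMLText)
    // ... do some useful stuff ...
    return groovy.xml.XmlUtil.serialize(project)
}

def pomXMLTextOut = transformPom(readFile(pomFile))
writeFile file: "$WORKSPACE/pomtest.xml", text: pomXMLTextOut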
