jenkins pipeline function errors with no error - bash

I am using a Jenkins pipeline script to run product tests on our machines.
The father of all tests looks like this:
node('nightly-master') {
    stage 'run'
    println PRODUCTS
    oliTest('win7.nightly.test', 'checkAndWaitForInstalledProduct.py', 'esxi', 'opsi-local-image-prepare', 'opsi-local-image-win7', PRODUCTS)
}
PRODUCTS is a textbox variable, entered at the build start.
The function oliTest() is this:
def call(SERVERID, CHECKSCRIPT, VIRTUALIZATION, OLIPREPARE, OLINETBOOT, PRODUCTS){
    try {
        timeout(time: 5, unit: 'HOURS') {
            println SERVERID
            println CHECKSCRIPT
            println VIRTUALIZATION
            println OLIPREPARE
            println OLINETBOOT
            println PRODUCTS
            //oliPrepare(SERVERID, CHECKSCRIPT, VIRTUALIZATION, OLIPREPARE, OLINETBOOT)
            oliProd(SERVERID, CHECKSCRIPT, VIRTUALIZATION, PRODUCTS)
            oliBackup(SERVERID, CHECKSCRIPT, VIRTUALIZATION)
            oliRestore(SERVERID, CHECKSCRIPT, VIRTUALIZATION)
        }
    } catch(error) {
        sh "fab -f /home/adminuser/scripts/${VIRTUALIZATION}fab.py powerOffVm:vmName=${SERVERID}"
        sh 'return 1'
    }
}
The println values are printed correctly to the Jenkins log.
As soon as the function oliProd() is called, the test fails without any error message at the for loop in the following block:
def call(SERVERID, CHECKSCRIPT, VIRTUALIZATION, PRODUCTS){
    stage 'install Products'
    println SERVERID
    println CHECKSCRIPT
    println VIRTUALIZATION
    println PRODUCTS
    sh "echo ${PRODUCTS}"
    sh "echo ${SERVERID}"
    sh "for i in ${PRODUCTS}; do opsi-admin -d method setProductActionRequestWithDependencies $i ${SERVERID} setup;done"
}
Writing it multi-line with '''COMMAND''' exits with an error, because ${SERVERID} is not expanded and is left empty.
Any suggestions on how to make this work?
Cheers

Note that you could have used triple double-quotes instead of triple single-quotes. That would have fixed that particular problem.
However, you really should be doing your iteration in the script code itself, instead of trying to do the iteration in the shell.
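A minimal sketch of what that could look like, assuming PRODUCTS is a space-separated string (the tokenize() split and the index-based loop are my choices here, not part of the original code):
def call(SERVERID, CHECKSCRIPT, VIRTUALIZATION, PRODUCTS) {
    stage 'install Products'
    // Split the space-separated PRODUCTS string in Groovy and run one
    // sh step per product instead of looping inside the shell.
    def products = PRODUCTS.tokenize()
    // Index-based loop: a safe form in CPS-transformed pipeline code.
    for (int i = 0; i < products.size(); i++) {
        sh "opsi-admin -d method setProductActionRequestWithDependencies ${products[i]} ${SERVERID} setup"
    }
}
This way the product list is handled in Groovy, and each sh call receives a fully expanded command line.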

println in the "call" method of "vars/foo.groovy" works, but not in a method of a class. Jon S suggested resolving script methods such as "echo" against a reference to the pipeline object, as in oliTest(this, ...): oliTest declares a script parameter and passes it along to other methods/instances, which then resolve echo as scriptObj.echo.

Related

Console Output in pipeline: Jenkins

I have created a complex pipeline. In each stage I call a job. I want to see the console output for each job in a stage in Jenkins. How do I get it?
The object returned from a build step can be used to query the log like this:
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                echo 'Building anotherJob and getting the log'
                script {
                    def bRun = build 'anotherJob'
                    echo 'last 100 lines of BuildB'
                    for (String line : bRun.getRawBuild().getLog(100)) {
                        echo line
                    }
                }
            }
        }
    }
}
The object returned from the build step is a RunWrapper object. The getRawBuild() call returns a Run object; from the looks of that class there may be better options than reading the log line by line. For this to work you need to either disable the pipeline sandbox or get script approvals for these methods:
method hudson.model.Run getLog int
method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild
If you are doing this for many builds, it would be worth putting some code in a pipeline shared library to do what you need, or defining a function in the pipeline.
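For example, a minimal sketch of such a function defined in the pipeline itself (the name printLastLines is made up here; it still needs the script approvals listed above, or a disabled sandbox):
def printLastLines(String jobName, int lineCount) {
    // trigger the downstream job and keep the RunWrapper it returns
    def run = build job: jobName
    echo "last ${lineCount} lines of ${jobName}"
    // getRawBuild()/getLog() are the whitelisted methods mentioned above
    for (String line : run.getRawBuild().getLog(lineCount)) {
        echo line
    }
    return run
}
It could then be called from a script block as printLastLines('anotherJob', 100), or moved into a shared library as-is.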

Gradle: need help to understand

I have this in build.gradle:
task hello {
    doLast {
        println 'Hello World!'
    }
}
task count {
    println "one"
    doLast {
        4.times { print "$it " }
    }
    println "two"
    doFirst {
        2.times { println "$it - 1 " }
    }
    3.times { println("$it -3") }
}
task intro(dependsOn: hello) {
    doLast {
        println("I'm Gradle!")
    }
}
I run this in the shell:
gradle intro
and get:
one
two
0 -3
1 -3
2 -3
:hello
Hello World!
:intro
I'm Gradle!
BUILD SUCCESSFUL
but it's not correct!!!
the correct output is
:hello
Hello World!
:intro
I'm Gradle!
BUILD SUCCESSFUL
What did I do wrong?
p.s. adding details because there is too much code here :(
Why do you think it's wrong? It is actually absolutely correct. This is all down to the configuration of the build; read about it in the official user guide.
A number of phases take place during the build. One of them is the configuration phase. All the output you don't expect to see is configuration output. Whatever you do directly in a task's closure is executed during the configuration of your build, unless you place it into a doLast or doFirst closure to run in the execution phase (or the task's closure is declared with <<, which is the same as doLast).
Note that configuration code is executed for all tasks, no matter whether they will be executed or not. That is the reason for your unexpected output: those statements run as part of your build configuration, even though they are declared within a task.
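For instance, the stray lines disappear if the configuration-time printlns in the count task are moved into its doFirst/doLast blocks (a sketch; with this change, gradle intro produces no output from count at all, because count is never executed):
task count {
    doFirst {
        println "one"                  // was configuration-time output
        2.times { println "$it - 1 " }
        println "two"                  // was configuration-time output
    }
    doLast {
        4.times { print "$it " }
        3.times { println "$it -3" }   // was configuration-time output
    }
}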

What is the meaning of << in gradle task definition

What is the difference between these two tasks? Only the task with << in its definition is shown in the output of ./gradlew tasks.
task greet(type: GreetingToFileTask) {
    destination = { project.greetingFile }
}
task sayGreeting(dependsOn: greet) << {
    println file(greetingFile).text
}
The lines above are from the Gradle documentation.
The << is a shortcut for the doLast part of a task definition, i.e. the following two declarations are equivalent:
task hello << {
    println 'Hello world!'
}
and:
task hello {
    doLast {
        println 'Hello world!'
    }
}
(example taken from the Gradle documentation).
Now, in the first code snippet you just define the task and configure its destination property; that closure is configuration code, while the task's own action only runs if the task is actually executed. In the second code snippet, << attaches an action to the task, which likewise only runs when the task is executed. Code placed in the configuration section of a task, by contrast, always runs during the configuration phase, regardless of which tasks were targeted for execution (quote from the documentation):
A task has both configuration and actions. When using the <<, you are simply using a shortcut to define an action. Code defined in the configuration section of your task will get executed during the configuration phase of the build regardless of what task was targeted.

Gradle executes all tasks?

I have a very simple build script like so
task hello {
    println("hello World")
}
task bye {
    println("bye")
}
On the command line I run gradle hello and I get the following output:
hello World
bye
:hello UP-TO-DATE
Why is it executing the task "bye" (I'm assuming it gets executed since "bye" gets printed)? Thanks.
It's a common pitfall:
task hello {
    println("Any code in here is about *configuring* the task. By default, all tasks always get configured.")
    doLast {
        println("Any code in here is about *executing* the task. This code only gets run if and when Gradle decides to execute the task.")
    }
}
The distinction between configuration phase and execution phase is probably the single most important concept to understand in Gradle. It can be confusing at first, and may go away in the future. A kind of analogue in the Ant/Maven world is that these tools first parse XML build scripts and build an object model (perhaps resolving some properties along the way), and only then execute the build.
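Applied to the build script above, that means wrapping the printlns in doLast, so that gradle hello prints only "hello World":
task hello {
    doLast {
        println("hello World")   // runs only when 'hello' is executed
    }
}
task bye {
    doLast {
        println("bye")           // no longer printed by 'gradle hello'
    }
}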
Adding to Peter's answer: if you want several tasks to be executed by default, you can specify the defaultTasks list.
defaultTasks 'clean', 'run'
task clean {
    doLast {
        println 'Default Cleaning!'
    }
}
task run {
    doLast {
        println 'Default Running!'
    }
}
task other {
    doLast {
        println "I'm not a default task!"
    }
}
Output of gradle -q:
> gradle -q
Default Cleaning!
Default Running!
More details can be found here
https://docs.gradle.org/current/userguide/tutorial_using_tasks.html

How to continue a Jenkins build even though a build step failed?

I am using a Phing build script with Jenkins and would like to run it end to end on a job and capture all the reports. The problem is it stops building on a failed build step. Is there a way or a plugin that would continue the job even on failures?
Thanks
I don't know a lot about Phing but, since it's based on Ant, if the build step you are executing has a "failonerror" attribute, you should be able to set it to false so that the entire build doesn't fail if the step returns an error.
Yes, use a try/catch block in your pipeline scripts.
example:
try {
    // do some stuff that potentially fails
} catch (error) {
    // do stuff if try fails
} finally {
    // when you need some clean up to do
}
Alternatively, if you use sh commands to run these tests, consider running your sh scripts with the "|| true" suffix; this tells the shell to exit with a result code of 0, even if your real command exited with a non-zero code.
example:
stage('Test') {
    def testScript = ""
    def testProjects = findFiles(glob: 'test/**/project.json')
    if (!fileExists('reports/xml')) {
        if (!fileExists('reports')) {
            sh "mkdir reports"
        }
        sh "mkdir reports/xml"
    }
    for (prj in testProjects) {
        println "Test project located, running tests: " + prj.path
        def matcher = prj.path =~ 'test\\/(.+)\\/project.json'
        testScript += "dotnet test --no-build '${prj.path}' -xml 'reports/xml/${matcher[0][1]}.Results.xml' || true\n"
    }
    sh testScript
}
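Another option, instead of appending || true inside the command string, is the sh step's returnStatus flag, which makes sh return the exit code rather than failing the build. A small sketch (the test command and project path are illustrative only):
stage('Check') {
    // returnStatus: true keeps the pipeline going; the exit code can
    // then be inspected, e.g. to mark the build UNSTABLE.
    def rc = sh(script: "dotnet test --no-build 'test/SomeProject/project.json'", returnStatus: true)
    if (rc != 0) {
        currentBuild.result = 'UNSTABLE'
    }
}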
