Jenkins Scripted Pipeline: sshCommand execution statusCode - jenkins-pipeline

In one stage of my Jenkins scripted pipeline I am running a bash script on a remote machine. I tried a few approaches, but none of them meets the requirement:
- sshScript doesn't support passing arguments to the command that runs on the remote server, which I need.
- The Publish Over SSH plugin displays my execCommand in the Jenkins logs, which I don't want either.
So I use sshPut to put my bash script on the remote server and sshCommand with arguments to run it there. All good, except that when there are errors I need to exit and do some other things. What happens is that if there is an error, the Jenkins job exits with an exception. This can be overridden by setting failOnError: false on sshCommand, but then the job will never fail at all.
What I need is: if sshCommand exits with an error, do something like send a Slack notification. So is there anything like a statusCode or exit value that I can compare against != 0 and then run some function?
I am thinking of something like:
stage('Deploy'){
    // some blocks here
    sshCommand remote: remote, failOnError: false, command: "bash Filescript.sh $ARGS1"
    if (statusCode != 0){
        //do my thing here
    }
}

You can use try-catch:
stage('Deploy'){
    // some blocks here
    try {
        sshCommand remote: remote, command: "bash Filescript.sh $ARGS1"
    }
    catch (err) {
        //do my thing here
    }
}
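If you also want the job to still be marked as failed after handling the error, rethrow the exception from the catch block. A sketch, assuming the Slack Notification plugin's slackSend step is available (the channel name is illustrative):
stage('Deploy'){
    // some blocks here
    try {
        sshCommand remote: remote, command: "bash Filescript.sh $ARGS1"
    }
    catch (err) {
        // notify first, then rethrow so the build still fails
        slackSend channel: '#deploys', color: 'danger', message: "Deploy failed: ${err.message}"
        throw err
    }
}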

Related

How to suppress output of 'rm -rf' in Jenkins shell script?

I have a big Jenkins pipeline, and when a build runs, a lot of console output is generated, which causes a space issue on the Jenkins master.
I have the following code in the Jenkins pipeline with a shell script, which logs every file being removed. I have lots of log files, so this produces a lot of console output:
stage('Logs Cleanup') {
    steps {
        script {
            sh '''rm -rf /home/oracle/test/logs1/* /home/oracle/test/logs2/*'''
        }
    }
}
Is there any way I can suppress output of that command?
NOTE: If the same command is run from a terminal, it logs nothing.
For your specific delete using Jenkins only:
stage('Logs Cleanup') {
    steps {
        dir('/home/oracle/test/logs1/') {
            deleteDir()
        }
        dir('/home/oracle/test/logs2/') {
            deleteDir()
        }
    }
}
Looks like there are some comments that solve the problem for you, but no one mentioned that you can control the output of sh commands through the step itself.
The sh step has some optional parameters; one of them is returnStdout. When it is set to true, standard output is returned as the step's value instead of being printed to the build log, so you can suppress the console output like this:
stage('Logs Cleanup') {
    steps {
        script {
            sh script: 'rm -rf /home/oracle/test/logs1/* /home/oracle/test/logs2/*', returnStdout: true
        }
    }
}
There are other useful parameters as well; for example, returnStatus returns the exit code of the command for use in the pipeline.
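A sketch using returnStatus with the same cleanup command, in case you also want to react to a failed delete:
stage('Logs Cleanup') {
    steps {
        script {
            // returnStatus suppresses the step's exception and hands back the exit code
            def rc = sh script: 'rm -rf /home/oracle/test/logs1/* /home/oracle/test/logs2/*', returnStatus: true
            if (rc != 0) {
                echo "Cleanup exited with status ${rc}"
            }
        }
    }
}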

Retry failed xcodebuild test

I am building a custom Jenkins build script (sh) for iOS app build/test checks.
Sometimes a UI test fails just because of a timing issue, so I want to rerun it a few more times to make sure the failure is real.
for (( ATTEMPT=1; ATTEMPT<=2; ATTEMPT++ ))
do
    xcodebuild [flags] test #add_result_saving_mechanism
    #if failed, do smth to go to next attempt. Else - break
    if SOME_KIND_OF_FAIL_CHECK; then
        continue
    else
        break
    fi
done
I used xcpretty before, so I was able to read $PIPESTATUS and react accordingly, but xcpretty is crashing on xcodebuild test for some reason, so I am looking for ways to do this without it:
xcodebuild [flags] test | xcpretty
STATUS="${PIPESTATUS}"
if [ "$STATUS" != "0" ]; then
    FAILURE_MSG="${TARGET} (${BUILD_NAME}) failed UI/Unit testing"
    #try next attempt if available
    continue
else
    break
fi
How can I handle these retries without pipes/xcpretty?
From the Jenkins perspective, it always used to bail early when a script encountered an error, so you can use this kind of syntax to prevent that (in the Jenkins config, that is, not in the script itself):
jenkins_build_script.sh || true
# continue with things...
Also, if you're having trouble capturing the failure itself, try piping the xcodebuild output to a log file and then grep-ing it for the errors you're anticipating, as in the sketch below.
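For example, without the xcpretty pipe the exit status of xcodebuild itself lands in $?, so no PIPESTATUS is needed. A sketch along those lines ([flags] stays as in the question; the log file name and the grepped marker are illustrative):
for (( ATTEMPT=1; ATTEMPT<=2; ATTEMPT++ ))
do
    # redirect to a log file instead of piping into xcpretty
    xcodebuild [flags] test > "xcodebuild_${ATTEMPT}.log" 2>&1
    STATUS=$?
    if [ "$STATUS" != "0" ]; then
        FAILURE_MSG="${TARGET} (${BUILD_NAME}) failed UI/Unit testing"
        # optionally inspect the log for the errors you anticipate
        grep -F '** TEST FAILED **' "xcodebuild_${ATTEMPT}.log"
        continue
    else
        break
    fi
done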
Since you said it's a script run by Jenkins, you can handle the retry from the Jenkins pipeline instead of inside the shell script, as in this example from the docs:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                retry(3) {
                    sh './flakey-deploy.sh'
                }
            }
        }
    }
}
You can read more about the retry step in the Jenkins pipeline documentation.
Hope this helps, good luck.

How to find the last executed command status in a foreach loop in Perl

I have a piece of logic that executes the start-service command on each server in a list, one by one. The question is: after the last server has finished successfully, I need to take some action based on the status. How do I make the command wait until the last server is done?
Kindly suggest.
Sample code:
foreach my $Server_name (@servers)
{
    my $command = qq(sudo /bin/su - jenkins -c "ssh scm\@$Server_name '/bin/sh ${SCRIPT_HOME}/startService.sh'");
    print "$command\n";
    system($command);
    if ($? == 0)
    {
        # do some action
    }
}
The $? variable in this case contains the exit status of system, but system itself returns that value as well. So assign its return to a variable and check it after the loop:
my $exit_status;
foreach my $server_name (@servers)
{
    my $command = qq(sudo /bin/su - jenkins -c "ssh scm\@$server_name '/bin/sh ${SCRIPT_HOME}/startService.sh'");
    print "$command\n";
    $exit_status = system($command);
}
if ($exit_status == 0) { ... }
I'd also like to comment on a few points:
- We don't know what you need from errors, but consider whether $? is enough; if you only check for success it's fine, but otherwise all you get from system is the last wait call's status. Various IPC modules provide better error reporting.
- A command in a single string passed to system can be unsafe, as it may get passed to the shell to interpret first. If the shell isn't actually needed, it is better to use the LIST form of system, where the shell isn't involved; also see exec for discussion of the LIST form. A sketch follows this list.
- To prepare a string for the command it is better to use String::ShellQuote.
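In the LIST form each element is passed to the program as a separate argument, with no shell in between. A minimal sketch based on the question's command (su still needs its -c argument as one string):
# LIST form: the outer command is not interpreted by a shell
my $rc = system('sudo', '/bin/su', '-', 'jenkins', '-c',
                "ssh scm\@$server_name '/bin/sh ${SCRIPT_HOME}/startService.sh'");
if ($rc == 0) {
    # do some action
}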

How to take the result of a previous build step in Jenkins with Groovy?

Is there an example script of taking the previous build step's return code? I would like to know how to do this with Groovy. The previous build step is running an SSH command remotely and returns a specific return code. How can I read this return code in the next build step with Groovy?
If you go to the pipeline snippet generator from a pipeline job in the Jenkins UI (click on 'pipeline syntax' on the left) it will give you the syntax of each step like "sh". For a shell command you can do it like this example:
pipeline {
    // Assumes you have Linux agents..
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def result = sh returnStatus: true, script: 'ls -a'
                    echo "Return code of shell script: ${result}"
                }
            }
        }
    }
}
I don't know of a way to get it for a previous step if you did not capture the result like this for that step, though.
In case of failure, when returnStatus is explicitly requested like this, an exception is not thrown, so you will need to handle the return status yourself and fail the job explicitly with error('message..') if that is what is required.
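For example, building on the snippet above:
script {
    def result = sh returnStatus: true, script: 'ls -a'
    if (result != 0) {
        // returnStatus suppressed the exception, so fail the build explicitly
        error "Shell script failed with exit code ${result}"
    }
}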

Run a background job from Gradle

I've created a task starting a remote job like this:
task mytask(type: Exec) {
    commandLine 'ssh'
    args '-f -l me myserver ./start'.split(' ')
}
and it works; however, it seems to wait for the job to terminate. But the job never terminates, and it shouldn't.
Doing the same from the command line works: because of the -f switch, the ssh command gets executed in the background.
I've tried to add '>&' /dev/null (the csh stdout-and-stderr redirect) to the command line, but without any success. The obvious & did nothing either. I also extracted the command line into a script, and it's always the same: Gradle waits for termination.
Solution
I solved it by using a script and redirecting both stdout and stderr in the script. My problem came from confusing the redirections: by passing '>&' /dev/null I redirected the streams on the remote computer, but what was needed was a redirection on the local one (i.e., without putting the redirection operator in quotes).
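The same idea without a separate script file might look like this (a sketch; it assumes a POSIX shell on the local machine, and the redirection now happens locally, so Exec is not left waiting on open output streams):
task mytask(type: Exec) {
    // run through a local shell so the redirection applies on the local side
    commandLine 'sh', '-c', 'ssh -f -l me myserver ./start > /dev/null 2>&1'
}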
The Exec task always waits for termination. To run a background job, use the Ant exec task with spawn instead:
ant.exec(
    executable: 'ssh',
    spawn: true
) {
    arg(value: '-f')
    arg(value: '-l')
    arg(value: 'me')
    arg(value: 'myserver')
    arg(value: './start')
}
The Exec task always waits for termination. To run a background job, you need to write your own task, which could, for example, use the Java ProcessBuilder API.
As @peter-niederwieser suggests, ProcessBuilder might be the solution. Something along the lines of Tomas Lin's ExecWait might work for you.
In short, it listens for a chosen word in the process output and marks the task as done when that word appears.
From the page:
class ExecWait extends DefaultTask {
    String command
    String ready
    String directory

    @TaskAction
    def spawnProcess() {
        ProcessBuilder builder = new ProcessBuilder(command.split(' '))
        builder.redirectErrorStream(true)
        builder.directory(new File(directory))
        Process process = builder.start()

        InputStream stdout = process.getInputStream()
        BufferedReader reader = new BufferedReader(new InputStreamReader(stdout))

        // echo the process output until the ready marker appears,
        // then return and let the process keep running in the background
        def line
        while ((line = reader.readLine()) != null) {
            println line
            if (line.contains(ready)) {
                println "$command is ready"
                break
            }
        }
    }
}
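A task definition using that class might look like this (a sketch; the command mirrors the question's ssh invocation, and the ready marker is whatever line the remote process prints once it is up):
task startRemote(type: ExecWait) {
    command = 'ssh -l me myserver ./start'
    directory = '.'
    ready = 'started'   // illustrative marker line
}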
The Gradle spawn plugin can launch a background process in a Gradle build and subsequently tear it down again. NB: this does not work under Windows, but it seems fine on Linux or Mac OS X. If you find that it starts the background process but doesn't detect when that process has finished initialising (so that integration testing can begin), configure the task with the "ready" parameter: a string that the plugin looks for in the output of the started background process to determine when it is safe to proceed. An illustrative configuration follows.
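A sketch of such a task (the task type and property names are assumptions based on the plugin's README, so double-check them against the plugin's documentation; the ready string is illustrative):
// assumes the spawn plugin has been applied to the build
task startServer(type: SpawnProcessTask) {
    command 'ssh -l me myserver ./start'
    ready 'started'   // line in the process output that signals readiness
}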
