Catch outcome of Jenkins job in shell command variable - shell

I have a job in Jenkins that uses two commands in an Execute Shell Command build step.
The first runs the tests, the second creates a report from the results. It looks a little bit like this:
node_modules/gulp/bin/gulp.js run-cucumber-tests
node_modules/gulp/bin/gulp.js create-cucumber-report
If there are test failures, the first command exits with code 1, which means the second command won't even run. But even though the first command failed, I want the report to be created!
What I've tried is to do this:
node_modules/gulp/bin/gulp.js run-cucumber-tests || true
node_modules/gulp/bin/gulp.js create-cucumber-report
Now the report does get created, but the build is marked as successful. That's not what I want: I want the Jenkins build to still fail in the end, but with the report created.
I was wondering: maybe I can catch the outcome of the first command in a variable, continue with the second, and then return it after the second command.

You can use set +e to let the script continue even if an error occurs, and then use $? to capture the exit code of the last command. With exit you can force the script's result code to the previously captured value.
set +e
node_modules/gulp/bin/gulp.js run-cucumber-tests
RESULT=$?
node_modules/gulp/bin/gulp.js create-cucumber-report
exit $RESULT
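An equivalent variant that avoids toggling set +e is a minimal sketch using the || capture pattern (RESULT is just a variable name; the gulp tasks are the ones from the question):
RESULT=0
node_modules/gulp/bin/gulp.js run-cucumber-tests || RESULT=$?   # capture the failure without aborting the step
node_modules/gulp/bin/gulp.js create-cucumber-report
exit $RESULT                                                    # propagate the test result to Jenkins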

Related

Using set-output as a workflow command fails with exit code 3

I do not know whether there is a problem in one of my steps in a GitHub workflow job.
The step looks like this:
- name: check_tf_code
  id: tf_check
  run: |
    terraform fmt -check
    echo '::set-output name=bash_return_Value::$?'
and the error output for this step in the log is:
Error: Process completed with exit code 3
What could be the problem here? Is GitHub unable to evaluate $? in Bash to find out whether the last (terraform) command was successful or not?
Run steps are run with set -eo pipefail (see docs); this means that if any command has a non-zero exit status, the job aborts, including when the command is part of a pipeline.
In your case, this means that if terraform fmt -check has a non-zero exit status, the next line is never run.
Then, your output would be literally $? because Bash doesn't expand anything in single quotes. You'd have to use
echo "::set-output name=bash_return_Value::$?"
but you already know that when this line is hit, the value of $? is 0.
If you want to do something in case a command fails, you could do
if ! terraform fmt -check; then
  # Do something
fi
because if conditions don't trigger the set -e behaviour.
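Putting the pieces together, a minimal sketch of the run block, assuming the goal is still to expose the exit status as a step output and keeping the set-output syntax from the question:
rc=0
terraform fmt -check || rc=$?                     # the || keeps set -e from aborting the step
echo "::set-output name=bash_return_Value::$rc"   # double quotes so $rc is expanded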

Bash script failing for no reason

I have this bash script (actually a part of https://github.com/ddollar/heroku-buildpack-multi/blob/master/bin/compile, with echo statements I've added myself):
echo "[DEBUG] chmod done"
framework=$($dir/bin/detect $1)
echo "[DEBUG] $framework done"
And I see in the log:
[DEBUG] chmod done
Staging failed: Buildpack compilation step failed
And I do not see the second echo in the logs at all.
Unfortunately I do not know much Bash. Could anybody explain to me in what case the first echo runs but the second does not? I always thought that both echos should always run, no matter whether the second line succeeds or not.
It's not visible in your question, but if you follow your link, the third line of the script says
set -e
This tells the shell to stop processing the script immediately whenever an error occurs. Comment that line out, and the script should run through and also print the second echo statement.
Note that I didn't inspect what the script actually does and I cannot tell you if commenting out set -e is actually good advice or not.
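For illustration, a minimal stand-alone sketch of the same effect (false stands in for the failing $dir/bin/detect call):
#!/bin/bash
set -e
echo "[DEBUG] chmod done"
false                        # any command with a non-zero exit status stops the script here
echo "[DEBUG] never printed" # with set -e this line is never reached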
From man set:
-e: When this option is on, when any command fails (for any of the reasons listed in Section 2.8.1, Consequences of Shell Errors, or by returning an exit status greater than zero), the shell immediately shall exit with the following exceptions:
1. The failure of any individual command in a multi-command pipeline shall not cause the shell to exit. Only the failure of the pipeline itself shall be considered.
2. The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last.
3. If the exit status of a compound command other than a subshell command was the result of a failure while -e was being ignored, then -e shall not apply to this command.
This requirement applies to the shell environment and each subshell environment separately. For example, in:
set -e; (false; echo one) | cat; echo two
the false command causes the subshell to exit without executing echo one; however, echo two is executed because the exit status of the pipeline (false; echo one) | cat is zero.

capture exit code from a script flow

I need help with some scripts I'm writing.
Scenario:
Script A is executed by a scheduling process. This script takes the arguments passed to it, parses them in some way, and runs script B, feeding it those arguments;
Script B does sudo -u user ssh user@REMOTEMACHINE, runs some commands (on the remote machine) and finally runs script C (also on the remote machine). I am passing those commands using a here document, and I'm passing the previous arguments on to this script too.
This "flow" runs correctly and the job completes successfully.
My problems are:
Since this "flow" is ran by a scheduling process, I need to tell it if the job completed successfully or not. I'm doing this via exit codes, so what I want is to have a chain of exit codes, returning back from the last script to the first, in case of errors. I'm not able to perform this, because exit codes works correctly for the single scripts (I tried executing them singularly and look for the exit codes), but they are not sended back to the parent script. In my opinion, the problem is that ssh is getting the exit code from the child script, which in fact ended successfully, because there was no error executing it: it's the command inside of it that gone wrong.
While the process works correctly, I still get this line:
ssh: Could not resolve hostname : Name or service not known
But actually the script completes successfully.
I hope you understand what I wrote; if needed, I can post my scripts here.
Thanks
O.
EDIT:
These are the scripts. There could be some problems with variable names, because I renamed them quickly to upload the files.
Since I can't upload 3 files because of my low reputation, I merged them into a single file:
SCRIPT FILE
I managed to solve the problem.
I followed olivier's advice and used the escape character so that the variable is expanded by the remote machine.
I also implemented different exit codes based on where the error occurred.
Finally, I modified the first script as follows, right after the sudo -u call that launches the second script:
EXITCODEOFTHESECONDSCRIPT=$?
if [ $EXITCODEOFTHESECONDSCRIPT = 0 ]
then
    echo ""
    echo "Export job took $SECONDS seconds."
    echo ""
    exit 0
else
    exit $EXITCODEOFTHESECONDSCRIPT
fi
This way I am able to exit the main script while MAINTAINING the exit code provided by the second script.
In fact, I found that the process itself worked well even in case of errors, but running more commands after the second script failed (the echo command was enough) produced other exit codes that overwrote the one I wanted.
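For illustration, a minimal sketch of the escaping the answer describes (the hostname, user and script path are placeholders, not the actual files):
#!/bin/sh
# Script B: run script C remotely and hand its exit code back to the caller.
sudo -u user ssh user@REMOTEMACHINE <<EOF
/path/to/scriptC
exit \$?    # escaped, so \$? is evaluated on the remote machine: script C's exit code
EOF
exit $?     # ssh exits with the remote exit status (or 255 if the connection itself fails)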
Thanks to all !

How to get Jenkins job to fail if phantomJS.exe call exits with an error

In a Jenkins job, I have a Windows batch command build step which runs a test with PhantomJS.
How can I make the job fail if, inside smokeTest.js, I exit PhantomJS with an error, like this:
phantomJS.exit(1)
Jenkins considers a job "failed" if the return code of any command block (like the 'Execute Windows batch command' step above) is non-zero.
If that does not work for you in this case, it is probably because phantomjs.exe returns 0 in any case (you can confirm this by echoing the ERRORLEVEL just after that command: ECHO %ERRORLEVEL%).
If this is the case (i.e. it returns 0 even when it fails), you can handle it this way:
1. Print a clear error message to the console (or STDOUT; anything that will show up in the log of the job).
2. Use the Text-finder Plugin to catch that error message and mark the build as 'Failed'.

Getting the return value in "sh -e"

I'm writing a shell script with #!/bin/sh -e as the first line, so that the script exits on the first error. There are a few lines in the file of the form command || true, so that the script doesn't exit right there if the command fails. However, I still want to know the exit code of the command. How would I get the exit code without having to use set +e to temporarily disable that behavior?
Your question appears to imply set -e.
Assuming set -e:
Instead of command || true you can use command || exitCode=$?. The script will continue and the exit status of command is captured in exitCode.
$? is an internal variable that keeps the exit code of the last command.
Since || short-circuits when command succeeds, exitCode keeps its previous value in that case; set exitCode=0 between tests, or instead use: command && exitCode=0 || exitCode=$?.
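A minimal self-contained sketch of that pattern (some_command is just a placeholder for whatever command may fail):
#!/bin/sh
set -e

exitCode=0
some_command || exitCode=$?   # the assignment runs only if some_command fails; set -e does not abort
echo "some_command exited with status $exitCode"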
But prefer to avoid set -e style scripting altogether, and instead add explicit error handling to each command in your script.
If you want to know the status of the command, then presumably you take different actions depending on its value. In which case your code should look something like:
if command; then
    # do something when command succeeds
else
    # do something when command fails
fi
In that case you don't need to do anything special, since the shell will not abort when command fails.
The only reason set -e would give you any problems is if you write your code as:
command
if test $? = 1; ...
So don't do that.
