Detecting success or failure of cp and rm - xcode

I have a shell script that runs a bunch of commands (on OS X 10.7) as part of a build step for Xcode. The script removes a bunch of files and copies a bunch of files.
The problem I have right now is that if the cp command fails, the build still 'succeeds' according to Xcode, presumably because the script still returns with an exit status of 0. How can I capture the result of the cp? I looked up the man page and it doesn't seem to document a return value.

cp will return an error code (non-zero) on failure, but your script probably ignores it and proceeds to the next command.
Unless you explicitly check the return code of each command in a multi-step script, the shell will simply carry on.
See Aborting a shell script if any command returns a non-zero value? for how to exit a script on any error.
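A minimal sketch of both approaches, for illustration (the paths below are placeholders, not the asker's actual files):
#!/bin/sh
# Option 1: abort the whole script as soon as any command fails
set -e

# Option 2: check an individual cp explicitly and decide what to do
if ! cp build/output.bin /some/destination/; then
    echo "cp failed" >&2
    exit 1    # a non-zero exit status is what makes the build step fail
fi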

Related

Programmatically/script to run zsh command

As part of a bigger script I'm using print -z ls to have zsh's input buffer show the ls command. This requires me to manually press enter to actually execute the command. Is there a way to have ZSH execute the command?
To clarify, the objective is to have the command actually run, keep it in history, and, if another command is already running, avoid running in parallel with it.
The solution I've found is:
python -c "import fcntl, sys, termios; fcntl.ioctl(sys.stdin, termios.TIOCSTI, '\n');
I'm not sure why, but sometimes you might need to repeat the command twice for the actual command to be executed. In my case this happens because I send a process to the background, although that still doesn't make much sense, because that process sends a signal back to the original shell (triggering a trap), which actually calls this code.
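For reference, a rough sketch of how the two pieces fit together in an interactive zsh session (ls stands in for the real command):
print -z ls    # put the command into zsh's input buffer
python -c "import fcntl, sys, termios; fcntl.ioctl(sys.stdin, termios.TIOCSTI, '\n')"    # inject a newline, as if Enter were pressed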
In case anyone is interested, this was my goal:
https://gist.github.com/alexmipego/89c59a5e3abe34faeaee0b07b23b56eb

Script piped into bash fails to expand globs during rm command

I am writing a script with the intention of being able to download and run it from anywhere, like:
bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)
The command above allows me to download the script, run interactive commands (e.g. read), and - for the most part - Just Works. I have run into an issue during the cleanup portion of my script, however, and haven't been able to discern a fix.
During cleanup I need to remove several .bkp files created by the script's execution. To do so I run rm -f **/*.bkp inside the script. When a local copy of the script is run, this works great! When run via bash/curl, however, it removes nothing. I believe this has something to do with a failure to expand the glob as a result of the way I've connected the I/O of bash and curl, but have been unable to find a way to get everything to play nice.
How can I meet all of the following requirements?
Download and run a script from a remote resource
Ensure that the user's keyboard input is connected for use in e.g. read calls within the script
Correctly expand the glob passed to rm
Bonus points: colorize input with e.g. echo -e "\x1b[31mSome error text here\x1b[0m" (also not working, suspected to be related to the same bash/curl I/O issues)
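One possible sketch that touches each of these points, assuming the script runs under bash 4 or later (where ** only recurses when globstar is enabled) and that reading from /dev/tty is acceptable for the interactive parts:
#!/usr/bin/env bash
shopt -s globstar nullglob               # make ** match recursively; drop the pattern if nothing matches

# read from the controlling terminal so prompts work even if stdin is not the keyboard
read -r -p "Continue? [y/N] " answer < /dev/tty

printf '\x1b[31m%s\x1b[0m\n' "Some error text here"   # printf is often more predictable than echo -e

rm -f -- **/*.bkp                        # now expands recursively as intended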

fish shell evaluate make return code

I'm trying to write a script in Fish that runs a Make recipe and then executes all of the resultant binaries. The problem I'm having is that I would like to have the script exit with an error code if the make command encounters an error. Whenever I try to capture Make's return value, I end up with its output log instead.
For example:
if test (make allUnitTests) -eq 0
    echo "success"
else
    echo "fail"
end
returns an error because "test" is being handed the build output, not the exit status.
I wrote the script so that I could easily make Jenkins run all my unit tests whenever I trigger a build. Since I haven't been able to get the above section of the script working correctly, I've instead instructed Jenkins to run the make command as a separate command, which does exactly what I want: halting the entire build process without executing any binaries if anything fails to compile. Thus, at this point my question is more of an academic exercise, but I would like to add building the unit test binaries into the script (and have it cleanly terminate on a build error) for the benefit of any humans who might check out the code and would like to run the unit tests.
I played a little with something like:
if test (count (make allUnitTests | grep "Stop")) -eq 0
but this has two problems:
I'm apparently piping stdout when I need to pipe stderr. (Come to think of it, if I could just check to see if anything was written to stderr, then I wouldn't need grep at all.)
Grep is swallowing all the log data piped to it, which I really want to be visible on the console.
You are misunderstanding the parentheses - these run a command substitution. What this does is capture the output of the process running in the substitution, which it will then use as arguments (separated by newlines by default) to the process outside.
This means your test will receive the full output of make.
What you instead want to do is just run if make allUnitTests without any parens, since you are just interested in the return value.
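Applied to the example from the question, that would look something like:
if make allUnitTests
    echo "success"
else
    echo "fail"
end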
If you would like to do something between running make and checking its return value, the "$status" variable always contains the return value of the last command, so you can save that:
make allUnitTests
set -l makestatus $status
# Do something else
if test $makestatus -eq 0
    # Do the if-thing
else
    # Do the else-thing
end

How/When does Execute Shell mark a build as failure in Jenkins?

The horror stories I found while searching for an answer for this one...
OK, I have a .sh script which pretty much does everything Jenkins is supposed to do:
checks out sources from SVN
builds the project
deploys the project
cleans up after itself
So in Jenkins I only have to 'build' the project by running the script in an Execute Shell command.
The script is run (the sources are downloaded, the project is built/deployed), but then Jenkins marks the build as a failure:
Build step 'Execute shell' marked build as failure
Even though the script ran successfully! I tried ending the script with:
exit 0 (still marks it as failure)
exit 1 (marks it as failure, as expected)
no exit command at all (marks it as failure)
When, how and why does Execute Shell mark my build as a failure?
First things first, an aside. It is not part of the answer, but it absolutely has to be said:
If you have a shell script that does "checkout, build, deploy" all by itself, then why are you using Jenkins? You are foregoing all the features of Jenkins that make it what it is. You might as well have a cron job or an SVN post-commit hook call the script directly.
Jenkins performing the SVN checkout itself is crucial. It allows builds to be triggered only when there are changes (or on a timer, or manually, if you prefer). It keeps track of changes between builds. It shows those changes, so you can see which build was for which set of changes. It emails committers when their changes caused a successful or failed build (again, configured as you prefer), and it will email committers when their fixes fix a failing build. And more and more.
Jenkins archiving the artifacts also makes them available, per build, straight off Jenkins. While not as crucial as the SVN checkout, this is once again an integral part of what makes it Jenkins. Same with deploying. Unless you have a single environment, deployment usually happens to multiple environments. Jenkins can keep track of which environment a specific build (with a specific set of SVN changes) is deployed to, through the use of Promotions.
You are foregoing all of this. It sounds like you were told "you have to use Jenkins" but you don't really want to, and you are doing it just to get your bosses off your back, just to put a checkmark "yes, I've used Jenkins".
The short answer is: the exit code of the last command of the Jenkins Execute Shell build step is what determines the success/failure of the build step. 0 means success, anything else means failure.
Note, this determines the success/failure of the build step, not of the whole job run. The success/failure of the whole job run can further be affected by multiple build steps, post-build actions, and plugins.
You've mentioned Build step 'Execute shell' marked build as failure, so we will focus just on a single build step. If your Execute shell build step only has a single line that calls your shell script, then the exit code of your shell script will determine the success/failure of the build step. If you have more lines, after your shell script execution, then carefully review them, as they are the ones that could be causing failure.
Finally, have a read here: Jenkins Build Script exits after Google Test execution. It is not directly related to your question, but note the part about Jenkins launching the Execute Shell build step as a shell script with /bin/sh -xe.
The -e means that the shell script will exit with failure even if just one command fails, and even if you do error checking for that command (because the script exits before it gets to your error checking). This is contrary to normal execution of shell scripts, which usually print the error message for the failed command (or redirect it to null and handle it by other means) and continue.
To circumvent this, add set +e to the top of your shell script.
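A minimal sketch of what that can look like (the cp and its paths are placeholders, not taken from the question):
#!/bin/sh
set +e                          # undo Jenkins' default -e so the script keeps running after a failure

cp build/output.app /deploy/target/
if [ $? -ne 0 ]; then
    echo "copy failed" >&2
    exit 1                      # decide explicitly what should fail the build step
fi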
Since you say your script does all it is supposed to do, chances are the failing command is somewhere at the end of the script. Maybe a final echo? Or copy of artifacts somewhere? Without seeing the full console output, we are just guessing.
Please post the job run's console output, and preferably the shell script itself too, and then we could tell you exactly which line is failing.
The simple and short answer to your question is:
Please add the following line to your "Execute shell" build step.
#!/bin/sh
Now let me explain the reason why this line is needed for the "Execute Shell" build step.
By default Jenkins uses /bin/sh -xe: -x prints each and every command, and -e causes the shell to stop running the script immediately when any command exits with a non-zero (failing) exit code.
So adding #!/bin/sh allows you to execute with no options.
In my opinion, turning off the -e option to your shell is a really bad idea. Eventually one of the commands in your script will fail due to transient conditions like out of disk space or network errors. Without -e Jenkins won't notice and will continue along happily. If you've got Jenkins set up to do deployment, that may result in bad code getting pushed and bringing down your site.
If you have a line in your script where failure is expected, like a grep or a find, then just add || true to the end of that line. That ensures that line will always return success.
If you need to use that exit code, you can either hoist the command into your if statement:
grep foo bar; if [ $? -eq 0 ]; then ... --> if grep foo bar; then ...
Or you can capture the return code in your || clause:
grep foo bar || ret=$?
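Put together, a small sketch of both patterns under -e (the grep commands and file names are placeholders):
set -e                                      # Jenkins' default behaviour

grep -q "pattern" somefile.log || true      # failure here is expected and must not abort the script

ret=0
grep -q "pattern" otherfile.log || ret=$?   # failure tolerated, but the exit code is kept for later
echo "second grep exited with $ret"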
I've tried all the options mentioned (even changing sh to bash without the -xe params); the only option that worked for me is:
<command-which-returns-not-zero> || exit 0
Plain and simple:
If Jenkins sees the build step (which is a script too) exit with a non-zero code, the build is marked with a red ball (= failed).
Why exactly that happens depends on your build script.
I wrote something similar from another point-of-view but maybe it will help to read it anyway:
Why does Jenkins think my build succeeded?
Adding a shebang line, as suggested above, also helped me fix an issue where I was executing a bash script from the Jenkins master on my Linux slave. Just adding #!/bin/bash above my actual script in the "Execute Shell" block fixed it; otherwise it was executing the Windows Git-provided version of the bash shell, which was giving an error.
Try and always find out where exactly it's failing by adding the following line to your "Execute shell" build step:
#!/bin/sh -xe
By adding -x you will print each and every command that runs (including the lines from embedded scripts), which will help in spotting the root cause.
Removing the -e option, i.e. running plain #!/bin/sh, will allow you to execute with no options, which is a really bad idea, as Bryan explained well in one of the answers.
The problem is that with no options Jenkins will ignore errors and continue executing subsequent steps (if there are any), which will leave your process in an inconsistent state. If this is a production build or deployment, that may have a bad impact.
Once you find the problem area, run the same failing command manually from that directory as the jenkins user, to get to the exact error/root cause.
In Jenkins ver. 1.635, it is impossible to show a native environment variable like this:
$BUILD_NUMBER or ${BUILD_NUMBER}
In this case, you have to set it in another variable.
set BUILDNO = $BUILD_NUMBER
$BUILDNO

Preventing a program from breaking my Bash script

I'm using S3cmd in a bash script upon startup. If it returns an error code, the script is ready to do something. However, s3cmd seems to (sometimes) break it all when an error occurs, and outputs information on screen. It just exits my script.
How do I prevent a program from breaking my Bash script? If something is wrong, I just want the bash script to keep on doing the next thing in line.
EDIT: It seems this only happens with /etc/rc.local. If I run the script as something else (/home/whateverscript) it does what I want it to.
Maybe you can wrap your
s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/"
call in the script as below:
outputText=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" 2>&1;echo ,$?)
This redirects stderr to stdout (2>&1), and the variable outputText will contain the output of the command in the form stdout,exit_status, if it's required later in the script context.
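If you later need the two parts separately, one possible way to split them with standard parameter expansion (relying on the ,exit_status appended by the echo above being the last comma in the string):
s3cmdStatus=${outputText##*,}    # everything after the last comma: the exit status
s3cmdOutput=${outputText%,*}     # everything before it: the command's combined output
if [ "$s3cmdStatus" -ne 0 ]; then
    echo "s3cmd failed with status $s3cmdStatus, continuing anyway" >&2
fi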
If you don't want the stdout but only the status of the command, you can use the following:
status=$(s3cmd sync --recursive --delete-removed --config="$HEMMAPPEN/.s3cfg" "$SOURCEFOLDER" "$TARGETFOLDER/" > /dev/null 2>&1;echo $?)
The status variable will contain the status of the command that was run.
I hope it makes sense. Please comment, if it hasn't solved your problem.
