Terminate makefile command when subcommand throws? - shell

I've got the following Makefile:
#runs the working directory unit tests
test:
	@NODE_ENV=test; \
	mocha --ignore-leaks $(shell find ./test -name \*test.js);

#deploys working directory
deploy:
	@make test; \
	make deploy-git; \
	make deploy-servers;

#deploys working to git deployment branch
deploy-git:
	@status=$$(git status --porcelain); \
	if test "x$${status}" = x; then \
		git branch -f deployment; \
		git push origin deployment; \
		echo "Done deploying to git deployment branch."; \
	else \
		git status; \
		echo "Error: cannot deploy. Working directory is dirty."; \
	fi

deploy-servers:
	# for each server
	#	@DEPLOY_SERVER_IP = "127.0.0.1"; \
	#	make deploy-server

#deploy-server:
	# connect to this server with ssh
	# check if app is already running
	# stop the app on the server if already running
	# set working directory to app folder
	# update deployment git branch
	# use git to move head to deployment branch
	# start app again
Note that deploy-servers and deploy-server are just dummies for now. This is what the deploy command should do:
run the tests (make test), exit on failure
push current head to deployment branch (make deploy-git), exit on failure
pull deployment branch on servers (make deploy-servers)
You can see this in the Makefile as:
deploy:
	@make test; \
	make deploy-git; \
	make deploy-servers;
The issue is that I am not sure how to prevent make deploy-git from executing when make test fails, and how to prevent make deploy-servers from executing when the tests fail or when make deploy-git fails.
Is there a clear way to do this, or should I resort to using shell files or write these tools in a normal programming language?

The exit status of a shell command list is the exit status of the last command in the list. Simply turn your command list into separate single simple commands. By default, make stops when a command returns nonzero. So you get what you want with
deploy:
	@make test
	make deploy-git
	make deploy-servers
Should you ever want to ignore the exit status of a simple command, you can prefix it with a dash:
target:
	cmd1
	-cmd2 # It is okay if this fails
	cmd3
Your make manual has all the details.
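Both behaviors described in this answer can be checked outside of make. A minimal sketch in plain shell (set +e is only there so the deliberately failing commands don't abort the demo):

```shell
set +e   # run the deliberately failing commands below without aborting

# The exit status of a semicolon-separated list is that of its
# last command, so the earlier failure of `false` is masked:
false; true
echo "list status: $?"             # prints: list status: 0

# A lone failing command reports its own status instead, which is
# what make sees when each command is a separate recipe line:
false
echo "single command status: $?"   # prints: single command status: 1
```

This is exactly why splitting the list into separate recipe lines lets make notice the failure of each step.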

Others have given answers which are based on splitting the "recipe" into individual commands.
In situations where that is not viable, what you can do is set -e in the shell script to make it terminate if a command fails:
target:
	set -e ; \
	command1 ; \
	command2 ; command3 ; \
	... commandN
This is the same set -e that you would put near the top of a shell script to get it to bail when some command terminates unsuccessfully.
Suppose that we are not interested in the termination statuses of command2 and command3. Suppose it is okay if these indicate failure, or do not reliably use termination status. Then, instead of set -e we can code an explicit exit test:
target:
	command1 ; \
	command2 || exit 1 ; \
	command3 ; \
	true # exit 0 will do here also.
Since command3 can indicate failure, and we don't want it to fail our build, we add a successful dummy command.
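As a standalone sketch of that pattern, with hypothetical step1/step2/step3 functions standing in for the real commands (step2 must succeed; the others may fail):

```shell
set +e   # this pattern checks failures explicitly, not via set -e

# Hypothetical stand-ins: step2 must succeed; step1 and step3 are
# allowed to fail without aborting the list.
step1() { echo "step1 ran"; false; }
step2() { echo "step2 ran"; }
step3() { echo "step3 ran"; false; }

step1 ; \
step2 || exit 1 ; \
step3 ; \
true   # dummy success so the whole list reports status 0
echo "list finished with status $?"   # prints: list finished with status 0
```

If step2 is changed to fail, the `|| exit 1` fires and nothing after it runs.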

make should already do this: it executes complex commands with sh -e, which (as long as the command is not in a loop, in a POSIX-compatible shell) aborts execution as soon as a command exits nonzero. make in turn aborts the entire Makefile when a command fails, unless you specifically tell it not to. If you're feeling paranoid, you can use && in place of ; in your commands.

I solved this very issue by proxying to a new make command at the potential breakpoint:
.PHONY: cmd_name cmd_name_contd
cmd_name:
	if [ "`pwd`" = "/this/dir" ]; then make cmd_name_contd; fi
cmd_name_contd:
	@echo "The directory was good, continuing"
That way, if the directory was wrong, it just exits silently; you could also add an else condition with a message to display on failure.
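The guard itself can be tried as plain shell (the directory /this/dir is just the example condition from the recipe above):

```shell
# Run the continuation only when the precondition holds; when the
# current directory is not /this/dir the body is skipped, and the
# if statement itself still exits with status 0, so make would not
# flag the target as failed.
if [ "$(pwd)" = "/this/dir" ]; then
  echo "The directory was good, continuing"
fi
echo "guard finished with status $?"   # prints: guard finished with status 0
```

A failed if-condition with no else branch is not an error in shell, which is what makes the silent exit work.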

Related

Gitlab CI ignores script exit code other than 1

I'm trying to set up a GitLab pipeline, so that certain exit_codes are okay for a script I'm executing.
I have tried both shell and a ruby script, but both seem to have the same behaviour.
test_job:
  stage: build
  image: ruby:3.0
  script:
    - chmod +x ci/wrap.sh
    - ./ci/wrap.sh
  allow_failure:
    exit_codes:
      - 64
As you can see, I am just executing the script and nothing more; my expectation would be that the exit status of the last script executed is used as the exit status for the job.
In the script I'm only calling exit 64, which should be an "allowed failure" in that case. The pipeline log, however, says that the job failed because of exit code 1.
How do I get GitLab to accept the exit code of this (or a ruby) script as the job exit code?
I found a way to fix this problem. Apparently Gitlab Runner uses the -e flag, which means that any non-zero exit code will cancel the job. This can be updated by using set +e, but then you still need to capture the actual exit code for the job.
Using $? in two different lines of the configuration does not work, because Gitlab does echo calls in-between them.
So the exit code needs to be captured directly, example:
script:
  - set +e
  - ruby "ci/example.rb" || EXIT_CODE=$?
  - exit $EXIT_CODE
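The capture step can be verified in plain shell; here `(exit 64)` is a stand-in for the failing ruby script:

```shell
set +e
# Capture the failing command's status on the same line; a later,
# separate `echo $?` would see the status of whatever command ran
# in between (such as GitLab's injected echo calls).
(exit 64) || EXIT_CODE=$?
echo "captured exit code: $EXIT_CODE"   # prints: captured exit code: 64
```

The `||` also keeps the failing command from aborting the job while -e is in effect, since a command guarded by `||` does not trigger exit-on-error.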
Here's my trick for turning off early failure and checking the result code later:
script:
  - set +e +o pipefail
  - python missing_module.py 2>&1 | grep found; result=$?
  - set -e -o pipefail
  - "[ $result == 0 ]"
This turns off early exit and runs a command that we consider to have an acceptable failure if "found" is in the error text. It then turns early exit back on and tests whether the exit code we saved was good or not.
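A self-contained sketch of the same check, with echo standing in for the failing python command (pipefail is not needed here, because without it the pipeline's status is simply grep's):

```shell
set +e
# The pipeline's status is grep's: 0 when the expected error text
# is present in the (simulated) command output.
echo "ModuleNotFoundError: No module named 'requests'" | grep -q "No module named"
result=$?
set -e
echo "expected-failure check: $result"   # prints: expected-failure check: 0
```

If the output had not matched, result would be 1 and the final `[ $result == 0 ]` step in the job would fail the build.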

Capture Shell Script Output and Do a String Match to execute the next command?

I have a shell script doing the following two commands, connecting to a remote server and putting files via SFTP; let's call it "execute.sh":
sftp -b /usr/local/outbox/send.sh username@example.com
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
Then in my "send.sh" I have the following commands to be executed on the remote server:
cd ExampleFolder/outbox
put Files_*
bye
Now my problem is: if the first command ("sftp -b") fails due to a remote connection error or some network problem, the script still moves the files into the "completed" folder, which is incorrect. So I want the next command ("mv") to be executed only if the first "sftp" command connects successfully.
Can we do this by enhancing this shell script, or with some workaround?
My shell is Bash.
Simply insert && between the two commands:
sftp -b /usr/local/outbox/send.sh username@example.com && \
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
If the first fails, the second one will not run.
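The short-circuit can be sketched with a harmless stand-in for the transfer (fake_transfer and FAIL are hypothetical names; flip FAIL to "no" to see the other branch):

```shell
# fake_transfer stands in for the sftp call; it fails while
# FAIL=yes, succeeds otherwise.
FAIL=yes
fake_transfer() { [ "$FAIL" != yes ]; }

fake_transfer && echo "transfer ok, moving files to completed/" \
              || echo "transfer failed, mv skipped"
# prints: transfer failed, mv skipped
```

With && the mv side only runs on success, exactly as in the sftp/mv pair above.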
Alternatively, you can check the exit code of the first command explicitly. The exit code of the last command is always saved in $?, and it is 0 if the command succeeded:
sftp -b /usr/local/outbox/send.sh username@example.com
if [ $? -eq 0 ]
then
    mv /usr/local/outbox/DD* /usr/local/outbox/completed/
fi
If you really wanted to capture the output of the first command, you could run it in $(...) and store the value in a variable:
sftpOutput="$(sftp -b /usr/local/outbox/send.sh username@example.com)"
and then use this variable in further checks, e.g. match it against a pattern in the next if.

how to protect a make target against concurrent execution?

What is the simplest way to make sure that
make target
does not run concurrently? If two processes run make target at the same time, both would run the target and will likely step on each other's toes.
A shell snippet such as
dotlockfile target.lock || exit 1
trap "dotlockfile -u target.lock" EXIT
make target
works well: if two processes run that snippet at the same time, one of them will acquire the lock and the other will wait for it to finish. When the first process completes, the other gets to run make target, which will return immediately and do nothing because the target has already been built.
I'm hoping there is a simpler way to do the same thing.
The following (in your makefile) should significantly reduce the probability that several processes collide:
target:
	@if [ -f making.target ]; then \
		echo "Already making target..."; \
		exit 1; \
	else \
		touch making.target; \
		<do whatever is needed to make target>; \
		rm making.target; \
	fi
You can easily test it by replacing <do whatever is needed to make target> by sleep 10 and try to make target twice, with and without waiting for the first one when launching the second. Note that you can also simply skip the recipe instead of aborting the whole make run: simply remove the exit 1 line.
The simpler solution is to use flock as follows:
flock /tmp/target.lock make target
If this command is run while another is holding the lock, it will wait for it to finish and will run when the lock is released. Alternatively, if the intention is to give up if a make target is already running, the -n flag can be used like so:
flock -n /tmp/target.lock make target
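Assuming a Linux system where the util-linux flock binary is available, the non-blocking variant can be tried like this (the lock path and the echo are stand-ins for the real make invocation):

```shell
lock=/tmp/demo_target.lock

# The command runs while the lock is held; a second, concurrent
# `flock -n` on the same file would exit nonzero immediately
# instead of waiting for the lock to be released.
flock -n "$lock" sh -c 'echo "lock held, building target"'
echo "flock exited with status $?"
```

flock creates the lock file if it does not exist and releases the lock when the wrapped command finishes, so no trap-based cleanup is needed.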

Jenkins Conditional steps not completing shell script

OK, so I've been asked to allow a "safe word" in our Git commit comments so that our CI build will be skipped if the developer wants to just commit without building. So, I set up a Conditional Step (multiple) with Execute shell in the Run? field. I'm running the following command to see if the safe word is in the commit message: if it is, "Don't run" the build; if it's not, run the build. If I put a header in this shell (i.e. #!/bin/bash), I get an error. Without it, my script stops after setting the RUN variable; the IF..THEN..ELSE block doesn't even begin to run. Here's my script:
RUN=$(git --no-pager log -1 --pretty=online:"%s" --grep "keyword")
if [ "$RUN" != "" ];
then exit 0
else
exit 666
fi
The output when the "safe word" is not present looks like this:
++ git --no-pager log -1 --pretty=online:%s --grep blech
+ RUN=
Run condition [Execute Shell] preventing perform for step [BuilderChain]
Finished: SUCCESS
With the "safe word" present, the output is this:
++ git --no-pager log -1 --pretty=online:%s --grep initializer
+ RUN='online:CVirtualBroker - Tweak construction to use initializer lists -
Run condition [Execute Shell] preventing perform for step [BuilderChain]
Finished: SUCCESS
Notice that the RUN variable gets set, then the script stops. What gives?
AJ
You have a semicolon at the end of the if statement:
if [ "$RUN" != "" ];
which makes it stop... The command after it is exit 0, so I assume that is why your process stops.

Interpret a Hudson job as successful even though one of the invoked programs fails

I have a Hudson job that periodically merges changes from an upstream bazaar repository.
Currently, when there are no changes upstream, Hudson reports this job as a failure because the bzr commit command returns with an error. My script looks something like this:
bzr branch lp:~lorinh/project/my-local-branch
cd my-local-branch
REV_UPSTREAM=`bzr version-info lp:project --custom --template="{revno}"`
bzr merge lp:project
bzr commit -m "merged upstream version ${REV_UPSTREAM}"
./run_tests.sh
bzr push lp:~lorinh/project/my-local-branch
If there are no changes to merge, the Hudson console output looks something like this:
+ bzr branch lp:~lorinh/project/my-local-branch
Branched 807 revision(s).
+ bzr merge lp:project
Nothing to do.
+ bzr commit -m merged upstream version 733
Committing to: /var/lib/hudson/jobs/merge-upstream/workspace/myproject/
aborting commit write group: PointlessCommit(No changes to commit)
bzr: ERROR: No changes to commit. Use --unchanged to commit anyhow.
Sending e-mails to: me@example.com
Finished: FAILURE
The problem is that I don't want Hudson to report this as a failure. How do I modify my commands so the script terminates at the failed commit, but it isn't interpreted by Hudson as an error? I tried changing the commit command to:
bzr commit -m "merged upstream version ${REV_UPSTREAM}" || exit
But that didn't work.
(Note: I realize I could use Hudson's "Poll SCM" instead of "Build periodically". However, with bazaar, if somebody does a push with local commits that were done before the most recent modifications, then Hudson won't detect a change to the repository.)
You were very close! Here's the corrected version of what you were trying:
bzr commit -m "merged upstream version ${REV_UPSTREAM}" || exit 0
This now does what you asked for, but isn't perfect. I'll get to that later.
Note the tiny important change from your version - we are now being explicit that we should exit with code 0 (success), if the bzr command does not do so. In your version, exit (with no argument) will terminate your script but return the exit code of the last command executed - in this case the bzr commit.
More about exit
How do we find out about this behaviour of exit? The exit command is a shell built-in - to find documentation on it we use the help command:
help exit
Which on my machine tells me:
exit: exit [n]
Exit the shell.
Exits the shell with a status of N. If N is omitted, the exit status
is that of the last command executed.
Here's a decent tutorial on exit and exit codes in the bash shell
Hudson and exit codes
Hudson follows this common convention of interpreting exit code 0 as success, and any other code as failure. It will flag your build as a failure if the build script it executes exits with a non-zero code.
Why your script is stopping after the bzr commit
If, as you say, you have the following and your script is stopping after the bzr commit...
bzr commit -m "merged upstream version ${REV_UPSTREAM}"
./run_tests.sh
... I suspect your script has an instruction such as set -e or is being invoked with something like bash -e build_script.sh
Either of these tells the shell to exit immediately if a command exits with a non-zero status, and to pass along that same "failure" exit code. (There are subtleties - see footnote 1).
Disabling exit-on-error
While this behaviour of exiting on error is extremely useful, sometimes we'd like to disable it temporarily. You've found one way, in
bzr commit whatever || true
We can also disable the error-checking with set +e.
Here's a pattern you may find useful. In it we will:
Disable exit-on-error (with set +e)
run the command which may error bzr commit whatever
capture its exit code ($?) for later inspection
Re-enable exit-on-error (with set -e)
Test and act upon the exit code of any commands
Let's implement that. Again we'll exit 0 (success) if the bzr command failed.
set +e
bzr commit whatever
commit_status=$?
set -e

if [[ "$commit_status" != "0" ]]; then
    echo "bzr commit finds nothing to do. Build will stop, with success"
    exit 0
fi

echo "On we go with the rest of the build script..."
Note that we bracket as little as possible with set +e / set -e. If we have typos in our script in that section, they won't stop the script and there'll be chaos. Read the section "Avoiding set -e" in the post "Insufficiently known POSIX shell features" for more ideas.
What's wrong with foo || exit 0 ?
As I mentioned earlier, there's a problem with our first proposed solution. We've said that when bzr commit is non-zero (i.e. it doesn't commit normally) we'll always stop and indicate success. This will happen even if bzr commit fails for some other reason (and with some other non-zero exit code): perhaps you've made a typo in your command invocation, or bzr can't connect to the repo.
In at least some of these cases, you'd probably want the build to be flagged as a failure so you can do something about it.
Towards a better solution
We want to be specific about which non-zero exit codes we expect from bzr, and what we'll do about each.
If you look back at the set +e / set -e pattern above, it shouldn't be difficult to expand the conditional logic (if) above into something that can deal with a number of specific exit codes from bzr, and with a catch-all for unanticipated exit codes which (I suggest) fails the build and reports the offending code and command.
To find out the exit codes for any command, read the docs or run the command and then run echo $? as the next command. $? holds the exit code of the previous command.
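Checking a command's exit code this way is easy to try; grep -q over empty input is used here as a command that is known to exit nonzero:

```shell
set +e   # the first probe command below is expected to fail

# grep exits 1 when no lines matched:
printf '' | grep -q anything
echo "previous command exited with $?"   # prints: previous command exited with 1

# A successful command reports 0:
true
echo "previous command exited with $?"   # prints: previous command exited with 0
```

Remember that $? is overwritten by every command, including the echo itself, so inspect it immediately.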
Footnote 1: The exit-on-error behaviour switched with set -e has some subtleties you'll need to read up on, concerning behaviour when commands are in pipelines, conditional statements and other constructs.
Given that bzr doesn't seem to emit a correct exit code (based on your bzr ... || exit example), one solution is to capture the output of bzr and then scan it for ERROR or other markers.
bzr commit -m "merged upstream version ${REV_UPSTREAM}" 2>&1 | tee /tmp/bzr_tmp.$$
case $( < /tmp/bzr_tmp.$$ ) in
    *ERROR* )
        printf "error running bzr, found error msg = $(< /tmp/bzr_tmp.$$)\n"
        exit 1
        ;;
    * )
        : # everything_OK this case target
          # just to document the default action of 'nothing' ;-)
        ;;
esac
A slightly easier case target regex, based on your sample output, would be *FAILURE ) ....
The $( < file ) construct is a newer feature of the shell that you can think of as $(cat file), but it is more efficient in its use of process resources, as it doesn't need to launch a new process (cat) to dump the file.
I hope this helps.