How can I have Vagrant's provision fail if a subcommand fails?

If one of the lines in config.vm.provision "shell" fails, Vagrant keeps going forward with the rest of the script. How can I make any error cause the provision process to fail?

I don't think Vagrant provides an option for this on the shell provisioner, but it can be managed within your script itself by using the set builtin:
#!/bin/bash
set -e
# ... rest of your commands - the first command that fails will break the script and exit with its status ...
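A slightly stricter variant that is often used (this is general bash convention, nothing Vagrant-specific) also treats unset variables and failures inside pipelines as fatal:
#!/bin/bash
# -e: exit on any failing command, -u: treat unset variables as errors,
# -o pipefail: a pipeline fails if any command in it fails
set -euo pipefail
curl -fsSL https://example.com/install.sh | bash   # hypothetical step; a failing curl now also aborts the provision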

I want exactly this. Looks like it's not possible so I've settled for a panicky banner instead. The child scripts contain set -e to exit if anything fails.
Vagrantfile:
config.vm.provision "shell", path: "setup-files/parent.sh"
parent.sh:
#!/bin/bash
/home/vagrant/setup-files/child1.sh &&
/home/vagrant/setup-files/child2.sh
if [ $? -ne 0 ]; then # if the last exit code is non-zero
  echo !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  echo An error occurred running build scripts
  echo !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
fi
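If you want the provision run itself to be reported as failed (so vagrant up / vagrant provision exits with an error), the parent script has to exit with a non-zero status. A minimal sketch of that, reusing the same hypothetical child script paths:
#!/bin/bash
set -e                           # abort the parent as soon as a child script fails
/home/vagrant/setup-files/child1.sh
/home/vagrant/setup-files/child2.sh
# If either child exits non-zero, the parent exits non-zero too,
# and Vagrant reports the shell provisioner as failed.
Vagrant treats a non-zero exit status from the provisioning script as a provisioning failure, so no extra configuration is needed for this.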

Related

How to make exception for a bash script with set -ex

I have a bash script that has set -ex, which means the running script will exit once any command in it hits an error.
My use case is that there's a subcommand in this script whose error I want to catch, instead of letting it shut down the running script.
E.g., here's myscript.sh
#!/bin/bash
set -ex
# pseudo code here
error=$( some command )
if [[ -n $error ]] ; then
#do something
fi
Any idea how I can achieve this?
You can override the exit status of a single command
this_will_fail || true
Or for an entire block of code
set +e
this_will_fail
set -e
Beware, however, that if you later decide you don't want set -e in the script anymore, the trailing set -e in this block will still turn it back on.
If you want to handle a particular command's error status yourself, you can use it as the condition in an if statement:
if ! some command; then
  echo 'An error occurred!' >&2
  # handle error here
fi
Since the command is part of a condition, it won't exit on error. Note that other than the ! (which negates it, so the then clause will run if the command fails rather than if it succeeds), you just include the command directly in the if statement (no brackets, parentheses, etc.).
BTW, in your pseudocode example, you seem to be treating it as an error if the command produces any output; usually that's not what you want, and I'm assuming you actually want to test the exit status to detect errors.
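As a minimal sketch of that idea (some_command and the log path are placeholders), capture the output somewhere and branch on the exit status instead:
#!/bin/bash
set -ex

if ! some_command > /tmp/some_command.log 2>&1; then
  # because the command is the if condition, its failure does not trigger set -e
  echo "some_command failed; see /tmp/some_command.log" >&2
  # do something
fi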

Gitlab CI ignores script exit code other than 1

I'm trying to set up a GitLab pipeline, so that certain exit_codes are okay for a script I'm executing.
I have tried both shell and a ruby script, but both seem to have the same behaviour.
test_job:
  stage: build
  image: ruby:3.0
  script:
    - chmod +x ci/wrap.sh
    - ./ci/wrap.sh
  allow_failure:
    exit_codes:
      - 64
As you can see, I am just executing the script and nothing more; my expectation would be that the exit status of the last script executed is used as the exit status of the job.
In the script I'm only calling exit 64, which should be an "allowed failure" in that case; the pipeline log, however, says that the job failed because of exit code 1.
How do I get GitLab to accept the exit code of this (or a ruby) script as the job exit code?
I found a way to fix this problem. Apparently GitLab Runner uses the -e flag, which means that any non-zero exit code cancels the job. This can be changed with set +e, but then you still need to capture the actual exit code for the job.
Using $? on a separate line of the configuration does not work, because GitLab runs echo calls in between the script lines.
So the exit code needs to be captured directly, example:
script:
  - set +e
  - ruby "ci/example.rb" || EXIT_CODE=$?
  - exit $EXIT_CODE
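Putting that workaround together with the allow_failure rule from the question, the whole job could look roughly like this (ci/example.rb stands in for the real script; ${EXIT_CODE:-0} keeps a successful run exiting with 0):
test_job:
  stage: build
  image: ruby:3.0
  script:
    - set +e
    - ruby "ci/example.rb" || EXIT_CODE=$?
    - exit ${EXIT_CODE:-0}
  allow_failure:
    exit_codes:
      - 64
With this, an exit code of 64 from the script reaches GitLab unchanged and is treated as an allowed failure.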
Here's my trick for turning off early failure and checking the result code later:
script:
  - set +e +o pipefail
  - python missing_module.py 2>&1 | grep found; result=$?
  - set -e -o pipefail
  - "[ $result == 0 ]"
This turns off early exit and runs a command that we consider to have an acceptable failure if "found" is in the error text. It then turns early exit back on and tests whether the exit code we saved was good or not.

Loop through docker output until I find a String in bash

I am quite new to bash (barely any experience at all) and I need some help with a bash script.
I am using docker-compose to create multiple containers - for this example let's say 2 containers. The 2nd container will execute a bash command, but before that, I need to check that the 1st container is operational and fully configured. Instead of using a sleep command I want to create a bash script that will be located in the 2nd container and once executed do the following:
Execute a command and log the console output in a file
Read that file and check if a String is present. The command that I execute in the previous step takes a few (5-10) seconds to complete, and I need to read the file after it has finished executing. I suppose I can add sleep to make sure the command has finished, or is there a better way to do this?
If the string is not present I want to execute the same command again until I find the String I am looking for
Once I find the string I am looking for I want to exit the loop and execute a different command
I found out how to do this in Java, but I need to do this in a bash script.
The Docker containers use Alpine as the operating system, but I updated the Dockerfile to install bash.
I tried this solution, but it does not work.
#!/bin/bash
[command to be executed] > allout.txt 2>&1
until
  tail -n 0 -F /path/to/file | \
  while read LINE
  do
    if echo "$LINE" | grep -q $string
    then
      echo -e "$string found in the console output"
    fi
  done
do
  echo "String is not present. Executing command again"
  sleep 5
  [command to be executed] > allout.txt 2>&1
done
echo -e "String is found"
In your docker-compose file, make use of the depends_on option.
depends_on takes care of the startup and shutdown order of your containers.
It does not, however, check whether a container is actually ready before starting the next one. To handle that, use one of the approaches below.
As described in this link,
You can use tools such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
OR
Alternatively, write your own wrapper script to perform a more application-specific health check.
If you don't want to use the above tools, another option is to combine a HEALTHCHECK on the first container with a service_healthy condition in the depends_on of the container that needs it.
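A rough sketch of that combination in a compose file (the service names, images, and the health-check command are all made up for illustration):
services:
  app:
    image: my-app-image              # hypothetical first container
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]   # hypothetical readiness check
      interval: 5s
      timeout: 3s
      retries: 10
  worker:
    image: my-worker-image           # hypothetical second container
    depends_on:
      app:
        condition: service_healthy   # wait until the health check passes
Note that curl has to exist inside the app image for this particular health-check command to work.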
Just:
while :; do
  # 1. Execute a command and log the console output in a file
  command > output.log
  # TODO: handle errors, etc.
  # 2. Read that file and check if a String is present.
  if grep -q "searched_string" output.log; then
    # Once I find the string I am looking for I want to exit the loop
    break
  fi
  # 3. If the string is not present, execute the same command again
  # add e.g. sleep 0.1 so the loop delays a bit and doesn't use 100% CPU
done
# ...and execute a different command
different_command
You can timeout a command with timeout.
Notes:
colon (:) is a builtin that returns a zero exit status, much like true; I prefer while : over while true, but they mean the same thing.
The code presented should work in any POSIX shell.
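Since the loop above could in principle wait forever, one way to bound it (a sketch; the 60-second limit and 1-second poll interval are arbitrary, and command/searched_string are the same placeholders as above) is to wrap the whole wait in timeout:
# give up after 60 seconds; timeout exits with status 124 if the limit is hit
timeout 60 sh -c '
  until command > output.log 2>&1 && grep -q "searched_string" output.log; do
    sleep 1
  done
'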

executing command on vagrant-mounted

I'm trying to run a command after a share is mounted with Vagrant, but I've never written an Upstart script before. What I have so far is:
start on vagrant-mounted
script
if [ "$MOUNTPOINT" = "/vagrant" ]
then
env CMD="echo $MOUNTPOINT mounted at $(date)"
elif [ "$MOUNTPOINT" = "/srv/website" ]
then
env CMD ="echo execute command"
fi
end script
exec "$CMD >> /var/log/vup.log"
Of course that's not the actual script I want to run (I haven't gotten that far yet), but the structure is what I need. My starting point has been this article. I had a different version that was simply:
echo $MOUNTPOINT mounted at `date`>> /var/log/vup.log
that version did write to the log.
Trying to use init-checkconf fails with "failed to ask Upstart to check conf file".
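One thing worth noting: env is a static Upstart configuration stanza, so it cannot be assigned conditionally inside a script block, and a job cannot have both a script block and an exec line. A rough sketch that keeps the intended structure but does the logging directly inside the script block (the event name and paths are taken from the question; the real command is still a placeholder):
start on vagrant-mounted
script
  if [ "$MOUNTPOINT" = "/vagrant" ]; then
    echo "$MOUNTPOINT mounted at $(date)" >> /var/log/vup.log
  elif [ "$MOUNTPOINT" = "/srv/website" ]; then
    # placeholder: run the real command here and append its output to the log
    echo "execute command" >> /var/log/vup.log
  fi
end script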

exit not working as expected in Bash

I use SSH Secure Shell client to connect to a server and run my scripts.
I want to stop a script on certain conditions, but when I use exit, not only does the script stop, the whole client disconnects from the server! Here is the code:
if [[ `echo $#` -eq 0 ]]; then
  echo "Missing argument- must to get a friend list";
  exit
fi
for user in $*; do
  if [[ !(-f `echo ${user}.user`) ]]; then
    echo "The user name ${user} doesn't exist.";
    exit
  fi
done
Why is this happening?
You use source to run the script, which runs it in the current shell. That means that exit terminates the current shell, and with it the SSH session.
Replace source with bash and it should work, or better, put
#!/bin/bash
at the top of the file and make it executable.
exit returns from the current shell - If you've started a script by running it directly, this will exit the shell that the script is running in.
return returns from a function or sourced file (TY Dennis Williamson) - Same thing, but it doesn't terminate your current shell.
break returns from a loop - Similar to return, but can be used anywhere within a loop to stop processing more items. This is probably what you want.
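If the script has to behave sensibly both when sourced and when executed, one common pattern (a sketch, not from the answers above; the message is adapted from the question) is to try return first and fall back to exit:
# stops a sourced script without killing the interactive shell / SSH session,
# and still exits with an error status when run as ./script.sh
if [[ $# -eq 0 ]]; then
  echo "Missing argument - must get a friend list" >&2
  return 1 2>/dev/null || exit 1
fi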
If you are running it from the current shell, exit will obviously exit the shell and disconnect you. Try running it in a new shell (put ./ before the script so it runs as its own process) or else use return instead of exit.
