I have a string of commands I want to run in succession, if the previous command has been successful. Here is an example:
echo "hello" && sleep 5 && cd /tmp && rm -r * && echo "goodbye"
If any part fails, I need to know which part failed. Also, I am trying to implement a spinner while the commands are running. Is there a way to do this?
In terms of detecting which bit failed, you could use:
bad=0
[[ $bad -eq 0 ]] && { echo hello || bad=1; }
[[ $bad -eq 0 ]] && { sleep 5 || bad=2; }
[[ $bad -eq 0 ]] && { cd /tmp || bad=3; }
[[ $bad -eq 0 ]] && { rm -r * || bad=4; }
[[ $bad -eq 0 ]] && { echo goodbye || bad=5; }
The variable bad would then be set based on which bit failed (if any).
Alternatively, if that's too verbose, you can still do it in a single line, albeit a longer one:
bad=1 && echo hello && bad=2 && sleep 5 && bad=3 && cd /tmp && bad=4 && rm -r * && bad=5 && echo goodbye && bad=0
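If you then want to report the failure in words, you can map the code back to a step name. A minimal sketch, assuming the bad numbering above (the steps array exists only for the message):
steps=('' 'echo hello' 'sleep 5' 'cd /tmp' 'rm -r *' 'echo goodbye')
if (( bad != 0 )); then
    echo "step $bad failed: ${steps[bad]}" >&2
fi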
In terms of a spinner, that may not work so well if you actually have output in the job you're doing, but you can do it with a simple background function:
spinner() {
    while : ; do
        echo -ne '\b|' ; sleep 1
        echo -ne '\b/' ; sleep 1
        echo -ne '\b-' ; sleep 1
        echo -ne '\b\\' ; sleep 1
    done
}
echo -n 'Starting: .'
spinner & pid=$!
sleep 13 ; kill $pid
echo
echo Done.
With regard to putting it all together based on your question and comments, I'd probably have two functions, the first to run the spinner in the background and the second to do the payload, including capturing its output so it doesn't affect the spinner.
To that end, have a look at the following demo code:
spinner() {
    echo -n 'Starting: .'
    chars='\-/|'
    while [[ -f "$1" ]] ; do
        echo -ne "\b${chars:3}" ; sleep 1
        chars="${chars:3}${chars:0:3}"
    done
    echo -e ' .. done.'
}
payload() {
    echo Running task 1
    sleep 3
    true || return 1
    echo Running task 2
    sleep 3
    false || return 2
    echo Running task 3
    sleep 3
    true || return 3
    return 0
}
touch /tmp/sentinel.$$
spinner /tmp/sentinel.$$ & pid=$!
payload >/tmp/out.$$ 2>&1 ; rc=$?
rm /tmp/sentinel.$$ ; wait
echo "Return code was $rc, output was:"
cat /tmp/out.$$
rm /tmp/out.$$
In the payload function, simply replace each of the steps with whatever you wish to actually do (such as shut down a systemd job or delete some files).
The code as it stands will fail step 2 (false) but, for your own code, this would be running actual commands and evaluating their return value.
I've also used a sentinel file to terminate the spinner function so that it's not shut down "violently".
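One optional hardening, not part of the demo above: if the script can be interrupted, an EXIT trap guarantees the sentinel and output files are cleaned up anyway (the file names follow the demo):
cleanup() { rm -f /tmp/sentinel.$$ /tmp/out.$$; }
trap cleanup EXIT
trap 'exit 130' INT TERM    # make signals go through the EXIT trap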
I'm new to bash and having an issue where exit is always called in my script. Consider this simple code:
if [[ "$x" -ge 1 && "$x" -le 4 ]]; then
    /export/home/scripts/script1.sh \
        "$x" \
        || echo "Error.. something went wrong." && exit 1
fi
How can I handle errors, considering && takes precedence over || ?
Using GNU bash, version 3.2.51(1).
Thanks
You can do it like this :
if [[ "$x" -ge 1 && "$x" -le 4 ]]; then
    /export/home/scripts/script1.sh \
        "$x" \
        || { echo "Error.. something went wrong." && exit 1 ; }
fi
Note: I used { ; } instead of () because () runs your command in a subshell, so the exit would only leave the subshell, not your script.
&& and || have the same precedence in shell; the implicit parenthesization is (a || b) && c, not a || (b && c). Mixing || and && in the same list is rarely a good idea; use an explicit if statement.
if [[ "$x" -ge 1 && "$x" -le 4 ]]; then
    if ! /export/home/scripts/script1.sh "$x"; then
        echo "Error.. something went wrong"
        exit 1
    fi
fi
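A two-line demonstration of that grouping, using only harmless commands:
true  || echo b && echo c    # prints only c: (true || echo b) succeeded, so c runs
false || echo b && echo c    # prints b, then c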
For arithmetic comparisons, prefer the arithmetic command ((...)) over [[ ... ]] for readability.
if (( x >= 1 && x <= 4 )); then
You can use braces to regroup commands without creating a new subshell :
{ true || false; } && echo true || echo false # echoes true
{ false || false; } && echo true || echo false # echoes false
Its syntax is pretty annoying: the opening brace must be followed by a space (or another character of $IFS, such as a newline or a tab), and the closing brace must be preceded by a newline or a ;, denoting the end of the last command of the block.
Parentheses don't have those difficulties, but they will execute their instructions in a subshell, which has multiple other effects:
calling exit will only exit the subshell, not the shell running your script: (exit) is a no-op
updating variables will only apply to the subshell and will have no effect on the values known to your script : a=0;( (( a++ )) ; echo $a) ; echo $a will echo 1 from the subshell, then 0 from the outer shell.
I prefer doing explicit tests on scripts using if so that I can clean up after myself if things go pear shaped. Helps keep the code looking cleaner, too.
if [[ "$x" -ge 1 && "$x" -le 4 ]]; then
    if ! /export/home/scripts/script1.sh "$x"; then
        err="Error.. something went wrong."
        test -t 0 && echo "$err" >&2                          # send errors to stderr if on a terminal
        logger -p local0.critical -t "$(hostname -s)" "$err"  # send to syslog
        # You could even add some code here to clean up after script1.sh.
        exit 1
    fi
fi
I want to invoke a function with some input checks (the input should be an integer between 1 and 21). If OK, then do the echo "invoke"; else just print a message about invalid input.
I tried with the following simplified example; it works for the invalid case, but does not invoke for the valid case. What is wrong?
function _check_num ()
{
    [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1 -a "$1" -le 21 ] || echo "input should be (1-21)" && return 1 # one-liner
}
function _call()
{
    _check_num $1 && echo "invoke only if input is 1-21" # does not invoke given valid input
}
Note: please explain the root cause of this one-liner case to me.
Do not use chains of && and || as a replacement for an if statement.
&& and || have equal precedence, so a && b || c runs c if either a or b fail; it is not equivalent to if a; then b; else c; fi.
a && b || c && d is parsed as ((a && b) || c) && d, not (a && b) || (c && d).
Use an explicit if statement to make your code readable. (Also, don't use -a inside [...]; it is considered ambiguous and obsolete.)
function _check_num ()
{
    if [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1 ] && [ "$1" -le 21 ]; then
        return 0
    else
        echo "input should be (1-21)" >&2
        return 1
    fi
}
The less readable version would be something like the following, which uses braces to properly group the commands.
function _check_num ()
{
    {
        [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1 ] && [ "$1" -le 21 ]
    } || {
        echo "input should be (1-21)" >&2 && return 1
    }
}
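A quick, safe demonstration of the a && b || c pitfall described above (the function names are placeholders):
a() { true; }
b() { false; }
c() { echo 'c ran'; }
a && b || c    # prints 'c ran' even though a succeeded, because b failed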
function _check_num ()
{
    if [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1 -a "$1" -le 21 ]; then
        return 0
    else
        echo "input should be (1-21)"
        return 1
    fi
}
I think you missed your return 0 statement.
See chepner's answer: the problem is the precedence of the && / || operators.
Normally the return status of the first part should be 0, but something could still go wrong.
You can group the last echo and return in {...} to make it work:
function _check_num () {
    [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1 -a "$1" -le 21 ] ||
        { echo "input should be (1-21)" && return 1; }
}
Without {...}, the last return 1 always runs, whether the value is valid or invalid.
If you put your "one liner" inside a script to test it:
function _check_num () {
    [[ "$1" =~ ^[0-9]+$ ]] &&
    [ "$1" -ge 1 -a "$1" -le 21 ] ||
    echo "input should be (1-21)" &&
    return 1 # one-liner
}
_check_num "$1"; echo "exit value = $?"
we get:
$ ./script.sh qwe
input should be (1-21)
exit value = 1
$ ./script.sh 112
input should be (1-21)
exit value = 1
$ ./script.sh 12
exit value = 1
As you can see, the exit value is always 1 (not a successful result).
Therefore you cannot use the exit value as the trigger for other code.
Where?
The core issue is in this structure:
[ "$1" -ge 1 -a "$1" -le 21 ] || echo "input should be (1-21)" && return 1
which could be reduced to:
[ … ] || echo && return 1
The two possible exit values (success or not) of the […] could be tested with:
f(){ true || echo && return 1; }; f; echo "$?" # prints a 1.
f(){ false || echo && return 1; }; f; echo "$?" # *also* a 1[1].
[1] After also printing a blank line from the internal echo.
Why?
Because of the "short-circuit" effect of the shell "AND and OR lists".
From the bash manual:
Lists
Of these list operators, && and || have equal precedence …
The return status is the exit status of the last command executed.
AND and OR lists are sequences of one or more pipelines separated by the && and || control operators, respectively.
What is crucial to understanding the issue is this part (for AND):
An AND list has the form
command1 && command2
command2 is executed if, and only if, command1 returns an exit status of zero.
And for OR
An OR list has the form
command1 || command2
command2 is executed if and only if command1 returns a non-zero exit status.
And the natural way to understand what the exit status is at any position:
The return status of AND and OR lists is the exit status of the last command executed in the list.
If you split any AND and OR list at any position, the exit status at such position is the exit status of the last previous command executed.
Some commands may get bypassed.
Step by step:
The first command is a test [[…]]; the exit value at this point is its exit value.
This first command is connected to the next with an OR (||).
If the exit value of the first command is failure (not 0), the next command (echo) will be executed.
The exit code of echo is always success.
The next connection is an AND (&&).
At that point the exit value is true, so the next command will be executed.
If the exit value of the first command is success (0), the next command (echo) will not be executed.
But the one after it (return 1) will, because that connection is an AND.
In both cases the return 1 is the last command executed.
The return value is always 1.
Precedence
Precedence may affect the order of the commands executed, but that does not explain why some commands are not executed.
Precedence in an "AND and OR list" is the same for && and ||.
So, operators will be considered in the left to right order they are found.
Associativity
Associativity in a shell "AND and OR list" is "left-associative".
That means that, if there are no parentheses, operations are grouped from the left.
The first command is grouped with the second.
The result of that is grouped with the third, etc.
But even when grouped as explained here, that alone does not explain why some commands are bypassed; the short-circuit rules above do.
References
Advanced Bash-Scripting Guide: Chapter 26. List Constructs
Precedence
Operator associativity
Maybe you should consider using a function like:
function _check_num () {
    declare -i num
    if [[ $1 =~ ^[+]?(0+)?([0-9]+)$ ]]; then
        num=${BASH_REMATCH[2]}
        if ! (( num >= 1 && num <= 21 )); then
            echo "input should be (1-21)"
            return 1
        fi
        return 0
    else
        echo "input should be a number"
        return 2
    fi
}
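For instance, wiring it up the way the question does (the inputs are just examples):
_check_num 5 && echo "invoke"    # prints invoke
_check_num 42; echo "rc=$?"      # prints the range message, then rc=1
_check_num abc; echo "rc=$?"     # prints the number message, then rc=2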
Is there a simple way to do something like:
if ! some_command; then
    some_commands;
fi
or
if [[ com1 && com2 ]]; then
    something;
fi
where it's the exit status of com1 and com2 that are used.
I realize that I can do things like get the exit status even with -e set by using || and check that, etc. Just wondering if there was something simpler that I am missing.
ADDENDUM: I also realize that I could do:
if some_command; then
    : # do nothing
else
    some_commands;
fi
! can be used outside of an if statement; it's the general exit-status inverter, not part of the if syntax.
! some_command && { some_commands; }
and
some_command || { some_commands; }
are equivalent. You can also use
com1 && com2 && { some_commands; }
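A minimal illustration of ! as a status inverter:
false; echo $?      # 1
! false; echo $?    # 0
! true; echo $?     # 1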
For the first form, you can write something like the following, where <command1> and <command2> are commands or scripts, with or without parameters:
$ <command1> 2>/dev/null || <command2> 2>/dev/null
For the second form, you can write something like:
$ ( <command1> && <command2> ) 2>/dev/null && <command3> 2>/dev/null
For example:
$ echo 1 2>/dev/null || echo 0 2>/dev/null
1
$ ech1o 1 2>/dev/null || echo 0 2>/dev/null
0
AND
$ ( echo 1 && echo 2 ) 2>/dev/null && echo 3
1
2
3
The following won't print anything, as echo 3 will never be executed (because echo1 is not a valid command):
$ ( echo1 1 && echo 2 ) 2>/dev/null && echo 3
What I want to do should be pretty simple. On my own I have reached the solution below; all I need is a few pointers to tell me if this is the way to do it or whether I should refactor anything in the code.
The code below should create a few parallel processes and wait for them to finish executing, then rerun the code again and again and again...
The script is triggered by a cron job once every 10 minutes; if the script is already running, do nothing, otherwise start the working process.
Any insight is highly appreciated since I am not that familiar with bash programming.
#!/bin/bash
# paths
THISPATH="$( cd "$( dirname "$0" )" && pwd )"
# make sure we move in the working directory
cd $THISPATH
# console init path
CONSOLEPATH="$( cd ../../ && pwd )/console.php"
# command line arguments
daemon=0
PHPPATH="/usr/bin/php"
help=0
# flag for binary search
LOOKEDFORPHP=0
# arguments init
while getopts d:p:h: opt; do
    case $opt in
        d)
            daemon=$OPTARG
            ;;
        p)
            PHPPATH=$OPTARG
            LOOKEDFORPHP=1
            ;;
        h)
            help=$OPTARG
            ;;
    esac
done
shift $((OPTIND - 1))
# allow only one process
processesLength=$(ps aux | grep -v "grep" | grep -c $THISPATH/send-campaigns-daemon.sh)
if [ ${processesLength:-0} -gt 2 ]; then
    # The process is already running
    exit 0
fi
if [ $help -eq 1 ]; then
    echo "---------------------------------------------------------------"
    echo "| Usage: send-campaigns-daemon.sh |"
    echo "| To force PHP CLI binary : |"
    echo "| send-campaigns-daemon.sh -p /path/to/php-cli/binary |"
    echo "---------------------------------------------------------------"
    exit 0
fi
# php executable path, find it if not provided
if [ $PHPPATH ] && [ ! -f $PHPPATH ] && [ $LOOKEDFORPHP -eq 0 ]; then
    phpVariants=( "php-cli" "php5-cli" "php5" "php" )
    LOOKEDFORPHP=1
    for i in "${phpVariants[@]}"
    do
        which $i >/dev/null 2>&1
        if [ $? -eq 0 ]; then
            PHPPATH=$(which $i)
        fi
    done
fi
if [ ! $PHPPATH ] || [ ! -f $PHPPATH ]; then
    # Did not find PHP
    exit 1
fi
# load options from app
parallelProcessesPerCampaign=3
campaignsAtOnce=10
subscribersAtOnce=300
sleepTime=30
function loadOptions {
    local COMMAND="$PHPPATH $CONSOLEPATH option get_option --name=%s --default=%d"
    parallelProcessesPerCampaign=$(printf "$COMMAND" "system.cron.send_campaigns.parallel_processes_per_campaign" 3)
    campaignsAtOnce=$(printf "$COMMAND" "system.cron.send_campaigns.campaigns_at_once" 10)
    subscribersAtOnce=$(printf "$COMMAND" "system.cron.send_campaigns.subscribers_at_once" 300)
    sleepTime=$(printf "$COMMAND" "system.cron.send_campaigns.pause" 30)
    parallelProcessesPerCampaign=$($parallelProcessesPerCampaign)
    campaignsAtOnce=$($campaignsAtOnce)
    subscribersAtOnce=$($subscribersAtOnce)
    sleepTime=$($sleepTime)
}
# define the daemon function that will stay in loop
function daemon {
    loadOptions
    local pids=()
    local k=0
    local i=0
    local COMMAND="$PHPPATH -q $CONSOLEPATH send-campaigns --campaigns_offset=%d --campaigns_limit=%d --subscribers_offset=%d --subscribers_limit=%d --parallel_process_number=%d --parallel_processes_count=%d --usleep=%d --from_daemon=1"
    while [ $i -lt $campaignsAtOnce ]
    do
        while [ $k -lt $parallelProcessesPerCampaign ]
        do
            parallelProcessNumber=$(( $k + 1 ))
            usleep=$(( $k * 10 + $i * 10 ))
            CMD=$(printf "$COMMAND" $i 1 $(( $subscribersAtOnce * $k )) $subscribersAtOnce $parallelProcessNumber $parallelProcessesPerCampaign $usleep)
            $CMD > /dev/null 2>&1 &
            pids+=($!)
            k=$(( k + 1 ))
        done
        i=$(( i + 1 ))
    done
    waitForPids pids
    sleep $sleepTime
    daemon
}
function daemonize {
    $THISPATH/send-campaigns-daemon.sh -d 1 -p $PHPPATH > /dev/null 2>&1 &
}
function waitForPids {
    stillRunning=0
    for i in "${pids[@]}"
    do
        if ps -p $i > /dev/null
        then
            stillRunning=1
            break
        fi
    done
    if [ $stillRunning -eq 1 ]; then
        sleep 0.5
        waitForPids pids
    fi
    return 0
}
if [ $daemon -eq 1 ]; then
    daemon
else
    daemonize
fi
exit 0
When starting a script, create a lock file so you know the script is running; when the script finishes, delete the lock file. If somebody kills the process while it is running, the lock file remains forever, so test how old it is and delete it if it is older than a defined value. For example:
#!/bin/bash
# 10 min
LOCK_MAX=600
LOCKFILE=/var/lock/${0##*/}.lock
if [[ -f $LOCKFILE ]] ; then
    TIMEINI=$( stat -c %X $LOCKFILE )
    SEGS=$(( $(date +%s) - $TIMEINI ))
    if [[ $SEGS -gt $LOCK_MAX ]] ; then
        # report the stale lock or something to inform you
        # Kill the old instance ???
        OLDPID=$(<$LOCKFILE)
        [[ -e /proc/$OLDPID ]] && kill -9 $OLDPID
        # Next time the program is run, there is no lock file and it will run.
        rm $LOCKFILE
    fi
    exit 65
fi
# Save PID of this instance to the lock file
echo "$$" > $LOCKFILE
### Your code go here
# Remove the lock file before script finish
[[ -e $LOCKFILE ]] && rm $LOCKFILE
exit 0
from here:
#!/bin/bash
...
echo PARALLEL_JOBS:${PARALLEL_JOBS:=1}
declare -a tests=($(.../find_what_to_run))
echo "${tests[@]}" | \
xargs -d' ' -n1 -P${PARALLEL_JOBS} -I {} bash -c ".../run_that {}" || { echo "FAILURE"; exit 1; }
echo "SUCCESS"
and here you can nick the code for portable locking with fuser
Okay, so I guess I can answer my own question with a proper answer that works after many tests.
So here is the final version, simplified, without comments/echo:
#!/bin/bash
sleep 2
DIR="$( cd "$( dirname "$0" )" && pwd )"
FILE_NAME="$( basename "$0" )"
COMMAND_FILE_PATH="$DIR/$FILE_NAME"
if [ ! -f "$COMMAND_FILE_PATH" ]; then
    exit 1
fi
cd $DIR
CONSOLE_PATH="$( cd ../../ && pwd )/console.php"
PHP_PATH="/usr/bin/php"
help=0
LOOKED_FOR_PHP=0
while getopts p:h: opt; do
    case $opt in
        p)
            PHP_PATH=$OPTARG
            LOOKED_FOR_PHP=1
            ;;
        h)
            help=$OPTARG
            ;;
    esac
done
shift $((OPTIND - 1))
if [ $help -eq 1 ]; then
    printf "%s\n" "HELP INFO"
    exit 0
fi
if [ "$PHP_PATH" ] && [ ! -f "$PHP_PATH" ] && [ "$LOOKED_FOR_PHP" -eq 0 ]; then
    php_variants=( "php-cli" "php5-cli" "php5" "php" )
    LOOKED_FOR_PHP=1
    for i in "${php_variants[@]}"
    do
        which $i >/dev/null 2>&1
        if [ $? -eq 0 ]; then
            PHP_PATH="$(which $i)"
            break
        fi
    done
fi
if [ ! "$PHP_PATH" ] || [ ! -f "$PHP_PATH" ]; then
    exit 1
fi
LOCK_BASE_PATH="$( cd ../../../common/runtime && pwd )/shell-pids"
LOCK_PATH="$LOCK_BASE_PATH/send-campaigns-daemon.pid"
function remove_lock {
    if [ -d "$LOCK_PATH" ]; then
        rmdir "$LOCK_PATH" > /dev/null 2>&1
    fi
    exit 0
}
if [ ! -d "$LOCK_BASE_PATH" ]; then
    if ! mkdir -p "$LOCK_BASE_PATH" > /dev/null 2>&1; then
        exit 1
    fi
fi
process_running=0
if mkdir "$LOCK_PATH" > /dev/null 2>&1; then
    process_running=0
else
    process_running=1
fi
if [ $process_running -eq 1 ]; then
    exit 0
fi
trap "remove_lock" 1 2 3 15
COMMAND="$PHP_PATH $CONSOLE_PATH option get_option --name=%s --default=%d"
parallel_processes_per_campaign=$(printf "$COMMAND" "system.cron.send_campaigns.parallel_processes_per_campaign" 3)
campaigns_at_once=$(printf "$COMMAND" "system.cron.send_campaigns.campaigns_at_once" 10)
subscribers_at_once=$(printf "$COMMAND" "system.cron.send_campaigns.subscribers_at_once" 300)
sleep_time=$(printf "$COMMAND" "system.cron.send_campaigns.pause" 30)
parallel_processes_per_campaign=$($parallel_processes_per_campaign)
campaigns_at_once=$($campaigns_at_once)
subscribers_at_once=$($subscribers_at_once)
sleep_time=$($sleep_time)
k=0
i=0
pp=0
COMMAND="$PHP_PATH -q $CONSOLE_PATH send-campaigns --campaigns_offset=%d --campaigns_limit=%d --subscribers_offset=%d --subscribers_limit=%d --parallel_process_number=%d --parallel_processes_count=%d --usleep=%d --from_daemon=1"
while [ $i -lt $campaigns_at_once ]
do
    while [ $k -lt $parallel_processes_per_campaign ]
    do
        parallel_process_number=$(( $k + 1 ))
        usleep=$(( $k * 10 + $i * 10 ))
        CMD=$(printf "$COMMAND" $i 1 $(( $subscribers_at_once * $k )) $subscribers_at_once $parallel_process_number $parallel_processes_per_campaign $usleep)
        $CMD > /dev/null 2>&1 &
        k=$(( k + 1 ))
        pp=$(( pp + 1 ))
    done
    i=$(( i + 1 ))
done
wait
sleep ${sleep_time:-30}
$COMMAND_FILE_PATH -p "$PHP_PATH" > /dev/null 2>&1 &
remove_lock
exit 0
Usually, it is a lock file, not a lock directory. You hold the PID in the lock file for monitoring your process; in this case your lock directory does not hold any PID information. Your script also does not do any PID file/directory maintenance when it starts, in case of an improper shutdown of your process that did not clean up the lock.
I like your first script better with this in mind. Monitoring the running PIDs directly is cleaner. The only problem is that if you start a second instance with cron, it is not aware of the PIDs connected to the first instance.
You also have processesLength -gt 2, which means 2 processes running, not 1, so you will duplicate your process threads.
It also seems that daemonize just re-invokes the script in daemon mode, which is not very useful. Also, having a variable with the same name as a function is not effective.
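For the record, a minimal sketch of PID-aware locking with stale-lock recovery (the path is illustrative, and this still has a test-and-set race; the next answer shows an atomic approach):
LOCKFILE=/tmp/send-campaigns-daemon.pid
if [ -f "$LOCKFILE" ] && kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
    exit 0    # a live instance already holds the lock
fi
echo $$ > "$LOCKFILE"    # lock missing or stale: take it over
trap 'rm -f "$LOCKFILE"' EXIT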
The correct way to make a lockfile is like this:
# Create a temporary file
echo $$ > ${LOCKFILE}.tmp$$
# Try the lock; ln without -f is atomic
if ln ${LOCKFILE}.tmp$$ ${LOCKFILE}; then
    : # we got the lock
else
    : # we didn't get the lock
fi
# Tidy up the temporary file
rm ${LOCKFILE}.tmp$$
And to release the lock:
# Unlock
rm ${LOCKFILE}
The key thing is to create the lock file to one side, using a unique name, and then try to link it to the real name. This is an atomic operation, so it should be safe.
Any solution that does "test and set" gives you a race condition to deal with. Yes, that can be sorted out, but you end up writing extra code.
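Wrapped into reusable functions, a sketch of the same idea (LOCKFILE is whatever path you choose; the helper names are made up):
LOCKFILE=/tmp/myjob.lock
acquire_lock() {
    echo $$ > "${LOCKFILE}.tmp$$"
    ln "${LOCKFILE}.tmp$$" "$LOCKFILE" 2>/dev/null    # atomic; fails if the lock already exists
    local got=$?
    rm "${LOCKFILE}.tmp$$"    # the hard link survives if we got the lock
    return $got
}
release_lock() { rm -f "$LOCKFILE"; }
if acquire_lock; then
    trap release_lock EXIT
    # ... critical section ...
else
    echo "another instance holds the lock" >&2
fi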
Is there something similar to pipefail for multiple commands, like a 'try' statement but within bash? I would like to do something like this:
echo "trying stuff"
try {
    command1
    command2
    command3
}
And at any point, if any command fails, drop out and echo out the error of that command. I don't want to have to do something like:
command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi
command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi
And so on... or anything like:
pipefail -o
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3
Because the arguments of each command I believe (correct me if I'm wrong) will interfere with each other. These two methods seem horribly long-winded and nasty to me so I'm here appealing for a more efficient method.
You can write a function that launches and tests the command for you. Assume command1 and command2 are environment variables that have been set to a command.
function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}
mytest "$command1"
mytest "$command2"
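For example, applied directly to commands rather than variables (the paths are illustrative):
mytest mkdir -p /tmp/workdir || exit 1    # stop here if the step fails
mytest cp config.sample /tmp/workdir/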
What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do
set -e # DON'T do this. See commentary below.
at the start of the script (but note warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:
#!/bin/sh
set -e # Use caution. eg, don't do this
command1
command2
command3
and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)
As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.
However, I no longer consider this to be a good practice. set -e has changed its semantics with different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: set -e; foo() { false; echo should not print; } ; foo && echo ok The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:
#!/bin/sh
command1 || exit
command2 || exit
command3 || exit
or
#!/bin/sh
command1 && command2 && command3
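As a sketch of what "behaving well" means in practice (the script name and operation are made up):
#!/bin/sh
# copy-config: reports errors to stderr and exits non-zero on failure
if ! cp "$1" "$2"; then
    echo "copy-config: failed to copy $1 to $2" >&2
    exit 1
fi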
I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.
You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.
Usage
step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm
next
step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger
next
step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit
next
Output
Installing XFS filesystem tools: [ OK ]
Configuring udev: [FAILED]
Adding rc.postsysinit hook: [ OK ]
Code
#!/bin/bash
. /etc/init.d/functions
# Use step(), try(), and next() to perform a series of commands and print
# [ OK ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
# step "Remounting / and /boot as read-write:"
# try mount -o remount,rw /
# try mount -o remount,rw /boot
# next
step() {
    echo -n "$@"
    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}
try() {
    # Check for `-b' argument to run command in the background.
    local BG=
    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && { shift; }
    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi
    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?
    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}
            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi
    return $EXIT_CODE
}
next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]] && echo_success || echo_failure
    echo
    return $STEP_OK
}
For what it's worth, a shorter way to write code to check each command for success is:
command1 || echo "command1 borked it"
command2 || echo "command2 borked it"
It's still tedious but at least it's readable.
An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:
command1 &&
command2 &&
command3
This isn't the syntax you asked for in the question, but it's a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don't have to do so manually (maybe with a -q flag to silence errors when you don't want them). If you have the ability to modify these commands, I'd edit them to yell on failure, rather than wrap them in something else that does so.
Notice also that you don't need to do:
command1
if [ $? -ne 0 ]; then
You can simply say:
if ! command1; then
And when you do need to check return codes use an arithmetic context instead of [ ... -ne:
ret=$?
# do something
if (( ret != 0 )); then
Instead of creating runner functions or using set -e, use a trap:
trap 'echo "error"; do_cleanup failed; exit' ERR
trap 'echo "received signal to stop"; do_cleanup interrupted; exit' SIGQUIT SIGTERM SIGINT
do_cleanup () { rm tempfile; echo "$1 $(date)" >> script_log; }
command1
command2
command3
The trap even has access to the line number and the command line of the command that triggered it. The variables are $BASH_LINENO and $BASH_COMMAND.
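For instance, a one-line handler using those variables (a sketch; the exact line number reported can vary with bash version and nesting):
trap 'echo "\"$BASH_COMMAND\" failed (exit $?) near line $LINENO" >&2; exit 1' ERR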
Personally I much prefer to use a lightweight approach, as seen here;
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
asuser() { sudo su - "$1" -c "${*:2}"; }
Example usage:
try apt-fast upgrade -y
try asuser vagrant "echo 'uname -a' >> ~/.profile"
I've developed an almost flawless try & catch implementation in bash that allows you to write code like:
try
    echo 'Hello'
    false
    echo 'This will not be displayed'
catch
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
You can even nest the try-catch blocks inside themselves!
try {
    echo 'Hello'
    try {
        echo 'Nested Hello'
        false
        echo 'This will not execute'
    } catch {
        echo "Nested Caught (# $__EXCEPTION_LINE__)"
    }
    false
    echo 'This will not execute too'
} catch {
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
}
The code is a part of my bash boilerplate/framework. It further extends the idea of try & catch with things like error handling with backtrace and exceptions (plus some other nice features).
Here's the code that's responsible just for try & catch:
set -o pipefail
shopt -s expand_aliases
declare -ig __oo__insideTryCatch=0
# if try-catch is nested, then set +e before so the parent handler doesn't catch us
alias try="[[ \$__oo__insideTryCatch -gt 0 ]] && set +e;
__oo__insideTryCatch+=1; ( set -e;
trap \"Exception.Capture \${LINENO}; \" ERR;"
alias catch=" ); Exception.Extract \$? || "
Exception.Capture() {
    local script="${BASH_SOURCE[1]#./}"
    if [[ ! -f /tmp/stored_exception_source ]]; then
        echo "$script" > /tmp/stored_exception_source
    fi
    if [[ ! -f /tmp/stored_exception_line ]]; then
        echo "$1" > /tmp/stored_exception_line
    fi
    return 0
}
Exception.Extract() {
    if [[ $__oo__insideTryCatch -gt 1 ]]
    then
        set -e
    fi
    __oo__insideTryCatch+=-1
    __EXCEPTION_CATCH__=( $(Exception.GetLastException) )
    local retVal=$1
    if [[ $retVal -gt 0 ]]
    then
        # BACKWARDS COMPATIBLE WAY:
        # export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-1)]}"
        # export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-2)]}"
        export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[-1]}"
        export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[-2]}"
        export __EXCEPTION__="${__EXCEPTION_CATCH__[@]:0:(${#__EXCEPTION_CATCH__[@]} - 2)}"
        return 1 # so that we may continue with a "catch"
    fi
}
Exception.GetLastException() {
    if [[ -f /tmp/stored_exception ]] && [[ -f /tmp/stored_exception_line ]] && [[ -f /tmp/stored_exception_source ]]
    then
        cat /tmp/stored_exception
        cat /tmp/stored_exception_line
        cat /tmp/stored_exception_source
    else
        echo -e " \n${BASH_LINENO[1]}\n${BASH_SOURCE[2]#./}"
    fi
    rm -f /tmp/stored_exception /tmp/stored_exception_line /tmp/stored_exception_source
    return 0
}
Feel free to use, fork and contribute - it's on GitHub.
run() {
    "$@"
    local status=$?    # capture before testing; [ ] itself resets $?
    if [ $status -ne 0 ]
    then
        echo "$* failed with exit code $status"
        return 1
    else
        return 0
    fi
}
run command1 && run command2 && run command3
Sorry that I cannot comment on the first answer, but you should use a new instance to execute the command: cmd_output=$("$@")
#!/bin/bash
function check_exit {
    cmd_output=$("$@")
    local status=$?
    echo $status
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}
function run_command() {
    exit 1
}
check_exit run_command
For fish shell users who stumble on this thread.
Let foo be a function that does not "return" (echo) a value, but it sets the exit code as usual.
To avoid checking $status after calling the function, you can do:
foo; and echo success; or echo failure
And if it's too long to fit on one line:
foo; and begin
    echo success
end; or begin
    echo failure
end
You can use @john-kugelman's awesome solution found above on non-RedHat systems by commenting out this line in his code:
. /etc/init.d/functions
Then, paste the below code at the end. Full disclosure: this is just a direct copy & paste of the relevant bits of the above-mentioned file, taken from CentOS 7.
Tested on MacOS and Ubuntu 18.04.
BOOTUP=color
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \\033[0;39m"
echo_success() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_SUCCESS
    echo -n $" OK "
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 0
}
echo_failure() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_FAILURE
    echo -n $"FAILED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}
echo_passed() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"PASSED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}
echo_warning() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"WARNING"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}
When I use ssh I need to distinguish between problems caused by connection issues and the error codes of the remote command in errexit (set -e) mode. I use the following function:
# prepare environment on calling site:
rssh="ssh -o ConnectionTimeout=5 -l root $remote_ip"
function exit255 {
    local flags=$-
    set +e
    "$@"
    local status=$?
    set -$flags
    if [[ $status == 255 ]]
    then
        exit 255
    else
        return $status
    fi
}
export -f exit255
# callee:
set -e
set -o pipefail
[[ $rssh ]]
[[ $remote_ip ]]
[[ $( type -t exit255 ) == "function" ]]
rjournaldir="/var/log/journal"
if exit255 $rssh "[[ ! -d '$rjournaldir/' ]]"
then
    $rssh "mkdir '$rjournaldir/'"
fi
rconf="/etc/systemd/journald.conf"
if [[ $( $rssh "grep '#Storage=auto' '$rconf'" ) ]]
then
    $rssh "sed -i 's/#Storage=auto/Storage=persistent/' '$rconf'"
fi
$rssh systemctl reenable systemd-journald.service
$rssh systemctl is-enabled systemd-journald.service
$rssh systemctl restart systemd-journald.service
sleep 1
$rssh systemctl status systemd-journald.service
$rssh systemctl is-active systemd-journald.service
Checking status in a functional manner:
assert_exit_status() {
    lambda() {
        local val_fd=$(echo $@ | tr -d ' ' | cut -d':' -f2)
        local arg=$1
        shift
        shift
        local cmd=$(echo $@ | xargs -E ':')
        local val=$(cat $val_fd)
        eval $arg=$val
        eval $cmd
    }
    local lambda=$1
    shift
    eval $@
    local ret=$?
    $lambda : <(echo $ret)
}
Usage:
assert_exit_status 'lambda status -> [[ $status -ne 0 ]] && echo Status is $status.' lls
Output
Status is 127
suppose
alias command1='grep a <<<abc'
alias command2='grep x <<<abc'
alias command3='grep c <<<abc'
either
{ command1 1>/dev/null || { echo "cmd1 fail"; /bin/false; } } && echo "cmd1 succeed" &&
{ command2 1>/dev/null || { echo "cmd2 fail"; /bin/false; } } && echo "cmd2 succeed" &&
{ command3 1>/dev/null || { echo "cmd3 fail"; /bin/false; } } && echo "cmd3 succeed"
or
{ { command1 1>/dev/null && echo "cmd1 succeed"; } || { echo "cmd1 fail"; /bin/false; } } &&
{ { command2 1>/dev/null && echo "cmd2 succeed"; } || { echo "cmd2 fail"; /bin/false; } } &&
{ { command3 1>/dev/null && echo "cmd3 succeed"; } || { echo "cmd3 fail"; /bin/false; } }
yields
cmd1 succeed
cmd2 fail
Tedious it is. But the readability isn't bad.