Shell script continue after failure - bash

How do I write a shell script that continues execution even if a specific command fails, but still reports the failure as an error later? I tried this:
#!/bin/bash
./node_modules/.bin/wdio wdio.conf.js --spec ./test/specs/login.test.js
rc=$?
echo "print here"
chown -R gitlab-runner /gpc_testes/
chown -R gitlab-runner /gpc_fontes/
exit $rc
However the script stops when the node modules command fails.

You could use
command_that_would_fail || command_failed=1
# More code here
# ...
if [ "${command_failed:-0}" -eq 1 ]; then
    echo "command_that_would_fail failed"
fi

Suppose the name of the script is test.sh.
Execute it with the following command:
./test.sh 2>>error.log
Errors from failing commands won't appear on the terminal but will be appended to error.log, which can be reviewed afterwards.
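Putting the first answer's pattern into the original script's shape, a minimal sketch (false stands in for the wdio call, which is not available here):

```shell
#!/bin/bash
# 'false' is a stand-in for the failing wdio command from the question.
false || rc=$?                 # record the failure, but keep going
echo "print here"              # subsequent commands still run
echo "saved status: ${rc:-0}"
```

The real script can then end with exit "${rc:-0}" so the CI job still sees the test runner's failure.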

Related

how to handle exit code for multiple sequential unix command in a shell script

I need a shell script that runs multiple commands, e.g.:
command 1 - mv
command 2 - cp
command 3 - sed
command 4 - echo, append, etc.
then:
rc=$?
if rc = 0, report success
else exit 30
But even though the mv command failed, the script returns 0 and shows the success message.
Do I need to maintain rc for every command, or is there a better way to handle the return code of each command to make sure every command runs successfully?
If you have to deal with commands which may fail you can do the following:
rc=0
if mv ...; then
    echo mv success
else
    echo mv failure
    rc=1
fi
exit "$rc"
To have your script stop at the first failed command, add the line set -e below your shebang (#!/bin/sh).
Keep in mind that the error handling above is compatible with set -e: because mv is tested inside an if, its failure will not stop the script.
set -eu is considered best practice for shell scripts; -u makes your script exit if an unset variable is used.
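A sketch of carrying one rc across several commands, as the answer suggests (the file names here are made up for the demo):

```shell
#!/bin/bash
rc=0
src=$(mktemp)                      # demo file so mv has something to move
if mv "$src" "$src.moved"; then
    echo "mv success"
else
    echo "mv failure"
    rc=1
fi
if cp "$src.moved" "$src.copy"; then
    echo "cp success"
else
    echo "cp failure"
    rc=1
fi
echo "overall rc=$rc"
```

Every command updates the same rc, so a single exit "$rc" at the end reports failure if any step failed.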

trap exec return code in shell script

I have to run a command using exec in a shell script, and I need to trap the exit code in case of error and run another command, e.g.:
#!/bin/sh
set +e
exec command_that_will_fail
if [ $? -eq 1 ]; then
echo "command failed, running another command"
fi
I understand that exec replaces the current shell and carries on; my problem is that I need to run another command if the exec is not successful.
Your code works if there's some immediate error when it tries to run the process:
$ echo 1
1
$ echo $?
0
$ exec asd123
-bash: exec: asd123: not found
$ echo $?
127
If the executable file was found and started, then it will not return, because the new process image replaces the whole script, and control never comes back to bash.
For example this never returns:
$ exec grep asd /dev/null
(the exit code of grep is 1, but the parent shell has been replaced, so nothing is left to check it)
If you want to get an exit code from the process in this case, you have to start it as a subprocess, i.e. not using exec (just command_that_will_fail). In this case the bash process will act as a supervisor that waits until the subprocess finishes and can inspect the exit code.
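A sketch of that subprocess approach, with grep standing in for the failing command:

```shell
#!/bin/sh
# Run the command as an ordinary subprocess (no exec), so this shell
# survives and can inspect the exit code.
grep asd /dev/null
status=$?
if [ "$status" -ne 0 ]; then
    echo "command failed with status $status, running another command"
fi
```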

Catching all bad signals for called commands in Bash script

We are creating a bash script for a build server. We want to ensure that when we execute a command inside the script, it returns with an exit status of 0. If it does not, we want execution to stop. Our solution so far is to do:
#some command
if [ $? -ne 0 ] ; then
#handle error
fi
after every command that could cause this problem. This makes the code quite long and doesn't seem very elegant. We could use a bash function, perhaps, although working with $? can be a bit tricky, and we would still have to call the function after every command. Is there a better way? I've looked at the trap command, but it seems to only handle signals sent to the bash script I am writing, not failures of the commands I call.
The robust, canonical way of doing this is:
#!/bin/bash
die() {
    local ret=$?
    echo >&2 "$*"
    exit "$ret"
}
./configure || die "Project can't be configured"
make || die "Project doesn't build"
make install || die "Installation failed"
The fragile, convenient way of doing this is set -e:
#!/bin/bash
set -e # Script will exit in many (but not all) cases if a command fails
./configure
make
make install
or equivalently (with a custom error message):
#!/bin/bash
# Will be called for many (but not all) commands that fail
trap 'ret=$?; echo >&2 "Failure on line $LINENO, exiting."; exit $ret' ERR
./configure
make
make install
For the latter two, the script will not exit for any command that is part of a conditional statement or &&/||, so while:
backup() {
    set -e
    scp backup.tar.gz user@host:/backup
    rm backup.tar.gz
}
backup
will correctly avoid executing rm if the transfer fails. However, later inserting a feature like this:
if backup
then
mail -s "Backup completed successfully" user@example.com
fi
will disable set -e inside backup (because the function now runs in a conditional context), so the function stops exiting on failure and rm runs even when the transfer fails, accidentally deleting the backup.
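One way to keep the function safe no matter how it is called is to check the critical command explicitly instead of relying on set -e. A sketch, with cp to an unwritable path standing in for the failing scp transfer:

```shell
#!/bin/bash
backup() {
    # Explicit check: rm only runs if the transfer succeeded,
    # even when the caller wraps this function in an if.
    cp backup.tar.gz /nonexistent/dir/ 2>/dev/null || return 1
    rm backup.tar.gz
}
cd "$(mktemp -d)"          # scratch directory for the demo
touch backup.tar.gz
if backup; then
    echo "backup ok"
else
    echo "transfer failed, archive kept"
fi
```

Because the failure is handled with || return, the archive survives a failed transfer regardless of set -e.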

How to check the current shell and change it to bash via script?

#!/bin/bash
if [ ! -f readexportfile ]; then
echo "readexportfile does not exist"
exit 0
fi
The above is part of my script. When the current shell is /bin/csh my script fails with the following error:
If: Expression Syntax
Then: Command not found
If I run bash and then run my script, it runs fine(as expected).
So the question is: is there any way my script can change the current shell and then interpret the rest of the code?
PS: If I put bash in my script, it changes the current shell, but the rest of the code in the script doesn't get executed.
The other replies are correct, however, to answer your question, this should do the trick:
[[ $(basename "$SHELL") = 'bash' ]] || exec /bin/bash
The exec builtin replaces the current shell with the given command (in this case, /bin/bash).
You can use the shebang (#!) to overcome your issue.
In your code you are already using a shebang, but make sure it is the very first line of the script.
$ cat test.sh
#!/bin/bash
if [ ! -f readexportfile ]; then
    echo "readexportfile does not exist"
    exit 0
else
    echo "readexportfile exists"
fi
$ ./test.sh
readexportfile does not exist
$ echo $SHELL
/bin/tcsh
In the above code, even though my login shell is tcsh, the script is executed by bash because of the shebang. If there is no shebang, the script is interpreted by the shell you are logged into.
In your case you can also check the location of the bash interpreter using:
$ which bash
or
$ grep bash /etc/shells
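A more robust variant of the re-exec trick checks BASH_VERSION, which is set only inside bash, rather than $SHELL, which reflects your login shell rather than the current interpreter. A sketch:

```shell
#!/bin/sh
# Re-exec this script under bash if another shell is interpreting it.
if [ -z "$BASH_VERSION" ]; then
    exec /bin/bash "$0" "$@"
fi
echo "running under bash"
```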

Getting exit code of last shell command in another script

I am trying to beef up my notify script. The way the script works is that I put it behind a long running shell command and then all sorts of notifications get invoked after the long running script finished.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long running script. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware of the fact that if I change it to work more like time, e.g. my_notify longrunning_command... my problem would be solved, but I actually like that I can tack it at the end of a command and I fear complications of this second solution.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function in order to accomplish that. For a simple script like that it should be pretty easy to have it working in both zsh and bash. Just place the following in a file:
my_notify() {
echo "exit code: $?"
echo "PPID: $PPID"
}
Then source that file from your shell startup files. Although since that would be run from within your interactive shell, you may want to use $$ rather than $PPID.
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
One method to implement this is to use a here-document and a master script that generates your my_notify script:
#!/bin/bash
if [ -f my_notify ] ; then
    rm -f my_notify
fi
if [ -f my_temp ] ; then
    rm -f my_temp
fi
retval=$(ls non_existent_file 2> /dev/null; echo $?)
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"
# EOF is unquoted so $retval and $ppid expand now, at generation time.
cat >> my_notify << EOF
#!/bin/bash
echo "exit code: $retval"
echo "PPID = $ppid"
EOF
bash my_notify
You can refine this script for your purpose.
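The environment-variable approach mentioned in the earlier answer can be sketched like this: the status is copied into a variable that is exported just for the child process (LAST_STATUS is a made-up name):

```shell
#!/bin/sh
false                                   # stand-in for the long-running command
LAST_STATUS=$? sh -c 'echo "exit code: $LAST_STATUS"'
```

The prefix assignment expands $? before the child starts, so the child sees the parent's last exit status even though $? itself is reset in the new process.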
