Odd behavior with simple bash shell script exit - bash

I'm starting to play around with Unix shell scripting, so maybe it's a silly question. Apologies if that's the case.
I was trying to handle exit codes to properly address adverse situations in my code, and for this I created a code snippet to understand Unix exit behavior. Here it is:
#!/usr/bin/bash
RES=1
if [ $RES -eq 0 ]
then
    echo "Finishing with success!"
    exit 0
else
    echo "Finishing with error!"
    exit 1
fi
I understood that once the script was called (and terminated) I'd go back to the bash prompt. However, it seems the exit instruction is also leaving bash itself. Is that normal? Maybe it's something related to my development environment?
Here are the messages...
bash-3.00$ . errtest.sh
Finishing with error!
$ echo $?
1
$ bash
bash-3.00$ which bash
/usr/bin/bash
For reference, I've added the return code and the bash location. Hope it helps.
Thanks in advance!

This is because you're sourcing the script into your current shell (by using the . command), so exit terminates that shell. Try executing the script instead, with either:
bash ./errtest.sh
or by giving the necessary permissions to the script file and executing it like this:
chmod u+x ./errtest.sh
./errtest.sh
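To see the difference, here is a minimal sketch using the same errtest.sh from the question:
# Executed: the script runs in a child process, so exit only ends that child.
bash ./errtest.sh ; echo "back in the parent shell, exit code: $?"
# Sourced: the script runs in the current shell, so exit ends your session.
# . ./errtest.sh    # this would close the current shell with code 1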

Related

How to detect if a script in Julia got "Killed"?

So I'm running a Julia (v 0.6.3) script in a bash script called ./run.sh like so:
./julia/bin/julia scripts/my_script.jl
Now the script prematurely terminates. I'm sure of this because it doesn't finish outputting all the data it's supposed to. When I run a parsing script (written in Python) afterwards, it fails because of missing data.
I think it terminates due to insufficient RAM allocation (I'm running the script in a Docker container). If I bump up the allocated RAM, the script works fine.
To catch this error in my Julia script I did the following:
try
    main()
catch e
    println(e)
    exit(1)
end
exit(0)
On top of that, I updated the bash script to check if the Julia script failed:
./julia/bin/julia scripts/my_script.jl
echo "Julia script returned: $?"
if [ $? -ne 0 ]; then
    echo "Julia script failed"
    exit 1
fi
However, no exception is printed from the Julia script. Furthermore, the return code is 0, so the bash script doesn't detect any errors either.
If I just run the script directly from the terminal, at the very end of the output there's the word Killed. Immediately after, I ran echo $? and got 137, which is definitely not a successful return status. So it seems Julia and bash both know the script was terminated, but not if I run the Julia script from within a bash script...?
Another weird thing is when I run the Julia script from the bash script, the word Killed doesn't appear at all!
How can I reliably detect whether a script was prematurely terminated? Is there a way to get the reason it was killed as well (e.g. not enough RAM, stack overflow, etc)?
Your code if [ $? -ne 0 ]; then checks whether the echo before it completed successfully, because $? always holds the exit status of the most recently executed command (see Cyrus's comment).
Sometimes it makes sense to put the return value in a variable:
./julia/bin/julia scripts/my_script.jl
retval=$?
if [ $retval -ne 0 ]; then
    echo "Julia script failed with $retval"
    exit $retval
fi
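As for the 137: by convention, an exit status above 128 means the process was killed by a signal (status minus 128), and 137 = 128 + 9, i.e. SIGKILL, which is exactly what the kernel OOM killer sends when a container runs out of memory. A minimal sketch for detecting this, building on the retval variable above:
./julia/bin/julia scripts/my_script.jl
retval=$?
if [ $retval -gt 128 ]; then
    # status 128+N means "killed by signal N"; kill -l translates N to a name
    echo "Julia script killed by signal $((retval - 128)) ($(kill -l $((retval - 128))))"
    exit $retval
elif [ $retval -ne 0 ]; then
    echo "Julia script failed with $retval"
    exit $retval
fi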
To see whether the script is still running, ps reports a snapshot of the currently running processes:
ps -ef | grep 'julia'

Prevent other terminals from running a script while another terminal is using it

I would like to prevent other terminals from running a certain script while another terminal is running it, but I'm not quite sure how I would go about doing this in bash. Any help or tips would be greatly appreciated!
For example:
When the script is being run in one terminal, all other terminals should be unable to run it and should display the message "UNDER MAINTENANCE" instead.
You can use the concept of a "lockfile." For example:
if [ -f ~/.mylock ]; then
    echo "UNDER MAINTENANCE"
    exit 1
fi
touch ~/.mylock
# ... the rest of your code
rm ~/.mylock
To get fancier/safer, you can trap the EXIT signal so the lockfile is removed automatically, however the script ends:
trap 'rm ~/.mylock' EXIT
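Putting it together, a minimal sketch (the ~/.mylock path is just an example):
#!/bin/bash
if [ -f ~/.mylock ]; then
    echo "UNDER MAINTENANCE"
    exit 1
fi
touch ~/.mylock
trap 'rm -f ~/.mylock' EXIT   # removes the lock on any exit, including Ctrl-C
# ... the rest of your code
Note that a plain file check like this still has a small race window between the test and the touch; the flock approaches below close that gap.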
Use flock and put this at the top of your script. Note that flock needs either a command to run or an open file descriptor, so open the lockfile on a descriptor first:
exec 200> /path/to/lockfile
if ! flock -xn 200; then
    echo "script is already running."
    echo "Aborting."
    exit 1
fi
Note: the lockfile could be the script itself, opened read-only instead (exec 200< "$0"). Doing so avoids creating an extra file.
To avoid race conditions, you could use flock(1) along with a lock file. There is one flock(1) implementation which claims to work on Linux, BSD, and OS X; I haven't seen one explicitly for other Unixes.
There is some interesting discussion here.
UPDATE:
I found a really clever way from Randal L. Schwartz here. I really like this one. It relies on having flock(1) and bash, and it uses the script itself as its own lockfile. Check this out:
/usr/local/bin/onlyOne is a script to obtain the lock
#!/bin/bash
# Open the script itself read-only on file descriptor 200...
exec 200< "$0"
# ...and try to take a non-blocking lock on that descriptor.
if ! flock -n 200; then
    echo "there can be only one"
    exit 1
fi
Then myscript uses onlyOne to obtain the lock (or not):
#!/bin/bash
source /usr/local/bin/onlyOne
# The real work goes here.
echo "${BASHPID} working"
sleep 120
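Running two copies concurrently would then look roughly like this (a sketch; the PID is illustrative):
$ ./myscript &
[1] 12345
12345 working
$ ./myscript
there can be only one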

What is the meaning of 'exit 0' in a shell script?

I recently deployed a script containing:
exit_job(){
    echo "$1"
    exit 0
}
I searched the web and found that this is the correct exit code. Can someone explain exit 0?
0 is the success code for a shell script; any non-zero value is treated as an error. Thus if you exit with something other than 0, the script will be returning an error code, and if that isn't handled it can break whatever calls your script.
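To illustrate, a minimal sketch of how a caller reacts to the exit code (deploy.sh is a hypothetical script name):
if ./deploy.sh; then
    echo "deploy.sh succeeded (exit 0)"
else
    echo "deploy.sh failed with code $?"
fi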

How to prevent direct bash script execution and allow only usage from other script?

I have one script with common functions that is included in my other scripts with:
. ~/bin/fns
Since my ~/bin path is on the PATH, is there a way to prevent users from executing fns from the command line (by returning from the script with a message), but to allow other scripts to include this file?
(Bash >= 4)
Just remove the executable bit with chmod -x ~/bin/fns. It will still work when sourced, but you can't call it (accidentally) by its name anymore.
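Roughly what you'd see afterwards (a sketch; prompt and wording are illustrative):
$ chmod -x ~/bin/fns
$ fns
bash: fns: Permission denied
$ . ~/bin/fns    # sourcing still works, since it only needs read permission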
Some scripts at my workplace use a special shebang
#!/bin/echo Run:.
which returns
Run:. <pathname>
when you use it as a command.
Add the following at the beginning of the script you want to be only allowed to be sourced:
if [ "${0##*/}" == "${BASH_SOURCE[0]##*/}" ]; then
    echo "WARNING"
    echo "This script is not meant to be executed directly!"
    echo "Use this script only by sourcing it."
    echo
    exit 1
fi
This checks whether the basename of the invoked command ($0) matches the basename of the current source file (BASH_SOURCE[0]). They match only when the file is executed directly, so in that case we print a message and exit with status 1.
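With that guard at the top of ~/bin/fns, the behavior is roughly this (a sketch):
$ fns
WARNING
This script is not meant to be executed directly!
Use this script only by sourcing it.

$ . ~/bin/fns    # no warning; the functions are now available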
Alternatively (Bash only): return is valid only inside a function or a sourced file, so attempting it in a subshell tells you how the script was invoked:
if (return 0 2>/dev/null) ; then
    :    # sourced; carry on
else
    echo "Error: script was executed."
    exit 1
fi

exit not working as expected in Bash

I use SSH Secure Shell client to connect to a server and run my scripts.
I want to stop a script under some conditions, but when I use exit, not only does the script stop, the whole client disconnects from the server! Here is the code:
if [[ `echo $#` -eq 0 ]]; then
    echo "Missing argument- must to get a friend list";
    exit
fi
for user in $*; do
    if [[ !(-f `echo ${user}.user`) ]]; then
        echo "The user name ${user} doesn't exist.";
        exit
    fi
done
Why is this happening?
You used source to run the script, which runs it in the current shell. That means exit terminates the current shell, and with it the SSH session.
Replace source with bash and it should work; or, better, put
#!/bin/bash
at the top of the file and make it executable.
exit exits the current shell - if you've started a script by running it directly, this will exit the shell the script is running in.
return returns from a function or sourced file (TY Dennis Williamson) - same thing, but it doesn't terminate your current shell (see the sketch after this list).
break breaks out of a loop - similar to return, but can be used anywhere within a loop to stop processing more items. This is probably what you want.
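If the script might be sourced, here is a minimal sketch of the argument check using return, falling back to exit when the file is executed directly (the 2>/dev/null hides the error that return raises outside a sourced context):
if [[ $# -eq 0 ]]; then
    echo "Missing argument - must get a friend list"
    return 1 2>/dev/null || exit 1
fi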
If you are running it in the current shell (with a . before the script), exit will obviously exit that shell and disconnect you. Try running it in a new shell instead (put ./ before the script name), or else use return instead of exit.
