How to detect if a script in Julia got "Killed"? - bash

So I'm running a Julia (v 0.6.3) script in a bash script called ./run.sh like so:
./julia/bin/julia scripts/my_script.jl
Now the script prematurely terminates. I'm sure of this because it doesn't finish outputting all the data it's supposed to. When I run a parsing script (written in Python) afterwards, it fails because of missing data.
I think it terminates due to insufficient RAM allocation (I'm running the script in a Docker container). If I bump up the allocated RAM, the script works fine.
To catch this error in my Julia script I did the following:
try
    main()
catch e
    println(e)
    exit(1)
end
exit(0)
On top of that, I updated the bash script to check if the Julia script failed:
./julia/bin/julia scripts/my_script.jl
echo "Julia script returned: $?"
if [ $? -ne 0 ]; then
    echo "Julia script failed"
    exit 1
fi
However, no exception is printed from the Julia script. Furthermore, the return code is 0, so the bash script doesn't detect any errors either.
If I just run the script directly from the terminal, at the very end of the output there's the word Killed. Immediately afterwards I ran echo $? and got 137, which is definitely not a successful return status. So it seems Julia and bash both know the script was terminated, but not if I run the Julia script from within a bash script...?
Another weird thing is that when I run the Julia script from the bash script, the word Killed doesn't appear at all!
How can I reliably detect whether a script was prematurely terminated? Is there a way to get the reason it was killed as well (e.g. not enough RAM, stack overflow, etc)?

Your check if [ $? -ne 0 ]; then tests whether the echo before it completed successfully (see Cyrus's comment): the echo almost always succeeds, so $? is 0 and the Julia script's exit status is already gone by the time you test it.
Sometimes it makes sense to put the return value in a variable:
./julia/bin/julia scripts/my_script.jl
retval=$?
if [ $retval -ne 0 ]; then
    echo "Julia script failed with $retval"
    exit $retval
fi

ps reports a snapshot of the currently running processes; you can use it to check whether the julia process is still alive:
ps -ef | grep 'julia'

Related

Run a command right before a script exits due to failure

Let's say there's this script
#!/bin/zsh
python -c 'a'
which will fail since a isn't defined. Just before the shell script exits, I want to run a command, say echo bye. How can that be achieved?
Flow is to be:
Python command above fails.
bye appears in terminal.
The zsh script exits.
I'd prefer a solution that affects the python command as little as possible: no extra indentation, no wrapping it in an if block, no checking its exit code, and so on. In real life, the command is in fact multiple commands.
In the script you posted, the fact that the shell exits is unrelated to any error. The shell exits because the last command has been executed. Take for instance the script
#!/bin/zsh
python -c 'a'
echo This is the End
The final echo will always be executed, independent of the python command. To make the script exit when python returns a non-zero exit code, you would write something like
#!/bin/zsh
python -c 'a' || exit $?
echo Successful
If you want the script to exit as soon as any one of its commands produces a non-zero exit status, AND at the same time want to print a message, you can use the TRAPZERR callback:
#!/bin/zsh
TRAPZERR() {
    ret=$?   # save the failing status; the echo below would overwrite $?
    echo You have an unhandled non-zero exit code in your otherwise fabulous script
    exit $ret
}
python -c 'a'
echo Only Exit Code 0 encountered
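
For comparison, bash has no TRAPZERR function; the analogous mechanism there is trap ... ERR. A minimal bash sketch of the same flow (the bye message is the one from the question):
#!/bin/bash
on_error() {
    ret=$?            # exit status of the command that tripped the trap
    echo bye
    exit $ret
}
trap on_error ERR     # runs whenever a simple command returns non-zero

python -c 'a'         # NameError; the trap prints bye and exits the script
echo Only exit code 0 encountered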

$? from bash script command executed by TCL (open pipe) on windows returns wrong value

I've got a Tcl script with two ways of executing a bash script:
#exec bash ./run.sh
open "|bash ./run.sh r"
The bash script is shown below:
#!/bin/bash
ls
if [ "$?" != "0" ]; then
    echo "ERROR: Status failed!" > status
else
    echo "Everything is OK!" > status
fi
I'm using tclsh for Windows with bash from git bash. When I use:
exec bash ./run.sh
the status file contains:
Everything is OK!
whereas with:
open "|bash ./run.sh r"
I get:
ERROR: Status failed!
Is there any way to correctly detect the exit code when the script is run via a Tcl pipe?
You don't describe whether you get different results out of the ls part of the script. That matters; the ls command is most certainly capable of changing its behaviour according to the environment in which it is invoked. This matters because Tcl executes subprocesses (on Windows) directly using the CreateProcess() system call, rather than the various wrapped versions that Cygwin and git bash use. Other possibilities are that you're launching the script in a different directory and so on.
However, in general we'd expect a script to behave very similarly when launched via exec or via open |… r as they share a common core of functionality. The only differences are to do with how output and termination are waited for.
If you create a subprocess pipeline, by default you won't get to find out about errors from it until you close the pipeline. exec generates any errors “immediately” because it doesn't return control to you until the subprocess has terminated and all output has been read.
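
Independently of how Tcl launches it, you can also make run.sh record the real exit code instead of a fixed message, which removes any ambiguity about what the pipeline reported. A sketch based on the script from the question:
#!/bin/bash
ls
rc=$?                 # capture the exit status before anything overwrites it
if [ "$rc" != "0" ]; then
    echo "ERROR: Status failed with code $rc!" > status
else
    echo "Everything is OK!" > status
fi
exit $rc              # propagate the status to whatever launched the script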

Best Option for resumable script

I am writing a script that executes around 10 back-end processes in sequence, each depending on whether the previous process completed without errors.
Now assume the scenario in which, say, the 5th process fails and the script exits. I want to code it in such a way that the next time the user runs it (after fixing the error that made the script exit last time), it runs from the 5th process onwards, not from the 1st process again.
To be more specific, assume following is the script:
Script Starts
Process1
if [ $? -eq 0 ]; then
Process2
if [ $? -eq 0 ]; then
Process3
if [ $? -eq 0 ]; then
..
..
..
..
if [ $? -eq 0 ]; then
Process10
else
exit
So the script will exit any time one of the processes fails to complete with status 0. Again: if Process5 fails, and the user corrects the problem and restarts the script, the script should start with Process5 again, not Process1; or at least the user should be given the option to resume the script or start it over from the beginning, i.e. from Process1.
What are the possible ways to code this kind of script? Please bear in mind that I am not allowed to use a temporary database to store the status of each process.
I need to code this in sh (shell script) on Unix.
A simple solution would be to write stamp files:
#!/bin/sh
set -e # Automatically abort if any simple command fails
if ! test -f cmd1-stamp; then cmd1; fi
touch cmd1-stamp
if ! test -f cmd2-stamp; then cmd2; fi
touch cmd2-stamp
When the script executes, if cmd1-stamp exists, cmd1 is not executed; otherwise cmd1 is executed, and the script aborts if it fails. Note that it is very tempting to write test -f cmd1-stamp || cmd1, and this seems to work (in bash), but the shell specs state that the shell shall abort if the simple command that fails is not part of an AND or OR list, and I suspect this is (yet another) instance of bash not conforming to the spec. (Although the spec doesn't seem to say that the shell shall not abort if the failing command is part of an AND or OR list.)
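
The same idea scales to the ten processes from the question with a small helper. A sketch in plain sh (the run_step name and the stamp naming scheme are illustrative, not any standard convention):
#!/bin/sh
set -e  # abort as soon as any step fails

# Run a command only if its stamp file is missing; write the stamp on success.
run_step() {
    stamp="$1.stamp"
    if ! test -f "$stamp"; then
        "$@"
        touch "$stamp"
    fi
}

run_step Process1
run_step Process2
run_step Process3
# ...and so on up to Process10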

exit not working as expected in Bash

I use SSH Secure Shell client to connect to a server and run my scripts.
I want to stop a script on certain conditions, but when I use exit, not only does the script stop, the whole client disconnects from the server! Here is the code:
if [[ `echo $#` -eq 0 ]]; then
    echo "Missing argument- must to get a friend list";
    exit
fi
for user in $*; do
    if [[ !(-f `echo ${user}.user`) ]]; then
        echo "The user name ${user} doesn't exist.";
        exit
    fi
done
[Screenshot of the SSH client session omitted.]
Why is this happening?
You used source to run the script; this runs it in the current shell. That means exit terminates the current shell, and with it the SSH session.
Replace source with bash and it should work, or better, put
#!/bin/bash
on top of the file and make it executable.
exit returns from the current shell - If you've started a script by running it directly, this will exit the shell that the script is running in.
return returns from a function or sourced file (TY Dennis Williamson) - Same thing, but it doesn't terminate your current shell.
break returns from a loop - Similar to return, but can be used anywhere within a loop to stop processing more items. This is probably what you want.
If you are running it from the current shell, exit will obviously exit from that shell and disconnect you. Try running it in a new shell (execute it as ./script rather than sourcing it with a leading .), or else use 'return' instead of exit.
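
If the script genuinely needs to support being sourced, one option is to choose between return and exit at run time. A bash sketch using the common BASH_SOURCE comparison idiom, applied to the question's first check only:
#!/bin/bash
if [ $# -eq 0 ]; then
    echo "Missing argument - must get a friend list"
    if [ "${BASH_SOURCE[0]}" != "$0" ]; then
        return 1     # sourced: leave the script but keep the shell alive
    else
        exit 1       # executed: only the script's own shell terminates
    fi
fi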

Odd behavior with simple bash shellscript exit

I'm starting to play around with Unix shell scripting, so maybe this is a silly question. Apologies if that's the case.
I was trying to handle exit codes to properly address adverse situations in my code, and to that end I created a code snippet to understand the Unix exit behavior. Here it is:
#!/usr/bin/bash
RES=1
if [ $RES -eq 0 ]
then
    echo "Finishing with success!"
    exit 0
else
    echo "Finishing with error!"
    exit 1
fi
My understanding was that once the script finished, I'd be back at the bash prompt. However, it seems the exit instruction also leaves bash. Is that normal? Maybe it's something related to my development environment?
Here are the messages...
bash-3.00$ . errtest.sh
Finishing with error!
$ echo $?
1
$ bash
bash-3.00$ which bash
/usr/bin/bash
For reference, I've included the return status and the bash location. Hope it helps.
Thanks in advance!
This is because you're sourcing the script in your current environment (by using the . command). Try executing the script with either:
bash ./errtest.sh
or by giving the necessary permissions to the script file and executing it like this:
chmod u+x ./errtest.sh
./errtest.sh
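
If you want errtest.sh to behave sensibly under both invocation styles, a common trick is return ... 2>/dev/null || exit ...: the return succeeds only while the file is being sourced, and the exit runs otherwise. A sketch of the script adjusted that way:
#!/usr/bin/bash
RES=1
if [ $RES -eq 0 ]; then
    echo "Finishing with success!"
    return 0 2>/dev/null || exit 0   # sourced: return; executed: exit
else
    echo "Finishing with error!"
    return 1 2>/dev/null || exit 1
fi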
