Catching the exit code of a background task - bash

I have a Python script on my Raspberry Pi that runs in an infinite loop. I want to catch its exit code in case it stops. I made a script named run like this:
#!/bin/bash
~/bin/script.py &
wait $! && echo "script exited with code $?" >> ~/bin/log/script.log &
but when I run it I get the following error:
~/bin/run: line 3: wait: pid 2728 is not a child of this shell
Can anyone give me a hint towards a solution?

You are pushing your (single) script to the background and then doing a blocking wait. I think this is unnecessary. You may just write:
#!/bin/bash
~/bin/script.py
echo "script exited with code $?" >> ~/bin/log/script.log
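The original error happens because the wait itself was backgrounded: it then runs in a subshell, and that subshell is not the parent of script.py. If the script genuinely needs to run in the background, keep the wait in the foreground of the wrapper. A runnable sketch, with a short failing command standing in for ~/bin/script.py:

```shell
#!/bin/bash
sh -c 'sleep 1; exit 7' &    # stand-in for: ~/bin/script.py &
pid=$!
wait "$pid"                  # foreground wait: this shell is the child's parent
status=$?
echo "script exited with code $status"
```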

Related

How do I get entire bash script to exit (including backgrounded processes) if a bash command succeeds or fails?

I have the following bash script. I would like to run three commands:
The first command starts up a front-end web server, which I would like to be backgrounded.
The second command starts up a back-end web server, which I would like to be backgrounded.
The third command runs an npm script that tests the web application.
The third command will exit with an error code if the tests were unsuccessful.
How do I get my entire bash script to exit with an error code if the third command finishes with an error? If the third command finishes without an error, I want my bash script to exit with a 0 error code.
Below is my attempt (the script doesn't work as intended). My intention is for all backgrounded processes to end after npm run my-special-command. If npm run my-special-command results in an error code, I want my script to exit with that error code. Otherwise, I want my script to exit with a 0 error code.
#!/bin/bash
history-server dist -p 8080 &
nodemon server &
npm run my-special-command || exit 1
exit 0
You could save the PID of the background process and branch on the command's exit status:
# save PID of background process
history-server dist -p 8080 &
history_server_pid=$!
# if the command succeeds, exit 0; else terminate the background process and exit 1
if npm run my-special-command; then
exit 0
else
kill $history_server_pid
exit 1
fi
If the command succeeds the script will exit with status 0, or 1 otherwise.
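A more general pattern for "end all backgrounded processes when the script finishes" is an EXIT trap that kills every background job. A runnable sketch, with sleep and sh -c standing in for the real servers and the npm command:

```shell
#!/bin/bash
# Kill every background job this script started when it exits.
cleanup() { kill $(jobs -p) 2>/dev/null; }
trap cleanup EXIT

sleep 100 &      # stand-in for: history-server dist -p 8080 &
sleep 100 &      # stand-in for: nodemon server &

sh -c 'exit 1'   # stand-in for: npm run my-special-command (fails here)
status=$?        # the real script would end with: exit $status
```

The EXIT trap fires whether the test command succeeds or fails, so neither server is left running.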

How to capture a process Id and also add a trigger when that process finishes in a bash script?

I am trying to make a bash script to start a jar file and do it in the background. For that reason I'm using nohup. Right now I can capture the pid of the java process but I also need to be able to execute a command when the process finishes.
This is how I started
nohup java -jar jarfile.jar & echo $! > conf/pid
I also know from this answer that using ; will make a command execute after the first one finishes.
nohup java -jar jarfile.jar; echo "done"
echo "done" is just an example. My problem now is that I don't know how to combine them both. If I run echo $! first then echo "done" executes immediately. While if echo "done" goes first then echo $! will capture the PID of echo "done" instead of the one of the jarfile.
I know that I could achieve the desired functionality by polling until I no longer see the PID running. But I would like to avoid that as much as possible.
You can use the bash util wait once you start the process using nohup
nohup java -jar jarfile.jar &
pid=$! # Getting the process id of the last command executed
wait $pid # Waits until the process mentioned by the pid is complete
echo "Done, execute the new command"
I don't think you're going to get around "polling until you don't see the pid running anymore." wait is a bash builtin; it's what you want and I'm certain that's exactly what it does behind the scenes. But since Inian beat me to it, here's a friendly function for you anyway (in case you want to get a few things running in parallel).
alert_when_finished () {
declare cmd="${*}"; # join all arguments into a single command string
${cmd} &
declare pid="${!}";
while [[ -d "/proc/${pid}/" ]]; do sleep 0.5; done; # poll /proc instead of wait
echo "[${pid}] Finished running: ${cmd}";
}
Running a command like this will give the desired effect and suppress unneeded job output:
( alert_when_finished 'sleep 5' & )
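A runnable sketch of the wait-based approach above, with a short failing command standing in for the jar file (the pid file path here is illustrative, not the conf/pid from the question):

```shell
#!/bin/bash
nohup sh -c 'sleep 1; exit 3' >/dev/null 2>&1 &   # stand-in for: nohup java -jar jarfile.jar
pid=$!
echo "$pid" > /tmp/demo.pid    # illustrative path instead of conf/pid
wait "$pid"                    # blocks until the nohup'ed process exits
status=$?
echo "done, exit code $status"
```

Note that wait also hands back the process's exit code, so the "trigger" command can react to how the jar finished.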

Getting results of parallel executions in bash

I have a bash script in which I invoke other scripts to run in parallel. With the wait command I can wait until all parallel processes have finished. But I want to know whether all of the processes that ran in the background in parallel exited successfully (with return code 0).
My code looks like:
# calling multiple processes to execute in the background
process-1 &
process-2 &
process-3 &
wait
# after parallel execution finishes I want to know if all of them were successful and returned '0'
You can use wait -n (available since bash 4.3), which waits for the next background job to terminate and returns its exit code. Call it once for each background process.
process-1 &
process-2 &
process-3 &
wait -n && wait -n && wait -n
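A runnable sketch of this chain, with staggered sleep commands standing in for process-1 through process-3 (the middle one fails):

```shell
#!/bin/bash
sleep 0.2 &                    # stand-in for process-1 (succeeds)
sh -c 'sleep 0.5; exit 1' &    # stand-in for process-2 (fails)
sleep 1 &                      # stand-in for process-3

# each wait -n (bash >= 4.3) reaps the next job to finish and returns
# its exit status; the && chain stops at the first failure
wait -n && wait -n && wait -n
status=$?
echo "overall status: $status"
```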
wait -n seems the correct solution, but since it is not present in bash 4.2.37 you can try this trick:
#!/bin/bash
(
process-1 || echo $! failed &
process-2 || echo $! failed &
process-3 || echo $! failed &
wait
) | grep -q failed
if [ $? -eq 0 ]; then
echo at least one process failed
else
echo all processes finished successfully
fi
Just make sure the string "failed" is not printed by the processes themselves on success. You could also run the processes with stdout and stderr redirected to /dev/null with process-1 &>/dev/null
I've written a tool that simplifies the solutions a bit: https://github.com/wagoodman/bashful
You provide a file describing what you want to run...
# awesome.yaml
tasks:
- name: My awesome tasks
parallel-tasks:
- cmd: ./some-script-1.sh
- cmd: ./some-script-2.sh
- cmd: ./some-script-3.sh
- cmd: ./some-script-4.sh
...and run it like so:
bashful run awesome.yaml
Then it will run your tasks in parallel with a vertical progress bar showing the status of each task. Failures are indicated in red and the program exits with 1 if there were any errors found (exit occurs after the parallel block completes).

Exit all called KornShell (ksh) scripts

How can a KornShell (ksh) script exit/kill all the processes started from another ksh script?
If scriptA.ksh calls scriptB.ksh then the following code works well enough, but is there a better solution?
scriptA.ksh:
#call scriptBSnippet
scriptBSnippet.ksh ${a}
scriptB.ksh:
#if error: exit this script (scriptB) and calling script (scriptA)#
kill ${PPID}
exit 1
To add complexity what if scriptA calls scriptB which calls scriptC, then how to exit out of all three scripts if there is an error in scriptC?
scriptA.ksh:
#call scriptBSnippet
scriptBSnippet.ksh ${a}
scriptB.ksh:
#if error: exit this script (scriptB) and calling script (scriptA)#
kill ${PPID}
exit 1
scriptC.ksh:
#if error: exit this script (scriptC) and calling scripts (scriptA, scriptB)#
#kill ${PPID}
#exit 1
Thanks in advance.
Killing all processes started by the same script is a bit of a brute force method.
It would be best to have some method of communication between the processes that would allow them to gracefully shutdown.
However, if all processes are in the same process group, you can send a signal to the entire process group:
kill -${Signal:?} -${Pgid:?}
Note that two arguments are required in this case. A single argument starting with - is always interpreted as a signal.
Run some tests to see which processes get included in the process group.
parent.sh:
Shell=ksh
($Shell -c :) || exit
$Shell child1.sh & pid1=$!
$Shell child2.sh & pid2=$!
$Shell child3.sh & pid3=$!
ps -o pid,sid,pgid,tty,cmd $PPID $$ $pid1 $pid2 $pid3
exit
child1.sh, child2.sh and child3.sh (each one just):
sleep 50
If you run parent.sh from a terminal, it will become the process group leader.
granny.sh:
Shell=ksh
($Shell -c :) || exit
$Shell parent.sh &
wait
exit
If you run parent.sh from another script granny.sh, then that will be the process group leader, and will be included when you use the kill -SIG -PGID method.
See also this answer to:
What are “session leaders” in ps? for some background on sessions and process groups.
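The same idea in a runnable bash sketch (the question is about ksh, where job control behaves similarly): set -m gives each background job its own process group, so the whole subtree can be signalled as a unit with a negative PID:

```shell
#!/bin/bash
set -m                    # job control: each background job leads its own group
( sleep 100; echo "never reached" ) &
pid=$!                    # with set -m, $! is also the process group id
kill -TERM -"$pid"        # leading dash: signal the entire process group
wait "$pid" 2>/dev/null
status=$?                 # 143 = 128 + SIGTERM
echo "group terminated with status $status"
```

Both the subshell and the sleep inside it receive the signal, which is what the kill ${PPID} approach in the question cannot guarantee for deeper call chains.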

Using Until to Restart a Process when it dies

I read this Question:
"How do I write a bash script to restart a process if it dies".
A solution is to:
until myserver; do
echo "Server 'myserver' crashed with exit code $?. Respawning.." >&2
sleep 1
done
Will this work for Tomcat/Jetty processes that are started from a script?
Do I test the success with "kill" to see if the process restarts?
If the script returns exit codes as specified in the answer at that link, then it should work. If you go back and read that answer again, it implies that you should not use kill. Using until will test for startup because a failed startup should return a non-zero exit code. Replace "myserver" with the name of your script.
Your script can have traps that handle various signals and other conditions. Those trap handlers can set appropriate exit codes.
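For example, a runnable sketch of a server script whose TERM handler cleans up and records a distinctive status for the supervising loop to log. The background helper here only simulates an external kill; a real server script would exit 143 inside the handler instead of setting a variable:

```shell
#!/bin/bash
# On SIGTERM: stop the worker and record a distinctive status
# (a real script would 'exit 143' here).
trap 'kill "$child" 2>/dev/null; got=143' TERM
sleep 300 &                      # stand-in for the real server work
child=$!
( sleep 0.3; kill -TERM $$ ) &   # simulate an external kill of this script
wait "$child"                    # wait is interruptible, so the trap runs promptly
echo "handler set status $got"
```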
Here is a demo. The subshell (echo "running 'dummy'"; sleep 2; exit $result) is a stand-in for your script:
result=0
until (echo "running 'dummy'"; sleep 2; exit $result)
do
echo "Server 'dummy' crashed with exit code $?. Respawning.." >&2
sleep 1
done
Try it with a failing "dummy" by setting result=1 and running the until loop again.
while true
do
if pgrep jett[y] 1>/dev/null;then
sleep 1
else
# restart your program here
fi
done
