Bash: show a background process's output when it finishes and check its return code [duplicate]

This question already has answers here:
How to wait in bash for several subprocesses to finish, and return exit code !=0 when any subprocess ends with code !=0?
(35 answers)
Bash: Capture output of command run in background
(6 answers)
Closed 5 years ago.
I have a simple loop, looking for all files in a directory. Inside the loop, a command is executed in the background with &.
I then have another loop that will wait for all the processes to complete and will check the return code to make sure none of them failed. If any failed, the entire script must fail.
This approach does work but the output from all the background processes is mixed together.
#!/bin/bash
for f in $(find tests -name '*.test.php')
do
    phpunit "$f" &
done

FAIL=0
for job in `jobs -p`
do
    wait $job || let "FAIL=$?"
done
exit $FAIL
I can make each process print its output only when it is finished, by executing the command in a subshell like this:
echo "$(phpunit "$f")" &
Now the output looks great, but there's no obvious way to get the return code: $? gives the return code of echo, which is always 0, and that breaks the check for failed tests.
Is there a way to get a nice output (all at once when finished) and check the return value at the same time?
I thought about directing the outputs to files but how am I going to echo them after wait? Actually, I'd like to avoid writing to files if possible.
Edit
This should never have been marked duplicate, because now I can't properly answer my own question... Here is the solution I found:
At the beginning of the script I added set -o pipefail
In the original solution I replaced phpunit "$f" & with phpunit "$f" | php -r "echo file_get_contents('php://stdin');" &
With the moreutils package installed, this can also be phpunit "$f" | sponge &
That's it. When a test finishes running, it prints its output. When a test fails, the exit code of the main script is 1; when everything passes, it's 0. pipefail makes each pipeline's exit status that of the first failing command rather than that of sponge (which is always 0), so wait still sees phpunit's failures. Complete script:
#!/bin/bash
set -o pipefail

for f in $(find tests -name '*.test.php')
do
    phpunit "$f" | sponge &
done

FAIL=0
for job in `jobs -p`
do
    wait $job || let "FAIL=$?"
done
exit $FAIL
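A quick way to see the effect of set -o pipefail (a minimal demonstration, not part of the original answer; sponge comes from moreutils):

false | sponge; echo $?   # prints 0: the pipeline's status is sponge's
set -o pipefail
false | sponge; echo $?   # prints 1: the status of the first failing command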

Related

Is there a way to stop scripts that are running simultaneously if one of them sends an echo?

I need to find if a value (actually it's more complex than that) is in one of 20 servers I have. And I need to do it as fast as possible. Right now I am sending the scripts simultaneously to all the servers. My main script is something like this (but with all the servers):
#!/bin/sh
#mainScript.sh
value=$1
c1=`cat serverList | sed -n '1p'`
c2=`cat serverList | sed -n '2p'`
sh find.sh $value $c1 & sh find.sh $value $c2
#!/bin/sh
#find.sh
#some code here .....
if [ $? -eq 0 ]; then
    rm $tempfile
else
    myValue=`sed -n '/VALUE/p' $tempfile | awk 'BEGIN{FS="="} {print substr($2, 8, length($2)-2)}'`
    echo "$myValue"
fi
So the script only returns a response if it finds the value on the server. I would like to know if there is a way to stop executing the other scripts once one of them has returned a value.
I tried adding an "exit" in the find.sh script but it won't stop all the scripts. Can somebody please tell me if what I want to do is possible?
I would suggest that you use something that can handle this for you: GNU Parallel. From the linked tutorial:
If you are looking for success instead of failures, you can use success. This will finish as soon as the first job succeeds:
parallel -j2 --halt now,success=1 echo {}\; exit {} ::: 1 2 3 0 4 5 6
Output:
1
2
3
0
parallel: This job succeeded:
echo 0; exit 0
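Applied to the question's setup, this might look something like the sketch below (assuming find.sh is adjusted to exit non-zero on failure, as suggested next, and that serverList holds one server name per line; :::: reads the arguments from that file):

parallel --halt now,success=1 ./find.sh "$value" {} :::: serverList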
I suggest you start by modifying your find.sh so that its return code depends on its success; that will let us identify a successful call more easily. For instance (note that sed and awk exit 0 even when nothing matches, so test the result rather than $?):
myValue=`sed -n '/VALUE/p' $tempfile | awk 'BEGIN{FS="="} {print substr($2, 8, length($2)-2)}'`
if [ -n "$myValue" ]; then
    echo "$myValue"
    exit 0
else
    exit 1
fi
To terminate all the find.sh processes spawned by your script you can use pkill with a parent process ID criterion and a command name criterion:
pkill -P $$ find.sh # $$ refers to the current process' PID
Note that this requires that you start the find.sh script directly rather than passing it as a parameter to sh. Normally that shouldn't be a problem, but if you have a good reason to call sh rather than your script, you can replace find.sh in the pkill command with sh (assuming you're not spawning other scripts you wouldn't want to kill).
Now that find.sh exits with success only when it finds the expected string, you can glue the two actions together with && and run the whole thing in the background:
{ ./find.sh "$value" "$c1" && pkill -P $$ find.sh; } &
The first occurrence of find.sh that terminates with success will invoke the pkill command that will terminate all others (those killed processes will have non-zero exit codes and therefore won't run their associated pkill).
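Putting it all together, the main script could look like this (a sketch under the assumptions above: find.sh is executable, exits non-zero on failure, and serverList holds one server name per line):

#!/bin/sh
#mainScript.sh
value=$1
while read -r server; do
    { ./find.sh "$value" "$server" && pkill -P $$ find.sh; } &
done < serverList
wait # returns once every find.sh has succeeded, failed, or been killed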

Run bash script in background by default

I know I can run my bash script in the background by using bash script.sh & disown or alternatively, by using nohup. However, I want to run my script in the background by default, so when I run bash script.sh or after making it executable, by running ./script.sh it should run in the background by default. How can I achieve this?
Self-contained solution:
#!/bin/bash
# Re-spawn as a background process, if we haven't already.
# ([[ ]] and {0..10} are bashisms, so the shebang must be bash rather than sh.)
if [[ "$1" != "-n" ]]; then
    nohup "$0" -n &
    exit $?
fi

# Rest of the script follows. This is just an example.
for i in {0..10}; do
    sleep 2
    echo $i
done
The if statement checks whether the -n flag has been passed. If not, the script calls itself with nohup (to disassociate from the calling terminal so closing it doesn't close the script) and & (to put the process in the background and return to the prompt). The parent then exits, leaving the background version to run. The background version is explicitly called with the -n flag, so it won't cause an infinite loop (which is hell to debug!).
The for loop is just an example. Use tail -f nohup.out to see the script's progress.
Note that I pieced this answer together from a couple of other answers, but neither was succinct or complete enough to be a duplicate.
Simply write a wrapper that calls your actual script with nohup actualScript.sh &.
Wrapper script wrapper.sh
#! /bin/bash
nohup ./actualScript.sh &
Actual script in actualScript.sh
#! /bin/bash
for i in {0..10}
do
    sleep 10 # script is running; test with ps -eaf | grep actualScript
    echo $i
done
tail -f nohup.out
0
1
2
3
4
...
Adding to Heath Raftery's answer, what worked for me is a variation of what he suggested, such as this:
if [[ "$1" != "-n" ]]; then
    "$0" -n & disown
    exit $?
fi

Close pipe even if subprocesses of the first command are still running in the background

Suppose I have test.sh as below. The intent is for this script to run some background task(s) that continuously update some file. If a background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi

while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
and want to call it like ./test.sh | otherprogram, e. g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose. The key point is that the background loop inherits the pipe as its standard output, so the reader never sees end-of-file; redirecting the loop's output away releases the pipe's write end and lets the pipe close.
(I am assuming that you are not interested in any stderr of commands in the while loop. You would adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi

while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, for example descriptor 1 (standard output), you can do it with:
exec 1<&-
This is valid for POSIX shells.
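Applied to the question's loop, closing the descriptors instead of redirecting them might look like this (a sketch):

while true; do
    echo "something" >> somewhere
    sleep 1
done 1>&- 2>&- & # with stdout/stderr closed, the pipe's reader gets EOF immediately
echo $! > pidfile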
When you put the while loop in an explicit subshell and run the subshell in the background, it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done) &

Shell script: Ensure that script isn't executed if already running [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
I've set up a cronjob to backup my folders properly which I am quite proud of. However I've found out, by looking at the results from the backups, that my backup script has been called more than once by Crontab, resulting in multiple backups running at the same time.
Is there any way I can ensure that a shell script does not run if the very same script is already executing?
A solution without race conditions or early-exit problems is to use a lock file. The flock utility handles this very well and can be used like this:
flock -n /var/run/your.lockfile -c /your/script
It will return immediately with a non-zero status if the script is already running.
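Alternatively, a script can lock itself using flock's file-descriptor form; a sketch, with a placeholder lock path:

#!/bin/bash
exec 200>/var/tmp/myscript.lock # open (or create) the lock file on file descriptor 200
flock -n 200 || { echo "Script already running" ; exit 1 ; }
# ... rest of the script; the kernel releases the lock when the script exits and fd 200 closes.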
The usual and simple way to do this is to put something like:
if [[ -f /tmp/myscript.running ]] ; then
    exit
fi
touch /tmp/myscript.running
at the top of your script and
rm -f /tmp/myscript.running
at the end, and in trap functions in case it doesn't reach the end.
This still has a few potential problems (such as a race condition at the top) but will do for the vast majority of cases.
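The trap mentioned above could be a single line placed right after creating the marker file, e.g. (a sketch):

trap 'rm -f /tmp/myscript.running' EXIT INT TERM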
A good way without a lock file:
ps | grep $0 | grep -v grep > /var/tmp/$0.pid
pids=$(cat /var/tmp/$0.pid | cut -d ' ' -f 1)
for pid in $pids
do
    if [ $pid -ne $$ ]; then
        logprint " $0 is already running. Exiting"
        exit 7
    fi
done
rm -f /var/tmp/$0.pid
This does it without a lock file, which is cool.
ps into a temp file, scrape the first field (the pid) and look for ourselves. If we find a different pid, then somebody's already running. The grep $0 shortens the list to just the instances of this program, and the grep -v grep gets rid of the line that is the grep itself. :)
You can use a tmp file.
Name it tmpCronBkp.a, tmpCronBkp.b, tmpCronBkp.c... etc., related to your backup script.
Create it at script start and delete it at the end...
Meanwhile, check whether the file exists (and which one exists).
Have you tried this way?

Shell fragment to make sure only one instance a shell script runs at any given time [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
At a previous workplace we used to have a highly-refined bash function called run-only-once that we could paste into any long-running shell script and call at the start. It would check whether the script was already running as another process and, if so, exit with a notification to STDOUT.
Does anyone have a function/fragment like this they could share?
Our old function (which I no longer have) would check for a PID file (in scriptname.$$ format) in /var/run and then either exit or simply continue. Where a PID file existed, it would do some checks to make sure the process was still active. It also had a few options for controlling whether a notification was output at all.
From memory, our function only worked in bash. Bonus points for a /bin/sh version.
Put this at the start of the script
SCRIPTNAME=`basename $0`
PIDFILE=/var/run/${SCRIPTNAME}.pid

if [ -f ${PIDFILE} ]; then
    # Verify that a process is actually still running under this pid
    OLDPID=`cat ${PIDFILE}`
    RESULT=`ps -ef | grep ${OLDPID} | grep ${SCRIPTNAME}`
    if [ -n "${RESULT}" ]; then
        echo "Script already running! Exiting"
        exit 255
    fi
fi

# Record the pid of this process in the pid file.
# ($$ is this shell's own pid; grepping ps for it is fragile and can match the grep itself.)
echo $$ > ${PIDFILE}
and at the end
if [ -f ${PIDFILE} ]; then
    rm ${PIDFILE}
fi
This first checks for the existence of the pid file. If it's present, it confirms that a process with the old pid is still running under this script name, and exits if so; otherwise it carries on and writes the new pid to the pid file. The bit at the end checks for the existence of the pid file and deletes it, so the script can run next time.
Check permissions on /var/run are OK for your script though, otherwise create the PID file in another directory. Same directory as the script runs in would be fine.
There are two ways to do atomic locks from the shell. The simplest and most portable is mkdir. Creation of a directory will always fail if a file/directory by that name already exists, so simply use a "lock directory" instead of "lock file".
mkdir "$LOCK" || { echo "Script already running" ; exit 1 ; }
The other method is the noclobber option, set using set -C. This option forces the shell to open files for output redirection with the O_EXCL flag. Use it like this:
set -C
> "$LOCK" || { echo "Script already running" ; exit 1 ; }
set +C
Rather than redirecting a null command, you might prefer to echo the current PID or something else useful to be stored in the file.
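For instance, a sketch that records the PID and reports the current holder (assuming $LOCK points at a writable path):

set -C
if ! echo $$ 2>/dev/null > "$LOCK"; then
    # Redirection failed because the file already exists, so another instance holds the lock.
    echo "Script already running as PID $(cat "$LOCK")"
    exit 1
fi
set +C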
It's worth keeping in mind that some ancient shells have broken implementations of the noclobber option -C which do not use O_EXCL but instead perform buggy non-atomic checks for file existence. Probably not an issue in the 21st century, but you should be aware just in case.
It is more reliable to use the lockfile utility (shipped with procmail) in Linux.
From the man page:
...
lockfile important.lock
...
access_"important"_to_your_hearts_content
...
rm -f important.lock
...
You can specify retries and wait times. This was specifically created for this purpose, so it takes care of race conditions.
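For example, a sketch with explicit retry behaviour (sleep 5 seconds between attempts, give up after 3 retries):

if lockfile -5 -r 3 important.lock; then
    # ... do the exclusive work here ...
    rm -f important.lock
else
    echo "Could not acquire lock; another instance is probably running" >&2
    exit 1
fi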
