How to write process ID and get exit code of the next to last command - bash

I want to run a command, write its process ID to a file as soon as the command starts, and afterwards get the command's exit status. In other words, while the process ID has to be written immediately, I want the exit status only once the initial command has finished.
The following statement unfortunately runs the command and writes the process ID immediately, but it does not wait for the command to finish. Furthermore, I only get the exit status of the echo command, not of the initial command.
The command in my case is rdiff-backup.
How do I need to modify the statement?
<command> & echo $! > "/pid_file"
RESULT=$?
if [ "$RESULT" -ne "0" ]; then
echo "Finished with errors"
fi

You need to wait on the background process to get its exit status:
_command_for_background_ & echo $! > pid_file
: ... do other things, if any ...
#
# it is better to grab $? on the same line to prevent any
# future modifications inadvertently breaking the strict sequence
#
wait $(< pid_file); child_status=$?
if [[ $child_status != 0 ]]; then
echo "Finished with errors"
fi
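
Applied to the question's case, a minimal sketch might look like this (the rdiff-backup source/destination and pid-file paths are placeholders, not taken from the question):
#!/usr/bin/env bash
# Placeholder paths; adjust to your setup.
rdiff-backup /data /backups/data & echo $! > /var/run/rdiff-backup.pid

# ... do other things, if any ...

wait "$(< /var/run/rdiff-backup.pid)"; child_status=$?
if [[ $child_status -ne 0 ]]; then
    echo "Finished with errors (exit status $child_status)"
fi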

Related

Detecting exit status on process substitution

I'm currently using bash 4.1 and I have a function that performs an svn cat on a repository file. After that, it iterates over each line to perform some transformations (mostly concatenations and such). If the file does not exist, the script should stop with an error message. The script is as follows:
function getFile {
    svnCat=`svn cat (file) 2>&1`
    if [[ -n $(echo "$svnCat" | grep "W160013") ]]; then # W160013 is the code SVN writes to stderr when a file doesn't exist
        echo "File doesn't exist" >&2
        exit 1
    else
        echo "$svnCat" | while read -r; do
            #Do your job
        done
    fi
}
function processFile {
    while read -r; do
        #do stuff
    done < <(getFile)
    #do even more stuff
}
However, in situations where the file does not exist, the error message is printed once but the script keeps executing. Is there a way to detect that the while loop failed so that the script stops completely?
I can't use the set -e option, since I need to delete some files that were created in the process.
Update: I've tried adding || exit after the done keyword, as follows:
function processFile {
    while read -r; do
        #do stuff
    done || exit 1 < <(getFile)
}
However, the script then waits for user input, and when I press Enter it executes the body of the while loop.
Tracking exit status from a process substitution is tricky, and requires a very modern version of bash (off the top of my head, I want to say 4.3 or newer). Prior to that, the $! after the <(getFile) will not be correctly populated, and so the wait will fail (or, worse, refer to a previously-started subprocess).
#!/usr/bin/env bash
### If you *don't* want any transforms at this stage, eliminate getFile entirely
### ...and just use < <(svn cat "$1") in processFile; you can/should rely on svn cat itself
### ...to have a nonzero exit status in the event of *any* failure; if it fails to do so,
### ...file a bug upstream.
getFile() {
    local content
    content=$(svn cat "$1") || exit # pass through exit status of failed svn cat
    while read -r line; do
        echo "Generating a transformed version of $line"
    done <<<"$content"
}

processFile() {
    local getFileFd getFilePid line
    # start a new process running getFile; record its pid to use to check exit status later
    exec {getFileFd}< <(getFile "$1"); getFilePid=$!
    # actual loop over received content
    while IFS= read -r line; do
        echo "Retrieved line $line from process $getFilePid"
    done <&"$getFileFd"
    # close the FIFO from the subprocess
    exec {getFileFd}<&-
    # then use wait to wait for it to exit and collect its exit status
    if wait "$getFilePid"; then
        echo "OK: getFile reports success" >&2
    else
        getFileRetval=$?
        echo "ERROR: getFile returned exit status $getFileRetval" >&2
    fi
}
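
If the caller also needs to abort the whole script when the child failed, one option (not part of the answer above) is to have processFile end with return "${getFileRetval:-0}" and check that at the call site. A sketch under that assumption, with a hypothetical repository path:
# assumes processFile has been changed to end with: return "${getFileRetval:-0}"
if ! processFile "trunk/config/settings.conf"; then
    echo "Aborting: getFile/svn cat failed" >&2
    exit 1
fi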

Is it logical to use the killall command to exit a script?

I am working on a pin generator and I have come across a small issue.
I know of a few different methods for exiting a script, but I have been experimenting with having the script call itself as a child process. When the child process is not called, the script exits perfectly. When it is called, the parent script does not exit properly after the child has completed and exited, and instead loops back to the user input. I cannot think of anything other than possibly using the wait command, though I don't know whether that would be appropriate with this code. Any thoughts on using the killall command to exit the script? I have tested it, as you can see in the code below, but I am left with the message "Terminated". If I can use killall, how would I prevent that message from being printed to standard out? Here is my code:
#!/bin/bash
clear
echo ""
echo "Now generating a random pin."
sleep 3
echo ""
echo "----------------------------------------------"
echo ""
# Generates a random 8-digit number
gen_num=$(tr -dc '0-9' </dev/urandom | head -c 8)
echo " Pin = $gen_num "
echo ""
echo "Pin has been generated!"
sleep 3
echo ""
clear
PS3="Would you like to generate another pin?: "
select CHOICE in "YES" "NO"
do
    if [ "$CHOICE" == "YES" ]
    then
        bash "/home/yokai/Modules/Wps-options.sh"
    elif [ "$CHOICE" == "NO" ]
    then
        clear
        echo ""
        echo "Okay bye!"
        sleep 3
        clear
        killall "Wps-options.sh"
        break
        exit 0
    fi
done
exit 0
You don't need to call the same script recursively (and then kill all its instances). The following script performs the task without forking:
#!/bin/bash

gen_pin () {
    echo 'Now generating a random pin.'
    # Generates a random 8-digit number
    gen_num="$(tr -dc '0-9' </dev/urandom | head -c 8)"
    echo "Pin = ${gen_num}"
    PS3='Would you like to generate another pin?:'
    select CHOICE in 'NO' 'YES'
    do
        case ${CHOICE} in
            'NO')
                echo 'OK'
                exit 0;;
            *)
                break;;
        esac
    done
}

while true
do
    gen_pin
done
You can find a lot of information about how to program in bash here.
First of all, when you execute
bash "/home/yokai/Modules/Wps-options.sh"
the shell forks and creates a child process, then waits for the child to terminate, and it does not continue with execution until the child exits, unless your script Wps-options.sh itself starts something else in the background (forking again) without reaping its child. But I can't tell you more, because I don't know what is in your Wps-options.sh script.
To prevent messages from being printed to stdout when you execute killall:
killall "Wps-options.sh" 1> /dev/null 2> /dev/null
Here 1> redirects stdout to the file /dev/null and 2> redirects stderr to the file /dev/null.
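
As a side note, a minimal sketch of the difference between a foreground and a background call (using the same script path as in the question):
bash "/home/yokai/Modules/Wps-options.sh"      # parent blocks here until the child exits
bash "/home/yokai/Modules/Wps-options.sh" &    # parent continues immediately
wait "$!"                                      # later, collect the child's exit status if needed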

Trying to capture stdout of background process in bash script

I wrote the following to be used with Jenkins to kick off my selenium tests. Since the tests are executed as background processes, Jenkins believes they haven't failed even when they do. You can see my futile attempt at making this work below. Any ideas? I thought about piping the output into a file and grepping for keywords, but then I realized I don't know how to report an error when the grep for a string matches.
#!/bin/bash
FAIL=0
echo "starting"
ruby /root/selenium-tests/test/test1.rb &
ruby /root/selenium-tests/test/test2.rb &
ruby /root/selenium-tests/test/test3.rb &
#wait
for job in `jobs -p`
do
    echo $job
    wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
    echo "PASS"
else
    echo "FAIL! ($FAIL)"
fi
Jenkins most likely looks at the process exit code to determine whether tests fails. This is what all Unix tools do.
There are multiple ways of doing this. If your test files output something like "FAIL" instead of properly returning an exit code, you can do:
#!/bin/bash
(
    ruby /root/selenium-tests/test/test1.rb &
    ruby /root/selenium-tests/test/test2.rb &
    ruby /root/selenium-tests/test/test3.rb &
    wait
) > log
! grep "FAIL" log
exit $? # <- happens implicitly at the end of the script, and can be left out
In this case, grep finding "FAIL" will cause the script to fail, and Jenkins to detect the failure.
The more correct way, if your scripts return proper exit codes, is your method, but without relying on job control (which is turned off by default in non-interactive shells) and with the script itself returning a correct exit code:
for test in /root/selenium-tests/test/test*.rb
do
    ruby "$test" &
    pids+=( $! )
done

for pid in "${pids[@]}"
do
    if wait $pid
    then
        echo "$pid succeeded"
    else
        echo "$pid failed"
        (( failures++ ))
    fi
done

if [[ $failures -gt 0 ]]
then
    echo "FAIL: $failures failed tests"
    exit 1 # return failure
else
    echo "PASS!"
    exit 0 # return success
fi

How to get kill command result in bash script

I have a bash script which runs a command to get its result and does something depending on that result. Here is the script:
#!/bin/bash
commandResult=$(($myCommand) 2>&1)
if [[ "$commandResult" == *Error* ]]; then
x="failed"
else
x="success"
fi
echo $x
exit 0;
There is no problem with this script; the issue is that when I try to kill $myCommand in the middle of the run via kill -9 on the command line, $commandResult is null and "success" is printed.
How can I capture the effect of the kill in $commandResult, or otherwise find out in this script whether the process was killed?
Any help would be much appreciated.
You should be checking your command's exit code, not its output to standard error. myCommand should exit with 0 on success and some non-zero code on failure. If it is killed via the kill command, its exit code will automatically be 128+n, where n is the signal you used to kill it. Then you can test for success with
if myCommand; then
    echo success
    exit 0
else
    status=$?
    echo failure
    exit $status
fi
Also, you probably don't need to use kill -9. Start with kill (which sends the gentler TERM signal); if that doesn't work, step up to kill -2 (INT, equivalent of Ctrl-C).
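
To illustrate the 128+n rule, a minimal sketch (myCommand stands for whatever command you run):
myCommand
status=$?
if (( status == 0 )); then
    echo "success"
elif (( status > 128 )); then
    # e.g. 143 = 128 + 15 (SIGTERM), 137 = 128 + 9 (SIGKILL)
    echo "killed by signal $(( status - 128 ))"
else
    echo "failed with exit status $status"
fi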

How can I get both the process id and the exit code from a bash script?

I need a bash script that does the following:
Starts a background process with all output directed to a file
Writes the process's exit code to a file
Returns the process's pid (right away, not when process exits).
The script must exit
I can get the pid but not the exit code:
$ executable >>$log 2>&1 &
pid=`jobs -p`
Or, I can capture the exit code but not the pid:
$ executable >>$log;
# blocked on previous line until process exits
echo $? >>$log;
How can I do all of these at the same time?
The pid is in $!, no need to run jobs. And the return status is returned by wait:
$executable >> $log 2>&1 &
pid=$!
wait $!
echo $? # return status of $executable
EDIT 1
If I understand the additional requirement as stated in a comment, and you want the script to return immediately (without waiting for the command to finish), then it will not be possible to have the initial script write the exit status of the command. But it is easy enough to have an intermediary write the exit status as soon as the child finishes. Something like:
sh -c "$executable"' & echo pid=$! > pidfile; wait $!; echo $? > exit-status' &
should work.
EDIT 2
As pointed out in the comments, that solution has a race condition: the main script terminates before the pidfile is written. The OP solves this by doing a polling sleep loop, which is an abomination and I fear I will have trouble sleeping at night knowing that I may have motivated such a travesty. IMO, the correct thing to do is to wait until the child is done. Since that is unacceptable, here is a solution that blocks on a read until the pid file exists instead of doing the looping sleep:
{ sh -c "$executable > $log 2>&1 &"'
echo $! > pidfile
echo # Alert parent that the pidfile has been written
wait $!
echo $? > exit-status
' & } | read
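
Once the pidfile has been written, the launching script can return the pid right away, and whatever later needs the result can read the exit-status file once it appears; a sketch of the consumer side (file names as in the snippet above):
pid=$(< pidfile)                 # available immediately after the script returns
echo "started process $pid"
# ... later, after the background process has finished ...
if [ -s exit-status ]; then
    status=$(< exit-status)
    echo "process $pid exited with status $status"
fi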
