Summary: I have a bash script that runs a process in the background and is supposed to work both as a normal command and inside a command substitution block such as $(...). The script spawns a process that forks to the background. It can be reduced to this test case:
#!/bin/sh
echo something
sleep 5 &
Running this script in a shell will return immediately (and print "something"); running it inside $(...) will hang for 5 seconds, waiting for the backgrounded "sleep" to finish.
This applies to anything started inside the command substitution shell that spawns background processes, apparently including any children in that process tree. It seems to affect both bash and zsh; I haven't tried others.
Original question: I have a bash script that is supposed to print a value to stdout and also copy it to the X clipboard every time it runs.
#!/bin/sh
echo something
echo something | xclip -selection clipboard
This script (let's call it "something") is meant to fetch this word (which is actually the output of another command) and make it usable in several ways, such as:
$ something
something
$ xclip -o -selection clipboard
something
$ echo $(something)
^C
It prints to normal stdout, copies the output to the clipboard for use in normal X applications, and should also work with bash command substitution, so the word can be inserted in the middle of any command.
However, bash command substitution seems to force xclip to stay alive in the foreground. xclip normally daemonizes itself, since the X clipboard requires that some client provide the clipboard contents, and its default behavior is to quit once the clipboard contents are replaced.
After hitting this issue with xclip I wrote the minimal test case at the beginning of this question, so it seems to apply to anything that daemonizes inside the $(...) shell.
Can anyone explain this behavior? Is there any way I can avoid it?
Command substitution doesn't wait for your script to exit; it reads the pipe until it sees end-of-file, and the backgrounded process inherits a copy of the pipe's write end, so EOF doesn't arrive until that process exits too. If you want the backgrounded process to not interfere with command substitution, you have to disconnect its stdout. This will return immediately:
$ cat bg.sh
#!/bin/sh
echo before
sleep 5 >/dev/null &
echo after
$ date; x=$(./bg.sh); date; echo "$x"
Sat Jun 1 13:02:26 EDT 2013
Sat Jun 1 13:02:26 EDT 2013
before
after
You will lose the ability to capture the backgrounded process's stdout, but if you're running it in the background you probably don't care; the bg.sh process can always write to a file on disk instead.
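Applied to the original xclip script, the fix is a single redirection; only stdout matters here, since that is the descriptor command substitution reads from:
#!/bin/sh
echo something
# Detach the daemonizing xclip from the $(...) pipe so EOF arrives
# as soon as the script itself exits:
echo something | xclip -selection clipboard >/dev/null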
Related
I have a script (let's call it parent.sh) that makes 2 calls to a second script (child.sh) that runs a Java process. The child.sh scripts are run in the background by placing an & at the end of the line in parent.sh. However, when I run parent.sh, I need to press Ctrl+C to return to the terminal. What is the reason for this? Does it have something to do with the fact that the child.sh processes run under the parent.sh process, so parent.sh doesn't die until the children do?
parent.sh
#!/bin/bash
child.sh param1a param2a &
child.sh param1b param2b &
exit 0
child.sh
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#email.com
As you can see, I don't want to run the Java process in the background, because I want to send a mail out when the process dies. Doing it as above works fine from a functional standpoint, but I would like to know how I can get it to return to the terminal after executing parent.sh.
What I ended up doing was changing parent.sh to the following:
#!/bin/bash
child.sh param1a param2a > startup.log &
child.sh param1b param2b > startup2.log &
exit 0
I would not have come to this solution without your suggestions and root cause analysis of the issue. Thanks!
And apologies for my inaccurate comment. (There was no input, I answered from memory and I remembered incorrectly.)
The following link from the Linux Documentation Project suggests adding a wait after your mail command in child.sh:
http://tldp.org/LDP/abs/html/x9644.html
Summary of the above document
Within a script, running a command in the background with an ampersand (&)
may cause the script to hang until ENTER is hit. This seems to occur with
commands that write to stdout. It can be a major annoyance.
[...]
As Walter Brameld IV explains it:
As far as I can tell, such scripts don't actually hang. It just
seems that they do because the background command writes text to
the console after the prompt. The user gets the impression that
the prompt was never displayed. Here's the sequence of events:
1. Script launches background command.
2. Script exits.
3. Shell displays the prompt.
4. Background command continues running and writing text to the console.
5. Background command finishes.
6. User doesn't see a prompt at the bottom of the output, thinks the script is hanging.
If you change child.sh to look like the following you shouldn't experience this annoyance:
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#gmail.com
wait
Or as @SebastianStigler states in a comment on your question above:
Add a > /dev/null at the end of the line with mail. mail will otherwise try to start its interactive mode.
This will cause the mail command to write to /dev/null rather than stdout which should also stop this annoyance.
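In other words, the last line of child.sh becomes:
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user@gmail.com > /dev/null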
Hope this helps
The process was still linked to the controlling terminal because STDOUT needs somewhere to go. You solved that problem by redirecting to a file ( > startup.log ).
If you're not interested in the output, discard STDOUT completely ( >/dev/null ).
If you're not interested in errors, either, discard both ( &>/dev/null ).
If you want the processes to keep running even after you log out of your terminal, use nohup — that effectively disconnects them from what you are doing and leaves them to quietly run in the background until you reboot your machine (or otherwise kill them).
nohup child.sh param1a param2a &>/dev/null &
What I usually do is pause my script, run it in the background, and then disown it, like:
./script
^Z
bg
disown
However, I would like to be able to cancel my script at any time. If I have a script that runs indefinitely, I would like to be able to cancel it after a few hours or a day or whenever I feel like cancelling it.
Since you are having a bit of trouble following along, let's see if we can keep it simple. (This presumes you can write to /tmp; change as required.) Let's start your script in the background and create a PID file containing the PID of its process.
$ ./script & echo $! > /tmp/scriptPID
You can check the contents of /tmp/scriptPID
$ cat /tmp/scriptPID
######
Where ###### is the PID of the running ./script process. You can further confirm with pidof script (which will return the same ######), or view the number with ps aux | grep script.
When you are ready to kill the ./script process, you simply pass that number (e.g. ######) to kill. You can do that directly with:
$ kill $(</tmp/scriptPID)
(or with the other methods listed in my comment)
You can add rm /tmp/scriptPID to remove the pid file after killing the process.
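If you do this often, the two steps can be wrapped in a pair of shell functions (just a sketch; the function names and the PID-file path are arbitrary):
# Hypothetical convenience wrappers around the steps above.
startscript() {
    ./script & echo $! > /tmp/scriptPID              # launch and record the PID
}
stopscript() {
    kill "$(</tmp/scriptPID)" && rm /tmp/scriptPID   # kill and clean up
}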
Look things over and let me know if you have any further questions.
I've got a script that takes quite a long time to run, as it has to handle many thousands of files. I want to make this script as foolproof as possible. To this end, I want to check whether the user ran the script using nohup and '&', e.g.:
me@myHost:/home/me/bin $ nohup doAlotOfStuff.sh &. I want to make 100% sure the script was run with nohup and '&', because it's a very painful recovery process if the script dies in the middle for whatever reason.
How can I check those two key parameters inside the script itself? And if they are missing, how can I stop the script before it gets any farther and complain to the user that they ran the script wrong? Better yet, is there a way I can force the script to run with nohup and &?
Edit: the server environment is AIX 7.1.
The ps utility can get the process state. The process state code will contain the character + when running in the foreground; absence of + means the process is running in the background.
However, it will be hard to tell whether the background script was invoked using nohup. It's also nearly impossible to rely on the presence of nohup.out, as the user can redirect output elsewhere at will.
There are two ways to accomplish what you want: either bail out and warn the user, or automatically restart the script in the background.
#!/bin/bash
mypid=$$                      # 'local' only works inside functions, so use a plain assignment
if [[ $(ps -o stat= -p $mypid) =~ "+" ]]; then
    echo "Running in foreground." >&2
    exec nohup "$0" "$@" &
    exit
fi
# the rest of the script
...
In this code, if the process state code contains +, the script prints a warning and then restarts itself in the background under nohup. If the process was started in the background, it just proceeds to the rest of the code.
If you prefer to bail out and just warn the user, you can remove the exec line. Note that because of the trailing &, the exec happens in a forked subshell, so the exit is still needed to stop the foreground copy; it also serves as the bailout if you remove the exec line.
One good way to find out whether a script is logging to nohup.out is to echo a marker to stdout and then check that the marker can be read back from the file. For example:
echo "complextag"
if ( $(cat nohup.out | grep "complextag" ) != "complextag" );then
# various commands complaining to the user, then exiting
fi
This works because if the script's stdout is going to nohup.out, where it should be going (or whatever output file you specified), then the echoed phrase will be appended to that file. If it doesn't appear there, the script was not run using nohup and you can scold the user, perhaps by using a wall command on a temporary broadcast file (if you want me to elaborate on that I can).
As for being run in the background: if it's not, you should be able to tell with the same check on nohup.out.
I have a bash script with a loop that calls a heavy calculation routine on every iteration. I use the results from each calculation as input to the next, so I need bash to wait until each calculation is finished before the script moves on.
for i in $(cat calculation-list.txt)
do
./calculation
(other commands)
done
I know about the sleep program, and I used to use it, but now the duration of the calculations varies greatly.
Thanks for any help you can give.
P.S.:
"./calculation" is another program, and it opens a subprocess. The script then passes instantly to the next step, but I get an error in the calculation because the previous one is not finished yet.
If your calculation daemon will work with a precreated empty logfile, then the inotify-tools package might serve:
touch $logfile                             # precreate so there is a file to watch
inotifywait -qqe close $logfile & ipid=$!  # quietly wait for the file to be closed
./calculation
wait $ipid                                 # returns once the daemon closes $logfile
if it closes the file just once.
If it's doing an open/write/close loop, perhaps you can modify the daemon process to wrap some other filesystem event around the execution:
#!/bin/sh
# Uglier, but handles logfile being closed multiple times before exit:
# Have the ./calculation start this shell script, perhaps by substituting
# this for the program it's starting
trap 'echo >closed-on-calculation-exit' 0 1 2 3 15
./real-calculation-daemon-program
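On the waiting side, the same touch/inotifywait/wait pattern from above then applies to the marker file instead of the logfile (a sketch under the same assumptions):
touch closed-on-calculation-exit                     # precreate, as before
inotifywait -qqe close closed-on-calculation-exit & ipid=$!
./calculation                                        # now launches the wrapper
wait $ipid                                           # returns when the wrapper's trap fires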
Well, guys, I've solved my problem with a different approach. When the calculation finishes, a logfile is created, so I wrote a simple until loop with a sleep command. Although this is very ugly, it works for me, and it's enough.
for i in $(cat calculation-list.txt)
do
(calculations routine)
until [[ -f $logfile ]]; do
sleep 60
done
(other commands)
done
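One caveat with this approach: if $logfile survives from a previous iteration, the until test succeeds immediately and no waiting happens. Removing the file before each round keeps the check honest (a sketch, assuming the calculation recreates it when it finishes):
for i in $(cat calculation-list.txt)
do
    rm -f "$logfile"          # clear the marker from the previous round
    (calculations routine)
    until [[ -f $logfile ]]; do
        sleep 60              # poll once a minute
    done
    (other commands)
done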
Easy. Get the process ID (PID) via some awk magic and then use wait to wait for that PID to end. Here are the details on wait from the Advanced Bash-Scripting Guide:
Suspend script execution until all jobs running in background have
terminated, or until the job number or process ID specified as an
option terminates. Returns the exit status of waited-for command.
You may use the wait command to prevent a script from exiting before a
background job finishes executing (this would create a dreaded orphan
process).
And using it within your code should work like this:
for i in $(cat calculation-list.txt)
do
./calculation >/dev/null 2>&1 & CALCULATION_PID=$(jobs -l | awk '{print $2}')
wait ${CALCULATION_PID}
(other commands)
done
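That said, bash already records the PID of the most recent background job in the special parameter $!, which avoids the awk entirely:
for i in $(cat calculation-list.txt)
do
    ./calculation >/dev/null 2>&1 &
    wait $!                   # $! is the PID of the last background job
    (other commands)
done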
I'm using bash, and as I understand it, exec followed by a command is supposed to replace the shell without creating a new process. For example,
exec echo hello
has the appearance of printing "hello" and then immediately exiting, because after echo is done, the shell process isn't there to return to anymore.
If I put this as part of a pipeline - for instance,
exec echo hello | sed 's/hell/heck/'
or
echo hello | exec sed 's/hell/heck/'
my expectation is that, similarly, the shell would terminate as a result of its process being replaced. That is not what happens in reality, though: both of the commands above print "hecko" and return to the shell normally, just as if the word "exec" weren't there. Why is this?
There is a sentence in the bash manual:
Each command in a pipeline is executed as a separate process (i.e., in
a subshell).
So in both examples, the pipeline first spawns two processes, and exec is executed inside one of those spawned processes, with no impact on the shell that executes the pipeline.
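You can watch this happen with bash's BASHPID variable, which (unlike $$) reports the PID of the current subshell; the PIDs below are illustrative:
$ echo $BASHPID                # the interactive shell itself
4242
$ echo $BASHPID | cat          # left side of a pipeline: a different (subshell) PID
4311
$ exec true | cat              # exec replaces only that subshell...
$ echo $BASHPID                # ...the interactive shell survives, same PID
4242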