bash script that kills a process while running in the background [duplicate]

I can run something in an endless loop like this:
$ while true; do foo; done
I could run something detached like this:
$ foo &
But I can't run something detached in an endless loop like this:
$ while true; do foo & ; done
-bash: syntax error near unexpected token `;'
How can I run an infinite detached loop in one line of shell code?

You should clarify whether you want
many copies of foo running in the background or
a single copy of foo running within a backgrounded while loop.
choroba's answer will do the former. To do the latter you can use subshell syntax:
(while true; do foo; done) &

& is a terminator like ;, so you can't mix the two. Just use
while : ; do foo & done
I'd add a sleep somewhere, otherwise you'll quickly flood your system with processes.
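For example, a minimal sketch with a one-second throttle (foo stands for your command):
while :; do
    foo &      # each copy still runs detached in the background
    sleep 1    # throttle the loop so it doesn't fork as fast as it can
done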

If you want to run the loop in the background, rather than each individual foo command, you can put the whole thing in parentheses:
( while true; do foo; done ) &

All good answers here already - just be aware that your original loop with choroba's correction will successfully make a very messy infinite string of processes spawned in quick succession.
Note that there are several useful variants. You could throw a delay inside the loop like this -
while true; do sleep 1 && date & done
But that won't cause any delay between processes spawned. Consider something more like this:
echo 0 > cond # false
delay=10 # secs
then
until (( $(<cond) )); do sleep $delay && foo & done
or
while sleep $delay; do (( $(<cond) )) || foo & done
then make sure foo sets cond to 1 when you want it to stop spawning.
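For instance, a hypothetical foo that sets the flag itself once its work is done (do_the_real_work and some_stop_condition are placeholders):
foo() {
    do_the_real_work                       # placeholder for the actual job
    some_stop_condition && echo 1 > cond   # tell the loops above to stop spawning
}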
But I'd try for a more controlled approach, like
until foo; do sleep $delay; done &
That runs foo in the foreground of a loop running in the background, so to speak, and retries only until foo exits cleanly.
You get the idea.

Related

How to get a stdout message once a background process finishes?

I realize that there are several other questions on SE about notifications upon completion of background tasks, how to queue up jobs to start after others end, and the like, but I am looking for a simpler answer to a simpler question.
I want to start a very simple background job, and get a simple stdout text notification of its completion.
For example:
cp My_Huge_File.txt New_directory &
...and when it's done, my bash shell would display a message. This message could just be the completed job's PID, but if I could program unique messages per background process, that would be cool too, so I could have numerous background jobs running without confusion.
Thanks for any suggestions!
EDIT: user000001's answer separates commands with ;. I separated commands with && in my original example. The only difference I notice is that you don't have to surround your base command with braces if you use &&. Semicolons are a bit more flexible, so I've updated my examples.
The first thing that comes to mind is
{ sleep 2; echo "Sleep done"; } &
You can also suppress the accompanying stderr output from the above line:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null
If you want to save your program output (stdout) to a log file for later viewing, you can use:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null 1>myfile.log
Here's a generic form you might use (you can even make an alias so that you can run it at any time without having to type so much!):
# don't hesitate to add semicolons for multiple commands
CMD="cp My_Huge_File.txt New_directory"
{ eval "$CMD" & } 2>/dev/null 1>myfile.log
You might also pipe stdout into another process using | in case you wish to process output in real time with other scripts or software. tee is also a helpful tool in case you wish to use multiple pipes. For reference, there are more examples of I/O redirection here.
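For instance, a minimal sketch of the tee variant, backgrounding the whole pipeline so the completion message lands both on the terminal and in the log:
{ sleep 2; echo "Sleep done"; } | tee -a myfile.log &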
You could use command grouping:
{ slow_program; echo ok; } &
or the wait command
slow_program &
wait
echo ok
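For a unique message per job, as the question asks, you can record $! and wait for that specific PID. A sketch using the cp from the question (cp_pid is just an illustrative name):
cp My_Huge_File.txt New_directory &
cp_pid=$!
# ...do other work in the meantime...
wait "$cp_pid" && echo "copy job $cp_pid finished"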
The most reliable way is to simply have the output from the background process go to a temporary file and then consume the temporary file.
When you have a background process running, it can be difficult to capture its output into something useful, because multiple jobs will overwrite each other.
For example, if you have two processes which each print out a numbered string ("this is my string1", "this is my string2"), it is possible to end up with output that looks like this:
"this is mthis is my string2y string1"
instead of:
this is my string1
this is my string2
By using temporary files you guarantee that the output will be correct.
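A minimal sketch of that approach, with two stand-in jobs writing to their own mktemp files:
tmp1=$(mktemp)
tmp2=$(mktemp)
{ sleep 1; echo "this is my string1"; } > "$tmp1" &   # stand-in job 1
{ sleep 1; echo "this is my string2"; } > "$tmp2" &   # stand-in job 2
wait                       # block until both background jobs finish
cat "$tmp1" "$tmp2"        # each line comes out whole, never interleaved
rm -f "$tmp1" "$tmp2"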
As I mentioned in my comment above, bash already does this kind of notification by default, as far as I know. Here's an example I just made:
$ sleep 5 &
[1] 25301
$ sleep 10 &
[2] 25305
$ sleep 3 &
[3] 25309
$ jobs
[1]   Done                    sleep 5
[2]-  Running                 sleep 10 &
[3]+  Running                 sleep 3 &
$ :
[3]+  Done                    sleep 3
$ :
[2]+  Done                    sleep 10
$
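If you want the Done line the moment a job exits, rather than just before the next prompt, bash's set -b (notify) option does exactly that:
set -b       # report terminated background jobs immediately
sleep 3 &
# about three seconds later the shell prints, without waiting for a prompt:
# [1]+ Done                    sleep 3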

Terminating a shell function non-interactively

Is there a way to terminate a shell function non-interactively without killing the shell that's running it?
I know that the shell can be told how to respond to a signal (e.g. USR1), but I can't figure out how the signal handler would terminate the function.
If necessary you may assume that the function to be terminated has been written in such a way that it is "terminable" (i.e. by declaring some suitable options).
(My immediate interest is in how to do this for zsh, but I'm also interested in knowing how to do it for bash and for /bin/sh.)
EDIT: In response to Rob Watt's suggestion:
% donothing () { echo $$; sleep 1000000 }
% donothing
47139
If at this point I hit Ctrl-C at the same terminal that is running the shell, then the function donothing does indeed terminate, and I get the command prompt back. But if instead, from a different shell session, I run
% kill -s INT 47139
...the donothing function does not terminate.
Maybe I don't fully understand what you want, but maybe something like this?
trap "stopme=1" 2
function longcycle() {
    last=$1
    for i in 1 2 3 4 5
    do
        [ ! -z "$stopme" ] && return
        echo $i
        sleep 1
    done
}
stopme=""
echo "Start 1st cycle"
longcycle
echo "1st cycle end"
echo "2nd cycle"
stopme=""
longcycle
echo "2nd cycle end"
The above is for bash. Run it, and try pressing CTRL-C.
Or, non-interactively: save the above as, for example, my_command, then try:
$ ./my_command & #into background
$ kill -2 $! #send CTRL-C to the bg process
EDIT:
Solution for your sleep example in bash:
$ donothing() { trap '[[ $mypid ]] && trap - 2 && kill $mypid' 0 2; sleep 1000000 & mypid=$!; wait; }
$ donothing
Sending a signal from another terminal will now terminate it. Remember, signal '0' is the "normal end of the process". Semantic names: 0=EXIT, 2=INT, etc.
And remember too that signals are sent to processes, not to functions. In your example, the process is the current (interactive) shell, so you must use the wait trick to get something interruptible... Not a nice solution, but it's the only way to interrupt something running in the interactive shell (not a forked one) from another terminal...
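With the fixed version defined, the experiment from the question works; a sketch (47139 stands for whatever PID echo $$ reported in your shell):
# terminal 1: note the shell's PID, then call the function
echo $$            # e.g. 47139
donothing          # blocks in 'wait', with 'sleep 1000000' as a child
# terminal 2: signal the interactive shell
kill -s INT 47139  # INT interrupts 'wait'; the trap then kills the sleep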

how to let the next bash loop wait until the program inside the loop is done?

I realized that the following loop in a bash script will send ./a.out to the background, and control will return to the shell before even a single run of ./a.out is done.
#!/bin/bash
for i in 1,2,3
do
echo $i
./a.out
done
The question is: how do I make the next loop iteration wait until ./a.out is done?
BTW, I thought this would be a common problem, but I didn't find similar questions; maybe I need better search skills...
What you're describing is the default behavior. You might be confused by the fact that your loop only runs once, with i="1,2,3", and therefore appears to print all at once and then exit immediately.
Try for i in 1 2 3 instead, and see if you get the output you expect.
You may find the "C style" for loop easier, ie:
for ((i = 1 ; i < 4 ; i++))
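Either way, a complete version of the script might look like this:
#!/bin/bash
for ((i = 1; i < 4; i++))
do
    echo $i
    ./a.out    # runs in the foreground: the loop blocks here until it exits
done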

Bash: How to keep the lock while running an outside script

Here is my bash code:
(
flock -n -e 200 || (echo "This script is currently being run" && exit 1)
sleep 10
...Call some functions which is written in another script...
sleep 5
) 200>/tmp/blah.lockfile
I'm running the script from two shells in succession, and as long as the first one is at "sleep 5", all is well, meaning the second one doesn't start. But when the first moves on to the code from the other script (the other file), the second run starts to execute.
So I have two questions here:
What should I do to prevent this script and all its "children" from running while the script OR its "child" is still running?
(I didn't find a more appropriate expression for running another script other than a "child", sorry for that :) ).
According to the man page, -n causes the process to exit when it fails to gain the lock, but as far as I can see it just waits until it can run. What am I missing?
Your problem may be fairly mundane. Namely,
false || ( exit 1 )
does not cause the script to exit. Rather, the exit instructs the subshell to exit. So change your first line to:
flock -n -e 200 || { echo "This script is currently being run"; exit 1; } >&2
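Folded back into your original block, a sketch of the whole thing (other_script.sh is a placeholder for whatever your script calls):
(
    flock -n -e 200 || { echo "This script is currently being run" >&2; exit 1; }
    sleep 10
    ./other_script.sh    # children inherit FD 200, so the lock stays held while they run
    sleep 5
) 200>/tmp/blah.lockfile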

Code block usage { } in bash

I was wondering why code block was used in this example below:
possibly_hanging_job & { sleep ${TIMEOUT}; eval 'kill -9 $!' &> /dev/null; }
This could have been written like this (without using a code block as well), right?
possibly_hanging_job &
sleep ${TIMEOUT}
eval 'kill -9 $!' &> /dev/null
Putting the last two commands in braces makes it clear that “These are not just two additional commands that we happen to be running after the long-running process that might hang; they are, instead, integral to getting it shut down correctly before we proceed with the rest of the shell script.” If the author had instead written:
command &
a
b
c
it would not be completely clear that a and b are just part of getting command to end correctly. By writing it like this:
command & { a; b; }
c
the author makes it clearer that a and b exist for the sake of getting command completely ended and cleaned up before the actual next step, c, occurs.
Actually, I even wonder why there's an eval. As far as I can see, it should also work without it.
Regarding your actual question:
I guess the code block is there to emphasize that the sleep belongs with the kill. But it's not necessary. It should also work like this:
possibly_hanging_job & sleep ${TIMEOUT}; kill -9 $! &> /dev/null
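A self-contained sketch of the same pattern, with sleep 60 standing in for possibly_hanging_job and a five-second timeout:
TIMEOUT=5
sleep 60 &                                  # stands in for possibly_hanging_job
{ sleep "$TIMEOUT"; kill -9 $! &> /dev/null; }
echo "watchdog done: the job either finished or was killed"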
