Code block usage { } in bash

I was wondering why code block was used in this example below:
possibly_hanging_job & { sleep ${TIMEOUT}; eval 'kill -9 $!' &> /dev/null; }
Couldn't this have been written like this instead, without using a code block?
possibly_hanging_job &
sleep ${TIMEOUT}
eval 'kill -9 $!' &> /dev/null

Putting the last two commands in braces makes it clear that “These are not just two additional commands that we happen to be running after the long-running process that might hang; they are, instead, integral to getting it shut down correctly before we proceed with the rest of the shell script.” If the author had instead written:
command &
a
b
c
it would not be completely clear that a and b are just part of getting command to end correctly. By writing it like this:
command & { a; b; }
c
the author makes it clearer that a and b exist for the sake of getting command completely ended and cleaned up before the actual next step, c, occurs.

Actually, I even wonder why there's an eval; as far as I can see, it should also work without it.
Regarding your actual question:
I guess the code block is there to emphasize that the sleep belongs to the kill, but it's not necessary. It should also work like this:
possibly_hanging_job & sleep ${TIMEOUT}; kill -9 $! &> /dev/null
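For what it's worth, here is a minimal, self-contained sketch of the pattern (sleep 300 merely stands in for possibly_hanging_job, and the TIMEOUT value is just an example):
TIMEOUT=5
sleep 300 &                                   # stands in for possibly_hanging_job
{ sleep "$TIMEOUT"; kill -9 $! 2>/dev/null; }
echo "continuing with the rest of the script"
After TIMEOUT seconds the brace group kills the backgrounded job, and only then does the script move on.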

Related

bash script that kills a process while running in the background [duplicate]

I could run something in endless loop like this:
$ while true; do foo; done
I could run something detached like this:
$ foo &
But I can't run something detached in endless loop like this:
$ while true; do foo & ; done
-bash: syntax error near unexpected token `;'
How to run an infinite detached loop in one line of shell code?
You should clarify whether you want
many copies of foo running in the background or
a single copy of foo running within a backgrounded while loop.
choroba's answer will do the former. To do the latter you can use subshell syntax:
(while true; do foo; done) &
& is a terminator like ;, you can't mix them. Just use
while : ; do foo & done
I'd add a sleep somewhere, otherwise you'll quickly flood your system.
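For example, a minimal sketch combining the two suggestions (foo stands in for your real command):
while :; do foo & sleep 1; done
This starts a new copy of foo roughly once per second instead of as fast as the shell can loop.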
If you want to run the loop in the background, rather than each individual foo command, you can put the whole thing in parentheses:
( while true; do foo; done ) &
All good answers here already - just be aware that your original loop with @choroba's correction will successfully make a very messy infinite string of processes spawned in quick succession.
Note that there are several useful variants. You could throw a delay inside the loop like this -
while true; do sleep 1 && date & done
But that won't cause any delay between processes spawned. Consider something more like this:
echo 0 > cond # false
delay=10 # secs
and then run either
until (( $(<cond) )); do sleep $delay && foo & done
or
while sleep $delay; do (( $(<cond) )) || foo & done
then make sure foo sets cond to 1 when you want it to stop spawning.
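A hypothetical sketch of such a foo (do_work stands in for whatever the job really does):
foo() {
    do_work || echo 1 > cond    # once do_work fails, stop future spawns
}
Any condition you like can flip the flag; failure of do_work is just one example.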
But I'd try for a more controlled approach, like
until foo; do sleep $delay; done &
That runs foo in the foreground of a loop running in the background, so to speak, and keeps retrying until foo exits cleanly.
You get the idea.

Bash script does not quit on first "exit" call when calling the problematic function using $(func)

Sorry I cannot give a clear title for what's happening but here is the simplified problem code.
#!/bin/bash
# get the absolute path of .conf directory
get_conf_dir() {
    local path=$(some_command) || { echo "please install some_command first."; exit 100; }
    echo "$path"
}
# process the configuration
read_conf() {
    local conf_path="$(get_conf_dir)/foo.conf"
    [ -r "$conf_path" ] || { echo "conf file not found"; exit 200; }
    # more code ...
}
read_conf
So basically here what I am trying to do is, reading a simple configuration file in bash script, and I have some trouble in error handling.
some_command is a command that comes from a third-party package (e.g. greadlink from coreutils), required to obtain the path.
When running the code above, I expect it to print "please install some_command first." because that's where the FIRST error occurs, but it actually always prints "conf file not found".
I am very confused by this behavior; I assume Bash intends to handle things this way, but I don't know why. And most importantly, how do I fix it?
Any idea would be greatly appreciated.
Do you see your please install some_command first message anywhere? Is it in $conf_path from the local conf_path="$(get_conf_dir)/foo.conf" line? Do you have a $conf_path value of please install some_command first/foo.conf? Which then fails the -r test?
No, you don't. (But feel free to echo the value of $conf_path in that exit 200 block to confirm this fact.) (Also, error messages should, in general, get sent to standard error and not standard output anyway. So they should be echo "..." >&2. That way they won't be caught by the normal command substitution at all.)
The reason you don't is because that exit 100 block is never happening.
You can see this with set -x at the top of your script also. Go try it.
See what I mean?
The reason it isn't happening is that the failure return of some_command is being swallowed by the local path=$(some_command) assignment statement.
Try running this command:
f() { local a=$(false); echo "Returned: $?"; }; f
Do you expect to see Returned: 1? You might, but you won't see that.
What you will see is Returned: 0.
Now try either of these versions:
f() { a=$(false); echo "Returned: $?"; }; f
f() { local a; a=$(false); echo "Returned: $?"; }; f
Get the output you expected in the first place?
Right. local and export and declare and typeset are statements on their own. They have their own return values. They ignore (and replace) the return value of the commands that execute in their contexts.
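The same thing happens with declare, for example:
f() { declare a=$(false); echo "Returned: $?"; }; f    # prints Returned: 0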
The solution to your problem is to split the local path and path=$(some_command) statements.
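A minimal sketch of that split, applied to the function from the question (and sending the message to standard error, as noted above):
get_conf_dir() {
    local path
    path=$(some_command) || { echo "please install some_command first." >&2; exit 100; }
    echo "$path"
}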
http://www.shellcheck.net/ catches this (and many other common errors). You should make it your friend.
In addition to the above (if you've managed to follow along this far), even with the changes mentioned so far your exit 100 won't exit the main script, since it will only exit the sub-shell spawned by the command substitution in the assignment.
If you want that exit 100 to exit your script then you either need to notice and re-exit with it (check for get_conf_dir failure after the conf_path assignment and exit with the previous exit code) or drop the get_conf_dir function itself and just do that inline in read_conf.
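For instance, a sketch of the "notice and re-exit" approach, assuming get_conf_dir has been fixed as above:
read_conf() {
    local conf_path
    conf_path=$(get_conf_dir) || exit $?    # propagate the subshell's exit code (100)
    conf_path="$conf_path/foo.conf"
    [ -r "$conf_path" ] || { echo "conf file not found" >&2; exit 200; }
    # more code ...
}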

How to get a stdout message once a background process finishes?

I realize that there are several other questions on SE about notifications upon completion of background tasks, and how to queue up jobs to start after others end, and questions like these, but I am looking for a simpler answer to a simpler question.
I want to start a very simple background job, and get a simple stdout text notification of its completion.
For example:
cp My_Huge_File.txt New_directory &
...and when it's done, my bash shell would display a message. The message could just be the completed job's PID, but if I could program unique messages per background process, that would be cool too, so I could have numerous background jobs running without confusion.
Thanks for any suggestions!
EDIT: user000001's answer separates commands with ;. I separated commands with && in my original example. The only difference I notice is that you don't have to surround your base command with braces if you use &&. Semicolons are a bit more flexible, so I've updated my examples.
The first thing that comes to mind is
{ sleep 2; echo "Sleep done"; } &
You can also suppress the accompanying stderr output from the above line:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null
If you want to save your program output (stdout) to a log file for later viewing, you can use:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null 1>myfile.log
Here's even a generic form you might use (You can even make an alias so that you can run it at any time without having to type so much!):
# don't hesitate to add semicolons for multiple commands
CMD="cp My_Huge_File.txt New_directory"
{ eval $CMD & } 2>/dev/null 1>myfile.log
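As a sketch of the alias idea, you could also wrap this in a small function (bgrun and myfile.log are just names I made up):
bgrun() { { eval "$*" & } 2>/dev/null 1>myfile.log; }
bgrun 'cp My_Huge_File.txt New_directory; echo "copy finished"'
The echo lands in the log file along with the rest of the output, exactly as in the generic form above.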
You might also pipe stdout into another process using | in case you wish to process output in real time with other scripts or software. tee is also a helpful tool in case you wish to use multiple pipes. For reference, there are more examples of I/O redirection here.
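A brief tee sketch, in case you want the completion message both on the terminal and in a log file:
{ sleep 2; echo "Sleep done"; } 2>&1 | tee myfile.log &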
You could use command grouping:
{ slow_program; echo ok; } &
or the wait command
slow_program &
wait
echo ok
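And if you want a unique message per background job, as mentioned in the question, the grouping form extends naturally (the file names here are only illustrative):
{ cp My_Huge_File.txt New_directory; echo "huge file copied"; } &
{ cp Another_File.txt New_directory; echo "another file copied"; } &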
The most reliable way is to simply have the output from the background process go to a temporary file and then consume the temporary file.
When you have a background process running, it can be difficult to capture its output into something useful, because multiple jobs will overwrite each other.
For example, if you have two processes which each print out a string with a number ("this is my string1", "this is my string2"), then it is possible to end up with output that looks like this:
"this is mthis is my string2y string1"
instead of:
this is my string1
this is my string2
By using temporary files you guarantee that the output will be correct.
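A minimal sketch of the temporary-file approach (job1 and job2 stand in for your real commands):
tmp1=$(mktemp)
tmp2=$(mktemp)
job1 > "$tmp1" 2>&1 &
job2 > "$tmp2" 2>&1 &
wait                        # block until both background jobs have finished
cat "$tmp1" "$tmp2"         # each job's output is intact, not interleaved
rm -f "$tmp1" "$tmp2"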
As I mentioned in my comment above, bash already does this kind of notification by default, as far as I know. Here's an example I just made:
$ sleep 5 &
[1] 25301
$ sleep 10 &
[2] 25305
$ sleep 3 &
[3] 25309
$ jobs
[1] Done sleep 5
[2]- Running sleep 10 &
[3]+ Running sleep 3 &
$ :
[3]+ Done sleep 3
$ :
[2]+ Done sleep 10
$
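If you would rather not wait for the next prompt, bash can also report job termination immediately via the notify option (set -b), for example:
set -b      # equivalent to: set -o notify
sleep 3 &
With that set, the "Done" line appears as soon as the job exits rather than just before the next prompt.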

Terminating a shell function non-interactively

Is there a way to terminate a shell function non-interactively without killing the shell that's running it?
I know that the shell can be told how to respond to a signal (e.g. USR1), but I can't figure out how the signal handler would terminate the function.
If necessary, you may assume that the function to be terminated has been written in such a way that it is "terminable" (i.e. by declaring some suitable options).
(My immediate interest is in how to do this for zsh, but I'm also interested in knowing how to do it for bash and for /bin/sh.)
EDIT: In response to Rob Watt's suggestion:
% donothing () { echo $$; sleep 1000000 }
% donothing
47139
If at this point I hit Ctrl-C at the same terminal that is running the shell, then the function donothing does indeed terminate, and I get the command prompt back. But if instead, from a different shell session, I run
% kill -s INT 47139
...the donothing function does not terminate.
Maybe I don't fully understand what you want, but maybe something like this?
trap "stopme=1" 2
function longcycle() {
last=$1
for i in 1 2 3 4 5
do
[ ! -z "$stopme" ] && return
echo $i
sleep 1
done
}
stopme=""
echo "Start 1st cycle"
longcycle
echo "1st cycle end"
echo "2nd cycle"
stopme=""
longcycle
echo "2nd cycle end"
The above is for bash. Run it, and try pressing CTRL-C.
Or, non-interactively: save the above as, for example, my_command, then try:
$ ./my_command & #into background
$ kill -2 $! #send CTRL-C to the bg process
EDIT:
Solution for your sleep example in the bash:
$ donothing() { trap '[[ $mypid ]] && trap - 2 && kill $mypid' 0 2; sleep 1000000 & mypid=$!;wait; }
$ donothing
When you send a signal from another terminal, it will terminate it. Remember, signal '0' is "normal end of the process"; symbolic names: 0=EXIT, 2=INT, etc.
And remember too that signals are sent to processes, not to functions. In your example the process is the current (interactive) shell, so you must use the wait trick to get something interruptible... Not a nice solution, but it's the only way to interrupt something that is running in the interactive shell (not in a forked one) from another terminal...

starting remote script via ssh containing nohup

I want to start a script remotely via ssh like this:
ssh user@remote.org -t 'cd my/dir && ./myscript data my@email.com'
The script does various things which work fine until it comes to a line with nohup:
nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &
It is supposed to start the program myprog, redirect its output to my.log, and send an email with some data files created by myprog as attachments and the log as the body. However, when the script reaches this line, ssh outputs:
Connection to remote.org closed.
What is the problem here?
Thanks for any help
Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:
nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &
so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with file descriptors - for instance, mutt may have other issues with the terminal not being around - or the quoting may need tweaking depending on the parameters - but give that a try for now...)
This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:
Redirect all I/O on the remote nohup'ed command
Tell your local SSH command to exit as soon as it's done starting the remote process(es).
Quoting the answer I already mentioned, which in turn quotes Wikipedia:
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
UPDATE
I've just had success with this pattern:
ssh -f user@host 'sh -c "( (nohup command-to-nohup 2>&1 >output.file </dev/null) & )"'
Managed to solve this for a use case where I needed to start backgrounded scripts remotely via ssh, using a technique similar to other answers here but in a way I feel is simpler and cleaner (at least, it makes my code shorter and, I believe, better-looking): explicitly closing all three streams using the stream-close redirection syntax, as discussed at the following locations:
https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs
https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21
http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
This is instead of the more widely used but (IMHO) hackier "redirect to/from /dev/null", and it results in the deceptively simple:
nohup script.sh >&- 2>&- <&-&
2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly more clear. ;) Most people might have a space preceding the final "background job" ampersand, but since it is not required (as the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
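Applied to the ssh use case above, the whole thing would look something like this (just a sketch, untested against your setup):
ssh -f user@host 'cd my/dir && nohup ./myscript data my@email.com >&- 2>&- <&- &'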
