What's the effect of combining exec and & in a shell script? (bash)

I maintain some legacy Linux shell scripts, and I came across something like this:
#!/bin/sh
foo()
{
    exec some_shell_command &
    return 0
}
foo
I'm very curious about the effect of such a script. Is some_shell_command executed in another subprocess? And after the exec command runs, does the shell script's process become the some_shell_command process?
Thanks in advance.
Update:
The script is:
exec /mnt/usr/bin/pppd $DIAL_DEV unit $count call $PROVIDER ipparam $PROVIDER &
and at some point:
# Shutdown ppp connection.
pppOff() {
    # Get device index.
    local index=$1
    # Check connection.
    pppCheck $index
    if [ $? -ne 0 ]; then
        echo "invalid pppd: "$index
        return 1
    fi
    # Get pid.
    local PID=$PPPD_PID
    echo "pppd pid for "$index": "$PID
    # Kill
    kill -TERM ${PID}
    return 0
}
After executing pppOff, the script itself is killed. So maybe pppd is running as the same process as the script?

Is some_shell_command executed in another subprocess?
Yes.
after the execution of exec command, does shell script process become the some_shell_command process?
There are two processes: the child spawned for the background job becomes some_shell_command via exec, while the parent continues execution.
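A minimal sketch to see this for yourself (sleep 30 merely stands in for some_shell_command):
#!/bin/sh
foo()
{
    exec sleep 30 &
    echo "child pid: $!"   # this child has already replaced itself with sleep
    return 0
}
foo
echo "script pid: $$"      # the parent script is still alive here
Inspecting the child PID with ps should show sleep, not sh, confirming that only the background child was replaced.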
does it mean 'exec' is meaningless?
It has very little meaning in this specific context. Generally, you should expect Bash to perform this optimization on its own: if Bash determines that only a single command remains to be executed, it replaces the fork-and-exec pair with a plain exec.
$ strace -ff bash -c '/bin/echo 1' 2>&1 | grep clone
# nothing, because `fork()` is optimized out
There are cases (see https://github.com/bminor/bash/blob/f3a35a2d601a55f337f8ca02a541f8c033682247/builtins/evalstring.c#L441 and https://github.com/bminor/bash/blob/f3a35a2d601a55f337f8ca02a541f8c033682247/builtins/evalstring.c#L124 ) where the command is not optimized, mostly when a trap or some other signal handling needs to run in Bash after the command is done.
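For example (illustrative; the exact trace depends on your Bash build and libc), adding an EXIT trap forces Bash to outlive the command, so the fork is no longer optimized away:
$ strace -ff bash -c 'trap "echo done" EXIT; /bin/echo 1' 2>&1 | grep clone
# a clone() call shows up, because Bash must survive to run the trap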
Another difference is that exec specifically requires an executable, whereas without exec, some_shell_command could also be a builtin, a function, or an alias.
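A quick illustration of that difference (the function name greet is made up for this example; the exec runs in a subshell so the failure doesn't kill your shell):
greet() { echo "hello from a function"; }
greet            # runs the shell function
( exec greet )   # fails with "exec: greet: not found" unless an executable
                 # named greet exists in PATH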

Related

Bash - Hiding a command but not its output

I have a bash script (this_script.sh) that invokes multiple instances of another TCL script.
set -m
for vars in $( cat vars.txt ); do
    exec tclsh8.5 the_script.tcl "$vars" &
done
while [ 1 ]; do fg 2> /dev/null; [ $? == 1 ] && break; done
The multi threading portion was taken from Aleksandr's answer on: Forking / Multi-Threaded Processes | Bash.
The script works perfectly (I'm still trying to figure out the last line). However, this line is always displayed: exec tclsh8.5 the_script.tcl "$vars"
How do I hide that line? I tried running the script as :
bash this_script.sh > /dev/null
But this hides the output of the invoked tcl scripts too (I need the output of the TCL scripts).
I tried adding a /dev/null redirection to the end of the statement inside the for loop, but that did not work either. Basically, I am trying to hide the command but not the output.
You should use $! to get the PID of the background process just started, accumulate those in a variable, and then wait for each of those in turn in a second for loop.
set -m
pids=""
for vars in $( cat vars.txt ); do
    tclsh8.5 the_script.tcl "$vars" &
    pids="$pids $!"
done
for pid in $pids; do
    wait $pid
    # Ought to look at $? for failures, but there's no point in not reaping them all
done

Quit from pipe in bash

For the following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but didn't quit to the bash prompt, then if I input something into /tmp/report, I could quit from this script and get into bash prompt.
I think that's reasonable. the 'exit' make the 'while' statement quit, but the 'tail' still alive. If something input into /tmp/report, the 'tail' will output to pipe, then 'tail' will detect the pipe is close, then 'tail' quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything to the while statement to quit the whole pipeline immediately? I know I could save the PID of tail in a temporary file, read that file in the while loop, and then kill tail. Is there a simpler way?
Let me broaden my question. If this tail|while is used in a script file, is it possible to satisfy all of the following at once?
a. If Ctrl-C is pressed or the main shell process is signaled, the main shell and the various subshells and background processes spawned by it all quit.
b. I can quit the tail|while pipeline on a specific trigger while the other subprocesses keep running.
c. Preferably without using a temporary file or named pipe.
You're correct. The while loop is executing in a subshell because it is part of a pipeline, and exit just exits from that subshell.
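You can see that subshell behaviour in isolation (a minimal sketch; the status 7 is arbitrary):
echo hi | { read line; exit 7; }
echo "still in the outer shell, pipeline status: $?"   # prints 7
Only the pipeline stage exits; the outer shell carries on.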
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid sending the signal to the script itself, it would be wise to enable monitoring and run the pipeline in the background, ensuring that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).

executing bash loop while command is running

I want to build a bash script that executes a command and meanwhile performs other stuff, with the possibility of killing the command if the script is killed. Say, it runs a cp of a large file while printing the elapsed time since the copy started, but if the script is killed it also kills the copy.
I don't want to use rsync, for two reasons: 1) it is slow, and 2) I want to learn how to do this myself; it could be useful.
I tried this:
until cp SOURCE DEST
do
    # evaluates time, stuff, commands, file dimensions, not important now
    # and echoes something
done
but it doesn't execute the do...done block, as it waits for the copy to end. Could you please suggest something?
until is the opposite of while. It has nothing to do with doing things while another command runs. For that, you need to run your task in the background with &.
cp SOURCE DEST &
pid=$!
# If this script is killed, kill the `cp'.
trap "kill $pid 2> /dev/null" EXIT
# While copy is running...
while kill -0 $pid 2> /dev/null; do
    # Do stuff
    ...
    sleep 1
done
# Disable the trap on a normal exit.
trap - EXIT
kill -0 checks if a process is running. Note that it doesn't actually signal the process and kill it, as the name might suggest. Not with signal 0, at least.
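A small demonstration of that (sleep 2 merely stands in for a real task):
sleep 2 &
pid=$!
kill -0 $pid 2> /dev/null && echo "process $pid is running"
wait $pid
kill -0 $pid 2> /dev/null || echo "process $pid is gone"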
There are three steps involved in solving your problem:
Execute a command in the background, so it will keep running while your script does something else. You can do this by following the command with &. See the section on Job Control in the Bash Reference Manual for more details.
Keep track of that command's status, so you'll know if it is still running. You can do this with the special variable $!, which is set to the PID (process identifier) of the last command you ran in the background, or empty if no background command was started. Linux creates a directory /proc/$PID for every process that is running and deletes it when the process exits, so you can check for the existence of that directory to find out if the background command is still running. You can learn more than you ever wanted to know about /proc from the Linux Documentation Project's File System Hierarchy page or Advanced Bash-Scripting Guide.
Kill the background command if your script is killed. You can do this with the trap command, which is a bash builtin command.
Putting the pieces together:
# Look for the 4 common signals that indicate this script was killed.
# If the background command was started, kill it, too.
trap '[ -z $! ] || kill $!' SIGHUP SIGINT SIGQUIT SIGTERM
cp $SOURCE $DEST & # Copy the file in the background.
# The /proc directory exists while the command runs.
while [ -e /proc/$! ]; do
    echo -n "." # Do something while the background command runs.
    sleep 1     # Optional: slow the loop so we don't use up all the dots.
done
Note that we check the /proc directory to find out if the background command is still running, because kill -0 will generate an error if it's called when the process no longer exists.
Update to explain the use of trap:
The syntax is trap [arg] [sigspec …], where sigspec … is a list of signals to catch, and arg is a command to execute when any of those signals is raised. In this case, the command is a list:
'[ -z $! ] || kill $!'
This is a common bash idiom that takes advantage of the way || is processed. An expression of the form cmd1 || cmd2 will evaluate as successful if either cmd1 OR cmd2 succeeds. But bash is clever: if cmd1 succeeds, bash knows that the complete expression must also succeed, so it doesn't bother to evaluate cmd2. On the other hand, if cmd1 fails, the result of cmd2 determines the overall result of the expression. So an important feature of || is that it will execute cmd2 only if cmd1 fails. That means it's a shortcut for the (invalid) sequence:
if cmd1; then
# do nothing
else
cmd2
fi
With that in mind, we can see that
trap '[ -z $! ] || kill $!' SIGHUP SIGINT SIGQUIT SIGTERM
will test whether $! is empty (which means the background task was never executed). If that fails, which means the task was executed, it kills the task.
Here is the simplest way to do that, using ps -p:
[command_1_to_execute] &
pid=$!
while ps -p $pid &>/dev/null; do
    [command_2_to_be_executed meanwhile command_1 is running]
    sleep 10
done
This will run command_2 every 10 seconds for as long as command_1 is still running in the background.
hope this will help you :)
What you want is to do two things at once in shell. The usual way to do that is with a job. You can start a background job by ending the command with an ampersand.
cp $SOURCE $DEST &
You can then use the jobs command to check its status.
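For example (a minimal sketch; %1 refers to the first entry on the shell's job list):
cp "$SOURCE" "$DEST" &
jobs        # lists the running background job
wait %1     # block until job 1 finishes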
Read more:
Gnu Bash Job Control

How can I run multiple bash scripts in unison?

I'm learning Bash for a Unix class, and I'm trying to figure out how to run a script, then run a second script while the first is running and have the two interact. To clarify, the scripts look like this:
#!/bin/bash
num = 1
trap exit 0 SIGINT SIGTERM
trap "{ echo &num ; num++; }" SIGUSR1
while :
do
    sleep 2
done
and the second one:
#!/bin/bash
if ps | grep "$1" > /dev/null
then
    kill -SIGUSR1 $1
else
    echo "Process doesn't exist"
fi
exit 0
In case the code isn't correct, the general idea is for the first script to loop until it receives a SIGINT or SIGTERM, and to echo and increment a number whenever it receives a SIGUSR1. The second script takes a PID as an argument, checks whether it exists, and sends a SIGUSR1 to the given process. The problem is that when I run the first script, I can't do anything unless I move it to the background with Ctrl-Z, but when it's there it doesn't seem to respond to any signal except a kill signal. Any ideas on how to make this work?
You can use mycommand & to run a script in the background. Ctrl-Z stops the script, but you can then use bg to let it run in the background. In either case, you can use fg to bring it to the foreground again.
Also note that you can't have spaces around the = in assignments, and you can use let num++ to increment num. You should also single-quote the command given to trap, to prevent "$num" from being expanded when the trap is set.
All in all:
#!/bin/bash
num=1
trap exit 0 SIGINT SIGTERM
trap '{ echo $num ; let num++; }' SIGUSR1
while :
do
    sleep 2
done
Finally, you can more easily check whether a PID exists by just using kill -0 pid, or by simply attempting to send it SIGUSR1 and checking the result, to avoid grep "123" matching the substring of PID "1234" and the like.
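Applied to the second script, that could look like this (a sketch, not the only way to do it):
#!/bin/bash
if kill -0 "$1" 2> /dev/null
then
    kill -SIGUSR1 "$1"
else
    echo "Process doesn't exist"
fi
exit 0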
You need to make the first script run in the background. When you press Ctrl+Z it is suspended. Then you can type "bg" to make it run in the background (it will stop again if it tries to read from standard input, to allow you to switch back to it with the "fg" command).
Another way is to start script1 already in the background like this:
$ ./script1 &
The ampersand starts a job in the background and returns you to the prompt immediately.
Look in the bash man page under "JOB CONTROL" for more information on how this works. The key commands for dealing with jobs from an interactive shell are "jobs", "fg", and "bg".

Close pipe even if subprocesses of the first command are still running in the background

Suppose I have test.sh as below. The intent is for this script to run some background task(s) that continuously update some file. If the background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
and I want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in any stderr from the commands in the while loop. Adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, for example 1 (standard output), you can do it with:
exec 1<&-
This is valid for POSIX shells.
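For instance (illustrative; fd 3 is an arbitrary spare descriptor used here to keep a channel for the final message):
exec 3>&1      # save a copy of stdout on fd 3
exec 1<&-      # close stdout
echo hi 2> /dev/null || echo "stdout is closed" >&3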
When you put the while loop in an explicit subshell and run the subshell in the background it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done) &
