I have two programs, prog1 and prog2. prog2 takes the PID of prog1 as an argument, and the two have to run in parallel.
I need stdin and stdout of prog1 to stay connected to the shell, note that it's not an interactive shell.
I tried it like this, but stdin of prog1 is not connected to the shell:
#!/bin/bash
./prog1 & (./prog2 $! 1>&2 0<&-)
fg
From the bash manual:
If a command is terminated by the control operator ‘&’, the shell executes the command asynchronously in a subshell. This is known as executing the command in the background, and these are referred to as asynchronous commands. The shell does not wait for the command to finish, and the return status is 0 (true). When job control is not active (see Job Control), the standard input for asynchronous commands, in the absence of any explicit redirections, is redirected from /dev/null.
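You can see that behavior in a quick sketch (run non-interactively, so job control is off):
bash -c 'read -r line & wait $!; echo "read exited with status $?"'
# prints "read exited with status 1": the background read got /dev/null
# as stdin and hit EOF immediately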
So let's make some explicit redirections! Given the following scripts:
==> prog1 <==
#!/bin/bash
set -x
read data
echo "$data"
==> prog2 <==
#!/bin/bash
while kill -0 "$1"; do sleep 0.1; done
==> script.sh <==
#!/bin/bash
set -x
./prog1 <&0 &
# ^^^^ - the important part
./prog2 $! >&2 <&-
wait
Executing ./script.sh results in:
$ LC_ALL=C ./script.sh
+ ./prog2 1835980
+ ./prog1
+ read data
bla
+ echo bla
bla
./prog2: line 2: kill: (1835980) - No such process
+ wait
The kill: (1835980) - No such process error is just prog2's loop condition failing once prog1 has exited; that is what ends its polling loop.
Appending an & to the end of a command starts it in the background. E.g.:
$ wget google.com &
[1] 7072
However, this prints a job number and PID. Is it possible to prevent these?
Note: I still want to keep the output of wget – it's just the [1] 7072 that I want to get rid of.
There is an option to the set builtin, set -b, that controls when the status line for a finished job is printed, but the choice is limited to "immediately" (when set) and "before the next prompt" (when unset).
Example of immediate printing when the option is set:
$ set -b
$ sleep 1 &
[1] 9696
$ [1]+ Done sleep 1
And the usual behaviour, waiting for the next prompt:
$ set +b
$ sleep 1 &
[1] 840
$ # Press enter here
[1]+ Done sleep 1
So as far as I can see, these can't be suppressed. The good news is, though, that job control messages are not displayed in a non-interactive shell:
$ cat sleeptest
#!/bin/bash
sleep 1 &
$ ./sleeptest
$
So if you start a command in the background in a subshell, there won't be any messages. To do that in an interactive session, you can run your command in a subshell like this (thanks to David C. Rankin):
$ ( sleep 1 & )
$
which also results in no job control prompts.
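One wrinkle: the subshell hides $! from the parent shell. If you still need the child's PID, here is a sketch that echoes it back out of the subshell:
pid=$( ( wget google.com >/dev/null & echo $! ) )
echo "wget is running as PID $pid"   # no [1] 7072 message appears
The >/dev/null matters: wget's progress output goes to stderr and still reaches the terminal, but its stdout must not stay attached to the command substitution pipe, or $( ) would wait for wget to exit.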
From the Advanced Bash-Scripting Guide:
Suppressing stdout.
cat $filename >/dev/null
# Contents of the file will not list to stdout.
Suppressing stderr (from Example 16-3).
rm $badname 2>/dev/null
# So error messages [stderr] deep-sixed.
Suppressing output from both stdout and stderr.
cat $filename 2>/dev/null >/dev/null
# If "$filename" does not exist, there will be no error message output.
# If "$filename" does exist, the contents of the file will not list to stdout.
# Therefore, no output at all will result from the above line of code.
#
# This can be useful in situations where the return code from a command
#+ needs to be tested, but no output is desired.
#
# cat $filename &>/dev/null
# also works, as Baris Cicek points out.
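The "test the return code, discard the output" case mentioned in that comment looks like this in practice (a minimal sketch):
if cat "$filename" >/dev/null 2>&1; then
    echo "file exists and is readable"
else
    echo "file is missing or unreadable"
fi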
I have a bash script (this_script.sh) that invokes multiple instances of a Tcl script.
set -m
for vars in $( cat vars.txt );
do
exec tclsh8.5 the_script.tcl "$vars" &
done
while [ 1 ]; do fg 2> /dev/null; [ $? == 1 ] && break; done
The multi-threading portion was taken from Aleksandr's answer on Forking / Multi-Threaded Processes | Bash.
The script works perfectly (I'm still trying to figure out the last line). However, this line is always displayed: exec tclsh8.5 the_script.tcl "$vars"
How do I hide that line? I tried running the script as :
bash this_script.sh > /dev/null
But this hides the output of the invoked tcl scripts too (I need the output of the TCL scripts).
I tried adding a /dev/null redirection to the statement inside the for loop, but that did not work either. Basically, I am trying to hide the command but not the output.
The line is displayed by fg, which prints the command line of each job it brings to the foreground. Instead, use $! to get the PID of each background process as it is started, accumulate those PIDs in a variable, and then wait for each of them in turn in a second for loop:
set -m
pids=""
for vars in $( cat vars.txt ); do
tclsh8.5 the_script.tcl "$vars" &
pids="$pids $!"
done
for pid in $pids; do
wait $pid
# Ought to look at $? for failures, but there's no point in not reaping them all
done
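If you do want to act on failures, a sketch of a second loop that reaps every job but remembers whether any of them failed:
failed=0
for pid in $pids; do
    wait "$pid" || failed=1   # record any non-zero exit, but keep reaping the rest
done
exit "$failed"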
For the following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but didn't quit to the bash prompt, then if I input something into /tmp/report, I could quit from this script and get into bash prompt.
I think that's reasonable. the 'exit' make the 'while' statement quit, but the 'tail' still alive. If something input into /tmp/report, the 'tail' will output to pipe, then 'tail' will detect the pipe is close, then 'tail' quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything to the while loop so that the whole pipeline quits immediately? I know I could save the PID of tail in a temporary file, read that file inside the while loop, and kill the tail. Is there a simpler way?
Let me broaden my question. If I use this tail|while in a script file, is it possible to satisfy all of the following at once?
a. If Ctrl-C is pressed or the main shell process is signaled, the main shell and the various subshells and background processes it spawned should all quit.
b. I can quit the tail|while on a specific trigger while the other subprocesses keep running.
c. Preferably without using a temporary file or named pipe.
You're correct. The while loop executes in a subshell because it is part of a pipeline, and exit just exits that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report ;}
while [ 1 ]
do
echo "pre"
break
echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report >$pipe &
TAIL_PID=$!
while [ 1 ]
do
echo "pre"
break
echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
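If the script can be interrupted between mkfifo and rm, a trap makes the cleanup unconditional (a sketch):
trap 'kill $TAIL_PID 2>/dev/null; rm -f "$pipe"' EXIT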
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
echo "pre"
sh -c 'PGID=$( ps -o pgid= $$ | tr -d " " ); kill -TERM -$PGID'
echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signaling the script itself, it would be wise to enable monitoring and run the pipeline in the background to ensure that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
echo "pre"
sh -c 'PGID=$( ps -o pgid= $$ | tr -d " " ); kill -TERM -$PGID'
echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).
I have a main shell script, main.sh, and numerous background scripts that it invokes: a.sh, b.sh, c.sh...
I call these scripts from main.sh as follows:
#main.sh
./a.sh &
./b.sh &
./c.sh &
Each of the background scripts has the following form:
#a.sh
echo "This is a process"
The scripts are almost identical. Their output messages are necessary for my task, but it is hard to tell which messages correspond to which process. I am thinking of prepending each process's PID to the messages it outputs.
The output would look like the following:
[123]This is a process
[234]This is a process
...
Thanks!
EDIT: It would be great to enable this feature in main.sh instead of modifying the scripts themselves, since I only want to see the prefixes sometimes, for debugging.
The variable $$ contains the PID of the current shell, so inside each script it expands to that script's own PID.
echo "[$$]This is a process"
Why don't you just launch them like so?
./a.sh | sed 's/^/[a] /' &
./b.sh | sed 's/^/[b] /' &
./c.sh | sed 's/^/[c] /' &
or, if you really want to include a PID, beware that $! expands to the PID of the most recently started background job, so at the time each of these lines is expanded it still refers to the previous line's job (or to nothing at all on the first line):
./a.sh | sed "s/^/[$!] /" &
./b.sh | sed "s/^/[$!] /" &
./c.sh | sed "s/^/[$!] /" &
The $$ and $BASHPID approaches below report the real PID.
If it doesn't need to be the actual process ID, but just a unique identifier, you can set a known environment variable:
#main.sh
MY_ID=1 ./a.sh &
MY_ID=2 ./b.sh &
MY_ID=3 ./c.sh &
#a.sh
echo "[$MY_ID]This is a process"
In bash 4 or later, you can access the process ID of the current shell with $BASHPID, so you don't need to specify a MY_ID when you launch each background process.
#a.sh
echo "[$BASHPID]This is a process"
If you want to automatically prepend every line of output with the identifier, you'll need to define a wrapper for echo instead of using it directly:
my_echo () { echo "[$MY_ID]$*"; }
my_echo () { echo "[$BASHPID]$*"; }
You could create a bash wrapper function called "echo" and export it, so your background scripts use this overriding version:
ubuntu@ubuntu:~$ cat bgscript
#!/bin/bash
for i in {1..3}; do echo "i=$i"; done
ubuntu@ubuntu:~$ function echo { builtin echo "[$$]$@"; }
ubuntu@ubuntu:~$ export -f echo
ubuntu@ubuntu:~$ ./bgscript &
[1] 7290
ubuntu@ubuntu:~$ [7290]i=1
[7290]i=2
[7290]i=3
[1]+ Done ./bgscript
ubuntu@ubuntu:~$
You could achieve something like this using bash coprocesses (bash 4 required). The code would look like this:
prepend_pid() {
coproc "${#}"
pid=${!}
trap "kill ${pid}" EXIT
sed -e "s:^:${pid} :" </dev/fd/${COPROC[0]}
}
prepend_pid ./a.sh &
prepend_pid ./b.sh &
prepend_pid ./c.sh &
This is because you need to meet two goals:
You need to fork the script first to know its PID,
You need to redirect the script output before forking it.
A coprocess is basically a ./a.sh & but with the input and output redirected to anonymous pipes. The pipe fds are placed in the ${COPROC[@]} array. If you don't have a new enough bash, you can use named pipes instead (a rough sketch of that fallback appears at the end of this answer).
What is done here:
You start the script in a coprocess. Its output gets queued to a pipe that's not used yet.
You obtain the forked PID from ${!}.
You set a trap to kill the spawned script when the reader terminates.
You start a new sed process that reads the script output from the pipe at fd ${COPROC[0]} and prepends the pid to it.
Note that sed itself would block the script, so you need to fork it as well. And since it needs to access the pipe, you can't fork it separately. Instead, we fork the whole function.
The trap makes sure that the spawned script will be killed when you kill the spawned function. Thanks to that, you can do something like:
prepend_pid ./a.sh &
spid=$!
sleep 4
kill ${spid}
without worrying that ./a.sh will keep running in the background with no output.
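As for the named-pipe fallback mentioned above, a rough sketch for older bash (the mktemp -u fifo path is an assumption; adjust to taste):
prepend_pid_fifo() {
    fifo=$(mktemp -u) && mkfifo "$fifo" || return
    "$@" >"$fifo" &                 # the open blocks until sed opens the fifo for reading
    pid=$!
    trap 'kill $pid 2>/dev/null; rm -f "$fifo"' EXIT
    sed -e "s:^:$pid :" <"$fifo"
}
prepend_pid_fifo ./a.sh &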
I have a very simple unix bash script I am using to execute a command every second. It has the following form:
while : ; do
cat /proc/`pidof iBrowser.bin`/smaps | awk -f ./myawkscript.awk >> $DIRPATH
sleep 1
done
The script runs fine, but it won't stop! If I hit ctrl-C while the script is running, the process does not stop, and I get the following error:
cat: can't open '/proc//smaps': No such file or directory
Does anyone know how this can be avoided?
You should consider using a trap function.
To trap Ctrl-C, you'd define a handler, e.g.:
ctrl_c ()
{
# Handler for Control + C Trap
echo ""
echo "Control + C Caught..."
exit
}
And then state that you wish to trap it with that handler:
trap ctrl_c SIGINT
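Wired into your original loop, the whole script might look like this (a sketch):
#!/bin/bash
ctrl_c ()
{
    echo ""
    echo "Control + C Caught..."
    exit
}
trap ctrl_c SIGINT
while : ; do
    cat /proc/`pidof iBrowser.bin`/smaps | awk -f ./myawkscript.awk >> $DIRPATH
    sleep 1
done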
Alternatively...
you could run the script in the background by appending &, e.g.
$ ./your_script.sh &
Which would present you with a job id in [square brackets]:
$ ./your_script.sh &
[1] 5183
(in this case 1). When you were done, you could terminate the process with
$ kill %1
Note that the percent sign indicates you are referencing a job, not a process ID.
awk -f ./myawkscript.awk /proc/`pidof iBrowser.bin`/smaps >> $DIRPATH \
|| exit 1
will exit the script if the awk invocation fails, which happens when pidof returns nothing and the resulting /proc//smaps path does not exist. I've taken the liberty of removing your useless use of cat (UUOC).
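In the context of the original loop, that gives (a sketch):
while : ; do
    awk -f ./myawkscript.awk /proc/`pidof iBrowser.bin`/smaps >> $DIRPATH || exit 1
    sleep 1
done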