I have two bash scripts:
a.sh:
echo "running"
doit=true
if [ $doit = true ];then
./b.sh &
fi
some-long-operation-binary
echo "done"
b.sh:
for i in {0..50}; do
echo "counting";
sleep 1;
done
I get this output:
> ./a.sh
running
counting
Why do I only see the first "counting" from b.sh and then nothing anymore? (Currently some-long-operation-binary is just sleep 5 for this example.) I first thought that, because b.sh is put in the background, its STDOUT is lost, but then why do I see the first output? More importantly: is b.sh still running and doing its thing (its iterations)?
For context:
b.sh is going to poll a service provided by some-long-operation-binary, which only becomes available after the latter has been running for some time, and when it is ready, b.sh will write its content to a file.
Apologies if this is just rubbish, it's a bit late...
You should add #!/bin/bash (or similar) to b.sh, since it uses a Bash-specific brace expansion ({0..50}), to make sure Bash is actually running the script. Otherwise there may indeed be only one loop iteration, because a plain sh leaves {0..50} unexpanded.
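For example, b.sh from the question with just the shebang added:
#!/bin/bash
# with a Bash shebang, {0..50} expands to 0 1 2 ... 50 as intended
for i in {0..50}; do
    echo "counting"
    sleep 1
done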
When you start a background process, it is usually good practice to kill it and wait for it no matter how the script exits, for example:
#!/bin/bash
set -e -o pipefail
declare -i show_counter=1
counter() {
    local -i i
    for ((i = 0;; ++i)); do
        echo "counting $((i))"
        sleep 1
    done
}
echo starting
if ((show_counter)); then
    counter &
    declare -i counter_pid="${!}"
    # whatever way the script exits, stop the counter and reap it
    trap 'kill "${counter_pid}"
          wait -n "${counter_pid}" || :
          echo terminating' EXIT
fi
sleep 10 # long-running process
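As for the side question (is b.sh still running and doing its iterations?): you can check from another terminal. A quick way, assuming a procps-style pgrep, or plain ps as a portable fallback:
pgrep -af b.sh            # PIDs and command lines of any running b.sh
ps aux | grep '[b]\.sh'   # portable alternative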
I know I can run my bash script in the background by using bash script.sh & disown or alternatively, by using nohup. However, I want to run my script in the background by default, so when I run bash script.sh or after making it executable, by running ./script.sh it should run in the background by default. How can I achieve this?
Self-contained solution:
#!/bin/bash
# Re-spawn as a background process, if we haven't already.
# (bash, not plain sh, is needed here for [[ ]] and the {0..10} expansion)
if [[ "$1" != "-n" ]]; then
    nohup "$0" -n &
    exit $?
fi
# Rest of the script follows. This is just an example.
for i in {0..10}; do
    sleep 2
    echo $i
done
The if statement checks whether the -n flag has been passed. If not, it calls itself with nohup (to disassociate from the calling terminal, so closing it doesn't close the script) and & (to put the process in the background and return to the prompt). The parent then exits, leaving the background version to run. The background version is explicitly called with the -n flag, so it won't cause an infinite loop (which is hell to debug!).
The for loop is just an example. Use tail -f nohup.out to see the script's progress.
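If nohup.out is not wanted, a variation (a sketch; the log file name is just an example) is to redirect explicitly when re-spawning:
#!/bin/bash
if [[ "$1" != "-n" ]]; then
    nohup "$0" -n >"${0##*/}.log" 2>&1 &
    exit $?
fi
# rest of the script; its output now goes to <scriptname>.log in the current directory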
Note that I pieced this answer together with this and this but neither were succinct or complete enough to be a duplicate.
Simply write a wrapper that calls your actual script with nohup actualScript.sh &.
Wrapper script wrapper.sh
#! /bin/bash
nohup ./actualScript.sh &
Actual script in actualScript.sh
#! /bin/bash
for i in {0..10}
do
    sleep 10    # script is running; test with ps -eaf | grep actualScript
    echo $i
done
tail -f nohup.out
0
1
2
3
4
...
Adding to Heath Raftery's answer, what worked for me is a variation of what he suggested, such as this:
if [[ "$1" != "-n" ]]; then
$0 -n & disown
exit $?
fi
I have a bash script (this_script.sh) that invokes multiple instances of another TCL script.
set -m
for vars in $( cat vars.txt );
do
    exec tclsh8.5 the_script.tcl "$vars" &
done
while [ 1 ]; do fg 2> /dev/null; [ $? == 1 ] && break; done
The multi-threading portion was taken from Aleksandr's answer on Forking / Multi-Threaded Processes | Bash.
The script works perfectly (still trying to figure out the last line). However, this line is always displayed: exec tclsh8.5 the_script.tcl "$vars"
How do I hide that line? I tried running the script as :
bash this_script.sh > /dev/null
But this hides the output of the invoked tcl scripts too (I need the output of the TCL scripts).
I tried adding the /dev/null redirection to the end of the statement within the for loop, but that did not work either. Basically, I am trying to hide the command, but not the output.
You should use $! to get the PID of the background process just started, accumulate those in a variable, and then wait for each of those in turn in a second for loop.
set -m
pids=""
for vars in $( cat vars.txt ); do
    tclsh8.5 the_script.tcl "$vars" &
    pids="$pids $!"
done
for pid in $pids; do
    wait $pid
    # Ought to look at $? for failures, but there's no point in not reaping them all
done
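If the failures do matter, a small variation of the second loop (a sketch building on the comment above) would be:
failures=0
for pid in $pids; do
    wait "$pid" || failures=$((failures + 1))
done
echo "done, $failures job(s) failed" >&2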
I have a little problem; it's probably a stupid question, but I started learning bash about a week ago...
I have 2 scripts, a.sh and b.sh. I need both to run constantly. b.sh should wait for a signal from a.sh
(I'm trying to explain:
a.sh and b.sh run --> a.sh sends a signal to b.sh --> b.sh traps signal, does something --> a.sh does something else and then sends another signal --> b.sh traps signal, does something --> etc.)
This is what I've tried:
a.sh:
#!/bin/bash
./b.sh &
bpid=$!;
# do something.....
while true
do
    #do something....
    if [ condition ]
    then
        kill -SIGUSR1 $bpid;
    fi
done
b.sh:
#!/bin/bash
while true
do
    trap "echo I'm here;" SIGUSR1;
done
When I run a.sh I get no output from b.sh, even if I redirect the standard output to a file...
However, when I run b.sh in the background from my bash shell, it seems to respond to my SIGUSR1 (sent with the same kill command, directly from the shell), and I get the right output.
What am I missing?
EDIT:
this is a simple example that I'm trying to run:
a.sh:
#!/bin/bash
./b.sh &
lastpid=$!;
if [ "$1" == "something" ]
then
kill -SIGUSR1 $lastpid;
fi
b.sh:
#!/bin/bash
trap "echo testlog 1>temp" SIGUSR1;
while true
do
    wait
done
I can't get the file "temp" when running a.sh.
However, if I execute ./b.sh & and then kill -SIGUSR1 PIDOFB manually, everything works fine...
One possible solution would be the following (perhaps a dirty one, but it works):
a.sh:
#!/bin/bash
BPIDFILE=b.pid
echo "a.sh: started"
echo "a.sh: starting b.sh.."
./b.sh &
sleep 1
BPID=`cat $BPIDFILE`
echo "a.sh: ok; b.sh pid: $BPID"
if [ "$1" == "something" ]; then
kill -SIGUSR1 $BPID
fi
# cleaning up..
rm $BPIDFILE
echo "a.sh: quitting"
b.sh:
#!/bin/bash
BPIDFILE=b.pid
trap 'echo "got SIGUSR1" > b.log; echo "b.sh: quitting"; exit 0' SIGUSR1
echo "b.sh: started"
echo "b.sh: writing my PID to $BPIDFILE"
echo $$ > $BPIDFILE
while true; do
    sleep 3
done
The idea is simply to write the PID value from within the b (background) script and read it from the a (main) script.
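If the PID file feels too heavy, an alternative sketch (untested, names follow the question) keeps the $! approach from the original attempt; the only real requirement is that b.sh installs its trap before a.sh sends the signal, which the sleep 1 crudely ensures here as well:
a.sh:
#!/bin/bash
./b.sh &
bpid=$!
sleep 1                      # crude: give b.sh time to install its trap
if [ "$1" == "something" ]; then
    kill -SIGUSR1 "$bpid"    # b.sh traps this, writes its log and exits
else
    kill "$bpid"             # nothing to signal, just stop b.sh
fi
wait "$bpid"                 # don't exit before b.sh has handled the signal
b.sh:
#!/bin/bash
trap 'echo "got SIGUSR1" > b.log; exit 0' SIGUSR1
while true; do
    sleep 3
done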
I have two shell scripts, say A and B. I need to run A in the background and run B in the foreground until A finishes its execution in the background. I need to repeat this process for a couple of runs, hence once A finishes, I need to stop the current iteration and move to the next iteration.
Rough idea is like this:
for ((i = 0; i < 10; i++))
do
    ./A.sh &
    for ((c = 0; c < C_MAX; c++))
    do
        ./B.sh
    done
    continue
done
How do I use 'wait' and 'continue' so that B runs as many times as needed while A is in the background, and the entire process moves to the next iteration once A finishes?
Use the PID of the current background process:
./A.sh &
while ps -p $! >/dev/null; do
    ./B.sh
done
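Much the same can be written with kill -0, which tests for the existence of a process without sending it a signal (a sketch, using the same A.sh/B.sh names):
./A.sh &
a_pid=$!
while kill -0 "$a_pid" 2>/dev/null; do
    ./B.sh
done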
I am just translating your rough idea into bash scripting.
The core idea to implement the wait-continue mechanism (while ps -p $A_PID >/dev/null; do ...) is taken from @thiton, who posted an answer to your question earlier.
for i in `seq 0 10`
do
    ./A.sh &
    A_PID=$!
    for c in `seq 0 $C_MAX`
    do
        ./B.sh
    done
    while ps -p $A_PID >/dev/null; do
        sleep 1
    done
done
I have a pair of shell programs that talk over a named pipe. The reader creates the pipe when it starts, and removes it when it exits.
Sometimes, the writer will attempt to write to the pipe between the time that the reader stops reading and the time that it removes the pipe.
reader: while condition; do read data <$PIPE; do_stuff; done
writer: echo $data >>$PIPE
reader: rm $PIPE
When this happens, the writer hangs forever trying to open the pipe for writing.
Is there a clean way to give it a timeout, so that it won't stay hung until killed manually? I know I can do
#!/bin/sh
# timed_write <timeout> <file> <args>
# like "echo <args> >> <file>" with a timeout
TIMEOUT=$1
shift;
FILENAME=$1
shift;
PID=$$
(X=0; # don't do "sleep $TIMEOUT", the "kill %1" doesn't kill the sleep
while [ "$X" -lt "$TIMEOUT" ];
do sleep 1; X=$(expr $X + 1);
done; kill $PID) &
echo "$#" >>$FILENAME
kill %1
but this is kind of icky. Is there a shell builtin or command to do this more cleanly (without breaking out the C compiler)?
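If GNU coreutils' timeout command is available (it may not have been when this was written), the timed write reduces to something like this; the 5-second limit is just an example, and $data and $PIPE are the writer's variables from above:
# the echo runs in a child shell; if opening $PIPE blocks for more than
# 5 seconds because no reader exists, timeout kills it (exit status 124)
timeout 5 sh -c 'echo "$1" >> "$2"' sh "$data" "$PIPE"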
The UNIX "standard" way of dealing with this is to use Expect, which comes with timed-run example: run a program for only a given amount of time.
Expect can do wonders for scripting, well worth learning it. If you don't like Tcl, there is a Python Expect module as well.
This pair of programs works much more nicely after being re-written in Perl using Unix domain sockets instead of named pipes. The particular problem in this question went away entirely, since if/when one end dies the connection disappears instead of hanging.
This question comes up periodically (though I couldn't find it with a search). I've written two shell scripts to use as timeout commands: one for things that read standard input and one for things that don't read standard input. This stinks, and I've been meaning to write a C program, but I haven't gotten around to it yet. I'd definitely recommend writing a timeout command in C once and for all. But meanwhile, here's the simpler of the two shell scripts, which hangs if the command reads standard input:
#!/bin/ksh
# our watchdog timeout in seconds
maxseconds="$1"
shift
case $# in
0) echo "Usage: `basename $0` <seconds> <command> [arg ...]" 1>&2 ;;
esac
"$#" &
waitforpid=$!
{
sleep $maxseconds
echo "TIMED OUT: $#" 1>&2
2>/dev/null kill -0 $waitforpid && kill -15 $waitforpid
} &
killerpid=$!
>>/dev/null 2>&1 wait $waitforpid
# this is the exit value we care about, so save it and use it when we exit
rc=$?
# zap our watchdog if it's still there, since we no longer need it
2>>/dev/null kill -0 $killerpid && kill -15 $killerpid
exit $rc
The other script is online at http://www.cs.tufts.edu/~nr/drop/timeout.
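Usage looks like this, assuming the script above is saved as timeout.ksh and made executable:
./timeout.ksh 5 sleep 60    # killed by the watchdog after about 5 seconds
./timeout.ksh 5 sleep 1     # finishes normally; its exit status is passed through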
# Poor man's timeout using at(1): $1 is a time specification understood by at,
# the remaining arguments are the command to run.
trap 'kill $(ps -L $! -o pid=); exit 30' 30         # on signal 30: kill the job, exit 30
echo kill -30 $$ 2\>/dev/null | at $1 2>/dev/null   # schedule signal 30 to ourselves
shift; eval "$@" &                                  # run the command in the background
wait                                                # wait for it (or for the watchdog)