How can I wait for a file to finish being written in a shell script? - bash

I have a shell script called parent.sh which does some stuff, then goes off and calls another shell script child.sh which does some processing and writes some output to a file output.txt.
I would like the parent.sh script to only continue processing after that output.txt file has been written to. How can I know that the file has finished being written to?
Edit: Adding answers to questions:
Does child.sh finish writing to the file before it exits? Yes
Does parent.sh run child.sh in the foreground or the background? I'm not sure - it's being called from within parent.sh like this: ./child.sh "$param1" "$param2"

You need the wait builtin. wait blocks until all of the shell's background child processes have finished before continuing. (Note that a plain foreground call like ./child.sh "$param1" "$param2" already makes parent.sh wait for child.sh to exit; wait matters when the child is started in the background with &.)
parent.sh:
#!/bin/bash
rm -f output.txt # remove any stale output from a previous run
./child.sh &
# Wait for the child script to finish
#
wait
echo "output.txt:"
cat output.txt
child.sh:
#!/bin/bash
for x in $(seq 10); do
echo "$x" >&2 # progress to stderr, visible on the terminal
echo "$x" # data to stdout, redirected to output.txt
sleep 1
done > output.txt
Here is the output from ./parent.sh:
[sri@localhost ~]$ ./parent.sh
1
2
3
4
5
6
7
8
9
10
output.txt:
1
2
3
4
5
6
7
8
9
10
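If parent.sh must keep the original call with its parameters, here is a minimal sketch (the parameter names come from the question; the status check is my addition):
parent.sh:
#!/bin/bash
rm -f output.txt
./child.sh "$param1" "$param2" & # run the child in the background
child_pid=$!
wait "$child_pid" # block until child.sh has exited and output.txt is complete
echo "child.sh exited with status $?"
cat output.txt
Since wait returns the child's exit status, parent.sh can also detect a failed run before trying to read output.txt.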

Related

Bash: check job is finished

Theoretical question: can someone explain why jobs keeps printing Done after the job has already finished?
root@test:~# cat 1.sh
#!/bin/bash
sleep 5 &
while true; do
echo $(jobs) && sleep 1
done
root@test:~# ./1.sh
[1]+ Running sleep 5 &
[1]+ Running sleep 5 &
[1]+ Running sleep 5 &
[1]+ Running sleep 5 &
[1]+ Running sleep 5 &
[1]+ Done sleep 5
[1]+ Done sleep 5
[1]+ Done sleep 5
[1]+ Done sleep 5
^C
GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu)
Because job control is disabled in scripts, bash ignores SIGCHLD and is not notified (and does not want to be notified) about terminating background processes.
Because jobs is executed inside a subshell here ($(jobs)), the parent shell's environment never learns that the child's exit status has already been checked and the job reaped. Each loop iteration creates a fresh subshell whose inherited copy of the job table still contains the finished job, so the Done message is printed again.
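A quick way to see the contrast (a minimal sketch, not part of the original answer): call jobs directly instead of through $(jobs) and Done is printed only once, because the report updates the parent shell's own job table:
#!/bin/bash
sleep 5 &
while true; do
jobs # runs in the current shell: Done is reported once, then the job is dropped
sleep 1
done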

Issues with script run from /etc/rc.local

I'm trying to run a bash script at boot time from /etc/rc.local on a headless Raspberry Pi 4 (Raspbian buster lite - Debian based). I've done something similar on a Pi 3 with success so I'm confused about why the Pi 4 would misbehave - or behave differently.
The script executed from /etc/rc.local fires but appears to just exit at seemingly random intervals with no indication as to why it's being terminated.
To test it, I dumbed down the script and just stuck the following into a test script called /home/pi/test.sh:
#!/bin/bash
exec 2> /tmp/output # send this script's stderr to a log file
exec 1>&2 # send stdout to the same log file
set -x # tell bash to display commands before execution
while true
do
echo 'Still alive'
sleep .1
done
I then call it from /etc/rc.local just before the exit line:
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel
#
# Make sure that the script will "exit 0" on success or any other
# value on error.
/home/pi/test.sh
echo $? >/tmp/exiterr #output exit code to /tmp/exiterr
exit 0
The contents of /tmp/output:
+ true
+ echo 'Still alive'
Still alive
+ sleep .1
+ true
+ echo 'Still alive'
Still alive
+ sleep .1
and /tmp/exiterr shows
0
If I reduce the sleep period, /tmp/output is longer (over 6000 lines without the sleep).
Any ideas why the script is exiting shortly after starting?
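One common pattern worth noting (an assumption on my part, not a diagnosis of this exact failure): background long-running work so that rc.local itself returns promptly, which rules out the init system waiting on, or tearing down, an rc.local that never reaches its exit 0:
#!/bin/sh -e
# detach the long-lived script so rc.local can exit 0 immediately
/home/pi/test.sh &
exit 0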

applescript blocks shell script cmd when writing to pipe

The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep ; then
echo "killing blocking cmd..."
kill -KILL $cpid
# non zero status to inform launch script of problem...
exit 1
fi
But if the sleep command (line 2) is swapped for the echo command (line 3), with the if statement adjusted to match, the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only blocks when nothing is reading from the pipe...
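The blocking is ordinary FIFO semantics (a general fact about named pipes, stated here for context): opening a pipe for writing blocks until some process opens it for reading. A minimal demonstration:
mkfifo /tmp/apipe
echo "hello world" > /tmp/apipe & # the shell blocks inside open() until a reader appears
cat /tmp/apipe # attaching a reader unblocks the writer
wait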
OK, the following will do the trick. It basically kills the job using its job id. Since there is only one, it's the current job, %%.
I was lucky that I came across this answer or it would have driven me crazy :)
#!/bin/sh
echo "$1" > "$2" & # open the pipe for writing in the background so it cannot block the script
sleep 1
# The following is necessary: without it the
# job seems never to complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo "Taking too long. Killed..."
exit 1
fi
exit 0
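An alternative sketch, assuming GNU coreutils timeout is available (my substitution, not part of the original answer): bound the blocking write directly and skip the jobs/kill dance:
#!/bin/sh
# try to write $1 to the pipe $2, giving up after 1 second if no reader appears
if ! timeout 1 sh -c 'printf "%s\n" "$1" > "$2"' _ "$1" "$2"; then
echo "Taking too long. Giving up..."
exit 1
fi
exit 0
Note that macOS does not ship timeout by default, so for an AppleScript caller the jobs/kill approach above may still be the more portable choice.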

Exiting whole bash script

I have two scripts: script1 has a while loop, and script2 is called inside that loop. script2 contains a case statement with an option to exit the whole program. However, when the exit 0 is called it only exits script2, not the while loop in script1. Any idea how to do that? Script details below.
script1.sh
while read -r line;
do
bash script2.sh
done<list.txt
script2.sh
read -r input </dev/tty
case "$input" in
e)
exit 0
;;
*)
echo "$input"
;;
esac
To be clear, what I want is: when I press e, the whole program should stop, not just finish one loop iteration.
Thanks in advance for the help.
Script 2 runs in a different process from script 1, so script 2 can't make script 1 exit.
Two solutions:
Make script 2 run in the same process as script 1 (source it: . script2.sh)
Make script 2 return a value that script 1 can check for, exiting when it matches.
Put:
set -e
at the beginning of script 1. And in script 2, replace 'exit 0' with 'exit 1'.
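A minimal sketch of the explicit status-check variant (the exit code 2 is an assumed convention, chosen so a deliberate quit stays distinguishable from an ordinary failure):
script1.sh:
while read -r line; do
bash script2.sh
if [ $? -eq 2 ]; then
exit 0 # script2 asked us to stop the whole program
fi
done < list.txt
script2.sh:
read -r input </dev/tty
case "$input" in
e) exit 2 ;; # signal the caller to stop everything
*) echo "$input" ;;
esac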

Start background process from shellscript then bring back to foreground later

I'm trying to make a shell script that does the following:
Start program x
While x is running execute some commands, for example:
echo "blabla" >> ~/blabla.txt
After the execution of those commands program x should be running in the foreground, so that it can take user input.
So far I have:
~/x &
echo "blabla" >> ~/blabla.txt
However, I don't know how to move x back to the foreground. This is all called from a shell script so I don't know the job number of x to move to the foreground.
Note: everything has to be automated, no user interaction with the shell script should be needed.
Any suggestions are welcome :)
I absolutely don't understand why someone would need such a script, and I'm sure a more elegant and more correct solution exists - but OK - the following demonstrates how.
The script that goes to the background (named bgg):
#!/bin/bash
for i in $(seq 10)
do
echo "bg: $i"
sleep 1
done
read -p 'BGG enter something:' -r data
echo "$0 got: $data"
The main script (main.sh):
set -m # this is important: scripts have job control off by default; fg/jobs need it
echo "Sending script bgg to background - will cycle 10 secs"
./bgg 2>/dev/null & # note: the redirection must come before the &
echo "Some commands"
date
read -r -p 'main.sh - enter something:' fgdata
echo "Main.sh got: ==$fgdata=="
jnum=$(jobs -l | grep " $! " | sed 's/\[\(.*\)\].*/\1/') # job number for the last background PID
echo "Background job number: $jnum"
echo "Now sleeping 3 sec"
sleep 3
echo "Bringing $jnum to foreground - wait until the BG job will read"
fg $jnum
Run ./main.sh and the result will be something like:
Sending script bgg to background - will cycle 10 secs
Some commands
Mon Mar 3 00:04:57 CET 2014
main.sh - enter something:bg: 1
bg: 2
bg: 3
bg: 4
bg: 5
qqbg: 6
qqqqq
Main.sh got: ==qqqqqqq==
Background job number: 1
Now sleeping 3 sec
bg: 7
bg: 8
bg: 9
Bringing 1 to foreground - wait until the BG job reads input
./bgg
bg: 10
BGG enter something:wwwwwww
./bgg got: wwwwwww
You can also simply use fg with no arguments to bring the most recent background job back to the foreground.
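Applied to the original question, a minimal sketch (~/x and the log line are the asker's placeholders):
#!/bin/bash
set -m # job control is off by default in scripts; fg needs it
~/x & # start program x in the background
echo "blabla" >> ~/blabla.txt # run other commands while x is running
fg %+ # bring the most recent background job (x) back to the foreground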
