How can I wait for certain output from a process then continue in Bash?

I'm trying to write a bash script to do some stuff, start a process, wait for that process to say it's ready, and then do more stuff while that process continues to run. The issue I'm running into is finding a way to wait for that process to be ready before continuing, and allowing it to continue to run.
In my specific case I'm trying to set up a PPP connection. I need to wait until it has connected before I run the next command. I would also like to stop the script if PPP fails to connect. pppd prints to stdout.
In pseudocode, what I want to do is:
[some stuff]
echo START

[set up the ppp connection]
pppd <options> /dev/ttyUSB0
while 1
    if output of pppd contains "Script /etc/ppp/ipv6-up finished (pid ####), status = 0x0"
        break
    if output of pppd contains "Sending requests timed out"
        exit 1

[more stuff, and pppd continues to run]
echo CONTINUING
Any ideas on how to do this?

I had to do something similar waiting for a line in /var/log/syslog to appear. This is what worked for me:
FILE_TO_WATCH=/var/log/syslog
SEARCH_PATTERN='file system mounted'

tail -f -n0 "$FILE_TO_WATCH" | grep -qe "$SEARCH_PATTERN"

if [ $? -ne 0 ]; then
    echo "Search terminated without finding the pattern"
fi
It pipes all new lines appended to the watched file to grep and instructs grep to exit quietly as soon as the pattern is discovered. The following if statement detects if the 'wait' terminated without finding the pattern.
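The same pattern can drive the pppd case from the original question if pppd's output is captured in a file. A sketch, with an illustrative log path and an [options] placeholder; the two matched strings come from the question:
#!/bin/bash
# Sketch: start pppd with its output going to a log file, then block until
# the log shows success or failure. Note: tail -n0 can miss a line written
# before tail attaches, so this is an illustration, not a hardened recipe.
PPP_LOG=/tmp/pppd.log
pppd [options] /dev/ttyUSB0 > "$PPP_LOG" 2>&1 &

while IFS= read -r line; do
    case $line in
        *'status = 0x0'*)               echo CONTINUING; break ;;
        *'Sending requests timed out'*) echo 'pppd failed to connect' >&2; exit 1 ;;
    esac
done < <(tail -f -n0 "$PPP_LOG")
# (the leftover tail exits on its next write once the loop closes the pipe)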

The quickest solution I came up with was to run pppd with nohup in the background and check the nohup.out file for stdout. It ended up something like this:
sudo nohup pppd [options] 2> /dev/null &

# check to see if it started correctly
PPP_RESULT="unknown"
while true; do
    if [[ $PPP_RESULT != "unknown" ]]; then
        break
    fi
    sleep 1

    # read in the file containing the stdout of the pppd command
    # and look for the lines that tell us what happened
    while IFS= read -r line; do
        if [[ $line == "Script /etc/ppp/ipv6-up finished"* ]]; then
            echo "pppd has been successfully started"
            PPP_RESULT="success"
            break
        elif [[ $line == "LCP: timeout sending Config-Requests" ]]; then
            echo "pppd was unable to connect"
            PPP_RESULT="failed"
            break
        elif [[ $line == *"is locked by pid"* ]]; then
            echo "pppd is already running and has locked the serial port."
            PPP_RESULT="running"
            break
        fi
    done < <(sudo cat ./nohup.out)
done
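Since the inner loop only checks whether certain lines exist, a more compact poll with grep is possible; a sketch, assuming the same nohup.out messages:
# Sketch: poll nohup.out with grep instead of reading it line by line.
until sudo grep -q "Script /etc/ppp/ipv6-up finished" ./nohup.out 2>/dev/null; do
    if sudo grep -q "LCP: timeout sending Config-Requests" ./nohup.out 2>/dev/null; then
        echo "pppd was unable to connect" >&2
        exit 1
    fi
    sleep 1
done
echo "pppd has been successfully started"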

There's a tool called "Expect" that does almost exactly what you want. More info: http://en.wikipedia.org/wiki/Expect
You might also take a look at the man page for "chat", a companion program distributed with pppd that does some of the stuff that expect can do.
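For illustration, a hypothetical expect one-liner for the pppd case from the question (it assumes the expect package is installed; <options> is a placeholder and the matched strings come from the question):
# Hypothetical sketch of expect usage. Caveat: whether pppd survives expect
# exiting depends on pppd's detach behaviour, so treat this as an
# illustration of expect rather than a finished recipe.
expect -c '
    spawn pppd <options> /dev/ttyUSB0
    expect {
        "status = 0x0"               { exit 0 }
        "Sending requests timed out" { exit 1 }
        timeout                      { exit 1 }
    }
' && echo CONTINUING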

If you go with expect, as @sblom advised, please check autoexpect.
You run what you need via the autoexpect command and it will create an expect script.
Check man page for examples.

Sorry for the late response, but a simpler way would be to use wait.
wait is a Bash built-in command which waits for a process to finish.
The following is an excerpt from the man page:
wait [n ...]
    Wait for each specified process and return its termination status.
    Each n may be a process ID or a job specification; if a job spec is
    given, all processes in that job's pipeline are waited for. If n is
    not given, all currently active child processes are waited for, and
    the return status is zero. If n specifies a non-existent process or
    job, the return status is 127. Otherwise, the return status is the
    exit status of the last process or job waited for.
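As a minimal illustration (note that wait returns when the process exits, so it fits "wait until finished" rather than "wait until ready"):
# Minimal sketch: wait blocks until the background job terminates.
sleep 5 &
pid=$!
wait "$pid"
echo "background job finished with status $?"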
For further reference on usage, refer to the wiki page.

Related

Trying to exit main command from a piped grep condition

I'm struggling to find a good solution for what I'm trying to do.
So I have a CreateReactApp instance that is booted through yarn run start:e2e. As soon as the output from that command contains "Compiled successfully", I want to run the next command in the bash script.
Different things I tried:
if yarn run start:e2e | grep "Compiled successfully"; then
    exit 0
fi
echo "THIS NEEDS TO RUN"
This does appear to stop the logs, but it does not run the next command.
yarn run start:e2e | while read -r line; do
    echo "$line"
    if [[ "$line" == *"Compiled successfully!"* ]]; then
        exit 0
    fi
done
echo "THIS NEEDS TO RUN"
yarn run start:e2e | grep -q "Compiled successfully";
echo $?
echo "THIS NEEDS TO RUN"
I've read about the differences between pipes and process substitutions, but I don't see a practical implementation for my use case.
Can someone enlighten me on what I'm doing wrong?
Thanks in advance!
EDIT: Because I got multiple proposed solutions and none of them worked, let me redefine my main problem a bit.
So yarn run start:e2e boots up a React app that has a sort of "watch" mode: it keeps spewing out logs after the "Compiled successfully" part whenever changes occur to the source code, typechecks, and so on.
After the React part is booted (i.e. once the "Compiled successfully" line is output), the logs no longer matter, but the localhost:3000 that yarn serves must remain active.
Then I run other commands after the yarn run to do some testing on the localhost:3000
So basically, what I want to achieve in pseudocode (the pipe stuff in command A is very abstract and may not even look like the correct solution, but I'm trying to explain thoroughly):
# command A
yarn run dev | cmd_to_watch_the_output "Compiled successfully" | exit 0 -> localhost:3000 active but the shell is back in 'this' window
-> keep watching the output until "Compiled successfully" occurs
-> if it occurs, the logs do not matter anymore and I want to run command B
# command B
echo "I WANT TO SEE THIS LOG"
... do other stuff ...
I hope this clears it up a bit more :D
Thanks already for the suggestions!
If you want yarn run to keep running even after Compiled successfully, you can't just pipe its stdout to another program that exits after that line: that stdout needs to have somewhere to go so yarn's future attempts to write logs don't fail or block.
#!/usr/bin/env bash
case $BASH_VERSION in
    ''|[0-3].*|4.[012].*) echo "Error: bash 4.3+ required" >&2; exit 1;;
esac

# open a new FD on yarn's output; $! holds the PID of the process substitution
exec {yarn_fd}< <(yarn run start:e2e); yarn_pid=$!

while IFS= read -r line <&$yarn_fd; do
    printf '%s\n' "$line"
    if [[ $line = *"Compiled successfully!"* ]]; then
        break
    fi
done

# start a background process that reads future stdout from `yarn run`
cat <&$yarn_fd >/dev/null & cat_pid=$!
# close this shell's copy of the FD so `cat` has the only one
exec {yarn_fd}<&-

echo "Doing other things here!"
echo "When ready to shut down yarn, kill $yarn_pid and $cat_pid"

Can't terminate command from a different process

I have a command "command1" that runs indefinitely (must be killed with Ctrl+c), and that at random intervals outputs new lines to stdout. My goal is to run it and see if it outputs a certain "target" line within 10 seconds. If the target output is generated, stop immediately with success, otherwise wait for the 10 seconds and fail.
I came up with this:
timeout 10 bash -c '(while read line; do [[ "$line" == "target" ]] && break; done < <(command1))'
It works, but the problem is that when a match is found, although the timeout command completes and returns successfully, command1 will continue to run indefinitely as a background process. I need it to stop as well when "break" is executed. If a match is not found, and the timeout expires, command1 is stopped correctly.
I also tried this:
timeout 10 bash -c '(command1 | while read line; do [[ "$line" == "target" ]] && exit; done)'
Which does not leave any spurious processes running. The problem is that the exit command does not terminate command1 since it is in a separate process, and the timeout always expires even if the target is found before.
I was exploring some alternative options, such as wait -n, but the same problem persists, and I must use bash 4.2, so wait -n isn't even an option.
Any suggestions would be greatly appreciated.
When command1 does not terminate itself, you can kill it manually.
By the way: Instead of while read ... you can use grep.
timeout 10 bash -c 'command1 | (grep -m1 -Fx "target"; pkill -P $PPID command1)'
-P $PPID ensures that only the command1 from this command is killed, and not some other command1 that might run in another shell at the same time.
This assumes that command1 is a single command, and not something like (cmd1; cmd2; ...). For that case, you could simply kill the whole bash process using kill $PPID.
Found what works best for my case:
timeout 10 bash -c 'grep -q -m1 "target" <(command1); pkill -P $!'
All processes terminate gracefully when either the target is found or the timeout expires. If found, the command returns 0; if not found, it returns 124.
Thank you @Socowi for some very helpful hints that put me on the right track.

Unable to exit line in bash script

I am writing a script to start an application, grep its log for the phrase "Server startup", then exit and execute the next command. But it does not exit and run the next command after the condition is met. Any help?
#!/bin/bash
application start; tail -f /application/log/file/name | \
while read line; do
    echo "$line" | grep "Server startup"
    if [ $? = 0 ]
    then
        echo "application started...!"
    fi
done
Don't Use Tail's Follow Flag
Tail's follow flag (e.g. -f) will not exit, and will continue to follow the file until it receives an appropriate signal or encounters an error condition. You will need to find a different approach to tracking data at the end of your file, such as watch, logwatch, or periodic log rotation using logrotate. The best tool to use will depend a lot on the format and frequency of your log data.
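That said, the tail-and-grep pattern from the first answer on this page does exit on a match; a sketch applied to this question, assuming the same log path and message:
#!/bin/bash
# Sketch: block until the startup line appears, then continue.
# (tail lingers until its next write after grep exits, then dies of SIGPIPE.)
application start
tail -f -n0 /application/log/file/name | grep -q "Server startup"
echo "application started...!"
# next command goes here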

Close pipe even if subprocesses of first command is still running in background

Suppose I have test.sh as below. The intent is for this script to run some background task(s) that continuously update some file. If the background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi

while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
I want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I'm actually trying to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem is that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I'm assuming that you are not interested in any stderr from the commands in the while loop; adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi

while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, for example 1 (standard output), you can do it with:
exec 1<&-
This is valid for POSIX shells.
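Applied to the question's test.sh, the same idea keeps the background loop from holding the pipe's write end open; a sketch (safe here because the loop only ever writes to the somewhere file):
# Sketch: close the background loop's copy of stdout so it no longer holds
# the pipe open; its echo writes to "somewhere", not to stdout.
while true; do
    echo "something" >> somewhere
    sleep 1
done 1>&- &
echo $! > pidfile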
When you put the while loop in an explicit subshell and run the subshell in the background, it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done) &

How do I make sure my bash script isn't already running?

I have a bash script I want to run every 5 minutes from cron... but there's a chance the previous run of the script isn't done yet... in this case, I want the new run to just exit. I don't want to rely on just a lock file in /tmp... I want to make sure the process is actually running before I honor the lock file (or whatever)...
Here is what I have stolen from the internet so far... how do I smarten it up a bit? Or is there a completely different way that's better?
if [ -f /tmp/mylockFile ] ; then
    echo 'Script is still running'
else
    echo 1 > /tmp/mylockFile
    # do some stuff
    rm -f /tmp/mylockFile
fi
# Use a lockfile containing the pid of the running process
# If script crashes and leaves lockfile around, it will have a different pid so
# will not prevent script running again.
#
lf=/tmp/pidLockFile
# create empty lock file if none exists
cat /dev/null >> "$lf"
read lastPID < "$lf"
# if lastPID is not null and a process with that pid exists, exit
[ ! -z "$lastPID" -a -d "/proc/$lastPID" ] && exit
echo not running
# save my pid in the lock file
echo $$ > "$lf"
# sleep just to make testing easier
sleep 5
There is at least one race condition in this script. Don't use it for a life support system, lol. But it should work fine for your example, because your environment doesn't start two scripts simultaneously. There are lots of ways to use more atomic locks, but they generally depend on having a particular thing optionally installed, or work differently on NFS, etc...
You might want to have a look at the man page for the flock command, if you're lucky enough to get it on your distribution.
NAME
    flock - Manage locks from shell scripts

SYNOPSIS
    flock [-sxon] [-w timeout] lockfile [-c] command...
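Where flock is available, the usual pattern is short; a minimal sketch, reusing the lockfile path from the question:
#!/bin/bash
# Sketch: take an exclusive, non-blocking lock on FD 9; exit if another
# instance already holds it. The lock is released when FD 9 closes.
(
    flock -n 9 || { echo 'Script is still running' >&2; exit 1; }
    # ... do some stuff ...
) 9> /tmp/mylockFile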
Never use a lock file; always use a lock directory.
In your specific case, it's not so important because the script is scheduled at 5-minute intervals. But if you ever reuse this code for a webserver CGI script, you are toast.
if mkdir /tmp/my_lock_dir 2>/dev/null
then
    echo "running the script now"
    sleep 10
    rmdir /tmp/my_lock_dir
fi
This has a problem if you have a stale lock, meaning the lock is there but no associated process. Your cron job will never run.
Why use a directory? Because mkdir is an atomic operation. Only one process at a time can create a directory, all other processes get an error. This even works across shared filesystems and probably even between different OS types.
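One way to soften the stale-lock problem is to record the PID inside the lock directory, in the spirit of the PID-lockfile answer above; a sketch with illustrative paths:
#!/bin/bash
# Sketch: a lock directory plus a PID file to detect stale locks.
LOCK_DIR=/tmp/my_lock_dir

if mkdir "$LOCK_DIR" 2>/dev/null; then
    echo $$ > "$LOCK_DIR/pid"
    trap 'rm -rf "$LOCK_DIR"' EXIT   # clean up even on most failures
    # ... do some stuff ...
elif ! kill -0 "$(cat "$LOCK_DIR/pid" 2>/dev/null)" 2>/dev/null; then
    echo "stale lock detected; removing so the next run can proceed" >&2
    rm -rf "$LOCK_DIR"
fi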
Store your PID in mylockFile. When you need to check, look the process up in ps using the PID you read from the file. If it exists, your script is running.
If you want to check the process's existence, just look at the output of
ps aux | grep your_script_name
If it's there, it's not dead...
As pointed out in the comments and other answers, using the PID stored in the lockfile is much safer and is the standard approach most apps take. I just do this because it's convenient and I almost never see the corner cases (e.g. editing the file when the cron executes) in practice.
If you use a lockfile, you should make sure that the lockfile is always removed. You can do this with 'trap':
if ( set -o noclobber; echo "locked" > "$lockfile") 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    echo "Locking succeeded" >&2
    rm -f "$lockfile"
else
    echo "Lock failed - exit" >&2
    exit 1
fi
The noclobber option makes the creation of lockfile atomic, like using a directory.
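To see the effect in isolation (a minimal sketch):
# noclobber makes ">" refuse to overwrite an existing file, so only one
# process can win the race to create the lockfile.
set -o noclobber
echo locked > /tmp/demo.lock    # succeeds if the file does not exist
echo locked > /tmp/demo.lock    # fails: cannot overwrite existing file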
As a one-liner, and if you do not want to use a lockfile (e.g. because of a read-only filesystem):
test "$(pidof -x $(basename $0))" != $$ && exit
It checks that the full list of PIDs that bear the name of your script is equal to the current PID. The "-x" also matches the names of shell scripts.
Bash makes it even shorter and faster:
[[ "$(pidof -x $(basename $0))" != $$ ]] && exit
In some cases, you might want to be able to distinguish between who is running the script and allow some concurrency but not all. In that case, you can use per-user, per-tty or cron-specific locks.
You can use environment variables such as $USER or the output of a program such as tty to create the filename. For cron, you can set a variable in the crontab file and test for it in your script.
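A sketch of per-user and per-tty lock names (the paths are illustrative):
# Per-user lock: concurrent runs by different users are allowed.
lockfile="/tmp/myscript.$USER.lock"

# Per-tty lock: one run per terminal; tty output such as /dev/pts/3 is
# flattened so it can be used in a filename.
lockfile="/tmp/myscript.$(tty | tr '/' '_').lock"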
You can use this one:
pgrep -f "/bin/\w*sh .*scriptname" | grep -vq $$ && exit
I was trying to solve this problem today and I came up with the below:
COMMAND_LINE="$0 $*"
JOBS=$(SUBSHELL_PID=$BASHPID; ps axo pid,command | grep "${COMMAND_LINE}" | grep -v $$ | grep -v ${SUBSHELL_PID} | grep -v grep)
if [[ -z "${JOBS}" ]]
then
    : # not already running
else
    : # already running
fi
This relies on $BASHPID, which contains the PID inside a subshell ($$ in the subshell is the parent PID). However, $BASHPID requires Bash v4, and I needed to run this on OS X, which has Bash v3.2.48. I ultimately came up with another solution, and it is cleaner:
JOBS=$(sh -c "ps axo pid,command | grep \"${COMMAND_LINE}\" | grep -v grep | grep -v $$")
You can always just:
# the [s] bracket trick keeps grep from matching its own command line
if ps -e -o cmd | grep '[s]criptname' > /dev/null; then
    exit
fi
But I like the lockfile myself, so I wouldn't do this without the lock file as well.
Since a socket solution has not yet been mentioned, it is worth pointing out that sockets can be used as effective mutexes. Socket creation is an atomic operation, like mkdir as Gunstick pointed out, so a socket is suitable to use as a lock or mutex.
Tim Kay's Perl script 'Solo' is a very small and effective script to make sure only one copy of a script can be run at any one time. It was designed specifically for use with cron jobs, although it works perfectly for other tasks as well and I've used it for non-cron jobs very effectively.
Solo has one advantage over the other techniques mentioned so far in that the check is done outside of the script you only want to run one copy of. If the script is already running then a second instance of that script will never even be started. This is as opposed to isolating a block of code inside the script which is protected by a lock. EDIT: If flock is used in a cron job, rather than from inside a script, then you can also use that to prevent a second instance of the script from starting - see example below.
Here's an example of how you might use it with cron:
*/5 * * * * solo -port=3801 /path/to/script.sh args args args
# "/path/to/script.sh args args args" is only called if no other instance of
# "/path/to/script.sh" is running, or more accurately if the socket on port 3801
# is not open. Distinct port numbers can be used for different programs so that
# if script_1.sh is running it does not prevent script_2.sh from starting, I've
# used the port range 3801 to 3810 without conflicts. For Linux non-root users
# the valid port range is 1024 to 65535 (0 to 1023 are reserved for root).
* * * * * solo -port=3802 /path/to/script_1.sh
* * * * * solo -port=3803 /path/to/script_2.sh
# Flock can also be used in cron jobs with a distinct lock path for different
# programs, in the example below script_3.sh will only be started if the one
# started a minute earlier has already finished.
* * * * * flock -n /tmp/path.to.lock -c /path/to/script_3.sh
Links:
Solo web page: http://timkay.com/solo/
Solo script: http://timkay.com/solo/solo
Hope this helps.
You can use this.
I'll just shamelessly copy-paste the solution here, as it is an answer for both questions (I would argue that it's actually a better fit for this question).
Usage
include sh_lock_functions.sh
init using sh_lock_init
lock using sh_acquire_lock
check lock using sh_check_lock
unlock using sh_remove_lock
Script File
sh_lock_functions.sh
#!/bin/bash

function sh_lock_init {
    sh_lock_scriptName=$(basename "$0")
    sh_lock_dir="/tmp/${sh_lock_scriptName}.lock" # lock directory
    sh_lock_file="${sh_lock_dir}/lockPid.txt"     # lock file
}

function sh_acquire_lock {
    if mkdir "$sh_lock_dir" 2>/dev/null; then # check for lock
        echo "$sh_lock_scriptName lock acquired successfully.">&2
        touch "$sh_lock_file"
        echo $$ > "$sh_lock_file" # set current pid in lockFile
        return 0
    else
        touch "$sh_lock_file"
        read sh_lock_lastPID < "$sh_lock_file"
        if [ ! -z "$sh_lock_lastPID" -a -d "/proc/$sh_lock_lastPID" ]; then # if lastPID is not null and a process with that pid exists
            echo "$sh_lock_scriptName is already running.">&2
            return 1
        else
            echo "$sh_lock_scriptName stopped during execution, reacquiring lock.">&2
            echo $$ > "$sh_lock_file" # set current pid in lockFile
            return 2
        fi
    fi
    return 0
}

function sh_check_lock {
    [[ ! -f $sh_lock_file ]] && echo "$sh_lock_scriptName lock file removed.">&2 && return 1
    read sh_lock_lastPID < "$sh_lock_file"
    [[ $sh_lock_lastPID -ne $$ ]] && echo "$sh_lock_scriptName lock file pid has changed.">&2 && return 2
    echo "$sh_lock_scriptName lock still in place.">&2
    return 0
}

function sh_remove_lock {
    rm -r "$sh_lock_dir"
}
Usage example
sh_lock_usage_example.sh
#!/bin/bash

. /path/to/sh_lock_functions.sh # load sh lock functions

sh_lock_init || exit $?

sh_acquire_lock
lockStatus=$?
[[ $lockStatus -eq 1 ]] && exit $lockStatus
[[ $lockStatus -eq 2 ]] && echo "lock is set, do some resume from crash procedures"

# monitoring example
cnt=0
while sh_check_lock # loop while lock is in place
do
    echo "$sh_lock_scriptName running (pid $$)"
    sleep 1
    let cnt++
    [[ $cnt -gt 5 ]] && break
done

# remove lock when process finished
sh_remove_lock || exit $?

exit 0
Features
Uses a combination of file, directory, and process ID to make sure the process is not already running
You can detect if the script stopped before lock removal (e.g. process kill, shutdown, error, etc.)
You can check the lock file, and use it to trigger a process shutdown when the lock is missing
Verbose; outputs error messages for easier debugging
