Trying to exit main command from a piped grep condition - bash

I'm struggling to find a good solution for what I'm trying to do.
So I have a CreateReactApp instance that is booted through `yarn run start:e2e`. As soon as the output of that command contains "Compiled successfully", I want to run the next command in the bash script.
Different things I tried:
if yarn run start:e2e | grep "Compiled successfully"; then
    exit 0
fi
echo "THIS NEEDS TO RUN"
This does appear to stop the logs, but it does not run the next command.
yarn run start:e2e | while read -r line; do
    echo "$line"
    if [[ "$line" == *"Compiled successfully!"* ]]; then
        exit 0
    fi
done
echo "THIS NEEDS TO RUN"
yarn run start:e2e | grep -q "Compiled successfully";
echo $?
echo "THIS NEEDS TO RUN"
I've read about the differences between pipes and process substitutions, but I don't see how to apply them to my use case.
Can someone enlighten me on what I'm doing wrong?
Thanks in advance!
EDIT: Because I got multiple proposed solutions and none of them worked, let me restate my main problem a bit.
So `yarn run start:e2e` boots up a React app that has a sort of "watch" mode: it keeps emitting logs after the "Compiled successfully" part whenever changes occur to the source code, typechecks, and so on.
After the React app is booted (i.e. once "Compiled successfully" has been logged), the logs do not matter anymore, but localhost:3000 (which yarn serves) must remain active.
Then I run other commands after the yarn run to do some testing against localhost:3000.
So basically what I want to achieve in pseudo (the pipe stuff in command A is very abstract and may not even look like the correct solution but trying to explain thoroughly):
# command A
yarn run dev | cmd_to_watch_the_output "Compiled successfully" | exit 0 -> localhost:3000 stays active but the shell is back in 'this' window
-> keep watching the output until "Compiled successfully" occurs
-> if it occurs, the logs no longer matter and I want to run command B
# command B
echo "I WANT TO SEE THIS LOG"
... do other stuff ...
I hope this clears it up a bit more :D
Thanks already for the propositions!

If you want yarn run to keep running even after Compiled successfully, you can't just pipe its stdout to another program that exits after that line: that stdout needs to have somewhere to go so yarn's future attempts to write logs don't fail or block.
#!/usr/bin/env bash
case $BASH_VERSION in
    ''|[0-3].*|4.[012].*) echo "Error: bash 4.3+ required" >&2; exit 1;;
esac

# start yarn via a process substitution; $! is the PID of that substitution
exec {yarn_fd}< <(yarn run start:e2e); yarn_pid=$!

# echo yarn's logs until the success line appears
while IFS= read -r line <&$yarn_fd; do
    printf '%s\n' "$line"
    if [[ $line = *"Compiled successfully!"* ]]; then
        break
    fi
done

# start a background process that drains future stdout from `yarn run`
cat <&$yarn_fd >/dev/null & cat_pid=$!

# close our copy of the FD so the background `cat` holds the only one
exec {yarn_fd}<&-

echo "Doing other things here!"
echo "When ready to shut down yarn, kill $yarn_pid and $cat_pid"

Related

How to best handle command that shows error in console but is returning exit 0

I'm running into an issue with a Jenkins Pipeline. As part of our deploy process, there is a bash script that runs to validate the deployment files and deploy to an environment. A specific command at the end uses a vendor's CLI tool to deploy to our environment. If there is an error in this command, it still appears to return exit 0: the build does not deploy, but the job shows as completed successfully in Jenkins. I thought about adding an if statement, something like this, to make the job fail if there is an error:
if $myCommand | grep -q '*** ERROR ***' &> /dev/null
then
exit 1
fi
I do want the command to finish and deploy if no error is found in its output. My question is: would this work, and/or is there a better way to do this?
That's a fine way to do it, but your example is not grepping stderr, it is only grepping stdout. You'll want:
if $myCommand 2>&1 | grep ...
Or you could capture the output using command substitution (including stderr, per the note above) and test it yourself; otherwise, yes, grep -q is fine:
output=$($myCommand 2>&1)
if [[ $output = *'*** ERROR ***'* ]]; then
    printf 'Uh oh, something went wrong!\n' >&2
    printf '%s\n' "$output" >&2
    exit 1
fi
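One aside (an addition, not from the original answer): the leading * in '*** ERROR ***' makes the pattern a malformed regular expression. If you stay with grep, grep -F treats the pattern as a fixed string and sidesteps that:
if $myCommand 2>&1 | grep -qF '*** ERROR ***'; then
    exit 1
fi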
Although this (or any other answer on this post) might work, it is still just a band-aid, not a solution; the proper fix is to repair the program/utility/command so that it returns an exit status you can actually act upon.

Output redirection to console in shell script, not reflecting in real time

I have encountered a weird problem with console output when calling a subscript from inside another script.
Below is the Main Script which is calling a TestScript.
The TestScript is an installation script written in perl which takes some time to execute and prints messages as the installation progresses.
My problem here is that the output from the called perl script is only shown on the console once the installation is completed and the script returns.
Oddly, I have used this kind of syntax successfully before for calling shell scripts; for them it works fine and the output is shown in real time, without waiting for the subscript to return.
I need to capture the output of the script so that I can grep it to check whether the installation was successful.
I do not control the perl script and cannot modify it in any way.
Any help would be greatly appreciated.
Thanks in advance.
#!/bin/sh
echo " Main script"
output=`/var/tmp/Packages/TestScript.pl | tee /dev/tty`
exitCode=$?
echo $output | grep -q "Installation completed successfully"
if [ $? -eq 0 ]; then
echo "Installation was successful"
fi
echo $exitCode
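A likely cause: perl block-buffers its stdout when it is not connected to a terminal, which is exactly the situation inside a command substitution. One common workaround, sketched here under the assumption that the unbuffer utility from the expect package is installed, is to give the script a pseudo-terminal so it line-buffers again:
# unbuffer (from expect) runs the script on a pty, so output arrives line by line
output=`unbuffer /var/tmp/Packages/TestScript.pl | tee /dev/tty`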

Bash script: `exit 0` fails to exit

So I have this Bash script:
#!/bin/bash
PID=`ps -u ...`
if [ "$PID" = "" ]; then
    echo $(date) Server off: not backing up
    exit
else
    echo "say Server backup in 10 seconds..." >> fifo
    sleep 10
    STARTTIME="$(date +%s)"
    echo nosave >> fifo
    echo savenow >> fifo
    tail -n 3 -f server.log | while read line
    do
        if echo $line | grep -q 'save complete'; then
            echo $(date) Backing up...
            OF="./backups/backup $(date +%Y-%m-%d\ %H:%M:%S).tar.gz"
            tar -czhf "$OF" data
            echo autosave >> fifo
            echo "$(date) Backup complete, resuming..."
            echo "done"
            exit 0
            echo "done2"
        fi
        TIMEDIFF="$(($(date +%s)-STARTTIME))"
        if ((TIMEDIFF > 70)); then
            echo "Save took too long, canceling backup."
            exit 1
        fi
    done
fi
Basically, the server takes input from a fifo and outputs to server.log. The fifo is used to send stop/start commands to the server for autosaves. At the end, once it receives the message from the server that the save has completed, it tars the data directory and starts saves again.
It's at the exit 0 line that I'm having trouble. Everything executes fine, but I get this output:
srv:scripts $ ./backup.sh
Sun Nov 24 22:42:09 EST 2013 Backing up...
Sun Nov 24 22:42:10 EST 2013 Backup complete, resuming...
done
But it hangs there. Notice how "done" echoes but "done2" fails. Something is causing it to hang on exit 0.
ADDENDUM: Just to avoid confusion for people looking at this in the future, it hangs at the exit line and never returns to the command prompt. Not sure if I was clear enough in my original description.
Any thoughts? This is the entire script, there's nothing else going on and I'm calling it direct from bash.
Here's a smaller, self-contained example that exhibits the same behavior:
echo foo > file
tail -f file | while read; do exit; done
The problem is that since each part of the pipeline runs in a subshell, exit only exits the while read loop, not the entire script.
It will then hang until tail finds a new line, tries to write it, and discovers that the pipe is broken.
To fix it, you can replace
tail -n 3 -f server.log | while read line
do
...
done
with
while read line
do
...
done < <(tail -n 3 -f server.log)
By redirecting from a process substitution instead, the loop no longer has to wait for tail to finish as it would in a pipeline, and it doesn't run in a subshell, so exit actually exits the entire script.
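Applied to the minimal example above, the non-hanging version looks like this:
echo foo > file
while read -r; do
    exit
done < <(tail -f file)
Here exit runs in the main shell, so the script terminates immediately; the orphaned tail dies the next time it tries to write to the closed pipe.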
But it hangs there. Notice how "done" echoes but "done2" fails.
done2 won't be printed at all since exit 0 has already ended your script with return code 0.
I don't know the details of bash subshells inside loops, but normally the appropriate way to exit a loop is to use the "break" command. In some cases that's not enough (you really need to exit the program), but refactoring that program may be the easiest (safest, most portable) way to solve that. It may also improve readability, because people don't expect programs to exit in the middle of a loop.

Catching errors in Bash with glassfish commands [return code in pipes]

I am writing a bash script to manage deployments to a GF server for several environments. What I would like to know is how I can get the result of a GF command and then determine whether to continue or exit.
For example, say I want to redeploy; I have this script:
$GF_ASADMIN --port $GF_PORT redeploy --name $EAR_FILE_NAME --keepstate=true $EAR_FILE | tee -a $LOG
The variables are already defined. So GF will start to redeploy and either succeed or fail. I want to check which it is and act accordingly. I have this right after it.
RC=$?
if [[ $RC -eq 0 ]]; then
    echoInfo "Application Successfully redeployed!" | tee -a $LOG
else
    echoError "Failed to redeploy application!"
    exit 1
fi
However, it doesn't really seem to work.
The problem is the pipe
$GF_ASADMIN ... | tee -a $LOG
$? reflects the return code of tee.
You are looking for PIPESTATUS. See man bash:
PIPESTATUS
An array variable (see Arrays below) containing a list of exit
status values from the processes in the most-recently-executed
foreground pipeline (which may contain only a single command).
See also this example to clarify PIPESTATUS:
false | true
echo ${PIPESTATUS[@]}
Output is: 1 0
The corrected code is:
RC=${PIPESTATUS[0]}
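One caveat worth adding: PIPESTATUS is overwritten by every foreground command, so capture it on the very next line after the pipeline, e.g.:
"$GF_ASADMIN" --port "$GF_PORT" redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE" | tee -a "$LOG"
RC=${PIPESTATUS[0]}   # grab it immediately; any other command would clobber it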
Or try using a code block redirect, for example:
{
    if "$GF_ASADMIN" --port $GF_PORT redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE"
    then
        echo Info "Application Successfully redeployed!"
    else
        echo Error "Failed to redeploy application!" >&2
        exit 1
    fi
} | tee -a "$LOG"
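Another option the answers don't mention: bash's pipefail setting makes a pipeline's exit status reflect the last command to exit non-zero rather than just the final command, so a plain $? check works again. A sketch:
set -o pipefail
if "$GF_ASADMIN" --port "$GF_PORT" redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE" | tee -a "$LOG"; then
    echo "Application Successfully redeployed!"
else
    echo "Failed to redeploy application!" >&2
    exit 1
fi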

How can I wait for certain output from a process then continue in Bash?

I'm trying to write a bash script to do some stuff, start a process, wait for that process to say it's ready, and then do more stuff while that process continues to run. The issue I'm running into is finding a way to wait for that process to be ready before continuing, and allowing it to continue to run.
In my specific case I'm trying to setup a PPP connection. I need to wait until it has connected before I run the next command. I would also like to stop the script if PPP fails to connect. pppd prints to stdout.
In pseudocode, what I want to do is:
[some stuff]
echo START
[set up the ppp connection]
pppd <options> /dev/ttyUSB0
while 1
if output of pppd contains "Script /etc/ppp/ipv6-up finished (pid ####), status = 0x0"
break
if output of pppd contains "Sending requests timed out"
exit 1
[more stuff, and pppd continues to run]
echo CONTINUING
Any ideas on how to do this?
I had to do something similar waiting for a line in /var/log/syslog to appear. This is what worked for me:
FILE_TO_WATCH=/var/log/syslog
SEARCH_PATTERN='file system mounted'
tail -f -n0 "${FILE_TO_WATCH}" | grep -qe "${SEARCH_PATTERN}"
if [ $? -eq 1 ]; then
    echo "Search terminated without finding the pattern"
fi
It pipes all new lines appended to the watched file to grep and instructs grep to exit quietly as soon as the pattern is discovered. The following if statement detects if the 'wait' terminated without finding the pattern.
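Applied to the pppd question, the same pattern might look like this (a sketch with assumptions: pppd logs via syslog, and the quoted message matches your pppd version):
# block until pppd reports that its ipv6-up script finished
tail -f -n0 /var/log/syslog | grep -q 'Script /etc/ppp/ipv6-up finished'
Note that tail itself only exits the next time it writes into the broken pipe, so there can be a short lag after the match before the pipeline returns.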
The quickest solution I came up with was to run pppd with nohup in the background and check the nohup.out file for stdout. It ended up something like this:
sudo nohup pppd [options] 2> /dev/null &

# check to see if it started correctly
PPP_RESULT="unknown"
while true; do
    if [[ $PPP_RESULT != "unknown" ]]; then
        break
    fi
    sleep 1

    # read in the file containing the std out of the pppd command
    # and look for the lines that tell us what happened
    while read line; do
        if [[ $line == Script\ /etc/ppp/ipv6-up\ finished* ]]; then
            echo "pppd has been successfully started"
            PPP_RESULT="success"
            break
        elif [[ $line == LCP:\ timeout\ sending\ Config-Requests ]]; then
            echo "pppd was unable to connect"
            PPP_RESULT="failed"
            break
        elif [[ $line == *is\ locked\ by\ pid* ]]; then
            echo "pppd is already running and has locked the serial port."
            PPP_RESULT="running"
            break
        fi
    done < <( sudo cat ./nohup.out )
done
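A leaner variant of the same idea (my sketch, not part of the original answer) streams nohup.out instead of re-reading it once per second:
# follow nohup.out from the start and stop at the first recognized outcome
while read -r line; do
    case $line in
        'Script /etc/ppp/ipv6-up finished'*) PPP_RESULT="success"; break ;;
        'LCP: timeout sending Config-Requests'*) PPP_RESULT="failed"; break ;;
        *'is locked by pid'*) PPP_RESULT="running"; break ;;
    esac
done < <(sudo tail -f -n +1 ./nohup.out)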
There's a tool called "Expect" that does almost exactly what you want. More info: http://en.wikipedia.org/wiki/Expect
You might also take a look at the man pages for "chat", a companion program to pppd that can do some of the things expect does.
If you go with expect, as #sblom advised, please check autoexpect.
Run what you need via the autoexpect command and it will create an expect script for you.
Check the man page for examples.
Sorry for the late response, but a simpler way would be to use wait.
wait is a bash built-in command that waits for a process to finish.
The following is an excerpt from the man page:
wait [n ...]
    Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
For further reference on usage, refer to the wiki page.
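For illustration, a minimal usage sketch:
sleep 5 &    # start a background job
wait $!      # block until it finishes and take on its exit status
echo "background job exited with status $?"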
