Output redirection to console in shell script not reflecting in real time - shell

I have encountered a weird problem with console output when calling a subscript from inside another script.
Below is the main script, which calls a TestScript.
The TestScript is an installation script written in Perl; it takes some time to execute and prints messages as the installation progresses.
My problem here is that the output from the called Perl script is only shown on the console once the installation is completed and the script returns.
Oddly, I have used this kind of syntax successfully before for calling shell scripts, and for them it works fine: the output is shown in real time without waiting for the subscript to return.
I need to capture the output of the script so that I can grep it to check whether the installation was successful.
I do not control the perl script and cannot modify it in any way.
Any help would be greatly appreciated.
Thanks in advance.
#!/bin/sh
echo " Main script"
output=`/var/tmp/Packages/TestScript.pl | tee /dev/tty`
exitCode=$?
echo $output | grep -q "Installation completed successfully"
if [ $? -eq 0 ]; then
echo "Installation was successful"
fi
echo $exitCode
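The usual cause of this behavior is stdio buffering: like most programs, Perl line-buffers stdout when it is a terminal but switches to block buffering when stdout is a pipe, so nothing appears until a buffer fills or the script exits. One possible workaround, sketched under the assumption that the unbuffer utility from the expect package is installed, is to run the script on a pseudo-terminal so it keeps line buffering:
#!/bin/sh
echo " Main script"
# unbuffer gives TestScript.pl a pseudo-terminal, so it line-buffers
# and tee can echo each line to the console as it arrives
output=`unbuffer /var/tmp/Packages/TestScript.pl | tee /dev/tty`
echo "$output" | grep -q "Installation completed successfully"
if [ $? -eq 0 ]; then
    echo "Installation was successful"
fi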

Related

Trying to exit main command from a piped grep condition

I'm struggling to find a good solution for what I'm trying to do.
I have a CreateReactApp instance that is booted through yarn run start:e2e. As soon as that command's output contains "Compiled successfully", I want to run the next command in the bash script.
Different things I tried:
if yarn run start:e2e | grep "Compiled successfully"; then
    exit 0
fi
echo "THIS NEEDS TO RUN"
This does appear to stop the logs, but it does not run the next command.
yarn run start:e2e | while read -r line;
do
    echo "$line"
    if [[ "$line" == *"Compiled successfully!"* ]]; then
        exit 0
    fi
done
echo "THIS NEEDS TO RUN"
yarn run start:e2e | grep -q "Compiled successfully";
echo $?
echo "THIS NEEDS TO RUN"
I've read about the differences between pipes and process substitutions, but I don't see how to apply them to my use case.
Can someone enlighten me on what I'm doing wrong?
Thanks in advance!
EDIT: Because I got multiple proposed solutions and none of them worked, let me restate my main problem a bit.
The yarn run start:e2e boots up a React app that has a sort of "watch" mode, so it keeps spewing out logs after the "Compiled successfully" part whenever changes occur to the source code, typechecks, and so on.
After the React part is booted (i.e. once the "Compiled successfully" log has been output), the logs no longer matter, but the localhost:3000 that yarn serves must remain active.
Then I run other commands after the yarn run to do some testing against localhost:3000.
So basically, what I want to achieve in pseudo-code (the pipe stuff in command A is very abstract and may not be the correct approach, but I'm trying to explain thoroughly):
# command A
yarn run dev | cmd_to_watch_the_output "Compiled successfully" | exit 0
-> localhost:3000 stays active but the shell is back in 'this' window
-> keep watching the output until "Compiled successfully" occurs
-> if it occurs, the logs do not matter anymore and I want to run command B
# command B
echo "I WANT TO SEE THIS LOG"
... do other stuff ...
I hope this clears it up a bit more :D
Thanks already for the propositions!
If you want yarn run to keep running even after "Compiled successfully", you can't just pipe its stdout to another program that exits after that line: that stdout needs somewhere to go, so that yarn's future attempts to write logs don't fail or block.
#!/usr/bin/env bash
case $BASH_VERSION in
''|[0-3].*|4.[012].*) echo "Error: bash 4.3+ required" >&2; exit 1;;
esac
# start yarn with its stdout connected to an automatically allocated FD
exec {yarn_fd}< <(yarn run start:e2e); yarn_pid=$!
while IFS= read -r line <&$yarn_fd; do
    printf '%s\n' "$line"
    if [[ $line = *"Compiled successfully!"* ]]; then
        break
    fi
done
# start a background process that reads future stdout from `yarn run`
cat <&$yarn_fd >/dev/null & cat_pid=$!
# close our copy of the FD so `cat` holds the only one
exec {yarn_fd}<&-
echo "Doing other things here!"
echo "When ready to shut down yarn, kill $yarn_pid and $cat_pid"

How to grep the output of a command inside a shell script when scheduling using cron

I have a simple shell script that checks whether my EMR job is running and just prints a log line. It does not work properly when scheduled with cron: it always prints the if-block statement because the value of the "status_live" variable is always empty. Can anyone suggest what is wrong here? When run manually, the script works properly.
#!/bin/sh
status_live=$(yarn application -list | grep -i "Streaming App")
if [ -z $status_live ]
then
    echo "Running spark streaming job again at: "$(date) &
else
    echo "Spark Streaming job is running, at: "$(date)
fi
Your script does not work under cron because cron runs it with almost no environment context.
For example, try to run your script as the user nobody, which has no login shell:
sudo -u nobody <script-full-path>
It will fail because it has no environment context.
The solution is to add your user's environment context to the script by sourcing your .bash_profile:
sed -i "2a source $HOME/.bash_profile" <script-full-path>
Your script should look like:
#!/bin/sh
source /home/<your user name>/.bash_profile
status_live=$(yarn application -list | grep -i "Streaming App")
if [ -z "$status_live" ]
then
    echo "Running spark streaming job again at: $(date)"
else
    echo "Spark Streaming job is running, at: $(date)"
fi
Now try to run it again as the user nobody; if it works, then cron will work as well.
sudo -u nobody <script-full-path>
Note that cron has no standard output, so you will need to redirect your script's output to a log file:
<script-full-path> >> <logfile-full-path>
# $? holds the exit status of the last command in bash shell scripting.
# Run the complete command below; status_live is 0 if grep finds a match
# (i.e. true in a shell if condition).
yarn application -list | grep -i "Streaming App"
status_live=$?
echo status_live: ${status_live}
if [ "$status_live" -eq 0 ]; then
    echo "success"
else
    echo "fail"
fi
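Equivalently, grep's exit status can drive the if directly, which drops the intermediate variable; a minimal sketch:
#!/bin/sh
if yarn application -list | grep -qi "Streaming App"; then
    echo "Spark Streaming job is running, at: $(date)"
else
    echo "Running spark streaming job again at: $(date)"
fi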

How to best handle command that shows error in console but is returning exit 0

I'm running into an issue with a Jenkins Pipeline. As part of our deploy process, a bash script validates the deployment files and deploys to an environment. The final step uses a vendor's CLI tool to deploy. If that command errors, it appears to still return exit 0: the build does not deploy, but Jenkins shows the job as completed successfully. I thought about making the job fail with an if statement like this:
if $myCommand | grep -q '*** ERROR ***' &> /dev/null
then
    exit 1
fi
I do want the command to finish and deploy when no error is found. My question is: would this work, and/or is there a better way to do this?
That's a fine way to do it, but your example is not grepping stderr, it is only grepping stdout. You'll want:
if $myCommand 2>&1 | grep ...
Or you could capture the output using command substitution and print that message; otherwise, yeah, just use grep -q:
output=$("$myCommand")
if [[ $output = *'*** ERROR ***'* ]]; then
    printf 'Uh oh, something went wrong!\n' >&2
    printf '%s\n' "$output" >&2
    exit 1
fi
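Putting both points together, a hedged version of the original check: 2>&1 folds stderr into the stream being searched, and grep -F makes the asterisks match literally instead of acting as regex operators:
if $myCommand 2>&1 | grep -qF '*** ERROR ***'
then
    exit 1
fi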
Although this (or any other answer on this post) might work, it is just a band-aid, not a solution: the proper fix is to repair the program/utility/command so that it returns an exit status you can actually act upon.

How can I test a command before a pipe, capture its output, and understand whether the command ran successfully or not

I'm writing a script that will execute commands on different servers automatically, and I am trying to log all of its activity and output. I am having difficulties with redirections and command exit status: was the command successful, and what is its output?
I have tried several approaches, such as redirecting the command output to a function or to a file, and I have tried a simple if statement. Each worked for its respective purpose, but when I try to combine them the script always reports the command as successful. And to some level it is.
#!/bin/bash
function Executing(){
    if [ "$1" != "" ]; then
        if [ "$DEBUG" = "true" ]; then
            echo "$1" | Debug i s
            if $1 2>&1 | Debug o o; then
                echo "$1" | Debug s r
            else
                echo "$1" | Debug e r
            fi
        else
            eval "$1"
        fi
    fi
}
Executing "apt-get update"
Also note that the system requires sudo to execute apt-get update, so the script should report and log an error. Right now, though, whenever I call the Executing function it reports a successful execution. I am pretty sure this is because of the added pipe | Debug o o, which captures the output and formats it (and which I'll later redirect to a log file).
So I tested if apt-get update; then echo yes; else echo no; fi, which worked in my terminal. But as soon as I pipe the output to the Debug function, the if statement returns true.
Try with set -o pipefail.
Check the manual: The Set Builtin
Usually, bash returns the exit code of the last command in a pipeline, so your Debug command was the one being checked. With that setting, bash instead returns the exit status of the last (rightmost) command that exited with a non-zero status. If your command fails, that is the status that gets propagated.
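A minimal sketch of the effect: without pipefail the if sees only cat's (successful) exit status; with it, the failure on the left of the pipe propagates:
#!/usr/bin/env bash
set -o pipefail
if false | cat; then
    echo "pipeline reported success"
else
    echo "pipeline reported failure"   # this branch runs with pipefail set
fi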

How can I prevent bash from reporting an error when attempting to call a non-existing script?

I am writing a simple script in bash to check whether a bunch of dependencies are installed on the current system. My script attempts to run a sample script with the -h flag, greps the output for a keyword I would expect the sample script to return, and thereby knows whether or not the sample script is installed on the system.
I then pass this through a conditional statement that basically reports sample script = OK or sample script = FAIL. However, when the sample script isn't installed on the system, bash prints the warning -bash: sample_script: command not found. How can I prevent this from being displayed? I tried using 1>&2 error redirection, but the warning still appears on the screen (I want the OK/FAIL text to be displayed on the user's screen when running my script).
Thanks for any suggestions!
If you just want to suppress errors (stderr) and let the "OK" or "FAIL" you are echoing (stdout) pass through, you would do:
./yourscript.sh 2> /dev/null
A better approach, though, would be to test whether sample_script is executable before trying to execute it. For instance:
if [ -x "$script" ]; then
    # ...do whatever generates FAIL or OK...
fi
#devnull dixit
command -h 2>/dev/null
I use this function to be independent of which, whence, type -p and whatnot:
pathto () {
    DIRLIST=$(echo "$PATH" | tr : ' ')
    for e in "$@"; do
        for d in $DIRLIST; do
            test -f "$d/$e" -a -x "$d/$e" && echo "$d/$e"
        done
    done
}
pathto script will echo the full path if it can be found (and is executable). Returning 0 or 1 instead left as an exercise :-)
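A hedged usage sketch, wiring pathto into the OK/FAIL check described in the question (sample_script is the question's placeholder name):
if [ -n "$(pathto sample_script)" ]; then
    echo "sample_script = OK"
else
    echo "sample_script = FAIL"
fi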
for bash:
if ! type -P sample_script &> /dev/null; then
    echo "Error: sample_script is not installed. Come back later." >&2
    exit 1
fi
sample_script "$@"
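For plain POSIX sh, command -v is the portable equivalent of type -P; a minimal sketch:
if ! command -v sample_script >/dev/null 2>&1; then
    echo "Error: sample_script is not installed." >&2
    exit 1
fi
sample_script "$@"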
