Constantly Checking The Output From A Bash Command - bash

I have a script which calls a long-running command that outputs data and never stops. I want to check whether that command outputs a specific string, say "Success!", and then I'd like to stop it and move on to the next line of my script.
How can I do this?
Thanks in advance.
Edit:
#!/bin/bash
xrandr --output eDP-1-1 --mode 2048x1536
until steam steam://run/730 | grep "Game removed" ; do
:
done
echo "Success!"
xrandr --output eDP-1-1 --mode 3840x2160
All my script does is change the resolution, start a Steam game, wait for a message to appear in the output indicating the game was closed, and then change the resolution back to what it was.
The message I've mentioned appears every time, but the Steam process is still running. Could that be the problem?
If I close Steam manually, the script continues like I want it to.

You can use the until construct for this:
until command-name | grep -q "Success!"; do
:
done
This will wait indefinitely until the string you want appears on the command's stdout, at which point grep succeeds and the loop exits. The -q flag tells grep to run silently, producing no output, only an exit status, and : is shorthand for a no-op.
The difference between until and while: until runs the loop body until the exit code of the command being tested becomes 0 (i.e. success), at which point it stops; while does the opposite, looping for as long as the exit code is 0.
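The difference in exit-status handling is easy to see with a trivial counter:

```shell
# until runs its body while the condition FAILS (non-zero exit status).
i=0
until [ "$i" -ge 3 ]; do
  i=$((i+1))
done
echo "until stopped at i=$i"

# while runs its body while the condition SUCCEEDS (exit status 0).
j=0
while [ "$j" -lt 3 ]; do
  j=$((j+1))
done
echo "while stopped at j=$j"
```

Both loops stop once the counter reaches 3; only the sense of the test is inverted.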

Related

Loop through docker output until I find a String in bash

I am quite new to bash (barely any experience at all) and I need some help with a bash script.
I am using docker-compose to create multiple containers - for this example let's say 2 containers. The 2nd container will execute a bash command, but before that, I need to check that the 1st container is operational and fully configured. Instead of using a sleep command I want to create a bash script that will be located in the 2nd container and once executed do the following:
Execute a command and log the console output in a file
Read that file and check if a String is present. The command that I will execute in the previous step will take a few seconds (5-10) to complete, and I need to read the file after it has finished executing. I suppose I can add sleep to make sure the command has finished executing, or is there a better way to do this?
If the string is not present I want to execute the same command again until I find the String I am looking for
Once I find the string I am looking for I want to exit the loop and execute a different command
I found out how to do this in Java, but I need to do this in a bash script.
The docker-containers have alpine as an operating system, but I updated the Dockerfile to install bash.
I tried this solution, but it does not work.
#!/bin/bash
[command to be executed] > allout.txt 2>&1
until
    tail -n 0 -F /path/to/file | \
    while read LINE
    do
        if echo "$LINE" | grep -q $string
        then
            echo -e "$string found in the console output"
        fi
    done
do
    echo "String is not present. Executing command again"
    sleep 5
    [command to be executed] > allout.txt 2>&1
done
echo -e "String is found"
echo -e "String is found"
In your docker-compose file, make use of the depends_on option.
depends_on takes care of the startup and shutdown sequence of your multiple containers.
But it does not check whether a container is ready before moving on to start another container. To handle this scenario, check this out.
As described in this link,
You can use tools such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
OR
Alternatively, write your own wrapper script to perform a more application-specific health check.
In case you don't want to make use of the above tools, check this out. There they use a combination of HEALTHCHECK and the service_healthy condition, as shown here. For a complete example, check this.
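If you'd rather not pull in those wrapper scripts, the same polling idea is a few lines of bash. Here check_ready is a placeholder for whatever probe fits your service (e.g. `nc -z "$host" "$port"` for a TCP check); the demo stub below simply succeeds on the third attempt so the sketch is self-contained:

```shell
#!/bin/bash
# check_ready is a placeholder probe: swap its body for e.g.
#   nc -z "$host" "$port"     (TCP readiness check)
# The demo stub succeeds once a marker file appears.
ready_file=/tmp/ready-demo.$$
check_ready() { [ -e "$ready_file" ]; }

tries=0
max_tries=30
until check_ready; do
  tries=$((tries + 1))
  if [ "$tries" -ge "$max_tries" ]; then
    echo "gave up after $max_tries attempts" >&2
    exit 1
  fi
  # Demo only: pretend the service comes up on the 3rd probe.
  if [ "$tries" -eq 3 ]; then
    touch "$ready_file"
  fi
  sleep 0.1
done
echo "ready after $tries attempts"
rm -f "$ready_file"
```

wait-for-it and dockerize do essentially this, plus argument parsing and an overall timeout.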
Just:
while :; do
    # 1. Execute a command and log the console output in a file
    command > output.log
    # TODO: handle errors, etc.
    # 2. Read that file and check if a String is present.
    if grep -q "searched_string" output.log; then
        # Once the string is found, exit the loop
        break
    fi
    # 3. If the string is not present, run the same command again;
    # add e.g. `sleep 0.1` here so the loop does not spin at 100% CPU
done
# ...and execute a different command
different_command
You can timeout a command with timeout.
Notes:
colon is a utility that returns a zero exit status, much like true; I prefer while : over while true, but they mean the same.
The code presented should work in any POSIX shell.
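Since the answer mentions timeout, here is what its exit status looks like when the limit fires (the 124 convention is specific to GNU coreutils timeout):

```shell
# GNU timeout kills the command when the limit expires and itself exits
# with status 124, so a script can distinguish "timed out" from failure.
status=0
timeout 1 sleep 3 || status=$?
echo "exit status: $status"
```

On a GNU system this prints `exit status: 124`; a command that finishes in time passes its own exit status through instead.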

Correct script construct to monitor RaspPi sound card output and execute CEC command to activate AV receiver?

Apologies in advance - I'm a complete beginner, so the code below is probably a car crash, but I'd really appreciate any help if anyone can spare a minute.
Aim - I have my RaspPi as a music source for my AV receiver. I've installed libcec on the RPi, and the receiver is CEC-enabled, so I am trying to write a script that sends the 'active source' command to the AVR whenever the sound card is active.
The active source command:
echo 'as' | cec-client -d 1 -s
The script to return sound card status:
grep RUNNING /proc/asound/card*/pcm*/sub*/status
I've tried to represent the following logic:
1. If music is playing - send active command (turns on AVR with correct channel) and create the empty file 'yamaha-yes'
2. If yamaha-yes file exists check that music is playing - if not then remove 'yamaha-yes' file.
The idea with the yamaha-yes file is to prevent the script from continually sending the active source command whilst music is playing - it just needs sending once, so I've tried to write it so that the presence of the file whilst music playing leads to no further action.
I was hoping to use the 'watch' command from boot to have this running continually.
#!/bin/bash
musicon="$( grep RUNNING /proc/asound/card*/pcm*/sub*/status )"
file="/etc/yamaha-yes"
if [ -e $file ] ; then
    if [ "$musicon" = "" ] ; then
        sudo rm /etc/yamaha-yes
    fi
else
    if [ "$musicon" ] ; then
        echo 'as' | cec-client -d 1 -s
        sudo touch /etc/yamaha-yes
    fi
fi
The current error returned is 'line 8: [: too many arguments'. But I suspect there is a lot more wrong with it than that, and I was hoping to check I was on the right track before flogging it any further!
Thanks in advance! Tom
EDIT
Some changes made in line with Marc's advice and the code now seems to work - though I realise it still isn't the most elegant read! Perhaps there is a better way of scripting it?
The short answer is that you are missing quotes: [ "$musicon" = 'RUNNING' ]. However, there are problems with how your grepping is going to work. When audio is running you could end up with multiple RUNNING lines (giving the string RUNNING\nRUNNING or longer), which will not equal RUNNING. Also, $musicoff is broken, because the ! operator doesn't change a command's output, only its exit status: "$musicoff" will equal RUNNING only if audio is running, which is the exact opposite of what you want. Fortunately, the fix also simplifies the script.
grep exits with status 0 (true) if it found the search text anywhere, and 1 (false) otherwise. So, instead of comparing the output of grep against a specific value (which might fail in certain cases), use the exit status of grep directly:
# -q suppresses the matched output; only the exit status is used
musicon() { grep -q RUNNING /proc/asound/card*/pcm*/sub*/status; }
if musicon ; then
    echo Music is on
fi
if ! musicon; then
    echo Music is off
fi

The "more" command fails to respond to automated input

I tried to use here-strings to automate a script's input, and hence wanted to quit "more" of a file in it.
I've used:
$ echo q | more big_file.txt
$ more big_file.txt <<< q
$ yes q | more big_file.txt
$ echo -e "q\n" | more big_file.txt
but the "more" command fails to quit.
Generally, the above-mentioned input methods work for other commands in bash, but more seems to be an exception.
Any idea what makes this a foul attempt?
NOTE: I don't want the data out of "more"; the target is to quit "more" through an automated sequence.
When it detects it's running on a terminal, more takes its input only from the terminal. This is so you can run things like:
$ cat multiple_files*.txt | more
(When its output is not a terminal, it doesn't even page; it degrades to behaving like cat.)
Seriously though, whatever your misguided reason for wanting to keep more in the loop, you're not going to get it to quit voluntarily with anything other than terminal input. So:
either you fake it, e.g. with expect,
or you have it quit involuntarily, with a signal. Try SIGQUIT, move to SIGTERM, and fall back to SIGKILL as a last resort.
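The degrade-to-cat behaviour mentioned above is easy to verify: with stdout going to a pipe rather than a terminal, more reads no keystrokes at all and just copies the file through (the temp-file path here is arbitrary):

```shell
# When stdout is a pipe rather than a terminal, more neither pages nor
# waits for keystrokes; it copies the file through, exactly like cat.
printf 'line one\nline two\n' > /tmp/bigfile-demo.$$
more /tmp/bigfile-demo.$$ | cat
rm -f /tmp/bigfile-demo.$$
```

So if your script captures or pipes more's output anyway, there is nothing to quit; the paging only happens in the interactive case.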

Why will the ftam service start and return a prompt from a terminal but not from a bash script?

I am starting an ftam server (ft820.rc on CentOS 5) using bash 3.0, and I am having an issue starting it from a script. Namely, in the script I do
ssh -nq root@$ip /etc/init.d/ft820.rc start
and the script won't continue after this line, although when I run the following on the machine defined by $ip
/etc/init.d/ft820.rc start
I get the prompt back just after the service is started.
This is the code for start in ft820.rc
SPOOLPATH=/usr/spool/vertel
BINPATH=/usr/bin/osi/ft820
CONFIGFILE=${SPOOLPATH}/ffs.cfg
# Set DBUSERID to any value at all. Just need to make sure it is non-null for
# lockclr to work properly.
DBUSERID=
export DBUSERID
# if startup requested then ...
if [ "$1" = "start" ]
then
    mask=`umask`
    umask 0000
    # startup the lock manager
    ${BINPATH}/lockmgr -u 16
    # update attribute database
    ${BINPATH}/fua ${CONFIGFILE} > /dev/null
    # clear concurrency locks
    ${BINPATH}/finit -cy ${CONFIGFILE} >/dev/null
    # startup filestore
    ${BINPATH}/ffs ${CONFIGFILE}
    if [ $? = 0 ]
    then
        echo Vertel FT-820 Filestore running.
    else
        echo Error detected while starting Vertel FT-820 Filestore.
    fi
    umask $mask
fi
I repost here (on request of @Patryk) what I put in the comments on the question:
"is it the same when doing the ssh... in the commandline? ie, can you indeed connect without entering a password, using the pair of private_local_key and the corresponding public_key that you previously inserted in the destination root@$ip:~/.ssh/authorized_keys file ? – Olivier Dulac 20 hours ago "
"you say that, at the commandline (and NOT in the script) you can ssh root@.... and it works without asking for your pwd ? (ie, it can then be run from a script?) – Olivier Dulac 20 hours ago "
" try the ssh without the '-n' and even without -nq at all : ssh root@$ip /etc/init.d/ft820.rc start (you could even add ssh -v , which will show you local (1:) and remote (2:) events in a very verbose way, helping in knowing where it gets stuck exactly) – Olivier Dulac 19 hours ago "
"also : before the "ssh..." line in the script, make another line with, for example: ssh root@$ip "set ; pwd ; id ; whoami" and see if that works and shows the correct information. This may help be sure the ssh part is working. The "set" part will also show you the running shell (ex: if it contains BASH= , you're running bash. Otherwise SHELL=... should give a good hint (sometimes not correct) about which shell gets invoked) – Olivier Dulac 19 hours ago "
" please try without the '-n' (= run in background and wait, instead of just run and then quit). If it doesn't work, try adding -t -t -t (3 times) to the ssh, to force it to allocate a tty. But first, please drop the '-n'. – Olivier Dulac 18 hours ago "
Apparently what worked was to add the -t option to the ssh command. (You can go up to '-t -t -t' to further force it to try to allocate a tty, depending on the situation.)
I guess it's because the invoked command expected to be run within an interactive session, and so needed a tty as its stdout.
A possibility (just a wild guess): the invoked rc script outputs information, but in a buffered environment (i.e. when not launched via your terminal), the calling script couldn't see enough lines to fill the buffer and start printing anything out (like when you do a "grep something | something else" in a buffered environment and hit Ctrl+C before the buffer was full enough to display anything: you end up thinking no lines were found by the grep, whereas there were maybe a few lines already in the buffer). There is tons to be said about buffering, and I am just beginning to read about it all. Forcing ssh to allocate a tty made the called command think it was outputting to a live terminal session, and that may have turned off the buffering and allowed the result to show. Maybe in the first case it worked too, but you could never see the output?
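The tty/buffering guess can be probed directly: test -t reports whether a file descriptor is a terminal, which is exactly what ssh -t changes for the remote command:

```shell
# [ -t 1 ] succeeds only when file descriptor 1 (stdout) is a terminal.
# ssh -t allocates a pseudo-terminal, flipping this test for the remote side.
if [ -t 1 ]; then
  echo "stdout is a terminal: stdio typically line-buffers"
else
  echo "stdout is not a terminal: stdio may switch to block buffering"
fi
```

Running this via plain `ssh host ...` versus `ssh -t host ...` shows the two branches, which is why the rc script behaved differently from the script than from an interactive login.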

Shell script that continuously checks a text file for log data and then runs a program

I have a java program that often stops due to errors, which are logged in a .log file. What would be a simple shell script to detect a particular text in the last/latest line, say
[INFO] Stream closed
and then run the following command
java -jar xyz.jar
This should keep happening forever (checking possibly every two minutes or so), because xyz.jar writes the log file.
The text Stream closed can appear many times in the log file; I just want to take action when it appears in the last line.
How about
while true
do
    sleep 120
    if tail -1 logfile | grep -qF "[INFO] Stream closed"
    then
        java -jar xyz.jar &
    fi
done
(The -F makes grep treat the brackets literally; without it, [INFO] is a character class matching a single I, N, F, or O.)
There may be a condition where the tailed "Stream closed" line is not really the final message and the process is still logging. We can avoid this by also checking whether the process is alive. If the process has exited and the last log line is "Stream closed", then we need to restart the application.
#!/bin/bash
java -jar xyz.jar &
PID=$!
while true
do
    if ! kill -0 "$PID" 2>/dev/null && tail -1 logfile | grep -qF "Stream closed"
    then
        java -jar xyz.jar &
        PID=$!
    fi
    sleep 20
done
($! holds the PID of the last background job; kill -0 sends no signal and merely tests whether the process is still alive.)
I would prefer checking whether the corresponding process is still running and restarting the program on that event; there might be other errors that cause the process to stop. You can use a cron job to perform such a check periodically (e.g. every minute).
Also, you might want to improve your Java code so that it does not crash that often (if you have access to the code).
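A minimal cron-driven version of that check might look like this (xyz.jar is the jar from the question; the script path in the crontab comment is a placeholder, and pgrep -f matches against each process's full command line):

```shell
#!/bin/bash
# Run from cron, e.g.:  * * * * * /usr/local/bin/check-xyz.sh
# pgrep -f matches the pattern against the full command line of every process,
# so this finds the running instance regardless of its working directory.
if ! pgrep -f 'java -jar xyz.jar' > /dev/null; then
  java -jar xyz.jar &
fi
```

This restarts the program after any kind of crash, not only after a "Stream closed" log line.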
I solved this using a watchdog script that checks directly (with grep) whether the program(s) are running. By calling the watchdog every minute (from cron under Ubuntu), I basically guarantee (the programs and environment are VERY stable) that no program will stay offline for more than 59 seconds.
The script checks a list of programs, using the names in an array, to see whether each one is running and, if not, starts it.
#!/bin/bash
#
# watchdog
#
# Run as a cron job to keep an eye on what_to_monitor which should always
# be running. Restart what_to_monitor and send notification as needed.
#
# This needs to be run as root or a user that can start system services.
#
# Revisions: 0.1 (20100506), 0.2 (20100507)
# first prog to check
NAME[0]=soc_gt2
# 2nd
NAME[1]=soc_gt0
# 3rd, etc etc
NAME[2]=soc_gp00
# START=/usr/sbin/$NAME
NOTIFY=you@gmail.com
NOTIFYCC=you2@mail.com
GREP=/bin/grep
PS=/bin/ps
NOP=/bin/true
DATE=/bin/date
MAIL=/bin/mail
RM=/bin/rm
for nameTemp in "${NAME[@]}"; do
$PS -ef|$GREP -v grep|$GREP $nameTemp >/dev/null 2>&1
case "$?" in
0)
# It is running in this case so we do nothing.
echo "$nameTemp is RUNNING OK. Relax."
$NOP
;;
1)
echo "$nameTemp is NOT RUNNING. Starting $nameTemp and sending notices."
START=/usr/sbin/$nameTemp
$START >/dev/null 2>&1 &
NOTICE=/tmp/watchdog.txt
echo "$nameTemp was not running and was started on `$DATE`" > $NOTICE
# $MAIL -n -s "watchdog notice" -c $NOTIFYCC $NOTIFY < $NOTICE
$RM -f $NOTICE
;;
esac
done
exit
I do not use the log verification, though you could easily incorporate that into your own version (just swap the grep for a log check, for example).
If you run it from the command line (or PuTTY, if you are connected remotely), you will see what was working and what wasn't. I have been using it for months now without a hiccup; just call it whenever you want to see what's working (regardless of it running under cron).
You could also place all your critical programs in one folder, do a directory listing, and check that every file in that folder has a program running under the same name; or read a text file line by line, with every line corresponding to a program that is supposed to be running. Etc., etc.
A good way is to use the awk command:
tail -f somelog.log | awk '/\[INFO\] Stream closed/ { system("java -jar xyz.jar") }'
This continually monitors the log stream, and when the regular expression matches, it fires off whatever system command you have set, which can be anything you would type into a shell. (Note the brackets are escaped; unescaped, [INFO] is a character class, and the log line uses a lowercase "closed".)
If you really want to be thorough, you can put that line into a .sh file and run that .sh file from a process-monitoring daemon like upstart to ensure that it never dies.
Nice and clean =D
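The pattern/action pair can be dry-run on a fake stream to see exactly when the action would fire (the printed text here is just a stand-in for the system() call):

```shell
# The brackets must be escaped: unescaped, [INFO] is a character class
# matching a single I, N, F, or O anywhere in the line.
printf 'starting up\n[INFO] Stream closed\n' \
  | awk '/\[INFO\] Stream closed/ { print "would restart here" }'
```

Only the second line matches, so the action runs once; with tail -f in front, the same thing happens live as new lines arrive.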
