Display output of command in terminal while using command substitution - bash

So I'm trying to check the output of a command, but I also want to be able to display that output directly in the terminal.
#!/bin/bash
while :
do
    OUT=$(streamlink -o "$NAME" "$STREAM" best)
    echo "$OUT"
    if [[ $OUT == *"No playable streams"* ]]; then
        echo "Delaying!"
        sleep 15s
    fi
done
This is what I tried to do.
The code checks whether the output of the command contains that error substring and, if so, adds a delay. That part works well.
But it doesn't work when the command is actually downloading a file successfully: the echo doesn't run until the download finishes (which can take hours), so until then I have no way of checking the command's output.
Also, the output of this particular command displays and updates the speed and file size in real time, something a single echo can't replicate.
So is there a way to display the output of a command in real time, while also command-substituting it in order to check the output for substrings after the command is finished?

Use a temporary file:
TEMP=$(mktemp) || exit 1
while true
do
    # |& pipes both stdout and stderr into tee, which displays the
    # output live and writes a copy to the temp file
    streamlink -o "$NAME" "$STREAM" best |& tee "$TEMP"
    OUT=$(cat "$TEMP")
    #echo "$OUT" # no longer needed
    if [[ $OUT == *"No playable streams"* ]]; then
        echo "Delaying!"
        sleep 15s
    fi
done
# not really needed here because of the endless loop
rm -f "$TEMP"
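If you'd rather avoid the temporary file, here is a variant of the same idea (a sketch, not from the original answer, assuming the script runs with a terminal attached so /dev/tty exists): tee a live copy to the terminal inside the command substitution itself.
while true
do
    # tee writes a live copy to the terminal while the command
    # substitution captures everything for the substring check
    OUT=$(streamlink -o "$NAME" "$STREAM" best |& tee /dev/tty)
    if [[ $OUT == *"No playable streams"* ]]; then
        echo "Delaying!"
        sleep 15s
    fi
done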

Related

monitoring and searching a file with inotify, and command line tools

Log files are written line by line by underwater drones to a server. When at the surface, the drones talk slowly to the server (say ~200 bytes/s over an unstable phone line) and only from time to time (say every ~6 h). Depending on the messages, I have to execute commands on the server while a drone is online, and other commands when it hangs up. Other processes may be looking at the same files with similar tasks.
A lot can be found on this website about somewhat similar problems, but the solution I have built on is still unsatisfactory. Presently I'm doing this in bash:
while logfile_drone=$(inotifywait -e create --format '%f' log_directory); do
    logfile=log_directory/${logfile_drone}
    while action=$(inotifywait -q -t 120 -e modify -e close --format '%e' "${logfile}"); do
        exitCode=$?
        # note: when the body runs, the while condition has succeeded, so this
        # is always 0; on timeout inotifywait exits with 2, the condition
        # fails, and the loop ends before this line is ever reached
        # tail -n1 alone often returns only part of the last line, hence -n2 | head -n1
        lastLine=$(tail -n2 "${logfile}" | head -n1)
        match=false # placeholder: set to true if lastLine matches some pattern
        if [[ $action == 'MODIFY' ]] && $match; then
            : # do something
        fi
        if [[ $(echo $action | cut -c1-5) == 'CLOSE' ]]; then
            # do something
            break
        fi
        if [[ $exitCode -eq 2 ]]; then break; fi
    done
    # do something after the drone has hung up
done # wait for a new call from the same or another drone
The main problems are:
the second inotifywait misses lines, maybe because of the other processes looking at the same file;
the way I catch the timeout doesn't seem to work;
I can't monitor two drones simultaneously.
Basically the code works more or less, but it isn't very robust. I wonder if problem 3 can be managed by putting the second while loop in a function which is put in the background when called. Finally, I wonder if a higher-level language (I'm familiar with PHP, which has a PECL extension for inotify) would not do this much better. However, I imagine that PHP will not solve problem 3 any better than bash.
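To illustrate that idea for problem 3, here is a minimal sketch (handle_drone is a hypothetical wrapper around the inner loop, not code from the question): each newly created log file gets its own backgrounded handler, so two drones can be followed at once.
# hypothetical per-drone handler wrapping the inner while loop
handle_drone() {
    local logfile=$1
    while action=$(inotifywait -q -t 120 -e modify -e close --format '%e' "$logfile"); do
        # ... per-event processing from the inner loop goes here ...
        [[ ${action:0:5} == CLOSE ]] && break
    done
}

while logfile_drone=$(inotifywait -e create --format '%f' log_directory); do
    handle_drone "log_directory/${logfile_drone}" &   # one background job per drone
done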
Here is the code where I'm facing the problem of abrupt exit from the while loop, implemented according to Philippe's answer, which works fine otherwise:
while read -r action ; do
    ...
    resume=$( grep -e 'RESUMING MISSION' <<< "$lastLine" )
    if [ -n "$resume" ] ; then
        ssh user@another_server "/usr/bin/php /path_to_matlab_command/matlabCmd.php --drone=${vehicle}" &
    fi
    if [ "$( echo $action | cut -c1-5 )" == 'CLOSE' ] ; then ... ; sigKill=true ; fi
    ...
    if $sigKill ; then break ; fi
done < <(inotifywait -q -m -e modify -e close_write --format '%e' "${logFile}")
When I comment out the line with ssh, the script exits properly via the break triggered by CLOSE; otherwise the while loop finishes abruptly after the ssh command. The ssh is put in the background because the Matlab code runs for a long time.
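A likely explanation, based on a well-known bash pitfall rather than anything stated in the thread: ssh inherits the loop's standard input and reads from it, swallowing the rest of the inotifywait stream, so the next read hits end-of-file and the loop ends. Redirecting ssh's stdin avoids this:
# -n redirects ssh's stdin from /dev/null so it cannot consume the
# inotifywait events that the surrounding 'while read' loop depends on
ssh -n user@another_server "/usr/bin/php /path_to_matlab_command/matlabCmd.php --drone=${vehicle}" &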
Monitor mode (-m) of inotifywait may serve better here:
inotifywait -m -q -e create -e modify -e close log_directory |
while read -r dir action file; do
    ...
done
Monitor mode (-m) does not buffer; it just prints all events to standard output.
To preserve variables set inside the loop (the pipeline above runs the loop in a subshell), feed the loop through process substitution instead:
while read -r dir action file; do
    echo $dir $action $file
done < <(inotifywait -m -q -e create -e modify -e close log_directory)
echo "End of script"

bash: pgrep in a command substitution

I want to build a small script (called check_process.sh) that checks whether a certain process $PROC_NAME is running. If it is, it returns its PID, or -1 otherwise.
My idea is to use pgrep -f <STRING> in a command substitution.
If I run this code directly in the command line:
export ARG1=foo_name
export RES=$(pgrep -f ${ARG1})
if [[ $RES == "" ]]; then echo "-1" ; else echo "$RES"; fi
everything goes fine: PID or -1 depending on the process status.
My script check_process.sh contains the same lines plus an extra variable to pass the process name:
#!/bin/bash
export ARG1=$1
export RES=$(pgrep -f ${ARG1})
if [[ $RES == "" ]]; then echo "-1" ; else echo "$RES"; fi
But this code does not work!
If the process is currently running I get two PIDs (the process's PID and something else...), whereas when I check a process that is not running I get just the something else!
I am puzzled. Any idea?
Thanks in advance!
If you add the -a flag to pgrep inside your script, you can see something like this (I ran ./check_process.sh vlc):
17295 /usr/bin/vlc --started-from-file ~/test.mkv
18252 /bin/bash ./check_process.sh vlc
So the "something else" is the pid of the running script itself.
The pgrep manual explains the -f flag:
The pattern is normally only matched against the process name. When -f is set, the full command line is used.
Obviously, the script's command line contains the lookup process name ('vlc') as an argument, hence it appears in the pgrep -f result.
If you're looking just for the process name matches you can remove the -f flag and get your desired result.
If you wish to stay with the -f flag, you can filter out the current PID:
#!/bin/bash
ARG1=$1
TMP=$(pgrep -f "${ARG1}")
# drop the PID of this script itself; -x matches the whole line, so
# e.g. PID 123 does not also filter out 1234
RES=$(echo "${TMP}" | grep -vx "$$")
if [[ $RES == "" ]]; then echo "-1" ; else echo "${RES}"; fi
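For comparison, here is a minimal sketch without -f at all, relying on pgrep's exit status (0 if at least one process matched) and on -x for an exact name match, so the script itself can never match:
#!/bin/bash
# -x: match the process name exactly; the script's own command line is
# never inspected, so no self-match filtering is needed
if RES=$(pgrep -x "$1"); then
    echo "$RES"
else
    echo "-1"
fi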

testing a program in bash

I wrote a program in C++ and now I have a binary. I have also generated a bunch of tests for testing. Now I want to automate the testing with bash. I want to save three things from one execution of my binary:
execution time
exit code
output of the program
Right now I am stuck with a script that only tests that the binary does its job and returns 0, and doesn't save any of the information I mentioned above. My script looks like this:
#!/bin/bash
if [ "$#" -ne 2 ]; then
    echo "Usage: testScript <binary> <dir_with_tests>"
    exit 1
fi

binary="$1"
testsDir="$2"

for test in $(find "$testsDir" -name '*.txt'); do
    testname=$(basename "$test")
    # mktemp requires a run of X's in the template
    encodedTmp=$(mktemp "/tmp/encoded_${testname}_XXXXXX")
    decodedTmp=$(mktemp "/tmp/decoded_${testname}_XXXXXX")

    printf 'testing on %s...\n' "$testname"

    if ! "$binary" -c -f "$test" -o "$encodedTmp" > /dev/null; then
        echo 'encoder failed'
        rm "$encodedTmp" "$decodedTmp"
        continue
    fi
    if ! "$binary" -u -f "$encodedTmp" -o "$decodedTmp" > /dev/null; then
        echo 'decoder failed'
        rm "$encodedTmp" "$decodedTmp"
        continue
    fi

    if ! diff "$test" "$decodedTmp" > /dev/null ; then
        echo "result differs with input"
    else
        echo "$testname passed"
    fi

    rm "$encodedTmp" "$decodedTmp"
done
I want to save the output of $binary in a variable instead of sending it to /dev/null. I also want to measure the execution time using bash's time keyword.
As you asked for the output to be saved in a shell variable, I tried answering this without using output redirection, which would save the output in (temporary) text files that then have to be cleaned up.
Saving the command output
You can replace this line
if ! "$binary" -c -f $test -o $encodedTmp > /dev/null; then
with
if ! output=$("$binary" -c -f "$test" -o "$encodedTmp"); then
Command substitution saves the program output of $binary in the shell variable. Since the exit status of a variable assignment with command substitution is the exit status of the command itself, the conditional if statement will still check whether $binary executed without error.
You can view the program output by running echo "$output".
Saving the time
Without a more sophisticated form of inter-process communication, there's no way for a shell that is a sub-process of another shell to change the variables or the environment of its parent process, so the only way I could save both the time and the program output was to combine them in one variable:
if ! time_output=$( (time "$binary" -c -f "$test" -o "$encodedTmp") 2>&1 ); then
Since time prints its profiling information to stderr, I use the parentheses operator to run the command in a subshell whose stderr can be redirected to stdout. The program output and the output of time can be viewed by running echo "$time_output", which should return something similar to:
<program output>
<blank line>
real 0m0.041s
user 0m0.000s
sys 0m0.046s
You can get the exit status of the last command in bash using $? and print it with echo $?.
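For instance, a small sketch using the variables from the script above (the key point is capturing $? immediately, since any later command overwrites it):
"$binary" -c -f "$test" -o "$encodedTmp" > /dev/null
status=$?   # must be read before running anything else
echo "exit code: $status"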
And to catch the output of time, you could use something like this:
{ time sleep 1 ; } 2> time.txt
Or you can save the output of the program and the execution time at once:
(time ls) > out.file 2>&1
You can save the output to a file using output redirection. Just change the first /dev/null line:
if ! "$binary" -c -f "$test" -o "$encodedTmp" > /dev/null; then
to
if ! "$binary" -c -f "$test" -o "$encodedTmp" > prog_output; then
then change the second and third /dev/null lines respectively:
if ! "$binary" -u -f "$encodedTmp" -o "$decodedTmp" >> prog_output; then
if ! diff "$test" "$decodedTmp" >> prog_output; then
To measure program execution time, put
start=$(date +%s)
on the first line, then
end=$(date +%s)
echo "Execution time in seconds: $((end-start))" >> prog_output
at the end.
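Note that date +%s only has whole-second resolution. If you need finer timing and have GNU date (an assumption about the platform), %N gives nanoseconds:
start=$(date +%s%N)   # nanoseconds since the epoch (GNU date)
"$binary" -c -f "$test" -o "$encodedTmp" > prog_output
end=$(date +%s%N)
echo "Execution time: $(( (end - start) / 1000000 )) ms" >> prog_output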

Lynx is stopping loop?

I'll just apologize beforehand; this is my first ever post, so I'm sorry if I'm not specific enough, if the question has already been answered and I just didn't look hard enough, or if I use incorrect formatting of some kind.
That said, here is my issue: in bash, I am trying to create a script that will read a file listing several dozen URLs. For each line it reads, it should run a set of actions, the first being to use lynx to navigate to the website. In practice, it runs perfectly once, on the first line: lynx goes, the download works, and the subsequent renaming and organizing of that file go through as well. But then it skips all the other lines and acts as if it has finished the whole file.
I tested whether lynx was causing the issue by eliminating all the other parts of the code, and then by eliminating just lynx. It works without lynx, but of course I need lynx for the rest of the output to be of any use to me. Let me just post the code:
#!/bin/bash
while read -r line; do
    echo "$line"
    lynx -accept_all_cookies "$line"
    echo "lynx done"
    od -N 2 -h *.zip | grep "4b50"
    zipFound=$?   # capture before the next echo overwrites $?
    echo "od done, if 1 starting..."
    if [[ $zipFound -eq 0 ]]
    then ls *.* >> logs/zips.log
    else
        od -N 2 -h *.exe | grep "5a4d"
        exeFound=$?
        echo "if 2 starting..."
        if [[ $exeFound -eq 0 ]]
        then ls *.* >> logs/exes.log
        else
            od -N 2 -h *.exe | grep "5a4d, 4b50"
            neitherFound=$?
            echo "if 3 starting..."
            if [[ $neitherFound -eq 1 ]]
            then
                ls *.* >> logs/failed.log
            fi
            echo "if 3 done"
        fi
        echo "if 2 done"
    fi
    echo "if 1 done..."
    FILE=$(ls -tr *.* | head -1)
    NOW=$(date +"%m_%d_%Y")
    echo "vars set"
    mv -u "$FILE" "criticalfreepri/${FILE%%.*}(ZCH,$NOW).${FILE#*.}"
    echo "file moved"
    rm *.zip *.exe
    echo "file removed"
done < "lynx"
$SHELL
Just to be sure, I do have a file called "lynx" that contains the URLs, one per line. Also, I used all those echoes to do my own sort of debugging, and I have tried it with and without them. When I execute the script, the echoes all show up...
Any help is appreciated, and thank you all so much! Hope I didn't break any rules with this post!
PS: I'm on Linux Mint running things through the "terminal" program. I'm scripting with bash in gedit, if any of that info is relevant. Thanks!
EDIT: Actually, the echo tests repeat for all three lines. So it would appear that lynx simply can't start again in the same loop?
Here is a simplified version of the script, as requested:
#!/bin/bash
while read -r line; do
    echo "$line"
    lynx "$line"
    echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Note that I have changed the sites the "url" file goes to:
www.google.com
www.majorgeeks.com
http://www.sophos.com/en-us/products/free-tools/virus-removal-tool.aspx
Lynx is not designed for use in scripts, because it locks the terminal; Lynx is an interactive console browser.
If you want to access URLs in a script use wget, for example:
wget http://www.google.com/
For exit codes see: http://www.gnu.org/software/wget/manual/html_node/Exit-Status.html
To parse the HTML content use:
VAR=$(wget -qO- http://www.google.com/)
echo "$VAR"
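If lynx itself is required, there is another general bash pitfall worth ruling out (not discussed in this thread): interactive programs inside a while read loop inherit the loop's stdin and can consume the rest of the URL file. Pointing lynx's stdin elsewhere keeps the loop intact, e.g.:
while read -r line; do
    # read keystrokes from the terminal instead of the URL file
    lynx -accept_all_cookies "$line" < /dev/tty
done < "ref/url"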
I found a way that may fulfill your requirement to run the lynx command in a loop, substituting a different URL each time.
Use
echo `lynx $line`
(that is, echo the lynx $line wrapped in backquotes (`))
instead of lynx $line. Compare the two versions below:
Your code:
#!/bin/bash
while read -r line; do
    echo "$line"
    lynx "$line"
    echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Try this instead:
#!/bin/bash
while read -r line; do
    echo "$line"
    echo `lynx "$line"`
    echo "lynx done"
done < "ref/url"
I should have answered this question a long time ago. I got the program working; it's now on GitHub!
Anyway, I simply had to wrap the loop inside a function. Something like this:
progdownload () {
    printlog "attempting download from ${URL}"
    if echo "${URL}" | grep -q "http://www.majorgeeks.com/" ; then
        lynx -cmd_script="${WORKINGDIR}/support/mgcmd.txt" --accept-all-cookies "${URL}"
    else
        wget "${URL}"
    fi
}
URL="something.com"
progdownload

How can I use the tail utility to view a log file that is frequently recreated

I need a solution for creating a script to tail a log file that is recreated (with the same name) after it reaches a certain size.
Using "tail -f" causes the tailing to stop when the file is recreated/rotated.
What I would like to do is create a script that tails the file and restarts the command after, for example, 100 lines... Or, even better, restarts the command when the file is recreated.
Is it possible?
Yes! Use this (--retry makes tail retry when the file doesn't exist or is otherwise inaccessible, rather than just failing, such as while the file is being replaced):
tail -f --retry <filename>
OR
tail --follow=name --retry <filename>
OR
tail -F <filename>
try running
watch "tail -f" yourfile.log
If tail -F is not available and you are trying to recover from logrotate, you may add the copytruncate option to your logrotate.d/ spec file; then, instead of a new file being created after each rotation, the file is kept and truncated, while a copy is rotated out.
This way the old file handle continues to point to the new (truncated) log file where new logs are appended.
Note that there may be some loss of data during this copy-truncate process.
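A minimal sketch of such a spec file (the path and sizes are hypothetical examples):
# /etc/logrotate.d/myapp (hypothetical path)
/var/log/myapp.log {
    size 100M
    rotate 5
    copytruncate   # copy the log, then truncate it in place, so the
                   # original file handle (and tail -f) keeps working
}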
Since you don't have a tail that supports all these features, and you don't have watch, you can use a simple script that loops indefinitely to run the tail:
#!/bin/bash
PIDFILE=$(mktemp)   # temp file holding the PID of the current tail
while true;
do
    [ -e "$1" ] && IO=$(stat -c %i "$1")
    [ -e "$1" ] && echo "restarting tail" && { tail -f "$1" 2> /dev/null & echo $! > "$PIDFILE"; }
    # as long as the file exists and the inode number did not change
    while [[ -e "$1" ]] && [[ $IO = $(stat -c %i "$1") ]]
    do
        sleep 0.5
    done
    [ -s "$PIDFILE" ] && kill "$(cat "$PIDFILE")" 2> /dev/null && : > "$PIDFILE"
    sleep 0.5
done 2> /dev/null
rm -f "$PIDFILE"
You might want to use trap to exit this script cleanly; that is up to you.
Basically, this script checks whether the inode number (obtained with stat -c %i "$1") has changed, and if so kills the tail command and starts a new one on the recreated file.
Note: you might get rid of the echo "restarting tail", which will pollute your output; it was only useful for testing. Also, problems might occur if the file is replaced after we check the inode number and before we start the tail process.
