Display output from last command in bash until loop - bash

until commandThatProducesOutput | grep -m 1 "Done"
do
    ???
    sleep 5
done
While this script is running, I'd like to pipe the output that commandThatProducesOutput produces to the screen but can't seem to get the correct syntax.

How about:
output=$(commandThatProducesOutput)
until echo "$output" | grep -m 1 "Done"
do
    echo "$output"
    sleep 5
    output=$(commandThatProducesOutput)
done
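If you would rather stream the output live instead of capturing it in a variable first, here is a minimal sketch of an alternative (assuming the command can safely be re-run every few seconds and that writing to /dev/tty is acceptable):
# tee copies every line to the terminal while grep -q -m 1 watches for "Done"
until commandThatProducesOutput | tee /dev/tty | grep -q -m 1 "Done"
do
    sleep 5
done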

Related

Ignoring all but the (multi-line) results of the last query sent to a program

I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command, let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (redirect it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &
cat ${FILE} | while read line; do
    echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
    output=; trap 'output=1' USR1
    while IFS= read -r line; do
        [[ $output ]] && printf '%s\n' "$line"
    done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
    local count line
    count=0
    while IFS= read -r line; do
        (( ++count ))
        printf '%s: %s\n' "$count" "$line"
    done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it into sed:
var=$(YOUR COMMAND | sed '$!d')
This will put only the last line into the variable.
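For example (illustrative only; any command's output works the same way), sed '$!d' deletes every line except the last:
last=$(printf '%s\n' first second third | sed '$!d')
echo "$last"    # prints: third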
I think that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can use
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
You should examine your EXEC to see what could serve as a marker. When it is running SQL, you might use something like
(cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d'
and write this in a var with
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d' )
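Here is a self-contained sketch of the same marker idea with ordinary commands, so it can be tried without the real EXEC (cat stands in for EXEC, and the file name and marker string are made up):
FILE=input.txt
CMD='final query'
printf '%s\n' 'ignored 1' 'ignored 2' > "$FILE"
# everything up to and including the MARKER line is thrown away by sed
myvar=$( (cat "$FILE"; echo 'MARKER'; echo "$CMD") | cat | sed '1,/MARKER/ d' )
echo "$myvar"    # prints only: final query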

How to 'grep' a continuous log file stream with 'tailf' and when the needed string is obtained, close/break the 'tailf' automatically?

In a bash script, I need to grep a continuous log stream, and when the proper string is matched, I need to stop the 'tailf' command so I can move on with the rest of the script.
The common command that works is:
tailf /dir/dir/dir/server.log | grep --line-buffered "Started in"
After the "Started in" line is matched, I need to terminate the "tailf" command.
All of this inside a bash script.
Use grep -m1; it returns the first match and then stops:
-m num, --max-count=num
Stop reading the file after num matches.
tailf /dir/dir/dir/server.log | grep -m1 "Started in"
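Inside a script that would look something like this (a sketch; tailf is deprecated on many distributions, so tail -F is used here as a stand-in, with the log path taken from the question):
#!/usr/bin/env bash
# grep -m1 exits after the first match; tail then receives SIGPIPE on its
# next write and terminates, so the script continues past this line.
tail -F /dir/dir/dir/server.log | grep -m1 "Started in"
echo "server started, moving on with the rest of the script"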
Figured out...
tailf /dir/dir/dir/server.log | while read line
do
    if echo "$line" | grep "thing_to_grep"; then
        echo ""; echo "[ message ]"; echo ""
        kill -2 -$$
    fi
done
$$ is the PID of the current shell; kill -2 -$$ sends SIGINT to its whole process group, which in this case includes the "tailf" command.

Bash - catch the output of a command

I am trying to check the output of a command and run different commands depending on the output.
count="1"
for f in "$#"; do
    BASE=${f%.*}
    # if [ -e "${BASE}_${suffix}_${suffix2}.mp4" ]; then
    echo -e "Reading GPS metadata using MediaInfo file ${count}/${##} "$(basename "${BASE}_${suffix}_${suffix2}.mp4")"
    mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep "©xyz" | head -n 1
    if [[ $? != *xyz* ]]; then
        echo -e "WARNING!!! No GPS information found! File ${count}/${##} "$(basename "${BASE}_${suffix}_${suffix2}.mp4")" || exit 1
    fi
    ((count++))
done
MediaInfo is the command I am checking the output of.
If a video file has "©xyz" atom written into it the output looks like this:
$ mediainfo FILE | grep "©xyz" | head -n 1
$ ©xyz : +60.9613-125.9309/
$
otherwise it is null
$ mediainfo FILE | grep "©xyz" | head -n 1
$
The above code does not work and echoes the warning even when ©xyz is present.
Any ideas of what I am doing wrong?
The syntax you are using to capture the output of the mediainfo command is plain wrong. When using grep, you can use its return code (the value of $?) directly in the if-conditional:
if mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep -q "©xyz" 2> /dev/null;
then
..
The -q flag tells grep to run silently, without writing any matches to stdout, and the 2>/dev/null part suppresses any errors written to stderr, so the if-conditional passes when the string is present and fails when it is not.
$? is the exit code of the command: a number between 0 and 255. It's not related to stdout, where your value "xyz" is written.
To match in stdout, you can just use grep:
if mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep -q "©xyz"
then
    echo "It contained that thing"
else
    echo "It did not"
fi
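If you do want to keep the captured text and test it with a shell pattern instead (closer to what the original script attempted), here is a hedged sketch using the same file naming from the question:
info=$(mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep "©xyz" | head -n 1)
if [[ $info == *"©xyz"* ]]; then
    echo "It contained that thing"
else
    echo "It did not"
fi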

In my bash script, how do I write a while loop that only exits if the output of "tail" doesn't contain a string?

I’m using Amazon Linux with bash shell. In my bash script, how do I construct a while loop that will spin so long as the command
tail -10 /usr/java/jboss/standalone/log/server.log
does not contain the string “FrameworkServlet 'myprojectDispatcher': initialization completed”?
You can use:
tail -n 10 -f /usr/java/jboss/standalone/log/server.log |
awk '/FrameworkServlet.*myprojectDispatcher.*initialization completed/{exit} 1'
awk will exit when it encounters the search string; otherwise it keeps writing its input to stdout.
However, do keep in mind that the tail command's output may be buffered; to avoid that behavior, try the GNU stdbuf utility:
stdbuf -i0 -o0 -e0 tail -n 10 -f /usr/java/jboss/standalone/log/server.log |
awk '/FrameworkServlet.*myprojectDispatcher.*initialization completed/{exit} 1'
You can try this:
#!/bin/bash
MATCH="FrameworkServlet 'myprojectDispatcher': initialization completed"
while :
do
    if tail /usr/java/jboss/standalone/log/server.log | grep -q "$MATCH"; then
        exit 0
    else
        sleep 1
    fi
done
until grep -q "FrameworkServlet 'myprojectDispatcher': initialization completed" /usr/java/jboss/standalone/log/server.log; do
    # wait a second
    sleep 1
done
# do the stuff
echo "we got it!"

Bash for loop stops after one iteration without error [duplicate]

This question already has answers here:
What does set -e mean in a bash script?
(10 answers)
Closed 2 years ago.
Let's say I have a file dates.json:
2015-11-01T12:01:52
2015-11-03T03:58:57
2015-11-09T02:43:59
2015-11-10T08:22:00
2015-11-11T05:14:51
2015-11-11T12:47:02
2015-11-13T08:33:40
I want to separate the rows to different files according to the date.
I made the following script:
#!/bin/bash
set -e
file="$1"
for i in $(seq 1 1 31); do
    if [ $i -lt 10 ]; then
        echo 'looking for 2015-11-0'$i
        cat $file | grep "2015-11-0"$i > $i.json
    else
        echo 'looking for 2015-11-'$i
        cat $file | grep "2015-11-"$i > $i.json
    fi
done
When I execute I get the following:
$ bash example.sh dates.json
looking for 2015-11-01
looking for 2015-11-02
If I try it without the cat ... rows, the script prints all the echo lines, and if I run just the cat | grep command on the command line, it works.
Would you know why it behaves like this?
If you need set -e in other parts of the script, you need to keep a non-matching grep from stopping it:
# cat $file | grep "2015-11-0"$i > $i.json
grep "2015-11-0"$i "$file" > $i.json || :
set -e forces the script to exit if a command exits with a non-zero status.
+
grep returns 1 if it fails to find a match in the file.
+
dates.json has no 2015-11-02.
=
error
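A minimal way to see the effect and the fix in isolation (illustrative only; the date strings are made up):
#!/usr/bin/env bash
set -e
echo "before"
# Without '|| :' this pipeline returns 1 (no match) and set -e aborts the script here.
echo "2015-11-01" | grep "2015-11-02" || :
echo "after"    # still reached thanks to the '|| :' guard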
It's because of your set -e, which causes the script to exit. Remove that line and it should work.
