Ignoring all but the (multi-line) results of the last query sent to a program - bash

I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally, I have an input file and a special command; let's call these EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo "$CMD" | cat "$FILE" - | "$EXEC"
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | "$EXEC" &
cat "$FILE" | while read -r line; do
    echo "$line" > mypipe
done
echo "$CMD" > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.

This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
    output=; trap 'output=1' USR1
    while IFS= read -r line; do
        [[ $output ]] && printf '%s\n' "$line"
    done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
    local count line
    count=0
    while IFS= read -r line; do
        (( ++count ))
        printf '%s: %s\n' "$count" "$line"
    done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command

You could pipe it into sed:
var=$(YOUR COMMAND | sed '$!d')
This puts only the last line of the output into the variable.
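For example, here is a tiny demo with printf standing in for your command:
var=$(printf '%s\n' one two three | sed '$!d')
echo "$var"   # prints: three
Note that this only helps if the answer you want is a single line; the question states that the output per query can be multi-line.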

I suspect that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can simply run it twice:
"$EXEC" < "$FILE" > /dev/null
myvar=$(echo "$CMD" | "$EXEC")
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all the input to one process, perhaps you can use a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
Examine your EXEC to see what could serve as a marker. When it is running SQL, you might use something like
(cat "$FILE"; echo 'select "DamonMarker" from dual;'; echo "$CMD") |
    "$EXEC" | sed '1,/DamonMarker/ d'
and capture this in a variable with
myvar=$( (cat "$FILE"; echo 'select "DamonMarker" from dual;'; echo "$CMD") |
    "$EXEC" | sed '1,/DamonMarker/ d' )
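As a minimal, runnable sketch of the marker idea, with cat standing in for EXEC (the file name and marker here are made up for the demo):
printf '%s\n' query1 query2 > queries.txt
myvar=$( (cat queries.txt; echo MARKER; echo last-query) | cat | sed '1,/MARKER/ d' )
echo "$myvar"   # prints: last-query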

Related

Why `for x in {1..3}; do echo "$x" && sleep 2 ; done | tee output123` doesn't need `tee -a`?

command:
for x in {1..3}; do echo "$x" && sleep 2 ; done | tee output123
writes
1
2
3
correctly to output123. Why is -a for tee not necessary here?
And I know that for:
for x in {1..3}; do echo "$x" | tee -a output123 && sleep 2 ; done
it needs tee -a.
I guess there's something to do with the bash loop?
In the first command the whole loop runs and all of its output is fed to a single tee invocation. In the second, tee is run anew for each loop iteration, so without -a each execution would simply overwrite the file.
Note that these aren’t equivalent if the file already exists.
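A quick way to see the difference (out123 is just a scratch file for this demo):
for x in {1..3}; do echo "$x" | tee out123; done; cat out123      # file holds only 3
rm out123
for x in {1..3}; do echo "$x" | tee -a out123; done; cat out123   # file holds 1 2 3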

bash - print a line every X seconds (like sed every X lines)

I know that with sed you can pipe the output of a command and print every X lines:
make all | sed -n '2~5p'
Is there an equivalent command to print a line every X seconds?
make all | print_line_every_sec '5'
With a 5-second timeout, read one line and discard everything else:
while
    # timeout 5 seconds
    ! timeout 5 sh -c '
        # read one line
        if IFS= read -r line; then
            # output the line
            printf "%s\n" "$line"
            # discard the input for the rest of the 5 seconds
            cat >/dev/null
        fi
        # we get here only if there is nothing left to read
    '
    # that means that `timeout` will always return 124 if stdin is still open,
    # and it will return 0 exit status only if there is nothing to read,
    # so we loop on a nonzero exit status of timeout.
do :; done
and as a one-liner:
while ! timeout 0.5 sh -c 'IFS= read -r line && printf "%s\n" "$line" && cat >/dev/null'; do :; done
But maybe something simpler: after each line, just discard 5 seconds of data:
while IFS= read -r line; do
    printf "%s\n" "$line"
    timeout 5 cat >/dev/null
done
or
while IFS= read -r line &&
      printf "%s\n" "$line" &&
      ! timeout 5 cat >/dev/null
do :; done
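To try the last variant without a real build, a seq loop can stand in for make all:
for i in $(seq 1 20); do echo "line $i"; sleep 0.2; done |
    while IFS= read -r line &&
          printf "%s\n" "$line" &&
          ! timeout 1 cat >/dev/null
    do :; done
This prints roughly one line per second while the producer emits five per second.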
If you want the most recent message every 5 seconds, here is an attempt:
make all | {
    display() {
        if (( SECONDS >= 5 )); then
            if test -n "${last_line+x}"; then
                # print only if there was a message in the last 5 seconds
                echo "$last_line"; unset last_line
            fi
            SECONDS=0
        fi
    }
    SECONDS=0
    while true; do
        while IFS= read -r -t 0.001 line; do
            last_line=$line
            display
        done
        display
    done
}
Even if the proposed solutions are interesting and beautiful, IMHO the most elegant solution is an awk one. If you want to issue
make all | print_line_every_sec 5
then you have to create the script print_line_every_sec as follows, including a test to avoid an infinite loop:
#!/bin/bash
if [ "$1" -le 0 ]; then echo "$(basename "$0"): invalid argument '$1'"; exit 1; fi
awk -v delay="$1" 'BEGIN {t = systime()}
                   {if (systime() >= t) {print $0; t += delay}}'
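Assuming the script is saved in the current directory, you can test it with a generator standing in for make all:
chmod +x print_line_every_sec
for i in $(seq 1 10); do echo "$i"; sleep 1; done | ./print_line_every_sec 3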
This might work for you (GNU sed):
sed 'e sleep 1' file
This prints a line every n seconds (in the above example, 1).
To print 5 lines every 2 seconds, use:
sed '1~5e sleep 2' file
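The same idea works on a pipe, so for the original question it would be (GNU sed assumed):
make all | sed '1~5e sleep 2'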
You can do it with the watch command.
If you only need to print your output every X seconds, you could use something like this:
watch -n X "Your CMD"
If you need to see any change in your output, it is useful to use the -d switch:
watch -n X -d "Your CMD"
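For example, to re-run a command every 5 seconds and highlight what changed between runs:
watch -n 5 -d "ls -l /tmp"
Note that watch re-runs the command each interval rather than sampling an existing pipe, which may or may not fit the original make all use case.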

How to run commands off of a pipe

I would like to run commands such as "history" or "!23" off of a pipe.
How might I achieve this?
Why does the following command not work?
echo "history" | xargs eval $1
To answer (2) first:
history and eval are both bash builtins, so xargs cannot run either of them.
xargs also does not take $1 arguments; see man xargs for the correct syntax.
For (1), it doesn't really make much sense to do what you are attempting, because shell history is not likely to be synchronised between invocations, but you could try something like:
{ echo 'history'; echo '!23'; } | bash -i
or:
{ echo 'history'; echo '!23'; } | while read -r cmd; do eval "$cmd"; done
Note that pipelines run inside subshells. Environment changes are not retained:
x=1; echo "x=2" | while read -r cmd; do eval "$cmd"; done; echo "$x"
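If you need such changes to stick, one possible workaround (bash 4.2+; it takes effect in scripts, where job control is off) is the lastpipe option, which runs the last pipeline stage in the current shell:
shopt -s lastpipe
x=1; echo "x=2" | while read -r cmd; do eval "$cmd"; done; echo "$x"   # now prints 2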
You can try it like this.
First redirect the history commands to a file (cutting out the line numbers):
history | cut -c 8- > cmd.txt
Now create this script, hcmd.sh (see "Read a file line by line assigning the value to a variable"):
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "Text read from file: $line"
    $line
done < "cmd.txt"
Run it like this:
./hcmd.sh
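Note that the bare $line in the script only word-splits each command, so anything involving pipes, redirections or quoting will break; a variant using eval (with the usual caveats about eval-ing arbitrary text) handles those cases:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "Executing: $line"
    eval "$line"
done < "cmd.txt"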

displaying command output in stdout then save to file with transformation?

I have a long-running command which outputs periodically. To demonstrate, let's assume it is:
function my_cmd()
{
    for i in {1..9}; do
        echo -n $i
        for j in {1..$i}
            echo -n " "
        echo $i
        sleep 1
    done
}
the output will be:
1 1
2  2
3   3
4    4
5     5
6      6
7       7
8        8
9         9
I want to display the command output and save it to a file at the same time.
This can be done with my_cmd | tee -a res.txt.
Now I want to display the output to the terminal as-is, but save it to the file in a transformed flavor, say with sed "s/ //g".
so the res.txt becomes:
11
22
33
44
55
66
77
88
99
how can I do this transformation on-the-fly without waiting for command exits then read the file again?
Note that in your original code, {1..$i} is an error because sequences can't contain variables. I've replaced it with seq. Also, you're missing a do and a done for the inner for loop.
At any rate, I would use process substitution.
#!/usr/bin/env bash
function my_cmd {
    for i in {1..9}; do
        printf '%d' "$i"
        for j in $(seq 1 "$i"); do
            printf ' '
        done
        printf '%d\n' "$j"
        sleep 1
    done
}
my_cmd | tee >(tr -d ' ' >> res.txt)
Process substitution usually causes bash to create an entry in /dev/fd which is fed to the command in question. The contents of the substitution run asynchronously, so it doesn't block the process sending data to it.
Note that the process substitution isn't a REAL file, so the -a option for tee is meaningless. If you really want to append to your output file, >> within the substitution is the way to go.
If you don't like process substitution, another option would be to redirect to alternate file descriptors. For example, instead of the last line in the script above, you could use:
exec 5>&1
my_cmd | tee /dev/fd/5 | tr -d ' ' > res.txt
exec 5>&-
This creates a file descriptor, /dev/fd/5, which redirects to your real stdout, the terminal. It then tells tee to write to this, allowing the normal stdout from tee to be processed by additional pipe elements before final redirection to your log file.
The method you choose is up to you. I find process substitution clearer.
You need to modify something in your function, and you can use tee inside the for loop to print and write the file at the same time. The following script should get the result you desire.
#!/bin/bash
filename="a.txt"
[ -f "$filename" ] && rm "$filename"
for i in {1..9}; do
    echo -n "$i" | tee -a "$filename"
    for ((j=1; j<=i; j++)); do
        echo -n " "
    done
    echo "$i" | tee -a "$filename"
    sleep 1
done
Instead of a double loop, I would use printf and its %Xs formatting capability to pad with blank characters.
Moreover, I would print twice (once for stdout and once for your file) rather than using a pipe and starting new processes.
So your function could look like this:
function my_cmd() {
    for i in {1..9}; do
        printf "%s %${i}s\n" "$i" "$i"
        printf "%s%s\n" "$i" "$i" >> res.txt
    done
}
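To see what the %Xs width specifier does, a quick sketch:
printf "%s %3s\n" 3 3   # prints "3   3": the second field is right-aligned in a width of 3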

Bash Script - using cmd instead of cat

I wrote a script, including this loop:
#!/bin/bash
cat "$1" | while read -r line; do
echo "$line"; sleep 2;
done
A shellcheck run put out the following message:
SC2002: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead.
I changed the script to:
#!/bin/bash
cmd < "$1" | while read -r line; do
echo "$line"; sleep 2;
done
but now bash exits with:
cmd: command not found
what have I done wrong?
The cmd in shellcheck's message is just a placeholder for your command. Here your "cmd" is the whole while cond; do ... done compound statement, and in this case the redirection needs to come at the end:
while read -r line; do
    echo "$line"; sleep 2
done < "$1"
Remove the cmd and the |, and have the last line be:
done < "$1"
