Why doesn't `for x in {1..3}; do echo "$x" && sleep 2 ; done | tee output123` need `tee -a`? - bash

command:
for x in {1..3}; do echo "$x" && sleep 2 ; done | tee output123
writes
1
2
3
correctly to output123. Why is -a not necessary for tee here?
And I know that for:
for x in {1..3}; do echo "$x" | tee -a output123 && sleep 2 ; done
tee does need -a.
I guess it has something to do with the bash loop?

In the first command the loop runs and all of its output is piped to a single tee, which opens the file once. In the second, tee is run once per loop iteration, so without -a each invocation truncates the file and only the last iteration's output survives.
Note that even the first command is not equivalent to using -a if the file already exists: plain tee truncates it, while tee -a appends to the existing contents.
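A quick way to see the difference (a minimal sketch; output123 is just a scratch file):
# one tee for the whole loop: the file is opened (and truncated) once
for x in {1..3}; do echo "$x"; done | tee output123
cat output123    # 1 2 3, one per line
# one tee per iteration: without -a each invocation truncates the file again
for x in {1..3}; do echo "$x" | tee output123; done
cat output123    # just: 3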

Related

bash - print a line every X seconds (like sed every X lines)

I know that with sed you can filter the output of a command so that it prints one line out of every X lines:
make all | sed -n '2~5p'
Is there an equivalent command to print a line every X seconds?
make all | print_line_every_sec '5'
Within a 5-second timeout, read one line and discard everything else:
while
    # timeout after 5 seconds
    ! timeout 5 sh -c '
        # read one line
        if IFS= read -r line; then
            # output the line
            printf "%s\n" "$line"
            # discard the input for the rest of the 5 seconds
            cat >/dev/null
        fi
        # we get here only if there was nothing left to read
    '
    # timeout returns 124 as long as stdin is still open,
    # and a 0 exit status only when there is nothing left to read,
    # so we loop while timeout returns a nonzero exit status
do :; done
and as a one-liner:
while ! timeout 0.5 sh -c 'IFS= read -r line && printf "%s\n" "$line" && cat >/dev/null'; do :; done
But maybe something simpler: after each line, just discard the next 5 seconds of data:
while IFS= read -r line; do
    printf "%s\n" "$line"
    timeout 5 cat >/dev/null
done
or
while IFS= read -r line &&
      printf "%s\n" "$line" &&
      ! timeout 5 cat >/dev/null
do :; done
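Either variant can be wrapped up as the print_line_every_sec function the question asks for (a minimal sketch built on the first loop; the function name comes from the question):
print_line_every_sec() {
    while IFS= read -r line; do
        printf '%s\n' "$line"
        # discard everything arriving in the next $1 seconds
        timeout "$1" cat >/dev/null
    done
}
make all | print_line_every_sec 5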
If you want the most recent message every 5 seconds, here is an attempt:
make all | {
    display() {
        if (( SECONDS >= 5 )); then
            if test -n "${last_line+x}"; then
                # print only if there was a message in the last 5 seconds
                echo "$last_line"; unset last_line
            fi
            SECONDS=0
        fi
    }
    SECONDS=0
    while true; do
        while IFS= read -r -t 0.001 line; do
            last_line=$line
            display
        done
        display
    done
}
Even if the proposed solutions are interesting and beautiful, the most elegant solution IMHO is an awk one. If you want to run
make all | print_line_every_sec 5
then create the script print_line_every_sec as follows, including a test to avoid an infinite loop (note that systime() is a GNU awk extension):
#!/bin/bash
if [ "$1" -le 0 ]; then echo "$(basename "$0"): invalid argument '$1'"; exit 1; fi
awk -v delay="$1" '
    BEGIN { t = systime() }
    { if (systime() >= t) { print $0; t += delay } }'
This might work for you (GNU sed):
sed 'e sleep 1' file
This prints a line every n seconds (1 in the example above).
To print 5 lines every 2 seconds, use:
sed '1~5e sleep 2' file
You can do this with the watch command.
If you only need to print your output every X seconds, you could use something like this:
watch -n X "Your CMD"
If you want to highlight any change in the output, the -d switch is useful:
watch -n X -d "Your CMD"
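Note that watch re-runs the command on every interval rather than filtering a pipe, so it fits commands whose output you can regenerate. For example, to re-run a status command every 5 seconds and highlight what changed between runs (build.log is a hypothetical file name):
watch -n 5 -d 'tail -n 20 build.log'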

displaying command output on stdout while saving it to a file with a transformation?

I have a long-running command which outputs periodically. To demonstrate, let's assume it is:
function my_cmd()
{
    for i in {1..9}; do
        echo -n $i
        for j in {1..$i}
            echo -n " "
        echo $i
        sleep 1
    done
}
the output will be:
1 1
2  2
3   3
4    4
5     5
6      6
7       7
8        8
9         9
I want to display the command output and save it to a file at the same time.
This can be done with my_cmd | tee -a res.txt.
Now I want to display the output to the terminal as-is, but save it to the file in a transformed flavor, say with sed "s/ //g".
so that res.txt becomes:
11
22
33
44
55
66
77
88
99
How can I do this transformation on the fly, without waiting for the command to exit and then reading the file again?
Note that in your original code, {1..$i} is an error because sequences can't contain variables. I've replaced it with seq. Also, you're missing a do and a done for the inner for loop.
At any rate, I would use process substitution.
#!/usr/bin/env bash
function my_cmd {
    for i in {1..9}; do
        printf '%d' "$i"
        for j in $(seq 1 $i); do
            printf ' '
        done
        printf '%d\n' "$j"
        sleep 1
    done
}
my_cmd | tee >(tr -d ' ' >> res.txt)
Process substitution usually causes bash to create an entry in /dev/fd which is fed to the command in question. The contents of the substitution run asynchronously, so it doesn't block the process sending data to it.
Note that the process substitution isn't a REAL file, so the -a option for tee is meaningless. If you really want to append to your output file, >> within the substitution is the way to go.
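You can see the placeholder bash passes in place of a file name (a quick demonstration; the exact descriptor number varies, and on systems without /dev/fd bash falls back to named pipes):
$ echo >(true)
/dev/fd/63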
If you don't like process substitution, another option would be to redirect to alternate file descriptors. For example, instead of the last line in the script above, you could use:
exec 5>&1
my_cmd | tee /dev/fd/5 | tr -d ' ' > res.txt
exec 5>&-
This creates a file descriptor, /dev/fd/5, which redirects to your real stdout, the terminal. It then tells tee to write to this, allowing the normal stdout from tee to be processed by additional pipe elements before final redirection to your log file.
The method you choose is up to you. I find process substitution clearer.
There are some things you need to modify in your function. You can use tee inside the for loop to print and write the file at the same time. The following script should get the result you desire.
#!/bin/bash
filename="a.txt"
[ -f "$filename" ] && rm "$filename"
for i in {1..9}; do
    echo -n $i | tee -a "$filename"
    for ((j=1; j<=$i; j++)); do
        echo -n " "
    done
    echo $i | tee -a "$filename"
    sleep 1
done
Instead of a double loop, I would use printf and its field-width capability (%Xs) to pad with blank characters.
Moreover, I would print twice (once for stdout and once for your file) rather than using a pipe and starting new processes.
So your function could look like this:
function my_cmd() {
    for i in {1..9}; do
        printf "%s %${i}s\n" $i $i
        printf "%s%s\n" $i $i >> res.txt
    done
}

Ignoring all but the (multi-line) results of the last query sent to a program

I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command; let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &
cat ${FILE} | while read line; do
    echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to the screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
    output=; trap 'output=1' USR1
    while IFS= read -r line; do
        [[ $output ]] && printf '%s\n' "$line"
    done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
    local count line
    count=0
    while IFS= read -r line; do
        (( ++count ))
        printf '%s: %s\n' "$count" "$line"
    done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it into sed:
var=$(YOUR COMMAND | sed '$!d')
This puts only the last line into the variable.
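Note that this keeps only the single last line, so for this question it works only when the last query's answer is one line. sed '$!d' (delete every line that is not the last) is equivalent to tail -n 1:
var=$(YOUR COMMAND | tail -n 1)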
I suspect that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can simply run it twice:
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all the input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
You should examine your EXEC for something that could be used as a marker. When it is running SQL, you might use something like
(cat ${FILE}; echo 'select "DamonMarker" from dual;'; echo ${CMD}) |
    ${EXEC} | sed '1,/DamonMarker/ d'
and write this in a var with
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;'; echo ${CMD}) |
         ${EXEC} | sed '1,/DamonMarker/ d' )

Matching one line from continuous stream in Linux shell

How can I make the following commands exit immediately after the first line is matched? I understand that SIGPIPE is not sent to cat until the next time it tries to write (tail bug report), but I don't understand how to solve this issue.
cat <( echo -ne "asdf1\nzxcv1\nasdf2\n"; sleep 5; echo -ne "zxcv2\nasdf3\n" ) | grep --line-buffered zxcv | head --lines=1
cat <( echo -ne "asdf1\nzxcv1\nasdf2\n"; sleep 5; echo -ne "zxcv2\nasdf3\n" ) | grep --max-count=1 zxcv
NB: I actually had tail --follow before the pipe sign, but replaced it with cat and sleep to simplify testing. The shell in question is GNU bash 4.4.12(1)-release, and I'm running the MINGW that came with Git-for-Windows 2.12.2.2.
CLARIFICATION: I have a jboss server which is started in a docker container and which outputs a couple thousand lines of text to a log file within three minutes. My goal is to watch this file until a status line is printed, analyze the line's contents and report it to a human or a Jenkins user. Of course, I could grep the whole file and sleep for a second in a loop, but I'd rather avoid that if at all possible. Furthermore, this looping would interfere with my usage of the timeout routine to limit maximum execution time. So, is it possible to listen to a pipe until a certain line appears and stop as soon as that happens?
Related question: Why does bash ignore SIGINT if its currently running child handles it?
Interesting question! I've verified that head dies after printing the first line (removed background job noise):
$ (printf '%s\n' a b a; sleep 5; printf '%s\n' a) | grep --line-buffered a | head --lines=1 & sleep 1; pstree $$
a
bash─┬─bash───sleep
├─grep
└─pstree
At first glance, it appears head doesn't send SIGPIPE, but I get conflicting information from running strace grep:
$ (printf '%s\n' a b a; sleep 10; printf '%s\n' a) | strace grep --line-buffered a | head --lines=1
…
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=21950, si_uid=1000} ---
+++ killed by SIGPIPE +++
… and killing grep:
$ (printf '%s\n' a b a; sleep 10; printf '%s\n' a) | grep --line-buffered a | head --lines=1 & sleep 1; kill -PIPE $(pgrep grep); sleep 5; pstree $$
a
bash─┬─bash───sleep
└─pstree
Killing grep and then sleep fixes the issue:
$ (printf '%s\n' a b a; sleep 10; printf '%s\n' a) | grep --line-buffered a | head --lines=1 & sleep 1; kill -PIPE $(pgrep grep); sleep 1; kill -PIPE $(pgrep sleep); sleep 5; pstree $$
a
bash───pstree
Conclusion: WTF?
I've ended up doing the following, to be able to stop following the log both on a matching line and after a timeout.
#!/bin/sh
TOP_PID=$$
container_id="$1"
LOG_PATH=/opt/jboss/jboss-eap-6.2/standalone/log/server.log

await_startup () {
    status=$(check_status)
    follow_log --timeout $timeout &
    local bgjob_pid; local bgjob_status;
    bgjob_pid=$(jobs -p)
    test -n "$bgjob_pid" || die "Could not start background job to follow log."
    bgjob_status=true
    while [ "$status" = "not started" ] && $bgjob_status; do
        sleep 1s
        status=$(check_status)
        if kill -0 $bgjob_pid 2>/dev/null; then
            bgjob_status=true
        else
            bgjob_status=false
        fi
    done
    kill -KILL $bgjob_pid 2>/dev/null
}

follow_log () {
    # argument parsing skipped...
    docker exec $container_id timeout $timeout tail --follow=name ---disable-inotify --max-unchanged-stats=2 /$LOG_PATH
}

check_status () {
    local line;
    line=$(docker exec $container_id grep --extended-regexp --only-matching 'JBoss EAP .+ started.+in' /$LOG_PATH | tail --lines=1)
    if [ -z "$line" ]; then
        printf "not started"
    elif printf "%s" "$line" | grep --quiet "with errors"; then
        printf "started and unhealthy"
    else
        printf "healthy"
    fi
}

die () {
    test -n "$1" && printf "%s\n" "$1"
    kill -s TERM $TOP_PID
    return 1
} 1>&2

Error Handling on bash script

I have an infinite loop in a bash script that I want to run forever, but (I guess) something goes wrong and the script gets killed. Is there any way, like try-catch, to just continue running forever, unconditionally?
#!/bin/bash
iteration=0
for (( ; ; ))
do
    process_id=`ps -ef | grep java | grep TEST | awk '{print $2}'`
    kill_command='kill -3 '$process_id
    time=`date | awk '{print substr($4,0,5)}'`
    last_write=`ls -l /files/*.txt | awk '{print $8}'`
    if [ "$time" != "$last_write" ]
    then
        $kill_command
        sleep 1
        $kill_command
        sleep 1
        $kill_command
        sleep 1
        /test/show_queue.sh
    fi
    let "iteration+=1"
    if [ "$iteration" == "30" ]
    then
        let "iteration=0"
        $kill_command
        echo '------------' >> memory_status.log
        date >> memory_status.log
        prstat -n 7 1 1 >> memory_status.log
        echo '------------' >> memory_status.log
        /test/show_queue.sh
    fi
    sleep 60
done
A very simple way to do it is to use two scripts: one with the loop, and one which does the killing task:
for (( ; ; ))
do
    DoKillingTask
    rc=$?   # <- you get the return code of the script and decide what to do
done
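To make the idea concrete, here is a minimal sketch of the outer script (do_killing_task.sh is a hypothetical name for a script containing the body of your current loop):
#!/bin/bash
while true; do
    ./do_killing_task.sh
    rc=$?
    if [ "$rc" -ne 0 ]; then
        # the inner script failed or was killed, but the outer loop survives
        echo "$(date): do_killing_task.sh exited with status $rc" >> watchdog.log
    fi
    sleep 60
done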
If it continues to be killed, Mikel (in the comments on your question) is right.
