bash - print a line every X seconds (like sed every X lines)

I know that with sed you can pipe the output of a command so that it prints every Xth line.
make all | sed -n '2~5p'
Is there an equivalent command to print a line every X seconds?
make all | print_line_every_sec '5'

Within a 5-second timeout, read one line and discard anything else:
while
    # timeout after 5 seconds
    ! timeout 5 sh -c '
        # read one line
        if IFS= read -r line; then
            # output the line
            printf "%s\n" "$line"
            # discard the input for the rest of the 5 seconds
            cat >/dev/null
        fi
        # we get here only if there is nothing left to read
    '
    # `timeout` returns 124 whenever stdin is still open after 5 seconds,
    # and exits with 0 only when there is nothing left to read,
    # so we loop on a nonzero exit status of timeout.
do :; done
And as a one-liner (with a 0.5-second window here):
while ! timeout 0.5 sh -c 'IFS= read -r line && printf "%s\n" "$line" && cat >/dev/null'; do :; done
But maybe something simpler will do: print one line, then just discard 5 seconds of data:
while IFS= read -r line; do
    printf "%s\n" "$line"
    timeout 5 cat >/dev/null
done
or
while IFS= read -r line &&
      printf "%s\n" "$line" &&
      ! timeout 5 cat >/dev/null
do :; done

If you want the most recent message every 5 seconds, here is an attempt:
make all | {
    display() {
        if (( SECONDS >= 5 )); then
            if test -n "${last_line+x}"; then
                # print only if there was a message in the last 5 seconds
                echo "$last_line"; unset last_line
            fi
            SECONDS=0
        fi
    }
    SECONDS=0
    while true; do
        while IFS= read -r -t 0.001 line; do
            last_line=$line
            display
        done
        display
    done
}

Even if the proposed solutions are interesting and beautiful, the most elegant solution IMHO is an awk solution. If you want to issue
make all | print_line_every_sec 5
then you have to create the script print_line_every_sec as follows, including a test to avoid an infinite loop:
#!/bin/bash
if [ "$1" -le 0 ]; then echo "$(basename "$0"): invalid argument '$1'"; exit 1; fi
# note: systime() requires GNU awk (gawk)
awk -v delay="$1" 'BEGIN { t = systime() }
    { if (systime() >= t) { print $0; t += delay } }'

This might work for you (GNU sed):
sed 'e sleep 1' file
This prints a line every n seconds (1 in the example above).
To print 5 lines every 2 seconds, use:
sed '1~5e sleep 2' file

You can do it with the watch command.
If you only need to print your output every X seconds, you could use something like this:
watch -n X "Your CMD"
If you want changes in the output highlighted, the -d switch is useful:
watch -n X -d "Your CMD"


Unexpected behavior when processing input via stdin but file input works fine

I have a program which transposes a matrix. It works properly when passed a file as a parameter, but it gives strange output when given input via stdin.
This works:
$ cat m1
1 2 3 4
5 6 7 8
$ ./matrix transpose m1
1 5
2 6
3 7
4 8
This doesn't:
$ cat m1 | ./matrix transpose
5
[newline]
[newline]
[newline]
This is the code I'm using to transpose the matrix:
function transpose {
    # Set file to be argument 1 or stdin
    FILE="${1:-/dev/stdin}"
    if [[ $# -gt 1 ]]; then
        print_stderr "Too many arguments. Exiting."
        exit 1
    elif ! [[ -r $FILE ]]; then
        print_stderr "File not found. Exiting."
        exit 1
    else
        col=1
        read -r line < $FILE
        for num in $line; do
            cut -f$col $FILE | tr '\n' '\t'
            ((col++))
            echo
        done
        exit 0
    fi
}
And this code handles the argument passing:
# Main
COMMAND=$1
if func_exists $COMMAND; then
    $COMMAND "${@:2}"
else
    print_stderr "Command \"$COMMAND\" not found. Exiting."
    exit 1
fi
I'm aware of this answer but I can't figure out where I've gone wrong. Any ideas?
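Aside: `func_exists` and `print_stderr` are not shown in the question. A minimal hypothetical implementation of `func_exists` (an assumption, not the asker's actual helper) could look like this:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the question's undefined helper:
# succeeds only if $1 names a declared shell function.
func_exists() {
    declare -F "$1" >/dev/null 2>&1
}
```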
for num in $line; do
    cut -f$col $FILE | tr '\n' '\t'
    ((col++))
    echo
done
This loop reads $FILE over and over, once for each column. That works fine for a file but isn't suitable for stdin, which is a stream of data that can only be read once.
A quick fix would be to read the file into memory and use <<< to pass it to read and cut.
matrix=$(< "$FILE")
read -r line <<< "$matrix"
for num in $line; do
    cut -f$col <<< "$matrix" | tr '\n' '\t'
    ((col++))
    echo
done
See An efficient way to transpose a file in Bash for a variety of more efficient one-pass solutions.
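As a sketch of one of those one-pass approaches (this awk version is an illustration, not code from the linked question): buffer the whole matrix in awk arrays, so it behaves the same whether the input is a file or stdin.

```shell
#!/usr/bin/env bash
# One-pass transpose: store each cell keyed by (row, column),
# then print column by column. Reads its input exactly once.
transpose_awk() {
    awk '
    {
        for (col = 1; col <= NF; col++)
            cell[NR, col] = $col
        if (NF > maxcols) maxcols = NF
    }
    END {
        for (col = 1; col <= maxcols; col++) {
            row = cell[1, col]
            for (r = 2; r <= NR; r++)
                row = row "\t" cell[r, col]
            print row
        }
    }'
}

printf '1 2 3 4\n5 6 7 8\n' | transpose_awk
```

Because the input is consumed only once, `cat m1 | transpose_awk` gives the same result as `transpose_awk < m1`.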

Ignoring all but the (multi-line) results of the last query sent to a program

I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command, let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &
cat ${FILE} | while read line; do
    echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash

# route FD 4 to your output routine
exec 4> >(
    output=; trap 'output=1' USR1
    while IFS= read -r line; do
        [[ $output ]] && printf '%s\n' "$line"
    done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a
# very new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }

# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-

# cat your file to FD 3...
cat "$file" >&3

# UGLY HACK: wait to let your program finish flushing output from those commands
sleep 0.1

# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"

# UGLY HACK: wait to let the subprocess actually receive the signal and set output=1
sleep 0.1

# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
    local count line
    count=0
    while IFS= read -r line; do
        (( ++count ))
        printf '%s: %s\n' "$count" "$line"
    done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it into sed:
var=$(YOUR COMMAND | sed '$!d')
This will put only the last line into the variable.
I suspect that your program EXEC does something special (opens a connection or keeps state). When that is not the case, you can simply run it twice:
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
Examine your EXEC to see what could serve as a marker. When it runs SQL, you might use something like
(cat ${FILE}; echo 'select "DamonMarker" from dual;'; echo ${CMD}) |
    ${EXEC} | sed '1,/DamonMarker/ d'
and capture it in a variable with
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;'; echo ${CMD}) |
         ${EXEC} | sed '1,/DamonMarker/ d' )
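The marker idea can be demonstrated self-containedly, with `cat` standing in for EXEC:

```shell
#!/usr/bin/env bash
# Everything up to and including the MARKER line is deleted by sed;
# only output produced after the marker survives.
# `cat` stands in for the real EXEC here.
out=$( { printf '==%s==\n' 1 2 3; echo 'MARKER'; printf 'oo%soo\n' 4 5 6; } \
       | cat | sed '1,/^MARKER$/d' )
printf '%s\n' "$out"
```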

How to browse a line from a file?

I have a file that contains 10 lines with this sort of content:
aaaa,bbb,132,a.g.n.
I want to walk through every line, char by char, and write the data that appears before each "," to an output file.
if [ $# -eq 2 ] && [ -f $1 ]
then
    echo "Read nr of fields to be saved or nr of commas."
    read n
    nrLines=$(wc -l < $1)
    while $nrLines!="1" read -r line || [[ -n "$line" ]]; do
    do
        for (( i=1; i<=$n; ++i ))
        do
            while [ read -r -n1 temp ]
            do
                if [ temp != "," ]
                then
                    echo $temp > $(result$i)
                else
                fi
            done
            paste -d"\n" $2 $(result$i)
        done
        nrLines=$($nrLines-1)
    done
else
    echo "File not found!"
fi
}
Parameter $2 holds an empty file in which I will store the data from file $1, after extracting it without the "," and adding a couple of labels.
Example:
My input_file contains:
a.b.c.d,aabb,comp,dddd
My output_file is empty.
I call my script: ./script.sh input_file output_file
After execution the output_file contains:
First line info: a.b.c.d
Second line info: aabb
Third line info: comp
(yes, without the 4th line info)
You can do what you want very simply with parameter-expansion and substring-removal using bash alone. For example, take an example file:
$ cat dat/10lines.txt
aaaa,bbb,132,a.g.n.
aaaa,bbb,133,a.g.n.
aaaa,bbb,134,a.g.n.
aaaa,bbb,135,a.g.n.
aaaa,bbb,136,a.g.n.
aaaa,bbb,137,a.g.n.
aaaa,bbb,138,a.g.n.
aaaa,bbb,139,a.g.n.
aaaa,bbb,140,a.g.n.
aaaa,bbb,141,a.g.n.
A simple one-liner using native bash string handling would be the following, giving these results:
$ while read -r line; do echo ${line%,*}; done <dat/10lines.txt
aaaa,bbb,132
aaaa,bbb,133
aaaa,bbb,134
aaaa,bbb,135
aaaa,bbb,136
aaaa,bbb,137
aaaa,bbb,138
aaaa,bbb,139
aaaa,bbb,140
aaaa,bbb,141
Parameter expansion with substring removal works as follows:
var=aaaa,bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the first ',' is:
${var#*,} # bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the last ',' is:
${var##*,} # a.g.n.
Beginning at the right and removing up to, and including, the first ',' is:
${var%,*} # aaaa,bbb,132
Beginning at the right and removing up to, and including, the last ',' is:
${var%%,*} # aaaa
Note: the text to remove above is represented with a wildcard '*', but wildcard use is not required. It can be any allowable text. For example, to remove the trailing ,a.g.n. only when the preceding number is 136, you can do the following:
${var%,136*},136 # aaaa,bbb,136 (when var is aaaa,bbb,136,a.g.n.; if the pattern does not match, this form would simply append ,136)
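The four expansions can be verified directly in a shell:

```shell
#!/usr/bin/env bash
var=aaaa,bbb,132,a.g.n.
echo "${var#*,}"    # shortest match removed from the left  -> bbb,132,a.g.n.
echo "${var##*,}"   # longest match removed from the left   -> a.g.n.
echo "${var%,*}"    # shortest match removed from the right -> aaaa,bbb,132
echo "${var%%,*}"   # longest match removed from the right  -> aaaa
```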
To print the 2016th line from a file named file.txt you have to run a command like this:
sed -n '2016p' < file.txt
More examples:
sed -n '2p' < file.txt
will print the 2nd line,
sed -n '2011p' < file.txt
the 2011th line,
sed -n '10,33p' < file.txt
lines 10 up to 33,
sed -n '1p;3p' < file.txt
the 1st and 3rd lines,
and so on...
For more detail, please have a look at this tutorial and this answer.
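The same line selections can also be written with awk, which some find more readable; shown here on seq output rather than a real file:

```shell
#!/usr/bin/env bash
# awk equivalents of the sed address forms above
seq 1 40 | awk 'NR==2'               # 2nd line
seq 1 40 | awk 'NR>=10 && NR<=33'    # lines 10 through 33
seq 1 40 | awk 'NR==1 || NR==3'      # 1st and 3rd lines
```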
In native bash the following should do what you want, assuming you replace the contents of your script.sh with the below:
#!/bin/bash
IN_FILE=${1}
OUT_FILE=${2}
IFS=\,
while read line; do
    set -- ${line}
    for ((i=1; i<=${#}; i++)); do
        ((${i}==4)) && continue
        ((n+=1))
        printf '%s\n' "Line ${n} info: ${!i}"
    done
done < ${IN_FILE} > ${OUT_FILE}
This prints every field of each line except the 4th, each on a new line in the output file (I assume this is your requirement, as per your comment?).
$ awk -F"," 'BEGIN{OFS="\n"}{for(i=1; i<=NF-1; i++){print "line Info: "$i}}' data.txt
line Info: a.b.c.d
line Info: aabb
line Info: comp
This little snippet can ignore the last field.
Updated:
#!/usr/bin/env bash
if [ ! -f "$1" -o $# -ne 2 ]; then
    echo "Usage: $(basename $0) input_file out_file"
    exit 127
fi

input_file=$1
output_file=$2

: > $output_file

if [ "$(wc -l < $1)" -ne 0 ]; then
    while true
    do
        read -r -n1 char
        if [ "$char" == "" ]; then
            break
        elif [ $char != "," ]; then
            temp=$temp$char
        else
            echo "line info: $temp" >> $output_file
            temp=""
        fi
    done < $input_file
else
    echo "file $1 is empty"
fi
Maybe this is what you want
Did you try
sed "s|,|\n|g" $1 | head -n -1 > $2
I assume that only the last word would not have a comma on its right.
Try this (tested with your sample line):
#!/bin/bash
# script.sh
echo "Number of fields to save ?"
read nf
while IFS=$',' read -r -a arr; do
    newarr=( "${arr[@]:0:nf}" )
done < "$1"
for i in "${newarr[@]}"; do
    printf "%s\n" "$i"
done > "$2"
Execute script with :
$ ./script.sh inputfile outputfile
Number of fields to save ?
3
$ cat outputfile
a.b.c.d
aabb
comp
All comma-separated words are stored in the array arr.
A temporary array newarr keeps only the first $nf elements ($nf comes from the read command).
The loop over the new array prints the result into $2, the output file.

How to display progress of another command

I need help with a bash script I run like this:
do_something > file.txt (I'm using the third line of this file.txt in another echo output)
Now I need to get the number of characters on the second line of file.txt.
(There are only dots - ".")
I can get the number of characters with this command:
progress=$(awk 'NR==2' file.txt | grep -o '\.' | wc -w)
But the problem is that the second line of file.txt is a "progress bar", so its character count changes over time from 0 to XY (e.g. 100) characters.
I want to use it to show progress as a percentage: echo -ne "$progress % \\r"
How could I do that in a loop? do_something > file.txt must start just once; over the next ~5-20 seconds it prints dots on the second line, and I need that count, updated every second, for my echo "XY %" output.
How can I read from file.txt every second and find the new/updated character count?
edit:
It is a real-time process: do_something > file.txt keeps printing dots to the file, and I want to report $progress in real time by counting the dots every second and printing how many percent (0-100 %) is done.
What you want to do is run do_something > file.txt in the background and then monitor it. You can use kill with signal 0 (which sends no signal, but checks that the process still exists) to do this.
do_something > file.txt &
PID=$!
while kill -0 $PID 2> /dev/null
do
    [calculate percent complete]
    [display percent complete]
    sleep 5
done
First, you should run your command in the background:
do_something > file.txt &
Then you can watch the changes in the output file. This will infinitely print the second line of file.txt every second.
while true; do sed -n '2p' < file.txt; sleep 1; done
If you want to print only how many characters are on the second line, you can do this:
while true; do sed -n '2p' < file.txt | wc -m; sleep 1; done
If you want to stop when there is 100 characters on the second line, you can do this:
MAX="100"
CUR="0"
while [ $CUR -lt $MAX ]; do CUR=$(sed -n '2p' < file.txt | wc -m); echo $CUR; sleep 1; done
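Putting the pieces together, here is a hedged sketch of the percentage calculation, assuming (as in the question) that 100 dots on the second line means done; `do_something` remains a stand-in name for the real command:

```shell
#!/usr/bin/env bash
# Sketch: report progress as a percentage by counting dots
# on the second line of the monitored file.
MAX=100   # dots expected at completion (an assumption)

percent_done() {   # usage: percent_done <file>
    dots=$(sed -n '2p' "$1" | tr -cd '.' | wc -c)
    echo $(( dots * 100 / MAX ))
}

# Monitoring loop (sketch only; `do_something` is hypothetical):
#   do_something > file.txt &
#   pid=$!
#   while kill -0 "$pid" 2>/dev/null; do
#       printf '%s %%\r' "$(percent_done file.txt)"
#       sleep 1
#   done
#   echo '100 %'
```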

How do I use Head and Tail to print specific lines of a file

I want to output, say, lines 5 - 10 of a file, passed in as arguments.
How could I use head and tail to do this,
where firstline = $2, lastline = $3 and filename = $1?
Running it should look like this:
./lines.sh filename firstline lastline
head -n XX # <-- print first XX lines
tail -n YY # <-- print last YY lines
If you want lines from 20 to 30 that means you want 11 lines starting from 20 and finishing at 30:
head -n 30 file | tail -n 11
# head -n 30: first 30 lines of the file
# tail -n 11: last 11 of those 30 lines
That is, you first take the first 30 lines and then select the last 11 of them (that is, 30 - 20 + 1).
So in your code it would be:
head -n "$3" "$1" | tail -n $(( $3 - $2 + 1 ))
Based on firstline = $2, lastline = $3, filename = $1
head -n "$lastline" "$filename" | tail -n $(( lastline - firstline + 1 ))
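A quick sanity check of the arithmetic, using seq output in place of a file:

```shell
#!/usr/bin/env bash
firstline=5
lastline=10
# lines 5..10 of the numbers 1..20; the count is lastline - firstline + 1 = 6
seq 1 20 | head -n "$lastline" | tail -n $(( lastline - firstline + 1 ))
```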
Aside from the answers given by fedorqui and Kent, you can also use a single sed command:
#!/bin/sh
filename=$1
firstline=$2
lastline=$3
# Basics of sed:
# 1. sed commands have a matching part and a command part.
# 2. The matching part matches lines, generally by number or regular expression.
# 3. The command part executes a command on that line, possibly changing its text.
#
# By default, sed will print everything in its buffer to standard output.
# The -n option turns this off, so it only prints what you tell it to.
#
# The -e option gives sed a command or set of commands (separated by semicolons).
# Below, we use two commands:
#
# ${firstline},${lastline}p
# This matches lines firstline to lastline, inclusive
# The command 'p' tells sed to print the line to standard output
#
# ${lastline}q
# This matches line ${lastline}. It tells sed to quit. This command
# is run after the print command, so sed quits after printing the last line.
#
sed -ne "${firstline},${lastline}p;${lastline}q" < "${filename}"
Or, to avoid any external utilities, if you're using a recent version of bash (or zsh):
#!/bin/sh
filename=$1
firstline=$2
lastline=$3
i=0
exec < "${filename}"   # redirect file into our stdin
while read ; do        # read each line into REPLY variable
    i=$(( $i + 1 ))    # maintain line count
    if [ "$i" -ge "${firstline}" ] ; then
        if [ "$i" -gt "${lastline}" ] ; then
            break
        else
            echo "${REPLY}"
        fi
    fi
done
try this one-liner:
awk -vs="$begin" -ve="$end" 'NR>=s&&NR<=e' "$f"
in above line:
$begin is your $2
$end is your $3
$f is your $1
Save this as "script.sh":
#!/bin/sh
filename="$1"
firstline=$2
lastline=$3
linestoprint=$(($lastline-$firstline+1))
tail -n +$firstline "$filename" | head -n $linestoprint
There is NO ERROR HANDLING (for simplicity), so you have to call your script as follows:
./script.sh yourfile.txt firstline lastline
$ ./script.sh yourfile.txt 5 10
If you need only line "10" from yourfile.txt:
$ ./script.sh yourfile.txt 10 10
Please make sure that:
(firstline > 0) AND (lastline > 0) AND (firstline <= lastline)
