I have a long-running command which produces output periodically. To demonstrate, let's assume it is:
function my_cmd()
{
    for i in {1..9}; do
        echo -n $i
        for j in {1..$i}
            echo -n " "
        echo $i
        sleep 1
    done
}
The output will be:
1 1
2  2
3   3
4    4
5     5
6      6
7       7
8        8
9         9
I want to display the command output and save it to a file at the same time.
This can be done with my_cmd | tee -a res.txt.
Now I want to display the output to the terminal as-is, but save a transformed version to the file, say with sed "s/ //g".
so the res.txt becomes:
11
22
33
44
55
66
77
88
99
How can I do this transformation on the fly, without waiting for the command to exit and then reading the file back?
Note that in your original code, {1..$i} is an error because brace-expansion sequences can't contain variables; I've replaced it with seq. You're also missing a do and a done for the inner for loop.
At any rate, I would use process substitution.
#!/usr/bin/env bash
function my_cmd {
    for i in {1..9}; do
        printf '%d' "$i"
        for j in $(seq 1 $i); do
            printf ' '
        done
        printf '%d\n' "$j"
        sleep 1
    done
}
my_cmd | tee >(tr -d ' ' >> res.txt)
Process substitution usually causes bash to create an entry in /dev/fd which is fed to the command in question. The contents of the substitution run asynchronously, so it doesn't block the process sending data to it.
Note that the process substitution isn't a REAL file, so the -a option for tee is meaningless. If you really want to append to your output file, >> within the substitution is the way to go.
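If you want to see what bash actually substitutes there, a quick check (on Linux; the descriptor number varies):
echo >(true)    # prints something like /dev/fd/63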
If you don't like process substitution, another option would be to redirect to alternate file descriptors. For example, instead of the last line in the script above, you could use:
exec 5>&1
my_cmd | tee /dev/fd/5 | tr -d ' ' > res.txt
exec 5>&-
This creates a file descriptor, /dev/fd/5, which duplicates your real stdout (the terminal). It then tells tee to write to this, allowing tee's normal stdout to be processed by additional pipe elements before the final redirection to your log file.
The method you choose is up to you. I find process substitution clearer.
You need to modify something in your function, and you can use tee inside the for loop to print and write to the file at the same time. The following script should get the result you want.
#!/bin/bash
filename="a.txt"
[ -f "$filename" ] && rm "$filename"
for i in {1..9}; do
    echo -n "$i" | tee -a "$filename"
    for ((j=1; j<=i; j++)); do
        echo -n " "
    done
    echo "$i" | tee -a "$filename"
    sleep 1
done
Instead of the double loop, I would use printf and its formatting capability (%Ns pads with blank characters to width N).
Moreover, I would print twice (once to stdout and once to your file) rather than using a pipe and starting new processes.
So your function could look like this:
function my_cmd() {
    for i in {1..9}; do
        printf "%s %${i}s\n" $i $i
        printf "%s%s\n" $i $i >> res.txt
    done
}
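To illustrate the width specifier with a one-off example: %3s pads its argument with blanks to width 3, so for i=3 the first printf reproduces the spacing of the original double loop:
printf "%s %3s\n" 3 3    # prints "3   3" (a literal space plus "  3")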
I have a command like below
md5sum test1.txt | cut -f 1 -d " " >> test.txt
I want the output of the above command prefixed with File_CheckSum:
Expected output: File_CheckSum: <checksumvalue>
I tried as follows
echo 'File_Checksum:' >> test.txt | md5sum test.txt | cut -f 1 -d " " >> test.txt
but getting result as
File_Checksum:
adbch345wjlfjsafhals
I want the entire output in 1 line
File_Checksum: adbch345wjlfjsafhals
echo writes a newline after it finishes writing its arguments. Some versions of echo allow a -n option to suppress this, but it's better to use printf instead.
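For instance, where echo -n is unportable, printf behaves consistently:
printf '%s' 'File_Checksum: '    # writes exactly the argument, no trailing newline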
You can use a command group to concatenate the standard output of your two commands:
{ printf 'File_Checksum: '; md5sum test.txt | cut -f 1 -d " "; } >> test.txt
Note that there is a race condition here: you can theoretically write to test.txt before md5sum is done reading from it, causing you to checksum more data than you intended. (Your original command mentions test1.txt and test.txt as separate files, so it's not clear if you are really reading from and writing to the same file.)
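If you really are reading from and writing to the same file, a minimal sketch that sidesteps the race is to finish the hash before opening the file for append:
sum=$(md5sum test.txt | cut -f 1 -d " ")    # hash completes first
printf 'File_Checksum: %s\n' "$sum" >> test.txt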
You can use command grouping to have a list of commands executed as a unit and redirect the output of the group at once:
{ printf 'File_Checksum: '; md5sum test1.txt | cut -f 1 -d " "; } >> test.txt
printf "%s: %s\n" "File_Checksum:" "$(md5sum < test1.txt | cut ...)" > test.txt
Note that if you are trying to compute the hash of test.txt (the same file you are trying to write to), this changes things significantly.
Another option is:
{
    printf "File_Checksum: "
    md5sum ...
} > test.txt
Or:
exec > test.txt
printf "File_Checksum: "
md5sum ...
but be aware that all subsequent commands will also write their output to test.txt. The typical way to restore stdout is:
exec 3>&1
exec > test.txt # Redirect all subsequent commands to `test.txt`
printf "File_Checksum: "
md5sum ...
exec >&3 # Restore original stdout
Operator &&
e.g. mkdir example && cd example
I am looping over a grep result. The result contains 10 lines (every line has different content), so the stuff in the loop gets executed 10 times.
I need the index, 0-9, in each run so I can do actions based on the index.
ABC=(cat test.log | grep "stuff")
counter=0
for x in $ABC
do
echo $x
((counter++))
echo "COUNTER $counter"
done
Currently the counter won't really change.
Output:
51209
120049
148480
1211441
373948
0
0
0
728304
0
COUNTER: 1
If your requirement is only to print a counter (which is all the shown samples do), you could use awk (if you are OK with it). This can be done in a single awk, without creating a variable and then using grep as you are doing currently; awk can perform both the search and the counter printing in a single shot.
awk -v counter=0 '/stuff/{print "counter=" counter++}' Input_file
Replace the stuff string above with the actual string you are looking for, and replace Input_file with your actual file name.
This should print like:
counter=1
counter=2
........and so on
Your shell script contains what should be an obvious syntax error.
ABC=(cat test.log | grep "stuff")
This fails with
-bash: syntax error near unexpected token `|'
There is no need to save the output in a variable if you only want to process one line at a time (and obviously no need for the useless cat).
grep "stuff" test.log | nl
gets you numbered lines, though the index will be 1-based, not zero-based.
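Alternatively, if your nl supports the POSIX -v startnum option (GNU coreutils does), nl itself can start at zero:
grep "stuff" test.log | nl -v 0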
If you absolutely need zero-based, refactoring to Awk should solve it easily:
awk '/stuff/ { print n++, $0 }' test.log
If you want to loop over this and do something more with this information,
awk '/stuff/ { print n++, $0 }' test.log |
while read -r index output; do
    echo index is "$index"
    echo output is "$output"
done
Because the while loop executes in a subshell, the value of index will not be visible outside of the loop. (I guess that's what happened to the counter in your real code as well; I don't think the code you posted will reproduce that part either.)
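If you do need the index after the loop, one common workaround is to feed the loop from a process substitution instead of a pipe, so the loop runs in the current shell; a minimal sketch:
index=
while read -r index output; do
    echo output is "$output"
done < <(awk '/stuff/ { print n++, $0 }' test.log)
echo last index was "$index"    # visible here, since no subshell was involved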
Do not store the result of grep in a scalar variable such as ABC.
If a line of the log file contains whitespace, the variable $x is split on it due to bash's word splitting.
(BTW, the statement ABC=(cat test.log | grep "stuff") causes a syntax error.)
Please try something like:
readarray -t abc < <(grep "stuff" test.log)
for x in "${abc[@]}"
do
    echo "$x"
    echo "COUNTER $((++counter))"
done
or
readarray -t abc < <(grep "stuff" test.log)
for i in "${!abc[@]}"
do
    echo "${abc[i]}"
    echo "COUNTER $((i + 1))"
done
You can use the following increment statement:
counter=$(( $counter + 1));
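Equivalent forms, for reference (the $ before counter inside $(( )) is optional, and the trailing semicolon is unnecessary):
counter=$(( counter + 1 ))
(( counter++ ))    # bash arithmetic command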
I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command, let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &
cat ${FILE} | while read line; do
    echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
    output=; trap 'output=1' USR1
    while IFS= read -r line; do
        [[ $output ]] && printf '%s\n' "$line"
    done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
    local count line
    count=0
    while IFS= read -r line; do
        (( ++count ))
        printf '%s: %s\n' "$count" "$line"
    done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it into a sed:
var=$(YOUR COMMAND | sed '$!d')
This will put only the last line into the variable
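For example (with a stand-in command in place of yours):
var=$(printf '%s\n' first middle last | sed '$!d')
echo "$var"    # -> last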
I suspect that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can use
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
You should examine your EXEC to see what could serve as such a marker. When it is running SQL, you might use something like
(cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d'
and capture this in a variable with
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d' )
I'm capturing stdout (a log) in a file using tail -f file_name, saving a specific string with grep, and using sed to exit the tail:
tail -f log.txt | sed /'INFO'/q | grep 'INFO' > info_file.txt
This works fine, but I want to terminate the command in case it does not find the pattern (INFO) in the log file after some time.
I want something like this (which does not work) to exit after a timeout (60 sec):
tail -f log.txt | sed /'INFO'/q | grep 'INFO' | read -t 60
Any suggestions?
This seems to work for me...
read -t 60 < <(tail -f log.txt | sed /'INFO'/q | grep 'INFO')
Since you only want to capture one line:
#!/bin/bash
IFS= read -r -t 60 line < <(tail -f log.txt | awk '/INFO/ { print; exit; }')
printf '%s\n' "$line" >info_file.txt
For a more general case, where you want to capture more than one line, the following uses no external commands other than tail:
#!/usr/bin/env bash
end_time=$(( SECONDS + 60 ))
while (( SECONDS < end_time )); do
    IFS= read -t 1 -r line && [[ $line = *INFO* ]] && printf '%s\n' "$line"
done < <(tail -f log.txt)
A few notes:
SECONDS is a built-in variable in bash which, when read, retrieves the time in seconds since the shell was started. (It loses this behavior after being the target of any assignment; avoiding such mishaps is part of why the POSIX convention of reserving lowercase names for application variables is valuable.) A short demonstration follows these notes.
(( )) creates an arithmetic context; all content within is treated as integer math.
<( ) is a process substitution; it evaluates to the name of a file-like object (named pipe, /dev/fd reference, or similar) which, when read from, will contain output from the command contained therein. See BashFAQ #24 for a discussion of why this is more suitable than piping to read.
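As promised above, a minimal demonstration of SECONDS in an arithmetic context:
start=$SECONDS
sleep 2
(( SECONDS - start >= 2 )) && echo "at least 2 seconds elapsed"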
The timeout command (part of GNU coreutils, packaged on Debian/Ubuntu) seems suitable:
timeout 1m tail -f log.txt | grep 'INFO'
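Note that the timeout above applies only to tail; the rest of the pipeline simply ends when tail dies. If you want the timeout to bound the whole pipeline, one option (a sketch reusing your original filtering) is to wrap it in a shell:
timeout 60 sh -c "tail -f log.txt | sed /INFO/q | grep INFO" > info_file.txt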
Getting into bash, I love it, but it seems there are lots of subtleties that end up making a big difference in functionality. Anyway, here is my question:
I know this works:
total=0
for i in $(grep number some.txt | cut -d " " -f 1); do
    (( total+=i ))
done
But why doesn't this?:
grep number some.txt | cut -d " " -f 1 | while read i; do (( total+=i )); done
some.txt:
1 number
2 number
50 number
Both the for and the while loop receive 1, 2, and 50 separately, but the for loop ends with the total variable at 53, while in the while loop version it just stays at zero. I know there's some fundamental knowledge I'm lacking here; please help me.
I also don't get the differences in piping. For example, if I run
grep number some.txt | cut -d " " -f 1 | while read i; do echo "-> $i"; done
I get the expected output
-> 1
-> 2
-> 50
But if run like so
while read i; do echo "-> $i"; done <<< $(grep number some.txt | cut -d " " -f 1)
then the output changes to
-> 1 2 50
This seems weird to me since grep outputs the result on separate lines. As if this weren't confusing enough, if I had a file with only the numbers 1 2 3 on separate lines and I ran
while read i; do echo "-> $i"; done < someother.txt
then the output would be printed by the echo on different lines, as in the previous example. I know < is for files and <<< is for strings/command output, but why does that difference exist?
Anyways, I was hoping someone could shed some light on the matter, thank you for your time!
grep number some.txt | cut -d " " -f 1 | while read i; do (( total+=i )); done
Each command in a pipeline is run in a subshell. That means when you put the while read loop in a pipeline any variable assignments are lost.
See: BashFAQ 024 - "I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?"
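If you are on bash 4.2 or newer and running a script (where job control is off), one workaround worth knowing is the lastpipe option, which runs the last element of a pipeline in the current shell; a minimal sketch:
#!/usr/bin/env bash
shopt -s lastpipe    # only takes effect when job control is disabled
total=0
grep number some.txt | cut -d " " -f 1 | while read -r i; do (( total+=i )); done
echo "$total"    # prints 53: the loop ran in the current shell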
while read i; do echo "-> $i"; done <<< "$(grep number some.txt | cut -d " " -f 1)"
To preserve grep's newlines, add double quotes. Otherwise the result of $(...) is subject to word splitting which collapses all the whitespace into single spaces.
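A quick way to see what unquoted word splitting does to a multi-line result (using echo for illustration):
lines=$(printf '%s\n' 1 2 50)
echo $lines     # unquoted: whitespace collapses -> 1 2 50
echo "$lines"   # quoted: the newlines are preserved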