How to send a command's output to stdout and pipe it to another command at the same time? - bash

I'm working on a server and to show detailed GPU information I use these commands:
nvidia-smi
ps -up `nvidia-smi |tail -n +16 | head -n -1 | sed 's/\s\s*/ /g' | cut -d' ' -f3`
However, as you can see, nvidia-smi is called twice. How can I make the output of nvidia-smi go to stdout and get piped to another command at the same time?

Use tee:
ps -up `nvidia-smi |tee /dev/stderr |tail -n +16 | head -n -1 | sed 's/\s\s*/ /g' | cut -d' ' -f3`
Since stdout is consumed by the pipe, tee can't write a visible copy there, so I picked stderr to display the output.
If /dev/stderr is not available, use /proc/self/fd/2 instead.
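If writing to stderr isn't desirable, bash process substitution gives tee a second destination that is itself a command. A minimal sketch, assuming bash (not plain sh); `tr a-z A-Z` and `upper.txt` are just placeholders for any second consumer:

```shell
# tee duplicates the stream: one copy goes down the pipe to cat,
# the other is fed to the command inside >(...).
printf 'gpu data\n' | tee >(tr a-z A-Z > upper.txt) | cat
sleep 1            # the >() process runs asynchronously; give it a moment
cat upper.txt      # the copy processed by the second command
```

The same pattern would let the original one-liner show nvidia-smi's output while the pipeline consumes it, without touching stderr.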

How to assign a piped output as a variable while pipe is continuous

I want to send the download status of a downloading file to my Telegram bot every 5 seconds. I'm using bash here as well.
aria2c $url --summary-interval=5 2>&1 | tee output.log | grep -oP "(\d+(\.\d+)?(?=%))"
This gives me the download percentage every 5 seconds. I want to use this percentage so my bot can update the message regularly. I tried these:
aria2c $url --summary-interval=5 2>&1 | tee output.log | grep -oP "(\d+(\.\d+)?(?=%))" | { read text; curl -s "https://api.legram.org/bot${tg_token}/editMessageText" --data "message_id=${msg_id}&text=DOWNLOADED-${text}&chat_id=${ch_id}&parse_mode=HTML&disable_web_page_preview=True"; }
Try 2
aria2c $url --summary-interval=5 2>&1 | tee output.log | text=$(grep -oP "(\d+(\.\d+)?(?=%))") | curl -s "https://api.legram.org/bot${tg_token}/editMessageText" --data "message_id=${msg_id}&text=DOWNLOADED-${text}%&chat_id=${ch_id}&parse_mode=HTML&disable_web_page_preview=True"; }
But neither works. Then, for testing, I tried this:
aria2c $url --summary-interval=5 2>&1 | tee output.log | grep -oP "(\d+(\.\d+)?(?=%))" | { read text; echo "$text"; }
I just got one output at the end (which might be the first download percentage), rather than the repeated updates it should produce. Can anyone get me working code?
The problem is that you only run read (and then update the status) once, so it reads a single line (and updates the status once). You need a loop so that it repeats the read-and-update process over and over. You can use a while loop to do this. If it should exit when there's no more input to process, make read the while condition:
aria2c $url --summary-interval=5 2>&1 |
tee output.log |
grep -oP "(\d+(\.\d+)?(?=%))" |
while read text; do
    curl -s "https://api.legram.org/bot${tg_token}/editMessageText" --data "message_id=${msg_id}&text=DOWNLOADED-${text}&chat_id=${ch_id}&parse_mode=HTML&disable_web_page_preview=True"
done
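The loop pattern can be seen in isolation. A minimal sketch, with printf standing in for aria2c's progress stream and echo standing in for the curl call:

```shell
# Each line arriving on the pipe triggers one iteration of the loop,
# so every progress value is processed, not just the first.
printf '10\n55\n100\n' | while read -r text; do
    echo "DOWNLOADED-${text}"
done
# → DOWNLOADED-10
# → DOWNLOADED-55
# → DOWNLOADED-100
```

A single `read` outside a loop, by contrast, would print only `DOWNLOADED-10` and stop.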

How to take a single line of input from a long running command, then kill it?

Is there a way to take one line of input from a stream, pass it on as an argument and kill the stream?
In pseudo-bash code:
tail -f stream | filter | take-one-and-kill-tail | xargs use-value
Edit: actual script so far is:
i3-msg -t subscribe -m '["window"]'| stdbuf -o0 -e0 jq -r 'select(.change == "new") | "\(.container.window)\n"' | head -0
and it has the following (undesirable) behaviour:
$ i3-msg -t subscribe -m '["window"]'| stdbuf -oL -eL jq -r 'select(.change == "new") | "\(.container.window)\n\n"' | head -1
# first event happens, window id is printed
79691787
# second event happens, head -1 quits
$
You could run the command in a subshell and kill that shell.
In this example I'm killing the stream after the first "info" message:
#!/bin/bash
( sudo stdbuf -oL tail -f /var/log/syslog | stdbuf -oL grep -m1 info ; kill $$ )
Note: ( pipeline ) will run the pipeline in a subshell. $$ contains the pid of the current shell. (Which is the subshell in the above example)
In the above example, grep -m1 ensures that only one line of output is read/written before the pipe is killed.
If your filter program does not support an option like -m1, you can pipe to awk and exit awk after the first line of input. The rest of the approach stays the same:
( sudo stdbuf -oL tail -f /var/log/syslog \
| stdbuf -oL grep info \
| awk '{print;exit}' ; kill $$)

journalctl --after-cursor is not working well with shell script

I am trying to get logs from journalctl after a specified point in time using the cursor option.
Below is the code in my script file:
value=$( journalctl -n 0 --show-cursor | tail -1 | cut -f2- -d: | sed 's/ /"/;s/$/"/')
echo "$value"
sleep 20
echo "journalctl --after-cursor=$value"
journalctl --after-cursor=$value
The output of this script is:
"s=3057f92d5b3e4afdb5eb91c22e880074;i=1f646;b=0bc7e6e9f16f4847b2d50e0a0dd31023;m=a10d4c4d1;
t=5bba8ac2477ae;x=1cc1645fed6ffc79"
journalctl --after-cursor="s=3057f92d5b3e4afdb5eb91c22e880074;i=1f646;
b=0bc7e6e9f16f4847b2d50e0a0dd31023;m=a10d4c4d1;t=5bba8ac2477ae;x=1cc1645fed6ffc79"
Failed to seek to cursor: Invalid argument
As we can see above, journalctl --after-cursor results in a "Failed to seek to cursor" error.
However, if the same command is executed in an interactive terminal, --after-cursor produces output.
Is there something that needs to be done before calling journalctl with the --after-cursor option in a shell script?
Bash is very finicky about quoting. You need to strip the double-quotes embedded in $value and then quote the expansion when you use it:
value=$( journalctl -n 0 --show-cursor | tail -1 | cut -f2- -d: | sed 's/ /"/;s/$/"/' | tr -d '"' )
echo "$value"
sleep 20
echo "journalctl --after-cursor=$value"
journalctl --after-cursor="$value"
I've confirmed that this works by running the above and I've also used this same method on another script which uses cursors.
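The quoting issue can be reproduced without journalctl. A sketch with a made-up cursor string: the quotes captured into the variable are literal characters, not shell quoting, so they reach journalctl as part of the cursor and make it invalid.

```shell
# A cursor value with the double-quotes still embedded in it:
raw='"s=abc123;i=1f646;x=deadbeef"'

# tr -d '"' deletes the literal quote characters from the value...
cursor=$(printf '%s' "$raw" | tr -d '"')

# ...and quoting "$cursor" at the call site stops the shell from
# splitting the value or treating the semicolons as command separators.
echo "journalctl --after-cursor=$cursor"
# → journalctl --after-cursor=s=abc123;i=1f646;x=deadbeef
```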

grep whole file and tail it without using "line-buffered" option in grep

I have BusyBox v1.21.0 installed, and it has only a very basic grep: there is no --line-buffered option.
grep sync_complete /var/log/messages
tail -f /var/log/messages | grep sync_complete
Is it possible to combine the above commands into a single command? Thanks!
Use -n +1 to make tail read the file from the start:
tail -n +1 -f /var/log/messages | grep sync_complete
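With -n +1, tail starts output at line 1 instead of the default last 10 lines, so combined with -f it prints the whole existing file and then keeps following it. The effect on existing content can be checked in a sketch without -f (so it terminates); the file name is made up:

```shell
# Build a sample log with a match on the second line.
printf 'start\nsync_complete 1\nend\n' > sample.log

# tail -n +1 emits the file from its very first line, so grep
# sees all existing lines, not just the last 10.
tail -n +1 sample.log | grep sync_complete
# → sync_complete 1
```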

How to use tail in combination with sed .

I want to play a beep sound whenever an Exception occurs in the log files.
I am using a bash script.
But unfortunately, when tail is used in combination with sed, it doesn't work.
I have tried the commands below:
tail -f mylogs.log | grep "Exception" | sed -e $'s/Exception/Exception\a/'
tail -f mylogs.log | sed -e $'s/Exception/Exception\a/'
The problem is that grep sees that it's not writing to the terminal, so it buffers its output, eventually writing big chunks that sed can process all at once. To tell it to print out lines as soon as they're available, use the --line-buffered option:
tail -f mylogs.log \
| grep --line-buffered Exception \
| sed -u -e $'s/Exception/Exception\a/'
(Note that I've also added the -u flag to sed, which is similar to grep's --line-buffered option. In my testing it didn't seem to make a difference for this command, but I figure it's better to include it just in case.)
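The $'…' quoting that embeds the bell character can be checked without a live tail -f. A sketch, assuming bash (ANSI-C quoting) and using printf as the log source; cat -v is only there to make the invisible BEL visible:

```shell
# $'\a' inside the sed replacement is the ASCII BEL character (0x07);
# appending it after "Exception" makes the terminal beep when the
# line is printed. cat -v renders BEL as ^G so we can see it.
printf 'oops Exception here\n' \
    | sed -e $'s/Exception/Exception\a/' \
    | cat -v
# → oops Exception^G here
```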
