I'm sure this is super easy. I am trying to read a string from stdout. I submit a job to another machine, and then I want to be able to run "bjobs", which checks whether the job has finished. I want to read stdout, detect when the job has finished, and then move on.
This is what I have. It isn't working, but I feel super close!
# Waiting for stdout to read "No unfinished job found"
bjobs
IFS= read -r line
echo "$line"
while "$line" != "No unfinished job found"
do
    echo "$line"
    sleep 30s
    bjobs
    IFS= read -r line
done
Any help would be appreciated! This is one of my first shell scripts.
The thing that you are missing is that read reads from its stdin, not its stdout. So you have to arrange for its stdin to correspond to the stdout of the command you want it to read. The straightforward way to do that is with a pipe (|).
For example:
$ bjobs | ( IFS= read -r line; echo "$line"; while [ "$line" != "No unfinished job found" ]; do echo "$line"; sleep 30s; IFS= read -r line; done )
The ( ... ) is creating a subshell ...
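As a rough sketch of an alternative that avoids the pipe entirely: re-run the status command on each iteration and capture its output with command substitution. Here `check_jobs` is a hypothetical stand-in for `bjobs` (it pretends the job finishes on the third poll), and the 1-second sleep stands in for the original 30s interval.

```shell
#!/bin/bash
# Hypothetical stand-in for `bjobs`: reports the job as running until
# the third poll, then reports it finished.
check_jobs() {
    if [ "$1" -ge 3 ]; then
        echo "No unfinished job found"
    else
        echo "JOBID 42 RUN"
    fi
}

n=0
while :; do
    n=$((n + 1))
    line=$(check_jobs "$n")   # capture the command's stdout directly
    [ "$line" = "No unfinished job found" ] && break
    echo "$line"
    sleep 1                   # poll interval; the original used 30s
done
echo "job finished after $n polls"
```

Because the status command is re-run each time through the loop, every poll sees fresh output, which is what the original script was trying to achieve with repeated `bjobs` calls.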
Related
I'd like to know when an application hasn't printed a line to stdout for N seconds.
Here is a reproducible example:
#!/bin/bash
dmesg -w | {
    while IFS= read -t 3 -r line
    do
        echo "$line"
    done
    echo "NO NEW LINE"
}
echo "END"
I can see NO NEW LINE, but the pipe doesn't close and bash doesn't continue: END is never displayed.
How to exit from the braces' code?
Source: https://unix.stackexchange.com/questions/117501/in-bash-script-how-to-capture-stdout-line-by-line
Not all commands exit when they can't write to their output or when they receive SIGPIPE, and even those that do won't exit until they actually try to write and notice the failure. Instead, run the command in the background. If the intention is not to wait on the process, in bash you can just use process substitution:
{
    while IFS= read -t 3 -r line; do
        printf "%s\n" "$line"
    done
    echo "end"
} < <(dmesg -w)
You could also use a coprocess. Or run the command in the background with a pipe and kill it when you are done with it.
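A minimal sketch of the background-plus-kill variant, using a FIFO and a hypothetical `producer` function in place of `dmesg -w` (it prints two lines, then goes silent, so `read -t` has something to time out on):

```shell
#!/bin/bash
# Hypothetical stand-in for `dmesg -w`: emits two lines, then stalls.
producer() {
    echo "line one"
    echo "line two"
    sleep 60
}

fifo=$(mktemp -u)             # pick an unused path for the FIFO
mkfifo "$fifo"

producer > "$fifo" &          # run the producer in the background
pid=$!

while IFS= read -t 3 -r line; do
    echo "$line"
done < "$fifo"                # read -t times out once output stops

echo "NO NEW LINE"
kill "$pid" 2>/dev/null       # stop the producer explicitly
rm -f "$fifo"
echo "END"
```

Because the script, not the pipe, owns the producer's lifetime, the main shell is free to continue (and print END) as soon as the read times out.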
I would like to build something like a log analyzer for a running process. Say I run a server whose stdout passes through a pipe to a bash script containing an if statement. If the string "somethings" appears in the output, the script kills the server; if not, it prints stdout normally and keeps running.
Example:
./server | if.bash
The contents of if.bash:
if grep 'somethings'; then
    kill app
else
    echo server output
fi
The above code successfully runs the test, but doesn't print the original stdout. How can I ensure that content is still printed?
Read the output in a loop:
while read -r line; do
    if [[ $line =~ something ]]; then
        kill app
        break
    else
        printf "%s\n" "$line"
    fi
done
Another option is to use tee when running the script:
./server | tee /dev/tty | if.bash
tee will output the messages on the terminal and also send them to the pipe.
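Putting the read-loop approach together as a self-contained sketch: here a hypothetical `produce` function stands in for `./server`, and an echo stands in for `kill app`, so everything up to the trigger line is passed through and everything after it is dropped.

```shell
#!/bin/bash
# Hypothetical stand-in for `./server`: a few normal lines, then the
# trigger string, then a line the filter should never print.
produce() {
    echo "starting"
    echo "listening on :8080"
    echo "somethings went wrong"
    echo "never shown"
}

out=$(produce | while IFS= read -r line; do
    if [[ $line == *somethings* ]]; then
        echo "trigger seen: would kill server here"   # stand-in for `kill app`
        break
    fi
    echo "$line"
done)
echo "$out"
```

The break ends the reading loop as soon as the trigger appears, so lines after the trigger are never echoed.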
I'd like to ask the user for confirmation (Display output [Y/n]) by reading from stdin. It works OK if some arguments were provided, or if no arguments were provided but input was typed interactively. However, if data was piped to the script, there's no confirmation.
#!/bin/bash
output_file=$(mktemp)

cleanup() {
    rm -f "$output_file"
}
trap cleanup 0 1 2 3 15

if [ $# -gt 0 ]; then
    while [ $# -gt 0 ]; do
        echo "$1" >> "$output_file"
        shift
    done
else
    while read -r line; do
        echo "$line" >> "$output_file"
    done
fi

while true; do
    read -p "Display output? [Y/n]" response
    if [ -z "$response" ]; then
        break
    fi
    case $response in
        [Yy]*) break;;
        [Nn]*) exit;;
    esac
done

less "$output_file"
What prevents read -p from working? What should be done to get consistent behavior?
The read command reads input from standard in. If standard in is fed from a pipe, then read looks for its data in the pipe, not on your terminal.
On most platforms you can work around this by redirecting the read command's input directly from the tty device, as in:
read -p "Display output? [Y/n]" response </dev/tty
If the script read everything from standard input, what is the read -p going to get? And it likely doesn't prompt if the input is not an 'interactive device' (aka terminal). Have you checked the Bash man page for read? It says:
-p prompt
    Display prompt, without a trailing newline, before attempting to read any input. The prompt is displayed only if input is coming from a terminal.
When your input is from a pipe, it is not from a terminal.
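A small sketch of how to detect this situation up front with the shell's `-t` test, which checks whether a file descriptor is attached to a terminal (the `src` variable and messages are illustrative, not part of the original script):

```shell
#!/bin/bash
# Check whether fd 0 (stdin) is a terminal before relying on read -p.
if [ -t 0 ]; then
    src="terminal"
    echo "stdin is a terminal: read -p will prompt normally"
else
    src="pipe"
    echo "stdin is not a terminal: redirect the prompt from /dev/tty"
fi
```

Branching on `[ -t 0 ]` lets a script read its data from the pipe first and then deliberately switch the confirmation prompt over to /dev/tty only when needed.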
I have a script that launches a subshell/background command to read input and then does more work:
#!/bin/bash
(
    while true; do
        read -u 0 -r -e -p "test_rl> " line || break
        echo "line: ${line}"
    done
) &
sleep 3600 # more work
With the above I don't even get a prompt. If I exec 3>&0 prior to launching the subshell and then read from descriptor 3 (-u 3) then I at least get the prompt, but the read command still doesn't get any input that I type.
How do I get the read builtin to read correctly from the terminal (parent's stdin file descriptor)?
How do I get the read builtin to read correctly from the terminal
(parent's stdin file descriptor)?
You might want to try this (using the parent's filedescriptors):
#!/bin/bash
(
    while true; do
        read -u 0 -r -e -p "test_rl> " line || break
        echo "line: ${line}"
    done
)<&0 >&1 &
sleep 3600 # more work
Using process substitution, we can get every line of output of a command.
# Echoes once per second using process substitution
while read -r line; do
    echo "$line"
done < <(for i in $(seq 1 10); do echo $i && sleep 1; done)
In the same way, I want to get the stdout output of the wpa_supplicant command while discarding stderr.
But nothing can be seen on screen!
while read -r line; do
    echo "$line"
done < <(wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null)
I confirmed that typing the same command at the prompt shows its output normally.
$ wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null
What is the mistake? Any help would be appreciated.
Finally I found the answer here!
The problem was simple: buffering. Using stdbuf (and piping), the original code can be modified as below.
stdbuf -oL wpa_supplicant -iwlan1 -Dwext -c${MY_CONFIG_FILE} | while read -r line; do
    echo "! $line"
done
stdbuf -oL makes the stream line-buffered, so I can get each line from the running process as soon as it is printed.
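The same stdbuf pattern as a self-contained sketch, with an external printf standing in for wpa_supplicant so it can run anywhere (stdbuf itself comes from GNU coreutils):

```shell
#!/bin/bash
# Line-buffer the producer's stdout so each line is delivered to the
# pipe as soon as it is printed, rather than when a block fills up.
out=$(stdbuf -oL printf 'scan started\nscan done\n' | while IFS= read -r line; do
    echo "! $line"
done)
echo "$out"
```

For a short-lived producer like printf the buffering is invisible, but for a long-running daemon like wpa_supplicant the -oL flag is exactly what turns "nothing on screen" into a steady stream of lines.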