Read full stdin until EOF when stdin comes from `cat` - bash

I'm trying to read full stdin into a variable:
script.sh
#!/bin/bash
input=""
while read line
do
    echo "$line"
    input="$input""\n""$line"
done < /dev/stdin
echo "$input" > /tmp/test
When I run ls | ./script.sh or almost any other command, it works fine.
However, it doesn't work when I run cat | ./script.sh, enter my message, and then hit Ctrl-C to exit cat.
Any ideas?

I would stick to the one-liner
input=$(cat)
Of course, Ctrl-D should be used to signal end-of-file; Ctrl-C sends SIGINT to the whole foreground pipeline, so it kills not only cat but your script too, before the script gets a chance to write its output.
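Put together, the whole script reduces to a minimal sketch like this (the /tmp/test path comes from the question):
#!/bin/bash
# $(cat) blocks until EOF (Ctrl-D); Ctrl-C would kill the whole
# pipeline, including this script, before anything is written.
input=$(cat)
printf '%s\n' "$input" > /tmp/test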

How to monitor the stdout of a command with a timer?

I'd like to know when an application hasn't printed a line to stdout for N seconds.
Here is a reproducible example:
#!/bin/bash
dmesg -w | {
    while IFS= read -t 3 -r line
    do
        echo "$line"
    done
    echo "NO NEW LINE"
}
echo "END"
I can see NO NEW LINE, but the pipeline doesn't stop and bash doesn't continue; END is never displayed.
How do I exit from the code in the braces?
Source: https://unix.stackexchange.com/questions/117501/in-bash-script-how-to-capture-stdout-line-by-line
Not all commands exit when they can't write to their output or receive SIGPIPE, and even those that do will not exit until they actually try to write and notice the failure. Instead, run the command in the background. If the intention is not to wait on the process, in bash you could just use process substitution:
{
    while IFS= read -t 3 -r line; do
        printf "%s\n" "$line"
    done
    echo "end"
} < <(dmesg -w)
You could also use a coprocess. Or just run the command in the background with a pipe and kill it when done with it.
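For example, a sketch of the coprocess variant (using the same dmesg -w producer and 3-second timeout as above):
#!/bin/bash
# DMESG[0] is the coprocess's stdout; DMESG_PID holds its process id.
coproc DMESG { dmesg -w; }
while IFS= read -t 3 -r line <&"${DMESG[0]}"; do
    printf '%s\n' "$line"
done
echo "NO NEW LINE"
kill "$DMESG_PID"    # stop the background producer explicitly
echo "END"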

Fails to read lines from running process in bash

Using process substitution, we can get every line of output of a command.
# Echoes every second using process substitution
while read line; do
    echo "$line"
done < <(for i in $(seq 1 10); do echo $i && sleep 1; done)
In the same way, I want to get the stdout of the wpa_supplicant command while discarding stderr.
But nothing can be seen on screen!
while read line; do
    echo "$line"
done < <(wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null)
I confirmed that typing the same command at the prompt shows its output normally.
$ wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null
What is the mistake? Any help would be appreciated.
Finally I found the answer!
The problem was simple: buffering. Using stdbuf (and piping), the original code can be modified as below.
stdbuf -oL wpa_supplicant -iwlan1 -Dwext -c${MY_CONFIG_FILE} | while read line; do
    echo "! $line"
done
stdbuf -oL makes the stream line-buffered, so I can get each line from the running process as soon as it is printed.
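One caveat: the pipe runs the while loop in a subshell, so variables set inside it don't survive. A hedged variant that combines stdbuf with the process substitution from the question keeps the loop in the current shell (the config path is the question's placeholder):
count=0
while IFS= read -r line; do
    echo "! $line"
    count=$((count + 1))
done < <(stdbuf -oL wpa_supplicant -Dwext -iwlan1 -c"${MY_CONFIG_FILE}" 2> /dev/null)
echo "read $count lines"    # $count is visible here because no pipe was used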

Copy *unbuffered* stdout to file from within bash script itself

I want to copy stdout to a log file from within a bash script; that is, I don't want to call the script with its output piped to tee, I want the script itself to handle it. I've successfully used this answer to accomplish that, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command; if I run the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
    prog="$0"
    stdbuf -oL "$prog" __BUFFERED__ "$@"
else
    shift # discard __BUFFERED__
    exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
    exec 2>&1
    # <rest of script>
    echo "hello"
    sleep 1
    echo "world"
fi
This will mostly work:
- if you run the script with ./test, it shows [] hello\n[] world, unbuffered.
- if you run the script with ./test 123 456, it shows [123] hello\n[123] world like you want.
- it won't work, however, if you run it with bash test: $0 is then set to test, which is not the path to your script. Fixing this is out of scope for this question, though.
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 1> >(tee -a myscript.log) 2>&1
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
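Put together, a minimal sketch of the save/duplicate/restore pattern (the log file name is from the question):
#!/bin/bash
exec 6>&1                            # save the original stdout on fd 6
exec 1> >(tee -a myscript.log) 2>&1  # send stdout and stderr through tee
echo "hello"                         # appears on the terminal and in the log
echo "oops" >&2                      # stderr follows the same route
exec 1>&6 2>&6 6>&-                  # restore stdout/stderr and close fd 6
echo "back to plain stdout"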

Redirect stdin in a script to another process

Say I have a bash script that gets some input via stdin. Now in that script I want to launch another process and have that process get the same data via its stdin.
#!/bin/bash
echo STDIN | somecommand
Now the "echo STDIN" thing above is obviously bogus; the question is how to do that. I could use read to read each line from stdin, append it to a temp file, then
cat my_temp_file | somecommand
but that is somewhat kludgy.
When you write a bash script, the standard input is automatically inherited by any command within it that tries to read it, so, for example, if you have a script myscript.sh containing:
#!/bin/bash
echo "this is my cat"
cat
echo "I'm done catting"
And you type:
$ myscript.sh < myfile
You obtain:
this is my cat
<... contents of my file...>
I'm done catting
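So for the question as asked, the sketch is simply (somecommand is the question's placeholder):
#!/bin/bash
# No extra plumbing needed: somecommand inherits this script's stdin
# and reads whatever the script has not already consumed.
somecommand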
Can tee help you?
echo 123 | (tee >(sed s/1/a/) >(sed s/3/c/) > /dev/null)

Append text to stderr redirects in bash

Right now I'm using exec to redirect stderr to an error log with
exec 2>> ${errorLog}
The only downside is that I have to start each run with a timestamp, since exec just pushes the text straight into the log file. Is there a way to redirect stderr but still let me add text to it, such as a timestamp?
This is very interesting. I asked a guy who knows bash quite well, and he suggested this:
foo() { while IFS='' read -r line; do echo "$(date) $line" >> file.txt; done; };
First, that creates a function that reads one line of raw input from stdin; the empty IFS keeps blanks from being stripped. Having read one line, it outputs it with the appropriate data prepended. Then you have to tell bash to redirect stderr into that function:
exec 2> >(foo)
Everything you write to stderr will now go through the foo function. Note that when you do this in an interactive shell, you won't see the prompt anymore, because it's printed to stderr and the read in foo is line-buffered :)
You could simply use:
exec 1> >(sed "s/^/$(date '+[%F %T]'): /" | tee -a "${LOGFILE}") 2>&1
This will not completely solve your problem of the prompt not being shown (it will appear after a short delay, not in real time, since the pipe buffers some data), but it will display the output 1:1 on stdout as well as in the file.
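Note that the $(date ...) inside the double-quoted sed expression is expanded once, when the exec runs, so every line receives the same timestamp. A hedged per-line variant, reusing the while-read approach from the previous answer:
exec 1> >(while IFS='' read -r line; do
    printf '%s %s\n' "$(date '+[%F %T]'):" "$line"
done | tee -a "${LOGFILE}") 2>&1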
The only problem I could not solve is doing this from within a function, since that opens a subshell, where the exec is useless for the main program...
This example redirects stdout and stderr without losing the original stdout and stderr. Errors in the stdout handler are also logged to the stderr handler. The file descriptors are saved in variables and closed in the child processes; Bash takes care that no collisions occur.
#!/bin/bash
stamp ()
{
    local LINE
    while IFS='' read -r LINE; do
        echo "$(date '+%Y-%m-%d %H:%M:%S,%N %z') $$ $LINE"
    done
}
exec {STDOUT}>&1
exec {STDERR}>&2
exec 2> >(exec {STDOUT}>&-; exec {STDERR}>&-; exec &>> stderr.log; stamp)
exec > >(exec {STDOUT}>&-; exec {STDERR}>&-; exec >> stdout.log; stamp)
for n in $(seq 3); do
    echo loop $n >&$STDOUT
    echo o$n
    echo e$n >&2
done
This requires a current Bash version, but thanks to Shellshock one can rely on that nowadays.
cat q23123 2> tmp_file; cat tmp_file | sed -e "s/^/$(date '+[%F %T]'): /g" >> output.log; rm -f tmp_file
