Redirect terminal stdout through command - bash

All I want to do is redirect the executed commands' stdout through a pipe. An example will explain it better than I can.
$ echo "Hello world" | cowsay
outputs "Hello world" in cowsay, i want to preprocess terminal's / bash stdout to pass through cowsay
$ echo "Hello world"
produces the same output as the first command.
Thanks in advance.

You can use process substitution:
#!/bin/bash
exec > >(cowsay)
echo "Hello world"
However, there are caveats. If you start anything in the background, the cow will wait for it to finish:
#!/bin/bash
exec > >(cowsay)
echo "Hello world"
sleep 30 & # Runs in the background
exit # Script exits immediately but no cow appears
In this case, the script will exit with no output. 30 seconds later, when sleep exits, the cow suddenly shows up.
You can fix this by telling the programs to write somewhere else, so that the cow doesn't have to wait for them:
#!/bin/bash
exec > >(cowsay)
echo "Hello world"
sleep 30 > /dev/null & # Runs in the background and doesn't keep the cow waiting
exit # Script exits and the cow appears immediately.
If you don't start anything in the background yourself, one of your tools or programs does. To find which, redirect or comment them out one by one until the cows appear, as sketched below.
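For example, to test whether a particular program is the culprit (some_tool here is a hypothetical stand-in for whatever you suspect):
#!/bin/bash
exec > >(cowsay)
echo "Hello world"
some_tool > /dev/null 2>&1 &  # hypothetical suspect, silenced so it cannot hold the pipe open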

You can use a named pipe:
mkfifo /tmp/cowsay_pipe
cowsay < /tmp/cowsay_pipe &
exec > /tmp/cowsay_pipe # Redirect all future output to the pipe
echo "Hello world"

tee hangs in bash -- is there an alternative syntax?

Let's say you have a series of scripts that you don't own and, therefore, can't modify, that may spawn background processes without redirecting stdout and stderr. I've noticed that in bash, tee'ing the output, as shown in the following example, does not return when the script is done if the background process is still running (and has open file descriptors for stdout or stderr).
./runme.sh 2>&1 | tee runme.out
Where runme.sh is defined as:
#!/bin/bash
# Start a fake daemon
perl -e 'while(1) { sleep(1) }' &
printf "Enter your name: "
read name
echo "Goodbye $name"
How can I run scripts like this in bash while capturing all output and get back to the prompt when the script is done?
An alternative syntax is to use process substitution:
./runme.sh > >(tee runme.out) 2>&1
This way tee is no longer a child process of the current shell, so the shell waits only for runme.sh to terminate, whereas in a pipeline it waits for every process in the pipeline to terminate.
Note that tee and the subprocesses are still running after runme.sh terminates.
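If you also need the prompt back only after tee has flushed everything, bash 4.4 and newer let you wait on the last process substitution (a sketch; older versions will complain that the PID is not a child of this shell):
./runme.sh > >(tee runme.out) 2>&1
wait $!  # bash >= 4.4: wait for the tee process substitution to drain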
does not return when the script is done if the background process is still running (and has open file descriptors for stdout or stderr)
So don't do that. Daemon tools will generally redirect stdout/err for this reason, and you can do it manually too:
perl -e 'while(1) { sleep(1) }' < /dev/null > mydaemon.log 2>&1 &
Now that it's not keeping the pipe open, you can tee robustly without hacks.

Start process in background quietly

Appending a & to the end of a command starts it in the background. E.g.:
$ wget google.com &
[1] 7072
However, this prints a job number and PID. Is it possible to prevent these?
Note: I still want to keep the output of wget – it's just the [1] 7072 that I want to get rid of.
There is an option to the set builtin, set -b, that controls the output of this line, but the choice is limited to "immediately" (when set) and "wait for next prompt" (when unset).
Example of immediate printing when the option is set:
$ set -b
$ sleep 1 &
[1] 9696
$ [1]+ Done sleep 1
And the usual behaviour, waiting for the next prompt:
$ set +b
$ sleep 1 &
[1] 840
$ # Press enter here
[1]+ Done sleep 1
So as far as I can see, these can't be suppressed. The good news is, though, that job control messages are not displayed in a non-interactive shell:
$ cat sleeptest
#!/bin/bash
sleep 1 &
$ ./sleeptest
$
So if you start a command in the background in a subshell, there won't be any messages. To do that in an interactive session, you can run your command in a subshell like this (thanks to David C. Rankin):
$ ( sleep 1 & )
$
which also results in no job control prompts.
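Applied to the original example (wget writes its progress to stderr, so its output still comes through; only the job-control lines disappear):
$ ( wget google.com & )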
From the Advanced Bash-Scripting Guide:
Suppressing stdout.
cat $filename >/dev/null
# Contents of the file will not list to stdout.
Suppressing stderr (from Example 16-3).
rm $badname 2>/dev/null
# So error messages [stderr] deep-sixed.
Suppressing output from both stdout and stderr.
cat $filename 2>/dev/null >/dev/null
# If "$filename" does not exist, there will be no error message output.
# If "$filename" does exist, the contents of the file will not list to stdout.
# Therefore, no output at all will result from the above line of code.
#
# This can be useful in situations where the return code from a command
#+ needs to be tested, but no output is desired.
#
# cat $filename &>/dev/null
# also works, as Baris Cicek points out.
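Combining the two ideas gives a background start that is silent on every channel, job messages included (long_job is a hypothetical placeholder for your command):
( long_job >/dev/null 2>&1 & )  # no job number, no PID, no output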

Close pipe even if subprocesses of first command are still running in background

Suppose I have test.sh as below. The intent is for this script to run some background task(s) that continuously update some file. If a background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
and want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I actually try to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in the stderr of the commands in the while loop; adjust the code if you are. :-))
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, for example 1 (standard output), you can do it with:
exec 1<&-
This is valid for POSIX shells.
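A minimal demonstration of what closing fd 1 does (my example, not from the original answer):
#!/bin/bash
echo "this line is printed"
exec 1<&-                           # close standard output
echo "this one is not" 2>/dev/null  # the write fails; echo's own error message is discarded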
When you put the while loop in an explicit subshell and run the subshell in the background it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done) &

Running bash commands in the background without printing job and process ids

Running a process in the background in bash is fairly easy.
$ echo "Hello I'm a background task" &
[1] 2076
Hello I'm a background task
[1]+ Done echo "Hello I'm a background task"
However, the output is verbose: the first line prints the job id and process id of the background task, then comes the output of the command itself, and finally the job id, its status, and the command that triggered the job.
Is there a way to suppress the output of running a background task so that it looks exactly as it would without the trailing ampersand? I.e.:
$ echo "Hello I'm a background task" &
Hello I'm a background task
The reason I ask is that I want to run a background process as part of a tab-completion command so the output of that command must be uninterrupted to make any sense.
Not related to completion, but you could suppress that output by putting the call in a subshell:
(echo "Hello I'm a background task" &)
Building off of @shellter's answer, this worked for me:
tyler@Tyler-Linux:~$ { echo "Hello I'm a background task" & disown; } 2>/dev/null; sleep .1;
Hello I'm a background task
tyler@Tyler-Linux:~$
I don't know the reasoning behind this, but I remembered from an old post that disown prevents bash from outputting the process ids.
In some newer versions of bash and in ksh93 you can surround it with a sub-shell or process group (i.e. { ... }).
/home/shellter $ { echo "Hello I'm a background task" & } 2>/dev/null
Hello I'm a background task
/home/shellter $
Based on this answer, I came up with this more concise and correct version:
silent_background() {
    { 2>&3 "$@" & } 3>&2 2>/dev/null
    disown &>/dev/null  # Prevent whine if job has already completed
}
silent_background date
Building on the above answer, if you need to allow stderr to come through from the command:
f() { echo "Hello I'm a background task" >&2; }
{ f 2>&3 & } 3>&2 2>/dev/null
Try:
user@host:~$ read < <( echo "Hello I'm a background task" & echo $! )
user@host:~$ echo $REPLY
28677
And you have hidden both the output and the PID. Note that you can still retrieve the PID from $REPLY
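If the background task were long-running, the captured PID could be used later, for example:
kill "$REPLY" 2>/dev/null  # stop the hidden task; an error if it already exited is discarded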
Sorry for the response to an old post, but I figure this is useful to others, and it's the first response on Google.
I was having an issue with this method (subshells) and using 'wait'. However, as I was running it inside a function, I was able to do this:
function a {
    echo "I'm background task $1"
    sleep 5
}
function b {
    for i in {1..10}; do
        a $i &
    done
    wait
} 2>/dev/null
And when I run it:
$ b
I'm background task 1
I'm background task 3
I'm background task 2
I'm background task 4
I'm background task 6
I'm background task 7
I'm background task 5
I'm background task 9
I'm background task 8
I'm background task 10
And there's a delay of 5 seconds before I get my prompt back.
The subshell solution works, but I also wanted to be able to wait on the background jobs (and not have the "Done" message at the end). $! from a subshell is not "waitable" in the current interactive shell. The only solution that worked for me was to use my own wait function, which is very simple:
myWait() {
    while true; do
        sleep 1; STOP=1
        for p in $*; do
            ps -p $p >/dev/null && STOP=0 && break
        done
        ((STOP==1)) && return 0
    done
}
i=0
((i++)); p[$i]=$(do_whatever1 & echo $!)
((i++)); p[$i]=$(do_whatever2 & echo $!)
..
myWait ${p[*]}
Easy enough.
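A variant of the same function that avoids spawning ps every second is kill -0, which delivers no signal and merely tests whether the process still exists (a sketch; for processes you don't own, kill -0 can fail even though they are alive):
myWait() {
    while true; do
        sleep 1; STOP=1
        for p in "$@"; do
            # kill -0 sends nothing; it only checks that the PID is still around
            kill -0 "$p" 2>/dev/null && STOP=0 && break
        done
        ((STOP==1)) && return 0
    done
}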

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. ($! expands to the PID of the most recent background command.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
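Put together as a self-contained script (sleep stands in for any background worker):
#!/bin/bash
sleep 100 &              # background worker we want to stop quietly
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null  # the Terminated notice is emitted here and discarded
echo "killed $pid with no Terminated message"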
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
See notify_of_job_status() in jobs.c.
As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT: For a more appropriate answer, see the one given by Mark Edgar below.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is printed by the default signal handling of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume the script name is test.sh
foo() {
    trap 'exit 0' TERM  ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1  # wait until the trap is installed
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid  ## no need to redirect stdin/stderr
sleep 1  # wait until the kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in brackets) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated or evaluate its return code.
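One way to get that PID back out is command substitution; note the command's stdout must be redirected away, or the substitution would block until the command exits (a sketch):
pid=$( sleep 30 >/dev/null 2>&1 & echo $! )  # the subshell prints no job message
kill "$pid"  # later: no Terminated either, since the process was never a job of this shell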
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 trick is risky for a lot of reasons, and set +bm didn't seem to work inside a script, only at the command prompt.
I had success with adding jobs 2>&1 >/dev/null to the script. Not certain it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
    sleep 30 &
    pid="${!}"
    sleep 5
    kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal, e.g.:
{ kill -9 $PID; } 2>/dev/null
Note the semicolon before the closing brace: without it, } would be treated as an argument to kill.
