Start process in background quietly - bash

Appending an & to the end of a command starts it in the background, e.g.:
$ wget google.com &
[1] 7072
However, this prints a job number and PID. Is it possible to prevent these?
Note: I still want to keep the output of wget – it's just the [1] 7072 that I want to get rid of.

There is an option to the set builtin, set -b, that controls the output of this line, but the choice is limited to "immediately" (when set) and "wait for next prompt" (when unset).
Example of immediate printing when the option is set:
$ set -b
$ sleep 1 &
[1] 9696
$ [1]+ Done sleep 1
And the usual behaviour, waiting for the next prompt:
$ set +b
$ sleep 1 &
[1] 840
$ # Press enter here
[1]+ Done sleep 1
So as far as I can see, these can't be suppressed. The good news is, though, that job control messages are not displayed in a non-interactive shell:
$ cat sleeptest
#!/bin/bash
sleep 1 &
$ ./sleeptest
$
So if you start a command in the background in a subshell, there won't be any messages. To do that in an interactive session, you can run your command in a subshell like this (thanks to David C. Rankin):
$ ( sleep 1 & )
$
which also results in no job control prompts.
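One caveat worth adding (my note, not from the answers above): inside ( ... & ) the parent shell never learns the background job's PID, so $! in the parent is not set by it. If you still need the PID, you can echo it out of the subshell; a minimal sketch, assuming you are willing to discard the command's own output:
$ pid=$( ( wget google.com >/dev/null 2>&1 & echo $! ) )
$ kill "$pid"   # the PID is available even though no job message was printed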

From the Advanced Bash-Scripting Guide:
Suppressing stdout.
cat $filename >/dev/null
# Contents of the file will not list to stdout.
Suppressing stderr (from Example 16-3).
rm $badname 2>/dev/null
# So error messages [stderr] deep-sixed.
Suppressing output from both stdout and stderr.
cat $filename 2>/dev/null >/dev/null
# If "$filename" does not exist, there will be no error message output.
# If "$filename" does exist, the contents of the file will not list to stdout.
# Therefore, no output at all will result from the above line of code.
#
# This can be useful in situations where the return code from a command
#+ needs to be tested, but no output is desired.
#
# cat $filename &>/dev/null
# also works, as Baris Cicek points out.
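Putting the two together: to start a command in the background with no job-control message and no output at all, a minimal sketch using the same example command:
$ ( wget google.com >/dev/null 2>&1 & )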

How can I conditionally copy output to a file without repeating echo/printf statements?

I know how to redirect stdout to a file:
exec > foo.log
echo test
This will put 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
It can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself.
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as meaning that they output to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
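A minimal sketch of that two-log-file variant (my illustration, not part of the original answer; the date format, the stamp helper, and the log names are arbitrary choices):
#!/bin/bash
# Prefix each line with a timestamp so the two logs can be merged chronologically later.
stamp() { while IFS= read -r line; do printf '%s %s\n' "$(date '+%F %T')" "$line"; done; }
exec > >(stamp | tee -ia stdout.log)
exec 2> >(stamp | tee -ia stderr.log >&2)
echo "foo"
echo "bar" >&2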
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note that the use of $* is not necessarily safe; a safer variant is shown after the script.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named pipe for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh $0 $* >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
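Regarding the $* caveat above: a safer variant of the relaunch line quotes $0 and uses "$@", so arguments containing spaces survive the round trip:
SELF_LOGGING=1 sh "$0" "$@" >$PIPE &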
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
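If you also want stderr in the log, a minimal variant (my addition) merges it into stdout before the pipe:
(
echo start
ls -l
echo end
) 2>&1 | tee foo.log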
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts | tee -a /tmp/myscript.output >&2)
This requires moreutils (for the ts command, which adds timestamps).
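A minimal end-to-end sketch of the syslog variant (the tag myscript is arbitrary; -s makes logger echo each message to stderr as well as sending it to syslog):
#!/bin/bash
exec &> >(logger -t myscript -s)
echo "starting up"    # appears in /var/log/syslog tagged 'myscript'
ls /nonexistent       # stderr is captured too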
Using the accepted answer, my script kept returning control exceptionally early (right after exec > >(tee ...)), leaving the rest of the script running in the background. As I couldn't get that solution to work my way, I found another solution/workaround to the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes output from the script go from the process, through the pipe, into the backgrounded tee process, which logs everything to disk and to the original stdout of the script.
Note that exec &> redirects both stdout and stderr; we could redirect them separately if we like, or change to exec > if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a named pipe to a command and allows you to communicate through it.
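A minimal sketch of that coproc approach (my illustration; requires bash >= 4, and fd 3 is an arbitrary choice for saving the original stdout):
#!/bin/bash
exec 3>&1                           # save the original stdout on fd 3
coproc TEE { tee -a logfile >&3; }  # tee reads the coproc pipe, writes to the log and fd 3
exec >&"${TEE[1]}" 2>&1             # send the script's stdout and stderr into the coproc
echo "logged and shown"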
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
# copy (append) stdout and stderr to log file if TEE is unset or true
if [[ -z $TEE || "$TEE" == true ]]; then
echo '-------------------------------------------' >> log.txt
echo '***' $(date) "$0" "$@" >> log.txt
TEE=false "$0" "$@" 2>&1 | tee --append log.txt
exit $?
fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make tee=false the default instead, make TEE hold the log file instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
tee foo.log <$PIPE &   # start the reader first: opening a FIFO for writing blocks until a reader opens it
exec >$PIPE
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (you could rm it from the parent shell afterwards).
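If the leftover pipe file is a concern, a hedged variant of the second option uses an EXIT trap so the FIFO is removed however the script ends:
PIPE=tmp.fifo
mkfifo "$PIPE"
trap 'rm -f "$PIPE"' EXIT   # remove the FIFO on any exit, clean or not
tee foo.log <"$PIPE" &
exec >"$PIPE"
# rest of your script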

ksh on BSD style UNIX. How to redirect stderr to tee and a file

A similar question has been asked here,
but it won't work for me:
"shell.ksh" > >(tee here.log ) 2> >(tee err.log)
ksh: 0403-057 Syntax error: `>' is not expected.
This command will not redirect stderr, just stdout:
"shell.ksh" | tee file.log 2>&1
I am on AIX running some UNIX similar to BSD that does not have GNU extensions
and this is ksh
ps $$
PID TTY STAT TIME COMMAND
40632390 pts/6 A 0:00 -ksh
Any short and sweet solutions, folks?
Here is a wrapper script that should do what you want. I can't confirm that it works on AIX ksh, but it works using sh (AT&T Research) 93u+ 2012-08-01 (Debian's ksh package calls it the "Real, AT&T version of the Korn shell." This is intentionally neither pdksh nor its mksh fork).
This is portable and should work in all POSIX shells.
pipe1+2.ksh
#!/bin/ksh
touch err.log here.log # create the logs now so tail -f doesn't complain
tail -f here.log & # follow the output we'll pipe to later (run in bg)
HERE=$! # save the process ID of this backgrounded process
tail -f err.log >&2 & # follow output (as above), pipe output to stderr
ERR=$! # save the process ID
"$#" > here.log 2> err.log # run given command and pipe stdout and stderr
RET=$? # save return value
sleep 1 # tail polls at 1/s, so wait one second
kill $HERE $ERR # stop following those logs
exit $RET # restore the exit code from given command
So you invoke it as pipe1+2.ksh shell.ksh (assuming both commands are in your path).
Here's a test version of shell.ksh:
#!/bin/ksh
echo this output is stdout 1
echo this output is stderr >&2
echo this output is stdout 2
and a trial run:
# ksh pipe1+2.ksh ksh shell.ksh
this output is stdout 1
this output is stdout 2
this output is stderr
# cat here.log
this output is stdout 1
this output is stdout 2
# cat err.log
this output is stderr
A note: because tail -f isn't instantaneous, stdout vs stderr will be slightly out of order. It won't even be consistent between runs, at least with code that runs as fast as echo.
I doubt your version of tail supports it, but GNU tail has a -s SECONDS flag that lets you specify a different polling interval as a decimal value, so you can try e.g. tail -f -s 0.01 … to improve the ordering of what is displayed. If your version of sleep also supports decimal values, change it to match that number.
(I had originally crafted a complicated file descriptor piping response based on one of the "more elaborate combinations" in the amazing Csh Programming Considered Harmful, but that didn't seem to work. See the comments and/or the initial version of this answer.)

Copy *unbuffered* stdout to file from within bash script itself

I want to copy stdout to a log file from within a bash script, meaning I don't want to call the script with output piped to tee, I want the script itself to handle it. I've successfully used this answer to accomplish this, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command, and if running the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
prog="$0"
stdbuf -oL "$prog" __BUFFERED__ "$#"
else
shift #discard __BUFFERED__
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 1
echo "world"
fi
This will mostly work:
if you run the script with ./test, it shows unbuffered [] hello\n[] world.
if you run the script with ./test 123 456, it shows [123] hello\n[123] world like you want.
it won't work, however, if you run it with bash test - $0 is set to test which is not your script. Fixing this is not in the scope of this question though.
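If you do want to cover the bash test case, a hedged sketch (my addition, assuming the script is invoked from its own directory): prefix a bare name with ./ and re-invoke through bash explicitly, so a name like test is not resolved to /usr/bin/test via PATH:
prog="$0"
case "$prog" in */*) ;; *) prog="./$prog" ;; esac  # avoid PATH lookup for bare names
stdbuf -oL bash "$prog" __BUFFERED__ "$@"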
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 > >(tee -a myscript.log) 2>&1   # save the original stdout on fd 6, then send stdout and stderr through tee
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
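If you would rather keep the sed prefix from the question and just remove the delay, GNU sed has an unbuffered mode; a minimal sketch (-u is GNU-specific):
#!/bin/bash
exec > >(sed -u "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1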

Close pipe even if subprocesses of first command is still running in background

Suppose I have test.sh as below. The intent is to run some background task(s) from this script that continuously update some file. If a background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
cat somewhere
exit
fi
while true; do
echo "something" >> somewhere
sleep 1
done &
echo $! > pidfile
and want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I'm actually trying to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem here is now that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in any stderr from the commands in the while loop. Adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
cat somewhere
exit
fi
while true; do
echo "something" >> somewhere
sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, like for example 1 which is standard output, you can do it with:
exec 1<&-
This is valid for POSIX shells.
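Combining that with the earlier answer, a sketch in the context of this question (the /dev/null redirection is still needed, because the background loop would otherwise keep the pipe's write end open):
#!/bin/sh
while true; do
echo "something" >> somewhere
sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
exec 1<&-    # close this script's own stdout; the pipe reader now sees EOF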
When you put the while loop in an explicit subshell, run the subshell in the background, and redirect the subshell's stdout away from the pipe, it gives the desired behaviour.
(while true; do
echo "something" >> somewhere
sleep 1
done) >/dev/null &

Ending Timestamp not printing on Shell Script: Using trap

I have a shell script I use for deployments. Since I want to capture the output of the entire process, I've wrapped it in a subshell and tail that out:
#! /usr/bin/env ksh
# deploy.sh
########################################################################
(yadda, yadda, yadda)
########################################################################
# LOGGING WRAPPER
#
dateFormat=$(date +"%Y.%m.%d-%H.%M.%S")
(
print -n "EXECUING: $0 $*: "
date
#
########################################################################
(yadda, yadda, yadda)
#
# Tail Startup
#
trap 'printf "Stopping Script: ";date;exit 0"' INT
print "TAILING LOG: YOU MAY STOP THIS WITH A CTRL-C WHEN YOU SEE THAT SERVER HAS STARTED"
sleep 2
./tailLog.sh
) 2>&1 | tee "deployment.$dateFormat.log"
#
########################################################################
Before I employed the subshell, the trap command worked. When you pressed Ctrl-C, the program would print Stopping Script: and the date.
However, I wanted to make sure that no one forgets to save the output of this script, so I employed the subshell to automatically save the output. And, now trap doesn't seem to be working.
What am I doing wrong?
NEW INFORMATION
A little more playing around. I now see the issue isn't the shell or subshell. It's the damn pipe!
If I don't pipe the output to tee, the trap works fine. If I pipe the output to tee, the trap doesn't work.
So, the real question is how do I tee the output and still be able to use trap?
TEST PROGRAM
Before you answer, please, please, try these test programs:
#! /bin/ksh
dateFormat=$(date +"%Y.%m.%d-%H:%M:%S")
(
trap 'printf "The script was killed at: %s\n", "$(date)"' SIGINT
echo "$0 $*"
while sleep 2
do
print -n "The time is now "
date
done
) | tee somefile
And
#! /bin/ksh
dateFormat=$(date +"%Y.%m.%d-%H:%M:%S")
(
trap 'printf "The script was killed at: %s\n", "$(date)"' SIGINT
echo "$0 $*"
while sleep 2
do
print -n "The time is now "
date
done
)
The top one pipes to somefile; the bottom one doesn't. With the bottom one, the trap works; with the top one, it doesn't. See if you can get the pipe to work and the "The script was killed at" line to print into the teed output file.
The pipe does work. The trap doesn't, but only when I have the pipe. You can move the trap statement all around and put in layers and layers of sub shells. There's some minor thing I am doing that's wrong, and I have no idea what it is.
Since trap stops the running process (tailLog.sh), I think the pipe doesn't get executed at all. You can't do it this way.
One solution could be editing tailLog.sh to write line by line into your log file. Maybe you could post it so we can discuss how you manage it.
OK, now I've got it. You have to use tee with -i to ignore interrupt signals.
#! /bin/ksh
dateFormat=$(date +"%Y.%m.%d-%H:%M:%S")
(
trap 'printf "The script was killed at: %s\n", "$(date)"' SIGINT
echo "$0 $*"
while sleep 2
do
print -n "The time is now "
date
done
) | tee -i somefile
this one works fine!
