A small example to show my problem. The default shell is bash but my scripts use sh. This is the crontab line which I added for root (start.sh has to be run as root):
*/1 * * * * "/home/mydir/start.sh" "/home/mydir" 2>&1 | logger
Contents of start.sh:
#!/bin/sh
nohup "$1"/start_sleeper.sh "$1" &
Contents of start_sleeper.sh:
#!/bin/sh
/usr/bin/python -u "$1"/sleeper.py "$1" >> "$1"/log &
sleeper.py prints a message every 5 seconds, which is appended to log in the same directory. It should keep running in the background while start.sh proceeds and then exits. start.sh indeed proceeds past the nohup "$1"/start_sleeper.sh line, but for some reason it does not exit:
pgrep -lf sleeper.py
22303 /usr/bin/python -u /home/mydir/sleeper.py /home/mydir
pgrep -lf start.sh
22296 /bin/sh -c "/home/mydir/start.sh" "/home/mydir" 2>&1 | logger
When I omit 2>&1 | logger from the crontab line, start.sh exits. Is there any way in this case to pipe output to logger without start.sh remaining open?
My solution was to add /bin/kill -SIGHUP $PPID at the end of start.sh. The cron line spawns two processes: one for start.sh itself and one for piping its output to logger. The SIGHUP breaks the 'connection' between the two, so both exit. It seems hacky, but I'm out of ideas.
Note that $PPID is not standardly available in all shells.
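For reference, a minimal sketch of start.sh with that workaround applied (same layout as above; the kill line is the only addition):
#!/bin/sh
nohup "$1"/start_sleeper.sh "$1" &
# ... anything else start.sh needs to do ...
# Break the pipe between this process and the logger process spawned by cron,
# so that both exit.
/bin/kill -SIGHUP $PPID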
Related
I want to run this command inside a Docker container (ubuntu:18.04 image):
(cd inse/; sh start.sh > log.txt 2>&1 ;) &
but when I run it, it does not log it to log.txt. When I run it this way:
(cd inse/; sh start.sh > log.txt 2>&1 ;)
It blocks the foreground (as it should), and when I kill it I see that log.txt is filled with log output, which means it works correctly.
Why is this behaviour happening?
The contents of start.sh is:
#!/usr/bin/env sh
. venv/bin/activate;
python3 main.py;
UPDATE:
Actually, this command is not the container's entry point; I run it from another shell inside a long-running (testing) container.
Using nohup, no success:
(cd inse/; nohup sh start.sh | tee log.txt;) &
I think this problem is related to using () (a subshell) in sh. It seems the output does not go anywhere when the subshell is run in the background.
UPDATE 2:
Even this does not work:
sh -c "cd inse/; sh start.sh > log.txt 2>&1 &"
UPDATE 3:
Not even this:
sh -c "cd inse/; sh start.sh > log.txt 2>&1;" &
I found what was causing the problem: buffered Python output. I should have used Python's unbuffered output:
python -u blahblah
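For example, in the start.sh above, either of these runs Python unbuffered (PYTHONUNBUFFERED is the environment-variable equivalent of -u):
#!/usr/bin/env sh
. venv/bin/activate;
# unbuffered via the command-line flag
python3 -u main.py;
# or, equivalently, via the environment:
# PYTHONUNBUFFERED=1 python3 main.py;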
Try the commands below, and please check that you have full access to the folder where log.txt is created. Use a CMD/RUN step in the Dockerfile to run start.sh:
CMD /inse/start.sh > log.txt 2>&1 ;
OR
RUN /inse/start.sh > log.txt 2>&1 ;
I know how to redirect stdout to a file:
exec > foo.log
echo test
This will put 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself.
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect, under the above construct, that they are writing to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output be unbuffered, i.e. not even line-buffered, so in this case STDOUT and STDERR could end up on the same line of foo.log; however, that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
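A rough sketch of that two-log-file variant (the file names and timestamp format are only examples):
#!/bin/bash
# Prefix every line read from stdin with a timestamp.
stamp() { while IFS= read -r line; do printf '%s %s\n' "$(date '+%F %T')" "$line"; done; }
# Each stream still reaches the terminal via tee; a timestamped copy goes to its own file.
exec  > >(tee >(stamp >> stdout.log))
exec 2> >(tee >(stamp >> stderr.log) >&2)
echo "foo"
echo "bar" >&2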
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note that the use of $* is not necessarily safe.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named piped for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh $0 $* >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
Using the accepted answer, my script kept returning exceptionally early (right after exec > >(tee ...)), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another solution/workaround to the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the output from the script go through the pipe into the background tee process, which logs everything to disk and to the script's original stdout.
Note that exec &> redirects both stdout and stderr; we could redirect them separately if we like, or change to exec > if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a pipe to a command and allows you to communicate with it through file descriptors.
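For instance, a minimal coproc sketch (bash 4+; the LOGGER name and coproc.log file are arbitrary):
#!/bin/bash
exec 3>&1                          # keep a copy of the original stdout for tee
coproc LOGGER { tee -a coproc.log >&3; }
exec >&"${LOGGER[1]}" 2>&1         # route the script's stdout/stderr into the coprocess
echo "this goes to the terminal and to coproc.log"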
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
    # copy (append) stdout and stderr to log file if TEE is unset or true
    if [[ -z $TEE || "$TEE" == true ]]; then
        echo '-------------------------------------------' >> log.txt
        echo '***' $(date) $0 "$@" >> log.txt
        TEE=false $0 "$@" 2>&1 | tee --append log.txt
        exit $?
    fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # don't tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file name instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations I have not come across yet.
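For instance, a sketch of the variant where TEE holds the log file name (illustrative only; here an unset or empty TEE means "don't tee"):
check_tee_output()
{
    # If TEE names a log file, re-run ourselves with output copied there.
    if [[ -n $TEE ]]; then
        logfile=$TEE
        TEE= "$0" "$@" 2>&1 | tee --append "$logfile"
        exit $?
    fi
}
check_tee_output "$@"
Invoked as TEE=/tmp/run.log your_script.sh args; plain your_script.sh args runs without logging.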
Neither of these is a perfect solution, but here are a couple of things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
# start the reader (tee) before redirecting: opening a fifo for writing blocks until it has a reader
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).
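If the leftover pipe is a concern, one possible safeguard (a sketch) is to remove it from an EXIT trap rather than at the end of the script:
PIPE=tmp.fifo
mkfifo $PIPE
trap 'rm -f "$PIPE"' EXIT   # remove the fifo even if the script exits early
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script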
I know exec is for executing a program in the current process, as quoted below:
exec replaces the current program in the current process, without
forking a new process. It is not something you would use in every
script you write, but it comes in handy on occasion.
I'm looking at a bash script with a line I can't quite understand.
#!/bin/bash
LOG="log.txt"
exec &> >(tee -a "$LOG")
echo Logging output to "$LOG"
Here, exec doesn't have any program name to run. What does it mean? It seems to be capturing the execution output to a log file. I would understand if it were exec program |& tee log.txt, but here I cannot understand exec &> >(tee -a log.txt). Why another > after &>?
What's the meaning of the line? (I know the -a option is for appending and &> redirects stderr as well.)
EDIT: After I selected the solution, I found that exec &> >(tee -a "$LOG") works only when the shell is bash (not sh), so I modified the initial #!/bin/sh to #!/bin/bash. But exec &>> "$LOG" works for both bash and sh.
From man bash:
exec [-cl] [-a name] [command [arguments]]
If command is not specified, any redirections take effect in the
current shell, [...]
And the rest:
&> # redirects stdout and stderr
>(cmd) # redirects to a process
See process substitution.
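A quick illustration of both pieces (the file names in the diff example are hypothetical):
# exec with only redirections changes the current shell's file descriptors:
exec &> >(tee -a log.txt)        # from here on, stdout and stderr also land in log.txt
# process substitution on its own: >(cmd) and <(cmd) expand to /dev/fd/N paths
echo hello > >(tr a-z A-Z)       # prints HELLO
diff <(sort file1) <(sort file2) # compare two files after sorting them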
Here is the piece of code from my shell script that is causing the problem:
LOG_FILE="/home/sample.log"
PID_FILE="/home/sample.pid"
sudo -u user1 trinidad -e production > "$LOG_FILE" 2>&1 & echo $! > "$PID_FILE"
PARENT_PID=`cat "$PID_FILE"`
pgrep -P "$PARENT_PID" > "$PID_FILE"
But here the last command does not write anything to PID_FILE. So for debugging purposes I tried echo $PARENT_PID, and it correctly prints output like 1234.
Also, in the shell script, if I do pgrep -P 1234 it prints the child process correctly, but if I do pgrep -P $PARENT_PID it prints nothing.
You are writing the PID to a file and then reading the file back in. That is merely wasteful rather than an explanation of your problem, but I would refactor to:
LOG_FILE="/home/sample.log"
PID_FILE="/home/sample.pid"
sudo -u user1 trinidad -e production > "$LOG_FILE" 2>&1 &
PARENT_PID=$!
pgrep -P "$PARENT_PID" > "$PID_FILE"
I'm guessing your actual problem is that the sudo process doesn't spawn any children. The action of pgrep -P is to print processes which are children of the PID you specify; if your process doesn't spawn any children, it won't print any.
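One quick way to check that guess (a sketch; ps --ppid is a GNU procps option, and pstree may not be installed everywhere):
# List any children of the recorded PID; empty output would support the guess above
ps -o pid,ppid,cmd --ppid "$PARENT_PID"
# or show the whole subtree, where pstree is available
pstree -p "$PARENT_PID"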
How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command ($! expands to its PID).
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make the redirection temporary by doing it in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this: the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script if the script itself manipulates file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
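For example (a small sketch):
sleep 100 &
pid=$!
disown          # remove the job from the shell's job table
kill "$pid"     # no "Terminated" message is reported for a disowned job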
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, there is no job-end message. This works for me in bash scripts as well, also for killed background processes.
set +m disables job control (see help set) for the current shell. So if you enter your command in a subshell (as done here with the parentheses), you will not affect the job control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated or evaluate its return code.
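One way to get the PID back out of the subshell (a sketch; note the redirection on sleep, without which the command substitution would wait for it to finish):
pid=$(set +m; sleep 30 >/dev/null 2>&1 & echo $!)
# ... later ...
kill "$pid" 2>/dev/null   # no job-end message: the job belonged to the (exited) subshell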
This also works for killall (for those who prefer it):
killall -s SIGINT yourprogram
suppresses the message. I was running mpg123 in background mode; it could only be killed silently by sending Ctrl-C (SIGINT) instead of the default SIGTERM.
disown did exactly the right thing for me. The exec 3>&2 approach is risky for a lot of reasons, and set +bm didn't seem to work inside a script, only at the command prompt.
I had success with adding jobs 2>&1 >/dev/null to the script. I'm not certain it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function call suppresses the termination output:
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage: you can use any signal. For example:
{ kill -9 $PID; } 2>/dev/null