I'm attempting to write a bash script that uses nohup and passes errors to rsyslog. I've tried the command below with different variations of the LOG variable (also below), but I can't get the output passed to anything but a plain text file. I can't get it to pipe.
nohup imageprocessor.sh > "$LOG" &
Is it possible to pipe nohup output, or do I need a different command?
A couple of variations of LOG that I have tried:
LOG="|/usr/bin/logger -t workspaceworker -p LOCAL5.info &2"
or
LOG="|logtosyslog.sh"
or
LOG="logtosyslog.sh"
A way in bash to redirect output to syslog is:
exec > >(logger -t myscript)
stdout is then sent to the logger command. For stderr:
exec 2> >(logger -t myscript)
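Putting the two together in a runnable sketch (the tag myscript is arbitrary, and you could add the question's -p local5.info to choose a facility and level):
#!/usr/bin/env bash
# route this script's stdout and stderr to syslog, tagged for filtering
exec > >(logger -t myscript)
exec 2> >(logger -t myscript)
echo "this line ends up in syslog via the logger process"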
Not directly. nohup will detach the child process, so piping the output of the nohup command isn't helpful. This is what you want:
nohup sh -c 'imageprocessor.sh | logger'
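Applied to the question, a sketch using the tag and priority from the question's LOG variable (lowercased, which is the documented form for logger's -p); 2>&1 folds stderr into the pipe so errors reach rsyslog too:
nohup sh -c 'imageprocessor.sh 2>&1 | logger -t workspaceworker -p local5.info' &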
Related
If I run set -x in my bash session (v4.1.2(2), CentOS 6.10), I get:
$ ls /root
+ ls --color=auto /root
ls: cannot open directory /root: Permission denied
Great, it echoes the command I ran and prints it to the terminal. This is expected. Now I redirect both stdout and stderr to another file:
$ ls /root &> stuff.txt
+ ls --color=auto /root
It still prints the command to the terminal.
QUESTION
Where is set -x having bash print to if it isn't stderr or stdout?
The set -x command prints tracing information to stderr.
When you run this command...
ls /root &> stuff.txt
You're only redirecting stdout and stderr for the ls command. You're not changing either for your current shell, which is where you have run set -x.
As Mad Physicist points out, the technical answer is "it logs to BASH_XTRACEFD", which defaults to stderr. You can redirect trace logging for the current shell to another file by doing something like:
# open a new file descriptor for logging
exec 4> trace.log
# redirect trace logs to fd 4
BASH_XTRACEFD=4
# enable tracing
set -x
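As a self-contained script you can run to see the effect (trace.log is an arbitrary file name):
#!/usr/bin/env bash
exec 4> trace.log   # fd 4 backs the trace
BASH_XTRACEFD=4
set -x
ls /tmp             # runs normally; '+ ls /tmp' lands in trace.log, not on the terminal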
When you execute a command, you can redirect its standard output (known as /dev/stdout) directly to a file. If the command generates error output (generally sent to /dev/stderr), you can redirect that to a file as well:
$ command > /path/to/output.txt 2> /path/to/error.txt
When you execute the command set -x, you ask bash to generate a trace of the commands being executed. It does this by sending messages to /dev/stderr. Unlike a normal command's output, you cannot easily redirect this trace per command, because it is bash itself that writes the trace to /dev/stderr while executing the script. So if you would like to catch the trace, you have to redirect the error output of bash directly. This can be done with the command
exec 2> /path/to/trace.txt
Note: the trace file will then also contain all the error output of any command executed in the script.
Examples:
#!/usr/bin/env bash
set -x
command
This sends all output and error output to the terminal.
#!/usr/bin/env bash
set -x
command 2> /path/to/command.err
This sends the output of command and the trace of bash to the terminal, but catches the error output of command in a file.
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace.err
command 2> /path/to/command.err
This sends the output of command to the terminal, the error output of command to a file, and the trace of the script to /path/to/trace.err.
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace_and_command.err
command
This sends the output of command to the terminal, and both the trace of the script and the error output of command to a single file.
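If you want the trace and the commands' own error output in separate files, the BASH_XTRACEFD variable from the earlier answer can be combined with these redirections; a sketch with arbitrary file names:
#!/usr/bin/env bash
exec 4> /path/to/trace.err      # dedicated descriptor for the trace
BASH_XTRACEFD=4
set -x
command 2> /path/to/command.err # command's own errors, without the trace mixed in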
I've been using the command below to have nohup write to nohup.out while tail also prints the output on the terminal.
nohup train.py & tail -f nohup.out
However, I need nohup to use different file names.
When I try
nohup python train.py & tail -F vanila_v1.out
I'm getting following error message.
tail: cannot open 'vanila_v1.out' for reading: No such file or directory
nohup: ignoring input and appending output to 'nohup.out'
I also tried
nohup python train.py & tail -F nohup.out > vanila_v1.txt
Then it doesn't write any output to stdout.
How do I make nohup write to a file other than nohup.out? I don't mind writing two different files simultaneously, but to keep track of different processes I need the names to be different.
Thanks.
You need to redirect STDOUT and STDERR for the nohup'd command, like:
$ nohup python train.py > vanila_v1.out 2>&1 & tail -F vanila_v1.out
At this point, the process will go into the background and you can use tail -f vanila_v1.out. That's one way to do it.
A little more information about STDOUT and STDERR redirection is available here. Here is another question that uses the tee command rather than > to achieve the same in one go.
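If you want the output live on the terminal and in the file in one go, a sketch of that tee variant (note that tee itself is not protected by nohup here, so this suits a session you keep open):
nohup python train.py 2>&1 | tee vanila_v1.out &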
I know exec is for executing a program in the current process, as quoted from here:
exec replaces the current program in the current process, without
forking a new process. It is not something you would use in every
script you write, but it comes in handy on occasion.
I'm looking at a bash script with one line I can't understand exactly.
#!/bin/bash
LOG="log.txt"
exec &> >(tee -a "$LOG")
echo Logging output to "$LOG"
Here, exec doesn't have any program name to run. What does it mean? It seems to be capturing the script's output into a log file. I would understand exec program |& tee log.txt, but here I cannot understand exec &> >(tee -a log.txt). Why is there another > after &>?
What's the meaning of the line? (I know -a option is for appending and &> is for redirecting including stderr)
EDIT: after I selected the solution, I found that exec &> >(tee -a "$LOG") works only when the shell is bash (not sh), so I modified the initial #!/bin/sh to #!/bin/bash. But exec &>> "$LOG" works for both bash and sh.
From man bash:
exec [-cl] [-a name] [command [arguments]]
If command is not specified, any redirections take effect in the
current shell, [...]
And the rest:
&> # redirects stdout and stderr
>(cmd) # redirects to a process
See process substitution.
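You can watch process substitution work on its own (tr here is just a stand-in for any command reading stdin):
$ echo hello > >(tr a-z A-Z)
HELLO
So exec &> >(tee -a "$LOG") means: from now on, send this shell's stdout and stderr into a tee process that appends everything to "$LOG" while still printing it.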
Our shell script contains the header
#!/bin/bash -x
that causes the commands to also be listed. Instead of having to type
$ ./script.sh &> log.txt
I would like to add a command to this script that will log all following output (also) to a log file. How is this possible?
You can place this line at the start of your script:
# redirect stdout/stderr to a file
exec &> log.txt
EDIT: As per comments below:
#!/bin/bash -x
# redirect stdout/stderr to a file and still show them on terminal
exec &> >(tee log.txt; exit)
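If the script can run repeatedly and you want to keep earlier logs, a minor variation using tee -a to append instead of truncating:
exec &> >(tee -a log.txt)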
I run my_program via a bash wrapper script, and use exec to prevent forking a separate process:
#! /bin/bash
exec my_program >> /tmp/out.log 2>&1
Now I would like to duplicate all output into two different files, but still prevent forking, so I do not want to use a pipe and tee like this:
#! /bin/bash
exec my_program 2>&1 | tee -a /tmp/out.log >> /tmp/out2.log
How to do that with bash?
The reasons for avoiding a fork are to make sure that:
all signals sent to the bash script also reach my_program (including non-trappable signals);
waitpid(3) on the bash script can never return before my_program has also terminated.
I think the best you can do is to redirect standard output and error to tee via a process substitution:
exec > >( tee -a /tmp/out.log >> /tmp/out2.log) 2>&1
then exec to replace the bash script with your program (which will keep the same open file handles to standard output).
exec my_program
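Putting the two steps together, the whole wrapper would look like this (file names taken from the question):
#! /bin/bash
# duplicate everything into both logs via a tee started by the shell
exec > >(tee -a /tmp/out.log >> /tmp/out2.log) 2>&1
# replace the shell with my_program; it inherits the redirected fds,
# keeps the script's PID, and so receives all signals directly
exec my_program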