Linux (Gentoo) and Linux (Red Hat on the AWS free tier)
I am a member of the pcap group and can run tcpdump as a non-root user.
I am trying to run a script that runs tcpdump in the background and sends the output to a text file, temp.txt. My script will create a file called temp.txt, but /usr/sbin/tcpdump -tttt will not write to it.
I can run the command without nohup:
/usr/sbin/tcpdump -c 10 -tttt > `pwd`/temp.txt
Why won't it work with nohup? The following is my script:
#!/bin/bash
#tpd-txt.sh
nohup /usr/sbin/tcpdump -c 10 -tttt > `pwd`/temp.txt > /dev/null 2>&1 &
Try
nohup /usr/sbin/tcpdump -c 10 -tttt 2>&1 >./temp.txt &
I am assuming you want to redirect standard error to standard output so it can be captured in the logs. The reason your version fails is that it redirects stdout twice; only the last redirection (> /dev/null) takes effect, so temp.txt is created but never written to.
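For reference, a corrected version of the whole script might look like this (a minimal sketch; it discards tcpdump's status messages rather than capturing them, which is an assumption about what you want):
#!/bin/bash
#tpd-txt.sh
# stdout (the decoded packets) goes to temp.txt;
# stderr (tcpdump's status messages) is discarded
nohup /usr/sbin/tcpdump -c 10 -tttt > "$(pwd)/temp.txt" 2> /dev/null &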
Below is a quick reference guide for output redirection in bash.
1>filename
# Redirect stdout to file "filename."
1>>filename
# Redirect and append stdout to file "filename."
2>filename
# Redirect stderr to file "filename."
2>>filename
# Redirect and append stderr to file "filename."
&>filename
# Redirect both stdout and stderr to file "filename."
2>&1
# Redirects stderr to stdout.
# Error messages get sent to the same place as standard output.
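A quick demonstration of these operators (a minimal sketch; the file names are arbitrary, and /nonexistent is used only to force an error message):
# stdout to one file, stderr to another
ls /etc /nonexistent 1> out.txt 2> err.txt
# both streams to the same file
ls /etc /nonexistent &> both.txt
# stderr to wherever stdout points, here appended to both.txt
ls /etc /nonexistent >> both.txt 2>&1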
Related
I want to add a command to my bash script that directs all stderr and stdout to specific files. From this and many other sources, I know that from the command line I would use:
/path/to/script.sh >> log_file 2>> err_file
However, I want something inside my script, something akin to these slurm flags:
#!/bin/bash
#SBATCH -o slurm.stdout.txt # Standard output log
#SBATCH -e slurm.stderr.txt # Standard error log
# ... rest of the script ...
Is there a way to redirect output from within the script, or do I need to use >> log_file 2>> err_file every time I call the script? Thanks
You can use this:
exec >> file
exec 2>&1
at the start of your bash script. This will append both stdout and stderr to your file.
You can use this at the start of your bash script:
# Redirected Output
exec > log_file 2> err_file
If the file already exists, it is truncated to zero size. If you prefer to append, use this:
# Appending Redirected Output
exec >> log_file 2>> err_file
If you want to redirect both stdout and stderr to the same file, then you can use:
# Redirected Output
exec &> log_file
# This is semantically equivalent to
exec > log_file 2>&1
If you prefer to append, use this:
# Appending Redirected Output
exec >> log_file 2>&1
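Put together in a script, that looks like this (a minimal sketch; log_file and err_file are the placeholder names used above):
#!/bin/bash
# from here on, stdout appends to log_file and stderr appends to err_file
exec >> log_file 2>> err_file

echo "this line goes to log_file"
ls /nonexistent    # the error message goes to err_file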
#SBATCH --output=serial_test_%j.log # Standard output and error log
This will send all output, i.e. stdout and stderr, to a single log file called serial_test_<JOBID>.log.
Ref: https://help.rc.ufl.edu/doc/Sample_SLURM_Scripts
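For example, a minimal batch script using that directive might look like the following (a sketch only; the job name and commands are placeholders, not taken from the linked page):
#!/bin/bash
#SBATCH --job-name=serial_test       # hypothetical job name
#SBATCH --output=serial_test_%j.log  # stdout and stderr; %j expands to the job ID

date        # everything printed below lands in the log file
hostname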
If I set -x in my bash session (v4.1.2(2) on CentOS 6.10), I get:
$ ls /root
+ ls --color=auto /root
ls: cannot open directory /root: Permission denied
Great, it echoes the command I ran and prints it to the terminal. This is expected. Now if I redirect both stdout and stderr to another file:
$ ls /root &> stuff.txt
+ ls --color=auto /root
It still prints the command to the terminal.
QUESTION
Where is set -x having bash print to if it isn't stderr or stdout?
The set -x command prints tracing information to stderr.
When you run this command...
ls /root &> stuff.txt
You're only redirecting stdout and stderr for the ls command. You're not changing either for your current shell, which is where you have run set -x.
As Mad Physicist points out, the technical answer is "it logs to BASH_XTRACEFD", which defaults to stderr. You can redirect trace logging for the current shell to another file by doing something like:
# open a new file descriptor for logging
exec 4> trace.log
# redirect trace logs to fd 4
BASH_XTRACEFD=4
# enable tracing
set -x
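As a complete script, that might look like this (a minimal sketch; trace.log and the echo are arbitrary):
#!/bin/bash
exec 4> trace.log    # open fd 4 backed by the trace file
BASH_XTRACEFD=4      # bash writes xtrace output to fd 4
set -x
echo "hello"         # "hello" still appears on the terminal;
                     # the "+ echo hello" trace line lands in trace.log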
When you execute a command, you can redirect its standard output (known as /dev/stdout) directly to a file. If the command generates error output (generally sent to /dev/stderr), you can also redirect it to a file:
$ command > /path/to/output.txt 2> /path/to/error.txt
When you execute the command set -x, you ask bash to generate a trace of the commands being executed. It does this by sending messages to /dev/stderr. In contrast to a normal command, you cannot easily redirect this trace the same way, because bash executes the script and generates the trace on /dev/stderr at the same time. So if you would like to catch the trace, you have to redirect the error output of bash itself. This can be done with the command
exec 2> /path/to/trace.txt
Note: this will also capture the error output of every command executed in the script.
Examples:
#!/usr/bin/env bash
set -x
command
This sends all output and error output to the terminal.
#!/usr/bin/env bash
set -x
command 2> /path/to/command.err
This sends the output of command and the trace of bash to the terminal, but catches the error output of command in a file.
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace.err
command 2> /path/to/command.err
This sends the output of command to the terminal, the error output of command to a file, and the trace of the script to /path/to/trace.err.
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace_and_command.err
command
This sends the output of command to the terminal, and both the trace and the error output of command to a file.
I set up a cron job that runs a script; this script runs a command which renews Let's Encrypt certificates.
#!/bin/bash
/usr/local/sbin/certbot-auto renew --renew-hook "service nginx reload" -q >> /var/log/certbot-renew.log | mail -s "CERTBOT Renewals" test@test.com < /var/log/certbot-renew.log
exit 0
This produced an email every time the cron ran, but what I want is an email only when there is an error or a renewal. I've read that if I use &> it will also write errors. Will this work if I replace >> with &>, or should I be using 2>&1 to capture both stdout and stderr?
In this command
command >>file 2>&1 | other command
the output is redirected to the file by >>, so nothing is left to reach the pipe. To send the output both to a file and through the pipe, tee can duplicate it:
command 2>&1 | tee -a file | other command
Otherwise, some shells accept &>> to redirect stdout and stderr to a file in append mode.
The following command does the same; the order is important (fd 1 is redirected to the file, then fd 2 to fd 1):
command >>file 2>&1
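Applied to the renewal job above, one way to get mail only when something actually happened is to capture this run's output separately and test whether it is empty (a sketch, reusing the paths and address from the question; the mktemp-based capture is my addition):
#!/bin/bash
LOG=/var/log/certbot-renew.log
RUN_OUT=$(mktemp)
# with -q, certbot-auto prints only when a renewal happens or something fails
/usr/local/sbin/certbot-auto renew --renew-hook "service nginx reload" -q > "$RUN_OUT" 2>&1
if [ -s "$RUN_OUT" ]; then                 # non-empty output: a renewal or an error
    cat "$RUN_OUT" >> "$LOG"               # keep the permanent log
    mail -s "CERTBOT Renewals" test@test.com < "$RUN_OUT"
fi
rm -f "$RUN_OUT"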
Our shell script contains the header
#!/bin/bash -x
that causes the commands to also be listed. Instead of having to type
$ ./script.sh &> log.txt
I would like to add a command to this script that will log all following output (also) to a log file. How is this possible?
You can place this line at the start of your script:
# redirect stdout/stderr to a file
exec &> log.txt
EDIT: As per comments below:
#!/bin/bash -x
# redirect stdout/stderr to a file and still show them on terminal
exec &> >(tee log.txt; exit)
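So a self-logging script could start like this (a minimal sketch combining the -x shebang from the question with the exec line above; log.txt is arbitrary):
#!/bin/bash -x
# mirror everything this script prints to log.txt while keeping it on the terminal
exec &> >(tee log.txt)

echo "shown on the terminal and written to log.txt"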
I'm trying to redirect stdout to a log file within the script using the following code:
LOGFILE=logfile.txt
exec 2> $LOGFILE
But logfile.txt is empty. Could anyone give me a hint?
The redirect notation 2> means to redirect fd 2, which is stderr, not stdout.
If you want to use stdout, write
exec >$LOGFILE
If you want both
exec >$LOGFILE 2>&1
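A minimal working version of the script would then be (a sketch; logfile.txt is the name from the question):
#!/bin/bash
LOGFILE=logfile.txt
exec > "$LOGFILE" 2>&1   # stdout and stderr of everything below go to logfile.txt

echo "this line ends up in logfile.txt"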