redirect all output in a bash script when using set -x - bash

I have a bash script that has set -x in it. Is it possible to redirect the debug prints of this script and all its output to a file? Ideally I would like to do something like this:
#!/bin/bash
set -x
(some magic command here...) > /tmp/mylog
echo "test"
and get the
+ echo test
test
output in /tmp/mylog, not in stdout.

This is what I just googled, and I remember using it myself some time ago...
Use exec to redirect both standard output and standard error of all commands in a script:
#!/bin/bash
logfile=$$.log
exec > "$logfile" 2>&1
For more redirection magic check out Advanced Bash Scripting Guide - I/O Redirection.
If you also want to see the output and debug on the terminal in addition to in the log file, see redirect COPY of stdout to log file from within bash script itself.
If you want to handle the destination of the set -x trace output independently of normal STDOUT and STDERR, see bash storing the output of set -x to log file.
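Applied to the script from the question, a minimal sketch (the /tmp/mylog path comes from the question itself):
#!/bin/bash
exec > /tmp/mylog 2>&1   # stdout and stderr (where the -x trace goes) both land in /tmp/mylog
set -x
echo "test"
Running this leaves nothing on the terminal; /tmp/mylog then contains the + echo test trace line followed by test.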

The -x output goes to stderr, so to log it, do:
set -x
exec 2>/tmp/mylog
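A minimal sketch of the resulting split (file name from the answer above):
#!/bin/bash
set -x
exec 2>/tmp/mylog   # from here on, the + trace lines go to /tmp/mylog
echo "test"         # "test" itself still appears on the terminal
Be aware that ordinary stderr output from other commands is diverted to the file as well, since the -x trace shares the stderr stream.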

To redirect stderr and stdout, appending to the file:
exec &>> "$LOG_FILE_NAME"
To overwrite the file instead:
exec &> "$LOG_FILE_NAME"

In my case, the script was being called multiple times from elsewhere, and I wasn't seeing everything, so I did an append instead, and it worked:
exec 1>>FILENAME 2>&1
set -x
To avoid confusion, be sure to delete FILENAME before each run.

Related

How can I capture the raw command that a shell script is running?

As an example, I am trying to capture the raw commands that are output by the following script:
https://github.com/adampointer/go-deribit/blob/master/scripts/generate-models.sh
I have tried following a previous answer:
BASH: echoing the last command run
but the output I am getting is as follows:
last command is gojson -forcefloats -name="${struct}" -tags=json,mapstructure -pkg=${p} >> models/${p}/${name%.*}_request.go
What I would like to do is capture the raw command, in other words have variables such as ${struct}, ${p} and ${p}/${name%.*} replaced by the actual values that were used.
How do I do this?
At the top of the script, after the hashbang #!/usr/bin/env bash or #!/bin/bash (if there is one), add set -x.
set -x: print commands and their arguments as they are executed.
Run the script in debug mode, which will trace all the commands in the script: https://stackoverflow.com/a/10107170/988525.
You can do that without editing the script by running "bash -x generate-models.sh" (note that -x must come before the script name; otherwise it is passed as an argument to the script rather than to bash).
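A tiny self-contained sketch of what the trace looks like (the variable value is illustrative, not from the real script):
#!/usr/bin/env bash
set -x
struct="AccountSummary"    # illustrative value
echo "name=${struct}"
# the trace on stderr shows the expanded values:
# + struct=AccountSummary
# + echo name=AccountSummary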

linux - bash: pipe _everything_ to a logfile

In an interactive bash script I use
exec > >(tee -ia logfile.log)
exec 2>&1
to write the script's output to a logfile. However, if I ask the user to input something, this is not written to the file:
read UserInput
Also, I issue commands with $UserInput as a parameter. These commands are also not written to the logfile.
The logfile should contain everything my script does, i.e. what the user entered interactively and also the resulting commands along with their output.
Of course I could use set -x and/or echo "user input: "$UserInput, but this would also be sent to the screen. I don't want to see anything on the screen except what my script or the commands echo.
How can this be done?
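One possible sketch, not from the original thread: keep the tee setup, but open a separate file descriptor that writes only to the log, so the user's input can be recorded without echoing it back to the screen:
#!/bin/bash
exec 3>>logfile.log                  # fd 3 appends to the log only
exec > >(tee -ia logfile.log) 2>&1   # stdout/stderr still go to screen + log

read -r -p "Enter a value: " UserInput
printf 'user input: %s\n' "$UserInput" >&3   # recorded in the log, not shown on screen
Since writes through fd 3 and through the tee process come from different processes, lines may occasionally interleave out of order in the log.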

Unix output redirection

I have a script which prompts user to select options like 'y' or 'n'.
If 'y' is selected, the script proceeds with further execution and if 'n' is selected then it stops.
I want the output of this script to be redirected to a log file, so I used the command below:
./script stop >> script_RUN.log 2>&1
The problem is, the script starts running but does not show the prompt asking for the 'y' or 'n' options; instead, the prompt is written to script_RUN.log.
How can I make the script prompt the user for options and redirect the rest of the execution to script_RUN.log?
You can try using the tee command instead.
./script stop | tee script_RUN.log
NOTE:
Only the output of the program will be saved.
EDIT:
If you don't want to see the output on the console at all, just redirect it to /dev/null.
for example:
./script stop | tee script_RUN.log > /dev/null
The above line will write the output to the log file but will NOT print it to the console.
This works as it has to, really: you are redirecting stdout and stderr output from the very start. Instead, you should redirect output inside the script, after the prompt. I think this would be helpful for you:
redirect COPY of stdout to log file from within bash script itself
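A minimal sketch of that approach (the log file name comes from the question; everything after the prompt goes to the log):
#!/bin/bash
read -r -p "Proceed? (y/n) " answer   # prompt is shown on the terminal
[ "$answer" = "y" ] || exit 0
exec >> script_RUN.log 2>&1           # from here on, everything goes to the log
echo "script is running..."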

shell script: write stderr & stdout to file

I know this has been asked many times, but I can't find a suitable answer for my case.
I put a backup script using rsync into cron and would like to see all output, errors or not, from all the script commands. I must set up the redirection inside the script itself, and I do not want to see output in my shell.
I have been trying with no success. Below is part of the script.
#!/bin/bash
.....
BKLOG=/mnt/backup_error_$now.txt
# Log everything to log file
# something like
exec 2>&1 | tee $BKLOG
# OR
exec &> $BKLOG
I have been adding, at the beginning of the script, all kinds of exec | tee $BKLOG variants, with &> and 2>&1 added at various parts of the command line, but all failed. I either get an empty log file or an incomplete one. I need to see in the log file what rsync has done, and the errors if the script failed before syncing.
Thank you for your help. My shell is zsh, so any zsh solution is welcome.
To redirect all stdout/stderr to a file, place these lines at the top of your script:
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"

Is there a way in a shell script to figure out where its output is redirected?

We have scripts of the following nature (in cron):
someScript.sh > /tmp/cronlog/somescript.$(date +%Y%m%d).log 2>&1
Now, is there a way by which, within someScript.sh, I can figure out what file the output has gone into?
The script sends an email with a summary. Within that email, I would also like to mention that the details can be found in such-and-such output file.
I am aware of the if [ -t 1 ] construct to detect whether stdout is a terminal, but how do I get the output file name?
Note that I want this to be generic, so that someone can change the output file in cron and the script does not need to be modified.
The simplest thing I could think of is:
readlink -f /proc/$$/fd/1
$$ is the PID of the script (inside the script). On most unix systems, /proc/[pid] is the pseudo-directory containing info for process [pid].
/proc/[pid]/fd is a directory containing a list of symlinks for the open file-descriptors of the process. fd/0 is input, fd/1 is the output of the script, etc.
readlink then gives you the target file, or the tty if you don't redirect the output.
Of course, if you want to display the result, you have to send it somewhere other than standard output, or it will be redirected! To debug, use standard error (fd 2).
Various invocations give these results on my box (script.sh just calls readlink -f /proc/$$/fd/1 >&2):
# ./script.sh
/dev/pts/0
# ./script.sh > /var/tmp/foo
/var/tmp/foo
# ./script.sh | more
/proc/12132/fd/pipe:[916212]
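As a sketch of how this might be used inside the script for the email text (Linux-specific, since it relies on /proc):
#!/bin/bash
# capture where stdout currently points, before any further redirection
logdest=$(readlink -f /proc/$$/fd/1)
echo "details can be found in $logdest" >&2   # report on stderr so it is not swallowed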
Rather than trying to find a hack (a platform-dependent one at that), it's better to take a slightly different approach here.
Set your cron job like this:
someScript.sh /tmp/cronlog/somescript.$(date +%Y%m%d).log
i.e. without any > or 2>&1 (stdout/stderr stream redirections); just pass an argument with the desired logfile name.
Now inside someScript.sh redirect streams to your log file like this:
LOGFILE=$1
exec &> "${LOGFILE}"
And finally, you can message your clients that:
"output details could be found in ${LOGFILE}"
