I know this has been asked many times, but I can't find a suitable answer for my case.
I run a backup script that uses rsync from cron, and I would like to capture all output, errors or not, from all of the script's commands. The redirection must be set up inside the script itself, and I do not want to see the output in my shell.
I have been trying without success. Below is part of the script.
#!/bin/bash
.....
BKLOG=/mnt/backup_error_$now.txt
# Log everything to log file
# something like
exec 2>&1 | tee $BKLOG
# OR
exec &> $BKLOG
I have been adding all kinds of exec | tee $BKLOG at the beginning of the script, with &> and 2>&1 added at various points in the command line, but everything failed. I either get an empty log file or an incomplete one. I need to see in the log file what rsync has done, and the error if the script failed before syncing.
Thank you for your help. My shell is zsh, so a zsh solution is also welcome.
To redirect all stdout/stderr to a file, place this line at the top of your script:
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"
I am trying to modify a script someone created for Unix in shell. This script is mostly run on backend servers with no human interaction, but I needed to make another script that lets users input information, so it is just a modification of the old version for user input. The biggest issue I am running into is getting both error logs and echos saved in a log file. The script has a lot of them, and I want them shown on the terminal as well as sent to the specified log file, to be looked at later.
What I have is this:
exec 1> ${LOG} 2>&1
This line pretty much sends everything to the log file. That is all good, but I also have people entering information into the script, and it sends everything to the log file, including the echo needed for the prompt. This line is also at the beginning of the script. Reading more about stderr and stdout, I tried:
exec 2>&1 1>>${LOG}
exec 1 | tee ${LOG}
But I only get this error when running it: "./bash_pam.sh: line 39: exec: 1: not found"
I have gone over sites such as this to solve the issue, but I do not understand why it does not print to both. The way I insert it, it either only sends output to the log location and not to the terminal, or it sends output to the terminal but nothing is preserved in the log.
EDIT: Some of the solutions for this mention that certain fixes will work in bash, but not in /bin/sh.
If you would like all output to be printed to the console while also being written to logfile.txt, you would run your script with this command:
bash your_script.sh 2>&1 | tee -a logfile.txt
Or calling it within the file:
<bash_command> 2>&1 | tee -a logfile.txt
The -a option makes tee append to logfile.txt instead of overwriting it; drop -a if you want the file overwritten on each run.
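If the redirection has to live inside the script itself rather than on the command line, a common bash-only alternative is process substitution; here is a minimal sketch (the LOG path is a placeholder, and this will not work under plain /bin/sh):
#!/bin/bash
LOG=/tmp/script.log
# Duplicate stdout and stderr: everything is shown on the terminal
# and appended to $LOG at the same time.
exec > >(tee -a "$LOG") 2>&1
echo "Enter a value:"        # visible on screen and recorded in the log
read -r value                # stdin is untouched, so prompts still work
echo "You entered: $value"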
I would like to pass parameters to a perl script using positional parameters inside a bash script "tablecheck.sh". I am using an alias "tablecheck" to call "tablecheck.sh".
#!/bin/bash
/scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
The Perl script works fine by itself, but when I do "tablecheck MySQLinstance", $1 stays $1; it won't get replaced by the instance name. So I get the output as follows:
Exit /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
The job exits.
FYI: alias tablecheck='. pathtobashscript/tablecheck.sh'
I have a bunch of aliases in another bash script, hence the . command.
Could anyone help me? I have gone to the 3rd page of Google looking for an answer and tried so many things with no luck.
I am a noob, but maybe it has something to do with it being a background job, or with $1 being in a path... I don't understand why the $1 won't get replaced...
If I copy your exact setup (which, I agree with the other commenters, is somewhat unusual), then I believe I get the same error message:
$ tablecheck foo
[1]+ Exit 127 /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1
In the /tmp/chktables_foo file that it creates there is an additional error message; in my case it is "bash: /scripts/tables.pl: No such file or directory".
I suspect permissions are wrong in your case.
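A few quick checks, assuming the same paths as in the question:
ls -l /scripts/tables.pl     # does the file exist, and is the execute bit set?
head -1 /scripts/tables.pl   # does the shebang point at a valid perl?
perl -c /scripts/tables.pl   # does the script at least compile?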
I have a bash script that has set -x in it. Is it possible to redirect the debug prints of this script and all its output to a file? Ideally I would like to do something like this:
#!/bin/bash
set -x
(some magic command here...) > /tmp/mylog
echo "test"
and get the
+ echo test
test
output in /tmp/mylog, not in stdout.
This is what I've just googled, and I remember using it myself some time ago...
Use exec to redirect both standard output and standard error of all commands in a script:
#!/bin/bash
logfile=$$.log
exec > $logfile 2>&1
For more redirection magic check out Advanced Bash Scripting Guide - I/O Redirection.
If you also want to see the output and debug on the terminal in addition to in the log file, see redirect COPY of stdout to log file from within bash script itself.
If you want to handle the destination of the set -x trace output independently of normal STDOUT and STDERR, see bash storing the output of set -x to log file.
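As a minimal sketch of that last case: bash 4.1 and newer let you point the set -x trace at its own file descriptor via BASH_XTRACEFD, leaving normal stdout/stderr alone (the file name and descriptor number here are just examples):
#!/bin/bash
exec 19>/tmp/mylog.xtrace    # dedicated descriptor for the trace
BASH_XTRACEFD=19             # set -x output now goes to fd 19
set -x
echo "normal output"         # still goes to stdout as usual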
The -x output goes to stderr, so to log it do:
set -x
exec 2>/tmp/mylog
To redirect stderr and stdout, appending to the file:
exec &>> $LOG_FILE_NAME
To overwrite the file instead:
exec &> $LOG_FILE_NAME
In my case, the script was being called multiple times from elsewhere, and I wasn't seeing everything, so I did an append instead, and it worked:
exec 1>>FILENAME 2>&1
set -x
To avoid confusion, be sure to delete FILENAME before each run.
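If deleting the file by hand is easy to forget, here is a small sketch that truncates it once at the top of the script and then appends (FILENAME is a placeholder path):
#!/bin/bash
: > FILENAME              # truncate any old log (creates the file if missing)
exec 1>>FILENAME 2>&1     # append everything from here on
set -x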
I am trying to implement something which my logic says can't be done. But I need your help to understand why it can't be.
Short Version of Question
Is it possible to log stdout+stderr of a script in csh without using file redirection (>& or tee)?
Detailed Explanation of Question
I have a requirement with a csh script (script1) where I am not allowed to use file redirection. (I will give the reason in a while.)
So that means I can't use something like
echo just checking >& logfile
hence I can't use this or tee to create my logfile.
I also have another script (script2), which is a top-level script.
I can either run script1 in standalone mode or through script2.
In either case I need to create a log (stdout+stderr) of script1 in logfile.
There are two possible (but incomplete) options for that:
Write this line in script2:
./script1 >& logfile
But then I can't log script1 in logfile when script1 is run in standalone mode.
Another option is to use file redirections in script1 like this:
echo test starting >> logfile
echo test over
In this case there are two disadvantages:
1) "test over" prints before "test starting", i.e. the order in which the command logs appear is not certain.
2) It's tedious to put >>& after every statement if I intend to cover the whole script.
Now, is there any other way I can get what I need? That is, can I run script1 without file redirection and still log its stdout+stderr in logfile?
You mention csh, so this may not help you. On the other hand, it may motivate you to stop using csh for scripts, a task for which it is notoriously inappropriate. In sh, you can simply do:
#!/bin/sh
exec > logfile 2>&1
echo foo
to write foo (and the output and errors of all subsequent commands) to the logfile.
We have scripts of the following nature (in cron):
someScript.sh > /tmp/cronlog/somescript.$(date +%Y%m%d).log 2>&1
Now, is there a way by which, within someScript.sh, I can figure out what file the output has gone into?
The script sends an email with a summary. At the same time, I would like to mention within the email that details can be found in such-and-such output file.
I am aware of the construct if [ -t 1 ] to detect stdout etc., but how do I get the output file name?
Note that I want this to be generic, so that someone can change the output file in cron and the script does not need to be modified.
The simplest thing I could think of is:
readlink -f /proc/$$/fd/1
$$ is the PID of the script (from inside the script). On most Unix systems, /proc/[pid] is the pseudo-directory containing info for process [pid].
/proc/[pid]/fd is a directory containing a list of symlinks for the open file-descriptors of the process. fd/0 is input, fd/1 is the output of the script, etc.
readlink then gives you the target file, or the tty if you don't redirect the output.
Of course, if you want to display it, you have to display it somewhere other than standard output, or it will be redirected! To debug, try standard error (2).
Various invocations give these results on my box (script.sh just calls readlink -f /proc/$$/fd/1 >&2):
# ./script.sh
/dev/pts/0
# ./script.sh > /var/tmp/foo
/var/tmp/foo
# ./script.sh | more
/proc/12132/fd/pipe:[916212]
Rather than trying to find a hack (and a platform-dependent one at that), it's better to take a slightly different approach here.
Set your cron job like this:
someScript.sh /tmp/cronlog/somescript.$(date +%Y%m%d).log
i.e. without any > or 2>&1 (stdout/stderr stream redirections), just passing an argument with the desired logfile name.
Now inside someScript.sh redirect streams to your log file like this:
LOGFILE=$1
exec &>${LOGFILE}
And finally you can then message your clients that:
"output details could be found in ${LOGFILE}"