I was trying to learn how exec and tee work and encountered something I could not understand:
# build the path to log.out in the current directory
log="$(pwd)/log.out"
# start redirect
# 3 holds stdout, 4 holds stderr
# 1 & 2 point to log.out
exec 3>&1 4>&2 &>${log}
# print 'Have a good day everyone!' to both log.out and stdout
echo 'Have a good day everyone!' | tee ${log} 1>&3
echo 'Ciao!'
echo 'Bye!'
# end redirect
exec 1>&3 2>&4 3>&- 4>&-
When I went into log.out file, I got this:
Ciao!
Bye!
day everyone!
I was expecting:
Have a good day everyone!
Ciao!
Bye!
Please help me understand what is going on here and how to resolve this.
Thank you.
If this is a duplicate, please close it and give me the link to the solution.
What's happening here is that while tee is adding content to your file, the existing open file pointer created by exec &>log.out is still back at the beginning of that file. Thus, when you start writing to that file pointer, those writes start at the beginning, despite other contents having been written by tee.
If you want to ensure that content is always added at the end of the file, even if other software has moved that end-of-file location in the meantime, then you should ensure that the O_APPEND flag is used at open time.
To do this, use >> rather than > for your redirection:
exec 3>&1 4>&2 &>>${log}
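For the record, here is a sketch of the full script with append-mode opens. The tee -a is an addition of mine, so that tee also appends rather than truncating the file it shares with the exec descriptors:

log="$(pwd)/log.out"
exec 3>&1 4>&2 &>>"${log}"    # O_APPEND: every write goes to the current end of the file
echo 'Have a good day everyone!' | tee -a "${log}" 1>&3
echo 'Ciao!'
echo 'Bye!'
exec 1>&3 2>&4 3>&- 4>&-      # restore stdout/stderr and close the saved descriptors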
Related
Is there a filename that is assignable to a variable (i.e. not a magic builtin shell token like &1) that will let me redirect to stdout?
What I finally want to do is run something like this in a cron script:
LOG=/tmp/some_file
...
some_command >> $LOG 2>&1
echo "blah" >> $LOG
...
Conveniently, this lets me turn off log noise by redirecting to /dev/null later when I'm sure there is nothing that can fail (or, at least, nothing that I care about!) without rewriting the whole script. Yes, turning off logging isn't precisely best practice -- but once this script works, there is not much that can conceivably go wrong, and trashing the disk with megabytes of log info that nobody wants to read isn't desired.
In case something unexpectedly fails 5 years later, it is still possible to turn on logging again by flipping a switch.
On the other hand, while writing and debugging the script, which involves calling it manually from the shell, it would be extremely nice if it could just dump the output to the console. That way I wouldn't need to tail the logfile manually.
In other words, what I'm looking for is something like /proc/self/fd/0 in bash-talk that I can assign to LOG. As it happens, /proc/self/fd/0 works just fine on my Linux box, but I wonder if there isn't such a thing built into bash already (which would generally be preferable).
Basic solution:
#!/bin/bash
LOG=/dev/null
# uncomment next line for debugging (logging)
# LOG=/tmp/some_file
{
  some_command
  echo "blah"
} | tee 1>$LOG 2>&1
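Note that as written, tee's own stdout is redirected into $LOG, so nothing reaches the console. If the goal is to both see the output and log it, a variant (my reading of the intent, not the original answer) passes the file as tee's argument instead:

{
  some_command
  echo "blah"
} 2>&1 | tee "$LOG"   # output goes to both $LOG and the console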
More evolved:
#!/bin/bash
ENABLE_LOG=0 # 1 to log standard & error outputs
LOG=/tmp/some_file
{
  some_command
  echo "blah"
} | if (( ENABLE_LOG ))
then
  tee 1>$LOG 2>&1
else
  cat >/dev/null  # drain the pipe so the block above is not killed by SIGPIPE
fi
More elegant solution from DevSolar's idea:
#!/bin/bash
# uncomment next line for debugging (logging)
# exec 1> >(tee /tmp/some_file) 2>&1
some_command
echo "blah"
Thanks to the awesome hints by olibre and suvayu, I came up with this (for the record, the version that I'm using now):
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console
exec 2>&1
Just uncomment one of the three, depending on what is desired, and don't worry about anything else, ever again. This either logs all subsequent output to a file, dumps it to the console, or silences it entirely.
No output is duplicated, it works the same for every command (without explicit redirects), no weird stuff, and it's as easy as it gets.
If I have understood your requirement correctly, the following should do what you want:
exec >> $LOG
exec 2>&1
Stdout and stderr of all subsequent commands will be appended to the file $LOG. (Note the order: stdout has to be pointed at the file before 2>&1 duplicates it; in the reverse order, stderr would still go to the terminal.)
Use /dev/stdout
Here's another SO answer that mentions this solution: Difference between stdout and /dev/stdout
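A minimal sketch of how this plugs into the cron-script pattern from the question (assuming a Linux-style /dev/stdout):

LOG=/dev/stdout          # or /tmp/some_file to log, or /dev/null to silence
some_command >> "$LOG" 2>&1
echo "blah" >> "$LOG"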
So I was playing around with the language Octave, and it has this useful command called diary that logs stdout into a file for anything between diary on and diary off:
diary on
a = [4 5, 2 6, 2 1]
a + 1
diary off
The above would save a file called diary in the working directory with the output of a, then a+1. It was super helpful for debugging, especially when looking at large datasets.
I was looking at other scripting languages and wondered if they have equivalents. The best I could come up with was echo hello.dat >> diary.txt for every single line. Does a tool exist that could achieve this functionality for bash? If not, how about Python? It seems like a basic thing, but I don't know how to do it.
If you don't need contents to keep going to the TTY, and want to redirect both stdout and stderr:
exec 3>&1 4>&2 >>diary.txt 2>&1
echo "Everything here goes to diary.txt"
echo "...without having to redirect each line separately"
exec >&3 2>&4
If you do need contents to keep going to the TTY:
exec 3>&1 4>&2 > >(tee -a diary.txt) 2>&1
echo "Everything here goes to diary.txt"
echo "...without having to redirect each line separately"
exec >&3 2>&4
Note that you can't redirect both stdout and stderr to the file without either losing their ordering (i.e. running two separate copies of tee and hoping that they finish flushing in the same order in which you started writing to them) or losing information about which piece of output went to which descriptor.
The above can also be done with a multi-line block with a single redirection, which will do both the setup and the cleanup automatically:
{
  echo "Everything here goes to diary.txt"
  echo "...without having to redirect each line separately"
} >>diary.txt 2>&1
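As an aside on the ordering caveat above, this is roughly what the two-copies-of-tee variant would look like; stdout and stderr keep their identities on the terminal, but their relative order inside diary.txt is no longer guaranteed:

exec 3>&1 4>&2 \
     > >(tee -a diary.txt) \
     2> >(tee -a diary.txt >&2)
echo "a stdout line"      # reaches the terminal's stdout and diary.txt
echo "a stderr line" >&2  # reaches the terminal's stderr and diary.txt
exec >&3 2>&4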
I would like to route a file descriptor to multiple places at the same time. For instance, I would like every command in my script to print stdout to /dev/pts/9 and ./myscript.stdout at the same time.
I'm looking to achieve similar results as piping every command in a script (or a section of a script) into tee, perhaps with file descriptors. I also want the ability to restore default output behavior later in the script.
This code doesn't work, but it's an attempt at expressing my intent. To restore stdout as FD 1 later, I copy it into FD 4.
exec 3>(tee /dev/pts/9 ./myscript.stdout)
exec 4>&1
exec 1>&3
Restore normal output behavior, deleting FDs 3 and 4.
exec 1>&4
exec 4>&-
exec 3>&-
I would like every command in my script to print stdout to /dev/pts/9 and ./myscript.stdout at the same time.
exec 1> >(tee ./myscript.stdout >/dev/pts/9)
The above combines redirection and process substitution. With redirection alone, one can send stdout to a file. For example:
exec 1> filename
However, with bash, filenames can often be replaced with commands. This is called process substitution and it looks like >(some command) or <(some command) depending on whether one wants to write-to or read-from the process. In our case, we want to write to a tee command. Thus:
exec 1> >(some command)
Or, more specifically:
exec 1> >(tee ./myscript.stdout >/dev/pts/9)
Note that we have to maintain the space between the redirection (1>) and the process substitution (>(tee ./myscript.stdout >/dev/pts/9)). Without the space, it would look like we were trying to append to a file whose name starts with a parenthesis, and that would generate a bash error.
For more information on this see the sections entitled "REDIRECTION" and "Process Substitution" in man bash.
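Putting it together with the save-and-restore part of the question (a sketch; fd 4 and the file names are taken from the question's own attempt):

exec 4>&1                                     # keep a copy of the original stdout in fd 4
exec 1> >(tee ./myscript.stdout >/dev/pts/9)  # from here on, stdout goes to both places
echo "this goes to /dev/pts/9 and ./myscript.stdout"
exec 1>&4 4>&-                                # restore stdout and close the spare fd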
#!/bin/bash
random=$$                         # $$ is the script's PID, used here to give the log files unique names
out=out.$random
err=err.$random
dev=$(who -m | awk '{print $2}')  # find the pseudo-terminal we are running on
: >$out                           # create the log files or empty their contents
: >$err
exec 1> >(tee ./$out >/dev/$dev)  # I don't know how this works but it does
exec 2> >(tee ./$err >/dev/$dev)
echo # writing directly to the pts in /dev doesn't look right until sending a blank line
##########################################
echo 'hello'
for i in $(seq 0 1 10); do
  echo $i
done
bad_command
Thanks @John1024! The script above is for anybody else wishing to test this out.
Could somebody please explain the exec lines to me in detail? For instance, why is there a blank space after the arrow in:
exec 1>
?
#!/bin/bash
logfile=$$.log
exec > >(tee "$logfile") 2>&1
echo "test"
The $$ expands to the script's PID, which gives the log file a unique name; using it is optional.
I often have trouble figuring out certain language constructs because they won't register when googling or duckduckgoing them. With a bit of experimenting, it's often simple to figure it out, but I don't get this one.
I often see stuff like 2>&1 or 3>&- in bash scripts. I know this is some kind of redirection. 1 is stdout and 2 is stderr. 3 is probably custom. But what is the minus?
Also, I have a script whose output I want to log, but also want to see on screen. I use exec > >(tee $LOGFILE); exec 2>&1 for that. It works. But sometimes, when I trap Ctrl+C in this script, I cannot type at the prompt anymore; output is hidden after Ctrl+C. Can I use a custom channel and the minus sign to fix this, or is it unrelated?
2>&1 means that stderr is redirected to stdout.
3>&- means that file descriptor 3, opened for writing (the same way as stdout), is closed.
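A tiny demo (the file name out.txt is just for illustration):

exec 3>out.txt        # open fd 3 for writing to out.txt
echo "via fd 3" >&3   # anything written to fd 3 lands in out.txt
exec 3>&-             # close fd 3; further writes to it would fail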
You can see more examples of redirection here
As for question number 3, I think this is a good link.
The 3>&- closes file descriptor number 3 (it has probably been opened before with 3>filename).
The 2>&1 redirects the output of file descriptor 2 (stderr) to the same destination as file descriptor 1 (stdout). Under the hood, this calls the dup2() syscall.
For more information about redirecting file descriptors, please consult the bash man pages (man bash). They are dense but great.
For your script, I would do it like this:
#!/bin/bash
if [[ -z $recursive_call ]]; then
  recursive_call=1
  export recursive_call
  "$0" "$@" | tee filename
  exit
fi
# rest of the script goes here
It loses the exit code from the script, though. There is a way in bash to get it, but I can't remember it right now.
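For the exit code, what's presumably being half-remembered here is bash's PIPESTATUS array:

"$0" "$@" | tee filename
exit "${PIPESTATUS[0]}"   # exit with the script's own status instead of tee's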
I came across a working ksh script [interactive] today, where I saw the statement below:
printf "Enter the release no. : " >&5
I wonder about the use of >&5 when the author could just as well have used nothing, or >&1.
Can someone shed some light on this point?
Thanks in advance
--
Benil
He has probably remapped the file descriptors, or uses file descriptor 5 for something special,
e.g. to only temporarily redirect errors to /dev/null:
# errors produced here go to stderr
...
# now save stderr to fd 5
exec 5>&2
# redirect stderr to /dev/null
exec 2>/dev/null
...
# do stuff whose errors are discarded
...
# restore stderr from fd 5
exec 2>&5
So check the rest of the script to see what it does before that line.
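For reference, a self-contained version of the same pattern, with concrete commands standing in for the elided parts:

ls /nonexistent       # error goes to stderr as usual
exec 5>&2             # save stderr in fd 5
exec 2>/dev/null      # discard errors from here on
ls /nonexistent       # this error message disappears
exec 2>&5 5>&-        # restore stderr and close the spare descriptor
ls /nonexistent       # errors are visible again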