So I was playing around with Octave, and it has this useful command called diary that logs stdout to a file for anything between diary on and diary off:
diary on
a = [4 5; 2 6; 2 1]
a + 1
diary off
The above would save a file called diary in the working directory with the output of a, then a+1. It was super helpful for debugging, especially when looking at large datasets.
I was looking at other scripting languages and wondered if they have equivalents. The best I could come up with was echo hello.dat >> diary.txt for every single line. Does a tool exist that could achieve this functionality for bash? If not, how about Python? It seems like a basic thing, but I don't know how to do it.
If you don't need the output to keep going to the TTY, and want to redirect both stdout and stderr:
exec 3>&1 4>&2 >>diary.txt 2>&1
echo "Everything here goes to diary.txt"
echo "...without having to redirect each line separately"
exec >&3 2>&4
If you do need the output to keep going to the TTY:
exec 3>&1 4>&2 > >(tee -a diary.txt) 2>&1
echo "Everything here goes to diary.txt"
echo "...without having to redirect each line separately"
exec >&3 2>&4
Note that you can't send both stdout and stderr to the file without either losing their ordering (i.e. running two separate copies of tee and hoping that they finish flushing in the same order in which you started writing to them) or losing the information about which piece of output went to which descriptor.
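For the record, the ordering-losing variant would look something like this (a sketch with one tee per stream; out.log and err.log are illustrative names, and the relative order of lines between the two files is not guaranteed):
exec 3>&1 4>&2 > >(tee -a out.log) 2> >(tee -a err.log >&2)
echo "this line ends up in out.log"
echo "this line ends up in err.log" >&2
exec >&3 2>&4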
The exec-based redirection above can also be done with a multi-line block and a single redirection, which does both the setup and the cleanup automatically:
{
echo "Everything here goes to diary.txt"
echo "...without having to redirect each line separately"
} >>diary.txt 2>&1
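If you want something closer to Octave's ergonomics from the original question, the same pattern can be wrapped in a pair of functions (a sketch; the names diary_on and diary_off are my own):
diary_on()  { exec 3>&1 4>&2 > >(tee -a diary.txt) 2>&1; }
diary_off() { exec >&3 2>&4 3>&- 4>&-; }
diary_on
echo "logged to diary.txt and still shown on the TTY"
diary_off
Because exec inside a function still affects the calling shell, these behave like Octave's diary on/diary off toggles.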
Is there a filename that is assignable to a variable (i.e. not a magic builtin shell token like &1) that will let me redirect to stdout?
What I finally want to do is run something like this in a cron script:
LOG=/tmp/some_file
...
some_command >> $LOG 2>&1
echo "blah" >> $LOG
...
Conveniently, this lets me turn off log noise by redirecting to /dev/null later, when I'm sure there is nothing that can fail (or, at least, nothing that I care about!), without rewriting the whole script. Yes, turning off logging isn't precisely best practice -- but once this script works, there is not much that can conceivably go wrong, and trashing the disk with megabytes of log info that nobody wants to read isn't desirable.
In case something unexpectedly fails 5 years later, it is still possible to turn on logging again by flipping a switch.
On the other hand, while writing and debugging the script, which involves calling it manually from the shell, it would be extremely nice if it could just dump the output to the console. That way I wouldn't need to tail the logfile manually.
In other words, what I'm looking for is something like /proc/self/fd/0 in bash-talk that I can assign to LOG. As it happens, /proc/self/fd/0 works just fine on my Linux box, but I wonder if there isn't such a thing built into bash already (which would generally be preferable).
Basic solution:
#!/bin/bash
LOG=/dev/null
# uncomment next line for debugging (logging)
# LOG=/tmp/some_file
{
some_command
echo "blah"
} | tee 1>$LOG 2>&1
More evolved:
#!/bin/bash
ENABLE_LOG=0 # 1 to log standard & error outputs
LOG=/tmp/some_file
{
some_command
echo "blah"
} | if (( $ENABLE_LOG ))
then
tee 1>$LOG 2>&1
fi
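One caveat with the version above (my own note, not from the original answer): when ENABLE_LOG is 0, nothing reads from the pipe, so the commands on the left can be killed by SIGPIPE if they produce enough output. A variant that always drains the pipe:
{
some_command
echo "blah"
} | if (( $ENABLE_LOG )); then tee 1>$LOG 2>&1; else cat >/dev/null; fi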
A more elegant solution, based on DevSolar's idea:
#!/bin/bash
# uncomment next line for debugging (logging)
# exec 1> >(tee /tmp/some_file) 2>&1
some_command
echo "blah"
Thanks to the awesome hints by olibre and suvayu, I came up with this (for the record, the version that I'm using now):
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console
exec 2>&1
Just uncomment whichever of the three is desired (as shown, the console variant is active), and don't worry about anything else, ever again. This either logs all subsequent output to a file, dumps it to the console, or silences it entirely.
No output duplicated, works universally the same for every command (without explicit redirects), no weird stuff, and as easy as it gets.
If I have understood your requirement correctly, the following should do what you want:
exec >> $LOG
exec 2>&1
Stdout and stderr of all subsequent commands will be appended to the file $LOG.
Use /dev/stdout
Here's another SO answer that mentions this solution: Difference between stdout and /dev/stdout
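To tie this back to the script in the question: /dev/stdout can simply be assigned to LOG (a sketch; it relies on /dev/stdout existing, which it does on Linux and most modern Unixes):
#!/bin/bash
LOG=/dev/stdout        # dump to console while debugging
# LOG=/tmp/some_file   # log to a file
# LOG=/dev/null        # silence the noise
some_command >> $LOG 2>&1
echo "blah" >> $LOG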
I'm trying to redirect some bash script outputs.
What I would like to do is:
./some_script.sh 2> error.log >> all_output.log 2>&1
I would like to put stderr in one file, and both stderr and stdout in another file.
In addition I want to append at the end of all_output.log (for error.log that doesn't matter).
But I can't get the syntax right; I've been trying lots of things and wasn't able to find the right thing to do.
Thanks for your help! :)
Redirection statements (like > foo or 2> bar or 1>&2) are best read like assignments to file descriptors, executed from left to right. Your code does this:
2> error.log
Means: fd2 = open_for_writing('error.log')
>> all_output.log
Means: fd1 = open_for_appending('all_output.log')
2>&1
Means: fd2 = fd1
From this you can see that the first statement (2> error.log) will have no effect besides maybe creating the (empty) error.log.
What you want to achieve is duplicate one stream into two different targets. That is not done by a mere redirect of anything. For that you need a process which reads one thing and writes it into two different streams. That's best done using tee(1).
Unfortunately, passing streams to other processes is done via pipes, and pipes only carry stdout, not stderr. To achieve your goal you have to swap stderr and stdout first.
The complete resulting call could look like this:
(./some_script.sh 3>&2 2>&1 1>&3 | tee error.log) >> all_output.log 2>&1
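A quick way to convince yourself that this works (my own test; the inline group stands in for ./some_script.sh):
( { echo to-stdout; echo to-stderr >&2; } 3>&2 2>&1 1>&3 | tee error.log ) >> all_output.log 2>&1
cat error.log        # to-stderr
cat all_output.log   # both lines (the order between the two writers is not guaranteed)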
I was trying to learn how exec and tee work and encountered something I could not understand:
# create a log.out file in current directory
log=$(echo $(pwd)/log.out)
# start redirect
# 3 holds stdout, 4 holds stderr
# 1 & 2 points to log.out
exec 3>&1 4>&2 &>${log}
# print 'Have a good day everyone!' to both log.out and stdout
echo 'Have a good day everyone!' | tee ${log} 1>&3
echo 'Ciao!'
echo 'Bye!'
# end redirect
exec 1>&3 2>&4 3>&- 4>&-
When I went into log.out file, I got this:
Ciao!
Bye!
day everyone!
I was expecting:
Have a good day everyone!
Ciao!
Bye!
Please help me understand what is going on here and how to resolve this.
Thank you.
If this is a duplicate, please close it and give me the link to the solution.
What's happening here is that while tee is adding content to your file, the existing open file pointer created by exec &>log.out is still back at the beginning of that file. Thus, when you start writing to that file pointer, those writes start at the beginning, despite other contents having been written by tee.
If you want to ensure that content is always added to the end of the file, even if other software has modified where that end-of-the-file location is, then you should ensure that the O_APPEND flag is used on open.
To do this, use >> rather than > for your redirection:
exec 3>&1 4>&2 &>>${log}
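A tiny standalone demonstration of the difference (my own sketch; this is POSIX file semantics, nothing bash-specific):
rm -f demo.txt
exec 5>demo.txt                               # no O_APPEND: fd 5 keeps its own offset, 0
echo 'Have a good day everyone!' >> demo.txt  # a second writer extends the file
echo 'Bye!' >&5                               # writes at offset 0, clobbering 'Have '
cat demo.txt                                  # prints 'Bye!' then 'a good day everyone!'
exec 5>&-
With exec 5>>demo.txt instead, the 'Bye!' line would land at the end of the file.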
To clarify: by "reporting output" I mean the lines that start with [1]:
$ echo hello world >&2 &
[1] 11714
hello world
[1]+ Done echo hello world 1>&2
That is, I still want hello world to be output.
I did a lot of searching on this, and the solutions I found were:
having it run in a subshell
set +m, which can deal with the Done message only.
suppressing it explicitly: { cmd & } 2>/dev/null, which won't suppress Done, and it suppresses all my stderr too.
But in my situation these don't work well, since I want extra parallelism. The framework should be:
cmd1 &>>log &
cmd2 &>>log &
wait
cat file &
cat file2 >&2 &
wait
If I put things into subshells, the reporting output is suppressed, but wait won't block the program.
The other two options don't work, as I've stated.
Worst of all, I am expecting something to be output to stderr, so I am looking for a way to totally suppress these reporting messages, or any other workaround you can come up with.
This is very ugly, but it looks like it works in a quick test.
set +m
{ { sleep 2; echo stdout; echo stderr >&2; } 2>&3- & } 3>&2 2>/dev/null
Create fd 3 as a copy of fd 2 then redirect fd 2 to /dev/null (to suppress the background id/pid message).
Then, for the backgrounded command list, move fd 3 back to fd 2 so things that try to use it go where you wanted them to.
My first attempt had the fd 3 move in the outer brace command list, but that didn't suppress the id/pid message correctly (I guess it happened too quickly or something).
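Applied to the framework from the question, that might look like this (an untested sketch; cmd1, cmd2, file, file2 and log are the question's own placeholders):
set +m
{ { cmd1 &>>log; } 2>&3- & } 3>&2 2>/dev/null
{ { cmd2 &>>log; } 2>&3- & } 3>&2 2>/dev/null
wait
{ { cat file; } 2>&3- & } 3>&2 2>/dev/null
{ { cat file2 >&2; } 2>&3- & } 3>&2 2>/dev/null
wait
The jobs are still children of the current shell, so wait blocks as intended, while the [n] pid launch messages go to /dev/null.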
I often have trouble figuring out certain language constructs because they won't register when googling or duckduckgoing them. With a bit of experimenting, it's often simple to figure it out, but I don't get this one.
I often see stuff like 2>&1 or 3>&- in bash scripts. I know this is some kind of redirection. 1 is stdout and 2 is stderr. 3 is probably custom. But what is the minus?
Also, I have a script whose output I want to log, but also want to see on screen. I use exec > >(tee $LOGFILE); exec 2>&1 for that. It works. But sometimes when I bashtrap this script, I cannot type at the prompt anymore. Output is hidden after Ctrl+C. Can I use a custom channel and the minus sign to fix this, or is it unrelated?
2>&1 means that stderr is redirected to stdout
3>&- means that file descriptor 3, opened for writing (the same as stdout), is closed.
You can see more examples of redirection here
As for your third question, I think this is a good link.
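A tiny self-contained demo of both notations (my sketch; paste it into an interactive shell):
exec 3> notes.txt                # open notes.txt for writing on fd 3
echo hello >&3                   # write through fd 3
ls /nonexistent 2>&1 | wc -l     # 2>&1 sends the error message into the pipe
exec 3>&-                        # 3>&- closes fd 3 again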
The 3>&- closes file descriptor number 3 (it has probably been opened before with 3>filename).
The 2>&1 redirects the output of file descriptor 2 (stderr) to the same destination as file descriptor 1 (stdout). This is done via the dup2() syscall.
For more information about redirecting file descriptors, please consult the bash manpages (man bash). They are dense but great.
For your script, I would do it like that:
#!/bin/bash
if [[ -z $recursive_call ]]; then
recursive_call=1
export recursive_call
"$0" "$#" | tee filename
exit
fi
# rest of the script goes here
It loses the exit code from the script, though. There is a way in bash to get it, I guess, but I can't remember it now.
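For the record, bash exposes that exit code through the PIPESTATUS array, so the wrapper can propagate it (my addition, not part of the original answer):
#!/bin/bash
if [[ -z $recursive_call ]]; then
recursive_call=1
export recursive_call
"$0" "$@" | tee filename
exit "${PIPESTATUS[0]}"   # exit with the script's status, not tee's
fi
# rest of the script goes here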