I have some code handed down from someone who has long since moved to another department. It purports to log everything to the $MBL location; however, it does not: it just creates an empty file at $MBL :-(
exec > >(tee ${MBL}) 2>&1
I can tell that it takes stderr and sends it to stdout; I can tell that tee should output the result to stdout and to $MBL. However, I don't understand the exec > >() syntax.
Reading the bash(1) man page suggests that a fork happens....
Two things are going on here: exec with only redirections redirects the shell's own file descriptors, and the >(command) syntax (process substitution, supported by bash and zsh) creates a pipe and substitutes a /dev/fd/* path connected to that command's input.
As written, this looks like it does what it is claimed to do... but there may be other redirections later in the script, or, if it is being run by a shell that doesn't support >(command), it will print an error and do nothing useful.
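If you want to convince yourself the mechanism itself works, here is a minimal sketch of the same idiom in isolation (logfile.txt is just a placeholder for $MBL):

#!/bin/bash
# Redirect this shell's own stdout and stderr through tee for the rest of the script.
exec > >(tee logfile.txt) 2>&1

echo "this line goes to the screen and to logfile.txt"
ls /nonexistent    # errors are captured too, thanks to 2>&1

If that works but the original script still leaves $MBL empty, look for a later redirection in the script, an unset or misquoted $MBL, or the script being run by a shell (such as plain sh) that doesn't understand >(command).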
Related
I have seen this question:
Shell redirection i/o order.
But I have another question. If this line fails to redirect stderr to file:
ls -xy 2>&1 1>file
Then why this line can redirect stderr to grep?
ls -xy 2>&1 | grep ls
I want to know how it is actually being run underneath.
It is said that 2>&1 redirects stderr to a copy of stdout. What does "a copy of stdout" mean? What is actually being copied?
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Do the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
The pipe redirection (connecting the standard output of one command to the stdin of the next) happens before any redirections performed by the command itself.
That means that by the time 2>&1 happens, the stdout of ls is already set up to connect to the stdin of grep.
See the man page of bash:
Pipelines
The standard output of command is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by the command (see REDIRECTION below). If |& is used, command's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
(emphasis mine).
Whereas in the former case (ls -xy 2>&1 1>file), nothing like that happens: when 2>&1 is performed, the stdout of ls is still connected to the terminal (it hasn't yet been redirected to the file), so stderr ends up pointing at the terminal as well.
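To make the ordering concrete, here is a small sketch; ls -xy is just a deliberately invalid invocation so that ls writes something to stderr, and file is a placeholder name:

ls -xy 2>&1 1>file      # fd 2 copies fd 1 while fd 1 still points at the terminal,
                        # so the error message stays on screen and file ends up empty
ls -xy 1>file 2>&1      # fd 1 goes to file first, then fd 2 copies it:
                        # the error message lands in file
ls -xy 2>&1 | grep ls   # the pipe is set up before any redirection, so fd 1 already
                        # points at grep when fd 2 copies it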
That answers my first question. What about the others?
Well, your second question has already been answered in the comments. (What is being duplicated is a file descriptor).
As to your last question(s),
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Does the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
it is the shell which attaches the standard streams of the processes it creates (pipes first, then the <> redirections, as you have just learned). By default it attaches them to its own streams, which might be attached to a tty, with which you can interact in a number of ways: usually a terminal emulator window, sometimes a serial console. "Terminal" is a rather ambiguous word.
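If you want to see for yourself where a process's standard streams point, the following sketch works on Linux (the /proc filesystem is Linux-specific; /proc/self/fd shows the file descriptors of the process reading it, here ls itself):

ls -l /proc/self/fd              # fds 0, 1 and 2 point at the tty
ls -l /proc/self/fd > /tmp/out   # now ls's fd 1 points at /tmp/out instead
cat /tmp/out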
I am familiar with the ability to pipe and redirect the IO of individual processes when running them in bash. However, is there a way to redirect stdio for an entire bash session?
Ideally, I would like to transparently pipe all stdout and stderr of all processes spawned by bash into tee to duplicate into a file the printed output displayed to the user. No matter what processes are run within that bash session, I could then go back later and look over the output.
Even more ideally, this should be the case for simple interactive programs that take options from stdin, but not for heavily interactive programs like vim.
The best I've found so far is: whenever the user opens a new terminal, run the command:
bash --login -i > >(tee ~/bash_$$.log) 2>&1
This will immediately start an interactive child shell in that new shell, and tee all stdout and stderr to a logfile named with the new parent shell's PID (to avoid overwriting).
This works, but vim fails to start with "Vim: Warning: Output is not to a terminal". Are there any known solutions, up to and including patching the shell, to do this?
Background: vim is failing because isatty() is returning false when given the file descriptor for stdout; this is a safeguard to prevent uses such as vim >file that generally don't make sense. (Also, there are operating system calls available for interacting with PTYs that are useful to graphical, cursor-oriented programs that aren't available with a simple FIFO; this is why tools like ssh go to the trouble to provide a pseudoterminal during interactive use).
What's important for your purposes is that vim is directly inspecting the file descriptor it's passed as stdout. The shell is not a party to this -- it's literally vim running a standard-C-library call that gets some details about an open file descriptor -- so it's nothing that patching or reconfiguring the shell can fix.
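You can reproduce the check vim is doing from the shell itself: the [ -t FD ] test is essentially isatty() on that file descriptor. A small sketch (check and check.log are made-up names for illustration):

check() { if [ -t 1 ]; then echo "stdout is a tty"; else echo "stdout is NOT a tty"; fi; }

check                      # "stdout is a tty" when run in an interactive terminal
check | cat                # "stdout is NOT a tty": fd 1 is now a pipe
check > >(tee check.log)   # also not a tty, which is exactly vim's complaint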
To avoid this, then, you need to find a different way to redirect your output for logging such that stdout and stderr are still pointed at PTYs.
That said, for your real goal (logging all activity, vs redirecting stdout in-place), what you want is probably script:
if [ -z "$redirection_done" ]; then
  redirection_done=1 exec script shell.log bash --login -i
fi
Using logging support from another tool which simulates a TTY, such as screen or tmux, will likewise suffice. (unbuffer, from the expect toolkit, can be used with similar effect).
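For example (a sketch, assuming tmux or the expect tools are installed; the log file names are placeholders):

tmux pipe-pane -o 'cat >> ~/tmux-session.log'   # log everything printed in the current tmux pane

unbuffer ls | tee ls.log    # expect's unbuffer runs ls on a pseudoterminal, so ls
                            # behaves as if interactive while tee captures the output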
Back to your literal question... (since while it may not be what you want to know, it is what you asked):
In all POSIX shells, including bash,
exec >wherever
...will immediately redirect stdout for the current shell to wherever. That target can be a process substitution in bash, just as it can anywhere else; thus, in an already-running shell, you can execute
exec > >(tee shell.log) 2>&1
I'm using a script which is calling another, like this :
# stuff...
OUT="$(./scriptB)"
# do stuff with the variable OUT
Basically, scriptB displays text over time: it prints a line, another 2s later, another 3s after that, and so on.
With the snippet I use, I only get the first output of my command; I miss a lot.
How can I get the whole output by capturing stdout for a given time? Something like:
begin capture
./scriptB
stop capture
I don't mind if the output is not shown on screen.
Thanks.
If I understand your question, then I believe you can use the tee command, like
./scriptB | tee $HOME/scriptB.log
It will display scriptB's stdout and write it to the log file at the same time.
Some of your output seems to be coming on the STDERR stream. So we have to redirect that as needed. As in my comment, you can do
{ ./scriptB ; } > /tmp/scriptB.log 2>&1
Which can almost certainly be reduced to
./scriptB > /tmp/scriptB.log 2>&1
And in newer versions of bash, can further be reduced to
./scriptB >& /tmp/scriptB.log
AND finally, as your original question involved storing the output to a variable, you can do
OUT=$(./scriptB > /tmp/scriptB.log 2>&1)
The notation 2>&1 says: make file descriptor 2 of this process (STDERR) a copy of (&) file descriptor 1 (STDOUT), i.e. send STDERR wherever STDOUT is currently going.
The alternate notation provided (... >& file) is shorthand for > file 2>&1.
Personally, I'd recommend using the 2>&1 syntax, as this is understood by all Bourne derived shells (not [t]csh).
As an aside, all processes by default have 3 file descriptors created when the process is created, 0=STDIN, 1=STDOUT, 2=STDERR. Manipulation of those streams is usually as simple as illustrated here. More advanced (rare) manipulations are possible. Post a separate question if you need to know more.
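As a taste of the rarer manipulations (a sketch; fd 3 is just an arbitrary spare descriptor, and /tmp/extra.log a placeholder path):

./scriptB 3>&1 1>&2 2>&3 3>&-   # swap stdout and stderr by parking stdout on fd 3 first

exec 3>> /tmp/extra.log         # open a dedicated descriptor for logging
echo "goes only to the log" >&3
exec 3>&-                       # close it again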
IHTH
I have a series of bash scripts that echo a lot of data to stdout and occasionally to stderr. There is a main bash script which then imports and/or invokes many other bash scripts.
I'm trying to implement something that will capture the output from not just the parent script, but all children scripts. I need to capture both stdout and stderr so that any issues from compilation, etc... get captured in this log file.
I'm aware of tee and of course the normal stdout redirect >... but these don't seem to work without adding them to each line of every script, both parent and children. There are several thousand lines in these scripts, so adding a redirect to each line would be impractical, even with sed.
I've seen suggestions such as: Redirect stderr and stdout in a Bash script
But these require editing every line in the scripts. Forcing my users to install screen is also impractical.
UPDATE:
Forgot to mention I want the output to still display on the console as well as write to the log. The scripts take several hours to run, and the user needs to know something is happening...
You can use yourmainscript 2>&1 | tee log, which will capture stdout and stderr from all imported/invoked scripts in yourmainscript while also showing it on screen.
Inside yourmainscript, you can get the same effect using:
echo "Redirecting the rest of script output to 'log'"
exec > >(tee log) 2>&1
rest of your code
To redirect just a certain section:
echo "Redirecting the next commands"
{
cmd1
cmd2
} > >(tee log) 2>&1
echo "Continuing as normal"
When using the following bash script to assign a variable to the output of fuser, part of fuser's result (everything before the :) is still printed to the screen.
Why isn't it suppressed? I suspect it has to do with the ":" character output by fuser. How do I fix this?
test=`fuser -f /home/whois_database_collection_v4/whoisdatacollector/logs/com_log_2013_02_15_12_40_43.log`
/home/whois_database_collection_v4/whoisdatacollector/logs/com_log_2013_02_15_12_40_43.log:
fuser sends the part of the output you see (the file name followed by a colon) to stderr, and the rest (the PIDs) to stdout; I suspect this is meant to keep it easy to use from scripts while still looking nice from the user's point of view. That output will disappear if you add a 2> /dev/null redirection for stderr.
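A sketch of the fixed assignment (same command and log path as in the question):

# Keep only fuser's stdout (the PIDs) in the variable; discard the
# "filename:" prefix that fuser writes to stderr.
test=$(fuser -f /home/whois_database_collection_v4/whoisdatacollector/logs/com_log_2013_02_15_12_40_43.log 2> /dev/null)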