log all stderr to file and console - bash

There are plenty of threads here discussing how to do this for scripts or for the cmdline (mostly involving pipes, redirections, tee).
What I didn't find is a solution which can be set up once and then just works globally, without manipulating single scripts or adding something to every command line.
What I want to achieve is something like what is described in the top answer of
How do I write stderr to a file while using "tee" with a pipe?
Isn't it possible to configure the bash session so that all stderr output is logged to a file, while still writing it to console? Something I could add to .bashrc and thus automatically set up every time I login?
Software: Bash 4.2.24(1)-release (x86_64-pc-linux-gnu), xterm, Ubuntu 12.04

Try this variation on #0xC0000022L's previous solution (put it in your .bash_profile):
exec 2> >( tee log.file > /dev/tty )
A couple of caveats:
The prompt and anything you type at the command line are printed to stderr, and so will be logged in your file.
There could be an issue with the newline that terminates a command not being displayed in your terminal; I observe it on my Linux host, but not on my Mac OS X laptop. Perhaps someone else can explain and/or fix the issue. For example, if I type "echo stdout", I see the following:
$ echo stdoutstdout
$
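For example, a minimal sketch of what the .bashrc addition could look like (the per-session file name is only an illustration):
# hypothetical ~/.bashrc snippet: mirror this session's stderr into a
# per-PID log file while still writing it to the terminal
if [[ $- == *i* ]]; then    # only for interactive shells
    exec 2> >(tee -a "$HOME/stderr_$$.log" > /dev/tty)
fi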

Related

bash: separating command from pipe

I have a third-party program that accepts command line arguments. I want to redirect its output to a file. I have found that, for some inexplicable reason, the program hangs if I try:
./run_third_part.py &> log
but it works if
./run_third_part.py
I believe that redirecting the output is interfering with the reading of the command line arguments, although other ideas are welcome. How can I isolate the program from the redirection? (I was thinking about putting some sort of parentheses around it.)
Probably the script is waiting for input on an interactive prompt. The easiest way around this is usually to give it some input:
./run_third_part.py < /dev/null &> log
Can you try creating a subshell and running the script:
bash$ `./run_third_part.py` &> log
Please notice ` (backtick) is not ' (single quote).

.sh output to .txt file - what am I doing wrong?

I am running Windows 10 and am trying to save the error output of a test.sh file to a text file.
So I created the test.sh file and wrote an unknown command in it (i.e. "blablubb").
After that I open the terminal (cmd.exe), switch to the directory and type test.sh 2>> log.txt.
Another window opens with "/usr/bin/bash --login -i \test.sh" in the title bar, shows me "bash: blablubb: command not found" and then closes immediately.
I want to save that output because the bash-window just opens for a split second. Every Google search brings me to websites talking about redirecting the output, saying that stream 2 is STDERR and that I should therefore use test.sh 2>> log.txt or something similar that takes care of the STDERR stream.
If I try the same with a test.sh file and the content:
#!/bin/bash
echo hi there
I get the output in the briefly open bash-window:
bash: #!/bin/bash: No such file or directory
hi there
But the log.txt file is empty.
If I only have echo hi there in the test.sh file, I get bash: echo: command not found in the bash-window.
The log.txt is also empty.
If I type the following directly in the terminal, the output is written in the log.txt:
echo hi > log.txt 2>&1
If I type directly in the terminal:
echdo hi > log.txt 2>&1
I get 'Der Befehl "echdo" ist entweder falsch geschrieben oder konnte nicht gefunden werden.' (the German CMD message for "command not found") in the log.txt file.
So I guess the redirecting of the output works fine until I use test.sh.
I know that .sh files are something from the Unix world and that the problem might lie there, but I don't know why I cannot redirect the output briefly shown in the bash-console to a text file.
The 2>> redirection syntax only works if the command line containing that syntax is interpreted by bash. So it won't work from the Windows command prompt, even if the program you are running happens to be written in bash. By the time bash is running, it's too late; it gets the arguments as they were interpreted by CMD or whatever your Windows command interpreter is. (In this case, I'm guessing that means the shell script will find it has a command line argument [$1] with the value "2".)
If you open up a bash window (or just type bash in the command one) and then type the test.sh 2>>log.txt command line in that shell, it will put the error message in the file as you expect.
I think you could also do it in one step by typing bash -c "test.sh 2>>log.txt" at the Windows command prompt, but I'm not sure; Windows quoting is different than *nix quoting, and that may wind up passing literal quotation marks to bash, which won't work.
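A sketch of both approaches (assuming test.sh is in the current directory):
bash test.sh 2>> log.txt           # typed at a bash prompt: bash parses the redirection itself
bash -c "./test.sh 2>> log.txt"    # typed at the CMD prompt: the whole line is handed to bash; quoting may need tweaking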
Note that CMD does have the 2>> syntax as well, and if you try to run a nonexistent Windows command with 2>>errlog.txt, the "is not recognized" error message goes to the file. I think the problem comes from the fact that CMD and bash disagree on what "standard error" means, so redirecting the error output from Windows doesn't catch the error output by bash. But that's just speculation; I don't have a bash-on-Windows setup handy to test.
It would help to know if you are running Windows Subsystem for Linux (Beta), or if you are doing something else. I'm assuming this is what you are doing on Windows 10.
If this is the case are you using bash to run the script?
Are you using win-bash?
If it is win-bash, I'm not very familiar with it and would recommend Windows Subsystem for Linux (Beta) for reasons like this. win-bash, while cool, might not be compatible with redirection operators like 2>>.
You have stdout and stderr; by default (if you don't specify a stream) >> will only append standard output to the txt file.
If you prefix it with 2, it will append standard error to the txt file instead. Example: test.sh 2>> log.txt
This is described in more detail at this site.
To get the exact command for appending both stdout and stderr, go to this page.
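For reference, a quick sketch of the common append forms in bash, using the test.sh from the question:
./test.sh >> log.txt           # append stdout only
./test.sh 2>> log.txt          # append stderr only
./test.sh >> log.txt 2>&1      # append both stdout and stderr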
Please tell me if this doesn't answer your question. Also, it could be more valuable to search for the answer first and explain why your search found nothing, or to clarify the problem in more detail; creating a new forum page for what might be an easy answer may be ineffective. I've had a bunch of fun with your question and I hope that I've helped.
That makes a lot of sense. Thanks, Mark!
Taking what Mark says into account, I would get Windows Subsystem for Linux (Beta). There are instructions here. Then run your script from there.

Redirect entire bash session to log file

I am familiar with the ability to pipe and redirect the IO of individual processes when running them in bash. However, is there a way to redirect stdio for an entire bash session?
Ideally, I would like to transparently pipe all stdout and stderr of all processes spawned by bash into tee to duplicate into a file the printed output displayed to the user. No matter what processes are run within that bash session, I could then go back later and look over the output.
Even more ideally, this should be the case for simple interactive programs that take options from stdin, but not for heavily interactive programs like vim.
The best I've found so far is: whenever the user opens a new terminal, run the command:
bash --login -i > >(tee ~/bash_$$.log) 2>&1
This will immediately start an interactive child shell in that new shell, and tee all stdout and stderr to a logfile named with the parent shell's PID (to avoid overwriting).
This works, but vim fails to start with Vim: Warning: Output is not to a terminal. Are there any known solutions, up to and including patching the shell, to do this?
Background: vim is failing because isatty() is returning false when given the file descriptor for stdout; this is a safeguard to prevent uses such as vim >file that generally don't make sense. (Also, there are operating system calls available for interacting with PTYs that are useful to graphical, cursor-oriented programs that aren't available with a simple FIFO; this is why tools like ssh go to the trouble to provide a pseudoterminal during interactive use).
What's important for your purposes is that vim is directly inspecting the file descriptor it's passed as stdout. The shell is not a party to this -- it's literally vim running a standard-C-library call that gets some details about an open file descriptor -- so it's nothing that patching or reconfiguring the shell can fix.
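As an aside, bash's own -t test performs the same check vim does, which makes the behaviour easy to reproduce; a small sketch:
if [ -t 1 ]; then
    echo "stdout is a terminal"        # vim starts normally in this case
else
    echo "stdout is a pipe or file"    # this is the case vim warns about
fi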
To avoid this, then, you need to find a different way to redirect your output for logging such that stdout and stderr are still pointed at PTYs.
That said, for your real goal (logging all activity, vs redirecting stdout in-place), what you want is probably script:
if [ -z "$redirection_done" ]; then
    # re-exec this shell under script; the environment variable stops the new shell from recursing
    redirection_done=1 exec script shell.log bash --login -i
fi
Using logging support from another tool which simulates a TTY, such as screen or tmux, will likewise suffice. (unbuffer, from the expect toolkit, can be used with similar effect).
Back to your literal question... (since while it may not be what you want to know, it is what you asked):
In all POSIX shells, including bash,
exec >wherever
...will immediately redirect stdout for the current shell to wherever. This can be a process substitution in bash, as anywhere else; thus, in an already-running shell, you can execute
exec > >(tee shell.log) 2>&1
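If you want to be able to turn the logging off again within the same session, one hedged variation is to save the original descriptors first:
exec 3>&1 4>&2                    # save the original stdout and stderr
exec > >(tee shell.log) 2>&1      # start logging
# ... logged activity ...
exec 1>&3 2>&4 3>&- 4>&-          # restore the originals and close the saved copies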

Terminal emulator implementation - problems with repeated input

I am trying to implement a terminal emulator in Java. It is supposed to be able to host both cmd.exe on Windows and bash on Unix-like systems (I would like to support at least Linux and Mac OS X). The problem I have is that both cmd.exe and bash repeat on their standard output whatever I send to their standard input.
For example, in bash, I type "ls", hit enter, at which point the terminal emulator sends the input line to bash's stdin and flushes the stream. The process then outputs the input line again "ls\n" and then the output of the ls command.
This is a problem, because other programs apart from bash and cmd.exe don't do that. If I run, inside either bash or cmd.exe, the command "python -i", the Python interactive shell does not repeat the input the way bash and cmd.exe do. This means a workaround would have to know which process the actual output came from. I doubt that's what actual terminal emulators do.
Running "bash -i" doesn't change this behaviour. As far as I know, cmd.exe doesn't have distinct "interactive" and "noninteractive" modes.
EDIT: I am creating the host process using the ProcessBuilder class. I am reading the stdout and stderr and writing to the stdin of the process using a technique similar to the stream gobbler. I don't set any environment variables before I start the host process. The exact commands I use to start the processes are bash -i for bash and cmd for cmd.exe. I'll try to post minimal code example as soon as I manage to create one.
On Unix, run stty -echo to disable "local echo" (i.e. the shell repeating everything that you type). This is usually enabled so a user can edit what she types.
In your case, BASH must somehow allocate a pseudo TTY; otherwise, it would not echo every command. set -x would have a similar effect, but then you'd see + ls instead of ls in the output.
With cmd.exe the command @ECHO OFF should achieve the same effect.
Just execute those after the process has been created and it should work.
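For example, the shell-side commands the emulator could send look like this (illustrative only; stty changes the terminal driver's settings, not bash itself):
stty -echo     # stop the terminal from repeating typed input
ls             # "ls" is no longer echoed before its output
stty echo      # switch echo back on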

Can colorized output be captured via shell redirect? [duplicate]

This question already has answers here:
How to trick an application into thinking its stdout is a terminal, not a pipe
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
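Putting the two together, a typical round trip looks like this (a sketch, with ls standing in for any colorizing tool):
ls --color > listing.txt    # force color codes even though output goes to a file
less -R listing.txt         # render the codes instead of printing them literally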
Inspired by the other answers, I started using script. I had to use -c to get it working, though. All the other answers, including tee and the various script examples, did not work for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell command during the test with python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed because otherwise the output cannot be observed live; it arrives in big chunks
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping my command into script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
I found that using script to preserve colors when piping to less doesn't really work (less is all messed up and on exit, bash is all messed up) because less is interactive. script seems to really mess up input coming from stdin even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (i.e. when you redirect them into another program or a file). You can tell some of them to force color output, and tell the pager to handle the color codes, for example with less -R.
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
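As a sketch of how unbuffer is typically wired in (ls here is just a stand-in for the colorizing command):
unbuffer ls --color=auto | tee listing.log    # ls sees a pseudo terminal, so its color codes survive the pipe into tee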
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), then just send the output of tee to /dev/null:
command | tee filename > /dev/null

Resources