When I want to capture the stdout of a process in a file while still having that output displayed in the terminal, I can choose between script and tee. In this context, are these tools essentially equivalent, or is there a (possibly subtle) reason to prefer one over the other?
The programs script and tee are designed for different purposes:
script -- make typescript of terminal session
tee -- pipe fitting
Important differences between script and tee are:
script transmits the exit status of the process it supervises, while tee, being a filter, does not even know about it.
script captures stdin, stdout, stderr of the process it supervises while tee only catches the stream it filters.
None of these differences are relevant in the given context.
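For completeness, here is a quick way to observe the exit-status difference in a shell (a sketch assuming util-linux script, whose -e/--return flag propagates the child's exit code):

script -qec 'false' /dev/null; echo $?   # prints 1: script passed the child's status through
false | tee /dev/null; echo $?           # prints 0: you get tee's status; false's 1 is lost (without pipefail)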
They serve very different purposes, and their usage differs as well.
script records what you are doing in a shell session. Handy for showing a professor what you did, showing co-workers how to do something, etc.
tee is simply a utility that writes to both your screen and a file. It is very handy when installing something or running a command that generates a lot of output, and you want to see the output in real time while still saving it to disk.
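A minimal sketch of that typical tee use case (the log file name is just an example):

make 2>&1 | tee build.log   # watch the build live while keeping a full log on disk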
A notable difference between the two is that you can use script to create an interactive shell that logs everything (e.g. script commands.log zsh), including colors and such. Tee won't register as a tty, so in that regard it's pretty different.
I found script to be useful for making control sequences work when piping to tee:
script -q -c 'python -c "import pdb, sys; pdb.set_trace()"' /dev/null \
| tee -a /tmp/tmp.txt
With only the following, Ctrl-A would be displayed as ^A etc:
python -c "import pdb, sys; pdb.set_trace()" | tee -a /tmp/tmp.txt
This is a minimal example. I am using tee here to capture the output of a pytest test run, but sometimes there may be a debugger session in there, and cursor keys etc. should still work then.
Via https://unix.stackexchange.com/a/61833/1920.
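For the pytest case described above, the same pattern might look like this (paths and test selection are hypothetical):

script -q -c 'pytest tests/' /dev/null | tee -a /tmp/pytest-output.txt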
Related
One may want to use Bash on Windows in Task Scheduler or maybe as version-control hook scripts. Is it possible or supported?
If not, why? Is it a bug or a measure to prevent some issues?
Use @3d1t0r's solution, but also pipe to cat
wsl bash -c "man bash | cat" # noninteractive; streams the entire manpage to the terminal
wsl bash -c "man bash" # shows me the first page, and lets me scroll around; need to hit `q` to exit
If interactive mode is fine, bash -c is often superfluous
wsl man bash # same behavior as `wsl bash -c "man bash"`
Context (what in the world is "interactive" vs "non-interactive" invocation?):
The example above might not make it entirely clear, but man is changing its behavior based on what it's connected to.
In "interactive mode", man lets me scroll around, so that I can read the page at a comfortable reading pace.
In noninteractive mode, man dumps the entire manpage to the console, giving me no time to read anything.
"But wait," I hear you ask, "isn't man catting the man page because you asked it to? I see it right there--man bash | cat"
No, man has no idea what cat is. It just gets hints about whether STDOUT is connected to an interactive terminal.
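Under the hood this is the isatty() check; from the shell you can ask the same question with test's -t operator. A minimal sketch:

if [ -t 1 ]; then                # is file descriptor 1 (stdout) a terminal?
    echo "stdout is an interactive terminal"
else
    echo "stdout is piped or redirected"
fi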
Here's a different example, that consistently cats:
wsl bash -c "echo hey | grep --color e" # colors 'e' red
wsl bash -c "echo hey | grep --color e | cat" # colors disappear, what gives?
Now both examples are streaming their output, but the second one is defiantly ignoring my --color flag.
The common thread here is that man and grep both behave appropriately depending on whether they think their output is going to be read by a human or piped away somewhere.
Other common commands that auto-detect interactivity include ls and git. Usually the behavior change will involve output paging or colors (other variations exist).
paging is nice for humans, because humans generally can't read at the speed of streamed output.
paging is bad for robots, because paging is a lot of protocol overhead when you can just consume buffered streams. I mean seriously, why are humans so slow and chatty?
colors are nice for humans, because we like additional visual cues to aid visual distinction.
colors are bad for streaming to a file, because your file will be full of ANSI color-code garbage that most text editors don't display nicely.
Automatic behavior switching based on whether STDOUT is connected to an interactive terminal makes all these use cases usually "just work".
Restating the Original Question
In my use case and @bahrep's use case, interactive mode can be especially bad for unsupervised scripts (e.g. as launched by Task Scheduler). I am guessing @bahrep's scheduled runs hung when less was invoked and sat waiting for human input.
For some reason, wsl-driven scripts launched from the task-scheduler give underlying scripts the wrong hints--they hint that the final output is attached to an interactive terminal.
Ideally, wsl would know from the windows side of the execution environment whether it is getting invoked interactively or not, and pass along the proper hint. Then I could just run wsl [command]. Until that happens, I'll need to use wsl bash -c "[command] | cat" as a workaround.
If I'm understanding your question correctly, the -c option is what you're looking for. It allows you to directly invoke a Linux command.
For example, to open the man page for bash (perhaps in order to find out about the -c option):
bash -c "man bash"
Note: You can leave off the quotes if you escape any spaces (e.g. bash -c man\ bash), but it's often easier to just use the quotes, as everything after the first unescaped space will be dropped from your command.
e.g. bash -c man bash will be interpreted the same as bash -c man.
Usually, if I want to print the output of a command and also capture that output in a file, tee is the solution. But I'm writing a script using a utility which seems to have a special behaviour: the WPS wireless assessment tool bully.
If I run a bully command as normal (without tee), the output is shown in the standard way, step by step. But if I add a pipe at the end to log it, like this: | tee "/path/to/my/logfile", the output on the screen freezes. It shows nothing until the command ends, and then it shows everything in one shot (not step by step); of course, it also puts the output in the tee log file.
An example of bully command: bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt
Why? I'm not sure whether this only happens with bully or whether other programs behave the same way.
Is there another way to capture the output to a file while still seeing it on screen in real time?
What you're seeing is full buffering versus line buffering. By default, stdout is line-buffered when writing to a tty (i.e. interactively) and fully buffered otherwise. See the setvbuf(3) man page for a more detailed explanation.
Some commands offer an option to force line buffering (e.g. GNU grep has --line-buffered). But that sort of option is not widely available.
Another option is to use something like expect's unbuffer command if you want to be able to see the output more interactively (at the cost of depending on expect, of course).
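Applied to the bully command from the question, the unbuffer approach might look like this (a sketch; it assumes expect's unbuffer is installed):

unbuffer bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt
# bully now writes to a pseudo-terminal, stays line-buffered, and tee shows output as it arrives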
I am familiar with the ability to pipe and redirect the IO of individual processes when running them in bash. However, is there a way to redirect stdio for an entire bash session?
Ideally, I would like to transparently pipe all stdout and stderr of all processes spawned by bash into tee to duplicate into a file the printed output displayed to the user. No matter what processes are run within that bash session, I could then go back later and look over the output.
Even more ideally, this should be the case for simple interactive programs that take options from stdin, but not for heavily interactive programs like vim.
The best I've found so far is: whenever the user opens a new terminal, run the command:
bash --login -i > >(tee ~/bash_$$.log) 2>&1
This will immediately start an interactive child shell in that new shell, and tee all stdout and stderr to a logfile named with the new parent shell's PID (to avoid overwriting).
This works, but vim fails to start with Vim: Warning: Output is not to a terminal. Are there any known solutions, up to and including patching the shell, to do this?
Background: vim is failing because isatty() is returning false when given the file descriptor for stdout; this is a safeguard to prevent uses such as vim >file that generally don't make sense. (Also, there are operating system calls available for interacting with PTYs that are useful to graphical, cursor-oriented programs that aren't available with a simple FIFO; this is why tools like ssh go to the trouble to provide a pseudoterminal during interactive use).
What's important for your purposes is that vim is directly inspecting the file descriptor it's passed as stdout. The shell is not a party to this -- it's literally vim running a standard-C-library call that gets some details about an open file descriptor -- so it's nothing that patching or reconfiguring the shell can fix.
To avoid this, then, you need to find a different way to redirect your output for logging such that stdout and stderr are still pointed at PTYs.
That said, for your real goal (logging all activity, vs redirecting stdout in-place), what you want is probably script:
if [ -z "$redirection_done" ]; then
  # re-exec this shell under script exactly once; the env var guards against looping
  redirection_done=1 exec script shell.log bash --login -i
fi
Using logging support from another tool which simulates a TTY, such as screen or tmux, will likewise suffice. (unbuffer, from the expect toolkit, can be used with similar effect).
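For reference, minimal sketches of those two options (log file names are the tools' defaults or examples):

screen -L                                    # starts screen with session logging to screenlog.0
tmux pipe-pane -o 'cat >> ~/tmux-pane.log'   # inside tmux: copy all pane output to a file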
Back to your literal question... (since while it may not be what you want to know, it is what you asked):
In all POSIX shells, including bash,
exec >wherever
...will immediately redirect stdout for the current shell to wherever. In bash, that target can also be a process substitution; thus, in an already-running shell, you can execute
exec > >(tee shell.log) 2>&1
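If you also want to be able to turn the logging off later in the same shell, you can stash the original descriptors first; a sketch using spare file descriptors 3 and 4:

exec 3>&1 4>&2                   # save the current stdout and stderr
exec > >(tee shell.log) 2>&1     # start duplicating all output into shell.log
# ... commands whose output should be logged ...
exec 1>&3 2>&4 3>&- 4>&-         # restore the originals and close the spares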
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file, and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
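A minimal sketch of that workflow (the file name is just an example):

script color-session.log     # starts a subshell; everything below is recorded
ls --color=always            # run your colorful commands...
exit                         # ...then end the recording
less -R color-session.log    # the captured colors render correctly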
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
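Putting both halves of this answer together, a minimal sketch (tool and file names are only examples):

grep --color=always pattern file.txt > hits.txt   # force the color codes despite the redirect
less -R hits.txt                                  # page through with the codes interpreted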
Inspired by the other answers, I started using script. I had to use -c to get it working, though. The other approaches, including tee and the various script examples, did not work for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell command during the test with python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed because otherwise the output cannot be observed live; it arrives in big chunks
--quiet suppresses the script tool's own output
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors)
--return to make script propagate the exit code of my command so I know if my command has failed
I found that using script to preserve colors when piping to less doesn't really work (less gets garbled, and on exit bash is garbled too) because less is interactive. script seems to really mess up input coming from stdin, even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (i.e. when you redirect them into another program). You can tell some of them to force color output regardless, and tell the pager to interpret it, for example with less -R.
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's writing to an interactive terminal.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
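For example, a sketch of that approach (the apt subcommand and file name are just illustrations):

unbuffer apt list --upgradable | tee upgradable.txt   # apt sees a pty, so its interactive-style output survives the pipe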
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), then just send the output of tee to /dev/null:
command | tee filename > /dev/null
I have a program (that I did not write) which is not designed to read its commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read the next command, dumping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
Same as Idelic's answer, with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried something like tail -f commandfile | command? That should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped if tail -f doesn't catch all of them.
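A concrete sketch of that idea (file and program names are hypothetical):

tail -n +1 -f commands.txt | ./your_program   # stream the file from line 1 and keep the pipe open
echo 'another command' >> commands.txt        # run from another terminal: appended lines flow through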