I am trying to get help from someone, so I want to record my screen so that I can review later what happened. Previously, I have used ssh with tee, like ssh user@server | tee recfile, and the recording works fine. Even when I use byobu on the server, everything is recorded by that simple pipe.
But when I pipe byobu itself to tee, the file contains pretty much nothing. I have tried both byobu | tee recfile and byobu |& tee recfile. In both cases, byobu starts and works fine, but the record file only contains a few lines unrelated to what happened in the byobu session.
And byobu uses tmux. I have tried to pipe tmux to tee as well, and the output file only contained [exited].
My question is: how does tmux write to the screen? It seems that it does not use standard output or standard error, because if it did, tee should be able to record the screen. Is there a way to tell tmux to write to standard output or standard error? Or is there another way to redirect its output into tee?
Edit: I checked that screen | tee recfile and screen |& tee recfile also produce an empty file. Also, bash | tee recfile only redirects the output of the executed commands to the file (the user@name:~$ prompts and the input commands are not in there). bash |& tee recfile behaves likewise, except that the bash prompt (user@name:~$) is not displayed at all.
As gniourf_gniourf pointed out, script solved my problem.
I also wrote a simple C++ program to play back the recorded file. Compile it with -std=c++11. (I use script -c byobu --timing=recfile.tim recfile to record and scriptout recfile recfile.tim 3 for playback.)
https://gist.github.com/Shayan-To/672c77fbf9811d769d453c8a9b43747e
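For reference, roughly the same record/replay flow can be done with the stock scriptreplay tool from util-linux instead of a custom player (a sketch; option spellings may vary slightly between script versions):
script --timing=recfile.tim -c byobu recfile    # record the byobu session plus timing data
scriptreplay recfile.tim recfile 3              # replay it 3x faster than real time (divisor)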
Related
In bash, calling foo would display any output from that command on the stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout, and both are then also directed to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
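A quick way to check that both streams really end up in the file (a throwaway example; the braces just group two echoes, one per stream):
{ echo "to stdout"; echo "to stderr" 1>&2; } 2>&1 | tee -a outfile   # both lines show on screen
cat outfile                                                           # ...and both are in the file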
$ program [arguments...] 2>&1 | tee outfile
2>&1 redirects the stderr stream into stdout, so both streams go down the pipe.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
Another way that works for me is:
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Bash manual's section on redirection.
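To convince yourself the two spellings are equivalent, compare (ls on a missing path writes only to stderr):
ls /no/such/dir 2>&1 | tee files.txt   # explicit form
ls /no/such/dir |& tee files.txt       # shorthand, same content ends up in files.txt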
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, you should invoke tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add ...
The unbuffer program (part of the expect package) has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
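That is, redirect everything to the file and watch it grow (a sketch of that approach):
program [arguments...] > output 2>&1 &   # run in the background, all output goes to the file
tail -f output                           # follow the file as new lines are written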
In my case I had a Java process producing output logs. The simplest solution to display the output logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed on screen and written to the file named logfile.
You can do that for your entire script by using something like this at the beginning of the script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while still letting everything go to stdout at the same time.
It uses some stupid tricks:
use exec without a command to set up redirections,
use tee to duplicate outputs,
restart the script with the wanted redirections,
use a special first parameter (a single NUL character, specified with the $'string' bash notation) to indicate that the script has been restarted (your original script must not take an equivalent parameter),
try to preserve the original exit status when restarting the script using the pipefail option.
Ugly but useful for me in certain situations.
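A quick sanity check of the trick, assuming the snippet is saved at the top of myscript.sh:
./myscript.sh ; echo "exit status: $?"   # output appears on the terminal; status preserved via pipefail
cat mylogfile                            # the same output is captured in the log file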
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection will happen as you.
Hint: tee also supports appending with the -a flag. If you need to replace a line in a file as another user, you could execute sed as the desired user.
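For example, a sketch of that sed hint (the user, pattern, and path are placeholders):
sudo -u some_user sed -i 's/old text/new text/' /some/path/some_file   # GNU sed; BSD/macOS sed needs -i ''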
<command> |& tee filename # this creates the file "filename" with the command's output as its contents; if the file already exists, its existing contents are replaced by the command's output.
<command> | tee >> filename # this appends the output to the file, but does not print it to standard output (the screen).
I want to print something on screen using echo and also append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but the following will also do the job:
ls -lr / > output && cat output
Usually, stdout is line-buffered. In other words, as long as your printf argument ends with a newline, you can expect the line to be printed instantly. This does not appear to hold when using a pipe to redirect to tee.
I have a C++ program, a, that outputs strings, always \n-terminated, to stdout.
When it is run by itself (./a), everything prints correctly and at the right time, as expected. However, if I pipe it to tee (./a | tee output.txt), it doesn't print anything until it quits, which defeats the purpose of using tee.
I know that I could fix it by adding a fflush(stdout) after each printing operation in the C++ program. But is there a cleaner, easier way? Is there a command I can run, for example, that would force stdout to be line-buffered, even when using a pipe?
you can try stdbuf
$ stdbuf --output=L ./a | tee output.txt
(big) part of the man page:
-i, --input=MODE adjust standard input stream buffering
-o, --output=MODE adjust standard output stream buffering
-e, --error=MODE adjust standard error stream buffering
If MODE is 'L' the corresponding stream will be line buffered.
This option is invalid with standard input.
If MODE is '0' the corresponding stream will be unbuffered.
Otherwise MODE is a number which may be followed by one of the following:
KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G, T, P, E, Z, Y.
In this case the corresponding stream will be fully buffered with the buffer
size set to MODE bytes.
keep this in mind, though:
NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does
for e.g.) then that will override corresponding settings changed by 'stdbuf'.
Also some filters (like 'dd' and 'cat' etc.) don't use streams for I/O,
and are thus unaffected by 'stdbuf' settings.
You are not running stdbuf on tee, you're running it on a, so this shouldn't affect you, unless you set the buffering of a's streams in a's source.
Also, stdbuf is not POSIX, but part of GNU coreutils.
Try unbuffer (man page) which is part of the expect package. You may already have it on your system.
In your case you would use it like this:
unbuffer ./a | tee output.txt
The -p option is for pipeline mode where unbuffer reads from stdin and passes it to the command in the rest of the arguments.
You can use setlinebuf from stdio.h.
setlinebuf(stdout);
This should change the buffering to "line buffered".
If you need more flexibility you can use setvbuf.
You may also try to execute your command in a pseudo-terminal using the script command (which should enforce line-buffered output to the pipe)!
script -q /dev/null ./a | tee output.txt # Mac OS X, FreeBSD
script -c "./a" /dev/null | tee output.txt # Linux
Be aware the script command does not propagate back the exit status of the wrapped command.
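If the exit status matters, recent util-linux versions of script have a --return (-e) flag that passes the wrapped command's status through; combined with pipefail it survives the tee (a Linux-only sketch; check your script version):
set -o pipefail
script -q -e -c "./a" /dev/null | tee output.txt
echo "exit status of ./a: $?"   # -e makes script return the child's status; pipefail carries it past tee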
The unbuffer command from the expect package, from the answer above by Paused until further notice, did not work for me the way it was presented.
Instead of using:
./a | unbuffer -p tee output.txt
I had to use:
unbuffer -p ./a | tee output.txt
(-p is for pipeline mode where unbuffer reads from stdin and passes it to the command in the rest of the arguments)
The expect package can be installed on:
MSYS2 with pacman -S expect
Mac OS with brew install expect
Update
I recently had buffering problems with Python inside a shell script (when trying to append a timestamp to its output). The fix was to pass the -u flag to python, like this:
run.sh calls python -u script.py
unbuffer -p /bin/bash run.sh 2>&1 | tee /dev/tty | ts '[%Y-%m-%d %H:%M:%S]' >> somefile.txt
This command will put a timestamp on the output and send it to a file and stdout at the same time.
The ts program (timestamp) can be installed with the moreutils package.
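If the unbuffer/tty parts are not needed, a more minimal version of the same timestamping idea would be (still using python -u to avoid buffering; note that here the timestamped lines also appear on screen, unlike the tee /dev/tty variant above):
python -u script.py 2>&1 | ts '[%Y-%m-%d %H:%M:%S]' | tee -a somefile.txt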
Update 2
Recently, I also had problems with grep buffering its output; passing the --line-buffered argument to grep made it stop buffering the output.
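For example (a sketch; the log file and pattern are placeholders):
tail -f app.log | grep --line-buffered "ERROR" | tee matches.txt   # matches show immediately and are saved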
If you use the C++ stream classes instead, every std::endl is an implicit flush. Using C-style printing, I think the method you suggested (fflush()) is the only way.
The best answer IMO is grep's --line-buffered option, as stated here:
https://unix.stackexchange.com/a/53445/40003
I have a complex command in my bash script which prints a lot of info on stdout. The command is complex and takes some time to finish, but it works fully. At the same time, I'm using a pipe with tee to write its output into a file for a post-parsing task.
cmd="myComplexCommand | tee /dev/fd/5"
exec 5>&1
stored_output=$(eval "${cmd}")
Up to this point, everything works.
Now I'm trying to add ccze to colorize the screen output. Using it on any command is usually as simple as:
anyCommand | ccze -A
And everything is printed in a beauty colorized way. The problem is if I try to apply this to my particular case, after using the pipe to ccze on my myComplexCommand, the output on screen is colorized (nice!) but it alters the output stored on the file I want to parse on my post-parse task and it doesn't work.
Is there a Bash way to print a command's output on screen, beautified with ccze, and at the same time store it in a file (without the ccze modifications) to parse later?
tee to file at a point in the pipeline before the colorization takes place:
myComplexCommand | tee filename | ccze -A
Incidentally, with bash 4.1 or later, if you want to send a lot of data both to a file and in colorized form to the TTY, you might put both those operations in a single process substitution:
exec {stdout_backup}>&1
exec {store_and_colorize}> >(tee filename | ccze -A | tee /dev/fd/"$stdout_backup")
and then reuse that process substitution as many times as you like:
result=$(something >&$store_and_colorize)
another_result=$(something_else >&$store_and_colorize)
That way you've got exactly one copy of ccze persisting across multiple uses.
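When you're finished with it, the descriptors opened above can be closed again with the same {varname} redirection syntax (bash 4.1+), which also lets the tee | ccze pipeline see end-of-file and exit:
exec {store_and_colorize}>&-   # close the write end feeding the process substitution
exec {stdout_backup}>&-        # close the saved copy of the original stdout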
Why doesn't
which myscript | xargs vim
work nicely? My terminal (Ubuntu 14.04) freezes when I exit vim.
Or is there a nice, clean alternative?
Why The Original Doesn't Work
You can't meaningfully pipe anything into vim, if you're going to use it as an interactive editor: A pipeline overrides stdin; an editor needs to be able to access your stdin (unless it's, say, interacting via X11 -- but that would be gvim).
To go into a little more detail: foo | bar runs both foo and bar at the same time, with the stdout of foo connected to the stdin of bar. Thus, which myscript | xargs vim has the shell originally starting two processes -- which myscript and xargs vim -- with the stdout of which myscript connected to the stdin of xargs vim.
However, this means that xargs vim is getting its input from which, and not from the terminal/console that the user was typing at. Thus, when xargs vim starts vim, the stdin which vim inherits isn't connected to the terminal either -- and vim, being an interactive editor built to get input from the user at a terminal, fails (perhaps spectacularly or entertainingly).
What To Do Instead
vim "$(which myscript)"
The $() syntax above is a command substitution, which is replaced with the stdout of the command which it runs. As such, while this overrides the stdout of which (directed into a FIFO which the shell reads from for purposes of that substitution), it does not in any respect redirect the input and output handed to vim.
Alternately, if you really want to use xargs (note the following uses -d, a GNUism, to ensure that it works correctly when passed filenames with spaces -- though not filenames with newlines):
which myscript | xargs -d $'\n' sh -c 'exec vim "$@" <&2' vim
The above has xargs, instead of directly running vim, start a shell which copies stderr (file descriptor 2) to stdin (file descriptor 0, the default target of redirection with <), and then starts vim, so as to provide that copy of vim a file descriptor for stdin that's attached to your terminal -- if your stderr isn't open to your TTY, replace <&2 with </dev/tty instead.
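If your xargs is new enough (BSD xargs, or GNU findutils 4.7+), there is also a dedicated flag for exactly this situation, which reopens stdin as /dev/tty in the child process:
which myscript | xargs -o vim   # -o / --open-tty gives vim a terminal for its stdin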
I need to capture the output of a command in a file. Let's say my command is zip -r zip.zip directory; I need to append or write (either option would be fine) its output to a file (let's say out.txt). I have zip -r zip.zip directory | tee -a out.txt so far, but it doesn't seem to work as expected: it just writes the whole output once the command is over... How can I achieve this?
Thanks ;)
Background (i.e. Why?)
Redirections are immediate -- when you run somecommand | tee -a out.txt, somecommand is set up with its stdout sent directly to a tee command, which is defined by its documentation to be unbuffered, and thus to write anything available on its input to its specified output sinks as quickly as possible. Similarly, somecommand >out.txt sets somecommand to be writing to out.txt literally before it's even started.
What's not immediate is flushing of buffered output.
That is to say: The standard C library, and most other tools/languages, buffer output on stdout, combining small writes into big ones. This is generally desirable, inasmuch as it decreases the number of calls into and out of kernel space ("context switches") in favor of a smaller number of larger, more efficient writes.
So your program isn't really waiting until it exits to write its output -- but it is waiting until its buffer (of maybe 32kb, or 64kb, or whatever) is full. If it never generates that much output at all, then it only gets flushed when closing the output stream.
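A small way to watch this happen (a throwaway sketch; python3 is used here only as a convenient program whose stdout switches to block buffering when piped):
# run directly on a terminal: one number per second appears, because stdout is line-buffered
python3 -c $'import time\nfor i in range(5): print(i); time.sleep(1)'
# piped through tee: the numbers typically show up in one burst at the end, because those few
# bytes never fill the output buffer, which is only flushed when the program exits
python3 -c $'import time\nfor i in range(5): print(i); time.sleep(1)' | tee -a out.txt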
Workarounds (How? -- GNU version)
If you're on a GNU platform, and your program is leaving its file descriptors the way it found them rather than trying to configure buffering explicitly, you can use the stdbuf command to configure buffering like so:
stdbuf -oL somecommand | tee -a out.txt
This defines stdout (-o) to be line-buffered (L) when running somecommand.
Workarounds (How? -- Expect version)
Alternately, if you have expect installed, you can use the unbuffer helper it includes:
unbuffer somecommand | tee -a out.txt
...which will actually simulate a TTY (as expect does), getting the same non-buffered behavior you have when somecommand is connected directly to a console.
Did you try command > out.log 2>&1? This logs everything to the file without displaying anything on screen; everything goes straight to the file.
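If you also want to see the output live while it is being written (which is what the rest of this thread is about), combining the stdbuf workaround above with tee gives, for the zip example from the question (a sketch):
stdbuf -oL zip -r zip.zip directory 2>&1 | tee -a out.txt   # line-buffered stdout, displayed and appended to out.txt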