Realtime stdout in golang when executing binary [duplicate]

Is there a way to run shell commands without output buffering?
For example, hexdump file | ./my_script will only pass input from hexdump to my_script in buffered chunks, not line by line.
What I actually want to know is a general solution: how can I make any command run unbuffered?

Try stdbuf, included in GNU coreutils and thus available on virtually any Linux distro. This sets the buffer length for input, output and error to zero:
stdbuf -i0 -o0 -e0 command
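Applied to the pipeline from the question, that would be:
stdbuf -i0 -o0 -e0 hexdump file | ./my_script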

The command unbuffer from the expect package disables the output buffering:
Ubuntu Manpage: unbuffer - unbuffer output
Example usage:
unbuffer hexdump file | ./my_script

AFAIK, you can't do it without ugly hacks. Writing to a pipe (or reading from it) automatically turns on full buffering and there is nothing you can do about it :-(. "Line buffering" (which is what you want) is only used when reading from or writing to a terminal. The ugly hacks do exactly this: they connect a program to a pseudo-terminal, so that the other tools in the pipe read/write from that terminal in line-buffering mode. The whole problem is described here:
http://www.pixelbeat.org/programming/stdio_buffering/
The page also has some suggestions (the aforementioned "ugly hacks") for what to do, namely using unbuffer or pulling some tricks with LD_PRELOAD.
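For illustration, here is a minimal sketch of that LD_PRELOAD trick, assuming a glibc system with a C compiler available (the shim's constructor runs before the target program writes anything, so it can switch stdout to line buffering):
cat > linebuf_shim.c <<'EOF'
#include <stdio.h>
/* runs at load time, before the preloaded-into program's main() */
__attribute__((constructor))
static void force_line_buffering(void) {
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);  /* line-buffer stdout */
}
EOF
cc -shared -fPIC -o linebuf_shim.so linebuf_shim.c
LD_PRELOAD=$PWD/linebuf_shim.so hexdump file | ./my_script
Note this only affects dynamically linked programs that use stdio; statically linked binaries and programs that configure their own buffering are unaffected.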

You could also use the script command to make the output of hexdump line-buffered (hexdump will be run in a pseudo-terminal, which tricks hexdump into thinking it's writing its stdout to a terminal, and not to a pipe).
# cf. http://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe/
stty -echo -onlcr
script -q /dev/null hexdump file | ./my_script # FreeBSD, Mac OS X
script -q -c "hexdump file" /dev/null | ./my_script # Linux
stty echo onlcr

If the command whose output is being buffered is grep (or egrep), its --line-buffered option solves this; no other tools are needed.
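For example, a pipeline where grep is the stage that would otherwise buffer (hypothetical log file and pattern):
tail -f /var/log/syslog | grep --line-buffered error | ./my_script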

Related

Trim output of running program

I have a program that when it runs, outputs hundreds of lines starting with "Info:" and a few lines that have useful output. To make things easier, I have created a simple python and bash combination script to emulate the issue I'm having:
wait_2sec.py:
import time
print("Hello!")
time.sleep(2)
print("Goodbye!")
I am attempting to trim my output by running:
python wait_2sec.py | sed '/Goodbye/d'
However, sed does not output Hello! until after the python script has finished. I don't know whether the pipe waits until after the program is finished to begin running the sed command, or if the sed command is the hangup.
I am open to using another command to trim output if sed does not work for this use-case.
I don't know whether the pipe waits until after the program is finished to begin running the sed command, or if the sed command is the hangup.
It's actually neither: python normally buffers its output (when not writing to a terminal) until the buffer is full, hence Moustapha's suggestion may work, provided that unbuffer is installed. But you can simply use python's built-in option -u ("Force the stdout and stderr streams to be unbuffered.") instead:
python -u wait_2sec.py | sed '/Goodbye/d'
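Equivalently, the PYTHONUNBUFFERED environment variable forces unbuffered streams without touching the command line, which helps when python is invoked somewhere you can't easily edit:
PYTHONUNBUFFERED=1 python wait_2sec.py | sed '/Goodbye/d'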
You can try to run your script using the following command:
unbuffer python wait_2sec.py | sed '/Goodbye/d'
Reference: https://unix.stackexchange.com/a/200413

How to make tee in Linux provide screen output line by line, not at the end of execution? [duplicate]

Usually, stdout is line-buffered. In other words, as long as your printf argument ends with a newline, you can expect the line to be printed instantly. This does not appear to hold when using a pipe to redirect to tee.
I have a C++ program, a, that outputs strings, always \n-terminated, to stdout.
When it is run by itself (./a), everything prints correctly and at the right time, as expected. However, if I pipe it to tee (./a | tee output.txt), it doesn't print anything until it quits, which defeats the purpose of using tee.
I know that I could fix it by adding a fflush(stdout) after each printing operation in the C++ program. But is there a cleaner, easier way? Is there a command I can run, for example, that would force stdout to be line-buffered, even when using a pipe?
You can try stdbuf:
$ stdbuf --output=L ./a | tee output.txt
The relevant part of the man page:
-i, --input=MODE adjust standard input stream buffering
-o, --output=MODE adjust standard output stream buffering
-e, --error=MODE adjust standard error stream buffering
If MODE is 'L' the corresponding stream will be line buffered.
This option is invalid with standard input.
If MODE is '0' the corresponding stream will be unbuffered.
Otherwise MODE is a number which may be followed by one of the following:
KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G, T, P, E, Z, Y.
In this case the corresponding stream will be fully buffered with the buffer
size set to MODE bytes.
keep this in mind, though:
NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does
for example) then that will override corresponding settings changed by 'stdbuf'.
Also some filters (like 'dd' and 'cat' etc.) don't use streams for I/O,
and are thus unaffected by 'stdbuf' settings.
You are not running stdbuf on tee, you're running it on a, so this shouldn't affect you, unless you set the buffering of a's streams in a's source.
Also, stdbuf is not POSIX, but part of GNU-coreutils.
Try unbuffer (man page) which is part of the expect package. You may already have it on your system.
In your case you would use it like this:
unbuffer ./a | tee output.txt
The -p option is for pipeline mode where unbuffer reads from stdin and passes it to the command in the rest of the arguments.
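A sketch of that pipeline mode, with a hypothetical sed stage in the middle (unbuffer -p wraps the consumer rather than the producer):
./a | unbuffer -p sed 's/^/got: /' | tee output.txt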
You can use setlinebuf from stdio.h.
setlinebuf(stdout);
This should change the buffering to "line buffered".
If you need more flexibility you can use setvbuf; setvbuf(stdout, NULL, _IOLBF, 0) has the same effect in standard C.
You may also try to execute your command in a pseudo-terminal using the script command (which should enforce line-buffered output to the pipe)!
script -q /dev/null ./a | tee output.txt # Mac OS X, FreeBSD
script -c "./a" /dev/null | tee output.txt # Linux
Be aware the script command does not propagate back the exit status of the wrapped command.
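On Linux (util-linux), the -e/--return option of script propagates the child's exit status, which works around this (a sketch, assuming a reasonably recent util-linux):
script -qec "./a" /dev/null | tee output.txt
echo "a exited with ${PIPESTATUS[0]}"  # bash-specific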
The unbuffer command from the expect package in the answer by #Paused until further notice did not work for me the way it was presented.
Instead of using:
./a | unbuffer -p tee output.txt
I had to use:
unbuffer -p ./a | tee output.txt
(-p is for pipeline mode where unbuffer reads from stdin and passes it to the command in the rest of the arguments)
The expect package can be installed on:
MSYS2 with pacman -S expect
Mac OS with brew install expect
Update
I recently had buffering problems with python inside a shell script (when trying to append a timestamp to its output). The fix was to pass the -u flag to python like this:
run.sh, which contains: python -u script.py
unbuffer -p /bin/bash run.sh 2>&1 | tee /dev/tty | ts '[%Y-%m-%d %H:%M:%S]' >> somefile.txt
This command will put a timestamp on the output and send it to a file and stdout at the same time.
The ts program (timestamp) can be installed with the moreutils package.
Update 2
Recently I also had problems with grep buffering its output; passing the --line-buffered argument to grep made it stop buffering.
If you use the C++ stream classes instead, every std::endl is an implicit flush. Using C-style printing, I think the method you suggested (fflush()) is the only way.
The best answer IMO is grep's --line-buffered option, as stated here:
https://unix.stackexchange.com/a/53445/40003

Why does `ack` not produce output when used with `bash` like this?

I'm guessing this has nothing to do with ack but more with bash:
Here we create file.txt containing the string foobar, so ack can find foobar in it:
> echo foobar > file.txt
> echo 'ack foobar file.txt' > ack.sh
> bash ack.sh
foobar
> bash < ack.sh
foobar
So far so good. But why doesn't ack find anything in it like this?
> cat ack.sh | bash
(no output)
or
> echo 'ack foobar file.txt' | bash
(no output)
Why doesn't ack find foobar in the last two cases?
Adding unbuffer (from expect) in front makes it work, which I don't understand:
> echo 'unbuffer ack foobar file.txt' | bash
foobar
Even stranger:
> cat ack2.sh
echo running
ack foobar file.txt
echo running again
unbuffer ack foobar file.txt
# Behaves as I'd expect
> bash ack2.sh
running
foobar
running again
foobar
# Strange output
> cat ack2.sh | bash
running
unbuffer ack foobar file.txt
What's up with this output? It echoes unbuffer ack foobar file.txt but not running again? Huh?
ack gets confused because stdin is a pipe rather than a terminal. You need to pass the --nofilter option to force ack to treat stdin as a tty.
This:
# ack.sh
ack --nofilter foobar file.txt
works:
$ cat ack.sh | bash
foobar
If you ask me, that behaviour is quite unexpected. Probably it is expected once someone understands the concepts of ack, which I don't at the moment. I would expect ack not to look at stdin when filename arguments are passed to it.
Why does unbuffer "solve" the problem?
unbuffer, according to its man page, does not attempt to read from stdin:
Normally, unbuffer does not read from stdin. This simplifies use of
unbuffer in some situations. To use unbuffer in a pipeline, use the -p
flag. ...
Looks like ack tries to be too smart about stdin here. If stdin is empty it does not read from it and looks at the filenames passed to it instead. Again, IMO it would be correct to not look at stdin at all if filename arguments are present.
The big mismatch here is that ack was never intended to be used in shell scripts. It's meant as a command line tool for humans. That means that it makes some assumptions and optimizations for humans. For example, by default ack's output is different if it's going to a terminal vs. getting redirected in a pipe. There's also dangers in using ack in a shell script because its behavior can be affected by ackrc files and environment variables. If you're going to be using ack in a script, you should be using the --noenv flag. Better still, for shell scripts I'd use plain ol' grep.
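If you must keep ack in a script anyway, a more defensive invocation combining the flags mentioned above might look like this:
ack --noenv --nofilter foobar file.txt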
What is the use case that brought up this problem?
I agree that this is a bug – ack could look at stdin, but in a non-blocking way. It is a bug to hang on a pipe that's empty…

Modifying "... | tee -a out.txt" to stream output live, rather than on completion?

I need to write the output of a command to a file. Let's say my command is zip -r zip.zip directory; I need to append or write (either option would be fine) its output to a file (let's say out.txt). So far I have zip -r zip.zip directory | tee -a out.txt, but it doesn't seem to work: it just writes the whole output when the command is over... How can I achieve this?
Thanks ;)
Background (ie. Why?)
Redirections are immediate -- when you run somecommand | tee -a out.txt, somecommand is set up with its stdout sent directly to a tee command, which is defined by its documentation to be unbuffered, and thus to write anything available on its input to its specified output sinks as quickly as possible. Similarly, somecommand >out.txt sets somecommand to be writing to out.txt literally before it's even started.
What's not immediate is flushing of buffered output.
That is to say: the standard C library, and most other tools/languages, buffer output on stdout, combining small writes into big ones. This is generally desirable, inasmuch as it decreases the number of round-trips into kernel space ("context switches") in favor of a smaller number of larger, more efficient writes.
So your program isn't really waiting until it exits to write its output -- but it is waiting until its buffer (of maybe 32kb, or 64kb, or whatever) is full. If it never generates that much output at all, then it only gets flushed when closing the output stream.
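You can watch this happen with a short sketch (assuming python3 is installed). Run on a terminal, the first command prints one line per second; piped through cat, the same program emits all three lines at once after about three seconds, because the pipe switches stdout to full buffering:
python3 -c 'import time
for i in range(3): print(i); time.sleep(1)'
python3 -c 'import time
for i in range(3): print(i); time.sleep(1)' | cat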
Workarounds (How? -- GNU version)
If you're on a GNU platform, and your program is leaving its file descriptors the way it found them rather than trying to configure buffering explicitly, you can use the stdbuf command to configure buffering like so:
stdbuf -oL somecommand | tee -a out.txt
defines stdout (-o) to be line-buffered (L) when running somecommand.
Workarounds (How? -- Expect version)
Alternately, if you have expect installed, you can use the unbuffer helper it includes:
unbuffer somecommand | tee -a out.txt
...which will actually simulate a TTY (as expect does), getting the same non-buffered behavior you have when somecommand is connected directly to a console.
Did you try command > out.log 2>&1? This logs everything to the file without displaying anything; everything goes straight to the file.

bash: force exec'd process to have unbuffered stdout

I've got a script like:
#!/bin/bash
exec /usr/bin/some_binary > /tmp/my.log 2>&1
Problem is that some_binary sends all of its logging to stdout, and buffering makes it so that I only see output in chunks of a few lines. This is annoying when something gets stuck and I need to see what the last line says.
Is there any way to make stdout unbuffered before I do the exec that will affect some_binary so it has more useful logging?
(The wrapper script is only setting a few environment variables before the exec, so a solution in perl or python would also be feasible.)
GNU coreutils-8.5 also has the stdbuf command to modify I/O stream buffering:
http://www.pixelbeat.org/programming/stdio_buffering/
So, in your example case, simply invoke:
stdbuf -oL /usr/bin/some_binary > /tmp/my.log 2>&1
This will allow text to appear immediately line-by-line (once a line is completed with the end-of-line "\n" character in C). If you really want immediate output, use -o0 instead.
This way could be more desirable if you do not want to introduce a dependency on expect via the unbuffer command. The unbuffer way, on the other hand, is needed if you have to fool some_binary into thinking that it is facing a real tty on standard output.
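For completeness, the fully unbuffered variant of the same invocation:
stdbuf -o0 /usr/bin/some_binary > /tmp/my.log 2>&1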
You might find that the unbuffer script that comes with expect helps.
Some command line programs have an option to modify their stdout stream buffering behaviour. So that's the way to go if the C source is available ...
# two command options ...
man file | less -p '--no-buffer'
man grep | less -p '--line-buffered'
# ... and their respective source code
# from: http://www.opensource.apple.com/source/file/file-6.2.1/file/src/file.c
if (nobuffer)
    (void) fflush(stdout);
# from: http://www.opensource.apple.com/source/grep/grep-28/grep/src/grep.c
if (line_buffered)
    fflush (stdout);
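Those options can keep a long-running pipeline responsive, e.g. (hypothetical log file and pattern):
tail -f app.log | grep --line-buffered ERROR | tee errors.txt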
As an alternative to using expect's unbuffer script or modifying the program's source code, you may also try to use script(1) to avoid stdout hiccups caused by a pipe:
See: Trick an application into thinking its stdin is interactive, not a pipe
# Linux
script -c "[executable string]" /dev/null
# FreeBSD, Mac OS X
script -q /dev/null "[executable string]"
I scoured the internets for an answer, and none of this worked for uniq, which stubbornly buffers everything; the only thing that worked for me was stdbuf:
{piped_command_here} | stdbuf -oL uniq | {more_piped_command_here}
GNU Coreutils-8 includes a program called stdbuf which essentially does the LD_PRELOAD trick. It works on Linux and reportedly works on BSD systems.
An environment variable can set the terminal IO mode to unbuffered.
export NSUnbufferedIO=YES
This will set the terminal unbuffered for both C and Objective-C terminal output commands.
