I'm running a process using popen3, and trying to read the stdout stream. I want to be able to detect each character as it's put into stdout, but even using stdout.getc, I'm only able to get any output once there's a newline.
Any tips? I've seen this answered with stdin.getc, but not stdout.
The problem is that most programs, when run through a pipe (which is what popen does), use block-buffered output.
What you are looking for is unbuffered output, and most programs only produce unbuffered (or line-buffered) output when the process is attached to a PTY, i.e. an interactive terminal.
When you attach the process to a pipe, its output will be buffered unless the process explicitly calls flush while producing output.
You have a few options:
1. If you have control over the source code of the program you run with popen, force a flush of stdout after each output sequence.
2. Try running the command under stdbuf -o0, which forces unbuffered output. See this answer: Force line-buffering of stdout when piping to tee. However, that is not guaranteed to work for all programs.
3. Use the Ruby PTY library instead of popen so that the program runs under a pseudo-terminal, and hence unbuffered (a sketch is shown below). See this answer: Continuously read from STDOUT of external process in Ruby
Option 3 is the most likely to work with any program you want to run and monitor, while the other options may or may not work depending on the program you are running and whether you have access to its source code.
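As a rough sketch of option 3 (my_command here is a placeholder for whatever program you want to monitor):

require 'pty'

# Run the command under a pseudo-terminal so its output is not block-buffered.
PTY.spawn('my_command') do |stdout, stdin, pid|
  begin
    # Handle each character as soon as the child writes it.
    stdout.each_char { |c| print c }
  rescue Errno::EIO
    # Raised on some platforms when the child exits and closes its side of the PTY.
  end
  Process.wait(pid)
end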
I want to run gdb (the GNU debugger) in a Screen virtual terminal and grep its output in real time in an adjacent Screen view.
How do I implement this arrangement? A normal pipe just redirects the output. I'm also curious how to bind a Screen view (^A + c) to an existing process for I/O.
EDIT:
I came up with the following solution. I created a named pipe with mkfifo pipe and executed gdb program | tee pipe in pty1, which duplicates the output to the pipe. In pty2 I executed less -f pipe | grep foo to print the lines of interest.
I'm sure there has to be a simpler way for such a trivial task, though.
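For reference, the setup described above looks roughly like this (pipe, program and foo are just the placeholder names from above):

# In pty1: create the FIFO and duplicate gdb's output into it
mkfifo pipe
gdb program | tee pipe

# In pty2: read from the FIFO and filter the lines of interest
less -f pipe | grep foo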
EDIT2:
The method mentioned above seems somewhat buggy. gdb doesn't print anything to its console unless something actually reads from the FIFO. Why is that? Also, when I try this method with my own program, which simply printfs "HelloWorld" to stdout, nothing is printed in either view.
EDIT3:
I figured out that it's intentional for tee to block if nobody actually reads from the pipe; it's a matter of synchronization. Still, I wonder how the original program is able to read input from the keyboard even though tee now controls the terminal window. Or is it that terminal input goes to the stdin of the original program while output goes to the stdout of tee?
You don't have to start your program from gdb. Just start it in one screen pane and determine its pid (using top, pgrep, or ps).
In the other pane you start the gdb session:
gdb <path_to_program> <pid>
This way you have one terminal to control gdb and another to use the program you are debugging, each with its own input and output.
The only condition is that the program runs long enough for you to attach the debugger to the process. An easy way to do that is to make it wait for input at the beginning. You could also make it print its pid.
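As a rough illustration (myprog and the pid value are placeholders):

# In one screen pane, start the program:
./myprog

# In the other pane, find its pid and attach gdb to it:
pgrep myprog          # prints e.g. 12345
gdb ./myprog 12345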
I know I can save the result of a command to a variable using last_output=$(my_cmd) but what I'd really want is for $last_output to get updated every time I run a command. Is there a variable, zsh module, or plugin that I could install?
I guess my question is: does stdout get permanently written somewhere (at least before the next command)? That way I could manipulate the results of the previous command without having to re-run it. This would be really useful for commands that take a long time to run.
If you run the following:
exec > >(tee save.txt)
# ... stuff here...
exec >/dev/tty
...then stdout for everything run between the two commands will go both to your terminal and to save.txt.
You could, of course, write a shell function which does this for you:
with_saved_output() {
  "$@" \
    2> >(tee "$HOME/.last-command.err" >&2) \
      | tee "$HOME/.last-command.out"
}
...and then use it at will:
with_saved_output some-command-here
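...after which the two copies can be inspected, for example:

cat "$HOME/.last-command.out"
cat "$HOME/.last-command.err"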
...and zsh almost certainly will provide a mechanism to wrap interactively-entered commands. (In bash, which I can speak to more directly, you could do the same thing with a DEBUG trap).
However, even though you can, you shouldn't do this: When you split stdout and stderr into two streams, information about the exact ordering of writes is lost, even if those streams are recombined later.
Thus, the output
O: this is written to stdout first
E: this is written to stderr second
could become:
E: this is written to stderr second
O: this is written to stdout first
when these streams are individually passed through tee subprocesses to have copies written to disk. There are also buffering concerns created, and differences in behavior caused by software which checks whether it's outputting to a TTY and changes its behavior (for instance, software which turns color-coded output on when writing directly to console, and off when writing to a file or pipeline).
stdout is just a file handle that by default is connected to the console, but could be redirected.
yourcommand > save.txt
If you want to display the output on the console and save it to a file at the same time, you could pipe the output to tee, a command that writes everything it receives on stdin to stdout and to a file of your choice:
yourcommand | tee save.txt
I'm using a Windows x64 machine and am trying to capture the STDOUT and STDERR streams from a command. I also have to write to the command's STDIN. I'm trying to use Perl's IPC::Open3 for this, with no luck. I'm using the script posted here, with this script here as the command. I of course replaced the $cmd variable with "perl test.pl" for Windows.
It's supposed to print 'StdOut!' and 'StdErr!', along with the pid, but I only get the PID. I don't know if it is because of my operating system, or because the thread is 10 years old (no biggie, Perl 5 is almost 18, right?). Another monk posted this script to fix any problems in the other one, but on my computer it never exits.
Can anyone give me a working example of using open3 to start a command in perl, write to its STDIN, and capture both its STDERR and its STDOUT?
select only works on sockets in Windows; it doesn't work on pipes. You could create sockets and pass those to open3 instead of letting it create pipes for you (as seen here), but I suggest using a higher-level module such as IPC::Run instead; open3 is a rather low-level function.
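A rough sketch of the IPC::Run approach, reusing the "perl test.pl" command from the question (the input string is just a placeholder):

use strict;
use warnings;
use IPC::Run qw(run);

# Run the child, feed its STDIN from $in, and capture STDOUT and STDERR separately.
my @cmd = ('perl', 'test.pl');
my $in  = "some input\n";
my ($out, $err) = ('', '');

run(\@cmd, \$in, \$out, \$err) or die "child failed: $?";

print "STDOUT: $out";
print "STDERR: $err";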
I have a FORTRAN program whose output I want to redirect to a file. I've done this before using
$myprog.out>>out.txt 2>&1
and for some reason this is not working. I test it with another simple test program
$myprog.test>>out.txt 2>&1
and it works
I run myprog.out and the output goes to the screen as usual, but redirecting it seems to fail! It was working and now seems to have stopped. It's very strange: I commented out a few print statements that I no longer wanted, recompiled, and now the redirect does not work.
There is clearly something different going on with my output, but how do I diagnose where it is going?
You probably need to flush your output; see for example this SO topic. How to do that depends on your compiler, because only the Fortran 2003 standard includes the FLUSH statement and the ability to determine the unit numbers that correspond to stdout/stderr.
However, in gfortran (for example) you can use the flush() intrinsic procedure with the equivalents of the Unix file descriptors: UNIT=5 for stdin, UNIT=6 for stdout, and UNIT=0 for stderr.
PROGRAM main
  PRINT *, "Hello!"
  CALL flush(6)   ! flush stdout
  CALL flush(0)   ! flush stderr
END PROGRAM main
With >> you are appending the output of your program to out.txt every time you run it.
Can you just try scrolling to the end of out.txt and see if your output is there?
I have a crontab job calling a python script and outputting to a file:
python run.py &> current_date.log
now sometimes when I do
tail -f current_date.log
I see the file filling up with the output, but other times the log file exists but stays empty for a long time. I am sure that the Python script starts printing right after it starts running, and the log file is created. Any ideas why it stays empty some of the time?
Python buffers output when it detects that it is not writing to a tty, and so your log file may not receive any output right away. You can configure your script to flush output or you can invoke python with the -u argument to get unbuffered output.
$ python -h
...
-u : unbuffered binary stdout and stderr (also PYTHONUNBUFFERED=x)
see man page for details on internal buffering relating to '-u'
...
The problem is actually Python (not bash) and is by design. Python buffers output by default. Run python with -u to prevent buffering.
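For example, the cron command from the question could become (same filenames as above):

python -u run.py &> current_date.log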
Another suggestion is to create a class (or special function) which calls flush() right after the write to the log file.
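A minimal sketch of that idea, assuming you wrap sys.stdout near the top of run.py (the class name Unbuffered is just illustrative):

import sys

class Unbuffered:
    """Wrap a stream and flush after every write."""
    def __init__(self, stream):
        self.stream = stream

    def write(self, data):
        self.stream.write(data)
        self.stream.flush()  # push the data out immediately

    def __getattr__(self, name):
        # Delegate everything else (fileno, isatty, ...) to the wrapped stream.
        return getattr(self.stream, name)

sys.stdout = Unbuffered(sys.stdout)
print("this line reaches current_date.log right away")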