I have the following test Ruby script:
require 'tempfile'
tempfile = Tempfile.new 'test'
$stderr.reopen tempfile
$stdout.reopen tempfile
puts 'test stdout'
warn 'test stderr'
`mail -s 'test' my@email.com < #{tempfile.path}`
tempfile.close
tempfile.unlink
$stderr.reopen STDERR
$stdout.reopen STDOUT
The email that I get has the contents:
test stderr
Why is stderr redirecting properly but not stdout?
Edit: In response to a comment I added a $stdout.flush after the puts line and it printed correctly. So I'll restate my question: what was happening and why does the flush fix it?
The standard output is generally buffered to avoid a system call for every write. So, when you say this:
puts 'test stdout'
You're actually just stuffing that string into the buffer. Then you say this:
`mail -s 'test' my@email.com < #{tempfile.path}`
and your 'test stdout' string is still in the buffer so it isn't in tempfile when mail sends the file's content to you. Flushing $stdout forces everything in the buffer to be written to disk; from the fine manual:
Flushes any buffered data within ios to the underlying operating system (note that this is Ruby internal buffering only; the OS may buffer the data as well).
$stdout.print "no newline"
$stdout.flush
produces:
no newline
The standard error is often unbuffered so that error messages (which are supposed to be rare) are visible immediately.
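Putting it together, a minimal sketch of the corrected script from the question; flushing before shelling out (or setting $stdout.sync = true up front) gets the buffered line into the tempfile before mail reads it:
require 'tempfile'
tempfile = Tempfile.new 'test'
$stderr.reopen tempfile
$stdout.reopen tempfile
puts 'test stdout'
warn 'test stderr'
$stdout.flush # push Ruby's buffer into the file before mail reads it
`mail -s 'test' my@email.com < #{tempfile.path}`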
Related
The following code is a simplification of my current situation. I have a JSON log source which I continuously fetch and write to stdout with puts.
#!/usr/bin/env ruby
require "json"
loop do
  puts({ value: "foobar" }.to_json)
  sleep 1
end
I want to be able to pipe the output of this script into jq for further processing, but in a 'stream'-friendly way, using unix pipes. Running the above code like so:
./my_script | jq
results in empty output. However, if I place an exit statement after the sleep call, the output is sent through the pipe to jq as expected. I was able to solve this by calling $stdout.flush after the puts call. While it works now, I'm not sure why: $stdout.sync is set to true by default (see IO#sync), and it seems to me that if sync were enabled, Ruby should be doing no output buffering, so calling $stdout.flush should not be required - yet it is.
My follow-up question is about using tail instead of jq. It seems to me that I should be able to pipe a text stream into tail the same way I pipe it into jq, but neither method (with the $stdout.flush call or without it) works - the output is just empty.
As @Ry points out in the comments, $stdout.sync is true by default in IRB, but this is not necessarily the case for scripts.
So you should set $stdout.sync = true to be sure to prevent buffering.
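Applied to the looping script above, a minimal sketch:
#!/usr/bin/env ruby
require "json"

$stdout.sync = true # write each line through to the pipe immediately

loop do
  puts({ value: "foobar" }.to_json)
  sleep 1
end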
I'm reading an example Ruby script that creates a daemon by forking, creating a new session, forking again, and then redirecting stdin, stdout, and stderr to /dev/null.
Here's a snippet of the redirection:
STDIN.reopen '/dev/null'
STDOUT.reopen '/dev/null', 'a'
STDERR.reopen '/dev/null', 'a'
What's the significance of specifying the file mode ('a') in this case? Would the behavior be any different, for example, with
STDOUT.reopen '/dev/null', 'w'
or even
STDOUT.reopen '/dev/null'
?
There's no particular significance, but it's semantically helpful to a reader, who would expect STDOUT to be opened for appending or writing, but not for reading. It's also defensive against the default mode (typically read) changing in the future, unlikely as that may be. In fact, Ruby protects against changing the access mode of STDIN or STDOUT:
STDOUT.reopen '/dev/null', 'r'
test.rb:1:in `reopen': <STDOUT> can't change access mode from "w" to "r" (ArgumentError)
from test.rb:1:in `<main>'
This does work on other IOs though, and it's always nice to be explicit.
f = File.open('file.out', 'w')
f.puts 'Hi'
f.close
f.reopen('file.out', 'r')
puts f.read
$ ruby test.rb
Hi
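For context, here is a sketch of the classic double-fork daemonization sequence the question describes (an assumed typical pattern, not necessarily the exact script being read):
exit if fork # first fork; the parent exits
Process.setsid # start a new session, detached from the controlling terminal
exit if fork # second fork; the session leader exits and can never reacquire a tty
Dir.chdir '/' # avoid holding a mounted directory open
STDIN.reopen '/dev/null'
STDOUT.reopen '/dev/null', 'a'
STDERR.reopen '/dev/null', 'a'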
value = %x( #{"svn lock #{@path}/#{@file}"} )
=>
svn: warning: W160035: Path '/README.txt' is already locked by user 'tester' in filesystem 'some_path'
""
Returns an empty string rather than the svn warning message, which is what I want to record. What am I doing wrong?
Thanks for your help in advance.
This is likely because the output is being sent to STDERR, not STDOUT (which is all %x captures). Because you're not capturing STDERR, it does what it normally would and prints to the console.
You can either redirect STDERR to STDOUT in your command:
%x(svn lock #{@path}/#{@file} 2>&1)
Or use Open3 to capture both STDOUT & STDERR:
require 'open3'
Open3.popen3("svn lock #{@path}/#{@file}") do |stdin, stdout, stderr, wait_thr|
puts "stdout is:" + stdout.read
puts "stderr is:" + stderr.read
end
The first option offloads the work to the shell executing the command, and depending on the environment, the shell may not support that redirection. As such, using Open3 is a much more portable solution.
One additional note: I've removed the unnecessary interpolation in your %x statement. Also, consider using Shellwords to properly escape interpolated strings in shell commands. This is particularly important if they are user-inputted strings.
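For example (a sketch; Shellwords.escape guards against spaces and shell metacharacters in @path or @file):
require 'shellwords'
target = Shellwords.escape("#{@path}/#{@file}")
result = %x(svn lock #{target} 2>&1)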
Your problem is that backticks (and %x) return only STDOUT, whereas in this case the message you want is on STDERR. Use e.g. Open3.capture2e instead:
http://www.ruby-doc.org/stdlib-2.0.0/libdoc/open3/rdoc/Open3.html#method-c-capture2e
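A minimal sketch; capture2e merges the child's stderr into the returned string and also hands back the exit status:
require 'open3'
output, status = Open3.capture2e("svn", "lock", "#{@path}/#{@file}")
puts output unless status.success?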
Whenever you want to execute something on the command line, you can use the following syntax:
%x(command to run)
However, I want to catch an error or at least get the response so I can parse it correctly. I tried setting:
result = %x(command to run)
and using a try-catch
begin
  %x(command to run)
rescue
  "didn't work"
end
to no avail. How can I capture the results instead of having them printed out?
So this doesn't directly answer your question (it won't capture the command's output), but instead of trying begin/rescue, you can just check the exit status ($?) of the command:
%x(command to run)
unless $?.success? # $? holds the Process::Status of the last child process
  puts "ack! error occurred"
end
Edit: Just remembered this new project. I think it does exactly what you want:
https://github.com/envato/safe_shell
You might want to redirect stderr to stdout:
result = %x(command to run 2>&1)
Or if you want to separate the error messages from the actual output, you can use popen3:
require 'open3'
stdin, stdout, stderr, wait_thr = Open3.popen3("find /proc")
Then you can read the actual output from stdout and error messages from stderr.
Here's how to use Ruby's open3:
require 'open3'
include Open3
stdin, stdout, stderr, wait_thr = popen3('date')
stdin.close
puts
puts "Reading STDOUT"
print stdout.read
stdout.close
puts
puts "Reading STDERR"
print stderr.read
stderr.close
# >>
# >> Reading STDOUT
# >> Sat Jan 22 20:03:13 MST 2011
# >>
# >> Reading STDERR
popen3 returns IO streams for STDIN, STDOUT and STDERR, plus a thread waiting on the child process, allowing you to do I/O to the opened app.
Many command-line apps require their STDIN to be closed before they'll process their input.
You have to read from the returned STDOUT and STDERR pipes. They don't automatically shove content into a mystical variable.
In general, I like using a block with popen3 because it handles cleaning up behind itself.
Look through the examples in the Open3 doc. There's lots of nice functionality.
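For example, the block form closes all three streams for you when the block returns:
require 'open3'
Open3.popen3('date') do |stdin, stdout, stderr, wait_thr|
  stdin.close # many command-line apps wait for EOF on STDIN
  puts stdout.read
  warn stderr.read
  # wait_thr.value is the child's Process::Status, available once it exits
end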
You need a mix of @Cam's answer and @tonttu's answer.
There was a decent explanation of $? and other special variables linked here. Edit: the domain http://blog.purifyapp.com is now in the hands of a domain squatter and scammer.
result = %x(command to run 2>&1)
unless $?.success? # check whether the child process exited cleanly
  puts "got error #{result}"
end
I am using the open4 gem and having problems reading from the spawned process's stdout. I have a Ruby program, test1.rb:
print 'hi.' # 3 characters
$stdin.read(1) # block
And another ruby program in the same directory, test2.rb:
require 'open4'
pid, stdin, stdout, stderr = Open4.popen4 'ruby test1.rb'
p stdout.read(2) # 2 characters
When I run the second program:
$ ruby test2.rb
It just sits there forever without printing anything. Why does this happen, and what can I do to stop it?
I needed to change test1.rb to this. I don't know why.
print 'hi.' # 3 characters
$stdout.flush
$stdin.read(1) # block
By default, everything that you print to stdout or to another file is written into a buffer maintained by Ruby (or by the standard C library underneath Ruby). The content of the buffer is forwarded to the OS when one of the following events occurs:
The buffer gets full.
You close stdout.
You have printed a newline ("\n") and the stream is line-buffered (as it is when connected to a terminal).
You call flush explicitly.
For other files, a flush is also done on other occasions, such as a call to ftell.
If you put stdout in unbuffered mode ($stdout.sync = true), the buffer will not be used.
stderr is unbuffered by default.
The reason for buffering is efficiency: aggregating output data in a buffer can save many system calls (calls to the operating system). System calls are expensive: they take many hundreds or even thousands of CPU cycles. Avoiding them with a little bit of code and some buffering in user space results in a good speedup.
A good reading on buffering: Why does printf not flush after the call unless a newline is in the format string?
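A short sketch of the options above, assuming stdout is a pipe (where full buffering applies):
print 'hi.' # may sit in Ruby's buffer when stdout is a pipe
$stdout.flush # an explicit flush pushes it out now
$stdout.sync = true # or disable Ruby-side buffering for all later writes
puts 'done' # a newline alone helps only when the stream is line-buffered (a terminal)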
I'm not an expert on processes, but from a first look at the API documentation, the sequence for using open4 is: first write text to stdin, then close stdin, and lastly read text from stdout. So you can change test2.rb like this:
require 'open4'
pid, stdin, stdout, stderr = Open4.popen4 'ruby test1.rb'
stdin.puts "something" # This line is important
stdin.close # It might be optional; open4 may close it itself.
p stdout.read(2) # 2 characters