How can I copy STDOUT to a file without stopping it showing onscreen using Ruby?

I'm attempting to copy stdout to a file for logging purposes. I also want it to display in the Ruby console of the IDE I'm using.
I inserted this code into my script and it redirects $stdout to the my.log file:
$stdout.reopen("my.log", "w")
Does anyone know of a gem or technique to copy the contents of $stdout to a file rather than redirecting it to the file? Also, I am not using Rails, just plain Ruby.

Something like this might help you:
class TeeIO < IO
  def initialize orig, file
    @orig = orig
    @file = file
  end

  def write string
    @file.write string
    @orig.write string
  end
end
Most of the methods in IO that do output ultimately use write, so you only have to override this one method. You can use it like this:
# setup
tee = TeeIO.new $stdout, File.new('out.txt', 'w')
$stdout = tee

# Now lots of example uses:
puts "Hello"
$stdout.puts "Extending IO allows us to explicitly use $stdout"
print "heres", :an, :example, "using", 'print', "\n"
48.upto(57) do |i|
  putc i
end
putc 10 # newline
printf "%s works as well - %d\n", "printf", 42
$stdout.write "Goodbye\n"
After this example, the following is written identically to both the console and to the file:
Hello
Extending IO allows us to explicitly use $stdout
heresanexampleusingprint
0123456789
printf works as well - 42
Goodbye
I won't claim this technique is foolproof, but it should work for simple uses of stdout. Test it for your use case.
Note that you don't have to use reopen on $stdout unless you want to redirect output from a child process or an uncooperative extension. Simply assigning a different IO object to it will work for most uses.
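To illustrate the difference, here's a minimal sketch (not from the original answer; the file names are made up):
# Reassignment: only Ruby-level writes through $stdout are affected.
$stdout = File.new('ruby_only.log', 'w')
puts "this goes to ruby_only.log"
system('echo this still goes to the console')  # the child process keeps the original fd 1

# reopen: swaps the underlying file descriptor, so child processes inherit it too.
STDOUT.reopen('everything.log', 'w')
system('echo this now goes to everything.log')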
RSpec
The RSpec command line grabs a reference to $stdout before any of your code gets a chance to reassign it, so the assignment approach doesn't work there. reopen still works in that case, because you're changing the actual object that both $stdout and RSpec's reference point to, but that alone doesn't give you output to both destinations.
One solution is to monkey-patch $stdout like this:
$out_file = File.new('out.txt', 'w')
def $stdout.write string
  $out_file.write string
  super
end
This works, but as with all monkey patching, be careful. It would be safer to use your OS's tee command.

If you are using Linux or Mac OS, the tee command available in the OS makes it easy to do this. From its man page:
NAME
tee -- pipe fitting
SYNOPSIS
tee [-ai] [file ...]
DESCRIPTION
The tee utility copies standard input to standard output, making a copy in zero or more files. The output is unbuffered.
So something like:
echo '>foo bar' | tee tmp.out
>foo bar
echoes the output to STDOUT and to the file. Catting the file gives me:
cat tmp.out
>foo bar
Otherwise, if you want to do it inside your code, it's a simple task:
def tee_output(logfile)
  log_output = File.open(logfile, 'w+')
  ->(o) {
    log_output.puts o
    puts o
  }
end

tee = tee_output('tmp.out')
tee.call('foo bar')
Running it:
>ruby test.rb
foo bar
And checking the output file:
>cat tmp.out
foo bar
I'd use "w+" for my file access to append to the output file, rather than over-write it.
CAVEAT: This opens the file and leaves it open for the life of the code after you've called tee_output. That bothers some people but, personally, it doesn't bother me because Ruby will close the file when the script exits. In general we want to close files as soon as we're done with them, but here it makes more sense to open the file once and leave it open than to repeatedly open and close the output file; your mileage may vary.
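If the open handle bothers you, a block-based variant closes the log automatically when the block finishes. This is just a sketch; the with_tee_output name is invented here and is not part of the answer above:
def with_tee_output(logfile)
  File.open(logfile, 'a') do |log|  # 'a' appends; the file is closed when the block ends
    yield ->(o) {
      log.puts o
      puts o
    }
  end
end

with_tee_output('tmp.out') do |tee|
  tee.call('foo bar')
end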
EDIT:
For Ruby 1.8.7, use lambda instead of the new -> syntax:
def tee_output(logfile)
  log_output = File.open(logfile, 'w+')
  lambda { |o|
    log_output.puts o
    puts o
  }
end

tee = tee_output('tmp.out')
tee.call('foo bar')

I know it's an old question, but I found myself in the same situation. I wrote a MultiIO class which extends File and overrides the write, puts, and close methods; I also made sure it's thread-safe:
require 'singleton'

class MultiIO < File
  include Singleton

  @@targets = []
  @@mutex = Mutex.new

  def self.instance
    self.open('/dev/null', 'w+')
  end

  def puts(str)
    write "#{str}\n"
  end

  def write(str)
    @@mutex.synchronize do
      @@targets.each { |t| t.write str; t.flush }
    end
  end

  def setTargets(targets)
    raise 'setTargets is a one-off operation' unless @@targets.length < 1
    targets.each do |t|
      @@targets.push STDOUT.clone if t == STDOUT
      @@targets.push STDERR.clone if t == STDERR
      next if t == STDOUT or t == STDERR
      @@targets.push(File.open(t, 'w+'))
    end
    self
  end

  def close
    @@targets.each { |t| t.close }
  end
end
STDOUT.reopen MultiIO.instance.setTargets(['/tmp/1.log', STDOUT, STDERR])
STDERR.reopen STDOUT

threads = []

5.times.each do |i|
  threads.push(
    Thread.new do
      10000.times.each do |j|
        STDOUT.puts "out#{i}:#{j}"
      end
    end
  )
end

5.times.each do |i|
  threads.push(
    Thread.new do
      10000.times.each do |j|
        STDERR.puts "err#{i}:#{j}"
      end
    end
  )
end

threads.each { |t| t.join }

A tiny improvement on matt's answer:
class TeeIO < IO
  def initialize(ios)
    @ios = ios
  end

  def write(string)
    @ios.each { |io| io.write string }
  end
end
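Usage would look much the same as in matt's answer; a quick sketch, assuming the class above:
log = File.new('out.txt', 'w')
$stdout = TeeIO.new([STDOUT, log])  # pass as many IO targets as you like
puts "written to both the console and out.txt"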

Related

Suppress STDOUT during RSpec, but not Pry

I'm testing a generator, which outputs a lot of stuff to STDOUT. I want to suppress this, and there are lots of answers for that.
But I want to still be able to use pry. Right now, I have to disable the suppression if I need to pry into the test state.
I was using this code. It bypassed pry entirely:
def suppress_output(&block)
  @original_stderr = $stderr
  @original_stdout = $stdout
  $stderr = $stdout = StringIO.new
  yield(block)
  $stderr = @original_stderr
  $stdout = @original_stdout
  @original_stderr = nil
  @original_stdout = nil
end
I replaced it with this. It stops at the pry, but continues to suppress output, so you can't do anything:
def suppress_output(&block)
  orig_stderr = $stderr.clone
  orig_stdout = $stdout.clone
  $stderr.reopen File.new("/dev/null", "w")
  $stdout.reopen File.new("/dev/null", "w")
  yield(block)
rescue Exception => e
  $stdout.reopen orig_stdout
  $stderr.reopen orig_stderr
  raise e
ensure
  $stdout.reopen orig_stdout
  $stderr.reopen orig_stderr
end
Is there any way to have my cake and eat it too?
I'd still like an answer to this question if someone can think of a way. This isn't the only time I've had to suppress STDOUT in tests, and the scenarios haven't always been the same as this one.
However, it occurred to me today that in this case, the easier solution is to change the code, rather than the testing setup.
The generators are using Thor, which is very powerful, but has very opaque documentation past the basics and hasn't really been updated in years. When I dug around in the docs, I found there is some muting capability.
By calling add_runtime_options! in my main Cli < Thor class, I get a global --quiet option. This suppresses a lot of output, but not everything I need. #say still prints. #run itself is muted, but whatever shell commands I pass it to run are not.
Overwriting these methods takes care of the rest of my issues:
no_commands do
  def quiet?
    !!options[:quiet]
  end

  def run(command, config = {})
    config[:capture] ||= quiet?
    super(command, config)
  end

  def say(message = "", color = nil, force_new_line = (message.to_s !~ /( |\t)\Z/))
    super(message, color, force_new_line) unless quiet?
  end
end
I don't currently have a use-case where I would only want to suppress some things, so making it all-or-nothing works for now.
Now, I have to explicitly create the Cli instances in my tests with quiet: true, but I can run RSpec without unwanted output and still use pry.
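In a spec that looks something like this (a sketch only; the Cli name comes from above, but the constructor arguments and the :generate task name are assumptions about how Thor classes are usually instantiated):
cli = Cli.new([], quiet: true)  # Thor#initialize(args, local_options, config)
cli.invoke(:generate)           # :generate is a made-up task name for illustration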
I found this solution by Chris Hough to work in my case, adding the following configuration to spec/spec_helper:
RSpec.configure do |config|
  config.before(:each) do
    allow($stdout).to receive(:write)
  end
end
This replaced an :all before block, which was setting the following (and reversing the assignment in an after block):
$stderr = File.open(File::NULL, "w")
$stdout = File.open(File::NULL, "w")
The fix still suppresses output while allowing Pry to function as expected.
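For reference, the earlier :all-based setup looked roughly like this (a reconstruction from the description above; the hook structure and instance variable names are assumptions):
RSpec.configure do |config|
  config.before(:all) do
    @original_stderr = $stderr
    @original_stdout = $stdout
    $stderr = File.open(File::NULL, "w")
    $stdout = File.open(File::NULL, "w")
  end

  config.after(:all) do
    $stderr = @original_stderr
    $stdout = @original_stdout
  end
end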

Capture Ruby Logger output for testing

I have a ruby class like this:
require 'logger'

class T
  def do_something
    log = Logger.new(STDERR)
    log.info("Here is an info message")
  end
end
And a test script like this:
#!/usr/bin/env ruby
gem "minitest"
require 'minitest/autorun'
require_relative 't'

class TestMailProcessorClasses < Minitest::Test
  def test_it
    me = T.new
    out, err = capture_io do
      me.do_something
    end
    puts "Out is '#{out}'"
    puts "err is '#{err}'"
  end
end
When I run this test, both out and err are empty strings. I see the message printed on stderr (on the terminal). Is there a way to make Logger and capture_io play nicely together?
I'm in a straight Ruby environment, not Ruby on Rails.
The magic is to use capture_subprocess_io
out, err = capture_subprocess_io do
  do_stuff
end
MiniTest's #capture_io temporarily switches $stdout and $stderr for StringIO objects to capture output written to $stdout or $stderr. But Logger has its own reference to the original standard error stream, which it will write to happily. I think you can consider this a bug or at least a limitation of MiniTest's #capture_io.
In your case, you're creating the Logger inside the block passed to #capture_io with the argument STDERR. STDERR still points to the original standard error stream, which is why it doesn't work as expected.
Changing STDERR to $stderr (which at that point does point to a StringIO object) works around this problem, but only if the Logger is actually created inside the #capture_io block, since outside that block $stderr points to the original standard error stream.
class T
  def do_something
    log = Logger.new($stderr)
    log.info("Here is an info message")
  end
end
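With that change, the original test should find the message in err. A minimal check inside a Minitest::Test (using the names above) could be:
def test_it_logs_to_stderr
  _out, err = capture_io { T.new.do_something }
  assert_match(/Here is an info message/, err)
end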
Documentation of capture_subprocess_io
Basically Leonard's example fleshed out and commented with working code and pointing to the docs.
Captures $stdout and $stderr into strings, using Tempfile to ensure that subprocess IO is captured as well.
out, err = capture_subprocess_io do
  system "echo Some info"                 # echoes to standard out
  system "echo You did a bad thing 1>&2"  # echoes to standard error
end

assert_match %r%info%, out
assert_match %r%bad%, err
NOTE: This method is approximately 10x slower than #capture_io so only use it when you need to test the output of a subprocess.
See Documentation
This is an old question, but one way we do this is to mock out the logger with an expects. Something like
logger.expects(:info).with("Here is an info message")
This allows us to assert the code under test without changing how logger works out of the box.
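Since T builds its Logger internally, one way to do that is Mocha's any_instance (a sketch, assuming Mocha is hooked into Minitest via mocha/minitest; the test method name is made up):
# inside the test class
def test_logs_info_message
  Logger.any_instance.expects(:info).with("Here is an info message")
  T.new.do_something
end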
As an example of capture_io, we have a logger implementation that allows us to pass in hashes and output them as JSON. When we test that implementation, we use capture_io. This is possible because we initialize the logger implementation in our subject line with $stdout.
subject { CustomLogging.new(ActiveSupport::Logger.new($stdout)) }
in the test
it 'processes a string message' do
  msg = "stuff"
  out, err = capture_io do
    subject.info(msg)
  end
  out.must_equal "#{msg}\n"
end
You need to provide a different IO object (such as a StringIO) when initializing Logger.new in order to capture the output, rather than the usual STDERR, which points to the console.
I modified the above two files a bit and made them into a single file so that you can copy and test easily:
require 'logger'
require 'stringio'
require 'minitest'

class T
  def do_something(io = nil)
    io ||= STDERR
    log = Logger.new io
    log.info("Here is an info message")
  end
end

class TestT < Minitest::Test
  def test_something
    t = T.new
    string_io = StringIO.new
    t.do_something string_io
    puts "Out: #{string_io.string}"
  end
end

Minitest.autorun
Explanation:
The method do_something will function normally in all other code when called without an argument.
When a StringIO (or any other IO-like object) is provided, it is used instead of the default STDERR, which lets you capture the output, for example into a file or, as here, for testing.
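To turn the puts into a real assertion, the test above could be written along these lines (same names as above):
class TestT < Minitest::Test
  def test_something_logs_info
    string_io = StringIO.new
    T.new.do_something string_io
    assert_match(/Here is an info message/, string_io.string)
  end
end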

Testing a REPL in Ruby with RSpec and threads

I'm using RSpec to test the behavior of a simple REPL. The REPL just echoes back whatever the input was, unless the input was "exit", in which case it terminates the loop.
To avoid hanging the test runner, I'm running the REPL method inside a separate thread. To make sure that the code in the thread has executed before I write expectations about it, I've found it necessary to include a brief sleep call. If I remove it, the tests fail intermittently because the expectations are sometimes made before the code in the thread has run.
What is a good way to structure the code and spec such that I can make expectations about the REPL's behavior deterministically, without the need for the sleep hack?
Here is the REPL class and the spec:
class REPL
  def initialize(stdin = $stdin, stdout = $stdout)
    @stdin = stdin
    @stdout = stdout
  end

  def run
    @stdout.puts "Type exit to end the session."
    loop do
      @stdout.print "$ "
      input = @stdin.gets.to_s.chomp.strip
      break if input == "exit"
      @stdout.puts(input)
    end
  end
end
describe REPL do
  let(:stdin) { StringIO.new }
  let(:stdout) { StringIO.new }
  let!(:thread) { Thread.new { subject.run } }
  subject { described_class.new(stdin, stdout) }

  # Removing this before hook causes the examples to fail intermittently
  before { sleep 0.01 }
  after { thread.kill if thread.alive? }

  it "prints a message on how to end the session" do
    expect(stdout.string).to match(/end the session/)
  end

  it "prints a prompt for user input" do
    expect(stdout.string).to match(/\$ /)
  end

  it "echoes input" do
    stdin.puts("foo")
    stdin.rewind
    expect(stdout.string).to match(/foo/)
  end
end
Instead of letting :stdout be a StringIO, you could back it by a Queue. Then when you try to read from the queue, your tests will just wait until the REPL pushes something into the queue (aka. writes to stdout).
require 'thread'

class QueueIO
  def initialize
    @queue = Queue.new
  end

  def write(str)
    @queue.push(str)
  end

  def puts(str)
    write(str + "\n")
  end

  def read
    @queue.pop
  end
end

let(:stdout) { QueueIO.new }
I just wrote this up without trying it out, and it may not be robust enough for your needs, but it gets the point across. If you use a data structure to synchronize the two threads like this, then you don't need to sleep at all. Since this removes the non-determinism, you shouldn't see the intermittent failures.
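Used in the spec above, it might look roughly like this (a sketch; note the REPL also calls @stdout.print, so QueueIO would need a print method as well, e.g. alias_method :print, :write):
describe REPL do
  let(:stdin)   { StringIO.new }
  let(:stdout)  { QueueIO.new }
  subject { described_class.new(stdin, stdout) }
  let!(:thread) { Thread.new { subject.run } }
  after { thread.kill if thread.alive? }

  it "prints a message on how to end the session" do
    expect(stdout.read).to match(/end the session/)  # blocks until the REPL writes
  end
end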
I've used a running? guard for situations like this. You probably can't avoid the sleep entirely, but you can avoid unnecessary sleeps.
First, add a running? method to your REPL class.
class REPL
  ...

  def running?
    !!@running
  end

  def run
    @running = true
    loop do
      ...
      if input == 'exit'
        @running = false
        break
      end
      ...
    end
  end
end
Then, in your specs, sleep until the REPL is running:
describe REPL do
  ...
  before { sleep 0.01 until subject.running? }
  ...
end

How to log everything on the screen to a file?

I use one .rb file with rufus/scheduler on Windows. The script is executed on computer start-up and runs in a cmd window.
How can I log everything that Ruby outputs to the screen to a file? I still want to be able to see the output on the screen, so I want the logging on top of the current behaviour.
Windows 7 64 bit
ruby 1.9.3p194 (2012-04-20) [i386-mingw32]
If you just want the script to send output to the file instead of the console, use IO#reopen to redirect stdout and stderr.
def redirect_console(filename)
  $stdout.reopen(filename, 'w')
  $stderr.reopen(filename, 'w')
end

redirect_console('/my/console/output/file')
If you need to send output to one or more streams at once, use a proxy object and method_missing to forward calls to them:
class TeeIO
  def initialize(*streams)
    raise ArgumentError, "Can only tee to IO objects" unless streams.all? { |e| e.is_a? IO }
    @streams = streams
  end

  def method_missing(meth, *args)
    # HACK: only returns the result of the first stream
    @streams.map { |io| io.send(meth, *args) }.first
  end

  def respond_to_missing?(meth, include_all)
    @streams.all? { |io| io.respond_to?(meth, include_all) }
  end
end

def tee_console(filename)
  tee_to = File.open(filename, 'w')
  tee_to.sync = true # flush after each write
  $stdout = TeeIO.new($stdout, tee_to)
  $stderr = TeeIO.new($stderr, tee_to)
end

tee_console('/my/console/output/file')

Test output to command line with RSpec

What I want to do is run ruby sayhello.rb on the command line and then see Hello from RSpec.
I've got that with this:
class Hello
  def speak
    puts 'Hello from RSpec'
  end
end

hi = Hello.new # brings my object into existence
hi.speak
Now I want to write a test in rspec to check that the command line output is in fact "Hello from RSpec"
and not "I like Unix"
NOT WORKING. I currently have this in my sayhello_spec.rb file
require_relative 'sayhello.rb' # points to file so I can 'see' it

describe "sayhello.rb" do
  it "should say 'Hello from Rspec' when ran" do
    STDOUT.should_receive(:puts).with('Hello from RSpec')
  end
end
Can someone point me in the right direction please?
Here's a pretty good way to do this. Copied from the hirb test_helper source:
def capture_stdout(&block)
  original_stdout = $stdout
  $stdout = fake = StringIO.new
  begin
    yield
  ensure
    $stdout = original_stdout
  end
  fake.string
end
Use like this:
output = capture_stdout { Hello.new.speak }
output.should == "Hello from RSpec\n"
The quietly command is probably what you want (cooked into ActiveSupport, see docs at api.rubyonrails.org). This snippet of RSpec code below shows how to ensure there is no output on stderr while simultaneously silencing stdout.
quietly do # silence everything
  commands.each do |c|
    content = capture(:stderr) { # capture anything sent to :stderr
      MyGem::Cli.start(c)
    }
    expect(content).to be_empty, "#{c.inspect} had output on stderr: #{content}"
  end
end
So that you don't have to change your main Ruby code, I just found out you can do something like this:
def my_meth
  print 'Entering my method'
  p 5 * 50
  puts 'Second inside message'
end

describe '#my_meth' do
  it 'puts a 2nd message to the console' do
    expect { my_meth }.to output(/Second inside message/).to_stdout
  end
end
When checking for the desired output text, I wrapped it in / / as a Regexp, because STDOUT contains everything that gets output; using a regex lets you check the whole STDOUT for just the text you want.
Written like this, it works in the terminal just fine.
Just had to share this; it took me days to figure it out.
it "should say 'Hello from Rspec' when run" do
output = `ruby sayhello.rb`
output.should == 'Hello from RSpec'
end
