What I've found so far is that I can redirect stdout to e.g. a StringIO like this:
@original_output = $stdout
@new_output = StringIO.new
$stdout = @new_output
But I also need a "callback" every time something is written to the new stream. It would be fine to subclass StringIO or whatever, but I do not want to override every method (puts, print, ...).
Is there one method that I can override, or how else would I capture everything that is written to that IO?
method_missing and message forwarding seem like a path of least resistance (although others may have better suggestions), e.g.:
class FakeStdout
  attr_reader :output

  def initialize(output = STDOUT)
    @output = output
  end

  def some_callback
    @output.puts 'called'
    # logic
  end

  def method_missing(method_name, *args, **kwargs, &block)
    if @output.respond_to?(method_name)
      some_callback
      @output.public_send(method_name, *args, **kwargs, &block)
    else
      super
    end
  end

  def respond_to_missing?(method_name, include_private = false)
    @output.respond_to?(method_name, include_private) || super
  end
end
Then you can use it as:
$stdout = FakeStdout.new
# called
#=> #<FakeStdout:0x00007ffff4897500 @output=#<IO:<STDOUT>>>
'hello'
# called
#=> "hello"
Caveat: any method inside FakeStdout that would write to the output stream (e.g. print, puts, etc.) should write directly to the @output instance variable, or you will end up with a SystemStackError.
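To make the caveat concrete, here's a sketch of the failure mode once $stdout has been replaced (a hypothetical callback body, not code from the answer above): Kernel#puts writes to $stdout, which is now the FakeStdout itself, so the call re-enters method_missing, which calls the callback again, and so on until the stack blows.
def some_callback
  puts 'called'         # BAD: Kernel#puts -> $stdout (this object) -> method_missing -> some_callback -> ...
  @output.puts 'called' # GOOD: writes straight to the real stream
end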
Something like a tee functionality in Logger.
You can write a pseudo IO class that will write to multiple IO objects. Something like:
class MultiIO
  def initialize(*targets)
    @targets = targets
  end

  def write(*args)
    @targets.each { |t| t.write(*args) }
  end

  def close
    @targets.each(&:close)
  end
end
Then set that as your log file:
log_file = File.open("log/debug.log", "a")
Logger.new MultiIO.new(STDOUT, log_file)
Every time Logger writes to your MultiIO object, it will write to both STDOUT and your log file.
Edit: I went ahead and figured out the rest of the interface. A log device must respond to write and close (not puts). As long as MultiIO responds to those and proxies them to the real IO objects, this should work.
@David's solution is very good. I've made a generic delegator class for multiple targets based on his code.
require 'logger'
class MultiDelegator
  def initialize(*targets)
    @targets = targets
  end

  def self.delegate(*methods)
    methods.each do |m|
      define_method(m) do |*args|
        @targets.map { |t| t.send(m, *args) }
      end
    end
    self
  end

  class << self
    alias to new
  end
end
log_file = File.open("debug.log", "a")
log = Logger.new MultiDelegator.delegate(:write, :close).to(STDOUT, log_file)
If you're in Rails 3 or 4, as this blog post points out, Rails 4 has this functionality built in. So you can do:
# config/environments/production.rb
file_logger = Logger.new(Rails.root.join("log/alternative-output.log"))
config.logger.extend(ActiveSupport::Logger.broadcast(file_logger))
Or if you're on Rails 3, you can backport it:
# config/initializers/alternative_output_log.rb
# backported from rails4
module ActiveSupport
  class Logger < ::Logger
    # Broadcasts logs to multiple loggers. Returns a module to be
    # `extended`'ed into other logger instances.
    def self.broadcast(logger)
      Module.new do
        define_method(:add) do |*args, &block|
          logger.add(*args, &block)
          super(*args, &block)
        end

        define_method(:<<) do |x|
          logger << x
          super(x)
        end

        define_method(:close) do
          logger.close
          super()
        end

        define_method(:progname=) do |name|
          logger.progname = name
          super(name)
        end

        define_method(:formatter=) do |formatter|
          logger.formatter = formatter
          super(formatter)
        end

        define_method(:level=) do |level|
          logger.level = level
          super(level)
        end
      end
    end
  end
end
file_logger = Logger.new(Rails.root.join("log/alternative-output.log"))
Rails.logger.extend(ActiveSupport::Logger.broadcast(file_logger))
For those who like it simple:
log = Logger.new("| tee test.log") # note the pipe ( '|' )
log.info "hi" # will log to both STDOUT and test.log
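The same trick can append to the file instead of truncating it by using tee's standard -a flag. One caveat worth verifying on your Ruby version: the pipe form relies on Logger handing the string to Kernel#open, and newer versions of the stdlib logger open the path with File.open, which does not interpret a leading pipe.
log = Logger.new("| tee -a test.log")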
Or print the message in the Logger formatter:
log = Logger.new("test.log")
log.formatter = proc do |severity, datetime, progname, msg|
puts msg
msg
end
log.info "hi" # will log to both STDOUT and test.log
I'm actually using this technique to print to a log file and a cloud logging service (Logentries), and, in the dev environment, to also print to STDOUT.
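A sketch of that kind of setup (the environment check and the Logentries hand-off are illustrative placeholders, not a real client API):
log = Logger.new("production.log")
log.formatter = proc do |severity, datetime, progname, msg|
  puts msg if ENV["APP_ENV"] == "development" # echo to STDOUT in dev only
  # send_to_logentries(msg)                   # hypothetical hook for the cloud service
  msg
end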
You can also add multiple device logging functionality directly into the Logger:
require 'logger'

class Logger
  # Creates or opens a secondary log file.
  def attach(name)
    @logdev.attach(name)
  end

  # Closes a secondary log file.
  def detach(name)
    @logdev.detach(name)
  end

  class LogDevice # :nodoc:
    attr_reader :devs

    def attach(log)
      @devs ||= {}
      @devs[log] = open_logfile(log)
    end

    def detach(log)
      @devs ||= {}
      @devs[log].close
      @devs.delete(log)
    end

    alias_method :old_write, :write

    def write(message)
      old_write(message)
      @devs ||= {}
      @devs.each do |log, dev|
        dev.write(message)
      end
    end
  end
end
For instance:
logger = Logger.new(STDOUT)
logger.warn('This message goes to stdout')
logger.attach('logfile.txt')
logger.warn('This message goes both to stdout and logfile.txt')
logger.detach('logfile.txt')
logger.warn('This message goes just to stdout')
While I quite like the other suggestions, I found I had this same issue but wanted the ability to have different logging levels for STDERR and the file.
I ended up with a routing strategy that multiplexes at the logger level rather than at the IO level, so that each logger could then operate at independent log-levels:
class MultiLogger
  def initialize(*targets)
    @targets = targets
  end

  %w(log debug info warn error fatal unknown).each do |m|
    define_method(m) do |*args|
      @targets.map { |t| t.send(m, *args) }
    end
  end
end
stderr_log = Logger.new(STDERR)
file_log = Logger.new(File.open('logger.log', 'a'))
stderr_log.level = Logger::INFO
file_log.level = Logger::DEBUG
log = MultiLogger.new(stderr_log, file_log)
Here's another implementation, inspired by @jonas054's answer.
This uses a pattern similar to Delegator. This way you don't have to list all the methods you want to delegate, since it will delegate all methods that are defined in any of the target objects:
class Tee < DelegateToAllClass(IO)
end
$stdout = Tee.new(STDOUT, File.open("#{__FILE__}.log", "a"))
You should be able to use this with Logger as well.
delegate_to_all.rb is available from here: https://gist.github.com/TylerRick/4990898
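For Logger, a minimal sketch (assuming delegate_to_all.rb from the gist is loaded, so Tee delegates write/close like an IO):
require 'logger'
log_file = File.open("debug.log", "a")
logger = Logger.new(Tee.new(STDOUT, log_file))
logger.info "goes to both STDOUT and debug.log"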
Quick and dirty (ref: https://coderwall.com/p/y_b3ra/log-to-stdout-and-a-file-at-the-same-time)
require 'logger'
ll = Logger.new('| tee script.log')
ll.info('test')
@jonas054's answer above is great, but it pollutes the MultiDelegator class with every new delegate. If you use MultiDelegator several times, it keeps adding methods to the class, which is undesirable. (See below for an example.)
Here is the same implementation, but using anonymous classes so the methods don't pollute the delegator class.
class BetterMultiDelegator
  def self.delegate(*methods)
    Class.new do
      def initialize(*targets)
        @targets = targets
      end

      methods.each do |m|
        define_method(m) do |*args|
          @targets.map { |t| t.send(m, *args) }
        end
      end

      class << self
        alias to new
      end
    end # new class
  end # delegate
end
Here is an example of the method pollution with the original implementation, contrasted with the modified implementation:
tee = MultiDelegator.delegate(:write).to(STDOUT)
tee.respond_to? :write
# => true
tee.respond_to? :size
# => false
All is good above. tee has a write method, but no size method as expected. Now, consider when we create another delegate:
tee2 = MultiDelegator.delegate(:size).to("bar")
tee2.respond_to? :size
# => true
tee2.respond_to? :write
# => true !!!!! Bad
tee.respond_to? :size
# => true !!!!! Bad
Oh no, tee2 responds to size as expected, but it also responds to write because of the first delegate. Even tee now responds to size because of the method pollution.
Contrast this with the anonymous class solution, where everything is as expected:
see = BetterMultiDelegator.delegate(:write).to(STDOUT)
see.respond_to? :write
# => true
see.respond_to? :size
# => false
see2 = BetterMultiDelegator.delegate(:size).to("bar")
see2.respond_to? :size
# => true
see2.respond_to? :write
# => false
see.respond_to? :size
# => false
I have written a little RubyGem that allows you to do several of these things:
# Pipe calls to an instance of Ruby's logger class to $stdout
require 'teerb'
log_file = File.open("debug.log", "a")
logger = Logger.new(TeeRb::IODelegate.new(log_file, STDOUT))
logger.warn "warn"
$stderr.puts "stderr hello"
puts "stdout hello"
You can find the code on github: teerb
Are you restricted to the standard logger?
If not, you may use log4r:
require 'log4r'
LOGGER = Log4r::Logger.new('mylog')
LOGGER.outputters << Log4r::StdoutOutputter.new('stdout')
LOGGER.outputters << Log4r::FileOutputter.new('file', :filename => 'test.log') #attach to existing log-file
LOGGER.info('aa') # Writes on STDOUT and sends to file
One advantage: You could also define different log-levels for stdout and file.
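A sketch of that (per-outputter levels via the :level option, per log4r's documentation; treat the exact option name as an assumption):
require 'log4r'
LOGGER = Log4r::Logger.new('mylog')
LOGGER.outputters << Log4r::StdoutOutputter.new('stdout', :level => Log4r::WARN)
LOGGER.outputters << Log4r::FileOutputter.new('file', :filename => 'test.log', :level => Log4r::DEBUG)
LOGGER.info('to file only') # below WARN, so the stdout outputter stays silent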
If you're okay with using ActiveSupport, then I would highly recommend checking out ActiveSupport::Logger.broadcast, which is an excellent and very concise way to add additional log destinations to a logger.
In fact, if you are using Rails 4+ (as of this commit), you don't need to do anything to get the desired behavior — at least if you're using the rails console. Whenever you use the rails console, Rails automatically extends Rails.logger such that it outputs both to its usual file destination (log/production.log, for example) and STDERR:
console do |app|
  …
  unless ActiveSupport::Logger.logger_outputs_to?(Rails.logger, STDERR, STDOUT)
    console = ActiveSupport::Logger.new(STDERR)
    Rails.logger.extend ActiveSupport::Logger.broadcast console
  end
  ActiveRecord::Base.verbose_query_logs = false
end
For some unknown and unfortunate reason, this method is undocumented but you can refer to the source code or blog posts to learn how it works or see examples.
https://www.joshmcarthur.com/til/2018/08/16/logging-to-multiple-destinations-using-activesupport-4.html has another example:
require "active_support/logger"
console_logger = ActiveSupport::Logger.new(STDOUT)
file_logger = ActiveSupport::Logger.new("my_log.log")
combined_logger = console_logger.extend(ActiveSupport::Logger.broadcast(file_logger))
combined_logger.debug "Debug level"
…
I went with the same idea of "delegating all methods to sub-elements" that other people have already explored, but I return, for each method, the return value of the last target's call.
If I didn't, it broke logger-colors, which was expecting an Integer while map was returning an Array.
class MultiIO
  def self.delegate_all
    # Delegate IO's own instance methods (IO.methods would list IO's *class*
    # methods) so this object quacks like an IO for callers such as Logger.
    IO.instance_methods(false).each do |m|
      define_method(m) do |*args|
        ret = nil
        @targets.each { |t| ret = t.send(m, *args) }
        ret # return value of the last target's call
      end
    end
  end

  def initialize(*targets)
    @targets = targets
    MultiIO.delegate_all
  end
end
This will redelegate every method to all targets, and return only the return value of the last call.
Also, if you want colors, STDOUT or STDERR must be put last, since those are the only two targets where colors are supposed to be output. But then it will also output colors to your file.
logger = Logger.new MultiIO.new(File.open("log/test.log", 'w'), STDOUT)
logger.error "Roses are red"
logger.unknown "Violets are blue"
One more way.
If you're using tagged logging and need the tags in another logfile as well, you could do it this way:
# backported from rails4
# config/initializers/active_support_logger.rb
module ActiveSupport
  class Logger < ::Logger
    # Broadcasts logs to multiple loggers. Returns a module to be
    # `extended`'ed into other logger instances.
    def self.broadcast(logger)
      Module.new do
        define_method(:add) do |*args, &block|
          logger.add(*args, &block)
          super(*args, &block)
        end

        define_method(:<<) do |x|
          logger << x
          super(x)
        end

        define_method(:close) do
          logger.close
          super()
        end

        define_method(:progname=) do |name|
          logger.progname = name
          super(name)
        end

        define_method(:formatter=) do |formatter|
          logger.formatter = formatter
          super(formatter)
        end

        define_method(:level=) do |level|
          logger.level = level
          super(level)
        end
      end # Module.new
    end # broadcast

    def initialize(*args)
      super
      @formatter = SimpleFormatter.new
    end

    # Simple formatter which only displays the message.
    class SimpleFormatter < ::Logger::Formatter
      # This method is invoked when a log event occurs.
      def call(severity, time, progname, msg)
        element = caller[4] ? caller[4].split("/").last : "UNDEFINED"
        "#{Thread.current[:activesupport_tagged_logging_tags]} #{time.to_s(:db)} #{severity} #{element} -- #{String === msg ? msg : msg.inspect}\n"
      end
    end
  end # class Logger
end # module ActiveSupport
custom_logger = ActiveSupport::Logger.new(Rails.root.join("log/alternative_#{Rails.env}.log"))
Rails.logger.extend(ActiveSupport::Logger.broadcast(custom_logger))
After this you'll get UUID tags in the alternative logger:
["fbfea87d1d8cc101a4ff9d12461ae810"] 2015-03-12 16:54:04 INFO logger.rb:28:in `call_app' --
["fbfea87d1d8cc101a4ff9d12461ae810"] 2015-03-12 16:54:04 INFO logger.rb:31:in `call_app' -- Started POST "/psp/entrypoint" for 192.168.56.1 at 2015-03-12 16:54:04 +0700
Hope that helps someone.
One more option ;-)
require 'logger'

class MultiDelegator
  def initialize(*targets)
    @targets = targets
  end

  def method_missing(method_sym, *arguments, &block)
    @targets.each do |target|
      target.send(method_sym, *arguments, &block) if target.respond_to?(method_sym)
    end
  end
end
log = MultiDelegator.new(Logger.new(STDOUT), Logger.new(File.open("debug.log", "a")))
log.info('Hello ...')
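One gap in this version: respond_to? reports false for the delegated methods. A small hedged addition, following the same respond_to_missing? pattern as the FakeStdout answer near the top:
class MultiDelegator
  def respond_to_missing?(method_sym, include_private = false)
    @targets.any? { |t| t.respond_to?(method_sym, include_private) } || super
  end
end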
This is a simplification of @rado's solution.
def delegator(*methods)
  Class.new do
    def initialize(*targets)
      @targets = targets
    end

    methods.each do |m|
      define_method(m) do |*args|
        @targets.map { |t| t.send(m, *args) }
      end
    end

    class << self
      alias for new
    end
  end # new class
end # delegator
It has all the same benefits as his without the need for the outer class wrapper. It's a useful utility to have in a separate Ruby file.
Use it as a one-liner to generate delegator instances like so:
io_delegator_instance = delegator(:write, :read).for(STDOUT, STDERR)
io_delegator_instance.write("blah")
OR use it as a factory like so:
logger_delegator_class = delegator(:log, :warn, :error)
secret_delegator = logger_delegator_class.for(main_logger, secret_logger)
secret_delegator.warn("secret")
general_delegator = logger_delegator_class.for(main_logger, debug_logger, other_logger)
general_delegator.log("message")
You can use the Loog::Tee object from the loog gem:
require 'loog'
logger = Loog::Tee.new(first, second)
Exactly what you are looking for.
I also had this need recently, so I implemented a library that does this. I just discovered this Stack Overflow question, so I'm putting it out there for anyone who needs it: https://github.com/agis/multi_io.
Compared to the other solutions mentioned here, this strives to be an IO object of its own, so it can be used as a drop-in replacement for other regular IO objects (files, sockets etc.)
That said, I've not yet implemented all the standard IO methods, but those that are implemented follow the IO semantics (for example, #write returns the sum of the number of bytes written to all the underlying IO targets).
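To illustrate that contract with plain IO objects (a sketch of the semantics being described, not the gem's API):
# a sum-of-bytes write across multiple targets
def multi_write(targets, str)
  targets.sum { |t| t.write(str) } # IO#write returns the number of bytes written
end
multi_write([$stdout, File.open("out.log", "a")], "abc") #=> 6 (3 bytes to each target)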
You can inherit Logger and override the write method:
class LoggerWithStdout < Logger
  def initialize(*)
    super
    # Define a singleton method on the log device so every write is echoed to stdout.
    def @logdev.write(msg)
      super
      puts msg
    end
  end
end
logger = LoggerWithStdout.new('path/to/log_file.log')
I like the MultiIO approach. It works well with Ruby's Logger, but with code that expects a real IO it stops working, because it lacks some methods IO objects are expected to have. Pipes were mentioned earlier in this thread; here is what works best for me.
def watch(cmd)
  output = StringIO.new
  IO.popen(cmd) do |fd|
    until fd.eof?
      bit = fd.getc
      output << bit
      $stdout.putc bit
    end
  end
  output.rewind
  [output.read, $?.success?]
ensure
  output.close
end
result, success = watch('./my/shell_command as a String')
Note: I know this doesn't answer the question directly, but it is strongly related. Whenever I searched for output to multiple IOs I came across this thread, so I hope you find this useful too.
I think STDOUT should be used for critical runtime info and raised errors.
So I use
$log = Logger.new('process.log', 'daily')
for debug and regular logging, and then added a few
puts "doing stuff..."
calls where I need STDOUT information to show that my scripts are running at all!
Bah, just my 10 cents :-)
I've got a small but growing framework for building .NET systems with Ruby/Rake that I've been working on for a while now.
require 'rake/tasklib'

def assemblyinfo(name = :assemblyinfo, *args, &block)
  Albacore::AssemblyInfoTask.new(name, *args, &block)
end

module Albacore
  class AssemblyInfoTask < Albacore::AlbacoreTask
    def execute(name)
      asm = AssemblyInfo.new
      asm.load_config_by_task_name(name)
      call_task_block(asm)
      asm.write
      fail if asm.failed
    end
  end
end
The pattern that this code follows is repeated about 20 times in the framework. The difference in each version is the name of the class being created/called (instead of AssemblyInfoTask, it may be MSBuildTask or NUnitTask) and the contents of the execute method; each task has its own execute implementation.
I'm constantly fixing bugs in this pattern of code, and every fix has to be repeated 20 times.
I know it's possible to do some meta-programming magic and wire up this code for each of my tasks from a single location... but I'm having a really hard time getting it to work.
My idea is that I want to be able to call something like this:
create_task :assemblyinfo do |name|
  asm = AssemblyInfo.new
  asm.load_config_by_task_name(name)
  call_task_block(asm)
  asm.write
  fail if asm.failed
end
and this would wire up everything I need.
I need help! Tips, suggestions, someone willing to tackle this... how can I keep from having to repeat this pattern of code over and over?
Update: you can get the full source code here: http://github.com/derickbailey/Albacore/. The provided code is in /lib/rake/assemblyinfotask.rb.
OK, here's some metaprogramming that will do what you want (in Ruby 1.8 or 1.9):
def create_task(taskname, &execute_body)
  taskclass = :"#{taskname}Task"
  taskmethod = taskname.to_s.downcase.to_sym

  # open up the metaclass for main
  (class << self; self; end).class_eval do
    # can't pass a default to a block parameter in ruby18
    define_method(taskmethod) do |*args, &block|
      # set default name if none given
      args << taskmethod if args.empty?
      Albacore.const_get(taskclass).new(*args, &block)
    end
  end

  Albacore.const_set(taskclass, Class.new(Albacore::AlbacoreTask) do
    define_method(:execute, &execute_body)
  end)
end
create_task :AssemblyInfo do |name|
  asm = AssemblyInfo.new
  asm.load_config_by_task_name(name)
  call_task_block(asm)
  asm.write
  fail if asm.failed
end
The key tools in the metaprogrammer's toolbox are:
class << self; self; end - to get at the metaclass for any object, so you can define methods on that object
define_method - so you can define methods using current local variables
Also useful are:
const_set, const_get - allow you to set/get constants
class_eval - allows you to define methods using def as if you were inside a class ClassName ... end body
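A minimal, runnable illustration of those tools working together (greet and GREETING are hypothetical names, not from the Albacore code):
greeting = "hello"
(class << self; self; end).class_eval do
  define_method(:greet) { greeting } # the block closes over the local variable
end
Object.const_set(:GREETING, greeting.upcase) # create a constant at runtime
puts greet                       #=> hello
puts Object.const_get(:GREETING) #=> HELLO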
Something like this, tested on Ruby 1.8.6:
class String
  def camelize
    self.split(/[^a-z0-9]/i).map { |w| w.capitalize }.join
  end
end

class AlbacoreTask; end

def create_task(name, &block)
  klass = Class.new AlbacoreTask
  klass.send :define_method, :execute, &block
  Object.const_set "#{name.to_s.camelize}Task", klass
end

create_task :test do |name|
  puts "test: #{name}"
end

testing = TestTask.new
testing.execute 'me'
The core piece is the create_task method; it:
creates a new class
adds the execute method
names the class and exposes it
EDIT: I slightly changed the spec, to better match what I imagined this to do.
Well, I don't really want to fake C# attributes; I want to one-up them and support AOP as well.
Given the program:
class Object
  def Object.profile
    # magic code here
  end
end

class Foo
  # This is the fake attribute, it profiles a single method.
  profile
  def bar(b)
    puts b
  end

  def barbar(b)
    puts(b)
  end

  comment("this really should be fixed")
  def snafu(b)
  end
end

Foo.new.bar("test")
Foo.new.barbar("test")
puts Foo.get_comment(:snafu)
Desired output:
Foo.bar was called with param: b = "test"
test
Foo.bar call finished, duration was 1ms
test
This really should be fixed
Is there any way to achieve this?
I have a somewhat different approach:
class Object
  def self.profile(method_name)
    return_value = nil
    time = Benchmark.measure do
      return_value = yield
    end
    puts "#{method_name} finished in #{time.real}"
    return_value
  end
end
require "benchmark"
module Profiler
def method_added(name)
profile_method(name) if #method_profiled
super
end
def profile_method(method_name)
#method_profiled = nil
alias_method "unprofiled_#{method_name}", method_name
class_eval <<-ruby_eval
def #{method_name}(*args, &blk)
name = "\#{self.class}##{method_name}"
msg = "\#{name} was called with \#{args.inspect}"
msg << " and a block" if block_given?
puts msg
Object.profile(name) { unprofiled_#{method_name}(*args, &blk) }
end
ruby_eval
end
def profile
#method_profiled = true
end
end
module Comment
def method_added(name)
comment_method(name) if #method_commented
super
end
def comment_method(method_name)
comment = #method_commented
#method_commented = nil
alias_method "uncommented_#{method_name}", method_name
class_eval <<-ruby_eval
def #{method_name}(*args, &blk)
puts #{comment.inspect}
uncommented_#{method_name}(*args, &blk)
end
ruby_eval
end
def comment(text)
#method_commented = text
end
end
class Foo
  extend Profiler
  extend Comment

  # This is the fake attribute, it profiles a single method.
  profile
  def bar(b)
    puts b
  end

  def barbar(b)
    puts(b)
  end

  comment("this really should be fixed")
  def snafu(b)
  end
end
A few points about this solution:
I provided the additional methods via modules which could be extended into new classes as needed. This avoids polluting the global namespace for all modules.
I avoided using alias_method for the method_added hook, since module includes allow AOP-style extensions (in this case, for method_added) without the need for aliasing.
I chose to use class_eval rather than define_method to define the new method in order to be able to support methods that take blocks. This also necessitated the use of alias_method.
Because I chose to support blocks, I also added a bit of text to the output in case the method takes a block.
There are ways to get the actual parameter names, which would be closer to your original output, but they don't really fit in a response here. You can check out merb-action-args, where we wrote some code that required getting the actual parameter names. It works in JRuby, Ruby 1.8.x, Ruby 1.9.1 (with a gem), and Ruby 1.9 trunk (natively).
The basic technique here is to store a class instance variable when profile or comment is called, which is then applied when a method is added. As in the previous solution, the method_added hook is used to track when the new method is added, but instead of removing the hook each time, the hook checks for an instance variable. The instance variable is removed after the AOP is applied, so it only applies once. If this same technique were used multiple times, it could be further abstracted.
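Stripped to its skeleton, that flag pattern looks like this (illustrative names, not code from the answer above):
module Flagged
  def flag
    @flag_next = true # class instance variable set by the fake 'attribute'
  end

  def method_added(name)
    if @flag_next
      @flag_next = nil # consume the flag so it applies to the next method only
      puts "flagged: #{name}" # the AOP wrapping would happen here
    end
    super
  end
end

class Example
  extend Flagged
  flag
  def tracked; end   # prints "flagged: tracked"
  def untracked; end # nothing printed
end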
In general, I tried to stick as close to your "spec" as possible, which is why I included the Object.profile snippet instead of implementing it inline.
Great question. This is my quick attempt at an implementation (I did not try to optimise the code). I took the liberty of adding the profile method to the Module class. In this way it will be available in every class and module definition. It would be even better to extract it into a module and mix it into the Module class whenever you need it.
I also didn't know if the point was to make the profile method behave like Ruby's public/protected/private keywords, but I implemented it like that anyway. All methods defined after calling profile are profiled, until noprofile is called.
class Module
  def profile
    require "benchmark"
    @profiled_methods ||= []
    class << self
      # Save any original method_added callback.
      alias_method :__unprofiling_method_added, :method_added

      # Create new callback.
      def method_added(method)
        # Possible infinite loop if we do not check if we already replaced this method.
        unless @profiled_methods.include?(method)
          @profiled_methods << method
          unbound_method = instance_method(method)
          define_method(method) do |*args|
            puts "#{self.class}##{method} was called with params #{args.join(", ")}"
            bench = Benchmark.measure do
              unbound_method.bind(self).call(*args)
            end
            puts "#{self.class}##{method} finished in %.5fs" % bench.real
          end
          # Call the original callback too.
          __unprofiling_method_added(method)
        end
      end
    end
  end

  def noprofile # What's the opposite of profile?
    class << self
      # Remove profiling callback and restore previous one.
      alias_method :method_added, :__unprofiling_method_added
    end
  end
end
You can now use it as follows:
class Foo
  def self.method_added(method) # This still works.
    puts "Method '#{method}' has been added to '#{self}'."
  end

  profile

  def foo(arg1, arg2, arg3 = nil)
    puts "> body of foo"
    sleep 1
  end

  def bar(arg)
    puts "> body of bar"
  end

  noprofile

  def baz(arg)
    puts "> body of baz"
  end
end
Call the methods as you would normally:
foo = Foo.new
foo.foo(1, 2, 3)
foo.bar(2)
foo.baz(3)
And get benchmarked output (and the result of the original method_added callback just to show that it still works):
Method 'foo' has been added to 'Foo'.
Method 'bar' has been added to 'Foo'.
Method 'baz' has been added to 'Foo'.
Foo#foo was called with params 1, 2, 3
> body of foo
Foo#foo finished in 1.00018s
Foo#bar was called with params 2
> body of bar
Foo#bar finished in 0.00016s
> body of baz
One thing to note is that it is impossible to dynamically get the names of the arguments with Ruby meta-programming. You'd have to parse the original Ruby file, which is certainly possible but a little more complex. See the parse_tree and ruby_parser gems for details.
A fun improvement would be to be able to define this kind of behaviour with a class method in the Module class. It would be cool to be able to do something like:
class Module
  method_wrapper :profile do |*arguments|
    # Do something before calling method.
    yield *arguments # Call original method.
    # Do something afterwards.
  end
end
I'll leave this meta-meta-programming exercise for another time. :-)