I'm using Ruby 1.9.2 p180.
I'm writing a continuous evaluation tool for Rubyvis (to be part of SciRuby). Basically, you set up the Rubyvis::Panel in your input file (e.g., test.rb), and this SciRuby class (Plotter) watches test.rb for modifications. When there's a change, SciRuby runs the script through eval.
The script works if I run it from the command line, but when executed through eval, the plot is wrong -- a straight line, as if all the data were gone, instead of the expected plot. Note: previously this question said the SVG output was different -- but it turns out that was the result of REXML being loaded instead of nokogiri.
Here are the test scripts and eval code. Most produce the straight line (with exceptions described in the edit section below).
I haven't the faintest idea how this is happening. I have a few ideas as to why it might be, but no clue as to the mechanism.
Hypotheses:
eval doesn't allow deep copies to be made. Objects that are taken from eval are missing pieces in certain contexts, particularly when lambda is used to process the data into the correct format for the plot.
For some reason, eval is not honoring the bundled dep list when require is called -- maybe the wrong version of nokogiri is being used in my binding?
Some other required library (perhaps RSVG?) has overloaded some method Rubyvis uses.
Has anyone seen anything like this before? I'm sort of feeling around in the dark -- at a total loss as to where to begin troubleshooting.
Edit 9/15/11: new information
It seems that the call to OpenStruct.new is causing the problem.
If I define data as a list of lists, data = pv.range(0,10,0.1).map { |d| [d,Math.sin(d)+2+rand()] }, it works well.
But when data is defined as a list of OpenStructs, the following code gives incorrect output:
data = pv.range(0, 10, 0.1).map { |x|
  o = OpenStruct.new({ :x => x, :y => Math.sin(x) + 2 + rand() })
  STDERR.puts o.inspect # Output is correct.
  o
}
STDERR.puts "size of data: #{data.size}"
STDERR.puts "first x = #{data.first.x}" # Output is first x = 0.0
STDERR.puts "first y = #{data.first.y}" # Output is first y = (WRONG)
I can even induce an error if I use collect when assigning the data, e.g.,
vis.add(pv.Line).data(data.collect { |d| [d.x, d.y] })
plotter.rb:88:in `block in <main>': undefined method `x' for [0.0, nil]:Array (NoMethodError)
versus no error with vis.add(pv.Line).data(data). The error appears to originate from the call to eval("vis.render()", bind) in my app's source (not in the plot script).
It turns out that if I just use a hash, e.g., {:x => x, :y => Math.sin(x)}, that works fine. But when I explicitly say Hash.new({:x => x, :y => Math.sin(x)}), that gives an error regardless of how I call vis.data:
rubyvis/lib/rubyvis/internals.rb:184:in `each': comparison of Hash with Hash failed (ArgumentError)
So the difference is in how my data is assigned. The question is: why?
Copies of the inputs are available in the original gist. Thanks for your help.
It turns out that if I just use a hash, e.g., {:x => x, :y => Math.sin(x)}, that works fine. But when I explicitly say Hash.new({:x => x, :y => Math.sin(x)}), that gives an error regardless of how I call vis.data:
First of all, your call to Hash.new is wrong. Hash.new takes a parameter that's the default value of the hash.
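A quick illustration of that (nothing Rubyvis-specific): the argument to Hash.new becomes the default value returned for missing keys, not the hash's contents.
h = Hash.new({:x => 0})
h             # => {}          (still empty)
h[:anything]  # => {:x=>0}     (the shared default object, returned for any missing key)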
I wanted to post this as a comment, not an answer:
Have you tried load instead of eval?
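For reference, a minimal sketch of that suggestion, assuming the watcher currently evaluates the file's contents with eval in its own binding: load re-runs the whole file in a fresh top-level scope instead.
# Instead of something like eval(File.read("test.rb"), bind):
load "test.rb"   # re-executes the script from the top each time it changes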
Related
In the first statement below, Pry returns a normal-looking object.
In the second, Pry specifies a lambda in the object, but also adds #(pry) with a reference to the line inside the Pry session (:37). Why doesn't the first return value contain #(pry)? Or, conversely, why does the second return value contain it?
{}.to_proc
# => #<Proc:0x9b3fed0>
lambda {}
# => #<Proc:0x97db9c4#(pry):37 (lambda)>
The second example is a literal, and the proc (lambda) is created there within Ruby code, where it gets the source location.
In the first example, the proc is created by executing a C method (to_proc). C code is compiled into the Ruby interpreter as binary code, so it does not make sense to report a C location in place of a Ruby source location. In fact, you also won't get a source location for the method itself (which is not the same as the "source location" of the proc it generates, but should be close to it, if both were given):
{}.method(:to_proc).source_location # => nil
However, if the source is written as part of Ruby code, you get the source location:
irb(main):001:0> def to_proc
irb(main):002:1> Proc.new{}
irb(main):003:1> end
=> :to_proc
irb(main):004:0> {}.to_proc
=> #<Proc:0x007f387602af70#(irb):2>
This doesn't have anything to do with Pry. This is what you get when you call inspect on these two Procs.
I'm not 100% sure, but I have a theory. In the second example, you're passing a block to lambda. Although you don't have any code inside the block, you ordinarily would, and when debugging (which is what inspect is ordinarily used for) line numbers are important.
In the first example, though, there's no block. You're calling Hash#to_proc on an empty Hash (which is irrelevant; you get the same result with Symbol#to_proc etc.), and so there's no code to associate a line number with; a line number wouldn't even really make sense.
You can see where this happens in the proc_to_s function in proc.c, by the way.
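You can also see the same split without inspect via Proc#source_location (a quick check; the exact values depend on where you run it):
lambda { }.source_location        # => ["(irb)", 1]  -- defined in Ruby code
:upcase.to_proc.source_location   # => nil           -- built by a C method, no Ruby source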
I'd love to use a method signature like:
def register(something, on:, for:)
This works, but I can't work out how to use "for" without causing a syntax error! Rather annoying; does anyone know a way around this?
The problem is not the method definition line that you posted; the problem is the use of the for variable inside the method body. Since for is a reserved word, you cannot use it as a plain variable name, but you can use it as a hash key. In your case that means resorting to arbitrary keyword arguments (**opts) in the definition, while callers can still pass the keyword argument for: at the call site. You may want to raise an ArgumentError if the key is not present, to emulate the behavior of the method signature you posted above.
def register(something, on:, **opts)
  raise ArgumentError, 'missing keyword: for' unless opts.has_key?(:for)
  for_value = opts[:for]
  puts "registering #{something} on #{on} for #{for_value}"
end
register 'chocolate chips', on: 'cookie'
# ArgumentError: missing keyword: for
register 'chocolate chips', on: 'cookie', for: 'cookie monster'
# registering chocolate chips on cookie for cookie monster
In Ruby, for is a reserved keyword; you simply cannot use reserved words in any way other than how they were meant to be used.
That's the whole purpose of reserving keywords.
Additional resources on which keywords are reserved in Ruby:
http://www.rubymagic.org/posts/ruby-and-rails-reserved-words
http://www.java2s.com/Code/Ruby/Language-Basics/Rubysreservedwords.htm
(Thanks @cremno for this comment.) Neither link mentions __ENCODING__. Also, since 2.2.0 Ruby comes with an official document: $ ri ruby:keywords
UPD
Actually, you can still use the :for symbol as a key in a hash (say, an options hash), so you can write something like this:
def test(something, options = {})
  puts something
  puts options.values.join(' and ')
end
and it works like a charm:
[4] pry(main)> test 'arguments', :for => :lulz, :with => :care, :while => 'you are writing code'
arguments
lulz and care and you are writing code
binding.local_variable_get(:for)
is one way I was thinking of. It only works in Ruby 2.1+, I think.
NOTE: Don't do this, I'm just interested in how you could get round it, you probably should just call your named parameter something else :)
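Putting that together, a rough sketch of how it would look (Ruby 2.1+; again, renaming the parameter is probably the better option):
def register(something, on:, for:)
  # `for` can be declared as a keyword argument, but not referenced directly;
  # read it via the binding instead.
  for_value = binding.local_variable_get(:for)
  puts "registering #{something} on #{on} for #{for_value}"
end

register 'chocolate chips', on: 'cookie', for: 'cookie monster'
# registering chocolate chips on cookie for cookie monster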
I'd like to see if a proc call with the same arguments will give the same results every time. pureproc called with arguments is free, so every time I call pureproc(1,1), I'll get the same result. dirtyproc called with arguments is bound within its environment, and thus even though it has the same arity as pureproc, its output will depend on the environment.
ruby-1.9.2-p136 :001 > envx = 1
=> 1
ruby-1.9.2-p136 :003 > pureproc = Proc.new{ |a,b| a+b }
=> #<Proc:0x...>
ruby-1.9.2-p136 :004 > dirtyproc = Proc.new{ |a,b| a+b+envx }
How can I programmatically determine whether a called proc or a method is free, as defined by only being bound over the variables that must be passed in? Any explanation of bindings, local variables, etc would be welcome as well.
Probably you can parse the source using some gem like sourcify, take out all the tokens, and check if there is anything that is a variable. But note that this is a different concept from the value of the proc/method call being constant. For example, if you had things like Time.now or Random.new in your code, that does not require any variable to be defined, but will still vary every time you call. Also, what would you want to be the case when a proc has envx - envx? That will remain constant, but will still affect the code in the sense that it will return an error unless envx is defined.
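To illustrate that distinction with a concrete case:
# Closes over no local variables, yet returns a different value on each call.
impure_but_free = Proc.new { Time.now.to_f + rand }
impure_but_free.call  # => varies per call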
Hm, tricky. There's the parameters method that tells you about expected arguments (note how they are optional because you are using procs, not lambdas).
pureproc.parameters
=> [[:opt, :a], [:opt, :b]]
dirtyproc.parameters
=> [[:opt, :a], [:opt, :b]]
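For comparison, the same block built as a lambda reports its parameters as required (a small check of the procs-vs-lambdas note above):
lambda { |a, b| a + b }.parameters
# => [[:req, :a], [:req, :b]]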
As for determining whether or not one of the closed over variables are actually used to compute the return value of the proc, walking the AST comes to mind (there are gems for that), but seems cumbersome. My first idea was something like dirtyproc.instance_eval { local_variables }, but since both closures close over the same environment, that obviously doesn't get you very far...
The overall question though is: if you want to make sure something is pure, why not make it a proper method where you don't close over the environment in the first place?
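A minimal sketch of that last suggestion: a method defined with def does not close over the surrounding local variables, so it cannot silently depend on them.
envx = 1

def pure_add(a, b)
  a + b
  # a + b + envx would raise NameError: undefined local variable or method `envx'
end

pure_add(1, 1)  # => 2, every time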
I want to merge a value into a Thor option hash.
If I just use merge! I get an error from HashWithIndifferentAccess.
I have read the documentation, but I have difficulty understanding how to get it to work. I hope this question will help me both find an answer to how to merge a value into this kind of hash and understand how to read the documentation.
p options.inspect
#=> "{\"ruby\"=>\"/Users/work/.rbenv/versions/1.9.2-p290/bin/ruby\"}"
p options.merge!(:a => true)
#=> hash_with_indifferent_access.rb:26:in `[]=': can't modify frozen hash (RuntimeError)
The hash is frozen:
"Prevents further modifications to obj. A RuntimeError will be raised
if modification is attempted. There is no way to unfreeze a frozen
object."
You can copy options to a new hash (which will be unfrozen) and modify that instead.
new_options = options.dup
options = new_options
options.merge!(:a => "this will work now")
Or if you want it to be even briefer:
options=options.dup
options.merge!(:a => "this will work now")
The Thor library returns a frozen hash by default, so another option would be to modify the library to return unfrozen hashes, but I think the first solution is good enough.
Below is a link to the source code for Thor's parse function; you'll notice it freezes the "assigns" hash prior to actually returning it. (Go to the bottom of the page and, under (Object) parse(args), click 'View Source'. The freezing is on line 83 of the source snippet.)
http://rubydoc.info/github/wycats/thor/master/Thor/Options
I would like to do some fairly heavy-duty reflection in Ruby. I want to create a function that returns the names of the arguments of various calling functions higher up the call stack (just one higher would be enough but why stop there?). I could use Kernel.caller, go to the file and parse the argument list but that would be ugly and unreliable.
The function that I would like would work in the following way:
module A
  def method1(tuti, fruity)
    foo
  end

  def method2(bim, bam, boom)
    foo
  end

  def foo
    print caller_args[1].join(",") # the "1" means one step up the call stack
  end
end
A.method1
#prints "tuti,fruity"
A.method2
#prints "bim, bam, boom"
I would not mind using ParseTree or some similar tool for this task but looking at Parsetree, it is not obvious how to use it for this purpose. Creating a C extension like this is another possibility but it would be nice if someone had already done it for me.
I can see that I'll probably need some kind of C extension. I suppose that means my question is what combination of C extension and existing libraries would work most easily. I don't think caller + ParseTree would be enough by themselves.
As far as why I would like to do this goes, rather than saying "automatic debugging", perhaps I should say that I would like to use this functionality to do automatic checking of the calling and return conditions of functions:
def add x, y
  check_positive
  return x + y
end
Where check_positive would throw an exception if x and y weren't positive. Obviously, there would be more to it than that but hopefully this gives enough motivation.
In Ruby 1.9.2, you can trivially get the parameter list of any Proc (and thus of course also of any Method or UnboundMethod) with Proc#parameters:
A.instance_method(:method1).parameters # => [[:req, :tuti], [:req, :fruity]]
The format is an array of pairs of symbols: type (required, optional, rest, block) and name.
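For example (a throwaway method just to show each type):
def demo(a, b = 1, *rest, &blk); end
method(:demo).parameters
# => [[:req, :a], [:opt, :b], [:rest, :rest], [:block, :blk]]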
For the format you want, try
A.instance_method(:method1).parameters.map(&:last).map(&:to_s)
# => ['tuti', 'fruity']
Of course, that still doesn't give you access to the caller, though.
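If you did want to bridge that gap, one fragile sketch (not part of the answer above) is to parse the method name out of caller and then ask that method for its parameters; it assumes the calling method is an instance method reachable from self's class and that the backtrace string format holds:
def caller_param_names(depth = 1)
  return [] unless caller[depth] =~ /`([^']+)'/
  self.class.instance_method($1.to_sym).parameters.map { |_type, name| name.to_s }
end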
I suggest you take a look at Merb's action-args library.
require 'rubygems'
require 'merb'
include GetArgs
def foo(bar, zed=42)
end
method(:foo).get_args # => [[[:bar], [:zed, 42]], [:zed]]
If you don't want to depend on Merb, you can choose and pick the best parts from the source code in github.
I have a method that is quite expensive and only almost works.
$shadow_stack = []

set_trace_func(lambda { |event, file, line, id, binding, classname|
  if event == "call"
    $shadow_stack.push( eval("local_variables", binding) )
  elsif event == "return"
    $shadow_stack.pop
  end
})

def method1( tuti, fruity )
  foo
end

def method2(bim, bam, boom)
  foo
  x = 10
  y = 3
end

def foo
  puts $shadow_stack[-2].join(", ")
end

method1(1,2)
method2(3,4,4)
Outputs
tuti, fruity
bim, bam, boom, x, y
I'm curious as to why you'd want such functionality in such a generalized manner.
I'm curious how you think this functionality would allow for automatic debugging? You'd still need to inject calls to your "foo" function. In fact, something based on set_trace_func is more able to be automatic, as you don't need to touch existing code. Indeed this is how debug.rb is implemented, in terms of set_trace_func.
The solutions to your precise question are indeed basically as you outlined: use caller + ParseTree, or open the file and grab the data that way. There is no reflection capability that I am aware of that will let you get the names of arguments. You can improve upon my solution by grabbing the associated method object and calling #arity to infer which of the local_variables are arguments, but although the result of that function appears to be ordered, I'm not sure it is safe to rely on that. If you don't mind me asking, once you have the data and the interface you describe, what are you going to do with it? Automatic debugging was not what initially came to mind when I imagined uses for this functionality, although perhaps that is a failure of imagination on my part.
Aha!
I would approach this differently then. There are several ruby libraries for doing design by contract already, including ruby-contract, rdbc, etc.
Another option is to write something like:
def positive
  lambda { |x| x >= 0 }
end

def any
  lambda { |x| true }
end

class Module
  def define_checked_method(name, *checkers, &body)
    define_method(name) do |*args|
      unless checkers.zip(args).all? { |check, arg| check[arg] }
        raise "bad argument"
      end
      body.call(*args)
    end
  end
end

class A
  define_checked_method(:add, positive, any) do |x, y|
    x + y
  end
end

a = A.new
p a.add(3, 2)
p a.add(3, -1)
p a.add(-4, 2)
Outputs
5
2
checked_rb.rb:13:in `add': bad argument (RuntimeError)
from checked_rb.rb:29
Of course this can be made much more sophisticated, and indeed that's some of what the libraries I mentioned provided, but perhaps this is a way to get you where you want to go without necessarily taking the path you planned to use to get there?
If you also want the default values, there's the "arguments" gem:
$ gem install rdp-arguments
$ irb
>> require 'arguments'
>> require 'test.rb' # class A is defined here
>> Arguments.names(A, :go)
In fact, the method you describe clearly fails to distinguish arguments from local variables, while also failing to work automatically.
That's because what you're trying to do is not something that is supported. It's possible (everything is possible in Ruby), but there's no documented or known way to do it.
Either you can eval the backtrace as Logan suggested, or you can bust out your C compiler and hack the source code for Ruby. I'm reasonably confident there aren't any other ways to do this.