Just curious why I can't remove a declared local variable from the 'local_variables' array.
Example:
x=1
myarr = local_variables.clone
p local_variables
=> [:x, :_]
p myarr
=> [:x, :_]
p local_variables.class
=> Array
p myarr.class
=> Array
myarr.delete :x
p myarr
=> [:_]
local_variables.delete :x
p local_variables
=> [:x, :_]
WTF?
I suspected that calling local_variables.delete with the parameter :x somehow reinserts it, as if it were declared anew. But calling it with another, previously undeclared symbol doesn't change the result either:
p local_variables
=> [:x, :_]
local_variables.delete :whatever
p local_variables
=> [:x, :_]
Can somebody explain?
Thanks.
local_variables returns an array containing the names of all currently declared local variables. You can do anything you want with that array, but that obviously isn't going to affect which local variables are declared. Why would it? If you strike a name out of a phone book, does that person die?
Consider the following method:
def foo
  [42, 23, 13]
end
foo #=> [42, 23, 13]
foo.delete 23
foo #=> [42, 23, 13]
Does that behaviour surprise you?
Every time you call local_variables (or foo), a new array is created. Modifying that new array has no effect on what you get the next time you call local_variables.
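A quick way to see this (a small sketch): each call to local_variables builds a brand-new Array object, so mutating one copy cannot affect the next call.
x = 1
a = local_variables
b = local_variables
a.equal?(b)      #=> false, two distinct Array objects
a.delete(:x)
local_variables  #=> still contains :x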
AFAIK it's not possible to remove variables:
http://programming.itags.org/ruby/64695/
Related
I have an array of hashes with names and ages:
array = [ {"bill" => 12}, {"tom" => 13}, {"pat" => 14} ]
I realize that, by calling the first method, this happens:
array.first # => {"bill" => 12}
Without defining a class, I'd like to do:
array.first.name # => "bill"
How can I do that?
Doing:
def name
  array[0].keys
end
will just define a private method at the top level, which can't be called on an explicit receiver such as array.first.
You have:
h = array.first #=> {"bill" => 12}
h.class #=> Hash
so if you want to create a method first, such that:
h.name #=> "bill"
you must define name on the class of its receiver (h), which is Hash. @shivam has shown you how to do that, but (as @AndrewMarshall points out in a comment) it pollutes the class Hash. A better way is to use Refinements.
Refinements were an experimental addition in v2.0, then modified and made permanent in v2.1. They provide a way to avoid "monkey-patching" by offering "a way to extend a class locally".
We can do that as follows. First, refine Hash within a module M:
module M
  refine Hash do
    def name
      keys.first
    end
  end
end
The refinement has no effect until the keyword using is invoked on the module:
h = {"bill" => 12}
puts h.name
#-> undefined method `name' for {"bill"=>12}:Hash (NoMethodError)
After activating the refinement we can invoke name on h:
using M
h.name
#-> bill
The refinement applies to the remainder of the file, but does not apply to code in other files. Suppose we were to add the following to the present file:
class A
  def greeting(h)
    puts "Hello, #{h.name}"
  end
end
Then:
A.new.greeting({"bill" => 12})
#-> Hello, bill
Since your array consists of Hashes, array.first returns a Hash: {"bill" => 12}.
You can now define a method Hash#name that can be applied to array.first (a Hash):
class Hash
  def name
    self.keys.first
  end
end
array.first.name
# => "bill"
# to print all names
array.each { |a| puts a.name }
# bill
# tom
# pat
# or collect them in an Array
array.map(&:name)
# => ["bill", "tom", "pat"]
Is there a better way to write this? Basically, I want to add an argument to a hash. If the argument is a key-value pair, I'd like to add it as is; if the argument is a string, I'd like to add it as a key with a nil value. The code below works, but is there a more appropriate (simpler) way?
Second question: does calling each on an array with two block arguments |key, val| automatically convert the array to a hash, as it appears to?
@some_hash = {}

def some_method(input)
  if input.is_a? Hash
    input.each { |key, val| @some_hash[key] = val }
  else
    input.split(" ").each { |key, val| @some_hash[key] = val }
  end
end

some_method("key" => "val")
This gives the result described in the question, but it behaves differently from the code the OP gave (which means that the OP's code does not actually do what the question says):
@some_hash = {}

def some_method(input)
  case input
  when Hash then @some_hash.merge!(input)
  when String then @some_hash[input] = nil
  end
end

some_method("foo" => "bar")
some_method("baz")
@some_hash # => {"foo" => "bar", "baz" => nil}
Second question
An array is never automatically converted to a hash. What you are probably referring to is the fact that the elements of an array nested within an array, [[:foo, :bar]], can be referred to separately in:
[[:foo, :bar]].each{|f, b| puts f; puts b}
# => foo
# => bar
That is due to destructuring assignment. When necessary, Ruby splits an array into its elements and adjusts them to the number of block variables. It is the same as:
f, b = [:foo, :bar]
f # => :foo
b # => :bar
Here, you don't get f # => [:foo, :bar] and b # => nil.
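The same destructuring explains what happens in the question's code: a Hash yields [key, value] pairs to the block, while an Array of strings yields single elements, so val simply ends up nil. A small illustration:
{ "a" => 1 }.each { |key, val| p [key, val] }
#=> ["a", 1]

["a", "b"].each { |key, val| p [key, val] }
#=> ["a", nil]
#=> ["b", nil]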
This question already has answers here: How to create a deep copy of an object in Ruby?
I'm trying to clone a hash to make a new copy of the original, but it seems that when I set a value in the new hash, it has the same effect on the original hash.
require 'json'

rr = Hash.new
command = "/usr/local/bin/aws route53 list-resource-record-sets --hosted-zone-id EXAMPLEID --max-items 1"
rr = JSON.parse(%x{#{command}})
puts rr

if rr["ResourceRecordSets"][0]["TTL"] != 60
  new_rr = rr.clone
  new_rr["ResourceRecordSets"][0]["TTL"] = 60
  puts rr
  puts new_rr
end
Output:
{"NextRecordType"=>"MX", "NextRecordName"=>"example.com.", "ResourceRecordSets"=>[{"ResourceRecords"=>[{"Value"=>"1.2.3.4"}], "Type"=>"A", "Name"=>"example.com.", "TTL"=>1800}], "MaxItems"=>"1", "IsTruncated"=>true}
{"NextRecordType"=>"MX", "NextRecordName"=>"example.com.", "ResourceRecordSets"=>[{"ResourceRecords"=>[{"Value"=>"1.2.3.4"}], "Type"=>"A", "Name"=>"example.com.", "TTL"=>60}], "MaxItems"=>"1", "IsTruncated"=>true}
{"NextRecordType"=>"MX", "NextRecordName"=>"example.com.", "ResourceRecordSets"=>[{"ResourceRecords"=>[{"Value"=>"1.2.3.4"}], "Type"=>"A", "Name"=>"example.com.", "TTL"=>60}], "MaxItems"=>"1", "IsTruncated"=>true}
I don't see Hash#clone documented for Ruby 2.0; should I be using another method to create a copy of a Hash now?
Thanks in advance.
A Hash is a collection of keys and values, where the values are references to objects. When you duplicate a hash, a new hash is created, but the object references are copied as they are, so as a result you get a new hash containing references to the same objects. That is why this works:
hash = {1 => 'Some string'} # strings are mutable
hash2 = hash.clone
hash2[1]                  #=> 'Some string'
hash2[1].upcase!          # modifying the shared (mutable) object
hash[1]                   #=> 'SOME STRING' # so it appears modified in both hashes
hash2[1] = 'Other string' # pointing the key in the second hash at another object
hash[1]                   #=> 'SOME STRING' # the original hash is not affected
hash2[2] = 'new value'    # adding a new key to the cloned hash
hash[2]                   #=> nil           # it does not appear in the original
If you want to duplicate the referenced objects as well, you need to perform a deep duplication. Rails (the activesupport gem) provides this as the deep_dup method. If you are not using Rails and don't want to install the gem, you can write it yourself like this:
class Hash
  def deep_dup
    Hash[map { |key, value|
      dup_value = value.respond_to?(:deep_dup) ? value.deep_dup : (value.dup rescue value)
      [key, dup_value]
    }]
  end
end
hash = {1 => 'Some string'} # strings are mutable
hash2 = hash.deep_dup
hash2[1]         #=> 'Some string'
hash2[1].upcase! # modifying the referenced object
hash2[1]         #=> 'SOME STRING'
hash[1]          #=> 'Some string' # the original is untouched; hash2 holds its own copy
You would probably want to write something similar for Array. I would also think about writing it for the whole Enumerable module, but that might be slightly trickier.
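For example, a matching Array#deep_dup along the same lines might look like this (a sketch following the same idea, not a tested library method):
class Array
  def deep_dup
    map do |value|
      if value.respond_to?(:deep_dup)
        value.deep_dup
      else
        value.dup rescue value # immediates like Integer or Symbol can't be dup'ed on older Rubies
      end
    end
  end
end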
The easiest way to make a deep copy of most Ruby objects (including strings, arrays, hashes and combinations thereof) is to use Marshal:
def deep_copy(obj)
  Marshal.load(Marshal.dump(obj))
end
For example,
h = {a: 1, b: [:c, d: {e: 4}]} # => {:a=>1, :b=>[:c, {:d=>{:e=>4}}]}
hclone = h.clone
hdup = h.dup
hmarshal = deep_copy(h)
h[:b][1][:d][:e] = 5
h # => {:a=>1, :b=>[:c, {:d=>{:e=>5}}]}
hclone # => {:a=>1, :b=>[:c, {:d=>{:e=>5}}]}
hdup # => {:a=>1, :b=>[:c, {:d=>{:e=>5}}]}
hmarshal # => {:a=>1, :b=>[:c, {:d=>{:e=>4}}]}
I'm looking for an ordered set, or alternatively an Array which prevents duplicate entries.
Is there some kind of object in Ruby which:
1. responds to [], []= and <<
2. silently drops duplicate entries
3. is Enumerable (or at least supports find_all)
4. preserves the order in which entries were inserted?
As far as I can tell, an Array supports points 1, 3 and 4, while a Set supports 1, 2 and 3 (but not 4). And a SortedSet won't do, because my entries don't implement <=>.
As of Ruby 1.9, the built-in Hash object preserves insertion order. For example:
h = {}
h[:z] = 1
h[:b] = 2
h[:a] = 3
h[:x] = 0
p h.keys #=> [:z, :b, :a, :x]
h.delete :b
p h.keys #=> [:z, :a, :x]
h[:b] = 1
p h.keys #=> [:z, :a, :x, :b]
So, you can set any value (like a simple true) for any key and you now have an ordered set. You can test for a key using either h.key?(obj) or, if you always set each key to have a truthy value, just h[obj]. To remove a key, use h.delete(obj). To convert the ordered set to an array, use h.keys.
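Putting that together, a minimal sketch of a Hash used as an ordered set:
ordered_set = {}
ordered_set[:a] = true
ordered_set[:b] = true
ordered_set[:a] = true # re-adding an existing key does not change its position
ordered_set.key?(:b)   #=> true
ordered_set.delete(:b)
ordered_set.keys       #=> [:a]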
Because the Ruby 1.9 Set library happens to be built on top of a Hash, you can currently use Set as an ordered set as well. (For example, the to_a method's implementation is just @hash.keys.) Note, however, that this behavior is not guaranteed by the library and might change in the future.
require 'set'
s = Set[ :f, :o, :o, :b, :a, :r ] #=> #<Set: {:f, :o, :b, :a, :r}>
s << :z #=> #<Set: {:f, :o, :b, :a, :r, :z}>
s.delete :o #=> #<Set: {:f, :b, :a, :r, :z}>
s << :o #=> #<Set: {:f, :b, :a, :r, :z, :o}>
s << :o #=> #<Set: {:f, :b, :a, :r, :z, :o}>
s << :f #=> #<Set: {:f, :b, :a, :r, :z, :o}>
s.to_a #=> [:f, :b, :a, :r, :z, :o]
There isn't one as far as I know, and a Set by its mathematical nature is meant to be unordered (or at least, implementation-wise, not to guarantee order; in fact it's usually implemented as a hash table, so it tends to scramble the order).
However, it's not hard to either extend Array directly or subclass it to do this. I just tried it out, and the following works:
class UniqueArray < Array
  def initialize(*args)
    if args.size == 1 and args[0].is_a? Array then
      super(args[0].uniq)
    else
      super(*args)
    end
  end

  def insert(i, v)
    super(i, v) unless include?(v)
  end

  def <<(v)
    super(v) unless include?(v)
  end

  def []=(*args)
    # note: could just call super(*args) then uniq!, but this is faster
    # there are three different versions of this call:
    #   1. start, length, value
    #   2. index, value
    #   3. range, value
    # we just need to get the value
    v = case args.size
        when 3 then args[2]
        when 2 then args[1]
        else nil
        end
    super(*args) if v.nil? or not include?(v)
  end
end
That seems to cover all the bases. I used O'Reilly's handy Ruby Cookbook as a reference; they have a recipe for "Making sure a sorted array stays sorted" which is similar.
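A quick usage sketch of the UniqueArray above:
ua = UniqueArray.new([1, 2, 2, 3]) #=> [1, 2, 3]
ua << 2                            # already present, silently dropped
ua << 4
ua                                 #=> [1, 2, 3, 4]
ua[1] = 4                          # value already present, so the assignment is skipped
ua                                 #=> [1, 2, 3, 4]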
I like this solution, although it requires ActiveSupport's OrderedHash:
require 'set'
require 'active_support/ordered_hash'

class OrderedSet < Set
  def initialize(enum = nil, &block)
    @hash = ActiveSupport::OrderedHash.new
    super
  end
end
=)
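A quick sketch of using it (assuming the activesupport gem is installed):
s = OrderedSet.new
s << :b << :a << :c
s << :a  # duplicate, ignored
s.to_a   #=> [:b, :a, :c]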
You could use a Hash to store the entries, keeping an incrementing counter as the value of each pair. Then you can access the set in insertion order, albeit slowly, by looking at the entries via their values.
I'll try to add some code in here later to explain further.
I am aware accessing via values is much slower than by keys.
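A rough sketch of that idea (just an illustration of the counter approach):
set = {}
counter = 0
add = ->(obj) { set[obj] ||= (counter += 1) }

add[:b]; add[:a]; add[:b]; add[:c]
set.keys.sort_by { |key| set[key] } #=> [:b, :a, :c], insertion order recovered via the counter values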
Update 1: In Ruby 1.9, Hash elements are iterated in their insertion order.
Not that I know of, but it wouldn't be hard to roll your own. Just subclass Array and use a Set to maintain your uniqueness constraint.
One question about the silent dropping: how would this affect #[]=? If I were trying to overwrite an existing entry with something that is already stored elsewhere, should it still remove the would-be-overwritten element? I think either way could provide nasty surprises down the road.
We can easily define a method and turn it into a block with the unary ampersand.
def my_method(arg)
  puts arg * 2
end
['foo', 'bar'].each(&method(:my_method))
# foofoo
# barbar
# or
my_method = ->(arg) { puts arg*2 }
['foo', 'bar'].each(&my_method)
# same output
As we can see, each element is passed automatically as the first argument when we iterate over a collection. But what if we need to pass two or even more arguments?
my_method = ->(arg,num) { puts arg*num }
['foo', 'bar'].each(&my_method)
# ArgumentError: wrong number of arguments (1 for 2)
['foo', 'bar'].each(&my_method(3))
# NoMethodError: undefined method `foo' for main:Object
['foo', 'bar'].each do |i, &my_method|
  yield i, 3
end
# LocalJumpError: no block given (yield)
Is it possible to pass additional arguments while turning a proc into a block?
@sawa is right. You can do that with curry.
Proc version:
mult = proc {|a, b| a * b} # => #<Proc:0x00000002af1098#(irb):32>
[1, 2].map(&mult.curry[2]) # => [2, 4]
Method version:
def mult(a, b)
  a * b
end

[1, 2].map(&method(:mult).to_proc.curry[2]) # => [2, 4]
Regarding your comment:
Strange, but it swaps arguments during the performance
Actually, the argument order is preserved.
curry returns a new proc that effectively collects arguments until there are enough arguments to invoke the original method / proc (based on its arity). This is achieved by returning intermediate procs:
def foo(a, b, c)
  { a: a, b: b, c: c }
end

curried_proc = method(:foo).to_proc.curry #=> #<Proc:0x007fd09b84e018 (lambda)>
curried_proc[1] #=> #<Proc:0x007fd09b83e320 (lambda)>
curried_proc[1][2] #=> #<Proc:0x007fd09b82cfd0 (lambda)>
curried_proc[1][2][3] #=> {:a=>1, :b=>2, :c=>3}
You can pass any number of arguments at once to a curried proc:
curried_proc[1][2][3] #=> {:a=>1, :b=>2, :c=>3}
curried_proc[1, 2][3] #=> {:a=>1, :b=>2, :c=>3}
curried_proc[1][2, 3] #=> {:a=>1, :b=>2, :c=>3}
curried_proc[1, 2, 3] #=> {:a=>1, :b=>2, :c=>3}
Empty arguments are ignored:
curried_proc[1][][2][][3] #=> {:a=>1, :b=>2, :c=>3}
However, you obviously can't alter the argument order.
An alternative to currying is partial application which returns a new proc with lower arity by fixing one or more arguments. Unlike curry, there's no built-in method for partial application, but you can easily write your own:
my_proc = ->(arg, num) { arg * num }

def fix_first(proc, arg)
  ->(*args) { proc[arg, *args] }
end

fixed_proc = fix_first(my_proc, 'foo') #=> #<Proc:0x007fa31c2070d0 (lambda)>
fixed_proc[2] #=> "foofoo"
fixed_proc[3] #=> "foofoofoo"

[2, 3].map(&fixed_proc) #=> ["foofoo", "foofoofoo"]
Or fixing the last argument:
def fix_last(proc, arg)
  ->(*args) { proc[*args, arg] }
end

fixed_proc = fix_last(my_proc, 2) #=> #<Proc:0x007fa31c2070d0 (lambda)>
fixed_proc['foo'] #=> "foofoo"
fixed_proc['bar'] #=> "barbar"

['foo', 'bar'].map(&fixed_proc) #=> ["foofoo", "barbar"]
Of course, you are not limited to fixing single arguments. You could for example return a proc that takes an array and converts it to an argument list:
def splat_args(proc)
  ->(array) { proc[*array] }
end

splatting_proc = splat_args(my_proc)

[['foo', 1], ['bar', 2], ['baz', 3]].map(&splatting_proc)
#=> ["foo", "barbar", "bazbazbaz"]