Confused with Chef syntax - Ruby

Look at the following line of chef code:
node.default['apache']['dir'] = '/etc/apache2'
In the official Chef docs, it says node is an object and default is a method on it, so how can square brackets (I thought this was Hash syntax) follow a method call?
I come from a Python background and I'm new to Ruby. Maybe this is general Ruby syntax, or maybe it's Chef-specific; I'm just confused by it.

node.default() (which is really an alias for node.attributes().default()) returns an instance of Chef::Node::VividMash, which works much like a normal Hash object but implements the deep-set behavior you see there (you can set a deeply nested key without first creating the intervening levels).
tl;dr: don't worry about it; we do a lot of object trickery to make the DSL look as nice as possible.
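To illustrate the general Ruby mechanism: square brackets are just the [] method called on whatever object node.default returns. Here is a loose sketch of a deep-setting Hash subclass; this is an illustration only, not Chef's actual VividMash implementation:

class DeepMash < Hash
  # reading a missing key auto-creates a nested DeepMash, so a deep
  # assignment works without building each intermediate level first
  def [](key)
    key?(key) ? super : (self[key] = DeepMash.new)
  end
end

node_default = DeepMash.new
node_default['apache']['dir'] = '/etc/apache2'  # no intermediate setup needed
puts node_default['apache']['dir']              # => /etc/apache2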

Related

Why does RuboCop, or the Ruby style guide, prefer not to use get_ or set_?

I was running RuboCop on my project and fixing the complaints it raised.
One particular complaint bothered me:
Do not prefix reader method names with get_
I could not understand much from this complaint, so I looked at the source code on GitHub.
I found this snippet:
def bad_reader_name?(method_name, args)
  method_name.start_with?('get_') && args.to_a.empty?
end

def bad_writer_name?(method_name, args)
  method_name.start_with?('set_') && args.to_a.one?
end
So the advice or convention is as follows:
1) They advise us not to use get_ when the method takes no arguments; otherwise get_ is allowed.
2) They advise us not to use set_ when the method takes exactly one argument; otherwise set_ is allowed.
What is the reason behind this convention, rule, or advice?
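Concretely, given the cop logic above, these hypothetical methods show what gets flagged and what passes:

def get_name                # flagged: get_ prefix and no arguments
  @name
end

def get_value(key)          # allowed: takes an argument, so it is a lookup, not a plain reader
  @store[key]
end

def set_name(value)         # flagged: set_ prefix and exactly one argument
  @name = value
end

def set_pixel(x, y, color)  # allowed: more than one argument
  @canvas[[x, y]] = color
end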
I think the point here is that Ruby devs prefer to always think of methods as getters, since they return something, and to use the equals "syntactic sugar" (as in def self.dog=(params), which lets you write Class.dog = something). In essence, the point I've always seen made is that get and set are redundant and verbose.
In opposition to this you have get and set with multiple args which are like finder methods (particularly get; think of ActiveRecord's where).
Keep in mind that a style guide is pure opinion. Consistency is the higher goal of style guides in general, so unless something is arguably wrong or difficult to read, your goal should be having everything consistent rather than conforming to any particular style. Which is why RuboCop lets you turn this cop off.
Another way to see it: the getter/setter paradigm was, as far as I can tell, largely a convention specific to Java/C++ and the like; at least I knew quite a few Java codebases in the very foggy past where Beans were littered with huge numbers of get_-getters and set_-setters. At that time, the private attribute would likely be called "name", with "set_name()" and "get_name()"; as the attribute itself was called "name", the getter could not be "name()" as well.
Hence the very existence of get_ and set_ is due to a (trivial) technical shortcoming of languages that do not allow "=" in method names.
In Ruby, we have quite a different array of possibilities:
First and foremost, we have name() and name=(), which immediately removes the need for the get_ and set_ prefixes. So we do have getters and setters; we just name them differently than Java does.
Also, the attribute then is not name but @name, hence solving this conflict as well.
Actually, you don't have attributes exposed via the plain obj.name syntax at all! For example, while Rails/ActiveRecord pretends that person.name is an "attribute", it is in fact simply a pair of auto-generated methods: a getter name() and a setter name=(). Conceptually, the caller is not supposed to care about what name is (attribute or method); it is an implementation detail of the class.
It also saves three or four keystrokes for each call. This might seem like a joke, but writing concise yet easily comprehensible code is a trademark of Ruby, after all. :-)
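As a sketch of that convention, here is a hypothetical Person class (attr_accessor :name would generate the same pair of methods):

class Person
  def initialize(name)
    @name = name
  end

  # getter: named after the attribute, no get_ prefix
  def name
    @name
  end

  # setter: the trailing = enables assignment syntax
  def name=(value)
    @name = value
  end
end

person = Person.new('Ada')
person.name = 'Grace'   # actually calls person.name=('Grace')
puts person.name        # => Grace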
The way I understand it, it's because foo.get_value is imperative and foo.value is declarative.

Is it possible to change Ruby's frozen object handling behaviour?

I am submitting solutions to Ruby puzzles on codewars.com and, for one of the challenges, experimenting with how locked into the testing environment I am.
I can redefine the classes used to test my solution, but they are defined by the system after I submit my code. If I freeze these objects, the system cannot write over them, but a RuntimeError is raised when it tries to.
I'm fairly new to Ruby, so I'm not sure which parts (other than falsiness and truthiness) are impossible to override. Can I use Ruby code to force modification of frozen objects to silently fail instead of terminating the program, or is that bound up in untouchable things like the assignment operator?
The real answer here is that if you might want to modify an object later, you shouldn't freeze it. That's inherent in the whole concept of "freezing" an object. But since you asked, note that you can test whether an object is frozen with:
obj.frozen?
So if those pesky RuntimeErrors are getting you down, one solution is to use a guard clause like:
obj.do_something! if !obj.frozen?
If you want to make the guard clauses implicit, you can redefine the "problem" methods using a monkey patch:
class Array
  # there are a couple of other ways to do this;
  # read up on Ruby metaprogramming if you want to know more
  alias :__pop__ :pop

  def pop
    frozen? ? nil : __pop__
  end
end
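With that patch in place, popping a frozen array silently returns nil instead of raising; for example:

arr = [1, 2, 3].freeze
arr.pop   # => nil, no RuntimeError raised
arr       # => [1, 2, 3], still intact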
If you want your code to work seamlessly with any and all Ruby libraries/gems, adding behavior to built-in methods like this is probably a bad idea. In this case, I doubt it will cause any problems, but whenever you choose to start hacking on Ruby's core classes, you have to be ready for the possible consequences.

Adding a "source" attribute to ruby objects using Rubinius

I'm attempting (for fun and profit) to add the ability to inspect objects in Ruby and discover their source code: not the generated bytecode, and not some decompiled version of the internal representation, but the actual source that was parsed to create that object.
I was up quite late learning about Rubinius, and while I don't yet fully have my head around it, I think I've made some progress.
I'm having trouble figuring out how to do this, though. My first approach was to simply add another instance attribute to the AST structures (for, say, a ClosedScope object), then somehow pull that attribute out again when the bytecode is interpreted at runtime.
Does this seem like a sound approach?
As Mr Samuel says, you can just use pry and do show-source foo. But perhaps you'd like to know how it works under the hood.
Ruby provides two things that are useful: firstly, you can get a list of all methods on an object by calling foo.methods. Secondly, it can tell you the file name and line number at which each method was defined.
To find the entire source code for an object, we scan through all the methods and group them by where they are defined. We then scan back up each file until we see class or module (or one of the other constructs Rubyists use to define methods), and then scan forward until we have identified the entire class/module definition.
As dgitized points out, we often end up with multiple such definitions if people have monkey-patched core objects. By default pry only shows the module definition that contains the most methods, but you can request the others with show-source -a.
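The per-method lookup is plain Ruby: Method#source_location returns the file and line where a method was defined. A minimal sketch, using a hypothetical Widget class:

class Widget
  def spin
    :spinning
  end
end

w = Widget.new
w.methods.include?(:spin)                      # => true
file, line = w.method(:spin).source_location
puts "#{file}:#{line}"                         # where spin was defined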
Have you looked into Pry? It is a Ruby interpreter/debugger that claims to be able to do just what you've asked.
Have you tried set_trace_func? It's not Rubinius-specific, but it does what you ask and isn't based on Pry or some other gem.
See http://www.ruby-doc.org/core-1.9.3/Kernel.html#method-i-set_trace_func
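The example from those docs looks roughly like this: the callback fires for every event (line, call, return, and so on) with the file and line currently being executed:

set_trace_func proc { |event, file, line, id, binding, classname|
  printf "%8s %s:%-2d %10s %8s\n", event, file, line, id, classname
}

def test
  1 + 1
end

test   # prints a trace line for each event while test runs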

Naming convention for syntactic sugar methods

I'm building a library for generic reporting to Excel (using the Spreadsheet gem), and most of the time I'll be writing things out on the last created worksheet (or the "active" worksheet, as I mostly refer to it).
So I'm wondering if there's a naming convention for methods that are mostly sugar for the normal/unsugared method.
For instance, a while ago I saw a blog post (scroll down to "Composite") where the author used #method for the sugared version and #method! for the unsugared/manual one.
Could this be said to be a normal way of doing things, or is it just an odd implementation?
What I'm thinking of doing now is:
add_row(data)
add_row!(sheet, data)
This feels like a good fit to me, but is there a consensus on how these kinds of methods should be named?
Edit
I'm aware that ! is used for "dangerous" methods and ? for query/boolean methods, which is why I got curious whether the usage in Prawn (the blog post) could be said to be normal.
I think it's fair to say that your definitions:
add_row(data)
add_row!(sheet, data)
are going to confuse Ruby users. There are a good number of naming conventions in the Ruby community that are considered a de facto standard. For example, bang methods are meant to modify the receiver; see map and map!. Another convention is adding ? as a suffix to methods that return a boolean; see all? or any? for reference.
I used to see bang methods as the more dangerous version of a regular named method:
Array#reverse!, which modifies the array itself instead of returning a new array with the elements in reverse order.
ActiveRecord::Base#save! (from Rails), which validates the model and saves it if it's valid; but unlike the regular version, which returns true or false depending on whether the model was saved, it raises an exception if the model is invalid.
I don't remember seeing bang methods as sugared alternatives to regular methods. Maybe I'd give such methods their own distinct name rather than just adding a bang to the regular version's name.
Why have two separate methods? You could instead make the sheet an optional parameter, for example:
def add_row(sheet = active_sheet, data)
  ...
end
Default values don't have to be static; in this case the default is the result of calling the active_sheet method. If my memory is correct, prior to Ruby 1.9 you'd have had to swap the parameters, as optional parameters couldn't be followed by non-optional ones.
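A minimal runnable sketch of that idea, using a hypothetical Report class where active_sheet simply returns the last sheet:

class Report
  def initialize
    @sheets = [[]]  # start with one empty sheet
  end

  def active_sheet
    @sheets.last
  end

  # the leading parameter defaults to the active sheet (valid since Ruby 1.9)
  def add_row(sheet = active_sheet, data)
    sheet << data
  end
end

report = Report.new
report.add_row(%w[a b c])         # written to the active sheet
other = []
report.add_row(other, %w[x y z])  # written to an explicitly passed sheet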
I'd agree with other answers that ! has rather different connotations to me.

Using Hash Tables as function inputs

I am working in a group that is writing some APIs for tools that we are using in Ruby. When writing API methods, many of my teammates use hash tables as the method's only parameter, while I write my methods with each value specified.
For example, a class Apple defined as:
class Apple
  # commonName
  # volume
  # color
end
I would instantiate the class with:
Apple.new( commonName, volume, color )
My teammates would write it so the call looked like:
Apple.new( {"commonName"=>commonName, "volume"=>volume, "color"=>color} )
I don't like using a hash table as the input. To me it seems unnecessarily bulky and doesn't add any clarity to the code. While it doesn't appear to be a big deal in this example, some of our methods have more than 10 parameters, and there will often be hash tables nested inside other hash tables. I also noticed that using hash tables this way is extremely uncommon in public APIs (net/telnet is the only exception I can think of right now).
Question: what arguments could I make to my team members against using hash tables as input parameters? The bulkiness of the code isn't sufficient justification (they are not afraid of writing 200-400 character lines), and excessive memory/processing overhead won't work either, because it won't become an issue with the way our tools will be used.
Actually, if your method takes more than 10 arguments, you should either redesign your class or eat dirt and use hashes. For any method that takes more than 4 arguments, positional arguments can be counter-intuitive at the call site, because you have to remember the order correctly.
I think the best solution would be to simply redesign such methods and use something like the builder or fluent interface patterns.
First of all, you should chide them for using strings instead of symbols for hash keys.
One issue with using a hash is that you then have to check that all the appropriate keys are in it. This makes hashes useful for optional parameters, but for mandatory ones, why not use the built-in functionality of the language? For example, with their method, what happens if I do this:
Apple.new( {"commonName"=>commonName, "volume"=>volume} )
Whereas, with Apple.new(commonName, volume), you know you'll get an ArgumentError.
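If you are on Ruby 2.1+, required keyword arguments are one way to get named call sites and an ArgumentError for missing keys at the same time; a sketch using the same Apple example:

class Apple
  attr_reader :commonName, :volume, :color

  # required keyword arguments: named at the call site,
  # but omitting one still raises ArgumentError
  def initialize(commonName:, volume:, color:)
    @commonName = commonName
    @volume = volume
    @color = color
  end
end

Apple.new(commonName: 'Gala', volume: 0.2, color: 'red')  # fine
Apple.new(commonName: 'Gala', volume: 0.2)                # ArgumentError: missing keyword: color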
Named parameters make for more self-documenting code, which is nice. But other than that, there's not a lot of difference. The Hash allows for more flexibility, especially if you start doing any method aliasing. Also, the various Hash methods in ActiveSupport make setting defaults and verifying inputs pretty painless. I guess this probably wasn't the answer you were looking for.
