How can I compare classes in Ruby?

How can I compare classes in Ruby or in other words how can I translate this Java code into Ruby?
Class<?> clazz = ...;
if (clazz == String.class) {
    ...
} else if (clazz == Integer.class) {
    ...
}
To clarify: I do not want to compare object instances or check whether an object is an instance of a class, i.e. is_a? and kind_of? don't help me.

The literal translation of your Java code would be something like
klass = ...
if klass == String
  ...
elsif klass == Integer
  ...
end
Or, more idiomatically
klass = ...
case
when klass == String
  ...
when klass == Integer
  ...
end
Or maybe
klass = ...
DISPATCH_TABLE = {
  String  => -> { ... },
  Integer => -> { ... }
}
DISPATCH_TABLE[klass].()
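One hedged note on the dispatch-table sketch: DISPATCH_TABLE[klass] returns nil for a class that isn't listed, so a fetch with a default (or an explicit error) is safer:
handler = DISPATCH_TABLE.fetch(klass) do
  -> { raise ArgumentError, "unsupported class: #{klass}" }
end
handler.()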
However, Ruby is an object-oriented language, and in an object-oriented language this simply doesn't make sense. You would instead just write
class String
  def do_something
    ...
  end
end

class Integer
  def do_something
    ...
  end
end
and the language will perform the dispatching for you. This is called polymorphism and is supported by pretty much every object-oriented language and many non-object-oriented ones as well.
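For example, with the elided do_something bodies above filled in, the call site needs no class check at all; this is just a sketch assuming those definitions:
obj = rand < 0.5 ? "hello" : 42
obj.do_something   # Ruby picks String#do_something or Integer#do_something at runtime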
This particular transformation is one of the fundamental refactorings described in Martin Fowler's book Refactoring (p. 255); it is called Replace Conditional With Polymorphism.
The biggest problem with providing a sensible solution to your problem is that you don't tell us what the problem is. You only tell us what the solution is. Or, more precisely, you tell us what you think the solution is in Java, and you somehow expect that the solution in Ruby would be exactly 100% identical, even though the languages couldn't be more different.
To provide a good solution, we need to know the problem first. In other words: the most relevant parts of your question are the …s

>> "foo".class == "bar".class
true

In Ruby, all class objects are instances of the Class class. The names of classes are constants that refer to those instances. If you have a reference to a class object, you can compare it with the named constant for that class. So you could do:
if klass == String
  ...
elsif klass == Integer
  ...
end
The expression String.class == Integer.class would be true, since the class of both classes is, of course, Class.
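A quick illustration of that relationship:
klass = "foo".class             #=> String
klass.instance_of?(Class)       #=> true
klass == String                 #=> true
String.class                    #=> Class
String.class == Integer.class   #=> true, both are Class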

Related

Setting an instance variable with `attr_accessor` in a constructor leads to a `nil` value when accessed outside the class [duplicate]


Template methods in ruby

Simple question:
In Java you can define:
<T> void myFunction(T value) { /* do stuff */ }
Is there an equivalent in ruby, and if not, how can I achieve a similar result (passing class types?)
You can pass a class to a method just like passing normal objects. For example
def create_object(klass, *args)
  klass.new(*args)
end
create_object(String) #=> ""
create_object(Hash) #=> {}
create_object(Array, 3, :hello) #=> [:hello, :hello, :hello]
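If you also want the Java-style guarantee that the value actually is of the given type, you can pass the class alongside the value and check it explicitly; a minimal sketch (my_function is a hypothetical name):
def my_function(klass, value)
  raise TypeError, "expected #{klass}, got #{value.class}" unless value.is_a?(klass)
  # do stuff with value
end

my_function(String, "hello")   # ok
my_function(Integer, "hello")  # raises TypeError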
First a few definitions
Generics is an abstraction over types
Polymorphism is a sum-type pattern
Composition is a product-type pattern
Most OO languages lean towards polymorphism
Ruby is an OO language, and polymorphism is at the core of its design. Static type signatures (RBS) only started arriving with Ruby 3, so we may see more interesting generics-style tooling at that point; but so far, generics are not a language feature.
To achieve this, we technically need a way of applying a method to separate types without knowing the type in advance; otherwise a lot of code duplication creeps in.
Your Java example…
<T> void myFunction(T value) { /* do stuff */ }
…can be translated into Ruby as
def myFunction(value)
  raise "Only works with T types" unless value.is_a? T
  # do stuff
end
Where the magic now has to happen is in defining the possible set of T. I'm thinking something like…
class T
  def _required_for_MyFunction
    raise "T is abstract!"
  end
end

class Something < T
  def _required_for_MyFunction
    # does something
  end
end

class Nothing < T
  def _required_for_MyFunction
    # does nothing
  end
end
The painful part of polymorphism is that you have to define your type space from the get-go. The good part is that you have total control of the domain space.
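A hypothetical call site for the Something/Nothing hierarchy sketched above, just to show where the dispatch happens:
def my_function(value)
  value._required_for_MyFunction   # dispatch is decided by value's class
end

my_function(Something.new)   # runs Something's implementation
my_function(Nothing.new)     # runs Nothing's (empty) implementation
my_function(T.new)           # raises "T is abstract!"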
Ruby uses duck typing: you can pass arguments of any class to any method (which covers the original reason you might want generics). If you want to get the class of such an argument, you can still use #class:
def foo(bar)
  bar.class
end

foo 'baz' # => String
foo 42    # => Integer (Fixnum on Ruby < 2.4)
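If you do want to branch on the argument's class, case/when works, because Class#=== tests whether the value is an instance of that class; a small sketch (describe is a hypothetical name):
def describe(value)
  case value
  when String  then "a string of length #{value.length}"
  when Integer then "the integer #{value}"
  else              "a #{value.class}"
  end
end

describe('baz') #=> "a string of length 3"
describe(42)    #=> "the integer 42"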

Ruby's attr_accessor vs setter method vs local variable [duplicate]


Why does Ruby not have boolean class, instead using TrueClass and FalseClass? [duplicate]

I was working on serializing values when found out about this one. Ruby has a TrueClass class, and a FalseClass class, but it has no Boolean class. I'd like to know why is this.
I see some advantages in using a Boolean; for example, string parsing could be centralized on it.
Ruby developers are smarter than me, so there must be a lot of good reasons that I just don't see. But right now it looks to me like having a OneClass and a TwoClass instead of Fixnum.
The purpose of a class is to group similar objects, or objects with similar behavior together. 1 and 2 are very similar, therefore it makes perfect sense for them to be in the same class. true and false however are not similar. In fact, their whole point is that they are exactly the opposite of each other and have opposite behavior. Therefore, they don't belong in the same class.
Can you give an example of what sort of common behavior you would implement in a Boolean class? I can't think of anything.
Let's just look at the behavior that TrueClass and FalseClass have: there are exactly four methods there. No more. And in every single case, the two implementations do exactly the opposite. How and why would you put that in a single class?
Here's how you implement all those methods:
class TrueClass
  def &(other)
    other
  end

  def |(_)
    self
  end

  def ^(other)
    !other
  end

  def to_s
    'true'
  end
end
And now the other way around:
class FalseClass
  def &(_)
    self
  end

  def |(other)
    other
  end

  def ^(other)
    other
  end

  def to_s
    'false'
  end
end
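For strictly boolean arguments the simplified bodies above agree with the built-in operators, e.g.:
true & false   #=> false
true | false   #=> true
true ^ true    #=> false
false ^ true   #=> true
false.to_s     #=> "false"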
Granted, in Ruby, there is a lot of "magic" that is going on behind the scenes and that is not actually handled by TrueClass and FalseClass but rather hardwired into the interpreter. Stuff like if, &&, || and !. However, in Smalltalk, from which Ruby borrowed a lot including the concept of FalseClass and TrueClass, all of these are implemented as methods as well, and you can do the same thing in Ruby:
class TrueClass
  def if
    yield
  end

  def ifelse(then_branch = -> {}, _ = nil)
    then_branch.()
  end

  def unless
  end

  def unlesselse(_ = nil, else_branch = -> {})
    ifelse(else_branch, _)
  end

  def and
    yield
  end

  def or
    self
  end

  def not
    false
  end
end
And again the other way around:
class FalseClass
  def if
  end

  def ifelse(_ = nil, else_branch = -> {})
    else_branch.()
  end

  def unless
    yield
  end

  def unlesselse(unless_branch = -> {}, _ = nil)
    ifelse(_, unless_branch)
  end

  def and
    self
  end

  def or
    yield
  end

  def not
    true
  end
end
A couple of years ago, I wrote the above just for fun and even published it. Apart from the fact that the syntax looks different because Ruby uses special operators while I use only methods, it behaves exactly like Ruby's builtin operators. In fact, I actually took the RubySpec conformance testsuite and ported it over to my syntax and it passes.
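For illustration, a few calls against that message-based protocol (this assumes the reopened TrueClass/FalseClass definitions above are loaded):
(2 > 1).if { puts "two is bigger" }        # prints "two is bigger"
(1 > 2).ifelse(-> { :then }, -> { :else }) #=> :else
(1 > 2).or  { :fallback }                  #=> :fallback
(2 > 1).and { :both }                      #=> :both
(2 > 1).not                                #=> false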
It seems that Matz himself answered this question on a mailing list message in 2004.
Short version of his answer: "right now it works ok, adding a Boolean doesn't give any advantage".
Personally I don't agree with that; the aforementioned string parsing is one example. Another is that when you apply different treatment to a variable depending on its type (e.g. a YAML parser), having a Boolean class is handy: it removes one if. It also looks more correct, but that's a personal opinion.
Quoting Matz on Ruby forum (2013):
...There's nothing true and false commonly share, thus no Boolean class.
Besides that, in Ruby, everything behave as Boolean value....
true and false could be managed by a single Boolean class holding both values, but then every boolean object would have to carry internal state and be dereferenced on every use.
Instead, Ruby represents true and false as immediate values, each with its own class (TrueClass and FalseClass). By using two classes instead of a single Boolean class, neither object needs any internal value; each can be distinguished by its identity alone. I believe this translates to speed advantages inside the Ruby engine, because the interpreter can test true and false directly as immediate values, whereas a Boolean object holding a value would have to be dereferenced before it could be evaluated.
Since everything but false and nil evaluates to true in Ruby by default, you would only need to add parsing to String.
Something like this could work:
class Object
  ## Makes sure any other object that evaluates to false will work as intended,
  ## and returns an actual boolean (as it would in any context that expects a boolean value).
  def trueish?; !!self; end
end
class String
  ## Parses certain strings as true; everything else as false.
  def trueish?
    # a literal "true" string is true
    return true if strip.downcase == 'true'
    # a string that parses as a number is true unless that number is zero
    [:Integer, :Float, :Rational, :Complex].each do |t|
      begin
        return Kernel.send(t, self) != 0
      rescue ArgumentError
        # the string could not be converted to this numeric type; try the next one
      end
    end
    false
  end
end
When used, this would give you:
puts false.trueish? # => false
puts true.trueish? # => true
puts 'false'.trueish? # => false
puts 'true'.trueish? # => true
puts '0'.trueish? # => false
puts '1'.trueish? # => true
puts '0.0'.trueish? # => false
puts '1.0'.trueish? # => true
I believe part of the “big idea” behind Ruby is to just make the behavior you desire inherent to your program (e.g. boolean parsing), rather than creating a fully encapsulated class that lives in its own namespace world (e.g. BooleanParser).
The main reason is simply that it is far quicker and simpler to implement boolean expressions as they are now than with a Boolean class, which would imply a conversion.
As Mongus Pong told you, when you write "if thing", you ask the interpreter to evaluate the thing and then branch. If you had a Boolean class, you'd have to convert the evaluation of the thing into a Boolean before branching (one more step).
Remember that such a to-Boolean conversion would be available as a Ruby method on the Boolean class. That method could then be changed dynamically like any other Ruby method, allowing developers to mess things up completely (which is not so serious, indeed), but clearly it wouldn't allow the interpreter to optimize tests as it should.
Do you realize that it would replace a few CPU instructions with a complete method call, which is costly in Ruby (remember the "send" method processing)...
Joining TrueClass and FalseClass under a kind of Boolean superclass would break the LSP (Liskov Substitution Principle): any code using Boolean should be able to use any of its descendants transparently. That is not achievable for TrueClass and FalseClass, since they have opposite behavior.
Btw, LSP is the L in SOLID.
It looks like a good real-life example of the LSP.
UPD#1: regarding whether the LSP is broken or not
COMMON_EXPECTED_VALUE = x

def test_LSP(klass)
  assert klass.new.value == COMMON_EXPECTED_VALUE
end

test_LSP(FooClass)
test_LSP(BarClass)
Once you have both tests green, you have a chance of not breaking the LSP by joining these classes under a common superclass, provided, of course, that you get double green for every feature of the future superclass.
In Ruby nil and false are false and everything else is true. Hence there is no need for a specific boolean class.
You can try it:
if 5
  puts "5 is true"
end
5 evaluates to true, so this prints "5 is true".
if nil
  puts "nil is true"
else
  puts "nil is false"
end
This will print "nil is false".
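If you ever need an actual true/false object out of an arbitrary value, the usual idiom is double negation rather than a Boolean class:
!!5      #=> true
!!"foo"  #=> true
!!nil    #=> false
!!false  #=> false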
As others have said, you could "patch" Ruby. Create your own class. Here is something I came up with. The methods on the Boolean class are a bit silly, but they could be useful programmatically at some point.
class Boolean
  def self.new(bool)
    bool
  end

  def self.true
    true
  end

  def self.false
    false
  end
end

class FalseClass
  def is_a?(other)
    other == Boolean || super
  end

  def self.===(other)
    other == Boolean || super
  end
end

class TrueClass
  def is_a?(other)
    other == Boolean || super
  end

  def self.===(other)
    other == Boolean || super
  end
end
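With that patch loaded, a quick (hypothetical) usage check against the definitions above:
Boolean.new(1 > 0)    #=> true
Boolean.true          #=> true
true.is_a?(Boolean)   #=> true
false.is_a?(Boolean)  #=> true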

Why do Ruby setters need "self." qualification within the class?

Ruby setters—whether created by (c)attr_accessor or manually—seem to be the only methods that need self. qualification when accessed within the class itself. This seems to put Ruby alone in the world of languages:
All methods need self/this (like Perl, and I think JavaScript)
No methods require self/this (C#, Java)
Only setters need self/this (Ruby?)
The best comparison is C# vs Ruby, because both languages support accessor methods which work syntactically just like class instance variables: foo.x = y, y = foo.x. C# calls them properties.
Here's a simple example; the same program in Ruby then C#:
class A
  def qwerty; @q; end                  # manual getter
  def qwerty=(value); @q = value; end  # manual setter, but attr_accessor is the same
  def asdf; self.qwerty = 4; end       # "self." is necessary in ruby?
  def xxx; asdf; end                   # we can invoke non-setters w/o "self."
  def dump; puts "qwerty = #{qwerty}"; end
end
a = A.new
a.xxx
a.dump
Take away the self. in self.qwerty = 4 and it fails (Ruby 1.8.6 on Linux & OS X). Now C#:
using System;

public class A {
    public A() {}
    int q;
    public int qwerty {
        get { return q; }
        set { q = value; }
    }
    public void asdf() { qwerty = 4; }  // C# setters work w/o "this."
    public void xxx() { asdf(); }       // are just like other methods
    public void dump() { Console.WriteLine("qwerty = {0}", qwerty); }
}

public class Test {
    public static void Main() {
        A a = new A();
        a.xxx();
        a.dump();
    }
}
Question: Is this true? Are there other occasions besides setters where self is necessary? I.e., are there other occasions where a Ruby method cannot be invoked without self?
There are certainly lots of cases where self becomes necessary. This is not unique to Ruby, just to be clear:
using System;

public class A {
    public A() {}
    public int test { get { return 4; } }
    public int useVariable() {
        int test = 5;
        return test;
    }
    public int useMethod() {
        int test = 5;
        return this.test;
    }
}

public class Test {
    public static void Main() {
        A a = new A();
        Console.WriteLine("{0}", a.useVariable()); // prints 5
        Console.WriteLine("{0}", a.useMethod());   // prints 4
    }
}
The same ambiguity is resolved in the same way. But, while subtle, I'm asking about the case where
A method has been defined, and
No local variable has been defined, and
we encounter
qwerty = 4
which is ambiguous—is this a method invocation or a new local variable assignment?
@Mike Stone
Hi! I understand and appreciate the points you've made and your
example was great. Believe me when I say, if I had enough reputation,
I'd vote up your response. Yet we still disagree:
on a matter of semantics, and
on a central point of fact
First I claim, not without irony, we're having a semantic debate about the
meaning of 'ambiguity'.
When it comes to parsing and programming language semantics (the subject
of this question), surely you would admit a broad spectrum of the notion
'ambiguity'. Let's just adopt some random notation:
ambiguous: lexical ambiguity (lex must 'look ahead')
Ambiguous: grammatical ambiguity (yacc must defer to parse-tree analysis)
AMBIGUOUS: ambiguity knowing everything at the moment of execution
(and there's junk between 2-3 too). All these categories are resolved by
gathering more contextual info, looking more and more globally. So when you
say,
"qwerty = 4" is UNAMBIGUOUS in C#
when there is no variable defined...
I couldn't agree more. But by the same token, I'm saying
"qwerty = 4" is un-Ambiguous in ruby
(as it now exists)
"qwerty = 4" is Ambiguous in C#
And we're not yet contradicting each other. Finally, here's where we really
disagree: Either ruby could or could not be implemented without any further
language constructs such that,
For "qwerty = 4," ruby UNAMBIGUOUSLY
invokes an existing setter if there
is no local variable defined
You say no. I say yes; another ruby could exist which behaves exactly like
the current in every respect, except "qwerty = 4" defines a new
variable when no setter and no local exists, it invokes the setter if one
exists, and it assigns to the local if one exists. I fully accept that I
could be wrong. In fact, a reason why I might be wrong would be interesting.
Let me explain.
Imagine you are writing a new OO language with accessor methods looking like instance vars (like ruby & C#). You'd probably start with
conceptual grammars something like:
conceptual grammars something like:
var = expr // assignment
method = expr // setter method invocation
But the parser-compiler (not even the runtime) will puke, because even after
all the input is grokked there's no way to know which grammar is pertinent.
You're faced with a classic choice. I can't be sure of the details, but
basically ruby does this:
var = expr // assignment (new or existing)
// method = expr, disallow setter method invocation without .
that is why it's un-Ambiguous, while C# does this:
symbol = expr // push 'symbol=' onto parse tree and decide later
// if local variable is def'd somewhere in scope: assignment
// else if a setter is def'd in scope: invocation
For C#, 'later' is still at compile time.
I'm sure ruby could do the same, but 'later' would have to be at runtime, because
as ben points out you don't know until the statement is executed which case
applies.
My question was never intended to mean "do I really need the 'self.'?" or "what
potential ambiguity is being avoided?" Rather, I wanted to know why this
particular choice was made. Maybe it's not performance. Maybe it just got the job
done, or it was considered best to always allow a one-liner local to override a
method (a pretty rare requirement) ...
But I'm sort of suggesting that the most dynamic language might be the one which
postpones this decision the longest and chooses semantics based on the most contextual
info: so if you have no local and you have defined a setter, it would use the setter. Isn't
this why we like Ruby, Smalltalk, and Objective-C, because method invocation is decided at runtime,
offering maximum expressiveness?
Well, I think the reason this is the case is that qwerty = 4 is ambiguous—are you defining a new variable called qwerty or calling the setter? Ruby resolves this ambiguity by saying it will create a new variable, thus the self. is required.
Here is another case where you need self.:
class A
  def test
    4
  end

  def use_variable
    test = 5
    test
  end

  def use_method
    test = 5
    self.test
  end
end
a = A.new
a.use_variable # returns 5
a.use_method # returns 4
As you can see, the access to test is ambiguous, so the self. is required.
Also, this is why the C# example is actually not a good comparison: in C# you declare variables in a way that is syntactically distinct from calling the setter. If you had defined a variable in C# with the same name as the accessor, you would need to qualify calls to the accessor with this., just like in the Ruby case.
The important thing to remember here is that Ruby methods can be (un)defined at any point, so to intelligently resolve the ambiguity, every assignment would need to run code to check whether there is a method with the assigned-to name at the time of assignment.
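To see the rule in action on the setter side, here is a minimal sketch (class B and its method names are hypothetical) using attr_accessor:
class B
  attr_accessor :qwerty

  def without_self
    qwerty = 4       # creates a new local variable; @qwerty stays nil
    qwerty
  end

  def with_self
    self.qwerty = 4  # calls the qwerty= setter; @qwerty is now 4
    qwerty           # the getter still works without self.
  end
end

b = B.new
b.without_self  #=> 4, but b.qwerty is still nil
b.with_self     #=> 4, and b.qwerty is now 4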
Because otherwise it would be impossible to set local variables at all inside methods: variable = some_value is ambiguous. For example:
class ExampleClass
  attr_reader :last_set

  def method_missing(name, *args)
    if name.to_s =~ /=$/
      @last_set = args.first
    else
      super
    end
  end

  def some_method
    some_variable = 5 # Set a local variable? Or call method_missing?
    puts some_variable
  end
end
If self wasn't required for setters, some_method would raise NameError: undefined local variable or method 'some_variable'. As-is though, the method works as intended:
example = ExampleClass.new
example.blah = 'Some text'
example.last_set #=> "Some text"
example.some_method # prints "5"
example.last_set #=> "Some text"
