I think this is a scope-related problem. If I have a rule on my object like this:
:- public(new/2).
:- mode(new(+list, -object_identifier), one).

new(Args, Instance) :-
    self(Self),
    create_object(Instance, [instantiates(Self)], [], []),
    Instance::process_arguments(Args).
I find this works fine if I do this dance:
:- object(name, instantiates(name)).
I don't fully understand why this is necessary but I suspect it is linked to my actual problem, which is that if I have your standard Prolog loop in my object, like so:
process_arguments([Arg|Args]) :- process_arg(Arg), process_arguments(Args).
process_arguments([]).
process_arg(Arg) :- ::asserta(something(Arg)).
I find this use of ::asserta puts the facts in the right namespace (on the newly-created instance). However, if I get witty and replace the body of process_arguments/1 with this lambda expression:
process_arguments(Args) :- meta::map([Arg]>>process_arg(Arg), Args).
then I wind up with my facts being added to the parent class and shared by all the instances. If I replace it with this:
process_arguments(Args) :-
    self(Self),
    meta::map([Arg]>>(Self::process_arg(Arg)), Args).
then it works, but I have to make process_arg/1 a public rule when I'd rather not. What am I missing?
Let me start with your snippet of code above where the object name instantiates itself. In doing this, you make name its own class. Nothing wrong there. In languages that support meta-classes, such as Smalltalk and Logtalk, making a class its own meta-class is a classical way of avoiding an infinite regression. See, for example, the Wikipedia entry on meta-classes (http://en.wikipedia.org/wiki/Metaclass). See also the "reflection" example in the Logtalk distribution. By making the object name instantiate itself, it plays both the role of an instance (as it instantiates an object) and the role of a class (as it's instantiated by an object). If you defined name as a stand-alone object, i.e. an object with no relation to other objects, it would be compiled as a prototype.
Now to your question. In Logtalk, meta-predicates (such as meta::map/2) are called in the context of the sender. If the process_arguments/1 predicate is defined in name, then the execution context (including the value of self) will be name. Thus, the clauses for something/1 will be asserted in name. Your workaround (using the built-in method self/1) works as expected, but it forces you to declare process_arg/1 as a public predicate. This is a bug in the stable Logtalk version: it should also work with the process_arg/1 predicate declared protected or private (as the sender is name and the predicate is declared in the sender). For example:
:- object(name,
    instantiates(name)).

    :- public(new/2).
    :- mode(new(+list, -object_identifier), one).
    new(Args, Instance) :-
        self(Self),
        create_object(Instance, [instantiates(Self)], [set_logtalk_flag(dynamic_declarations, allow)], []),
        meta::map({Instance}/[Arg]>>(Instance::process_arg(Arg)), Args).

    :- private(process_arg/1).
    process_arg(Arg) :-
        ::asserta(something(Arg)).

:- end_object.
I will push the bug fix to the publicly available Logtalk development version later this week. Thanks for drawing my attention to this bug.
What's the Prolog way of adding "annotations" (meta information) to predicates?
A typical example from Java would be:
@Transactional
public List<User> findAll() { ... }
What are the Prolog options for adding a Transactional annotation to the predicate:
users(Users) :- ...
There's no (de facto) standard solution. Some systems use predicate directives (e.g. Ciao Prolog, ECLiPSe, and also Logtalk) that are in some cases user-extensible in the information they can represent. Some systems use structured comments (e.g. SWI-Prolog). But whatever the specifics of the solution, the best systems allow that information to be retrieved programmatically using the system's reflection API, so that developer tools can be based on it.
I put facts into the Working Set using the this.session.Insert(object fact1) or this.session.InsertAll(IEnumerable<object> fact) methods.
Now, one of the facts changes and I call the this.session.Replace(object fact).
How does NRules know which object to replace? Does it compare the references for equality? Does it call the Equals operator? I'm guessing you're probably using Dictionary logic, so the object's Equals() and GetHashCode() determine when two facts are the same, but I need some confirmation before I continue with my design.
When calling Update, UpdateAll, Retract or RetractAll in NRules, the engine indeed looks the facts up in a Dictionary. So, yes, the engine uses the objects' Equals and GetHashCode implementations.
However, if you update/retract the same object instance, it's not necessary to override Equals and GetHashCode, because the default implementation for reference types, which compares references, works just fine.
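For intuition, here is a minimal Java sketch of the same lookup principle (Java is used here only as an analogy, since NRules itself is a .NET library; the Fact class and names are invented for the example): a map keyed by fact objects behaves exactly as described above.

import java.util.HashMap;
import java.util.Map;

public class FactLookupDemo {

    // A fact type that relies on the default equals/hashCode,
    // i.e. reference equality.
    static class Fact {
        final String name;

        Fact(String name) {
            this.name = name;
        }
    }

    public static void main(String[] args) {
        Map<Fact, String> workingSet = new HashMap<>();

        Fact fact = new Fact("order-42");
        workingSet.put(fact, "inserted");

        // Same instance: found, even without overriding equals/hashCode.
        System.out.println(workingSet.containsKey(fact)); // true

        // Distinct but "equal-looking" instance: not found under reference
        // equality; overriding equals/hashCode would make this succeed.
        System.out.println(workingSet.containsKey(new Fact("order-42"))); // false
    }
}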
Problem:
We know that Java doesn't allow extending multiple classes, because that would result in the Diamond Problem, where the compiler couldn't decide which superclass method to use. With interface default methods, the Diamond Problem was introduced in Java 8. That is because if a class implements two interfaces, each defining the same default method, and the implementing class doesn't override the common default method, the compiler can't decide which implementation to choose.
Solution:
Java 8 requires a class to provide an implementation for a default method it inherits from more than one interface. So if a class implemented both interfaces mentioned above, it would have to provide an implementation for the common default method. Otherwise the compiler reports a compile-time error.
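For illustration, a minimal sketch of that rule (the interface and class names are invented for the example):

interface Walker {
    default String move() { return "walking"; }
}

interface Swimmer {
    default String move() { return "swimming"; }
}

// Without the override below, the compiler rejects the class with an error
// like "Duck inherits unrelated defaults for move() from Walker and Swimmer".
class Duck implements Walker, Swimmer {
    @Override
    public String move() {
        // The class may pick (or combine) a specific inherited default.
        return Walker.super.move();
    }
}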
Question:
Why is this solution not applicable to multiple class inheritance, with the child class overriding the common methods?
You didn't understand the Diamond Problem correctly (and granted, the current state of the Wikipedia article doesn't explain it sufficiently). As the classic diamond diagram shows, the problem occurs when the same class is inherited multiple times through different inheritance paths. This isn't a problem for interfaces (and never was), as they only define a contract, and specifying the same contract multiple times makes no difference.
The main problem is associated not with the methods but with the data of that supertype. Should the instance state of A exist once or twice in that case? If once, C and B can place different, conflicting constraints on A's instance state. Both classes might also assume they have full control over A's state, i.e. not anticipate another class having the same access level. If there are two different A states, a widening conversion of a D reference to an A reference becomes ambiguous, as either A could be meant.
Interfaces don't have these problems, as they do not carry instance data at all. They also have (almost) no accessibility issues, as their methods are always public. Allowing default methods doesn't change this, as default methods still don't access instance variables but operate through the interface methods only.
Of course, there is the possibility that B and C declare default methods with identical signatures, causing an ambiguity that has to be resolved in D. But this is the case even when there is no A, i.e. no "diamond" at all. So this scenario is not a correct example of the "Diamond Problem".
Methods introduced by interfaces can always be overridden, while methods introduced by classes could be final. This is one reason why you couldn't necessarily apply the same strategy to classes as you can to interfaces.
The conflict described as "diamond problem" can best be illustrated using a polymorphic call to method A.m() where the runtime type of the receiver has type D: Imagine D inherits two different methods both claiming to play the role of A.m() (one of them could be the original method A.m(), at least one of them is an override). Now, dynamic dispatch cannot decide which of the conflicting methods to invoke.
Aside: the distinction between the "diamond problem" and regular name clashes is particularly relevant in languages like Eiffel, where the conflict can be resolved locally from the perspective of type D, e.g., by renaming one method. This would avoid the name clash for invocations with static type D, but not for invocations with static type A.
Now, with default methods in Java 8, JLS was amended with rules that detect any such conflicts, requiring D to resolve the conflict (many different cases exist, depending on whether or not some of the types involved are classes). I.e., the diamond problem is not "solved" in Java 8, it is just avoided by rejecting any programs that would produce it.
In theory, similar rules could have been defined in Java 1 to admit multiple inheritance for classes. It's just a decision that was made early on, that the designers of Java did not want to support multiple inheritance.
The choice to admit multiple (implementation) inheritance for default methods but not for class methods is a purely pragmatic choice, not necessitated by any theory.
In the book The Well Grounded Rubyist (excerpt), David Black talks about the "Class/Object Chicken-and-Egg Paradox". I'm having a tough time understanding the entire concept.
Can someone explain it in better/easier/analogical/other terms?
Quote (emphasis mine):
The class Class is an instance of itself; that is, it’s a Class object. And there’s more. Remember the class Object? Well, Object is a class... but classes are objects. So, Object is an object. And Class is a class. And Object is a class, and Class is an object. Which came first? How can the class Class be created unless the class Object already exists? But how can there be a class Object (or any other class) until there’s a class Class of which there can be instances?

The best way to deal with this paradox, at least for now, is to ignore it. Ruby has to do some of this chicken-or-egg stuff in order to get the class and object system up and running—and then, the circularity and paradoxes don’t matter. In the course of programming, you just need to know that classes are objects, instances of the class called Class.

(If you want to know in brief how it works, it’s like this: every object has an internal record of what class it’s an instance of, and the internal record inside the object Class points back to Class.)
You can see the problem in this diagram:
(diagram of the Ruby 1.8 class and metaclass relationships; source: phrogz.net)
All object instances inherit from Object. All classes are objects, and Class is a class, therefore Class is an object. However, object instances inherit from their class, and Object is an instance of the Class class, therefore Object itself gets methods from Class.
As you can see in the diagram, however, there isn't a circular lookup loop, because there are two different inheritance 'parts' to every class: the instance methods and the 'class' methods. In the end, the lookup path is sane.
N.B.: This diagram reflects Ruby 1.8, and thus does not include the core BasicObject class introduced in Ruby 1.9.
In practical terms, all you need to understand is that Object is the mother of all classes. All classes extend Object. It is this relationship that you will use in programming, understanding inheritance and so forth.
E.g., you can call hash() on any instance of any object at any time. Why? Because that method appears in the Object class, and all classes inherit it, because all classes extend Object.
As far as the idea of Class goes, this comes up much less frequently. An object of class Class is like a blueprint: it's like having the class in your hands without creating an instance of it. There's a little more to it, but it's a difficult one to describe without a lengthy example. Rest assured, when (if ever) the time comes to use it, you'll see its purpose.
All this excerpt is saying is that Object has a class of type Class, and Class is an object, so it must extend Object. It's cyclic, but it's irrelevant. The answer is buried somewhere in the interpreter's bootstrapping.
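Incidentally, if you come from Java, a rough analogue of this self-reference can be seen in its reflection API (this is an analogy only; Java's object model differs from Ruby's in the details, and the class name MetaDemo is invented for the example):

public class MetaDemo {
    public static void main(String[] args) {
        // The object describing the class String is an instance of Class.
        Class<String> stringClass = String.class;
        System.out.println(stringClass.getClass()); // class java.lang.Class

        // The object describing the class Class is itself an instance of
        // Class: Class is, in this sense, an instance of itself.
        System.out.println(Class.class.getClass() == Class.class); // true
    }
}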
Regarding the which-came-first criterion, there are two kinds of Ruby objects:
Built-in objects. They exist at the start of a Ruby program and can be considered to have zero creation time.
User created objects. They are created after the program starts via object creation methods (new/clone/dup, class-definition, module-definition, literal-constructs, ...). Such objects are linearly ordered by their time of creation. This order happens to inversely correspond to class-inheritance and instance-of relations.
A detailed explanation of the Ruby object model is available at www.atalon.cz.
I know that my answer comes at least 3 years late, but I have learned about Ruby quite recently, and I must admit that the literature sometimes presents (in my opinion) misleading explanations, even though it is handling a very simple problem. Moreover, I am (and was) surprised by such appalling phrases as:
"The best way to deal with this paradox, at least for now, is to ignore it."
stated by the author D. Black and quoted in the question, but this attitude seems to be pervasive. I have experienced this tendency within the physics community, but I had not suspected it had also spread through the programming one. I am not stating that nobody understands what is lurking behind, but it is at least intriguing that the (indeed) very simple and precise answer is not provided, as in this case there is one, without invoking any obscure words such as "paradox" in the explanation.
This so-called "paradox" (even though it is definitely NOT such a thing) comes from the (at least misleading) belief that "being an instance of (a subclass of)" should be something like "being an element of" (in, say, naive set theory); in other terms, that a class (in Ruby) is the set containing all the objects sharing some common property (for instance, under this naive interpretation the class String would include all the string objects). Even though this naive point of view (which I may call the "membership interpretation") may help in understanding isolated (and rather easy) classes such as String or Symbol, it produces A CLEAR CONTRADICTION with our naive understanding of the membership relationship (and also the axiomatic one, for it contradicts von Neumann's regularity axiom of set theory, if someone knows what I am talking about), since an object cannot be an element of itself, as the previous interpretation implies when regarding the object Class.
The previously stated problem is easily avoided if one gets rid of this misleading membership interpretation in favor of the very simple-minded model that follows.
I would guess that the following explanation is well known to the experts, so I claim no originality at all, but perhaps it has not been rephrased in the (simple) terms I am going to present it in; in some sense I am astonished that (apparently) nobody has stated it in these terms from the very beginning, and my intention is just to highlight what I believe is the basic underlying structure.
Let us consider a set O of (basic) objects, consisting of all the (basic) objects Ruby has, equipped with a mapping f from O to O (which is more or less the .class method) such that the image of the composition of f with itself has only one element.
This latter element (or object) is denoted Class (indeed, what you know to be the class Class).
I would be tempted to write this post using LaTeX code but I am not quite sure if it will be parsed and converted into typical math expressions.
The image of the mapping f is (by definition) the set of Ruby classes (e.g. String, Object, Symbol, Class, etc).
Ruby programmers (even if they do not know it) interpret the previous construction as follows: we say that an object "x is an instance of y" if and only if y = f(x). By the way, this tells us exactly that Class is an instance of Class (i.e. of itself).
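In the LaTeX notation the author alludes to above, the model so far can be sketched as follows (nothing beyond what was just stated):

\[
  f : O \to O, \qquad \operatorname{Im}(f \circ f) = \{\mathtt{Class}\},
  \qquad x \text{ is an instance of } y \iff y = f(x).
\]
Since \(\mathtt{Class} \in \operatorname{Im}(f \circ f)\), pick \(x\) with \(\mathtt{Class} = f(f(x))\); then
\[
  f(\mathtt{Class}) = f(f(f(x))) = (f \circ f)(f(x)) \in \operatorname{Im}(f \circ f) = \{\mathtt{Class}\},
\]
so \(f(\mathtt{Class}) = \mathtt{Class}\), i.e. Class is an instance of itself.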
Now, we would need many more decorations on this simple model in order to get the full Ruby hierarchy of classes and its functionality (imposing the existence of some fixed members in the image of the map f, a partial order on the image of the map f in order to define subclasses and, afterwards, inheritance, etc.), and in particular to get the nice picture that was interestingly upvoted, but I suppose that everybody can figure this out from the primitive model I gave. (I have written some pages explaining this for myself, after having read several Ruby manuals; I can provide a copy if anybody finds it useful.)
Which came first: The class Class or the class Object?
Every class is an instance of the class Class. It follows that the class Object is an instance of the class Class. So you need the class Class to create the class Object. Therefore:
The class Class exists before the class Object.
The class Class is a subclass of the class Object. So you need the class Object from which the class Class can be created. Therefore:
The class Object exists before the class Class.
So statement 2 conflicts with statement 1, and we have our chicken-and-egg dilemma. Or you can just accept it as a circular definition!
I have done a dig into the source code of Ruby and created this post on how to make sense of it.
I've been told not to test implementation details. A dependency seems like an implementation detail; however, I could also phrase it as a behavior.
Example: A LinkList depends on a storage engine to store its links (e.g. LinkStorageInterface). The constructor needs to be passed an implementation of LinkStorageInterface to do its job.
I can't say 'shouldUseLinkStorage'. But maybe I can say 'shouldStoreLinksInStorage'.
What is correct to 'test' in this case? Should I test that it stores links in a store (behavior) or don't test this at all?
The dependency itself is not an expected behavior, but the actions called on the dependency most certainly are. You should test the stuff you (the caller) know about, and avoid testing the stuff that requires you to have intimate knowledge of the inner workings of the SUT.
Expanding your example a little, let's imagine that our LinkStorageInterface has the following definition (pseudo-code):
Interface LinkStorageInterface
    void WriteListToPersistentMedium(LinkList list)
End Interface
Now, since you (the caller) are providing the concrete implementation for that interface it is perfectly reasonable for you to test that WriteListToPersistentMedium() gets called when you invoke the Save() method on your LinkList.
A test might look like this, again using pseudo-code:
void ShouldSaveLinkListToPersistentMedium()
    define da = new MockLinkListStorage()
    define list = new LinkList(da)

    list.Save()

    Assert.Method(da.WriteListToPersistentMedium).WasCalledWith(list)
end method
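For the record, here is roughly how the same test might look in Java with Mockito and JUnit 5 (a sketch only: it assumes LinkList exposes a save() method and that the interface method is named writeListToPersistentMedium, mirroring the pseudo-code above):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class LinkListTest {

    @Test
    void shouldSaveLinkListToPersistentMedium() {
        // The dependency is supplied by us, the caller, so mocking it is fair game.
        LinkStorageInterface storage = mock(LinkStorageInterface.class);
        LinkList list = new LinkList(storage);

        list.save();

        // Assert the expected behavior: the list asked its storage to persist it.
        verify(storage).writeListToPersistentMedium(list);
    }
}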
You have tested expected behavior without testing implementation specific details of either your SUT, or your mock. What you want to avoid testing (mostly) are things like:
Order in which methods were called
Making a method, or property public just so you can check it
Anything that does not directly involve the expected behavior you are testing
Again, a dependency is something that you as the consumer of the class are providing, so you expect it to be used. Otherwise there is no point in having that dependency in the first place.
LinkStorageInterface is not an implementation detail; its name suggests an interface to an engine. In which case the name shouldUseLinkStorage has more value than shouldStoreLinksInStorage.
That's my 2 pennies worth!