Traits vs. Interfaces vs. Mixins? - ruby

What are the similarities and differences between traits, mixins and interfaces? I am trying to get a deeper understanding of these concepts, but I don't know enough programming languages that implement these features to truly understand the similarities and differences.
For each of traits, mixins and interfaces
What is the problem being solved?
Is the definition of the concept consistent across programming languages?
What are the similarities between it and the others?
What are the differences between it and the others?

Every reference type in Java, except Object, derives from one single superclass.
In addition, Java classes may implement zero or more interfaces.
Generally speaking, an interface is a contract that describes the methods an implementing class is forced to have, though without directly providing an implementation.
In other words, a Java class is obliged to abide by its contract, and thus to give implementations to the method signatures provided by the interfaces it declares to implement.
An interface constitutes a type. So you can declare method parameters and return values as interface types, thereby requiring that arguments and return values implement particular methods, without necessarily providing a concrete implementation for them.
This sets the basis for several abstraction patterns, like, for example, dependency injection.
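A minimal sketch of the contract-plus-injection idea described above; all names here (Notifier, EmailNotifier, OrderService) are hypothetical:

```java
// A contract: no implementation, only required method signatures.
interface Notifier {
    String notify(String message);
}

// One concrete implementation of the contract.
class EmailNotifier implements Notifier {
    public String notify(String message) {
        return "email: " + message;
    }
}

// The dependent class is written against the interface type,
// so any Notifier implementation can be injected at construction time.
class OrderService {
    private final Notifier notifier;
    OrderService(Notifier notifier) { this.notifier = notifier; }
    String placeOrder(String item) {
        return notifier.notify("ordered " + item);
    }
}
```

`new OrderService(new EmailNotifier()).placeOrder("book")` yields `"email: ordered book"`; a test could inject a fake `Notifier` instead, which is the dependency-injection point being made.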
Scala, for its part, has traits. Traits give you all the features of Java interfaces, with the significant difference that they can contain method implementations and variables.
Traits are a smart way of implementing methods just once and, by means of that, distributing those methods into all the classes that extend the trait.
Like interfaces for Java classes, you can mix more than one trait into a Scala class.
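Since Java 8, default methods give Java interfaces a limited trait-like ability to carry implementations (though, unlike Scala traits, they cannot hold instance state). A minimal sketch with hypothetical names (Greeter, Shouter, Person):

```java
interface Greeter {
    String name(); // abstract: must be implemented by the class

    // default method: implemented once, inherited by every implementing class
    default String greet() {
        return "Hello, " + name() + "!";
    }
}

interface Shouter {
    default String shout(String s) { return s.toUpperCase() + "!"; }
}

// Like mixing several traits into a class, one class may implement
// several implementation-carrying interfaces.
class Person implements Greeter, Shouter {
    public String name() { return "Ada"; }
}
```

Here `new Person().greet()` returns `"Hello, Ada!"` without Person implementing `greet` itself; the key remaining difference from Scala traits is that default methods cannot declare fields.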
Since I have no Ruby background, though, I'll point you to an excerpt from David Pollak's "Beginning Scala" (amazon link):
Ruby has mixins, which are collections of methods that can be mixed into any class. Because Ruby does not have static typing and there is no way to declare the types of method parameters, there’s no reasonable way to use mixins to define a contract like interfaces. Ruby mixins provide a mechanism for composing code into classes but not a mechanism for defining or enforcing parameter types.
Interfaces can do even more than is described in this post; since the topic is vast, I suggest you investigate each of the three directions further. And if you have a Java background, Scala, and therefore traits, are easy to approach.

Related

Struct Properties vs Generics in Racket

Racket seems to have two mechanisms for adding per-type information to structs: generics and properties. Unfortunately, the documentation doesn't seem to indicate when one is preferred over the other. The docs do say:
Generic Interfaces provide a high-level API on top of structure type properties.
But that doesn't seem to provide a good intuition for when I should use one over the other. It does seem pretty clear that define-generics provides a much higher-level interface than make-struct-type-property. But many struct types still only use properties, which seems to indicate that there are still cases where the low-level API is preferred.
So the question is, when is using the struct properties system better than using the generics one, or does the properties library only exist as a historic relic?
The struct property system is the implementation strategy for the generic interface library, so it's not deprecated. It, or something like it, is necessary to make generic interfaces work. Not all uses of struct properties make sense as generic interfaces either.
That said, for many typical use cases the define-generics form is preferred. As the #:methods form for structs suggests, it is useful for code that is organized in an object-oriented fashion with interface-based dispatch. Examples of this include sequences (gen:sequence from data/collection) and dictionaries (gen:dict).
Plain struct properties in the Racket codebase are typically used when some data just needs to be stored in a struct as metadata, or when there is only one "method" and it's needlessly complicated to use define-generics, or when the interface is more complicated than "just put a procedure in here". Examples include prop:procedure or prop:evt.

Why doesn't Java 8 apply the solution it uses for interface default methods to allow multiple class inheritance?

Problem:
We know that Java doesn't allow extending multiple classes, because doing so would result in the Diamond Problem, where the compiler couldn't decide which superclass method to use. With interface default methods, the Diamond Problem was introduced into Java 8. That is, if a class implements two interfaces that each define the same default method, and the implementing class doesn't override that common default method, the compiler cannot decide which implementation to choose.
Solution:
Java 8 requires classes to provide an implementation for a default method inherited from more than one interface. So if a class implemented both interfaces mentioned above, it would have to provide its own implementation of the common default method. Otherwise the compiler reports a compile-time error.
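A minimal sketch of the conflict and its required resolution; the interface and class names (Left, Right, Both) are hypothetical:

```java
interface Left {
    default String hello() { return "from Left"; }
}

interface Right {
    default String hello() { return "from Right"; }
}

// Without the override below, this class would not compile:
// the compiler rejects the conflicting inherited defaults.
class Both implements Left, Right {
    @Override
    public String hello() {
        // The class must resolve the conflict itself; it may delegate
        // to one specific parent via the Interface.super syntax.
        return Left.super.hello();
    }
}
```

`new Both().hello()` returns `"from Left"`; deleting the override turns the ambiguity into a compile-time error, which is exactly the rule described above.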
Question:
Why is this solution not applicable for multiple class inheritance, by overriding common methods introduced by child class?
You didn’t understand the Diamond Problem correctly (and granted, the current state of the Wikipedia article doesn’t explain it sufficiently). The diamond problem occurs when the same class is inherited multiple times through different inheritance paths: picture B and C both extending A, with D extending both B and C. This isn’t a problem for interfaces (and never was), as they only define a contract, and specifying the same contract multiple times makes no difference.
The main problem is not associated with the methods but with the data of that super type. Should the instance state of A exist once or twice in that case? If once, C and B can have different, conflicting constraints on A’s instance state. Both classes might also assume they have full control over A’s state, i.e. not consider that another class has the same access level. If there are two different A states, a widening conversion of a D reference to an A reference becomes ambiguous, as either A could be meant.
Interfaces don’t have these problems, as they do not carry instance data at all. They also have (almost) no accessibility issues, as their methods are always public. Allowing default methods doesn’t change this, as default methods still don’t access instance variables but operate through the interface methods only.
Of course, there is the possibility that B and C declare default methods with identical signatures, causing an ambiguity that has to be resolved in D. But this is even the case when there is no A, i.e. no “diamond” at all. So this scenario is not a correct example of the “Diamond Problem”.
Methods introduced by interfaces may always be overridden, while methods introduced by classes could be final. This is one reason why you potentially couldn't apply the same strategy to classes that you apply to interfaces.
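A small illustration of that restriction, with hypothetical class names:

```java
class Base {
    // final: subclasses are not allowed to override this method,
    // so a conflict involving it could never be resolved by overriding.
    final String id() { return "base"; }
}

class Sub extends Base {
    // Uncommenting the following would fail to compile,
    // because the overridden method is final:
    // String id() { return "sub"; }
}
```

Interface methods carry no such restriction, which is why the "just override it in the subclass" resolution always remains available there.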
The conflict described as the "diamond problem" can best be illustrated using a polymorphic call to method A.m() where the runtime type of the receiver is D: imagine D inherits two different methods, both claiming to play the role of A.m() (one of them could be the original method A.m(); at least one of them is an override). Now dynamic dispatch cannot decide which of the conflicting methods to invoke.
Aside: the distinction between the "diamond problem" and regular name clashes is particularly relevant in languages like Eiffel, where the conflict can be resolved locally from the perspective of type D, e.g., by renaming one method. This would avoid the name clash for invocations with static type D, but not for invocations with static type A.
Now, with default methods in Java 8, the JLS was amended with rules that detect any such conflicts, requiring D to resolve the conflict (many different cases exist, depending on whether or not some of the types involved are classes). I.e., the diamond problem is not "solved" in Java 8; it is just avoided by rejecting any program that would produce it.
In theory, similar rules could have been defined in Java 1 to admit multiple inheritance for classes. It's just a decision that was made early on, that the designers of Java did not want to support multiple inheritance.
The choice to admit multiple (implementation) inheritance for default methods but not for class methods is a purely pragmatic choice, not necessitated by any theory.

Swift: Do protocols even have a real purpose?

I'm wondering why protocols are used in Swift. In every case where I've had to use one so far (UICollectionViewDelegate, UICollectionViewDataSource), I've noted that they don't even have to be added to the class declaration for my code to work. All they do is make it so that your class needs to have certain methods in it in order to compile. Beats me why this is useful other than as a little sticky note to help you keep track of what your classes do.
I'm assuming I'm wrong though. Would anyone care to point out why to me please?
A protocol defines a blueprint of methods, properties, and other requirements that suit a particular task or piece of functionality. The protocol doesn’t actually provide an implementation for any of these requirements—it only describes what an implementation will look like.
So it's basically an interface, right?
You use an interface when you want to define a "contract" for your code. In most cases, the purpose of this is to enable multiple implementations of that contract. For example, you can provide a real implementation, and a fake one for testing.
Further Reading
Protocols
What is the point of an Interface?
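In Java terms (the comparison the answer above draws), a contract with a real implementation and a fake one for testing might look like this sketch; all names (Clock, SystemClock, FixedClock, Stopwatch) are hypothetical:

```java
// The contract: any source of "the current time" will do.
interface Clock {
    long now();
}

// Real implementation, used in production.
class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// Fake implementation, used in tests: fully deterministic.
class FixedClock implements Clock {
    private final long instant;
    FixedClock(long instant) { this.instant = instant; }
    public long now() { return instant; }
}

// Code written against the contract works with either implementation.
class Stopwatch {
    private final Clock clock;
    private long started;
    Stopwatch(Clock clock) { this.clock = clock; }
    void start() { started = clock.now(); }
    long elapsed() { return clock.now() - started; }
}
```

`Stopwatch` never needs to know which `Clock` it was given, which is exactly the "multiple implementations of one contract" point made above.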
It allows flexible linkage between parts of the code. Once the protocol is defined, it can be used by code that doesn't need to know what will be done when the methods are called, or exactly what object (or struct or enum) is actually being used. An alternative approach could be setting callbacks or blocks, but by using a protocol a complete set of behaviours can be grouped and documented.
Some other code will typically create the concrete instance and pass it to the code expecting the protocol (sometimes the concrete instance will pass itself). In some cases, neither the implementation nor the code using it needs to be aware of the other, and it can all be set up by some other code to keep it reusable and testable.
It might be possible to do this in some languages by duck typing, which is to say that runtime inspection could allow an object to act in such a context without a particular declaration, but this is probably not possible to do at compile time in all cases, and it could also be error-prone if worked out implicitly.

What should HandlerInterceptorAdapter have been called?

In Spring MVC, one can define interceptors that can perform work before and after a particular controller is invoked. This can be used, for example, to do logging, authentication etc.
The programmer who wishes to write a custom interceptor is supposed to implement the HandlerInterceptor interface. To aid this task, the HandlerInterceptorAdapter abstract base class has been provided, which supplies default implementations of all the methods specified in the interface. So, if one just wants to do some pre-processing, one can simply extend HandlerInterceptorAdapter, @Override public boolean preHandle(...), and not worry about implementing the postHandle method.
My doubt concerns the name. From what I understand of the Adapter pattern, it adapts syntactic impedance mismatches between interfaces.
Is that so? If yes, should the class providing the boilerplate implementations be called HandlerInterceptorDefaultImpl, or something along those lines?
Is there a different nomenclature/pattern for what is happening here?
Is the fact that we need a boilerplate class a code smell, and could be removed by refactoring the HandlerInterceptor interface into two: HandlerPreInterceptor and HandlerPostInterceptor? Or is that overkill?
From GOF book about the Adapter pattern:
Adapters vary in the amount of work they do to adapt Adaptee to the Target interface. There is a spectrum of possible work, from simple interface conversion (for example, changing the names of operations) to supporting an entirely different set of operations. The amount of work Adapter does depends on how similar the Target interface is to Adaptee's.
The boilerplate class that you are referring to is called skeletal implementation class. This is mentioned in Effective Java by Joshua Bloch. From the book:
You can combine the virtues of interfaces and abstract classes by providing an abstract skeletal implementation class to go with each nontrivial interface that you export. The interface still defines the type, but the skeletal implementation takes all of the work out of implementing it.
By convention, skeletal implementations are called AbstractInterface, where Interface is the name of the interface they implement. For example, the Collections Framework provides a skeletal implementation to go along with each main collection interface: AbstractCollection, AbstractSet, AbstractList, and AbstractMap. Arguably it would have made sense to call them SkeletalCollection, SkeletalSet, SkeletalList, and SkeletalMap, but the Abstract convention is now firmly established.
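A minimal sketch of the skeletal-implementation idea, using the AbstractInterface naming convention described above; the names (EventListener, AbstractEventListener, StartLogger) are hypothetical:

```java
// The exported interface defines the type.
interface EventListener {
    void onStart();
    void onStop();
    void onError(Exception e);
}

// The skeletal implementation supplies no-op defaults for every method...
abstract class AbstractEventListener implements EventListener {
    public void onStart() { }
    public void onStop()  { }
    public void onError(Exception e) { }
}

// ...so concrete subclasses override only what they care about,
// just as one extends HandlerInterceptorAdapter and overrides only preHandle.
class StartLogger extends AbstractEventListener {
    String log = "";
    @Override public void onStart() { log += "started"; }
}
```

The interface still defines the type, while the skeletal class removes the boilerplate of implementing methods the subclass does not need.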

Point of using Dependency Injection (and for that matter an IoC Container) in LISP

I read ESR's essay "How To Become A Hacker" several years ago (link can be found in my profile), and Eric suggested learning LISP. Well, I've been learning LISP for quite a while now, and I like it so much that I decided to write a web application in it.
Since I've been using Spring for a while, I think it's a good idea to write decoupled components and glue them together using an IoC container and dependency injection. I did a thorough search on Google, and it turned out that there is no such idea implemented in LISP. Am I missing something? Is there a good implementation of this concept in LISP, or is there no point in using it for some reason that is not yet clear to me?
'Inversion of control' is widely used in Lisp. It's totally simple, since functions and closures are first class objects.
Dependency injection is trivial. Classes and functions can be configured via symbols and first class classes.
You usually don't need a 'framework' for IoC or DI in Common Lisp, since a lot of functionality to configure and parameterize applications and libraries is built in.
'first class' means that something can be stored in variables, passed as arguments or returned as results.
In a language like Common Lisp, functions and classes are first-class objects. Plus, for decoupling via late binding, you can use symbols as their names instead. The Common Lisp Object System has metaclasses and symbols as names for classes. Even generic functions and methods are objects and have metaclasses.
If concurrent-secure-server is a class and default-response is a function, you can do for example:
(make-instance 'web-services
               :server-class 'concurrent-secure-server
               :default-response-function 'default-response)
The above uses symbols as the names of the class and the function. If the function is later redefined, the web service will call the new version.
Alternatively:
(make-instance 'web-services
               :server-class (find-class 'concurrent-secure-server)
               :default-response-function #'default-response)
In the above case we pass the class object and the function object directly.
In Common Lisp software modules can have global variables, which you can set with the right information:
(defvar *default-server-class* 'concurrent-secure-server)
Alternatively you can set those in slots of objects, like below.
(defclass server-object ()
  ((default-response-function
     :initarg :default-response-function
     :initform *server-default-response-function*)))
(defvar *my-server*
  (make-instance 'server-object
                 :default-response-function 'my-default-response-function))
You can even create objects and later change their class in a configuration phase.
The Common Lisp Object System allows you to change classes and have existing objects updated accordingly.
If you create an instance, you can be as flexible as you want:
you can pass in the class
you can pass in the arguments
Like this:
(let ((my-class 'foo-class)
      (my-args `(:response-function ,*some-response-function*)))
  (apply #'make-instance my-class my-args))
Sometimes you see Lisp libraries which are computing such arguments lists at runtime.
Another way to configure Lisp applications at runtime is via generic functions. Generic functions allow :before, :after and :around methods; they even allow your own custom call schemes. So by using your own classes, inheriting from other classes and mixin classes, the generic function gets reconfigured. It is as if you had the basic mechanisms of aspect-oriented programming built in.
For people interested in these more advanced object-oriented concepts, there is some literature by Xerox PARC, where these issues had been researched when CLOS was created. It was called 'Open implementation' then:
http://www2.parc.com/csl/groups/sda/publications.shtml
In the Open Implementation approach, modules allow their clients individual control over the module's own implementation strategy. This allows the client to tailor the module's implementation strategy to better suit their needs, effectively making the module more reusable, and the client code more simple. This control is provided to clients through a well-designed auxiliary interface.
One thing you don't need: XML. Common Lisp is its own configuration language. Writing extensions for easier configuration can for example be done via macros. Loading of these configurations can be easily done via LOAD.
Actually IoC is the building principle of most web-frameworks, not only in Java or Lisp.
Considering DI, as noted by Rammaren, it's an implicit pattern in a dynamic language such as Lisp. You can see that for yourself if you compare Hello World applications in Spring and Restas (one of the well-supported CL web frameworks). You'll see the same pattern, except for the absence of a need for fancy type/class/interface declaration stuff in Lisp.
