Adopting "Growing Object-Oriented Software" techniques to Ruby on Rails - ruby

I read Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce and was very impressed. I want to adopt the ideas of this book in my Rails projects using RSpec, even though its examples are written in Java.
A basic precept of this book is that we should mock interfaces instead of concrete classes. They say we can improve the application design by extracting interfaces and naming them.
But Ruby doesn't have any syntax equivalent to Java's interface. How can I use their techniques in Rails projects?
UPDATE
For example, on page 126 the authors introduce an Auction interface in order to implement the bid method. First, they mock Auction.class to make the test pass, then they implement Auction as an anonymous inner class inside the Main class. Finally, they extract a new concrete class, XMPPAuction, from Main (pages 131-132).
This incremental approach is the crux of this book in my opinion.
How can I adopt or imitate such a series of code transformations in Ruby development?
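For reference, here is roughly how that mock-then-extract sequence might translate to Ruby with RSpec. The class names, method signatures, and message format below are illustrative placeholders, not code from the book:

    # Step 1: mock the Auction role before any concrete class exists.
    RSpec.describe "AuctionSniper" do
      it "bids a higher amount when the price rises" do
        auction = double("Auction")              # stands in for the Auction role
        sniper  = AuctionSniper.new(auction)

        expect(auction).to receive(:bid).with(1098)

        sniper.current_price(price: 1000, increment: 98)
      end
    end

    # Step 2: the collaborator that talks to the role.
    class AuctionSniper
      def initialize(auction)
        @auction = auction
      end

      def current_price(price:, increment:)
        @auction.bid(price + increment)
      end
    end

    # Step 3: once the shape of the role has emerged, extract a concrete
    # implementation into its own class.
    class XmppAuction
      def initialize(chat)
        @chat = chat
      end

      def bid(amount)
        @chat.send_message("Command: BID; Price: #{amount};")
      end
    end

Because Ruby resolves the auction.bid call at runtime, the double and XmppAuction are interchangeable as long as they respond to bid, which is what the answers below lean on.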

Check out this previous Stack Overflow answer for a good explanation of interfaces in Ruby.
Also, Practical Object-Oriented Design in Ruby is a book in a similar vein to Growing Object-Oriented Software, but with Ruby examples. It's worth checking out.

Since everything in Ruby is duck-typed and an interface is simply the set of fields and methods that an object publicly exposes, you can do one or more of the following:
Test
Design your tests to test interfaces, and name and comment your tests appropriately - then you can pass all of your "concrete implementations" of your "interface" through the same test suite. As long as your test suite covers your application's edge cases, anything in your application that takes an instance of one of these concrete classes will be able to handle an instance of any of the other concrete classes.
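A minimal RSpec sketch of that idea, using shared examples as the named "interface" (the class and method names are placeholders):

    RSpec.shared_examples "an auction" do
      it { is_expected.to respond_to(:bid) }
      it { is_expected.to respond_to(:join) }
    end

    RSpec.describe XmppAuction do
      subject { XmppAuction.new(double("chat", send_message: nil)) }
      it_behaves_like "an auction"
    end

    RSpec.describe FakeAuction do
      subject { FakeAuction.new }
      it_behaves_like "an auction"
    end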
Use base classes
Define a base class that all of your concrete classes inherit from, in which every method raises NotImplementedError. This gives you a hierarchy that you can visualize - however, it does require extra classes, and it may encourage numerous is-a checks in your production code rather than letting that code rely on has-a.
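A sketch of the base-class approach, again with placeholder names:

    class Auction
      def bid(amount)
        raise NotImplementedError, "#{self.class} must implement #bid"
      end

      def join
        raise NotImplementedError, "#{self.class} must implement #join"
      end
    end

    class XmppAuction < Auction
      def initialize(chat)
        @chat = chat
      end

      def bid(amount)
        @chat.send_message("Command: BID; Price: #{amount};")
      end

      def join
        @chat.send_message("Command: JOIN;")
      end
    end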

You're right that Ruby does not have formal interfaces. Instead, interfaces are implicit in the messages that an object handles (see duck typing).
If you are still looking for a more formal way to "enforce" an interface in Ruby, consider writing a suite of automated unit tests that are green if an object conforms to the interface properly, and are red otherwise.
For examples, see ActiveModel::Lint::Tests and RSpec's shared examples.
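For instance, ActiveModel::Lint::Tests is a ready-made suite of this kind: include it in a test case, point it at an instance via @model, and the tests go red if the object doesn't conform to the ActiveModel interface. A minimal sketch, where Person is a placeholder class:

    require "test_helper"

    class PersonComplianceTest < ActiveSupport::TestCase
      include ActiveModel::Lint::Tests

      setup do
        @model = Person.new   # the lint tests exercise @model
      end
    end

RSpec's shared examples (sketched earlier on this page) play the same role on the RSpec side.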

Related

What's the difference between a Contract in Laravel and an Interface in PHP?

As far as I can tell, Laravel refers to the interfaces it extends as Contracts because they are used by Laravel. But this seems a bit like circular reasoning. There is no value added in changing the terminology of an existing PHP feature simply because your project uses it.
Is there something more to it? What's the logic behind coining a new term for something that's a standard PHP feature? Or is there some feature of Contracts that is not already in Interfaces?
Edit: To clarify, it's the usage of Contract as a proper noun in the documentation that has me confused, as explained in my comment on Thomas's post
"Contract" isn't some new terminology that Taylor coined. It's a very common term programmers use.
An interface is a contract, but a contract doesn't necessarily have to be an interface. An interface, in a nutshell, defines the contract that implementing classes must fulfill.
An abstract class is also a contract. The difference is that an abstract class can provide actual implementations, state, etc., and as a result, it is (in a sense) a more rigorous contract.
Another key difference is that a child class can only extend one abstract class, but it can implement multiple interfaces.
So basically, "contract" isn't a new naming convention. It's a common term that Taylor is using.
It's just a nice word to describe the idea of using interfaces.
Laravel contracts are just PHP interfaces, so they don't provide any other functionality.
You can read more on this subject in the documentation: http://laravel.com/docs/5.1/contracts
As others have said, "contract" is just a fancy word for interfaces, but I think Taylor made that decision to make it more personal.
What I mean by personal is that "interface" is a very broad, common word in programming: you have your own interfaces, the libraries you use have their interfaces, and so on.
"Contracts" you can simply read as the Laravel interfaces; it's like a wrapper or alias for all the interfaces that belong to this repository.
Short description: Contract is a term used for interfaces, but also for abstract classes.

Traits vs. Interfaces vs. Mixins?

What are the similarities and differences between traits, mixins, and interfaces? I am trying to get a deeper understanding of these concepts, but I don't know enough programming languages that implement these features to truly understand the similarities and differences.
For each of traits, mixins, and interfaces:
What is the problem being solved?
Is the definition of the concept consistent across programming languages?
What are the similarities between it and the others?
What are the differences between it and the others?
Every reference type in Java, except Object, derives from one single superclass.
By the way, Java classes may implement zero or more interfaces.
Generally speaking, an interface is a contract that describes the methods an implementing class is forced to have, though without directly providing an implementation.
In other words, a Java class is obliged to abide by its contract and thus to provide implementations for the method signatures declared by the interfaces it declares it implements.
An interface constitutes a type. So you can pass parameters and have return values from methods declared as interface types, requiring that way that parameters and return types implement particular methods without necessarily providing a concrete implementation for them.
This sets the basis for several abstraction patterns, like, for example, dependency injection.
Scala, for its part, has traits. Traits give you all the features of Java interfaces, with the significant difference that they can contain method implementations and variables.
Traits are a smart way of implementing methods just once and, by means of that, distributing those methods to all the classes that extend the trait.
And just as a Java class can implement more than one interface, a Scala class can mix in more than one trait.
Since I have no Ruby background, though, I'll point you to an excerpt from David Pollak's "Beginning Scala" (amazon link):
Ruby has mixins, which are collections of methods that can be mixed into any class. Because Ruby does not have static typing and there is no way to declare the types of method parameters, there’s no reasonable way to use mixins to define a contract like interfaces. Ruby mixins provide a mechanism for composing code into classes but not a mechanism for defining or enforcing parameter types.
Interfaces can do even more than is described in this post; since the topic is vast, I suggest you investigate each of the three directions further. If you already have a Java background, Scala, and therefore traits, should be approachable to learn.
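Since the question also asks about mixins, here is a small Ruby sketch of one (the module and class names are made up): the module contributes implementations to any class that includes it, but nothing checks or enforces a contract.

    module Auditable
      def audit(message)
        audit_log << "#{self.class.name}: #{message}"
      end

      def audit_log
        @audit_log ||= []
      end
    end

    class Invoice
      include Auditable
    end

    class Payment
      include Auditable
    end

    Invoice.new.audit("created")   # both classes gain the same behaviour
    Payment.new.audit("settled")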

Is the difference between different test doubles important?

I was just reading Martin Fowler's post Mocks Aren't Stubs. He defines the different test doubles (or rather references Gerard Meszaros's xUnit Test Patterns book):
Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in-memory database is a good example).
Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
Mocks are ... objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
Part one of my question would be, is this even authoritative? Is this language widely used and understood?
The second part of my question is that it seems that my mocking framework of choice, Mockito, makes it easy to blur the line, certainly between mocks and stubs.
Everything is called "mock". Either by calling the Mockito.mock() method or with a #Mock annotation, you use the word "mock" to create mocks, stubs, and sometimes dummies (if a simple "new" won't do). The exception is a "spy" which might be used to make something like a "fake", but can also be used to wrap your system under test.
Even if you didn't care about the name of the method used to create a test double, the double can be verified (or not), and you can include a capture in the verification step - which seems to cover some of what a stub does (remembering the calls that were made) and some of what a mock does (verifying that certain expected calls were made).
The reason I ask is that I try to name my doubles according to the four roles I see above, but then sometimes get confused about whether something really has the role of a stub or a mock. So, is this a deficiency of Mockito, or is this just how things have evolved and the distinction is not really important?
Actually, it's a strength of Mockito. A Mockito mock is an object on which you can either "stub" the methods, or "verify" the methods, or both. (Doing both for the same method is kind of a code smell, but that's another topic). So a Mockito mock is both a "stub" (in the Martin Fowler sense) and a "mock" (in the Martin Fowler sense); but you don't have to use it as both. Usually, a Mockito mock will act as EITHER a "stub", OR as a "mock"; less often as both.
In some other mocking frameworks, you don't have quite this level of flexibility. If you want to do "verifying" on a mock, you also have to do "stubbing". In these frameworks, the mocks MUST act as both a "stub" and as a "mock". As far as I understand, one of the factors that motivated Szczepan Faber to create Mockito was a desire to be able to separate "stub" behaviour, and "mock" behaviour (in the strict Martin Fowler senses of both words).
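The same stub-versus-verify separation exists in rspec-mocks, the framework used in the first question on this page. As a rough sketch (Notifier and the gateway API are placeholders, not Mockito code):

    # A placeholder collaborator that sends a welcome message via a gateway.
    class Notifier
      def initialize(gateway)
        @gateway = gateway
      end

      def welcome(address)
        @gateway.deliver(address)
      end
    end

    RSpec.describe Notifier do
      it "delivers a welcome message through the gateway" do
        # Stub role: canned answer only, no assertion about how it was called.
        gateway = double("EmailGateway")
        allow(gateway).to receive(:deliver).and_return(true)

        Notifier.new(gateway).welcome("alice@example.com")

        # Mock role: verify afterwards that the expected call was received.
        expect(gateway).to have_received(:deliver).with("alice@example.com")
      end
    end

You only pay for the half you use: stubbing a method does not force you to verify it, and verifying does not force you to stub anything beyond what the code under test actually needs.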
The English word "mock" means "an imitation of lesser quality than the original". This is why even hand-rolled mocks (written without the aid of a framework like Mockito) are sometimes called mocks.
The language which Martin used is now a little bit out of date. He wrote it in the context of old mocking frameworks like JMock, before the "nice mocks" came along. In that era, mocks used to be strict; any interactions which hadn't been set up and weren't expected would fail.
Nowadays we tend to think of it a different way. If I'm a class, I have some other classes that I need to help me. They're either providing information, or doing some work for me - for instance, a repository might provide a list of employees, or save a new employee.
Mocks stand in for these collaborators, and we don't tend to use expectations on mocks any more. Instead, we set up mocks to provide information, then verify that they were asked to do any jobs that need to be done. Mockito was the first framework to work this way, and that's why the distinction is blurred - because whatever you're doing, you're mocking out a collaborator, and you no longer need to set up expectations. Moq works the same way in .NET.
Mockito's mocks by default don't even care whether you use them, and don't check your interactions unless you ask (although you'll need to set up beforehand any information they have to provide - the equivalent of a "stub").
Additionally, because Mockito provides "nice" mocks, you don't need to worry about setting expectations in case a dummy object is used somewhere - you can just use Mockito to create those, as well. And, just in case you want to add some simple behavior, Mockito lets you do callbacks easily on the arguments which are passed to it, so you can create "Fakes".
It doesn't really matter what they are - you're just mocking out a collaborating class, and the flexibility means that you don't need to worry about how you do that.
Early frameworks didn't provide this flexibility, hence Martin's differentiation, intended to help you use mocks appropriately. Hope this helps clarify things and explain why Mockito's flexibility isn't a deficiency, but - as David Wallace pointed out - a strength.
According to what I understand from Gerard Meszaros's xUnit Test Patterns, a mock is a test double that includes the functionality of a dummy, a stub, and a spy. You can check out the actual comparison he draws between them on p. 742 of that book.
Also, this article might throw some light on your question. It clearly states that:
"A mock is dynamically created by a mock library (the others are
typically produced by a test developer using code). The test developer
never sees the actual code implementing the interface or base class,
but can configure the mock to provide return values, expect particular
members to be invoked, and so on. Depending on its configuration, a
mock can behave like a dummy, a stub, or a spy."
The quote above was taken from that article. IMHO, a mock is intended to blur the line between a dummy, a stub, and a spy.

Should I only be testing public interfaces in BDD? (in general, and specifically in Ruby)

I'm reading through the (still in beta) RSpec book by the Pragmatic Programmers, as I'm interested in behavioral testing of objects. From what I've gleaned so far (caveat: after only reading for 30 minutes), the basic idea is that I want to ensure my object behaves as expected 'externally', i.e. in its output and in its relation to other objects.
Is it true then that I should just be black box testing my object to ensure the proper output/interaction with other objects?
This may be completely wrong, but given all of the focus on how my object behaves in the system, it seems this is the approach one would take. If that's so, how do we focus on the implementation of an object? How do I test that my private method is doing what I want it to do for all the different types of input?
I suppose this question may be valid for all types of testing? I'm still fairly new to TDD and BDD.
If you want to understand BDD better, try thinking about it without using the word "test".
Instead of writing a test, you're going to write an example of how you can use your class (and you can't use it except through public methods). You're going to show why your class is valuable to other classes. You're defining the scope of your class's responsibilities, while showing (through mocks) what responsibilities are delegated elsewhere.
At the same time, you can question whether the responsibilities are appropriate, and tune the methods on your class to be as intuitively usable as possible. You're looking for code which is easy to understand and use, rather than code which is easy to write.
If you can think in terms of examples and providing value through behaviour, you'll create code that's easy to use, with examples and descriptions that other people can follow. You'll make your code safe and easy to change. If you think about testing, you'll pin it down so that nobody can break it. You'll make it hard to change.
If it's complex enough that there are internal methods you really want to test separately, break them out into another class then show why that class is valuable and what it does for the class that uses it.
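As a rough Ruby sketch of that suggestion (all names hypothetical): instead of testing a private parsing method on Report, move the parsing into its own small class and describe its public behaviour directly.

    class RowParser
      def parse(line)
        name, amount = line.split(",")
        { name: name.strip, amount: Integer(amount) }
      end
    end

    class Report
      def initialize(parser: RowParser.new)
        @parser = parser
      end

      def total(lines)
        lines.sum { |line| @parser.parse(line)[:amount] }
      end
    end

    RSpec.describe RowParser do
      it "splits a line into a name and an amount" do
        expect(RowParser.new.parse("widgets, 3")).to eq(name: "widgets", amount: 3)
      end
    end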
Hope this helps!
I think there are two issues here.
One is that from the BDD perspective, you are typically testing at a higher level than from the TDD perspective. So your BDD tests will assert a bigger piece of functionality than your TDD tests and should always be "black box" tests.
The second is that if you feel the need to test private methods, even at the unit-test level, that could be a code smell: your code may be violating the Single Responsibility Principle and should be refactored so that the methods you care about can be tested as public methods of a different class. Michael Feathers gave an interesting talk about this recently called "The Deep Synergy Between Testability and Good Design."
Yes, focus on the exposed functionality of the class. Private methods are just part of a public function you will test. This point is a bit controversial, but in my opinion it should be enough to test the public functionality of a class (testing anything else also violates the OOP principle of encapsulation).

Too many public methods forced by test-driven development

A very specific question from a novice to TDD:
I separate my tests and my application into different packages. Thus, most of my application methods have to be public for tests to access them. As I progress, it becomes obvious that some methods could become private, but if I make that change, the tests that access them won't work. Am I missing a step, or doing something wrong, or is this just one downfall of TDD?
This is not a downfall of TDD, but rather of an approach to testing that believes you need to test every property and every method. In fact you should not care about private methods when testing, because they should only exist to facilitate some public portion of the API.
Never change something from private to public for testing purposes!
You should be trying to verify only publicly visible behavior. The rest are implementation details and you specifically want to avoid testing those. TDD is meant to give you a set of tests that will allow you to easily change the implementation details without breaking the tests (changing behavior).
Let's say I have a type, MyClass, and I want to test the DoStuff method. All I care about is that the DoStuff method does something meaningful and returns the expected results. It may call a hundred private methods to get to that point, but I don't care as the consumer of that method.
You don't specify what language you are using, but certainly in most of them you can put the tests in a way that have more privileged access to the class. In Java, for example, the test can be in the same package, with the actual class file being in a different directory so it is separate from production code.
However, when you are doing real TDD, the tests are driving the class design, so if you have a method that exists just to test some subset of functionality, you are probably (not always) doing something wrong, and you should look at techniques like dependency injection and mocking to better guide your design.
This is where the old saying, "TDD is about design," frequently comes up. A class with too many public methods probably has too many responsibilities - and the fact that you are test-driving it only exposes that; it doesn't cause the problem.
When you find yourself in this situation, the best solution is frequently to find some subset of the public methods that can be extracted into a new class ("sprout class"), then give your original class an instance variable of the sprouted class. The public methods deserve to be public in the new class, but they are now - with respect to the API of the original class - private. And you now have better adherence to SRP, looser coupling, and higher cohesion - better design.
All because TDD exposed features of your class that would otherwise have slid in under the radar. TDD is about design.
At least in Java, it's good practice to have two source trees, one for the code and one for the tests. So you can put your code and your tests in the same package, while they're still in different directories:
src/org/my/xy/X.java
test/org/my/xy/TestX.java
Then you can make your methods package-private.
