I'm reading some tutorials about SOLID programming, and I'm trying to refactor my test project to implement some of those rules.
I often have doubts about the Single Responsibility Principle, so I hope someone can help me with that.
As I understand it, SRP means that (in the case of a function) a function should be responsible for only one thing. That seems pretty easy and simple, but I still fall into the trap of doing more than one thing.
This is a simplified example:
class TicketService {
    private ticket;

    getTicket() {
        httpClient.get().then((response) => {
            this.ticket = response.ticket;
            emit(this.ticket); // <-- the confusing part
        });
    }
}
The confusing part is emit(ticket). My function is named getTicket, and that's exactly what I'm doing there (e.g. fetching it from the server), but on the other hand I need to emit that change to all other parts of my application and let them know the ticket has changed.
I could create a separate set() function that sets the private variable and emits the change there, but that seems like the same thing.
Is this wrong? Does it break the rule? How would you fix it?
You could also return the ticket from the getTicket() function, and then have a separate function called setUpdatedTicket() that takes a ticket, sets the private field, and at the end calls the emit function.
Emitting inside getTicket() can lead to unexpected behavior: if I want to reuse your class in the future and my IDE's auto-completion shows me a method getTicket(), I expect to get a Ticket back.
However, if you rename the emitting method to something like mailChangedTicket, ideally you would have it call the getTicket method (which actually returns the ticket); that way you have reusable code that makes more sense.
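For illustration, here is a rough C# sketch of that split; the Func-based fetch and the TicketChanged event are stand-ins for the original httpClient and emit():

using System;
using System.Threading.Tasks;

public class Ticket { }

public class TicketService
{
    private readonly Func<Task<Ticket>> _fetchTicket; // e.g. wraps the HTTP call
    private Ticket _ticket;

    // Listeners subscribe here; raising it stands in for emit().
    public event Action<Ticket> TicketChanged;

    public TicketService(Func<Task<Ticket>> fetchTicket) { _fetchTicket = fetchTicket; }

    // Does one thing: fetch and return the ticket.
    public Task<Ticket> GetTicket() { return _fetchTicket(); }

    // Does one thing: store the new ticket and notify the rest of the app.
    public void SetUpdatedTicket(Ticket ticket)
    {
        _ticket = ticket;
        TicketChanged?.Invoke(_ticket);
    }
}

A caller would then write something like service.SetUpdatedTicket(await service.GetTicket()); so that fetching and broadcasting stay separate, explicit steps.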
You can take SRP really far. For example, your TicketService has an httpClient, but it probably doesn't matter where the ticket comes from. In order to 'fix' this, you would create a separate interface and class for the ticket source (sketched after the list below).
A few advantages:
Code is becoming more re-usable
It is easier to test parts separately
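For example, a minimal sketch of that extraction (ITicketSource and HttpTicketSource are invented names; this reuses the Ticket type and usings from the sketch above):

// The service no longer cares where tickets come from.
public interface ITicketSource
{
    Task<Ticket> FetchTicket();
}

// One possible implementation; a test double could stand in for it.
public class HttpTicketSource : ITicketSource
{
    public Task<Ticket> FetchTicket()
    {
        // ... perform the HTTP request here (stubbed in this sketch) ...
        return Task.FromResult(new Ticket());
    }
}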
I can recommend the book 'Clean Code' by Robert C. Martin, which gives some good guidelines for achieving this.
We plan to apply the command pattern in our process management project: there are
a Command interface to implement
a CommandProcessor, which actually executes the tasks
The CommandProcessor is passed to each Command instance through the constructor, so that the execute() method of a Command eventually triggers the real execution in the CommandProcessor.
So the code of CommandProcessor looks like this:
public class CommandProcessor {
    public void doWork1() {
        // implementation
    }
    public void doWork2() {
        // implementation
    }
    public void doWork3() {
        // implementation
    }
    ...
    public void doWork200() {
        // implementation
    }
}
As the code snippet indicates, the downside of the command pattern in our use case is that there might be hundreds of commands, so the CommandProcessor may become difficult to maintain in the long term. How can we resolve this drawback?
This does not feel like the command pattern to me. The design you have highlighted does not hide the implementation details from the caller. It puts all the logic for deciding which method to call in the caller, rather than hiding it behind an interface.
The main reason for the Command pattern is that the caller of a command does not need to know anything at all about what the command is or what it does; all of that is encapsulated in the command itself.
I feel your fears about having 200 command methods have merit. First, consider what happens when you add, remove, or change the signature of any of these work methods. Not only do you need to change the interface, but also all the concrete classes that implement that interface, and all the locations where the interface is called.
Typically the command pattern has a single execute interface; see the Wikipedia article on the command_pattern for a description.
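For illustration, a minimal C# sketch of that single-method shape (all names here are invented):

// Each command exposes a single Execute() and encapsulates its receiver.
public interface ICommand
{
    void Execute();
}

// The receiver owns the real work that used to live in a doWorkN() method.
public class JobQueue
{
    public void StartNext() { /* real work lives here */ }
}

public class StartJobCommand : ICommand
{
    private readonly JobQueue _queue;

    public StartJobCommand(JobQueue queue) { _queue = queue; }

    public void Execute() { _queue.StartNext(); }
}

// The caller never needs to know which of the 200 operations it runs.
public class Invoker
{
    public void Run(ICommand command) { command.Execute(); }
}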
As @robert mentioned in your comments, I think you should rethink your API.
Edit
After a good conversation with @Rui I understand the question better and have the following to say.
Whilst I misunderstood the original question, I'm still prepared to stand by the statement that a redesign is needed. Whilst the command pattern is being followed, I don't think the spirit of the pattern is. The object passed to the commands (the CommandProcessor) is the receiver of the commands' actions, but it does not look like a proper object. Obviously I lack context, but for me any object whose name ends in -er raises big flags as not really being an object, but rather a collection of methods that act on someone else's data.
Here is a good little write-up with some links. I often find that classes like managers, processors, or helpers go hand in hand with an Anemic Domain Model. Maybe you are at the beginning of the right track, and I would challenge you to look at that CommandProcessor and see if it's not actually a number of discrete objects that can encapsulate their own data and methods.
I have been using Magento for a while now and I still can't decide between using the magic getter and getData().
Can someone explain the main difference, apart from the slight performance overhead (and it must be very slight)?
I am thinking in terms of:
Future-proofing (I think Magento 2 will not be using the magic getter)
Style
Performance
Stability
Any other reasons to use one over the other
There is no clear way to go based on the core code, as it uses a mixture of both.
There's no one answer to fit all situations and it's best to decide based on the model you are using and the particular use case.
Performance of the magic methods is quite poor, with the extra overhead of converting from CamelCase to under_score on each access.
The magic methods are basically a wrapper for getData() anyway, with extra overhead.
There is one advantage to using the magic methods, though. For example, if you use getAttributeName() rather than getData('attribute_name'), then at some point in the future the model may be updated to include a real, concrete getAttributeName() method, in which case your code will still work fine. However, if you have used getData(), you access the attribute directly and bypass the new method, which could include some important calculations.
In my opinion, the safest way is to always use getData($key). The magic getter uses the same method, as you already pointed out.
The advantage is that you can find all references to getData in your code and change them appropriately in case the getData() method is ever refactored. Compare that with having to track down all the magic method calls, which are all named differently.
The second thing is that the magic getter can easily trip you up when you have a real method with the same name (I think getName() got me once, and it took quite some time to debug).
So my vote is definitely for using getData().
As stated before, it's best to use getData over the magic methods. Just wanted to add 2 quick points:
1) The performance overhead is not that slight, especially because of the implementation of _underscore in Varien_Object (as mentioned by Andrew).
2) The implementation of getData has some logic that helps "prettify" code; although it is a little slower than typical getData calls, it is still much faster than the magic methods.
If you have nested Varien_Objects, so that you need to perform a call like:
$firstObject->getData('second_object')->getData('third_object')->getData('some_string');
you can also perform that call like this:
$firstObject->getData('second_object/third_object/some_string');
Given that there is a file selection widget in the view, and the controller needs to handle the event of selecting a file, should I write the controller method as:
public void fileSelected(String filePath) {
    // process filePath
}
or
public void fileSelected() {
    String filePath = view.getSelectedFilePath();
    // process filePath
}
The first approach seems to introduce less coupling between C and V: C doesn't have to know anything about V or how to obtain the data it needs while handling a given event.
But it requires creating a lot of verbose methods like getSelectedFile on the V side.
On the other hand, the second approach may lead to cluttered controller methods in cases more complex than this example (much more data to fetch than just filePath).
From your own experience, which approach do you prefer?
The first approach is my favourite. The only difference is I would rather use an object (like Mario suggested) to pass arguments to the method. This way the method's signature will not change when you add or remove some of the arguments. Less coupling is always good :)
One more thing:
If you want to try the second solution, I recommend using a ViewFactory to remove view logic from the controller.
The first approach is the way to go:
public void fileSelected(String filePath) {
    // process filePath
}
The Controller should not care about how the View looks or how it's implemented. It also gets much clearer for the developer, when creating or updating the view, to know what an action in the controller wants. And it makes method overloading easier.
Though, I don't really know how String filePath = view.getSelectedFilePath(); would work. Are we talking about parsing the View code/markup?
On the other hand, the second approach may lead to cluttered controller methods in cases more complex than this example (much more data to fetch than just filePath).
That's when you would create a View Model class (let's say we name it MyViewModel) to store all the properties that you need to send (be it 10 properties), and then pass that into the action: fileSelected(MyViewModel model). That's how it's intended to be used, and what the *ModelBinders in ASP.NET MVC are there to help you with.
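A minimal C# sketch of that idea (the properties are invented for illustration):

// Carries everything the action needs; adding a property later
// does not change the method signature.
public class MyViewModel
{
    public string FilePath { get; set; }
    public string FileName { get; set; }
    public long FileSizeBytes { get; set; }
}

public class FileController
{
    public void FileSelected(MyViewModel model)
    {
        // process model.FilePath, model.FileSizeBytes, ...
    }
}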
I think you need to look at this from a step back.
Worry less about how it gets in, and be more concerned with validation and error raising.
Tomorrow, your requirements could change and demand that you source the information via a different architectural approach. You could refactor the setup of [inputs / an input object] into a base controller class - or one of several classes for different controller domains.
If you focus on proper validation, whether within the controller (scrubbing) or outside of it (unit tests), then you achieve more thorough decoupling through duck typing.
I would go with the first approach. It's reusable and separates concerns. Even if the way of getting the filePath were to change in the future, it wouldn't affect your method's functionality.
Good style (per the Clean Code book) says that a method's name should describe what the method does. So, for example, if I have a method that verifies an address, stores it in a database, and sends an email, should the name be something such as verifyAddressAndStoreToDatabaseAndSendEmail(address);
or
verifyAddress_StoreToDatabase_SendEmail(address);
Although I can divide that functionality into 3 methods, I'll still need a method to call those 3 methods, so a long method name seems inevitable.
Having And-named methods certainly describes what the method does, but IMO it's not very readable, as the names can get very long. How would you solve this?
EDIT: Maybe I could use fluent style to decompose the method name such as:
verifyAddress(address).storeToDatabase().sendEmail();
but I need a way to ensure the order of invocation. Maybe by using the state pattern, but this causes the code to grow.
How I approach this is to make the 3 smaller methods as you mentioned, and then name the higher-level method that calls the 3 smaller ones after the "why": why do I need to do those three things?
Try to define why you need to do those steps and use that as the basis of the method name.
A single method should not do 3 things. Thus divide the work into 3 methods (a sketch of the calling method follows the list):
verifyAddress
storeAddress
sendEmail
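For example, a hedged C# sketch where the wrapper is named for the "why" (RegisterCustomer is a made-up example of such a name):

public class CustomerRegistration
{
    // Named for the "why": registering a customer is the reason
    // we verify, store, and notify.
    public void RegisterCustomer(Address address)
    {
        VerifyAddress(address);
        StoreAddress(address);
        SendConfirmationEmail(address);
    }

    private void VerifyAddress(Address address) { /* ... */ }
    private void StoreAddress(Address address) { /* ... */ }
    private void SendConfirmationEmail(Address address) { /* ... */ }
}

public class Address { }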
I'm following up on my previous comment, but I've got more here than would fit reasonably in a comment so I'm answering.
The details of the method belong in the documentation, not in the name of the method (in my opinion). Think of it this way... By putting SendEmail in the name of the method, you're committing implementation details to the method name. What if a decision is made down the road to send notification via SMS or Twitter or something else instead of email? Do you change the name of the method and break your API, or do you keep a method name that misleads the consumers of the API? Something to consider.
If you insist on keeping the functionality of the method in its name, I'd urge you to find something more generic. Perhaps something along the lines of VerifySaveAndNotify(Address address). That way, the method name tells you what it's doing without specifying how it does it. The parameter of type Address lets you know what is being verified and saved. All of that works together to make your method name informative, flexible, and terse.
EDIT: Maybe I could use fluent style to decompose the method name such as:
verifyAddress(address).storeToDatabase().sendEmail();
but I need a way to ensure the order of invocation. Maybe by using the state pattern, but this causes the code to grow.
To ensure ordering of commands in a fluent style, each result would be an object that exposes only the functionality required by the next step. For example:
public class Verifier
{
    public DataStorer VerifyAddress(string address)
    {
        ...
        return new DataStorer(address);
    }
}

public class DataStorer
{
    public Emailer StoreToDataBase()
    {
        ...
        return new Emailer(...);
    }
}

public class Emailer
{
    public void SendEmail()
    {
        ...
    }
}
This is handy if you need to create a very granular design and want to optimise your classes for reusability, but it is likely to be design overkill in most circumstances. It's probably better, as others have said, to choose a name that represents what the whole process is supposed to represent. You could simply call it StoreAndEmail, making the assumption that verification is something you do routinely before committing data to any destination. The alternative, if you don't mind names being long, is simply to describe it in full and accept that a long name is necessary. In the end, it really doesn't cost you anything, but it can certainly make your code more specific in its intent.
C#, NUnit, and Rhino Mocks, if that turns out to be applicable.
My quest with TDD continues as I attempt to wrap tests around a complicated function. Let's say I'm coding a form that, when saved, has to also save dependent objects within the form...answers to form questions, attachments if available, and "log" entries (such as "blahblah updated the form." or "blahblah attached a file."). This save function also fires off emails to various people depending on how the state of the form changed during the save function.
This means that in order to fully test the form's save function with all of its dependencies, I have to inject five or six data providers to test this one function and make sure everything fired off in the right way and order. This is cumbersome, especially when writing the multiple chained constructors for the form object to insert the mocked providers. I think I'm missing something, either in the way of refactoring or simply a better way to set up the mocked data providers.
Should I further study refactoring methods to see how this function can be simplified? How does the observer pattern sound, so that the dependent objects detect when the parent form is saved and handle themselves? I know that people say to split out the function so it can be tested... meaning I test the individual save functions of each dependent object, but not the save function of the form itself, which dictates how each should save itself in the first place?
First, if you are following TDD, then you don't wrap tests around a complicated function. You wrap the function around your tests. Actually, even that's not right. You interweave your tests and functions, writing both at almost exactly the same time, with the tests just a little ahead of the functions. See The Three Laws of TDD.
When you follow these three laws, and are diligent about refactoring, then you never wind up with "a complicated function". Rather you wind up with many, tested, simple functions.
Now, on to your point. If you already have "a complicated function" and you want to wrap tests around it then you should:
Add your mocks explicitly, instead of through DI. (e.g. something horrible like a 'test' flag and an 'if' statement that selects the mocks instead of the real objects).
Write a few tests in order to cover the basic operation of the component.
Refactor mercilessly, breaking up the complicated function into many little simple functions, while running your cobbled together tests as often as possible.
Push the 'test' flag as high as possible. As you refactor, pass your data sources down to the small simple functions. Don't let the 'test' flag infect any but the topmost function.
Rewrite tests. As you refactor, rewrite as many tests as possible to call the simple little functions instead of the big top-level function. You can pass your mocks into the simple functions from your tests.
Get rid of the 'test' flag and determine how much DI you really need. Since you now have tests written at the lower levels that can insert mocks through arguments, you probably don't need to mock out many data sources at the top level anymore.
If, after all this, the DI is still cumbersome, then think about injecting a single object that holds references to all your data sources. It's always easier to inject one thing rather than many.
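To illustrate that last point, a minimal sketch of one holder object for the data sources (all names invented):

// One injection point instead of many; each source remains individually mockable.
public interface IAnswerSource { }
public interface IAttachmentSource { }
public interface ILogSource { }

public class DataSources
{
    public IAnswerSource Answers { get; set; }
    public IAttachmentSource Attachments { get; set; }
    public ILogSource Logs { get; set; }
}

// The form takes a single dependency and reaches through it as needed.
public class FormModel
{
    private readonly DataSources _sources;

    public FormModel(DataSources sources) { _sources = sources; }
}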
Use an AutoMocking container. There is one written for RhinoMocks.
Imagine you have a class with a lot of dependencies injected via constructor injection. Here's what it looks like to set it up with RhinoMocks, no AutoMocking container:
private MockRepository _mocks;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
private IAddNewBroadcastEventBroker _addNewBroadcastEventBroker;
private IBroadcastService _broadcastService;
private IChannelService _channelService;
private IDeviceService _deviceService;
private IDialogFactory _dialogFactory;
private IMessageBoxService _messageBoxService;
private ITouchScreenService _touchScreenService;
private IDeviceBroadcastFactory _deviceBroadcastFactory;
private IFileBroadcastFactory _fileBroadcastFactory;
private IBroadcastServiceCallback _broadcastServiceCallback;
private IChannelServiceCallback _channelServiceCallback;
[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _addNewBroadcastEventBroker = _mocks.DynamicMock<IAddNewBroadcastEventBroker>();
    _broadcastService = _mocks.DynamicMock<IBroadcastService>();
    _channelService = _mocks.DynamicMock<IChannelService>();
    _deviceService = _mocks.DynamicMock<IDeviceService>();
    _dialogFactory = _mocks.DynamicMock<IDialogFactory>();
    _messageBoxService = _mocks.DynamicMock<IMessageBoxService>();
    _touchScreenService = _mocks.DynamicMock<ITouchScreenService>();
    _deviceBroadcastFactory = _mocks.DynamicMock<IDeviceBroadcastFactory>();
    _fileBroadcastFactory = _mocks.DynamicMock<IFileBroadcastFactory>();
    _broadcastServiceCallback = _mocks.DynamicMock<IBroadcastServiceCallback>();
    _channelServiceCallback = _mocks.DynamicMock<IChannelServiceCallback>();
    _presenter = new BroadcastListViewPresenter(
        _addNewBroadcastEventBroker,
        _broadcastService,
        _channelService,
        _deviceService,
        _dialogFactory,
        _messageBoxService,
        _touchScreenService,
        _deviceBroadcastFactory,
        _fileBroadcastFactory,
        _broadcastServiceCallback,
        _channelServiceCallback);
    _presenter.View = _view;
}
Now, here's the same thing with an AutoMocking container:
private MockRepository _mocks;
private AutoMockingContainer _container;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _container = new AutoMockingContainer(_mocks);
    _container.Initialize();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _presenter = _container.Create<BroadcastListViewPresenter>();
    _presenter.View = _view;
}
Easier, yes?
The AutoMocking container automatically creates mocks for every dependency in the constructor, and you can access them for testing like so:
using (_mocks.Record())
{
    _container.Get<IChannelService>().Expect(cs => cs.ChannelIsBroadcasting(channel)).Return(false);
    _container.Get<IBroadcastService>().Expect(bs => bs.Start(8));
}
Hope that helps. I know my testing life has been made a whole lot easier with the advent of the AutoMocking container.
You're right that it can be cumbersome.
Proponents of the mocking methodology would point out that the code is written improperly to begin with; that is, you shouldn't be constructing dependent objects inside this method. Rather, the injection APIs should have functions that create the appropriate objects.
As for mocking up 6 different objects: that's true. However, if you are also unit-testing those systems, those objects should already have mocking infrastructure you can use.
Finally, use a mocking framework that does some of the work for you.
I don't have your code, but my first reaction is that your test is trying to tell you that your object has too many collaborators. In cases like this, I always find that there's a missing construct in there that should be packaged up into a higher level structure. Using an automocking container is just muzzling the feedback you're getting from your tests. See http://www.mockobjects.com/2007/04/test-smell-bloated-constructor.html for a longer discussion.
In this context, I usually find statements along the lines of "this indicates that your object has too many dependencies" or "your object has too many collaborators" to be a fairly specious claim. Of course a MVC controller or a form is going to be calling lots of different services and objects to fulfill its duties; it is, after all, sitting at the top layer of the application. You can smoosh some of these dependencies together into higher-level objects (say, a ShippingMethodRepository and a TransitTimeCalculator get combined into a ShippingRateFinder), but this only goes so far, especially for these top-level, presentation-oriented objects. That's one less object to mock, but you've just obfuscated the actual dependencies via one layer of indirection, not actually removed them.
One blasphemous piece of advice is to say that if you are dependency injecting an object and creating an interface for it that is quite unlikely to ever change (Are you really going to drop in a new MessageBoxService while changing your code? Really?), then don't bother. That dependency is part of the expected behavior of the object and you should just test them together since the integration test is where the real business value lies.
The other blasphemous piece of advice is that I usually see little utility in unit testing MVC controllers or Windows Forms. Every time I see someone mocking the HttpContext and testing to see if a cookie was set, I want to scream. Who cares if the AccountController set a cookie? I don't. The cookie has nothing to do with treating the controller as a black box; an integration test is what is needed to test its functionality (hmm, a call to PrivilegedArea() failed after Login() in the integration test). This way, you avoid invalidating a million useless unit tests if the format of the login cookie ever changes.
Save the unit tests for the object model, save the integration tests for the presentation layer, and avoid mock objects when possible. If mocking a particular dependency is hard, it's time to be pragmatic: just don't do the unit test and write an integration test instead and stop wasting your time.
The simple answer is that the code you are trying to test is doing too much. I think sticking to the Single Responsibility Principle might help.
The Save button method should only contain top-level calls that delegate things to other objects. Those objects can then be abstracted behind interfaces. Then, when you test the Save button method, you only test its interaction with the mocked objects.
The next step is to write tests for those lower-level classes, but things should get easier since you only test them in isolation. If you need complex test setup code, that is a good indicator of a bad design (or a bad testing approach).
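A minimal sketch of what that delegation might look like (the interface names are invented):

// Save() only coordinates; each collaborator hides its own work
// and can be replaced by a mock in tests.
public interface IAnswerSaver { void Save(Form form); }
public interface IAttachmentSaver { void Save(Form form); }
public interface INotifier { void NotifyStateChanged(Form form); }
public class Form { }

public class FormPresenter
{
    private readonly IAnswerSaver _answers;
    private readonly IAttachmentSaver _attachments;
    private readonly INotifier _notifier;

    public FormPresenter(IAnswerSaver answers, IAttachmentSaver attachments, INotifier notifier)
    {
        _answers = answers;
        _attachments = attachments;
        _notifier = notifier;
    }

    public void Save(Form form)
    {
        _answers.Save(form);
        _attachments.Save(form);
        _notifier.NotifyStateChanged(form);
    }
}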
Recommended reading:
Clean Code: A Handbook of Agile Software Craftsmanship
Google's guide to writing testable code
Constructor DI isn't the only way to do DI. Since you're using C#, if your constructor does no significant work you could use Property DI. That simplifies things greatly in terms of your object's constructors, at the expense of complexity in your function: the function must check the dependent properties for null and throw an InvalidOperationException if any of them are null, before it begins work.
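A small sketch of that approach (IEmailSender is an invented dependency):

using System;

public interface IEmailSender { void Send(string to, string body); }

public class FormSaver
{
    // Property DI: the collaborator is assigned after construction.
    public IEmailSender EmailSender { get; set; }

    public void Save()
    {
        // Guard against missing dependencies before doing any work.
        if (EmailSender == null)
            throw new InvalidOperationException("EmailSender must be set before calling Save().");

        // ... save the form, then notify ...
        EmailSender.Send("someone@example.com", "Form saved.");
    }
}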
When something is hard to test, it is usually a symptom of code quality: the code is not testable (mentioned in this podcast, IIRC). The recommendation is to refactor the code so that it becomes easy to test. Some heuristics for deciding how to split the code into classes are the SRP and OCP. For more specific instructions, it would be necessary to see the code in question.