How to deal with too many methods in the CommandProcessor when applying the command pattern (refactoring)?

We plan to apply the command pattern in our process management project. There are:
a Command interface to implement
a CommandProcessor, which actually executes the tasks
The CommandProcessor is passed to each Command instance through the constructor, so that the execute() method of a Command eventually triggers the real execution in the CommandProcessor.
So the code of the CommandProcessor looks like this:
public class CommandProcessor {
    public void doWork1() {
        // implementation
    }
    public void doWork2() {
        // implementation
    }
    public void doWork3() {
        // implementation
    }
    ...
    public void doWork200() {
        // implementation
    }
}
As the code snippet indicates, the downside of the command pattern in our use case is that there might be hundreds of commands, so the CommandProcessor may become difficult to maintain in the long term. How can we resolve this drawback?
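For context, each command presumably just delegates to one of those methods, roughly like the following sketch (Work1Command is an invented name; only the wiring described above is taken from the question):

// Hypothetical illustration of the wiring described in the question.
public interface Command {
    void execute();
}

public class Work1Command implements Command {
    private final CommandProcessor processor;

    public Work1Command(CommandProcessor processor) {
        this.processor = processor;
    }

    @Override
    public void execute() {
        // Every new doWorkN() method needs a matching command like this one.
        processor.doWork1();
    }
}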

This does not feel like the command pattern to me. The design you have highlighted does not hide the implementation details from the caller. It puts all the logic for deciding which method to call in the caller, rather than hiding it behind an interface.
The main reason for the Command pattern is that the caller does not need to know anything at all about what the command is or what it does; all of that is encapsulated in the command itself.
I feel your fears about having 200 command methods have merit. Consider what happens when you add, remove, or change the signature of any of these work methods: not only do you need to change the interface, but also all the concrete classes that implement it, and every location where the interface is called.
Typically the command pattern has a single execute interface; see the Wikipedia article on the command_pattern for a description.
As @robert mentioned in the comments, I think you should rethink your API.
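For reference, a minimal sketch of that canonical shape in Java: a single execute() method, with each concrete command encapsulating its own receiver and logic (ShipOrderCommand and Order are invented names, not from the question):

public interface Command {
    void execute();
}

// Each command carries its own receiver and parameters instead of
// funnelling everything through one giant processor class.
public class ShipOrderCommand implements Command {
    private final Order order;

    public ShipOrderCommand(Order order) {
        this.order = order;
    }

    @Override
    public void execute() {
        order.ship();
    }
}

// The invoker only knows about Command, never about the concrete work.
public class CommandInvoker {
    public void run(Command command) {
        command.execute();
    }
}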
Edit
After a good conversation with @Rui I understand the question better and have the following to say.
Whilst I misunderstood the original question, I'm still prepared to stand by the statement that a redesign is needed. Whilst the letter of the command pattern is being followed, I don't think the spirit of it is. The object passed to the commands (the CommandProcessor) is the receiver of the commands' actions, but it does not look like a proper object. Obviously I lack context, but for me any class whose name ends in -er raises big flags that it is not really an object, but rather a collection of methods that act on someone else's data.
Here is a good little write-up with some links. I often find that classes like managers, processors or helpers go hand in hand with an Anemic Domain Model. Maybe you are at the beginning of the right track, and I would challenge you to look at that CommandProcessor and see whether it is not actually a number of discrete objects that can encapsulate their own data and methods.
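To make that last suggestion concrete, here is a hedged sketch of what pulling one discrete object out of the processor might look like (Shipment and redirectTo are invented; only you know what real domain objects are hiding in there):

// Instead of CommandProcessor.doWork1()..doWork200(), each domain object owns
// the data it operates on and exposes the behaviour itself.
public class Shipment {
    private String destination;

    public void redirectTo(String newDestination) {
        this.destination = newDestination;
    }
}

// A command then wraps an operation on that object rather than on one giant processor.
// (Command is the single-method interface from the sketches above.)
public class RedirectShipmentCommand implements Command {
    private final Shipment shipment;
    private final String destination;

    public RedirectShipmentCommand(Shipment shipment, String destination) {
        this.shipment = shipment;
        this.destination = destination;
    }

    @Override
    public void execute() {
        shipment.redirectTo(destination);
    }
}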

Related

Single responsibility principle - function

I'm reading some tutorials about SOLID programming, and I'm trying to refactor my test project to apply some of those rules.
I often have doubts about the Single Responsibility Principle, so I hope someone can help me with that.
As I understand it, SRP means that (in the case of a function) a function should be responsible for only one thing. That seems pretty easy and simple, but I still fall into the trap of doing more than one thing.
This is a simplified example:
class TicketService {
    private ticket;

    getTicket() {
        httpClient.get().then((response) => {
            this.ticket = response.ticket;
            emit(this.ticket); // <---------------------- the confusing part
        });
    }
}
The confusing part is emit(ticket). My function is named getTicket, and that's exactly what it does (e.g. fetching the ticket from the server), but on the other hand I also need to emit that change to all other parts of my application and let them know that the ticket has changed.
I could create a separate set() function that sets the private variable and emits it there, but that seems like the same thing.
Is this wrong? Does it break the rule? How would you fix it?
You could also return the ticket from the getTicket() function, and then have a separate function called setUpdatedTicket() that takes a ticket, sets the private field, and calls the emit function at the end.
The code as it stands can lead to unexpected behavior: if I want to re-use your class in the future and my IDE's auto-completion shows me the method getTicket(), I expect to get a Ticket back.
If you instead rename the current method to something like mailChangedTicket, it should ideally call the getTicket() method (which actually returns the ticket); that way you have re-usable code that makes more sense.
You can take SRP really far: your TicketService has an httpClient, but it probably doesn't matter where the ticket comes from. In order to 'fix' this, you would have to create a separate interface and class for it.
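A minimal sketch of that split, written in Java for consistency with the other examples in this collection (Ticket, TicketClient and TicketListener are assumed abstractions, not part of the original question):

public class TicketService {
    private final TicketClient client;     // hides where the ticket comes from
    private final TicketListener listener; // whoever needs to hear about changes
    private Ticket ticket;

    public TicketService(TicketClient client, TicketListener listener) {
        this.client = client;
        this.listener = listener;
    }

    // Only fetches and returns; no hidden side effects.
    public Ticket getTicket() {
        return client.fetchTicket();
    }

    // Only stores and notifies; callers decide when a change has happened.
    public void setUpdatedTicket(Ticket ticket) {
        this.ticket = ticket;
        listener.ticketChanged(ticket);
    }
}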
A few advantages:
The code becomes more reusable
It is easier to test parts separately
I can recommend the book 'Clean Code' by Robert C. Martin, which gives some good guidelines on achieving this.

Is there a good way to use polymorphism to remove this switch statement?

I've been reading on refactoring and replacing conditional statements with polymorphism. The trouble I have is that it only seems to make sense to me when you have a more complex case where, without polymorphism, you would have to repeat the same switch statements or if-elses many times. I don't see how it makes sense if you're only doing it once - you have to have that conditional somewhere, right?
As an example, I recently wrote the following class, which is responsible for reading an XML file and converting its data into the program's objects. There are two possible formats for the file that we support, so I simply wrote a method in the class to handle each one, and used a switch to determine which one to use:
public class ComponentXmlReader
{
    public IEnumerable<UserComponent> ImportComponentsFromXml(string path)
    {
        var xmlFile = XElement.Load(path);
        switch (xmlFile.Name.LocalName)
        {
            case "CaseDefinition":
                return ImportComponentsFromA(xmlFile);
            case "Root":
                return ImportComponentsFromB(xmlFile);
            default:
                // Needed so all code paths return or throw.
                throw new NotSupportedException("Unknown root element: " + xmlFile.Name.LocalName);
        }
    }

    private IEnumerable<UserComponent> ImportComponentsFromA(XContainer file)
    {
        //do stuff
    }

    private IEnumerable<UserComponent> ImportComponentsFromB(XContainer file)
    {
        //do stuff
    }
}
As far as I can tell, I could write a class hierarchy to do the parsing, but I don't see the advantage here - I'd still have to use a switch to determine which class to instantiate. It looks to me like extra complexity for no benefit. If I were going to keep these classes around and do more things with them that depended on the file type, then it would eliminate doing the same switch in multiple places, but this is single-use. Is this right, or is there some reason or technique I'm not seeing that makes it a good idea to use a polymorphic class hierarchy here?
If you had, say, an abstract ComponentImporter class with concrete subclasses FromA and FromB, you could instantiate one of each and put them in a map. Then you could call componentImporterMap.get(xmlFile.Name.LocalName).importComponents() and avoid the switch.
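A rough sketch of that idea (in Java rather than C#; XmlSource is a hypothetical stand-in for the XElement-specific parts, and the parsing bodies are elided):

import java.util.List;
import java.util.Map;

interface XmlSource { String rootName(); }   // stand-in for the XML input
record UserComponent(String name) {}         // stand-in for the real component type

abstract class ComponentImporter {
    // Shared parsing helpers can live here; subclasses handle the format-specific parts.
    abstract List<UserComponent> importComponents(XmlSource file);
}

class CaseDefinitionImporter extends ComponentImporter {
    List<UserComponent> importComponents(XmlSource file) { /* parse format A */ return List.of(); }
}

class RootImporter extends ComponentImporter {
    List<UserComponent> importComponents(XmlSource file) { /* parse format B */ return List.of(); }
}

class ComponentXmlReader {
    private final Map<String, ComponentImporter> importers = Map.of(
            "CaseDefinition", new CaseDefinitionImporter(),
            "Root", new RootImporter());

    List<UserComponent> importComponentsFromXml(XmlSource file) {
        // The map lookup replaces the switch; unknown roots surface as a clear error.
        ComponentImporter importer = importers.get(file.rootName());
        if (importer == null) {
            throw new IllegalArgumentException("Unknown root element: " + file.rootName());
        }
        return importer.importComponents(file);
    }
}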
As with all design choices, context is key. In this case, you have what seems to be a fairly simple class handling two very similar tasks. If the two Import methods contained very little duplicate code, then including them in a single class is perhaps the best choice since, as you say, it reduces complexity.
However, it's possible you'll use this class in the future, and even add new types of imports. In that case, the class would be more reusable if it was polymorphic.
Additionally, since these methods sound very similar, you're likely to have a bunch of duplicate code, which you could keep in a base class and only put import-specific code in the child classes.
Plus, as Carl mentions, there are a number of ways to implement this logic without using a case statement.

Replace lots of switches with polymorphism but no type code

I have a class which could benefit from the state pattern. However, the common "Replace Type Code with State/Strategy" refactoring does not seem to fit my case well: the state is calculated by watching other objects; there is no type code variable.
Most of my class code just "calculates" some state when it is called and then runs the functions for that state.
Forcing a type code variable feels wrong because:
I would be forced to call an updateState() function in every place where the polymorphic functions are used.
My class would no longer be 100% behavior, which I would rather have instead of some internal state.
Since the state must be calculated every single time its functions are called, I wonder if I am thinking about the wrong pattern.
Normally I refactor this:
if (this.someOtherThingIsRunning()) {
...
} else {
...
}
like this:
typecode.doSomething()
// that being polymorphic
but it seems strange to do:
updateTypeCode()
typecode.doSomething()
Does the state pattern apply to this case? Is there an alternative way to get polymorphism without a type code?
While writing my question, I realized that maybe I could just make the type code a function and return a temporary (function-scoped) type code, like:
typecode().doSomething()
This solution would never store the state, which is what I want to avoid. However I am still wondering if my problem started because I am using the wrong pattern.
If you're open to storing the state, maybe think about combining State and Observer to update the state as the dependent objects change (rather than checking on every call). This will only work for certain models, though.
Otherwise you might as well just call object.doSomething() and have the checks inside doSomething(). In this case using design patterns doesn't offer any significant advantage (though if you loosen the definitions of design patterns slightly, many things would count as one). I'd probably go with:
doSomething()
{
    if (someOtherThingIsRunning())
        doOneThing();
    else
        doAnotherThing();
}
The alternative (that you already suggested) is to put the above checks in typecode() and have it return another object that contains the doSomething() method.
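A minimal sketch of that typecode()-as-a-function alternative, with the state object computed on demand and never stored (Mode, RunningMode, IdleMode and OtherThing are invented names):

interface Mode {
    void doSomething();
}

interface OtherThing {
    boolean isRunning();
}

class Widget {
    private final OtherThing otherThing; // the object being watched

    Widget(OtherThing otherThing) {
        this.otherThing = otherThing;
    }

    // The equivalent of typecode(): derive the current state every time it is needed.
    private Mode currentMode() {
        return otherThing.isRunning() ? new RunningMode() : new IdleMode();
    }

    void doSomething() {
        currentMode().doSomething();
    }
}

class RunningMode implements Mode {
    public void doSomething() { /* behaviour while the other thing is running */ }
}

class IdleMode implements Mode {
    public void doSomething() { /* behaviour otherwise */ }
}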

method name for a long method

Good style (the Clean Code book) says that a method's name should describe what the method does. So, for example, if I have a method that verifies an address, stores it in a database, and sends an email, should the name be something such as verifyAddressAndStoreToDatabaseAndSendEmail(address);
or
verifyAddress_StoreToDatabase_SendEmail(address);
Although I can divide that functionality into 3 methods, I'll still need a method to call those 3 methods, so a long method name seems inevitable.
Having "And"-style names certainly describes what the method does, but IMO it's not very readable, since the names can get very, very long. How would you solve this?
EDIT: Maybe I could use a fluent style to decompose the method name, such as:
verifyAddress(address).storeToDatabase().sendEmail();
but I need a way to ensure the order of invocation. Maybe by using the state pattern, but that causes the code to grow.
How I approach this is to create the 3 smaller methods as you mentioned, and then name the higher-level method that calls them after the "why": the reason I need to do those three things.
Try to define why you need to do those steps and use that as the basis of the method name.
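As a hedged sketch of that advice, suppose the reason these three steps happen together is registering a customer; then the orchestrating method is named after that intent (registerCustomer and Address are illustrative names, not from the question):

public class CustomerRegistration {

    // The caller cares about the intent ("register a customer"),
    // not the individual steps that happen to implement it.
    public void registerCustomer(Address address) {
        verifyAddress(address);
        storeToDatabase(address);
        sendEmail(address);
    }

    private void verifyAddress(Address address) { /* ... */ }
    private void storeToDatabase(Address address) { /* ... */ }
    private void sendEmail(Address address) { /* ... */ }
}

// Minimal placeholder so the sketch is self-contained.
record Address(String value) {}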
A single method should not do 3 things. Thus divide the work into 3 methods:
verifyAddress
storeAddress
sendEmail
I'm following up on my previous comment, but I've got more here than would fit reasonably in a comment so I'm answering.
The details of the method belong in the documentation, not in the name of the method (in my opinion). Think of it this way: by putting SendEmail in the name of the method, you're committing implementation details to the method name. What if a decision is made down the road to send notifications via SMS or Twitter or something else instead of email? Do you change the name of the method and break your API, or do you keep a method name that misleads the consumers of the API? Something to consider.
If you insist on keeping the functionality of the method in its name, I'd urge you to find something more generic, perhaps along the lines of VerifySaveAndNotify(Address address). That way, the method name tells you what it's doing without specifying how it does it. The parameter of type Address lets you know what is being verified and saved. All of that works together to make your method name informative, flexible, and terse.
EDIT: Maybe I could use a fluent style to decompose the method name, such as:
verifyAddress(address).storeToDatabase().sendEmail();
but I need a way to ensure the order of invocation. Maybe by using the state pattern, but that causes the code to grow.
To ensure ordering of commands in a fluent style, each result would be an object that exposes only the functionality required by the next step. For example:
public class Verifier
{
    public DataStorer VerifyAddress(string address)
    {
        ...
        return new DataStorer(address);
    }
}

public class DataStorer
{
    public Emailer StoreToDataBase()
    {
        ...
        return new Emailer(...);
    }
}

public class Emailer
{
    public void SendEmail()
    {
        ...
    }
}
This is handy if you need to create a very granular design and want to optimise your classes for reusability, but it is likely to be design overkill in most circumstances. It is probably better, as others have said, to choose a name that represents what the whole process is supposed to achieve. You could simply call it "StoreAndEmail", on the assumption that verification is something you do routinely before committing data to any destination. The alternative, if you don't mind long names, is simply to describe the method in full and accept that a long name is necessary. In the end it really doesn't cost you anything, but it can certainly make your code more specific in its intent.

BDD/TDD: can dependencies be a behavior?

I've been told not to test implementation details. A dependency seems like an implementation detail; however, I could also phrase it as a behavior.
Example: a LinkList depends on a storage engine to store its links (e.g. a LinkStorageInterface). The constructor needs to be passed an implementation of LinkStorageInterface to do its job.
I can't say 'shouldUseLinkStorage', but maybe I can say 'shouldStoreLinksInStorage'.
What is the correct thing to 'test' in this case? Should I test that it stores links in a store (behavior), or not test this at all?
The dependency itself is not an expected behavior, but the actions called on the dependency most certainly are. You should test the things you (the caller) know about, and avoid testing anything that requires intimate knowledge of the inner workings of the SUT.
Expanding your example a little, let's imagine that our LinkStorageInterface has the following definition (pseudo-code):
Interface LinkStorageInterface
    void WriteListToPersistentMedium(LinkList list)
End Interface
Now, since you (the caller) are providing the concrete implementation for that interface it is perfectly reasonable for you to test that WriteListToPersistentMedium() gets called when you invoke the Save() method on your LinkList.
A test might look like this, again using pseudo-code:
void ShouldSaveLinkListToPersistentMedium()
    define da = new MockLinkListStorage()
    define list = new LinkList(da)
    list.Save()
    Assert.Method(da.WriteListToPersistentMedium).WasCalledWith(list)
end method
You have tested expected behavior without testing implementation specific details of either your SUT, or your mock. What you want to avoid testing (mostly) are things like:
Order in which methods were called
Making a method, or property public just so you can check it
Anything that does not directly involve the expected behavior you are testing
Again, a dependency is something that you, as the consumer of the class, are providing, so you expect it to be used; otherwise there would be no point in having that dependency in the first place.
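For a more concrete flavour of the pseudo-code above, here is roughly what that test could look like in Java with Mockito and JUnit 5 (the LinkList and LinkStorageInterface names mirror the example; the exact method names are assumptions):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class LinkListTest {

    @Test
    void shouldSaveLinkListToPersistentMedium() {
        // The mock stands in for any concrete LinkStorageInterface implementation.
        LinkStorageInterface storage = mock(LinkStorageInterface.class);
        LinkList list = new LinkList(storage);

        list.save();

        // The behavior we care about: saving delegates to the storage dependency.
        verify(storage).writeListToPersistentMedium(list);
    }
}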
LinkStorageInterface is not an implementation detail - its name suggests an interface to an engine, in which case the name shouldUseLinkStorage has more value than shouldStoreLinksInStorage.
That's my 2 pennies worth!

Resources