Decoupling a client from a product?

I was reading an article on design patterns in which Sudhakar Kalmari wrote, "Use the Factory Method pattern when there is a need to decouple a client from a particular product that it uses." I don't understand what this means. To my knowledge, coupling is the degree to which a method or class is hardcoded to other dependencies. I am assuming the client he is referring to is whatever method calls the factory, but I don't know what he means when he uses the word 'product'.
Can someone explain this terminology and also explain what he is trying to communicate here?

At the start of the article he lists the distinct components that make up the design pattern, namely the Factory Method, an Abstract Product, a Concrete Product, and the Client.
Use the Factory Method pattern when there is a need to decouple a client from a particular product that it uses. Use the Factory Method to relieve a client of responsibility for creating and configuring instances of a product.
The word "product" is referring to objects that are /produced/ by the Factory and used by the caller (client). The client could just create these objects directly, but it would be responsible for knowing the specific object type and implementation details (the "Concrete Product"), coupling the caller to that code.
To "relieve a client of [that] responsibility" we'll introduce an abstract class or interface (the "Abstract Product") that is implemented by each "Concrete Product". The factory method will return this abstract type, and the client will call the factory method to create the objects it needs. The client is now decoupled from any specific implementation and only needs to know about the abstract type (and its factory).
The factory method is "responsible for creating and configuring instances of a product". It knows about the various implementations of the abstract type, and can create and return any one of those implementations to the client. The caller does not need to care which "concrete product" it receives, only that it's dealing with a given abstract type.
Using a factory method, we can change implementation details (adding or modifying "concrete products" to our factory) without needing to change the implementation of the client itself, and without coupling the client to "a particular product".
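To make the roles concrete, here is a minimal sketch in Java (a simple static-factory variant of the pattern; all names are hypothetical):

// Abstract Product: the only type the client needs to know about.
interface Document {
    void open();
}

// Concrete Products: implementations the client never references directly.
class PdfDocument implements Document {
    public void open() { System.out.println("opening a PDF"); }
}

class WordDocument implements Document {
    public void open() { System.out.println("opening a Word document"); }
}

// The factory method creates and configures a product, but returns the abstract type.
class DocumentFactory {
    static Document create(String type) {
        switch (type) {
            case "pdf":  return new PdfDocument();
            case "word": return new WordDocument();
            default:     throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}

// The Client: coupled only to Document and DocumentFactory.
class Client {
    void openAnything(String type) {
        Document doc = DocumentFactory.create(type);
        doc.open();
    }
}

Adding a new document format now means adding one concrete product and one factory case; the client code never changes.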

Related

What name should gRPC messages have to avoid conflicts with internal classes?

I have a class Book whose information needs to be passed over gRPC.
message Book {
...
}
But if I use this name there will be conflicts between one class and the other. Is there a convention for this? What name should I use for the gRPC equivalents?
Any meaningful and consistent name will be fine. This problem is not specific to protobuf/gRPC. Oftentimes we have an entity class called Book and a corresponding DTO (data transfer object) with more or less the same fields; we append Dto to the entity class name to get BookDto.
These protobuf messages are basically such DTOs, so you can follow the same convention.
You can also keep the Book name and access it via its fully qualified path to avoid the conflict. You probably know this already, and I suspect you do not like it.
Is it really a Book object, though? It might be a BookSearchRequest to query some books, and you might expect a BookSearchResponse from your gRPC service.
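If you do keep the name Book, the qualified-path option looks roughly like this in Java. This is only a sketch: it assumes the generated classes land in a package such as com.example.proto (set via the proto file's java_package option) and that the message has title and author fields.

import com.example.domain.Book;   // your own entity class

public class BookMapper {
    // Refer to the generated message by its fully qualified name to avoid the clash.
    public static com.example.proto.Book toProto(Book book) {
        return com.example.proto.Book.newBuilder()
                .setTitle(book.getTitle())
                .setAuthor(book.getAuthor())
                .build();
    }
}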

What should HandlerInterceptorAdapter have been called?

In Spring MVC, one can define interceptors that can perform work before and after a particular controller is invoked. This can be used, for example, to do logging, authentication etc.
The programmer who wishes to write a custom interceptor is supposed to implement the HandlerInterceptor interface. To aid this task, the HandlerInterceptorAdapter abstract base class is provided, which supplies default implementations of all the methods specified in the interface. So, if one just wants to do some pre-processing, one can extend HandlerInterceptorAdapter and @Override public boolean preHandle(...), without worrying about implementing postHandle.
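For example, a minimal logging interceptor looks something like this:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

// Only preHandle is overridden; postHandle and afterCompletion keep their defaults.
public class LoggingInterceptor extends HandlerInterceptorAdapter {
    @Override
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response,
                             Object handler) throws Exception {
        System.out.println("before controller: " + request.getRequestURI());
        return true;   // returning true lets the request continue to the controller
    }
}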
My doubt concerns the name. From what I understand of the Adapter pattern, it adapts syntactic impedance mismatches between interfaces.
Is that so? If yes, should the class providing the boilerplate implementations be called HandlerInterceptorDefaultImpl, or something along those lines?
Is there a different nomenclature/pattern for what is happening here?
Is the fact that we need a boilerplate class a code smell, and could it be removed by refactoring the HandlerInterceptor interface into two: HandlerPreInterceptor and HandlerPostInterceptor? Or is that overkill?
From the GoF book, about the Adapter pattern:
Adapters vary in the amount of work they do to adapt Adaptee to the Target interface. There is a spectrum of possible work, from simple interface conversion (for example, changing the names of operations) to supporting an entirely different set of operations. The amount of work Adapter does depends on how similar the Target interface is to Adaptee's.
The boilerplate class that you are referring to is called a skeletal implementation class. This is described in Effective Java by Joshua Bloch. From the book:
You can combine the virtues of interfaces and abstract classes by providing an abstract skeletal implementation class to go with each nontrivial interface that you export. The interface still defines the type, but the skeletal implementation takes all of the work out of implementing it.
By convention, skeletal implementations are called AbstractInterface, where Interface is the name of the interface they implement. For example, the Collections Framework provides a skeletal implementation to go along with each main collection interface: AbstractCollection, AbstractSet, AbstractList, and AbstractMap. Arguably it would have made sense to call them SkeletalCollection, SkeletalSet, SkeletalList, and SkeletalMap, but the Abstract convention is now firmly established.
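A tiny sketch of the idea, using a hypothetical interface:

interface Vehicle {
    void start();
    void stop();
    String describe();
}

// Skeletal implementation: supplies the common boilerplate so that
// implementors only fill in what actually varies.
abstract class AbstractVehicle implements Vehicle {
    public void start() { /* common startup logic */ }
    public void stop()  { /* common shutdown logic */ }
    // describe() stays abstract; it is the part that varies.
}

class Car extends AbstractVehicle {
    public String describe() { return "a car"; }
}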

Can a DAO call another DAO?

I have a component which needs to update the database for the customer and the customer address (via JDBC). Is it appropriate to call the CustomerAddressDAO from the CustomerDAO? Or should I create a separate "CustomerDataManager" component which calls them separately?
You can do it, but that doesn't mean you should. In these cases, I like to use a Service (CustomerService in this case) with a method that uses both DAOs. You can define the transaction around the service method, so if one call fails, they both roll back.
The problem with DAOs that call other DAOs is you will quite quickly end up with circular references. Dependency injection becomes much more difficult.
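A sketch of that arrangement, assuming Spring-style declarative transactions and that each DAO exposes an update method (the domain types here are hypothetical stubs so the sketch stands alone):

import org.springframework.transaction.annotation.Transactional;

class Customer {}
class CustomerAddress {}
interface CustomerDAO { void update(Customer customer); }
interface CustomerAddressDAO { void update(CustomerAddress address); }

public class CustomerService {
    private final CustomerDAO customerDAO;
    private final CustomerAddressDAO customerAddressDAO;

    public CustomerService(CustomerDAO customerDAO, CustomerAddressDAO customerAddressDAO) {
        this.customerDAO = customerDAO;
        this.customerAddressDAO = customerAddressDAO;
    }

    // The transaction spans both DAO calls; if either fails, both roll back.
    @Transactional
    public void updateCustomer(Customer customer, CustomerAddress address) {
        customerDAO.update(customer);
        customerAddressDAO.update(address);
    }
}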
Obviously, you can do it in different ways. But to answer this question properly, you should start from your model. In the model, see whether Address is an Entity (something with its own id that is also used independently) or a value type (something that only makes sense in the context of a Customer). Then you will have two cases:
Address is an Entity:
In this case, Address has its own DAO and Customer has its own DAO. Neither DAO should access the other. If there is some logic that needs to manipulate the two, it belongs in your application logic, not in the Data Access Layer.
Address is a value type associated with Customer:
In this case, Address does not have a separate DAO of its own. It is saved/restored as part of the containing Customer object.
Conclusion: if properly designed, DAOs don't access each other (under standard circumstances).
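In the value-type case, the address columns are simply written as part of the customer row. A plain JDBC sketch (the schema, column names, and accessors are all assumptions for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

class Address {
    private final String street, city;
    Address(String street, String city) { this.street = street; this.city = city; }
    String getStreet() { return street; }
    String getCity() { return city; }
}

class Customer {
    private final String name;
    private final Address address;
    Customer(String name, Address address) { this.name = name; this.address = address; }
    String getName() { return name; }
    Address getAddress() { return address; }
}

public class CustomerDAO {
    private final DataSource dataSource;

    public CustomerDAO(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void save(Customer customer) throws SQLException {
        String sql = "INSERT INTO customer (name, street, city) VALUES (?, ?, ?)";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, customer.getName());
            // The Address value object has no identity of its own;
            // it is flattened into the customer row.
            ps.setString(2, customer.getAddress().getStreet());
            ps.setString(3, customer.getAddress().getCity());
            ps.executeUpdate();
        }
    }
}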

Validation with DDD in SOA application using IoC

In my service facade layer, I have a service class with a method/operation that accepts a DTO (data contract) object. AutoMapper is used to map this DTO into an instance of my domain object to apply any changes. The request is then passed on to my domain service, which does the actual work. Here's what the method might look like:
public EntityContract AddEntity(EntityContract requestContract)
{
    var entity = Mapper.Map<EntityContract, Entity>(requestContract);
    var updatedEntity = Service.AddEntity(entity);
    var responseContract = Mapper.Map<Entity, EntityContract>(updatedEntity);
    return responseContract;
}
The Service and Mapper properties are set using constructor injection with Unity as the IoC container.
In performing the operation, the domain service makes changes to the entity then uses a repository to persist the changes like:
public Entity AddEntity(Entity entity)
{
    // Make changes to entity
    Repository.Add(entity);
    // Prepare return value
    return entity;
}
The Repository is also set using constructor injection.
The issue is that the data becomes immediately available to other clients once it has been persisted, so I have to ensure that no invalid data is persisted. I've read the "blue book" of DDD (Evans) as well as Nilsson, and am not clear what approach to validation I should take.
If my goal is to prevent the entity from entering an invalid state, should I validate requestContract in my service method to ensure all rules are satisfied before ever passing the request on to my domain service? I hesitate to do so because it seems like I'm breaking encapsulation by having these rules defined in the service facade.
The reason we are using a thin facade layer delegating to domain services is that we are exposing coarse-grained interfaces in our API while supporting reuse via composition of the fine-grained domain services. Keeping in mind that multiple facade services may be calling the same domain service method, perhaps delegating these rules to the domain service would be better, so we know every use is validated. Or should I validate in both places?
I could also put guards in the property setters that prevent unacceptable values from ever putting the entity into an invalid state. This would mean that AutoMapper would fail when trying to map an invalid value. However, it doesn't help when no value is mapped.
I still can't get past the thinking that these rules are part of the entity's behavior and determining if the object is valid should be encapsulated within the entity. Is this wrong?
So first I need to determine when and where to perform these validation checks. Then I need to figure out how to implement them with DI so the dependencies stay decoupled.
What suggestions can you provide?
I've read the "blue book" of DDD (Evans) as well as Nilsson and am not clear what approach to validation I should take.
The blue book approaches the problem from a different angle. I think the term 'validation' is not used because it is a dangerous overgeneralization. A better approach is to think about object invariants, not validation. Objects (not only in DDD) should enforce their internal invariants themselves. It is not the UI or services or contracts or mappers or 'validation frameworks' or anything else external to the objects; invariants are enforced internally. You may find these answers helpful: 1, 2, 3, 4.
I could also put guards in the property setters that prevent unacceptable values from ever putting the entity into an invalid state. This would mean that AutoMapper would fail when trying to map an invalid value.
You probably should not care about AutoMapper failing, or about using AutoMapper at all. Domain objects should encapsulate and enforce their internal invariants and throw an exception if an attempt is made to break them. It is very simple, and you should not compromise the simplicity and expressiveness of your domain objects because of some infrastructural issue. The goal of DDD is not to satisfy AutoMapper's or any other framework's requirements. If the framework does not work with your domain objects, don't use it.
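A minimal sketch of an entity enforcing its own invariants (shown here in Java with hypothetical names; the idea is the same in any language):

public final class Order {
    private final String customerId;
    private int quantity;

    public Order(String customerId, int quantity) {
        // Invariants are checked at construction: the object can never
        // come into existence in an invalid state.
        if (customerId == null || customerId.isEmpty()) {
            throw new IllegalArgumentException("customerId is required");
        }
        this.customerId = customerId;
        changeQuantity(quantity);
    }

    public void changeQuantity(int quantity) {
        // ...and at every state change.
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        this.quantity = quantity;
    }

    public int getQuantity() { return quantity; }
}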
You have two types of validation:
Object consistency: this is the responsibility of entities. Entities should not allow you to put them into an invalid state; dependencies should be enforced and values should be in range. You have to design your classes' methods, properties, and constructors so that they do not allow invalid state.
Business rules validation: this type of validation requires server-side processing, such as checking id availability, email uniqueness, and so on. These validations should be handled on the server by Validators or Specifications before persistence, as sketched below.
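A sketch of the second kind, as a specification consulted before persistence (again in Java; the repository interface and names are hypothetical):

interface UserRepository {
    boolean existsByEmail(String email);
}

// Business-rule validation that needs infrastructure (here, a uniqueness
// check against the database) lives outside the entity.
class UniqueEmailSpecification {
    private final UserRepository users;

    UniqueEmailSpecification(UserRepository users) {
        this.users = users;
    }

    // Called by the application/domain service before persisting.
    boolean isSatisfiedBy(String email) {
        return !users.existsByEmail(email);
    }
}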

Is it normal to have a long list of arguments in the constructor of a Presenter class?

Warning acronym overload approaching!!! I'm doing TDD and DDD with an MVP passive view pattern and DI. I'm finding myself adding dependency after dependency to the constructor of my presenter class as I write each new test. Most are domain objects. I'm using factories for dependency injection though I will likely be moving to an IoC container eventually.
When using constructor injection (as opposed to property injection), it's easy to see where your dependencies are. A large number of dependencies is usually an indicator that a class has too much responsibility, but in the case of a presenter, I fail to see how to avoid this.
I've thought of wrapping all the domain objects into a single "Domain" class which would act as a middleman, but I have a gut feeling that I'd only be moving the problem instead of fixing it.
Am I missing something or is this unavoidable?
Often a large number of arguments to a method (constructor, function, etc.) is a code smell. It can be hard to understand what all the arguments are. This is especially the case when you have many arguments of the same type: it is very easy to get them confused, which can introduce subtle bugs.
The refactoring is called "Introduce Parameter Object". Whether that's really a domain object or not, it is basically a data transfer object that minimizes the number of parameters passed to a method and gives them a bit more context.
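The refactoring looks roughly like this (all types here are hypothetical stubs, just to show the shape):

interface ReportView {}
interface CustomerRepository {}
interface OrderRepository {}
interface InvoiceRepository {}

// Before: new ReportPresenter(view, customers, orders, invoices) -- four
// loose parameters that are easy to transpose.

// After: the related dependencies travel together, with a bit more context.
class ReportDependencies {
    final CustomerRepository customers;
    final OrderRepository orders;
    final InvoiceRepository invoices;

    ReportDependencies(CustomerRepository customers,
                       OrderRepository orders,
                       InvoiceRepository invoices) {
        this.customers = customers;
        this.orders = orders;
        this.invoices = invoices;
    }
}

class ReportPresenter {
    private final ReportView view;
    private final ReportDependencies deps;

    ReportPresenter(ReportView view, ReportDependencies deps) {
        this.view = view;
        this.deps = deps;
    }
}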
I only use DI in the constructor if I need something to be there from the start. Otherwise I use properties, with lazy loading for the other items. For TDD/DI, as long as you can inject the item when you need it, you don't need to add it to your constructor.
I recommend always following the Law of Demeter and not the DI myth that everything needs to be in the constructor. Misko Hevery (agile coach at Google) describes it well on his blog: http://misko.hevery.com/2008/10/21/dependency-injection-myth-reference-passing/
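A sketch of that style: only the essential collaborator comes through the constructor, while secondary items are injectable properties with a lazy default (names are hypothetical):

interface CustomerView {}
class AuditLog {}

class CustomerPresenter {
    private final CustomerView view;   // essential: injected up front
    private AuditLog auditLog;         // secondary: resolved only when needed

    CustomerPresenter(CustomerView view) {
        this.view = view;
    }

    // Property-style injection point (used by tests or the container).
    void setAuditLog(AuditLog auditLog) {
        this.auditLog = auditLog;
    }

    private AuditLog auditLog() {
        if (auditLog == null) {
            auditLog = new AuditLog();  // lazy default
        }
        return auditLog;
    }
}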
Having a Layer Supertype might not be a bad idea, but I think your code smell might be indicating something else. Geofflane mentioned the refactoring pattern Introduce Parameter Object. While it's a good pattern for this sort of problem, I'm not entirely sure it's the way to go in this situation.
Question: Why are you passing in Domain Model objects to the constructor?
There is such a thing as having too much abstraction. If there's one solid layer of code you should be able to trust, it's your Domain Model. You don't need to reference 3 IEntity objects when you're dealing with Customer, Vendor, and Product classes if those are part of your basic Domain Model and you don't necessarily need polymorphism.
My advice: Pass in application and domain services. Trust your Domain Model.
EDIT:
Re-reading the problem when it's not horribly late at night, I realize your "Domain class" is already the Introduce Parameter Object refactoring and not, in fact, a Layer Supertype, as I thought at 3AM.
I also realize that perhaps you need to reference the Model objects in the application code, outside the Presenter. Perhaps you're doing some initial setup of your Model objects before passing them in to the Presenter. If this is the case, your "Domain class" idea might be best. If there is some initial setup, when moving to an IoC, you'll want to look at something like Factory Support in Castle Windsor. (Other IoC containers have similar concepts.)
