I'm developing an application with MVC3 and Entity Framework. I have service layers for two entities that have one similarity.
The two services are DurationService and FieldService. The former handles a list of Days and daysettings. These settings contain information about the timeslots per day (start time, end time, a list of possible break times). The latter service handles a list of Fields and fieldsettings. These fieldsettings are used to determine the field availability.
Both services need to check whether the break times overlap. I coded this for the DurationService but have now noticed that the FieldService requires exactly the same method. I don't want to violate the DRY principle, so my question is: how do I best handle this?
Do I make a static class which both services can call? Do I use some kind of inheritance (even though this method is the only method they will share)?
Looks like architecture is preventing you from doing the obviously right things. Don't let that happen.
Inheritance is probably not the right solution. A static helper class will do. Simple problems require simple solutions.
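To make that concrete, here is a minimal sketch of such a helper, written in Java for brevity (OverlapChecker, TimeSlot and the rule that slots merely touching at an endpoint don't overlap are all assumptions for illustration; the same shape carries over directly to a C# static class):

    import java.time.LocalTime;
    import java.util.List;

    // Hypothetical shared helper: both DurationService and FieldService
    // can call OverlapChecker.hasOverlap with their own break times.
    final class OverlapChecker {
        private OverlapChecker() {} // no instances: it is just a function holder

        static boolean hasOverlap(List<TimeSlot> slots) {
            for (int i = 0; i < slots.size(); i++) {
                for (int j = i + 1; j < slots.size(); j++) {
                    if (slots.get(i).overlaps(slots.get(j))) {
                        return true;
                    }
                }
            }
            return false;
        }
    }

    final class TimeSlot {
        final LocalTime start;
        final LocalTime end;

        TimeSlot(LocalTime start, LocalTime end) {
            this.start = start;
            this.end = end;
        }

        // Two slots overlap when each one starts before the other ends;
        // slots that only touch at an endpoint are not counted as overlapping.
        boolean overlaps(TimeSlot other) {
            return start.isBefore(other.end) && other.start.isBefore(end);
        }
    }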
I guess that traditionally, for a RESTful web service, one would use one type of DTO for POJO/JSON conversion and a separate DTO for database entity/POJO conversion?
Spring Boot is supposed to be more opinionated and easier to use, but would you still use different DTO types for the JSON and database entity representations, or would you convert entity objects directly to JSON?
Let me share my opinion.
First of all, I think your question has nothing to do with Spring Boot. Spring Boot just provides a fancy, lightweight way to start the application and makes it easier to build the app.
But you still have your REST controller there, and from that point on it doesn't differ much from any other type of application.
So what you're actually asking is whether it makes sense to maintain an abstraction of JSON objects, converting them to business logic entity objects and later converting those again to database objects, or whether it's enough to maintain only two levels and ditch the JSON level.
I think the answer is "it depends".
First of all, the general trend these days is simplification, so maybe it's enough to maintain only one level of objects.
There are a lot of advantages of such an approach:
Obviously less code to maintain
Speed of development and testing (fewer POJOs to check, fewer converters to test, and so forth)
Speed of execution - you don't need to waste CPU time on conversion. A fairly obvious implication.
Less obvious: memory consumption. Let's say you work with a big bulk of data returned by your DAO; say it occupies 10 MB of memory (just for the sake of example). Now if you start converting it to business entities, you'll spend yet another 10 MB, and if you then create JSON objects from those, that's 10 MB again. The point is that all these objects may co-exist in memory simultaneously. Of course the GC will probably take care of them if you implemented everything right, but that is a different story.
However there is one drawback of such a simplification.
In one word, I would call it commitment.
There are three types of APIs in the application.
The API you're committed to at the level of the web service - the JSON structure.
The chances are that various clients (not necessarily running on the JVM at all) consume your web service, so they really expect you to provide JSON objects of the given structure.
The API of your business logic. If your business logic layer is complicated enough, you probably have an entire team that develops it, so you usually work at the level of APIs between teams.
The level of the DAO - the same story as the business logic, actually.
So now, what happens if you change the API at one level? Does it mean that all the levels will be broken?
Example
Let's say we don't maintain a "JSON" level. In this case, if we change the API at the level of the business logic, the JSON will also change automatically. The REST framework will happily convert the changed object for us, and the chances are that the user will get different data.
Another example
Let's say your BL layer provides a Person entity that looks like this:
    class Person {
        String firstName;
        String lastName;
        List<Language> languages;
    }

    class Language {
        ...
    }
Now, let's say you have a UI that consumes your REST service, which provides a list of Persons upon request. What if there are two different pages in the UI? One shows only the Persons (in this case it doesn't make sense to provide the list of languages spoken by each person).
The second page, however, should show the full information.
So you'll end up exposing two web services, or complicating the existing one with extra parameters (the more params like this you have, the less it resembles REST :))
Maybe a separation would help a little here? I don't know.
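For illustration, here is a sketch of what that separation could look like (PersonSummaryDto and the mapping method are invented names; nothing here is Spring-specific):

    // Hypothetical DTO for the first page: just the person, no languages.
    class PersonSummaryDto {
        String firstName;
        String lastName;

        static PersonSummaryDto from(Person person) {
            PersonSummaryDto dto = new PersonSummaryDto();
            dto.firstName = person.firstName;
            dto.lastName = person.lastName;
            return dto;
        }
    }

The second page could keep returning the full Person (or a full DTO including the languages), so each endpoint only commits to the structure its clients actually need.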
Bottom line.
I would say that as long as you can live without such a separation, do without it. It can work even for quite big projects, and certainly for small or middle-sized ones.
If you find yourself struggling with fixes and you feel like such a separation would solve the issues - do the separation.
Hope this helps you understand the implications and choose what works for you.
Given:
Spring MVC - Hibernate.
Controller -> Service -> DAO
Now I have a method which retrieves something from the DB, and EVERY TIME it does this, it has to run another method, say "processList" (something like changing some values in the list depending on some screen parameters).
Question:
Which layer do I put this "processList" in? (Controller, Service or DAO? And why?)
I really need some J2EE clarification now. I know that MVC is the same across languages, but I just need to be sure :) If I were doing this in .NET, I would undoubtedly have put it in the service layer.
This really depends on what processList is doing exactly. There is no golden rule. Some rules I try to follow are:
Never make calls between main objects on the same layer.
ManagementServiceImpl should never call NotificationServiceImpl.
Don't make circular dependencies between objects.
This is very similar to the one above.
If you find yourself repeating some logic across multiple main objects, try to restructure the code and extract it into specialized logical classes (this will improve unit testing as well).
E.g. UserUpdateHandler or NotificationDispatcher (these are still owned by the service layer -> no one else is allowed to call them)...
Put the code where it logically belongs.
Don't get distracted by the fact that some class needs to do something. It might not be the right place for the code.
Never write fully generalised code before you need to.
There is a term for this: premature generalisation, a bad practice similar to premature optimisation. Saving a few lines of code now can lead to pulling your hair out in the future.
Always write code which is able to become generalised.
This is not a contradiction of the previous rule. It says: always write with generalisation in mind, but don't bother actually writing the generalised version if it is not needed. Think ahead, but don't necessarily act ahead.
Leave business logic to service layer, data persistence logic to data layer and user interaction logic to presentation layer.
Don't try to parse user input in the service layer; it doesn't belong there, just as computing the final price in an e-shop application doesn't belong in the presentation layer.
Some examples for processList:
Example I - fetching additional relations via Hibernate#initialize
This is something which really sits between the service and DAO layers. On older projects we had a specialized FetchHandler class (owned by the service layer); in newer projects we leave this completely to the DAOs.
Example II - go through the list and add business validation messages to the result
service layer, no doubt (see the sketch after these examples)
Example III - go through the list and prepare UI messages based on the validation errors
presentation layer
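To illustrate Example II, here is a rough sketch of such a specialized, service-owned class (Order, OrderDao and the validation rule are all invented for the example):

    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;

    class Order {
        private final long id;
        private final BigDecimal total;

        Order(long id, BigDecimal total) { this.id = id; this.total = total; }
        long getId() { return id; }
        BigDecimal getTotal() { return total; }
    }

    interface OrderDao {                          // data layer
        List<Order> findAll();
    }

    // Specialized logical class, owned by the service layer;
    // no one outside the service layer is allowed to call it.
    class OrderValidator {
        List<String> validate(List<Order> orders) {
            List<String> messages = new ArrayList<>();
            for (Order order : orders) {
                if (order.getTotal().signum() < 0) {
                    messages.add("Order " + order.getId() + " has a negative total");
                }
            }
            return messages;
        }
    }

    class OrderServiceImpl {                      // service layer
        private final OrderDao dao;
        private final OrderValidator validator = new OrderValidator();

        OrderServiceImpl(OrderDao dao) { this.dao = dao; }

        // The "processList" step: fetch via the DAO, then apply business validation.
        List<String> processList() {
            return validator.validate(dao.findAll());
        }
    }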
Side note:
MVC is a different thing from three-layered architecture.
The M (model) spans all three layers. The presentation layer includes both the V (views) and the C (controllers).
The model of the domain consists of my entities used as POCOs, which means no base class, no interfaces, and no attributes.
So the business logic, like validation rules, must live outside of the entities (an anemic domain model).
Would this comply with Domain Driven Design?
No. Not really.
The main aim of domain-driven design is to capture and encapsulate the business domain in the model as explicitly as possible. A business always contains behavior, therefore your objects are supposed to have behavior too.
The model of the domain consists of my entities used as POCOs, which means no base class, no interfaces, and no attributes.
...and no C#, and no .NET CLR should be used - that's infrastructure, right? ;)
Those are tools to express your model. You should try to keep the noise level down and separate your model, but you won't be able to run away completely, because it is a model of real life expressed in a programming language and the technology around it.
By the way, you might want to investigate the idea of never allowing a domain object to be in an invalid state. And if it feels like a particular kind of validation does not relate to the business, it is not supposed to be in the domain model in the first place.
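A minimal sketch of that idea (the class and its rules are invented): the constructor simply refuses to produce an invalid instance, so this kind of validation lives inside the domain object rather than outside it.

    // An object that can never exist in an invalid state: the business
    // rules are enforced at construction time. Names are illustrative only.
    final class OrderLine {
        private final String product;
        private final int quantity;

        OrderLine(String product, int quantity) {
            if (product == null || product.isEmpty()) {
                throw new IllegalArgumentException("product is required");
            }
            if (quantity <= 0) {
                throw new IllegalArgumentException("quantity must be positive");
            }
            this.product = product;
            this.quantity = quantity;
        }
    }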
That's a really philosophical question. I really want to give an equally philosophical answer, so here goes:
As I understand domain-driven design, the most important thing is that whoever knows something does things with that knowledge. I believe this is intertwined with this article.
With this in mind, your plain old objects should have the means of performing their life-or-death important tasks (which makes your solution wrong).
However, another way of looking at it would be that these plain old objects are the tiniest available sets of data, almost like primitives. What happens then is that the objects owning these data objects are the actual model objects within the domain-driven design, and they don't have to correlate perfectly (which would make your solution correct).
This could easily happen if the model and the data layer are designed by two completely independent designers, or if one person is capable of switching hats. I'm thinking this could be a good thing, though! Let me give an example:
A forum
What do we need? We need users, boards, threads, and posts. The last three all have one-to-many relationships in the data layer: one board has many threads, and one thread has many posts. One user also has many posts, and one user starts many threads (this could be derived by finding the author of the first post in a thread, so it might not have to be stored in the data layer). But what is going on in the presentation layer?
When viewing a board, we will want to see all available threads on that board, but we won't be satisfied with seeing only the name of each thread and the name of the user who started it. We also want to see the number of posts in each thread, plus the name of the last poster and the time of that posting.
We are now looking at a model object which is somewhat out of sync with the data layer. It will contain the business logic to calculate the needed data from the given data objects, and it will then be able to load some sort of view with the data that the view wants. No getters or setters will be needed in the model, so encapsulation is never broken. The model object conforms to the domain, which should depend on the usability demands, not the limitations of data storage. The data objects conform to the old data-storing style.
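As a sketch (all names invented, loosely following the forum example), such a model object might derive what the board page needs from the underlying data objects:

    import java.time.Instant;
    import java.util.Comparator;
    import java.util.List;

    class Post {                                   // data object
        final String author;
        final Instant postedAt;

        Post(String author, Instant postedAt) {
            this.author = author;
            this.postedAt = postedAt;
        }
    }

    class ForumThread {                            // data object
        final String title;
        final List<Post> posts;

        ForumThread(String title, List<Post> posts) {
            this.title = title;
            this.posts = posts;
        }
    }

    // Model object for the board page: out of sync with the data layer on
    // purpose, deriving everything the view asks for from the data objects.
    class ThreadSummary {
        private final ForumThread thread;

        ThreadSummary(ForumThread thread) { this.thread = thread; }

        String title()   { return thread.title; }
        String starter() { return thread.posts.get(0).author; } // author of the first post
        int postCount()  { return thread.posts.size(); }

        Post lastPost() {
            return thread.posts.stream()
                    .max(Comparator.comparing(p -> p.postedAt))
                    .orElseThrow();
        }
    }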
This would give us a data abstraction layer (with the POCOs), MVC, and domain-driven design. Win? :)
I am currently refactoring code that coordinates multiple hardware components for data acquisition, and I'm feeling a bit like I'm reinventing the wheel. In particular, an MVC-like pattern seems to be emerging. Except this has nothing to do with a GUI, and I'm worried that I'm forcing this particular pattern where another might be more appropriate. Here's my scenario:
Individual hardware "component" classes obey interface contracts for each hardware type. Previously, component instances were orchestrated by a single monolithic InstrumentController class, which relied heavily on configuration + branching logic for executing a specific acquisition sequence. After an iteration, I have a separate controller for each component, with these controllers all managed by a small InstrumentControllerBase (or its derivatives). The composite system will receive "input" either programmatically or via inter-hardware component triggering - in either case these interactions are routed to, and handled by, the appropriate controller.
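For what it's worth, here is a miniature sketch of the arrangement described above (every name besides InstrumentControllerBase is invented, and even that is heavily simplified):

    import java.util.HashMap;
    import java.util.Map;

    interface HardwareComponent {               // interface contract per hardware type
        void execute(String command);
    }

    abstract class InstrumentControllerBase {   // one controller per component
        protected final HardwareComponent component;

        protected InstrumentControllerBase(HardwareComponent component) {
            this.component = component;
        }

        // Input arrives programmatically or via inter-hardware triggering;
        // either way it is routed here by the composite below.
        abstract void handle(String input);
    }

    class CameraController extends InstrumentControllerBase {
        CameraController(HardwareComponent camera) { super(camera); }

        @Override
        void handle(String input) { component.execute("acquire:" + input); }
    }

    class CompositeInstrument {
        private final Map<String, InstrumentControllerBase> controllers = new HashMap<>();

        void register(String componentId, InstrumentControllerBase controller) {
            controllers.put(componentId, controller);
        }

        // Route each interaction to the controller that owns the component.
        void route(String componentId, String input) {
            controllers.get(componentId).handle(input);
        }
    }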
So, I have something that feels MVC-esque, but I don't know if that's because I'm forcing the point. With little direct MVC experience in application development, it's hard to know if I'm just trying to make my scenario fit MVC where another pattern might be a good alternative or complement. My problem is that search results and wiki documentation for this family of patterns seem to drop me immediately into GUI-specific discussions.
I understand "M means Model data and the V means View" - but what do you call the superset pattern? Component-Commander-Controller?
Whence can I exhume examples exemplary?
IMO a "view" is not necessarily a GUI component. The pattern is easiest to demonstrate with GUIs but that does not limit its usability to GUIs. If it works for you, don't worry about the name :-) And of course, feel free to tailor it according to your needs.
Update: of the more generic kin of MVC, the only example which surfaced in my mind (after a day's background processing) is PAC.
I'm drinking the Kool-Aid and loving it - interfaces, IoC, DI, TDD, etc. It's working out pretty well. But I'm finding I have to fight the tendency to make everything an interface! I have a factory which is an interface. Its methods return objects which could be interfaces (that might make testing easier). Those objects have interfaces to the services they need injected via DI. What I'm finding is that keeping the interfaces in sync with the implementations adds to the work - adding a method to a class means adding it to the class, the interface, the mocks, etc.
Am I factoring the interfaces out too early? Are there best practices to know when something should return an interface vs. an object?
Interfaces are useful when you want to mock an interaction between an object and one of its collaborators. However there is less value in an interface for an object which has internal state.
For example, say I have a service which talks to a repository in order to extract some domain object in order to manipulate it in some way.
There is definite design value in extracting an interface from the repository. My concrete implementation of the repository may well be strongly linked to NHibernate or ActiveRecord. By linking my service to the interface I get a clean separation from this implementation detail. It just so happens that I can also write super fast standalone unit tests for my service now that I can hand it a mock IRepository.
Considering the domain object which came back from the repository and which my service acts upon, there is less value. When I write tests for my service, I will want to use a real domain object and check its state. E.g. after the call to service.AddSomething(), I want to check that something was added to the domain object. I can test this by simple inspection of the state of the domain object. When I test my domain object in isolation, I don't need interfaces, as I am only going to perform operations on the object and quiz it on its internal state. E.g. is it valid for my sheep to eat grass if it is sleeping?
In the first case, we are interested in interaction based testing. Interfaces help because we want to intercept the calls passing between the object under test and its collaborators with mocks. In the second case we are interested in state based testing. Interfaces don't help here. Try to be conscious of whether you are testing state or interactions and let that influence your interface or no interface decision.
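To make the contrast concrete, here is a small sketch (invented names; a hand-rolled in-memory fake stands in for a mocking framework):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    interface SheepRepository {          // worth an interface: we fake/mock this
        Sheep findById(long id);
    }

    class Sheep {                        // domain object: a concrete class is fine
        private boolean sleeping;
        private final List<String> eaten = new ArrayList<>();

        void sleep() { sleeping = true; }

        void eat(String food) {          // the business rule under test
            if (sleeping) {
                throw new IllegalStateException("a sleeping sheep cannot eat");
            }
            eaten.add(food);
        }

        List<String> eaten() { return eaten; }
    }

    class FeedingService {
        private final SheepRepository repository;

        FeedingService(SheepRepository repository) { this.repository = repository; }

        void feed(long id, String food) {
            repository.findById(id).eat(food);   // the interaction we can intercept
        }
    }

    class FeedingServiceTest {
        // Interaction-based: the in-memory fake replaces the real repository.
        // State-based: the test then inspects the real Sheep's state directly.
        void feedingAnAwakeSheepRecordsTheMeal() {
            Sheep dolly = new Sheep();
            Map<Long, Sheep> flock = new HashMap<>();
            flock.put(1L, dolly);
            FeedingService service = new FeedingService(flock::get);

            service.feed(1L, "grass");

            if (!dolly.eaten().contains("grass")) {
                throw new AssertionError("expected the meal to be recorded");
            }
        }
    }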
Remember that (providing you have a copy of Resharper installed) it is extremely cheap to extract an interface later. It is also cheap to delete the interface and revert to a simpler class hierarchy if you decide that you didn't need that interface after all. My advice would be to start without interfaces and extract them on demand when you find that you want to mock the interaction.
When you bring IoC into the picture, then I would tend to extract more interfaces - but try to keep a lid on how many classes you shove into your IoC container. In general, you want to keep these restricted to largely stateless service objects.
Sounds like you're suffering a little from BDUF.
Take it easy with the Kool-Aid and let it flow naturally.
Remember that while flexibility is a worthy objective, added flexibility with IoC and DI (which to some extent are requirements for TDD) also increases complexity. The only point of flexibility is to make changes downstream quicker, cheaper or better. Each IoC/DI point increases complexity, and thus contributes to making changes elsewhere more difficult.
This is actually where you need a Big Design Up Front to some extent: identify what areas are most likely to change (and/or need extensive unit testing), and plan for flexibility there. Refactor to eliminate flexibility where changes are unlikely.
Now, I'm not saying that you can guess where flexibility will be needed with any kind of accuracy. You'll be wrong. But it's likely that you'll get something right. Where you later find you don't need flexibility, it can be factored out in maintenance. Where you need it, it can be factored in when adding features.
Now, which areas may or may not change depends on your business problem and IT environment. Here are some recurring areas:
I'd always consider external interfaces where you integrate with other systems to be highly mutable.
Whatever code provides a back end to the user interface will need to support change in the UI. However, plan primarily for changes in functionality: don't go overboard and plan for different UI technologies (such as supporting both a smart client and a web application – usage patterns will differ too much).
On the other hand, coding for portability to different databases and platforms is usually a waste of time, at least in corporate environments. Ask around and check what plans may exist to replace or upgrade technologies within the likely lifespan of your software.
Changes to data content and formats are a tricky business: while data will occasionally change, most designs I've seen handle such changes poorly, and thus you get concrete entity classes used directly.
But only you can make the judgement of what might or should not change.
I usually find that I want interfaces for "services" - whereas types which are primarily about "data" can be concrete classes. For instance, I'd have an Authenticator interface, but a Contact class. Of course, it's not always that clear-cut, but it's an initial rule of thumb.
I do feel your pain though - it's a little bit like going back to the dark days of .h and .c files...
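A tiny sketch of that rule of thumb (the members are invented):

    // "Service": behaviour that might have multiple implementations -> interface.
    interface Authenticator {
        boolean authenticate(String username, String password);
    }

    // "Data": mostly state with one obvious implementation -> concrete class.
    class Contact {
        String name;
        String email;
    }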
I think the most important "agile" principle is YAGNI ("You Ain't Gonna Need It"). In other words, don't write extra code until it's actually needed, because if you write it in advance the requirements and constraints may well have changed when (if!) you finally do need it.
Interfaces, dependency injections, etc. - all this stuff adds complexity to your code, making it harder to understand and change. My rule of thumb is to keep things as simple as possible (but no simpler) and to not add complexity unless it gains me more than enough to offset the burden it imposes.
So if you are actually testing and having a mock object would be very useful then by all means define an interface that both your mock and real classes implement. But don't create a bunch of interfaces on the purely hypothetical grounds that it might be useful at some point, or that it is "correct" OO design.
Interfaces have the purpose of establishing a contract and are particularly useful when you want to change the class performing a task on the fly. If there is no need to change the classes an interface just might get in the way.
There are automatic tools to extract an interface, so maybe you'd better delay the interface extraction until later in the process.
It depends very much on what you are providing... if you are working on internal things then the advice of "don't do them until needed" is reasonable. If, however, you are making an API that is to be consumed by other developers then changing things around to interfaces at a later date can be annoying.
A good rule of thumb is to make interfaces out of anything that needs to be subclassed. This is not an "always make an interface in that case" sort of thing; you still need to think about it.
So, the short answer is (and this works with both internal things and providing an API) is that if you anticipate more than one implementation is going to be needed then make it an interface.
Some things that generally would not be interfaces are classes that only hold data, like, say, a Location class that deals with x and y. The odds of there being another implementation of that are slim.
Do not create the interfaces first - you ain't gonna need them. You cannot guess which classes you'll need an interface for and which you won't. Therefore don't spend any time burdening the code with useless interfaces now.
But extract them when you feel the urge to do so - when you see the need for an interface - at the refactoring step.