I am creating my first ASP.NET MVC 3 application, and my data comes from a data source I can access only via its REST API.
At this point I will only be using READ-ONLY access to the REST data source (no updating, etc.).
I would like to use the Entity Framework V4 to provide a Business Entity interface to MVC 3 without exposing it to the REST API.
I need to get something working quickly - so I don't have time to fully understand the Service Layer / UnitOfWork and Repository patterns just yet, although I plan to go there next.
I am willing to use a Repository class at this time, but not ready for DI / IoC container yet.
Any suggestions on where the REST API calls should go?
EDIT
I learned by asking this question that it is not necessarily useful to integrate an ORM with a REST API - see the accepted answer below.
An Object/Relational Mapper, or ORM, like Entity Framework has specifically been developed to abstract away a relational database. It might not be the right fit for REST calls.
You could instead build a repository class that encapsulates the REST call and exposes methods like IEnumerable&lt;T&gt; GetAll() or T GetById(...).
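The question is about ASP.NET MVC / C#, but the shape of such a read-only repository is the same in any language. As a minimal sketch (in Java, using the JDK HttpClient and the Jackson ObjectMapper; the Customer type, base URL, and JSON layout are made up for illustration):

    import com.fasterxml.jackson.core.type.TypeReference;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    // Read-only repository that hides the REST API behind GetAll()/GetById()-style methods.
    public class CustomerRepository {

        private static final String BASE_URL = "https://example.com/api/customers"; // assumed endpoint
        private final HttpClient http = HttpClient.newHttpClient();
        private final ObjectMapper mapper = new ObjectMapper();

        public List<Customer> getAll() throws Exception {
            String json = get(BASE_URL);
            return mapper.readValue(json, new TypeReference<List<Customer>>() {});
        }

        public Customer getById(long id) throws Exception {
            String json = get(BASE_URL + "/" + id);
            return mapper.readValue(json, Customer.class);
        }

        private String get(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }
    }

    // Plain data holder; its fields mirror whatever the REST payload actually contains.
    class Customer {
        public long id;
        public String name;
    }

The consuming code only sees GetAll()/GetById()-style methods, so the REST details stay in one place and can later be swapped out without touching the controllers.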
Related
I build REST APIs with Spring. I took a course on Spring Data and Hibernate and found that writing the REST API by hand was the most time-consuming approach.
When I added a new entity to the domain, I went through the following chain of objects:
Entity - domain object
DTO - for transmitting/receiving an object to/from a client
Mapper - to convert between Entity and DTO
Repository - for interacting with the database
RestController - for processing API requests
Service - service class for the object
The approximate chain of my actions was as follows:
RestController processes requests - receives DTO from the client (in case of creation of a new object)
Mapper in controller converts DTO to Entity
Service is called
Service accesses the Repository
Repository returns the result of execution (the created Entity)
Service returns the created Entity to the RestController
RestController returns to the client an object of type ResponseEntity, where I put the body and response code.
As you can see, that is a long chain of actions and a large number of objects.
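To make the amount of ceremony concrete, here is roughly what that chain might look like for a single entity (the Product names are placeholders, the mapper is hand-written to keep the sketch self-contained, and the javax.persistence imports assume a pre-Jakarta Spring Boot):

    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.*;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    class Product {                                   // 1. Entity - domain object
        @Id @GeneratedValue Long id;
        String name;
    }

    class ProductDto {                                // 2. DTO - sent to / received from the client
        public Long id;
        public String name;
    }

    class ProductMapper {                             // 3. Mapper - converts Entity <-> DTO
        Product toEntity(ProductDto dto) { Product p = new Product(); p.name = dto.name; return p; }
        ProductDto toDto(Product p) { ProductDto d = new ProductDto(); d.id = p.id; d.name = p.name; return d; }
    }

    interface ProductRepository extends JpaRepository<Product, Long> {}   // 4. Repository

    @Service
    class ProductService {                            // 5. Service - business logic, talks to the repository
        private final ProductRepository repository;
        ProductService(ProductRepository repository) { this.repository = repository; }
        Product create(Product p) { return repository.save(p); }
    }

    @RestController
    @RequestMapping("/products")
    class ProductController {                         // 6. RestController - handles the API request
        private final ProductService service;
        private final ProductMapper mapper = new ProductMapper();
        ProductController(ProductService service) { this.service = service; }

        @PostMapping
        ResponseEntity<ProductDto> create(@RequestBody ProductDto dto) {
            Product created = service.create(mapper.toEntity(dto));
            return new ResponseEntity<>(mapper.toDto(created), HttpStatus.CREATED);
        }
    }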
But then I found out that if you use Spring Data REST, none of this is needed - the API is supplied by Spring out of the box. In general, you only need to create an Entity and a Repository.
It turns out that for typical CRUD-type operations, I wrote a lot of controllers and their methods in vain.
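For comparison, a minimal Spring Data REST sketch needs only the entity and the repository (this reuses the Product entity from the sketch above and assumes the spring-boot-starter-data-rest dependency is on the classpath):

    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.data.rest.core.annotation.RepositoryRestResource;

    // With spring-boot-starter-data-rest on the classpath, this one interface is
    // enough: Spring Data REST exposes GET/POST /products, GET/PUT/PATCH/DELETE
    // /products/{id}, paging, sorting, and so on, out of the box.
    @RepositoryRestResource(path = "products")
    interface ProductRepository extends JpaRepository<Product, Long> {
    }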
Questions:
When should I use a RestController, and when should I use Spring Data REST?
Is it possible to combine the two approaches for one Entity? It turns out I wasted time writing controllers for simple operations like creating, getting, saving, and deleting - all of that could be moved to Spring Data REST.
Will I be able to implement in Spring Data REST some of the things I did in my RestController? Such as:
Returning an entity property as an ID instead of an object? I mean that some of my entities have properties that are themselves entities, and for these fields I sometimes need to return just their ID instead of the whole nested entity.
Is there any way to control error handling? In the RestController approach I extend ResponseEntityExceptionHandler, so all errors, wherever they occur in my RestControllers, are handled in the same way in one place, and I always know that every error will return roughly the same response structure. (A sketch of such a handler follows this list.)
Validation: so far it has been performed on the DTOs received from the client. Are there any nuances waiting for me in this regard?
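For reference, the kind of centralized handler mentioned in the error-handling question above usually looks something like this (the exception type and the error payload are placeholders):

    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.ControllerAdvice;
    import org.springframework.web.bind.annotation.ExceptionHandler;
    import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

    import java.util.Map;

    // One place that turns exceptions thrown anywhere in the @RestControllers
    // into the same error response structure.
    @ControllerAdvice
    class ApiExceptionHandler extends ResponseEntityExceptionHandler {

        @ExceptionHandler(IllegalArgumentException.class)    // placeholder exception type
        ResponseEntity<Object> handleBadRequest(IllegalArgumentException ex) {
            Map<String, Object> body = Map.of(
                    "error", "BAD_REQUEST",
                    "message", String.valueOf(ex.getMessage()));
            return new ResponseEntity<>(body, HttpStatus.BAD_REQUEST);
        }
    }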
I'm a little stumped on how to move forward. Please give me your recommendations and thoughts on what to use and how.
What Spring Data REST can do for you is scaffold a plain repository into a REST service. It is much faster, and in theory it should be flexible, but in practice it is hard to achieve much more than REST access to your repositories.
In production I've used Spring Data REST as a wrapper around the database - in a service/microservice architecture you sometimes wrap the core DB in such a layer in order to keep the application DB-agnostic. The services then apply the business logic on top of this wrapper and provide the API for the front-end.
On the other hand, Spring Data REST (SDR) is not suitable if you plan to use only the generated endpoints, because you will need to customize the data-fetching and data-manipulation logic in your Repositories/Services. You can combine both: use SDR for the "simple" entities, where you only need basic CRUD over them, and go with the standard approach for the complex entities, where you decouple the entity from the endpoint and apply your custom business logic in the services. The downside of mixing the two strategies is that your app will not be consistent, and some "things" will happen out of the box, which is very confusing for a new developer on the project.
It looks like wasted time and effort to write these classes yourself, but that is only because your app doesn't have a complex database and/or business logic yet.
In short - the "standard" way provides much greater flexibility at the price of writing repetitive code in the beginning.
You have much more control when building the full stack on your own: you use DTOs instead of returning the entity objects, you can combine repositories in your services, and you can put your business logic in the service layer. If you are not doing any of the above (and don't expect to in the near future), there is no need to write all that boilerplate over and over again, and that's when Spring Data REST comes into play.
This is an interesting question.
Spring Data REST provides abstraction and takes most of the implementation into its own hands. This is helpful for small applications where the business logic resides at the repository layer. I would choose it for applications with simple, straightforward business logic.
However, if I need fine-grained control (e.g. transactions, AOP, unit testing, complex business decisions, etc.) at each of the layers you mentioned - which is most often needed for large-scale applications - I would prefer to write each of these layers myself.
There is no rule of thumb here.
I've read the post MVC With Lazy Loading and I still have some questions:
If we use the 4th choice, which part should be in charge of the transformation process - the controller or the service? If we use the controller, is it good to introduce @Transactional on the controller?
I also saw a post that suggested using a different query for each usage, which seems similar to using different JDO fetch groups for different purposes. Is this a good approach?
Thanks.
In today's day and age you can and should expose services as REST controllers and return proper domain objects rather than ModelAndView or other such constructs from Spring MVC.
UPDATE: Clarification: there is NO separate controller class in the approach I am suggesting. Expose the Service as a @Controller. Expose the public methods on the service as REST endpoints and put the transactional, authorization, etc. context annotations on those methods, because from an API point of view this public interface serves all kinds of clients, whether via REST or a direct method call.
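A minimal sketch of that suggestion, with made-up Invoice names: the service class itself carries the @RestController mapping and the @Transactional metadata, so there is no separate controller in front of it.

    import org.springframework.data.repository.CrudRepository;
    import org.springframework.transaction.annotation.Transactional;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    class Invoice {                     // hypothetical domain object
        @Id public Long id;
        public String customerName;
    }

    interface InvoiceRepository extends CrudRepository<Invoice, Long> {}

    // The service IS the REST endpoint: there is no separate controller class.
    // Its public methods define the API, and the transactional (and, if needed,
    // security) metadata sits on those same methods.
    @RestController
    @RequestMapping("/invoices")
    class InvoiceService {

        private final InvoiceRepository repository;

        InvoiceService(InvoiceRepository repository) {
            this.repository = repository;
        }

        @GetMapping("/{id}")
        @Transactional(readOnly = true)
        public Invoice findInvoice(@PathVariable Long id) {
            // Returns the domain object directly, as suggested above (no DTO layer).
            return repository.findById(id).orElseThrow();
        }
    }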
Also if you have dedicated controllers and services you may see some business logic seeping into your controllers in no time.
I would go even further and not use option 4, which essentially leads to the DTO anti-pattern.
Having said that, it still depends on the complexity of the entities. A very complex entity with many associations is going to be a performance hog due to the queries it fires. Yet your MVC views (JSPs) may actually resolve the associations only when needed. (As a side note, in a fully RESTful architecture this is an issue.)
I have a project in which I am using NHibernate and ASP.Net MVC. The application is intended to allow users to track certain data and then produce views of statistics based upon the data entered. The structure of my application thus far looks something like this:
NHibernate Layer: Contains Repository<T> and UnitOfWork classes, as well as entity mapping definitions.
Core/Service Layer: Contains generic EntityService class. At the moment, this simply defines transaction scope via IUnitOfWork and interfaces with IRepository to provide higher-level data access services.
Presentation Layer (MVC Application): Not yet implemented, but contains the usual stuff plus dependency injection.
I have a couple of questions:
Is it poor design to allow my MVC application to handle dependency injection for ALL layers? For example, as well as dependency injection of EntityService instances into controllers, it will handle the dependency injection of IRepository into the EntityService classes. Should the service layer handle this itself, even though this would mean performing dependency injection in two distinct places?
Where should I produce my statistics? This business logic doesn't seem to belong in my service layer, which, at present, only contains entity type definitions and an interface for modifying and accessing entity properties. I have a few thoughts on this, but I'm not sure which I like best:
Keep my service layer as is and create a separate Statistics project - this is completely independent of the entity types for which it will be used, meaning my MVC controllers will have to pass raw numerical information between my business entities and my (presumably static) statistics classes. This is quite a neat separation but potentially means a lot of business logic still remaining in the presentation layer.
Create a Statistics project; however, create a tight coupling between the classes in this project and my business entities. For example, instead of passing a Reading object's values into a method, I will pass the entire object (or define them as extension methods). This will shift business logic out of my MVC app but the tight coupling seems a bit messy.
Keep all of my business logic inside my service layer. Define strongly-typed subclasses of EntityService, so my services contain both entity-specific business methods and data storage methods, while keeping the entity classes themselves as pure data containers. Create a separate Statistics project for any generic statistical processing and call its methods via my derived service classes. My service classes effectively merge business functions with the storage functionality provided by IRepository&lt;T&gt;.
I am leaning toward the third option, but does anyone have any thoughts? Alternative suggestions?
Thanks in advance!
Preliminary observation:
I like the way you described your project; I just didn't get why your data access layer (DAL) is called the NHibernate Layer: it is at odds with the rest, where you (correctly) didn't use a technology name to describe a logical layer. So I suggest you rename it to DAL, and use it to abstract your app from NHibernate.
My opinions about your questions:
Absolutely not. It is good to apply Dependency Injection to all layers. A couple of reasons why:
1.1 Testing: you can mock the DAL interfaces and unit test the Service Layer without the DAL by using another DI config file. In the same way you can mock the Service layer for the Web Controllers layer, and so on.
1.2 Different DAL implementations: suppose you need different DAL implementations (NoSQL, SQL, or LINQ instead of NHibernate, etc.) for different deployments of your project, or to scale in the future. You can do that easily by maintaining different DI config files.
You can have the same layer deployed in different projects, and in the same way you can have a project containing different layers. I think their relationship is orthogonal: a project describes a physical (development-time and run-time) implementation, while layers are logical. So initially I would keep it simple with the third option.
I just don't understand why you say the following regarding this option:
Create a separate Statistics project for any generic statistical processing and call its methods via my derived service classes. My service classes effectively merge business functions with the storage functionality provided by IRepository&lt;T&gt;.
I see Statistics as one or more services, so you can implement it as a namespace with classes inside your Service Layer. As with any other service, you can inject the DAL Repository classes into it. And, as with any other Service/DAL, the Model classes can be shared between the different Services and DAL classes.
StatsService.AverageReadingFor(Person p, DateTime start, DateTime end) sounds good.
There are several implementation options:
Using underlying repository features (for example: SQL avg function)
Using the Observer pattern, which can also be implemented via Dependency Injection
Using Aspect-Oriented Programming; see the relevant Spring.NET chapter as an example.
If you have more than one Service Layer instance (more than one server), then options 2 and 3 must be adapted for out-of-process communication using a messaging system.
Just an update - Regarding my second question, I have decided to define an IStatsService<T> which expects an IEntityService<T> to be passed into its constructor. I'll use this for generic statistical processing of business entities and create further interfaces that implement IStatsService<T> where I need more type-specific information.
Hopefully this will help someone who has been scratching their head about a similar problem!
I need to build a web application that implements functionality related to managing a medical/hospital institution. The application includes functionality to add, update, delete, search, and retrieve patients, medical visits, etc.
So I am confused about which approach to follow inside my ASP.NET MVC web application - should my controller classes be:
Derived from IController to perform the CRUD operations
OR
Derived from ApiController to perform the CRUD operations
Second question: is one approach taking over from the other, and what are the advantages/disadvantages of each approach over the other?
You can absolutely use both. Nothing stops you from using the good old Controller to do CRUD in JSON, and MVC is capable of doing it. However, you do not get all the nice features of Web API, such as content negotiation, message handlers, media type formatters, etc.
If you do not need any of the Web API-specific features, then use MVC. But in the long run, Web API will be a better investment, as I believe its adoption will rocket soon.
If you are going to generate classical HTML pages from your controllers, use the MVC ones. If what you want to return to the client is some kind of serialized data (XML, JSON, ...), use Web API controllers, since these make things like content negotiation easier.
See also Getting started with Web API
Could anyone please give some examples of a possible service? I am going through the book but cannot understand what a service can do. It provides processed data to the controller for the ModelAndView, but it just looks like a Java bean that connects to the database and retrieves results - what else can it be?
A service component is where your DAOs come together and where the business logic lives. You can think of it this way:
DAO - should only load data from db. Nothing more.
Services - can use DAOs to load multiple objects and apply some kind of business logic
Controllers - use services to load objects. They should contain nothing more than simple logic, because complicated logic really belongs in the service. The reason is that if you later want to reuse that logic, you can do so if it's in a service, but not if it's in a controller.
Example:
BookDAO - Loads the book
BookService - loads the books for a person that is logged in
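A rough sketch of that example (the Book and Person classes and the way the logged-in user is passed in are assumptions):

    import org.springframework.stereotype.Repository;
    import org.springframework.stereotype.Service;

    import java.util.List;

    class Book { public String title; }        // placeholder domain classes
    class Person { public String username; }

    @Repository
    class BookDao {
        // Only talks to the database - no business rules here.
        List<Book> findByOwner(Person owner) {
            return List.of();                  // the actual JPA/Hibernate query is elided
        }
    }

    @Service
    class BookService {
        private final BookDao bookDao;

        BookService(BookDao bookDao) {
            this.bookDao = bookDao;
        }

        // Business logic: which books the currently logged-in person gets to see.
        List<Book> booksForLoggedInUser(Person loggedInUser) {
            return bookDao.findByOwner(loggedInUser);
        }
    }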
Finally, I'd like to quote the Grails docs for a clean, concise summary:
As well as the Web layer, Grails defines the notion of a service layer. The Grails team discourages the embedding of core application logic inside controllers, as it does not promote re-use and a clean separation of concerns.
An example of a Service could be an Email Service in a business application (not in an email client). This Service offers other components the functionality (service) of sending emails to notify users about things that happen in the system.
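As a sketch (the interface and method names are made up for illustration):

    // A service in the business-logic sense: other components depend on this
    // interface to notify users without knowing how the mail is actually sent.
    public interface EmailService {
        void sendNotification(String recipientAddress, String subject, String body);
    }

An ordering component, for example, would simply call emailService.sendNotification(...) after an order is placed, leaving the SMTP details to whichever implementation is wired in.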