In MVC, is the ORM the same as the model, or just a way the model can be designed? In other words, the "model" doesn't care how you get data as long as you get it. Or, does "model" imply that I no longer have a bunch of SQL statements in my code, like in code-behind forms? Something else?
Thank you.
No, the ORM is the thing that maps a code-based model to your database and vice versa.
For basic CRUD apps, where your model in code is literally just DTOs that represent the database and you're loading, editing, and saving them, that's how you'd use it. If you do have a "proper" Domain Model, then it's a bit more complex because ideally you'd want to decouple the shape of the Domain Model classes from the shape of the database tables.
To elaborate: you would create your model in code to represent the Domain Model (i.e. the various elements of your problem domain), and build some sort of "memento" classes, pure DTOs that you can convert your Domain Model classes to and from. Then configure an ORM (object relational mapper) to map those memento DTOs to a database, i.e. to generate SQL statements that will update the database based on the model objects you give to it.
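To make that concrete, here is a minimal C# sketch of the memento idea. The Order/OrderMemento names and the comma-joined storage format are invented purely for illustration:

```csharp
using System;
using System.Collections.Generic;

// Domain Model class: carries behavior and enforces invariants.
public class Order
{
    private readonly List<string> _items = new List<string>();

    public void AddItem(string sku)
    {
        if (string.IsNullOrWhiteSpace(sku))
            throw new ArgumentException("SKU is required.", "sku");
        _items.Add(sku);
    }

    // Convert to/from the "memento" DTO that the ORM actually persists.
    public OrderMemento ToMemento()
    {
        return new OrderMemento { Items = string.Join(",", _items) };
    }

    public static Order FromMemento(OrderMemento memento)
    {
        var order = new Order();
        order._items.AddRange(memento.Items.Split(','));
        return order;
    }
}

// Pure DTO shaped like the database table; this is what the ORM maps.
public class OrderMemento
{
    public int Id { get; set; }
    public string Items { get; set; }
}
```

The point of the extra hop is that the Domain Model class can change shape freely while the memento stays aligned with the table.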
I can understand some confusion, because there are tools (LINQ to SQL being one) that actually generate model classes in a designer for you. This isn't pure ORM like NHibernate, where you provide the ORM plain old objects and some mapping configuration, which it uses (often in conjunction with reflection) to automatically generate the SQL statements for the database. You could possibly get away with using EF Code First to map a Domain Model directly to the database, but I think in the end it may get messy as you try to make changes to one or the other.
If you'd like to have a look at a good real-world implementation of MVC with an ORM, have a look at S#arp Architecture, which is based on MS ASP.NET MVC, NHibernate, and the repository pattern.
The model should be decoupled from the backend data store technology as much as possible.
I thought this was a pretty good article that discusses the relationship between data access layers, DTOs, etc. http://msdn.microsoft.com/en-us/magazine/dd263098.aspx
Related
What I see a lot is that people use an Object Relational Mapper (ORM) for doing SQL stuff when working in an MVC environment. But if I really have complex queries, I would like to write the whole query myself. What is the best practice for this kind of situation?
Having an abstraction layer between your model and the database that contains the complex queries
Still using the model, creating specific methods that handle the queries
Or is there any other way that might be better? Please tell me :)
Consider the Single Responsibility Principle. Specifically, the question would be...
"If I put data access logic in my model, what will that mean when I need to change something?"
Any time you need to change business logic, you're also changing the objects which maintain data access logic. So the data access logic also needs to be re-tested. Conversely, any time you need to change data access logic, you're also changing the objects which maintain business logic. So the business logic also needs to be re-tested.
As the logic expands, this becomes more difficult very quickly.
The idea behind the Single Responsibility Principle is to separate the dependencies of different roles which can enact changes to the application. (Keep in mind that "roles" doesn't map 1-to-1 with "people." One person may have multiple roles, but it's still important to separate those roles.) It's a matter of simpler support. If you want to make a change to a database query (say, for performance reasons) which shouldn't have any visible effect on anything else in the system, then there's no reason to be changing objects which contain business logic.
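A minimal C# sketch of what that separation buys you; the invoice scenario and all names here are made up for illustration:

```csharp
using System;

// Data-access role: changes when the query or the schema changes.
public interface IInvoiceRepository
{
    Invoice GetById(int id);
    void Save(Invoice invoice);
}

// Business role: changes when the business rule changes. Tuning the SQL
// inside an IInvoiceRepository implementation never forces this class
// to be touched or re-tested.
public class LateFeeCalculator
{
    public decimal CalculateLateFee(Invoice invoice, DateTime today)
    {
        if (invoice.DueDate >= today)
            return 0m;
        return invoice.Amount * 0.05m; // made-up 5% late-fee rule
    }
}

public class Invoice
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
    public DateTime DueDate { get; set; }
}
```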
1. Having an abstraction layer between your model and the database that contains the complex queries
Yes, you should have a persistence abstraction that sits between storage (database or any other data source) and your business logic. Your business logic should not depend on "where", "how", and even "if" the data is actually stored.
Basically, your code should (at least try to) adhere to SOLID principles, but as @david already pointed out: you are already violating the first one on that list.
Also, you should consider using a service layer which would be responsible for dealing with interaction between the implementation of the domain model and your persistence abstraction (it doesn't matter whether you are using custom-written data mappers or some 3rd party ORM).
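A minimal sketch of such a service layer in C#, assuming an invented IAccountRepository as the persistence abstraction and a trivial money-transfer scenario:

```csharp
using System;

// The service layer coordinates domain objects and the persistence
// abstraction without knowing how (or if) anything is actually stored.
public class TransferService
{
    private readonly IAccountRepository _accounts;

    public TransferService(IAccountRepository accounts)
    {
        _accounts = accounts;
    }

    public void Transfer(int fromId, int toId, decimal amount)
    {
        var from = _accounts.GetById(fromId);
        var to = _accounts.GetById(toId);

        from.Withdraw(amount); // business rules live in the domain objects
        to.Deposit(amount);

        _accounts.Save(from);  // storage details live behind the abstraction
        _accounts.Save(to);
    }
}

public interface IAccountRepository
{
    Account GetById(int id);
    void Save(Account account);
}

public class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;
    }

    public void Deposit(decimal amount)
    {
        Balance += amount;
    }
}
```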
In the article (more like an excerpt, actually), the "MVC model" is actually all three concentric circles together. The domain model is not code. It is actually a term that describes the accumulated knowledge about the project. Most of the domain model gets turned into pieces of code; those pieces are referred to as domain objects.
2. Still using the model, creating specific methods that handle the queries
This would imply an implementation of active record. It is a useful, but mostly misused, pattern for cases when your objects have no (or almost no) business logic. Basically, you should use active record only if all you need are glorified setters and getters that talk to the database.
The active record pattern is a very good choice when you need to quickly prototype something, but it should not be used when you are attempting to implement a fully realized model layer.
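For illustration, a hand-rolled active record in C#: the object is little more than getters and setters plus its own persistence. The Customers table and connection details are invented:

```csharp
using System.Data.SqlClient;

// Active record: the domain object saves itself directly to the database.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    public void Save(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Customers SET Name = @name WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@name", Name);
            cmd.Parameters.AddWithValue("@id", Id);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```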
ORMs in general do not have any specific drawbacks versus using direct SQL to fetch data from the database. ORMs, as the name implies, help keep your relational model (designed using your SQL DDLs or using JPA annotations) and your OO model in sync and help them integrate well together.
When using an ORM, you can write your queries in JPQL, which is object-oriented SQL. So instead of writing queries that manipulate tables, you are writing queries that manipulate objects. You use the relationships between these objects to get your desired result. Now, I understand that sometimes it's easier to just write native SQL, so the JPA specification allows you to run native SQL! This just returns a list of "generic objects" which you can organize any way you like. When you choose to go this route and actually pick a JPA provider, like Hibernate, these providers have extended functionality. So if you do have complex relationships, you can use libraries like Hibernate's Criteria Builder to help you create queries for those complex relationships.
So, if building a large MVC application, it would generally be a good idea to have this abstraction layer in the middle, handling all these relationships. It makes it easier on you, the developer, to just look at the big picture and the business side of the application.
Imho, no. I think even the ORM layer often adds more complexity than needed. Databases have very good and sophisticated mechanisms for high-level data manipulation: triggers, views, constraints, complex keying/indexing, (sub)transactions, stored procedures, and procedural extensions of the query language are normally more than enough for everything.
Because of their structural barriers, ORMs can't provide a real interface to this feature set.
And the common practice is that applications use practically only a NoSQL-style record service out of all of this, and reimplement in unneeded "middleware" what was the mission of the database.
What I would find really interesting is if the feature set of databases got some OO-like interface (see "SQL abstract types") and the client-side logic moved into the application (see "REST"). That would practically eliminate the need for the middle layer.
I have an MVC3 NHibernate/ActiveRecord project. The project is going okay, and I'm getting a bit of use out of my model objects (mostly one giant hierarchy of three or four classes).
My application is analytics based; I store hierarchical data and later slice it up, display it in graphs, etc., so the actual relationship is not that complicated.
So far, I haven't benefited much from ORM; it makes querying easy (ActiveRecord), but I frequently need less information than full objects, and I need to write "hard" queries through complex and multiple selects and iterations over collections -- raw SQL would be much faster and cleaner.
So I'm thinking about ditching ORM in this case, and going back to raw SQL. But I'm not sure how to rearchitect my solution. How should I handle the database tier?
Should I still have one class per model, with static methods to query for objects? Or should I have one class representing the DB?
Should I write my own layer under ActiveRecord (or my own ActiveRecord-like implementation) to keep the existing code more or less sound?
Should I combine ORM methods (like Save/Delete) into my model classes or not?
Should I change my table structure (one table per class with all of the fields)?
Any advice would be appreciated. I'm trying to figure out the best architecture and design to go with.
Many, including myself, think the ActiveRecord pattern is an anti-pattern mainly because it breaks the SRP and doesn't allow POCO objects (tightly coupling your domain to a particular ORM).
In saying that, you can't beat an ORM for simple CRUD stuff, so I would keep some kind of ORM around for that kind of work. Just re-architect your application to use POCO objects and some kind of repository pattern, with your ORM implementation specifics in another project.
As for your "hard" queries, I would consider creating one class per view using a tiny ORM (like Dapper, PetaPoco, or Massive), to query the objects with your own raw sql.
Given an MVC3 app using the ViewModel pattern and the Repository pattern with Entity Framework.
If I have a create and update view each composed of multiple entities, what is the best practice for saving the data?
Should I save the data using an abstracted service layer, which will save the data for each entity with its respective repository, or should I save the data in the repository using a stored procedure?
I'm open to any suggestions or recommendations.
Thanks in advance!
This is one of those cases where a DDD/CQRS approach makes the most sense. Simply put, you have some business objects which model a specific behavior (an aggregate). There is one object in charge, called the Aggregate Root (AR), which has explicit boundaries. When you want to save it, you send the whole AR to the repository, which then saves everything as a transaction.
The workflow
The user sends the data via a view model. The controller will then retrieve the AR from the repository, or create one if it's new. The input data is mapped to the AR, usually via an AR method. If the AR finds that the data, or the result of it, breaks some business rules, then it should throw an exception (we assume that basic validation was already performed automatically by ASP.NET MVC).
If everything is OK, the controller will send the AR to the repo, which will then proceed to map the AR to EF entities and save them, all within a transaction.
This is, in a nutshell, how I'd do it. Of course, I'd actually implement it a bit differently, but the concepts are the same. The important part is to send all the data to the AR, which will know how to handle relationships.
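A compressed C# sketch of that workflow; all names here are invented, and the repo's Save is where the EF mapping and transaction would live:

```csharp
using System;
using System.Web.Mvc;

public class OrdersController : Controller
{
    private readonly IOrderRepository _orders;

    public OrdersController(IOrderRepository orders)
    {
        _orders = orders;
    }

    [HttpPost]
    public ActionResult AddLine(AddLineViewModel vm)
    {
        var order = _orders.GetById(vm.OrderId); // retrieve the AR
        order.AddLine(vm.Sku, vm.Quantity);      // the AR enforces business
                                                 // rules, throwing if one breaks
        _orders.Save(order);                     // repo maps the AR to EF
                                                 // entities inside a transaction
        return RedirectToAction("Details", new { id = vm.OrderId });
    }
}

public class AddLineViewModel
{
    public int OrderId { get; set; }
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

public class Order // the Aggregate Root
{
    public void AddLine(string sku, int quantity)
    {
        if (quantity <= 0)
            throw new InvalidOperationException("Quantity must be positive.");
        // ...update internal state, handle relationships...
    }
}
```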
Important points
Note that I've mentioned EF only after the AR got to the repo. This means the AR has no relation to EF entities; it is completely separate and serves the actual business model. Only after the model is updated do we care about EF, and ONLY within the repo (because EF is an implementation detail of the repo). The repo only transfers (basically maps) AR data to the relevant EF entities and then saves the entities.
It's important to have a very clear distinction between the business (domain) model and the persistence model (EF entities). Don't use EF to handle business rules; use it only to store/retrieve data from the db. EF was made to abstract RDBMS access only; use it as a virtual OOP database.
You've mentioned the ViewModel pattern. I haven't heard of such a pattern; every time you're using MVC you're already using ViewModels. Once again, the trick is NOT to use EF entities as ViewModels. Use 'dumb' view models fitted to the views. Populate the VM via a specialized queries repository which will return VM parts directly. The repo will query EF entities and then return those VM bits, which are simple DTOs. That's because you don't need validation and business rules when showing data.
I think it is good practice to keep the layers, and especially each layer's model, separated. For updating stuff, use complex business objects (the domain model) which will do the hard work, and then only transfer their state to EF (via the repository). For reading stuff, query EF and return simple DTOs fit for the VM.
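A sketch of that read side; MyDbContext stands in for your EF context (assumed to expose an Orders set), and the entity shapes are invented:

```csharp
using System.Linq;

// Read side: project EF entities straight into a dumb DTO for the view.
public class OrderSummaryDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class OrderQueries
{
    private readonly MyDbContext _db; // assumed EF context with an Orders set

    public OrderQueries(MyDbContext db)
    {
        _db = db;
    }

    public OrderSummaryDto GetSummary(int id)
    {
        // No business rules here; just shape data for the view.
        return _db.Orders
            .Where(o => o.Id == id)
            .Select(o => new OrderSummaryDto
            {
                Id = o.Id,
                CustomerName = o.Customer.Name,
                Total = o.Lines.Sum(l => l.Price * l.Quantity)
            })
            .Single();
    }
}
```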
This is what CQRS is really about: don't try to fit different responsibilities (write and read) in a single model.
I have a small-to-medium-sized project to work on, and I wanted to use the new MVC 3 and Razor, but unfortunately I will need to hit an MS SQL 2000 database as well as an MS FoxPro 8 database.
Maybe I am stuck using ADO.NET typed data sets and WebForms? What's the best/easiest way to get typed data sets into a List, or even just make them enumerable so I can use foreach etc. for output?
Would it be better to map each data set row to a POCO?
The datastore you are using has nothing to do with the frontend application. You could perfectly well use ASP.NET MVC 3 with Razor as the frontend and abstract the data access layer into a repository. In the implementation of this repository you could use ADO.NET with data readers that return strongly typed model objects; forget about the legacy DataSets. You could use an ORM such as NHibernate to simplify the conversion between SQL queries and objects. As far as MVC views are concerned, you should use view models which are specific to each view, instead of your model objects coming from the repository. To map between the different object types, you may take a look at AutoMapper.
ADO.NET DataReaders are much faster than DataSets. Then, inside the DataReader loop, load into POCOs. You could also map DataSet rows to POCOs (as you mention).
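A minimal sketch of the DataReader-to-POCO approach; the Products table and its columns are invented for the example:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class Product // hypothetical POCO matching a Products table
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductRepository
{
    private readonly string _connectionString;

    public ProductRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public List<Product> GetAll()
    {
        var products = new List<Product>();
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Products", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) // forward-only, one row at a time
                {
                    products.Add(new Product
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }
        return products; // a List<Product> you can foreach over in views
    }
}
```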
Is there a reason you can't use Entity Framework?
Just wanted to gather different ideas and perspectives as to which layer LINQ should fall into, and why.
LINQ = Language INtegrated Query. These are the query extensions that allow you to query anything from databases to lists/collections to XML. The query language is useful in any layer.
However, a lot of people refer to LINQ to SQL as just "LINQ". In that context, a combined BLL/DAL makes sense when you're using L2S, and that's where you do LINQ queries against your database. That of course does not exclude doing subsequent queries against the results of those same queries in new (LINQ to Objects) queries in higher layers...
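To illustrate the two kinds of queries side by side, here is a sketch using the classic Northwind LINQ to SQL context as a stand-in (the designer would generate NorthwindDataContext and Customer for you):

```csharp
using System.Collections.Generic;
using System.Linq;

public class CustomerService
{
    // DAL: a LINQ to SQL query, materialized with ToList() so the
    // generated SQL runs here and plain objects leave the layer.
    public IList<Customer> GetCustomersIn(string country)
    {
        using (var db = new NorthwindDataContext())
        {
            return db.Customers
                     .Where(c => c.Country == country)
                     .ToList();
        }
    }

    // Higher layer: a LINQ to Objects query over the already-loaded
    // results; no database involved at this point.
    public IEnumerable<string> TopCities(IList<Customer> customers)
    {
        return customers
            .GroupBy(c => c.City)
            .OrderByDescending(g => g.Count())
            .Take(5)
            .Select(g => g.Key);
    }
}
```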
It depends on what you want to do with LINQ. When using LINQ to SQL, I'd recommend the DAL, but LINQ is more than just database access. You can use it to manipulate lists, IEnumerables of business objects, and so on... LINQ itself can be useful everywhere in your application.
I consider your DataContext-derived object to be your DAL itself, and LINQ is just a very flexible interface to it. Hence, I use LINQ queries directly in the business layer.
Both. The DataContext is the DAL and, when using the designer, the auto-generated partial classes that map onto SQL objects (tables, views) can be considered part of your business layer. I implement partial classes that implement some of the partial methods to enforce validation and security as needed. Some business rules don't map directly onto DB objects and are handled via other classes.
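For example, the designer-generated LINQ to SQL entities expose partial-method hooks such as OnValidate. A sketch, assuming a Customer entity with a Name column:

```csharp
using System;
using System.Data.Linq; // ChangeAction lives here

// Extending a designer-generated entity via its OnValidate hook;
// the Customer entity and its Name property are assumed.
public partial class Customer
{
    partial void OnValidate(ChangeAction action)
    {
        if ((action == ChangeAction.Insert || action == ChangeAction.Update)
            && string.IsNullOrEmpty(this.Name))
        {
            throw new InvalidOperationException("Customer name is required.");
        }
    }
}
```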
I think if you are doing LINQ to SQL, you should always do it in your DAL. However, if you are doing LINQ to Objects, where you are just filtering and playing with different objects, you can do that in the BL layer.
I think LINQ should be at the very lowest level (DAL), and I think it should be wrapped in a BLL.
I know a lot of people like to use the partial classes that LINQ to SQL generates for the models, but I think you should have a clear separation of interests (see what I did there?). I think if you're going to have business logic, it needs to be decoupled completely from your data access logic.
I think what makes it tricky is the fact that you can keep chaining those LINQ extension methods anywhere you have a using System.Linq line in your code. Again, though, I think LINQ belongs with the definition and should be at the lowest possible level. It also makes TDD/unit testing much, much easier when you wrap the usage of LINQ in a BLL.
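A small sketch of that wrapping, with invented names; the point is that a test can hand the BLL an in-memory IQueryable instead of a database-backed one:

```csharp
using System.Collections.Generic;
using System.Linq;

// Callers and unit tests see only this interface, never IQueryable.
public interface IClientService
{
    IList<Client> ActiveClients();
}

public class ClientService : IClientService
{
    private readonly IQueryable<Client> _clients; // supplied by the DAL

    public ClientService(IQueryable<Client> clients)
    {
        _clients = clients;
    }

    public IList<Client> ActiveClients()
    {
        // The LINQ chain stays at this lowest level; a test can pass in
        // new List<Client>().AsQueryable() instead of a live data source.
        return _clients.Where(c => c.IsActive)
                       .OrderBy(c => c.Name)
                       .ToList();
    }
}

public class Client
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
}
```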
I use LINQ in the traditional 'data access layer' or in 'data access objects'. This allows modularization of code, keeps data code in one place (vs. cutting and pasting the same code in a few different places), and allows a different front end to be developed with relative ease.
It depends on the architecture of your application, and it makes a huge difference how much the presentation model matches the data model. I agree with separating out business logic operations from the data objects and access methods created by LINQ. I also tend to wrap all data-level operations inside a manager class so I can make the data context an internal class.
I think the point of LINQ is that it replaces your DAL.
The equivalent of your old DAL is all the auto-generated code behind the DBML files, plus anything extra that LINQ can't do, added by you.