How closely does your data model map to your UI and domain model?
The data model can be quite close to the domain model if it has, for example, a Customer table, an Employee table etc.
The UI might not reflect the data model so closely, though - for example, there may be multiple forms, all feeding in bits and pieces of Customer data along with other miscellaneous data. In this case, one could have separate tables to hold the data from each form; as required, the data can then be combined at a future point. Alternatively, one could insert the form data directly into a Customer table, so that the data model does not correlate well with the UI.
What has proven to work better for you?
I find it cleaner to map your domain model to the real world problem you are trying to solve.
You can then create viewmodels which act as a bucket of all the data required by your view.
As stated, your UI can change frequently, but this does not usually change the particular domain problem you are tackling.
Information on this pattern can be found here:
http://blogs.msdn.com/dphill/archive/2009/01/31/the-viewmodel-pattern.aspx
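A minimal sketch of that idea (the class and field names here are invented for illustration, not taken from the linked article): the domain object models the real-world problem, and the viewmodel is just a bucket of display-ready data for one view.

```java
// Domain object: models the real-world problem, knows nothing about any UI.
class Customer {
    private final String firstName;
    private final String lastName;

    Customer(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    String getFirstName() { return firstName; }
    String getLastName() { return lastName; }
}

// Viewmodel: a bucket of exactly the data one view needs, already in
// display form. The view binds to this, so the domain model stays
// untouched when the screen layout changes.
class CustomerViewModel {
    private final String displayName;

    CustomerViewModel(Customer customer) {
        this.displayName = customer.getFirstName() + " " + customer.getLastName();
    }

    String getDisplayName() { return displayName; }
}
```

If a second screen needs the same Customer sliced differently, it gets its own viewmodel; the Customer class itself never changes for UI reasons.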
UI can change according to many needs, so it's generally better to keep data in a domain model, abstracted away from any one UI.
If I have a RESTful service layer, what it exposes is the domain model. In that case, the UI (any particular screen) calls a number of these services and composes the screen from the domain models it collects. In this scenario, although domain models bubble all the way up to the UI, the UI layer skims out the data it needs to build its particular screen. There are also some interesting questions on SO about using (annotated) domain models for persistence.
My point here is that domain models can be a single source of truth. They can do the work of carrying data and encapsulating logic fairly well. I have worked on projects which had a lot of boilerplate code translating each domain model into DTOs, VOs, DOs, and what-have-yous. A lot of that looked quite unnecessary, and more due to habit in most cases.
Related
I have gone through a lot of links and information about Event Sourcing and CQRS as well, but I still don't understand when they are properly needed. What I could deduce is that they definitely bring complexity and scalability issues.
http://eventuate.io/whyeventsourcing.html
https://martinfowler.com/eaaDev/EventSourcing.html
For me, the main benefit is an ability to refactor your domain model over time.
With an ORM you very often end up with a database structure that is not easy to change. After several years, the cost of changing the database structure and migrating the data can be prohibitive.
With Event Sourcing, your read models are calculated from event stream. You just create a new projection function and have a new database (your read model).
And there are a lot of other benefits, explained in Greg Young's classic presentation.
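As a sketch of that projection idea (the event and class names here are made up for illustration, not from any framework): the event stream is the source of truth, and a read model is just a fold over it, so adding a new read model means writing a new projection function and replaying the stream.

```java
import java.util.List;

// Hypothetical events for a bank account; in a real system these would be
// loaded from the event store rather than constructed in memory.
abstract class AccountEvent {}

class Deposited extends AccountEvent {
    final int amount;
    Deposited(int amount) { this.amount = amount; }
}

class Withdrawn extends AccountEvent {
    final int amount;
    Withdrawn(int amount) { this.amount = amount; }
}

class BalanceProjection {
    // A read model is a fold over the event stream. A different read model
    // (say, a monthly statement) is just a different projection function
    // replayed over the same events - no schema migration required.
    static int project(List<AccountEvent> stream) {
        int balance = 0;
        for (AccountEvent e : stream) {
            if (e instanceof Deposited) balance += ((Deposited) e).amount;
            else if (e instanceof Withdrawn) balance -= ((Withdrawn) e).amount;
        }
        return balance;
    }
}
```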
Requirements for event sourcing tend to look like requirements for a source control system:
we need the capability to recover data that was overwritten
we need to be able to support temporal queries, meaning that we need to be able to answer questions about how the domain model looked in the past.
A dual way of thinking about it:
we need the ability to audit every change in the model, but the model is too complicated to backup the entire data set after every change
In other words, what business value could you add if your data model was stored in git's object database?
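A hedged sketch of the temporal-query point (the event shape here is invented for illustration): because the log of changes is the source of truth, "how did the model look after the first N changes?" is just a replay of a prefix of the log, much like checking out an old commit.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class TemporalQuery {
    // Each event sets one field of the model: {fieldName, newValue}.
    // Replaying the full list gives current state; replaying a prefix
    // answers "what did the model look like back then?".
    static Map<String, String> stateAfter(List<String[]> events, int count) {
        Map<String, String> state = new HashMap<>();
        for (int i = 0; i < count && i < events.size(); i++) {
            state.put(events.get(i)[0], events.get(i)[1]);
        }
        return state;
    }
}
```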
What I see a lot is that people use an Object Relational Mapper (ORM) for doing SQL stuff when working in an MVC environment. But if I really have complex queries, I would like to write the whole query myself. What is the best practice for this kind of situation?
Having an abstraction layer between your model and the database with the complex queries
Still using the model, creating specific methods that handle the queries
Or is there any other way that might be better? Please tell me :)
Consider the Single Responsibility Principle. Specifically, the question would be...
"If I put data access logic in my model, what will that mean when I need to change something?"
Any time you need to change business logic, you're also changing the objects which maintain data access logic. So the data access logic also needs to be re-tested. Conversely, any time you need to change data access logic, you're also changing the objects which maintain business logic. So the business logic also needs to be re-tested.
As the logic expands, this becomes more difficult very quickly.
The idea behind the Single Responsibility Principle is to separate the dependencies of different roles which can enact changes to the application. (Keep in mind that "roles" doesn't map 1-to-1 with "people." One person may have multiple roles, but it's still important to separate those roles.) It's a matter of simpler support. If you want to make a change to a database query (say, for performance reasons) which shouldn't have any visible effect on anything else in the system, then there's no reason to be changing objects which contain business logic.
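To make that separation concrete, here is a hedged sketch (the interface and class names are invented): the business rule depends only on an abstraction, so the query behind it can be rewritten for performance without touching, or re-testing, the business logic.

```java
// Data access hides behind an interface; how the total is computed
// (one SQL query, a cache, a stored procedure) is invisible here.
interface OrderRepository {
    int totalSpentByCustomer(String customerId);
}

// Business logic: owns the discount rule, contains no data access code.
class DiscountService {
    private final OrderRepository orders;

    DiscountService(OrderRepository orders) { this.orders = orders; }

    // Changing this rule never touches SQL; tuning the SQL never touches
    // this rule. Each role's changes stay inside its own object.
    int discountPercent(String customerId) {
        return orders.totalSpentByCustomer(customerId) > 1000 ? 10 : 0;
    }
}
```

Because `OrderRepository` has a single method, a lambda can stand in for the real query in tests, which is exactly the "re-test only what changed" benefit the principle promises.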
1. Having an abstraction layer between your model and the database with the complex queries
Yes, you should have a persistence abstraction that sits between storage (a database or any other data source) and your business logic. Your business logic should not depend on "where", "how", or even "if" the data is actually stored.
Basically, your code should (at least try to) adhere to the SOLID principles, but as @david already pointed out, you are already violating the first one on that list.
Also, you should consider using a service layer, which would be responsible for dealing with the interaction between your domain model implementation and your persistence abstraction (it doesn't matter whether you are using custom-written data mappers or some 3rd-party ORM).
In the article (more like an excerpt, actually), the "MVC model" is actually all three concentric circles together. The domain model is not code. It is actually a term that describes the accumulated knowledge about the project. Most of the domain model gets turned into pieces of code. Those pieces are referred to as domain objects.
2. Still using the model, creating specific methods that handle the queries
This would imply an implementation of active record. It is a useful, but mostly misused, pattern for cases when your objects have no (or almost no) business logic. Basically, you should use active record only if all you need are glorified setters and getters that talk to the database.
The active record pattern is a very good choice when you need to quickly prototype something, but it should not be used when you are attempting to implement a fully realized model layer.
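A hypothetical active record sketch to show what "glorified setters and getters that talk to the database" means (an in-memory map stands in for the real table so the example runs without a database):

```java
import java.util.HashMap;
import java.util.Map;

class UserRecord {
    // Stand-in for a real table; a real active record would hold a DB connection.
    static final Map<Integer, String> TABLE = new HashMap<>();

    int id;
    String name;

    // In a real active record this would issue INSERT/UPDATE.
    void save() { TABLE.put(id, name); }

    // In a real active record this would issue SELECT ... WHERE id = ?.
    static UserRecord find(int id) {
        UserRecord u = new UserRecord();
        u.id = id;
        u.name = TABLE.get(id);
        return u;
    }
}
```

Fine for a prototype, but as soon as `UserRecord` grows real business rules, persistence and logic are tangled in one class, which is exactly the misuse warned about above.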
ORMs in general do not have any specific drawbacks versus using direct SQL to fetch data from the database. ORMs, as the name implies, help keep your relational model (designed using your SQL DDLs or JPA annotations) and your OO model in sync, and help them integrate well together.
When using an ORM, you can write your queries in JPQL, which is object-oriented SQL. So instead of writing queries that manipulate tables, you are writing queries that manipulate objects, and you use the relationships between these objects to get your desired result. Now, I understand that sometimes it's easier to just write native SQL, so the JPA specification allows you to run native SQL! This just returns you a list of generic objects which you can organize any way you like. When you choose to go this route and actually pick a JPA provider, like Hibernate, these providers have extended functionality. So if you do have complex relationships, you can use facilities like Hibernate's Criteria API to help you build queries for those complex relationships.
So, if building a large MVC application, it is generally a good idea to have this abstraction layer in the middle, handling all these relationships. It makes it easier for you, the developer, to just look at the big picture and the business side of the application.
Imho, no. I think even the ORM layer often adds more complexity than needed. Databases have very good and sophisticated mechanisms for high-level data manipulation. Triggers, views, constraints, complex keys and indexes, (sub)transactions, stored procedures, and procedural extensions of the query language are normally more than enough for everything.
Because of their structural barriers, ORMs can't provide a real interface to this feature set.
And the common practice is that applications use practically only a NoSQL-style record service out of all of this, and reimplement in an unneeded "middleware" layer what was really the mission of the database.
What I would find really interesting is if the feature set of databases got some OO-like interface (see "SQL abstract types") and the client-side logic moved into the application (see "REST"). That would practically eliminate the need for the middle layer.
I'm not sure if I stated my question clearly, but I have two separate pages and a single view model. Originally I only had one page, but I decided to split it up because my pages were getting too large (more specifically, I had too many pivot items on a single page, where two pages would separate the data better for the user). I was wondering if it is possible to load only specific data into a single view from the view model, because as it is right now my application is freezing: my view model attempts to load all the data even though only about half of it is needed on the page the user is currently viewing. If so, I'm assuming I would somehow need to let the view model know which data to load; how would I accomplish this? OR, is it good practice to create two separate view models, one for each page, so that only the necessary data for each page loads and my application stops freezing? I am not sure what the standard is here, or what is most efficient in terms of CPU usage, response times, etc.
Loading more data than you need can definitely be a problem, especially if you're doing it over the Internet. Why do it like that? Why not simply separate the viewmodel into two parts? The definition of a VM basically says (quote from Model-View-ViewModel (MVVM) Explained):
The viewmodel is a key piece of the triad because it introduces Presentation Separation, or the concept of keeping the nuances of the view separate from the model. Instead of making the model aware of the user's view of a date, so that it converts the date to the display format, the model simply holds the data, the view simply holds the formatted date, and the controller acts as the liaison between the two.
If you separated the view, you might as well separate the VM too in order to keep things simple.
Still, if that doesn't do it for you and your data is not exposed as a service of some kind, why not use just the parts of the VM you need? Call only the methods you need for the page you're viewing, set only the properties you need - don't do it all. And do it on a different thread if the data really takes long to process, so that your UI doesn't freeze (and of course, in the meantime, show the user that you're getting the data using a progress bar).
That should be enough for the scenario you described.
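One way to sketch the "per-page viewmodel that loads only its slice" approach (the class names are invented, and the Supplier-based lazy load is one illustrative choice among several):

```java
import java.util.function.Supplier;

// Hypothetical per-page viewmodel: it receives a loader for only the data
// its page shows, and defers the (possibly expensive) load until the page
// actually asks for it - nothing the other page needs is ever fetched.
class PageOneViewModel {
    private final Supplier<String> summaryLoader;
    private String summary; // null until first requested

    PageOneViewModel(Supplier<String> summaryLoader) {
        this.summaryLoader = summaryLoader;
    }

    String getSummary() {
        if (summary == null) {
            summary = summaryLoader.get(); // load on demand, exactly once
        }
        return summary;
    }
}
```

A second page would get its own viewmodel with its own loader, so constructing either viewmodel stays cheap and the UI thread never blocks on data the current page doesn't show.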
For application developers, I suppose the traditional paradigm for writing an application with domain objects that can be persisted to an underlying data store (an SQL database, for argument's sake) is to write the domain objects and then write (or generate) the table structure. There is a tight coupling between what the domain object looks like and the structure of the underlying data store. So if you want to add a piece of information to your domain object, you add the field to your code and then add a column to the appropriate database table. All familiar?
This is all well and good for data stores that have a well defined structure (I'm mainly talking about SQL databases whereby the tables and columns are pre-defined and fixed), but now a number of alternatives to the ubiquitous SQL database exist and these often do not constrain the data in this way. For instance, MongoDB is a NoSQL database whereby you divide data into collections but aside from that there is no structuring of the data. You don't define new columns when you want to add a new field.
Now to the question: given the flexibility of a data store like MongoDB, how would one go about achieving a similar kind of flexibility in the domain objects that represent this data? So for instance, if I'm using Spring and creating my own domain objects, when I add a "middleName" field to my data, how can I avoid having to add a "middleName" field to my domain object? I'm looking for some kind of mechanism/approach/framework to dynamically inspect the data and have access to it in my domain object without having to make a code change every time. All ideas welcome.
I think you have a couple of choices:
You can use a dynamic programming language and not have domain objects (clojure for example)
If you're fixed on using Java, the Mongo Java driver returns data in DBObject, which is essentially a Map. So the default behavior already provides what you want. It's only when you map the DBObject into domain objects, using a library like Morphia (or Spring Data), that you even have to worry about domain objects at all.
But, if I was using java, I would stick with the standard convention of domain objects mapped via morphia, because I think adding a field is a very minor inconvenience when compared against the benefits.
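A minimal map-backed sketch of what the driver already gives you (the class below is invented to mimic DBObject's field-by-name behaviour; it is not part of the driver):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical map-backed "document": whatever fields the stored document
// happens to have are reachable by name, so a newly appearing "middleName"
// needs no code change - at the cost of compile-time safety.
class DynamicDocument {
    private final Map<String, Object> fields = new HashMap<>();

    void put(String name, Object value) { fields.put(name, value); }
    Object get(String name) { return fields.get(name); }
    boolean has(String name) { return fields.containsKey(name); }
}
```

The trade-off the answers describe is visible here: `has("middleName")` replaces a typed getter, so typos and missing behaviour surface at runtime rather than compile time.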
I think the question is inherently paradoxical:
On one hand, you want to have domain objects, i.e. objects that represent the data (and behaviour) of your problem domain.
On the other hand, you say that you don't want your domain objects to be explicitly influenced by changes to the data.
But when you have objects that represent your problem domain, you want to do just that- to represent your problem domain.
So that if, for example, a middle name is added, your representation of the real-life 'User' entity should change to accommodate this change to the real-life user; perhaps not only by adding this piece of data to your object, but also by adding some related behaviour (validation of the middle name, or some functionality related to it).
In essence, what I'm trying to say here is that when you have (classic OO) domain objects, you may need to change your behaviour/functionality along with your data, and since you don't have any automatic way of changing your behaviour, the question of automatically changing your data becomes irrelevant.
If you don't want behaviour associated with your data, then you essentially have DTOs, and @Kevin's answer is what you're looking for.
Honestly, it sounds more like you're looking for some kind of blackbox DTO where, like you describe, fields are added or removed "arbitrarily" depending on the data. This makes me inclined to suggest a simple Map to do the job. You can't really have a domain-driven design if your domain model is constantly changing.
I have predefined tables in the database based on which I have to develop a web application.
Should I base my model classes on the structure of the data in the tables?
But a problem is that the tables are very poorly defined and there is much redundant data in them (which I cannot change!).
E.g. in two tables, three columns are the same:
Table: Student_details
Student_id, Name, Age, Class, School

Table: Student_address
Student_id, Name, Age, Street1, Street2, City
I think you should make your models in a way that is best suited to how they will be used. Don't worry about how or where the data is stored... otherwise, why go through the trouble of layering your code? Why not just do the direct DB query right in your view? So if you are going to create an abstraction of your data - a "model" - make one that is designed around how it will be used, not how it is or will be persisted.
This seems like a risky project - presumably, there's another application somewhere which populates these tables. As the data model is not very sound from a relational point of view, I'm guessing there's a bunch of business/data logic glued into that app - for instance, putting the student age into the Student_address table.
I'd support jsobo in recommending you build your business logic independently of the underlying persistence mechanism, and that you try to keep your models as domain-focused as possible, without too much emphasis on how the database happens to be structured.
You should, however, plan on spending a certain amount of time translating your domain models into their respective data representations and dealing with whatever quirks the data model imposes. I'd strongly recommend containing all this stuff in a separate translation layer - don't litter it throughout the rest of the application.
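A hedged sketch of such a translation layer against the two tables above (rows arrive as maps standing in for result sets; the "details table wins" policy is an illustrative assumption):

```java
import java.util.Map;

// Clean domain object: one name, one age - the redundancy stays outside.
class Student {
    final int id;
    final String name;
    final int age;
    final String city;

    Student(int id, String name, int age, String city) {
        this.id = id;
        this.name = name;
        this.age = age;
        this.city = city;
    }
}

// Translation layer: the decision of which duplicated column to trust
// lives here and nowhere else in the application.
class StudentMapper {
    static Student fromRows(Map<String, Object> detailsRow, Map<String, Object> addressRow) {
        // Centralized (if arbitrary) policy: Student_details wins for the
        // duplicated Name/Age columns; mismatch handling would also go here.
        return new Student(
                (Integer) detailsRow.get("Student_id"),
                (String) detailsRow.get("Name"),
                (Integer) detailsRow.get("Age"),
                (String) addressRow.get("City"));
    }
}
```

The rest of the application only ever sees `Student`, so the quirks of the legacy schema never leak past the mapper.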