Sharing a ServiceStack (OrmLite) Model with an MVC project

I'm new to MVC; I come from WebForms, where I was also using ServiceStack OrmLite.
I need to know whether an MVC project can have its Model section in a different assembly, so I can share that assembly with other projects (console app, Web API, MVC project).
It is important that the assembly can define the Model section using ServiceStack's model-definition functionality (I use ServiceStack OrmLite for the data layer). So here comes my other question: can ServiceStack's model definitions also be used for the views in the MVC project? That is, is the model syntax for ServiceStack compatible with the strongly typed models in MVC views?
Do you have a sample of:
models in an external assembly
sharing the model definitions with the DAL in ServiceStack (OrmLite) while also using them in the views of the MVC project
Thanks in advance.

OrmLite is very flexible and resilient in what clean POCOs you can use with it. From a previous answer:
Clean POCOs
The complex Data Models stored in OrmLite or Redis don't suffer from any of these issues, since they are able to use clean, disconnected POCOs. They're loosely coupled, where only the "Shape" of the POCO is significant, i.e. moving projects and changing namespaces won't impact serialization, how it's stored in RDBMS tables, Redis data structures, Caching providers, etc. You're also not coupled to specific types: you can use a different type to insert data into OrmLite than the one you use to read from it, nor does it need to be the exact "Shape", as OrmLite can populate a DTO with only a subset of the fields available in the underlying table. There's also no distinction between Table, View or Stored Procedure; OrmLite will happily map any result set onto any matching fields of the specified POCO, ignoring the others.
Effectively this means POCOs in ServiceStack are extremely resilient and interoperable, so you can happily re-use the same POCOs as Service DTOs and as OrmLite data models without issue. If the DTO and data model only deviate slightly, you can hide individual properties from serialization or from OrmLite with the attributes below:
public class Poco
{
    [Ignore]
    public int IgnoreInOrmLite { get; set; }

    [IgnoreDataMember]
    public int IgnoreInSerialization { get; set; }
}
Otherwise, when you need to separate them (e.g. more fields were added to the RDBMS table than you want to return, the DTO includes additional fields populated from alternative sources, or you just want your Services to project them differently), at that point (YAGNI) you can take a copy of the DTO and add it to your Services implementation so they can grow separately, unimpeded by their different concerns. You can then effortlessly convert between them using ServiceStack's built-in Auto Mapping, e.g.:
var dto = dbPoco.ConvertTo<Poco>();
The built-in Auto Mapping is also very tolerant and can coerce properties with different types, e.g. to/from strings, different collection types, etc.
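For illustration, here's a minimal sketch (hypothetical Customer classes, not from the original answer) of keeping an OrmLite data model and a slimmer DTO side by side, converting with Auto Mapping:

using ServiceStack;                 // ConvertTo<T> Auto Mapping extension
using ServiceStack.DataAnnotations; // OrmLite attributes

public class Customer               // data model persisted by OrmLite
{
    [AutoIncrement]
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalNotes { get; set; } // kept out of the DTO
}

public class CustomerDto            // DTO returned by Services / bound to MVC views
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Auto Mapping copies the matching properties by name:
// var dto = dbCustomer.ConvertTo<CustomerDto>();

Because only the "Shape" matters, the same CustomerDto can also serve as the strongly typed @model of an MVC view.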

Related

Is it a good idea to implement validation in Entity Framework POCO entities with database-first?

It seems that the best place to implement validation is as close as possible to the database, and when I use Entity Framework the nearest objects are the entities, in my case the POCO entities.
The reason is that if I want to reuse these POCO entities, the validation is implemented in the POCO objects themselves, so there is less chance of inserting wrong data into the database.
This also prevents someone from inserting incorrect data into the database by writing another application, or by simply not implementing the validation. So it is more secure.
One way to do that is to use partial classes that extend the POCO entities and implement the IValidatableObject interface, returning a list of ValidationResult objects.
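A minimal sketch of this first approach (hypothetical Customer entity; assume the generated partial half declares the mapped properties such as Name):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public partial class Customer : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // Example business rule; the real rules depend on your entity
        if (string.IsNullOrWhiteSpace(Name))
            yield return new ValidationResult("Name is required.", new[] { "Name" });
    }
}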
But another way is the following. I have a common assembly that contains:
An interface that declares the methods the repositories need to implement.
The POCO entities that will be used by the repositories.
A class with utilities, such as methods to copy entities and to validate the data of the entities.
Then I can create many repositories that use different versions of EF, or other technologies, and all of them use the common assembly. These repositories implement validation using the methods in the common library.
In this case I implement the validation only once. The only problem is that the repositories need to remember to call the methods that validate the data.
But this way has advantages, from my point of view. For example, I can validate the entities' data depending on the type of operation: if I am adding a new record and the primary key is auto-generated, I can throw an exception when the ID is not 0; or if I try to delete a record when the ID is 0, I don't need to send the command to the database at all.
So this second solution keeps the validation as close as possible to the database, because it is used in the repository, the element that accesses the database. But it has the problem that if some developer creates a new repository and does not use the validation methods, I can end up with incorrect data in the database.
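A minimal sketch of this second approach (hypothetical names), showing both its strength (per-operation validation) and its weakness (nothing forces a repository to call it):

using System.ComponentModel.DataAnnotations;

// In the common assembly:
public interface ICustomerRepository
{
    void Add(Customer customer);
}

public static class EntityValidator
{
    public static void ValidateForInsert(Customer customer)
    {
        // An auto-generated key must still be 0 before an insert
        if (customer.Id != 0)
            throw new ValidationException("Id must be 0 when adding a new record.");
    }
}

// In a concrete repository assembly:
public class EfCustomerRepository : ICustomerRepository
{
    public void Add(Customer customer)
    {
        EntityValidator.ValidateForInsert(customer); // the repository must remember this call
        // ... persist with EF ...
    }
}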
So my question is: is the better option validation with partial classes, or a common library where the validation is implemented in the repositories, which is what users will really use?
Thanks.
OK - phew, big question. My opinion is that the APPLICATION DOMAIN of the application is the boss of everything. The database is just an add-on service. So the application domain should ultimately validate ALL objects that are being SENT somewhere. There is no need to validate objects coming out of the DB, because they were validated going in.
As an example, what if you were creating some object that needed to be sent off to a web service and it needed validation? Let's say it was never going near the database or the repositories. Once the DOMAIN business objects have been validated, they can be sent for persistence or anywhere else.
Another thing to consider is what you mean by validation. Does it mean the datatypes are correct? Does it mean the business object is valid? Does it mean the business object is valid in the given context? It could mean all or only some of these things.
As another example, what if your system allows users to partially update records (common with very long input forms)? The business object may only become valid when ALL the required data is captured, but the database allows persistence of "partial" data. In other words, you can save the business object to the database even though it is not yet valid for further processing.
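A sketch of that last distinction (hypothetical form object, not from the answer): the object can always be persisted as a draft, but only a complete one passes business validation:

public class ApplicationForm
{
    public string Name { get; set; }   // captured on page 1 of the form
    public string Email { get; set; }  // captured on page 2 of the form

    // Saving a partial draft is always allowed; submitting it is not.
    public bool IsValidForSubmission()
    {
        return !string.IsNullOrEmpty(Name) && !string.IsNullOrEmpty(Email);
    }
}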

Could a LINQ DataContext object model be used to manage a changing data structure?

I am currently working on a project where we are rewriting software that was originally written in Visual DataFlex; we are changing it to use SQL and rewriting it as a C# client program and a C#/ASP.NET website. The current database is really horrible: columns have just been added to tables, or pipe (|) characters stuck between cell values, whenever new fields were needed. So we have things like a person table with over 200 columns, due to things like six sets of (AddressLine1, AddressLine2, Town, City, Country, Postcode) columns for storing different addresses (home/postal/accountPostal/etc.).
What we would like to do is restructure the database, but we also need to keep using the current structure so that the original software can still work. What I would like to know is: would it be possible, using LINQ, to write a DataContext object model class that could interpret the database structures, so that we could continue to use the current database structure while to the code it looks like we were using the new structure? Then, once different modules of the software are rewritten, we could change the object model to use the correct data structure.
First of all, since you mention the DataContext, I think you're looking at LINQ to SQL? I would advise using the Entity Framework instead. The Entity Framework has more advanced modelling capabilities that you can use in a scenario like yours: it has the ability to construct, for example, a type from multiple tables, and to use inheritance or complex types.
The Entity Framework creates a model for you that consists of three parts:
SSDL, which stores what your database looks like.
CSDL, which stores your model (your objects and the relationships between them).
MSL, which tells the Entity Framework how to map from your objects to the database structure.
Using this you can have a legacy database and map this to a Domain Model that's more suited to your needs.
The Entity Framework can create a starting model from your database (with all tables, columns and associations mapped) and then you can begin restructuring this model.
These classes are generated as partial, so you can extend them, for example by splitting the piped database fields into separate properties, as sketched below.
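For example, a sketch (hypothetical Person entity) where the hand-written partial half wraps a legacy piped column in clean properties:

public partial class Person
{
    // Assume the generated half maps the legacy column, e.g.:
    //   public string HomeAddress { get; set; }  // "line1|line2|town|city|country|postcode"

    public string HomeTown
    {
        get { return HomeAddress.Split('|')[2]; }
        set
        {
            var parts = HomeAddress.Split('|');
            parts[2] = value;
            HomeAddress = string.Join("|", parts);
        }
    }
}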
Have you also thought about using views? If possible, you could add views to your database that give you a nicer data schema to work with, and then base your model on the views in combination with stored procedures.
Hope this gives you some ideas.

How to use CodeIgniter database models

I am wondering how the models in CodeIgniter are supposed to be used.
Let's say I have a couple of tables in a menu items database, and I want to query information from each table in different controllers. Do I make a different model class for each of the tables and lay out the functions within them?
Thanks!
Models should contain all the functionality for retrieving and inserting data into your database. A controller will load a model:
$this->load->model('model_name');
The controller then fetches any data needed by the view through the abstract functions defined in your model.
It would be best to create a different model for each table, although it is not essential.
You should read up on the MVC design pattern; it is used by CodeIgniter and many other frameworks because it is efficient and allows code reuse. More info about models can be found in the CodeIgniter docs:
http://codeigniter.com/user_guide/general/models.html
CodeIgniter is flexible, and leaves this decision up to you. The user's guide does not say one way or the other how you should organize your code.
That said, to keep your code clean and easy to maintain I would recommend an approach where you try to limit each model to dealing with an individual table, or at least a single database entity. You certainly want to avoid having a single model to handle all of your database tables.
For my taste, CodeIgniter is too flexible here - I'd rather call it vague. A CI "model" has no spec and no interface; it can be things as different as:
An entity/domain object, where each instance basically represents a record of a table. Sometimes it's an "anemic" domain object, where each property maps directly to a DB column, with little behaviour and little or no understanding of object relationships and "graphs" (say, foreign keys in the DB are just integer ids in PHP). Or it can be a "rich (or true) domain object" with all the business intelligence, which also knows about relations: say, instead of $person->getAccountId() (returning an int) we have $person->getAccount(); perhaps it also knows how to persist itself (and perhaps the full graph of related objects, with some notion of "dirtiness").
A service object, related to object persistence and/or general DB querying: a DataMapper, a DAO, etc. In this case we typically have one single instance (singleton) of the object, with little or no state, typically one per DB table or per domain class.
When you read, in the CI docs or forums, about, say, the Person model, you can never know which kind of pattern you are dealing with. Worse: frequently it's an ugly mix of those fundamentally different patterns.
This informality/vagueness is not specific to CI; in my experience it's common to PHP frameworks in general.

In an MVC web application, who is responsible for filtering large collections of objects, the view or the model?

I have a web application built on an MVC design.
I have a database which contains a large number of objects (forum threads) which I can't load into memory all at once. I now want to display (part of) this collection with different filters in effect (kind of like what Stack Overflow does with questions sorted by date, votes, tags, etc.).
Where do I implement the filtering logic? It seems to me that this must go into the model part of the application, as only models interact with the database (in my implementation). If I make the filtering a part of the view, then the view must access the database directly to get the list of filtered objects, right? I'd like to avoid this, because it exposes the database layout to the view. But at the same time, displaying different views of the same data should be implemented in the view part of the application, as they are just that -- different views of the same data.
So how do I resolve this? Do I create an additional model, say, FilteredThreadsList, and have it remember the filter to use, and then use a FilteredView to display the list of threads that FilteredThreadsList spits out?
Or do I have to build a ThreadQueryier that allows views to query the database for certain thread objects, so I can have the filtering logic in a view without exposing the database backend?
You should never query data from the view. I don't know which framework you are using in particular, but as for Ruby on Rails (it should be the same for other frameworks), we always pull the necessary data in the controller and store it in a variable. The variable is then accessed by the view, which helps you avoid querying your database directly from the view. If the code to query the database gets too lengthy in the controller, move that code into the model instead, so it's more maintainable for your project in the future. Additionally, you can call this model method from multiple places in your application if needed. Good luck!
From an architectural point of view, the model should have the code for filtering. This is because in many applications the filtering code is not trivial and carries a good amount of domain logic (think of filtering the top gainers from a list of stocks). Your example looks the same, since you might want to filter by vote or by date or by tags, and then by answered or unanswered, etc.
In some very simple applications that deal with searching/listing entities and allow Create/Read/Update/Delete of an entity, the pagination, sorting and filtering logic is usually very generic and can be implemented in a controller base class that is inherited by all entity-specific controller classes.
The bottom line is this: if your filtering logic is generic, put it in the controller; otherwise put it in the model.
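As a sketch of that bottom line (hypothetical names): the model layer exposes the filtering, so neither the view nor the controller ever touches the database directly:

using System.Collections.Generic;

public class ForumThread { public string Title { get; set; } /* ... */ }

public enum ThreadSort { ByDate, ByVotes }

public interface IThreadRepository
{
    // The controller asks the model layer for an already-filtered,
    // already-paged slice and hands it to the view, which only renders it.
    IList<ForumThread> List(string tag, ThreadSort sort, int page, int pageSize);
}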
The model is only a bunch of entities.
The view provides a visual representation of the data from the model; use as many views as you want. If your application is web based, you can fetch the data into the browser just once (via AJAX) and re-use it for different UI components rendered in the browser.
As for which entities to show and which view to use for their representation, I think that's the Controller's job. If you need some support for this on the "model layer", add it, but avoid tight coupling.

LINQ to SQL classes to my own classes

I'm looking at using LINQ to SQL for a new project I'm working on, but I do not want to expose the LINQ classes to my applications. For example, doing a select in LINQ returns a System.Linq.IQueryable<> collection. As well, all the classes generated to represent the database use the Table, Column and EntityRef classes and attributes. It's fine if my data access layer has LINQ dependencies, but I don't want my application to.
So my thought is that I will have to use the LINQ to SQL generated classes as intermediate classes that are not exposed outside of my data access layer, and create my own classes for the application to use. What is the easiest/most efficient way to get the data from the LINQ to SQL classes into my own classes?
I totally agree with your thinking - I would try to avoid exposing LINQ to SQL entities directly to the world.
I would definitely recommend using a "domain model" of your own, either a 1:1 mirror of the underlying LINQ to SQL entities, or a different one.
As long as you have a domain model that is quite similar to the underlying LINQ to SQL entities, you can use tools like AutoMapper to easily shuffle data between your LINQ to SQL entities and your domain model classes. It should be pretty easy and flexible to do it that way!
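For example, a minimal sketch using AutoMapper's classic static API (hypothetical TagEntity and Tag classes):

using AutoMapper;

// Configure the mapping once at application startup:
Mapper.CreateMap<TagEntity, Tag>();

// Then shuffle data from a LINQ to SQL entity into the domain class:
Tag domainTag = Mapper.Map<Tag>(dbEntity);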
Rob Conery published a webcast series entitled the MVC-Storefront where he introduces a variation of the repository pattern that accomplishes what you want.
I've used ideas from the screencast on a reasonably large project and was quite pleased with the results.
There are, however, issues with the pattern, particularly around concurrency and detached scenarios that you will want to think about up front before fully committing to it.
I detailed some of my pain with concurrency in this pattern here.
I'll be interested in the responses you get, because I'm considering the exact same thing. I want to use the L2S entity classes on our backend but use much lighter-weight entities for application consumption.
Randy
I would advise against using LINQ to SQL on a new project, since Microsoft will no longer be developing it, except perhaps to fine-tune some issues. LINQ to SQL is perfectly usable and acceptable, but I would not advise new projects to use it. If you like LINQ to SQL, you should definitely look into using Entity Framework instead.
This is my current incarnation of how I am going about doing this:
I have a DataContext class that I created by adding a LINQ to SQL class and dropping tables onto the designer. I called the class MyDataContext and put it in a namespace called Linq. My database has a table called Tag, which generated a class, also in the Linq namespace. I changed all the accessors to internal, so they would not be visible outside of the data access layer.
namespace Linq
{
    [System.Data.Linq.Mapping.DatabaseAttribute(Name="MyDb")]
    internal partial class MyDataContext : System.Data.Linq.DataContext
    {
        ...
    }

    [Table(Name="dbo.vTag")]
    internal partial class Tag
    {
        ....
    }
}
I then created a class called DataAccess, which is what will be exposed to any application that references the assembly. I also created my own Tag class. The DataAccess class and my new Tag class are in a different namespace, called Data, to avoid collisions with the generated classes in the Linq namespace. I use LINQ to SQL to query for an IList of Linq.Tag objects, then I use LINQ to Objects to generate a list of Data.Tag objects from them.
I'd like to hear comments on whether there's a more performant way to do this, or one that requires less code. I also wasn't too happy with my use of duplicate class names (Tag), so I'm interested in any naming suggestions too.
namespace Data
{
    public class DataAccess
    {
        public IList<Tag> List_Tags()
        {
            using (Linq.MyDataContext dal = new Linq.MyDataContext())
            {
                // Materialize the LINQ to SQL entities before the context is disposed
                IList<Linq.Tag> lstTags = (from c in dal.Tags select c).ToList();

                // Project the internal Linq.Tag objects into the public Data.Tag class
                return (from tag in lstTags
                        select new Data.Tag()
                        {
                            ID = tag.ID,
                            Name = tag.Name,
                            Parent_ID = tag.Parent_ID
                        }).ToList();
            }
        }
    }
}
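One possible simplification, as a sketch using the same hypothetical classes: project directly into Data.Tag inside the query, so the intermediate list of Linq.Tag objects disappears:

public IList<Tag> List_Tags()
{
    using (var dal = new Linq.MyDataContext())
    {
        // LINQ to SQL can project a query straight into a non-entity type
        return (from c in dal.Tags
                select new Data.Tag()
                {
                    ID = c.ID,
                    Name = c.Name,
                    Parent_ID = c.Parent_ID
                }).ToList();
    }
}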
What you are proposing is having two separate models. That means boilerplate code, which I've found is not necessary. I have more or less the same idea as you, but realized that this would be useless. I've suggested Entity Framework in another answer in this thread, and I want to make that point again here.
What you end up with is a model-soup, where you have to maintain two models instead of just the one. And that is definitely NOT desirable.
Going from the LINQ to SQL classes to your own classes is a matter of some fairly straightforward LINQ to Objects (or just initialisation for single objects).
More fun is going back from your model to the LINQ to SQL objects, but this is fairly standard stuff (although it's something I'm still working out, or I'd find you some specific references).
