Whether to use virtual type with a foreign key using POCO? - entity-framework-4.3

I am new to Entity Framework and am wondering why my navigation properties have to be "virtual", even though the data can be loaded without the virtual keyword.
Let's say I have the two classes below. Both have lines with the virtual keyword commented out, and data is still loaded without virtual. So why do we use virtual? Does it have an impact once we add/update/delete records using Context.SaveChanges()?
public class Application
{
public int ApplicationID{get;set;}
public string Name{get;set;}
//public virtual ICollection<ApplicationPages> Pages{get;set;}
public ICollection<ApplicationPages> Pages{get;set;}
}
public class ApplicationPages
{
public int ApplicationPageID{get;set;}
public string Name{get;set;}
public int ApplicationID{get;set;}
[ForeignKey("ApplicationID")]
public Application Application{get;set;}
//public virtual Application Application{get;set;}
}

As I understand it, the virtual keyword is required to allow Entity Framework to make proxies of your classes, which assist in lazy loading situations.
I don't know the details of the proxy implementation, but I'm guessing it brings back and stores child ids with the parent. Then, if you subsequently access child attributes (other than the id), it has an efficient way of fetching the child records.
I have had trouble with navigation properties on the parent, and left them out in favor of explicitly loading the child records when needed. However, my keys are two-part (id and effective date), which may be the cause.
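To make the difference concrete, here is a minimal sketch (the context class and Main method are invented for illustration; EF 4.x with DbContext is assumed). With virtual, EF can subclass the entity at runtime and intercept property access; without it, you load related data yourself:

```csharp
using System.Data.Entity;
using System.Linq;

// Hypothetical context wrapping the two classes from the question.
public class AppDbContext : DbContext
{
    public DbSet<Application> Applications { get; set; }
    public DbSet<ApplicationPages> ApplicationPages { get; set; }
}

class Demo
{
    static void Main()
    {
        using (var db = new AppDbContext())
        {
            // Without virtual: app.Pages stays null unless you load it.
            var app = db.Applications.Include("Pages").First(); // eager load
            // ...or explicitly, after a plain query:
            var app2 = db.Applications.First();
            db.Entry(app2).Collection(a => a.Pages).Load();

            // With virtual: db.Applications.First() returns a dynamic proxy
            // subclass, and merely reading app.Pages fires the query for you
            // (lazy loading). SaveChanges() works the same either way; the
            // keyword affects how related data is loaded, not how it is saved.
        }
    }
}
```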

Related

C#, MVC3, How to use the non-generic DBSet with a runtime defined type?

I'm new to MVC and the EF. My app is a simple code-first with several POCO classes and a DBContext like this:
public class ExpDefContext : DbContext
{
public DbSet<Experiment> Experiments { get; set; }
public DbSet<Research> Researches { get; set; }
...
The problem: I need to add to my data model an entity set whose type is built at runtime from user input, meaning I have no idea of its data structure.
I read that the non-generic DbSet class is made just for this, so I added to the context:
public DbSet Log { get; set; }
...and created a constructor for the context that accepts the runtime type and sets the new DbSet:
public ExpDefContext(Type LogRecType)
{
Log = Set(LogRecType);
}
(the type by the way is built using Reflection.Emit).
In the controller I create the type (named LogRec) and pass it to a new DBContext instance. Then I create a LogRec instance and try to Add it to the database:
Type LogRec;
LogRec = LogTypeBuilder.Build(dbExpDef, _experimentID);
var dbLog = new ExpDefContext(LogRec);
var testRec = LogRec.GetConstructor(Type.EmptyTypes).Invoke(Type.EmptyTypes);
dbLog.Log.Add(testRec);
dbLog.SaveChanges();
and I get an exception from the dbLog.Log.Add(testRec):
The entity type LogRec is not part of the model for the current context
What am I doing wrong?
Is there a better way to do this (preferably without diving too deep into the Entity Framework)?
Thanks
I suspect that EF only reflects over the generic DbSet<T> properties in your derived DbContext and ignores any non-generic DbSet properties when the model is created in memory.
However, an alternative approach might be to use the Fluent API in OnModelCreating to add your dynamic type as an entity to the model.
First of all, you can add a type to the model only when the model is built in memory for the first time after your AppDomain is loaded. (A model is built only once per AppDomain.) If you had a default constructor on the context in addition to the overloaded constructor, and had created and used a context instance via that default constructor, your model would have been built with only the static types, and you couldn't use the dynamic type as an entity anymore for as long as the AppDomain lives. That would result in exactly the exception you have.
Another point to consider is the creation of the database schema. If your type is unknown at compile time, the database schema is unknown at compile time as well. If the model changes due to a new type on the next run of your application, you will need to update the database schema somehow: either by recreating the database from scratch, or by defining a custom database initializer that drops the LogRec table and creates a new one according to the new layout of the LogRec type. Or maybe Code First Migrations might help.
About the possible solution with Fluent API:
Remove the DbSet and add a Type member instead to the context and override OnModelCreating:
public class ExpDefContext : DbContext
{
private readonly Type _logRecType;
public ExpDefContext(Type LogRecType)
{
_logRecType = LogRecType;
}
public DbSet<Experiment> Experiments { get; set; }
public DbSet<Research> Researches { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
var entityMethod = typeof(DbModelBuilder).GetMethod("Entity");
entityMethod.MakeGenericMethod(_logRecType)
.Invoke(modelBuilder, new object[] { });
}
}
DbModelBuilder doesn't have a non-generic Entity method, hence dynamic invocation of the generic Entity<T> method is necessary.
The above code in OnModelCreating is the dynamic counterpart of...
modelBuilder.Entity<LogRec>();
...which would be used with a static LogRec type, and that just makes the type known to EF as an entity. It is exactly the same as adding a DbSet<LogRec> property to the context class.
You should be able to access the entity set of the dynamic entity by using...
context.Set(LogRecType)
...which will return a non-generic DbSet.
I have no clue if that will work and didn't test it but the idea is from Rowan Miller, member of the EF team, so I have some hope it will.
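If it does work, usage from the controller might look like this sketch (LogTypeBuilder and the variables come from the question; Activator.CreateInstance replaces the manual GetConstructor call). The one hard constraint from above is that no context instance may be created through another constructor beforehand, or the model will already be cached without the dynamic type:

```csharp
Type logRecType = LogTypeBuilder.Build(dbExpDef, _experimentID);

using (var db = new ExpDefContext(logRecType))
{
    // Non-generic DbSet for the runtime-built entity type.
    DbSet log = db.Set(logRecType);

    object rec = Activator.CreateInstance(logRecType);
    log.Add(rec);
    db.SaveChanges();
}
```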

How to reduce number of injected dependencies on controller

I am using MVC3, Entity Framework v4.3 Code First, and SimpleInjector. I have several simple classes that look like this:
public class SomeThing
{
public int Id { get; set; }
public string Name { get; set; }
}
I have another entity that looks like this:
public class MainClass
{
public int Id { get; set; }
public string Name { get; set; }
public virtual AThing AThingy { get; set; }
public virtual BThing BThingy { get; set; }
public virtual CThing CThingy { get; set; }
public virtual DThing DThingy { get; set; }
public virtual EThing EThingy { get; set; }
}
Each Thingy (currently) has its own Manager class, like so:
public class SomeThingManager
{
private readonly IMyRepository<SomeThing> MyRepository;
public SomeThingManager(IMyRepository<SomeThing> myRepository)
{
MyRepository = myRepository;
}
}
My MainController consequently follows:
public class MainController
{
private readonly IMainManager MainManager;
private readonly IAThingManager AThingManager;
private readonly IBThingManager BThingManager;
private readonly ICThingManager CThingManager;
private readonly IDThingManager DThingManager;
private readonly IEThingManager EThingManager;
public MainController(IMainManager mainManager, IAThingManager aThingManager, IBThingManager bThingManager, ICThingManager cThingManager, IDThingManager dThingManager, IEThingManager eThingManager)
{
MainManager = mainManager;
AThingManager = aThingManager;
BThingManager = bThingManager;
CThingManager = cThingManager;
DThingManager = dThingManager;
EThingManager = eThingManager;
}
...various ActionMethods...
}
In reality, there are twice as many injected dependencies in this controller. It smells. The smell is worse when you also know that there is an OtherController with all or most of the same dependencies. I want to refactor it.
I already know enough about DI to know that property injection and service locator are not good ideas.
I can not split my MainController, because it is a single screen that requires all these things be displayed and editable with the click of a single Save button. In other words, a single post action method saves everything (though I'm open to changing that if it makes sense, as long as it's still a single Save button). This screen is built with Knockoutjs and saves with Ajax posts if that makes a difference.
I humored the use of an Ambient Context, but I'm not positive it's the right way to go.
I humored the use of injecting a Facade as well.
I'm also wondering if I should implement a Command architecture at this point.
(Don't all of the above just move the smell somewhere else?)
Lastly, and perhaps independent of the three approaches above: should I instead have a single, say, LookupManager with explicit methods like GetAThings(), GetAThing(id), GetBThings(), GetBThing(id), and so on? (But then that LookupManager would need several repositories injected into it, or a new type of repository.)
My musings aside, my question is, to reiterate: what's a good way to refactor this code to reduce the crazy number of injected dependencies?
Using a command architecture is a good idea, since it moves all business logic out of the controller and lets you add cross-cutting concerns without changing the code. However, this alone will not fix your problem of constructor over-injection. The standard solution is to move related dependencies into an aggregate service. However, I do agree with Mark that you should take a look at the unit of work pattern.
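For illustration, an aggregate service might look like this sketch (the interface and property names are invented; the point is just to group dependencies that change together behind one contract, so the controller takes a single dependency):

```csharp
// Hypothetical aggregate service wrapping the related managers.
public interface IThingManagers
{
    IAThingManager AThings { get; }
    IBThingManager BThings { get; }
    ICThingManager CThings { get; }
}

public class ThingManagers : IThingManagers
{
    public ThingManagers(IAThingManager a, IBThingManager b, ICThingManager c)
    {
        AThings = a;
        BThings = b;
        CThings = c;
    }

    public IAThingManager AThings { get; private set; }
    public IBThingManager BThings { get; private set; }
    public ICThingManager CThings { get; private set; }
}

public class MainController
{
    private readonly IMainManager _mainManager;
    private readonly IThingManagers _things;

    // Two constructor parameters instead of six; the container wires the rest.
    public MainController(IMainManager mainManager, IThingManagers things)
    {
        _mainManager = mainManager;
        _things = things;
    }
}
```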
Have you considered using a unit of work design pattern? There is a great MSDN post on what a unit of work is. An excerpt from that article:
In a way, you can think of the Unit of Work as a place to dump all transaction-handling code. The responsibilities of the Unit of Work are to:
Manage transactions.
Order the database inserts, deletes, and updates.
Prevent duplicate updates. Inside a single usage of a Unit of Work object, different parts of the code may mark the same Invoice object as changed, but the Unit of Work class will only issue a single UPDATE command to the database.
The value of using a Unit of Work pattern is to free the rest of your code from these concerns so that you can otherwise concentrate on business logic.
There are several blog posts about this, but the best one I've found on how to implement it is here. There are some other ones which have been referred to from this site here, and here.
Lastly, and perhaps independent of the three above approaches, is should I instead have a single, say, LookupManager with explicit methods like GetAThings(), GetAThing(id), GetBThings(), GetBThing(id), and so on? (But then that LookupManager would need several repositories injected into it, or a new type of repository.)
The unit of work would be able to handle all of these, especially if you're able to implement a generic repository for most of your database handling needs. Your tag mentions you're using Entity Framework 4.3, right?
Hope this helps!
I think your main issue is too many layers of abstraction. You are using Entity Framework, so you already have a layer of abstraction around your data; adding two more layers (one per entity) via a Repository and a Manager interface has led to the large number of interfaces your controller depends upon. It doesn't add a whole lot of value, and besides, YAGNI.
I would refactor, getting rid of your repository and manager layers, and use an 'ambient context'.
Then, look at the kinds of queries your controller is asking of the manager layers. Where these are very simple, I see no problem querying your 'ambient context' directly in your controller - this is what I would do. Where they are more complicated, refactor them into a new interface, grouping things logically (not necessarily one per entity), and use your IoC container for this.
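As a sketch of that grouping (all names here are invented), the controller might end up with the context plus one logically grouped interface instead of six managers:

```csharp
// One interface grouping the non-trivial operations for this screen,
// rather than one manager per entity.
public interface IMainScreenService
{
    MainClass LoadMainWithThingys(int id);
    void SaveEverything(MainClass main);
}

public class MainController : Controller
{
    private readonly MyDbContext _db;             // hypothetical 'ambient' context
    private readonly IMainScreenService _service; // grouped, non-trivial operations

    public MainController(MyDbContext db, IMainScreenService service)
    {
        _db = db;
        _service = service;
    }

    public ActionResult Index(int id)
    {
        // Trivial read: query the context directly.
        var names = _db.SomeThings.Select(t => t.Name).ToList();
        // Complicated read: delegate to the grouped service.
        var main = _service.LoadMainWithThingys(id);
        return View(main);
    }
}
```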

How to decouple repository and entities

This is a question on domain model design.
Let's say for a domain design involving users and groups, we have the following interfaces to implement:
interface IUser
{
string Name{get;}
DateTime DOB {get;}
}
interface IGroup
{
string Name {get;}
bool IsUserInGroup(IUser user); // #1
void IncludeUser(IUser user); // #2
void ExcludeUser(IUser user); // #3
}
interface IUserRepository
{
IUser Create(string name);
IUser GetByName(string name);
void Remove(IUser user);
void Save(IUser user);
}
interface IGroupRepository
{
IGroup Create(string name);
IGroup GetByName(string name);
void Remove(IGroup group);
void Save(IGroup group);
}
The tricky bit is to implement #1, #2 and #3 while keeping the entity classes (User, Group) decoupled from the repository classes (UserRepository, GroupRepository).
Another technicality to consider is that most RDBMSs do not implement many-to-many relationships directly; in practice there is always a separate table (say, UserGroupAssociation) whose records each associate a user and a group via foreign keys. I would like to hide this implementation detail from the domain interfaces and expose the equivalent logic through members #1, #2 and #3.
The effect of calling #2 and #3 should not persist until the group object in question has been saved (i.e. passed to the Save() method of the repository object).
How do you usually do it?
I don't do it. My Repository objects are tightly coupled to the root of the aggregate to which they relate, and (as kind of an aside) I don't bother making interfaces for my domain model objects unless I find I have a good reason to do so - do you have a particular reason to do this?
I've not come across any Repository examples which don't use the entity implementation type in the repository class (this one, for instance), and I can't think of any real advantage of using an interface instead. Interfaces earn their keep for infrastructure components (like a Repository) by making it easier to mock out entire layers of the system when testing; you don't get the same type of advantage using interfaces for domain objects.
And to perhaps actually answer the question...
I never have a domain object access a Repository - the domain object after all is supposed to represent something in the domain in real life, and Repositories are infrastructure components that don't exist in real life, so why would a domain object know about one?
For the specific example of adding a User to a Group, I'd use a Service Layer class, and do this:
public class UserService
{
private readonly IGroupRepository _groupRepository;
private readonly IUserRepository _userRepository;
public UserService(
IGroupRepository groupRepository,
IUserRepository userRepository)
{
this._groupRepository = groupRepository;
this._userRepository = userRepository;
}
public void IncludeUserInGroup(string groupName, string userName)
{
var group = this._groupRepository.FindByName(groupName);
var user = this._userRepository.FindByName(userName);
group.IncludeUser(user);
this._userRepository.SaveChanges();
}
}
public class User
{
public void AddToGroup(Group group)
{
this.Groups.Add(group);
}
public void RemoveFromGroup(Group group)
{
this.Groups.Remove(group);
}
}
Some points to note:
To avoid lazy-loading large numbers of Users when adding a User to a Group, I've moved the Group administration methods onto User. Depending on how much behaviour you actually have for Group, you might even consider turning it into an enumeration rather than a class. Be aware that if you're using the Entity Framework POCO T4 templates with FixupCollections, this will still lazy-load all the Users in a Group, but you can get around that in one way or another :)
The Service Layer class would implement a Create() method, the like of which you have on your Repositories. The Repositories would have an Add method, Find methods and a SaveChanges() method. Add would add an object created by the Service Layer to the object context.
All Repository classes would be set up to use the same underlying, request-scoped object context, so it wouldn't matter which one you call SaveChanges() on.
SaveChanges() would cause all changes which had happened to objects during that request to be saved, such as a User having a new Group added to their Groups collection.
Finally, another good technique for decoupling entities from Repositories and other infrastructure components is Domain Events.
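A minimal Domain Events sketch (the static dispatcher is one common variant of the pattern, as popularized by Udi Dahan; all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

// The event carries domain facts only, no infrastructure types.
public class UserAddedToGroup
{
    public User User { get; set; }
    public Group Group { get; set; }
}

public static class DomainEvents
{
    // Handlers are registered at startup (e.g. by the IoC container);
    // a no-op default keeps entities usable in unit tests.
    public static Action<UserAddedToGroup> UserAddedToGroupHandler = e => { };

    public static void Raise(UserAddedToGroup e)
    {
        UserAddedToGroupHandler(e);
    }
}

public class User
{
    public ICollection<Group> Groups { get; private set; }

    public void AddToGroup(Group group)
    {
        Groups.Add(group);
        // The entity stays ignorant of repositories; interested
        // infrastructure (auditing, notifications, persistence hooks)
        // subscribes to the event instead.
        DomainEvents.Raise(new UserAddedToGroup { User = this, Group = group });
    }
}
```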

Constructing Asp.Net IEnumerable ViewModels with eager loading and custom business logic

Model:
public class Company
{
public string Name {get;set;}
}
public class JobListing
{
public string Title {get;set;}
public Company Company {get;set;}
public bool JobListingHasRecommendation {get;set;}
}
ViewModel:
public class JobListingVM
{
public string Title {get;set;}
public string CompanyName {get;set;}
public string TitleAtCompany
{
get
{
return string.Format("{0} at {1}", Title, CompanyName);
}
}
}
Repository method:
public IEnumerable<JobListing> getAllJobs()
{
return dbContext.JobListings;
}
Controller Action:
public ActionResult Index()
{
var jobs = repository.getAllJobs();//jobs is now disconnected from the dbcontext
//let's say there are a thousands of jobs (and we will be paging)
//now we use some Automap like magic to convert the IEnumerable<JobListing> into
//IEnumerable<JobListingVM>
}
The problem is that getAllJobs has to either eager load the Company property (and the automap process relies on this knowledge), or Automap runs N queries for N jobs to get the Company - and this is clearly bad practice.
While the solution may be to eager load the Company property in the repository, what happens in cases where one doesn't want the eager load? Do we need many combinatorial repository methods to cover all eager/lazy load scenarios (with potentially nested eager loads, etc.)?
The JobListingHasRecommendation property of the ViewModel is also a problem, because it requires custom business logic involving db queries to set it, so this needs to be done per JobListing for every member of the IEnumerable. Doing this inside the controller after the repository call is messy, and doing it inside the automapping requires an IRepository injected into the automapping construct.
The mapping of Models to ViewModels is not as straightforward as all the automapping examples I see. I rarely see an example where a list of Models is mapped to a list of ViewModels and each ViewModel's JobListingHasRecommendation must be calculated individually. There are performance and architecture issues here, as the automapping service now also has to have database access...
My question is: since ViewModels store data that can be the result of complex calculations and sequential database accesses, what is the best practice for creating an IEnumerable of these types of ViewModels?
Thanks
Specify what you want to eager load via Include(), and return ToList():
return dbContext.JobListings.Include(o => o.Company).ToList(); //or whatever you want to eager load
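An alternative sketch that sidesteps both the Include decision and the N+1 issue: project straight into the view model with LINQ, so EF composes a single SQL query (the method name is invented, and the commented-out recommendation check assumes a Recommendations set and key that aren't in the question):

```csharp
// Sketch: project into the view model inside the repository, so EF builds
// one query with the join instead of N+1 lazy loads during mapping.
public IEnumerable<JobListingVM> GetAllJobVMs()
{
    return dbContext.JobListings
        .Select(j => new JobListingVM
        {
            Title = j.Title,
            CompanyName = j.Company.Name,
            // A flag like JobListingHasRecommendation could be computed in the
            // same query, e.g. with a correlated subquery such as
            //   dbContext.Recommendations.Any(r => r.JobListingId == j.Id)
            // avoiding per-row database calls after the fact.
        })
        .ToList();
}
```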

Entity Framework Model with Table-per-Type Inheritance

When I define a model in Entity Framework with Table-per-Type Inheritance, if I have a base class/table (not abstract) called person, and two sub entities and tables, adult and child, after creating a child, how would I take the same object and convert it to adult? Once converted to adult, the child record should be deleted, though the base class data in the person table should be retained.
It is not possible. It is a similar problem to this one. The entity simply exists and its type is immutable. The only way to do it is to delete the child entity (= records from both tables) and create a new adult entity (= new records in both tables).
This doesn't look like scenario for inheritance at all.
Edit:
The comment about inheritance was targeted at the scenario where you mentioned Person, Adult and Child entities. Anyway, once your scenario requires changing the type, you should think about another solution, where the part that can change is handled by composition.
For example:
public class DataSource
{
public int Id { get; set; }
public virtual DataSourceFeatures Features { get; set; }
}
public class DataSourceFeatures
{
[Key, ForeignKey("DataSource")]
public int Id { get; set; }
public virtual DataSource DataSource { get; set; }
}
public class XmlDataSourceFeatures : DataSourceFeatures { ... }
public class DelimitedDataSourceFeatures : DataSourceFeatures { ... }
public class ServiceDataSourceFeatures : DataSourceFeatures { ... }
Now changing a type means deleting dependent current DataSourceFeatures from the database and create a new one but original object remains the same - only relation changes.
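A sketch of such a type change (the context name is an assumption, and with a shared primary key EF may require deleting the old dependent in a separate SaveChanges before inserting the new one):

```csharp
using (var db = new MyContext())   // hypothetical context
{
    var source = db.DataSources
        .Include(d => d.Features)
        .Single(d => d.Id == id);

    // Delete the old dependent row; the DataSource row stays untouched.
    db.Set<DataSourceFeatures>().Remove(source.Features);
    db.SaveChanges();

    // Attach a new subtype sharing the same primary key; only the
    // one-to-one relation changes, never the DataSource's identity.
    source.Features = new XmlDataSourceFeatures { Id = source.Id };
    db.SaveChanges();
}
```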
I wouldn't do this with EF, because with inheritance you've created an object-oriented abstraction over table relationships that doesn't allow you to convert between different types. In OO you can't do things like this:
Child child = new Child();
Adult grownUp = child;
And then expect the child to be an adult. You'd do it like this:
Child child = new Child();
Adult grownUp = child.GrowUp();
So assuming you're using SQL Server you could do that with a stored procedure. Something like GrowUp(child) and have it create a new entry in the Adult table as well as delete the entry in the Child table, but leave Person untouched. You could return the new adult object from the procedure. This can then be used like this:
Adult grownUp = context.GrowUp(child);
However, you'd need to make sure that after this line your code doesn't use the child object anymore, and you probably need to refresh or remove it from the context (not entirely sure about this).
