Find Foreign Keys from a T4 Template generator - visual-studio-2013

I am trying to generate a method for each of my Foreign Keys in my Entities to return a list of records based on that foreign key. I know of a way of determining the Primary Key:
foreach (var edmProperty in simpleProperties)
{
    bool isPrimaryKey = ef.IsKey(edmProperty);
    if (isPrimaryKey)
    {
        //do stuff
    }
}
Is there a way of finding the Foreign Keys?
I am using EF 6 with Visual Studio 2013.
Thanks

Don't do this. For many reasons:
It breaks persistence ignorance. POCOs are not supposed to know anything about the data layer. You may even have POCOs defined in a separate assembly that has no reference to EF.
Methods like GetByCountryID are typically repository methods; they don't belong to an entity class.
Static methods shouldn't be scattered over a class model. They're typical for utility classes or factories (it could make sense to have a method like City.New()).
How would you know that City has a GetByCountryID method? Other classes might even have the same method.
The object(s) obtained by the method are in no way related to a City instance, but placing the method on City suggests such an association.
If you remove the property Country from the EDMX (e.g. because it's never used), the method also disappears.
The main reason: there is no substitute for navigation properties. If you want to get Categories and their Products you have to load them in a way that EF knows how to associate them. You either do this by Include, or by including them in a projection, or by lazy loading, or by fetching the Products later, but all in the same context. Your proposed methods can only produce dissociated entities, and disconnected too (i.e. not attached to a context).
There are other patterns to hide data layer details from other application layers, for instance repositories with dependency injection.
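For example, a lookup like GetByCountryID would normally live behind a repository abstraction rather than on the entity itself. A purely illustrative sketch (the interface, context and property names are all made up):
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a repository keeps data-access concerns out of the POCOs.
public interface ICityRepository
{
    City GetById(int id);
    IEnumerable<City> GetByCountryId(int countryId);
}

// One possible EF-backed implementation, injected wherever cities are needed.
public class CityRepository : ICityRepository
{
    private readonly MyDbContext _context;   // MyDbContext, Cities and CountryId are assumed names

    public CityRepository(MyDbContext context)
    {
        _context = context;
    }

    public City GetById(int id)
    {
        return _context.Cities.Find(id);
    }

    public IEnumerable<City> GetByCountryId(int countryId)
    {
        return _context.Cities.Where(c => c.CountryId == countryId).ToList();
    }
}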

Here's how I get the navigation properties that are backed by a foreign key (i.e. the ones pointing to a parent):
public IEnumerable<NavigationProperty> GetParentNavigationProperties(EntityType type)
{
    return type.NavigationProperties.Where(np => np.DeclaringType == type
        && np.ToEndMember.RelationshipMultiplicity != RelationshipMultiplicity.Many);
}
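If you also need the foreign key scalar properties themselves (so you can generate one method per foreign key), EF6's metadata exposes them through GetDependentProperties(). A sketch that builds on the method above, using the same System.Data.Entity.Core.Metadata.Edm types (the method name is mine):
// The scalar properties of this entity that participate in a foreign key,
// derived from the parent navigation properties found above.
public IEnumerable<EdmProperty> GetForeignKeyProperties(EntityType type)
{
    return GetParentNavigationProperties(type)
        .SelectMany(np => np.GetDependentProperties())
        .Distinct();
}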

Related

'Existing Entity' constraint

I'm reading some data from an Excel file and hydrating it into an object of class A. Now I have to make sure that one of the fields of the data corresponds to the Id of a specific Entity, i.e.:
class A
{
    protected $entityId;
}
I have to make sure that $entityId is an existing id of a specific entity (let's call it Foo). Now this can be achieved using the choice constraint, by supplying the choices option as all of the existing ids of Foo. However this will obviously cause a performance overhead. Is there a standard/better way to do this?
I'm a bit confused about what you are doing, since you seem to talk about Excel parsing, but at the same time you mention choices, which in my opinion relate to Forms.
IMO you should handle the relationship to your entity directly, instead of only its id. Most of the time it is better to have the related entity itself as an attribute of your class A rather than just the id, and Symfony handles such relationships pretty well.
Then just have your Excel parser do something like this:
// Look up the related entity by the id read from the Excel file.
$relatedEntity = $this->relatedEntityRepository->find($entityId);
if (!$relatedEntity) {
    throw new \Exception(sprintf('No related entity found for id "%s".', $entityId));
}
$entity->setRelatedEntity($relatedEntity);
After doing this, since you were talking about Forms, you can then use an EntityType field, which will automatically perform the query against the database. Use query_builder if you need to filter the results.

Entity Framework in detached mode with MVC application

I have started working with Entity Framework (EF) for an MVC n-tier application. It seems obvious that, this being a web application (which is stateless), I have to use detached object models. There is no ambiguity when doing an Add operation. However, when doing an edit there are two ways:
1. Fetch the original object into the context, attach the updated object and then save to the database, something like the answer to this question: EF4 Context.ApplyCurrentValues does not update current values
2. Set individual modified properties explicitly using the IsModified property of the individual fields of the object, as described in this article: http://msdn.microsoft.com/en-us/data/jj592677.aspx
Method 1 has the disadvantage of having to load the object into memory from the database each time an update needs to be performed.
Method 2 requires manually passing which fields should have IsModified set to true from wherever the object can be updated. So, for example, for each entity I may need to create a companion object holding a boolean flag for each field.
e.g.
void SaveEntity(EntityClass e, EntityStateClass ec)
{
    context.Entry(e).Property("Name").IsModified = ec.NameState;
    context.SaveChanges();
}

class EntityStateClass { public bool NameState; }
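For comparison, method 1 with a DbContext would look roughly like the following sketch (the EntityClasses set and Id key are illustrative; CurrentValues.SetValues is the DbContext counterpart of ApplyCurrentValues):
void SaveEntity(EntityClass e)
{
    // Extra round trip: load the current version of the entity from the database.
    var original = context.EntityClasses.Find(e.Id);

    // Copy the posted values onto the tracked entity; only properties whose
    // values actually differ end up marked as Modified.
    context.Entry(original).CurrentValues.SetValues(e);
    context.SaveChanges();
}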
I would prefer method 2 simply for the sake of performance, but I am hindered by the n-tier architecture and repository pattern I am using. My repository interface restricts the save method for any object to
SaveEntity(EntityClass e);
So I cannot pass the "state" object. The context class is not available, and should not be available, outside the DAL, so I cannot set the property from outside either. Is there any "proper" way to achieve this?
Note: Self-tracking entities are also out of the question, since I cannot send entities with state to the client (the browser); I am intent on keeping the HTML lightweight.
EDIT: After a lot of thinking, I am trying to use the following mechanism to keep track of the modified state of each field in my domain class:
1. Declare a partial class for the entity class.
2. For each field that is updateable, declare a boolean property like "IsModified_FieldName".
3. Set the "IsModified_FieldName" property when the field is set.
However, for this I need Entity Framework to generate explicit properties (with backing fields) for me instead of the automatic properties it generates by default. Does EF provide a handle to do this?
Here is sample code of what I am trying to achieve
//Save method for class EntityClass.
void SaveEntity(EntityClass e)
{
    context.Entry(e).Property("Name").IsModified = e.IsModified_Name;
    context.SaveChanges();
}

//EntityClass is the class auto-generated by EF
public partial class EntityClass
{
    //This is the auto-generated property
    public string Name { get; set; }

    /* This is what I would like EF to generate instead
    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            //this is what I would like to do
            this.IsModified_Name = true;
        }
    }
    */
}

//This is another partial definition for EntityClass that I will provide
public partial class EntityClass
{
    //This property will be set to true if "Name" is set
    public bool IsModified_Name { get; set; }
}
PS: It seems the information I have provided is not sufficient and therefore there are no responses.
I am using DbContext (Database first model)
EF auto-generates the class files for me. So each time I update my database, the class files are regenerated.
To your concrete question: the entities are generated by a T4 template, and it should be possible to modify this template (which is just a text file) so that it generates the entities shaped the way you want.
But I have a few remarks about your concept:
In a web application, data are usually changed by a user in a browser. To know definitely what has really been changed you need to track the changes in the browser (probably with some Javascript that sets flags in the data - in a ViewModel, for example - when a user edits a text box).
If you don't track the changes in the browser, what happens? The data get posted back to the server, and at the server side (with MVC, in a controller) you don't know which properties have been changed. So your only option is to map all properties that have been posted back onto your EntityClass, and every property will be marked as Modified, no matter whether the user really changed it or not. When you later call SaveChanges, EF will write an UPDATE statement that involves all those properties, and you have exactly the unnecessary overhead that you want to avoid.
So, what do you gain by setting individual properties instead of setting the whole entity's state to Modified? In both cases you have marked all properties as Modified. The exception is a partial change of an entity, for example: you have a Customer entity with a Name and a City property, a view that only allows editing the Name but not the City, and a corresponding ViewModel that only contains a Name property. In this case your procedure would only mark the Name property of the Customer entity as Modified, but not the City. You might save a little here because you don't write the City value to the database. But you still save the Name even if it didn't change.
If you use solution 1 (ApplyCurrentValues) you have to load the entity first from the database, yes, but it would only mark the properties as Modified that really changed compared to their values in the database. If the user didn't change anything no UPDATE would be written at all.
Keep in mind that you are only at the beginning of implementing your concept. There are other changes to the data that can happen in the browser besides scalar property changes, namely relationship changes. For example, a user changes the relationship from an Order to a Customer, or you have a view with an Order and a collection of OrderItems where the user can not only edit the Order header but also edit, remove and add OrderItems. How would you recognize, when the data come back from the browser to the server, which collection items have been added and which have been removed - unless you track all those changes in the browser and send that tracking information back to the server in addition to the actual data, or unless you reload the Order and OrderItems from the database and merge the changes into those original entities?
Personally I would vote for option 1 for these reasons:
You can use real POCOs that don't carry additional tracking information. (BTW: I have some doubts whether you aren't reinventing the wheel by implementing your own tracking of something that EF change-tracking proxies provide out of the box.)
You don't need to track changes in the browser which can become quite complex and will require Javascript in every Edit view to write change flags into hidden form fields or something.
You can use standard features of EF without having to implement your own tracking.
You are required to load entities from the database when you want to update an entity, that's true. But is this really the performance bottleneck in a web application where data have to travel over the wire back and forth (and where reflection, which isn't exactly known for being fast, is involved via the model binder)? That's a different story if your database is remote from the web server and connected by a 9600 baud modem. But otherwise, your plan is not only premature optimization, it is a kind of premature architecture. You are starting to build a potentially complex architecture based on "it could be slow" to solve a performance problem that you don't actually know exists.

Entity Framework 5 - Invalid column name - Reverse Engineer Code First

Using Entity Framework 5, Visual Studio 2010 with the Entity Framework Power Tools (Beta 2) extension.
Here is my database table structure:
I used the Reverse Engineer Code First function of the aforementioned extension, which generated the POCO classes and some 'mapping' files (not sure if that's the formal parlance) and a single DbContext-derived class. Other than the change I describe next, all of these generated classes are as-generated by the power tool.
In the Category.cs file, I added the following code to help flatten the object graph a bit:
private ICollection<Product> m_Products = null;

public ICollection<Product> Products
{
    get
    {
        if (m_Products == null)
        {
            m_Products = new List<Product>();
            foreach (var categoryProduct in CategoryProducts)
            {
                m_Products.Add(categoryProduct.Product);
            }
        }
        return m_Products;
    }
    set { m_Products = value; }
}
I get the following exception, which I know must have something to do with the mappings, but I just can't quite figure this out.
Unhandled Exception: System.Data.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details.
---> System.Data.SqlClient.SqlException: Invalid column name 'Category_CategoryId'.
If I need to post more information, such as the specifics of the mappings, just let me know and I'll do it. I wanted to keep this as short as possible, but I realize I've omitted some things that, for those unfamiliar with the code generated by the tool, may leave one wanting for more details.
You've added a navigation property to your model and so EF is trying to map that to your database. "Code First" means your code model defines your database schema.
Try adding the [NotMapped] attribute to your helper properties to tell EF to ignore them.
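For example, a sketch based on the Products helper from the question ([NotMapped] lives in System.ComponentModel.DataAnnotations.Schema; the partial class is assumed to sit next to the generated Category class):
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;

public partial class Category
{
    private ICollection<Product> m_Products;

    // Tell EF to ignore this convenience property instead of trying to map it
    // (mapping it is what makes EF look for the Category_CategoryId column).
    [NotMapped]
    public ICollection<Product> Products
    {
        get
        {
            if (m_Products == null)
            {
                m_Products = CategoryProducts.Select(cp => cp.Product).ToList();
            }
            return m_Products;
        }
        set { m_Products = value; }
    }
}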
In case you created the DB schema automatically and you are not using initializers like DropCreateDatabaseAlways/DropCreateDatabaseIfModelChanges - in other words, you really are reverse engineering an existing database - it seems you would have to manually add the missing Category_CategoryId column to the database.
In case you don't want the property mapped to the database at all, you can use the Data Annotation [NotMapped] or the Fluent API, e.g. modelBuilder.Entity<Category>().Ignore(x => x.CategoryId) (see the sketch below).
Finally, it is possible that the problem is in the mapping. I don't know whether you are using data annotations or the Fluent API, but EF may automatically look for some DB column (behaviour derived from the model by convention) and not find it. In this case I recommend reviewing the mapping.
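Applied to the helper property from the question, the Fluent API alternative is a single line inside the generated context's OnModelCreating (a sketch; the rest of the generated configuration is elided):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // ...mapping configuration generated by the power tool...

    // Don't map the convenience Products property; only the CategoryProducts
    // join entity represents the real database relationship.
    modelBuilder.Entity<Category>().Ignore(c => c.Products);
}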
The OP already solved their problem, but I've had a similar error with a different solution. So here it is in case others need it:
I was missing a navigation property on one side of a 0..1 relationship between entities. Once I added an appropriate navigation property to the entity that was missing it, the problem was solved.
A bit more details: I had two entities with a 0..1 relationship using a FK. Entity A (parent) had a FK to Entity B (child). The child entity B had a navigation property to entity A, but A did not have a navigation property to B. After adding this, the problem was solved.

Linq to SQL inheritance patterns

Caveat emptor, I'm new to Linq To SQL.
I am knocking up a prototype to convert an existing application to use Linq To SQL for its model (it's an MVVM app). Since the app exists, I can not change its data model.
The database includes information on events; these are either advertising events or prize events. As such, the data model includes a table (Event) with two associated tables (AdvertisingEvent and PrizeEvent). In my old C# code, I had a base class (Event) with two subclasses (AdvertisingEvent and PrizeEvent) and used a factory method to create the appropriate flavour.
This cannot be done under Linq to SQL; it does not support this inheritance strategy.
What I was thinking of doing is creating an interface (IEvent) that includes the base, shared functionality (for example, a property "Description" which is implemented in each subclass). I thought I'd then add a property to the superclass, for example SharedStuff, that would return either an AdvertisingEvent or a PrizeEvent as an IEvent. From WPF I could then bind to MyEvent.SharedStuff.Description.
Does this make sense? Is there a better way to do this?
BTW: I'd rather not have to move to Linq to Entities.
You could always use interface inheritance to accomplish this. Instead of working with subclasses, have your IEvent interface, with the IPrizeEvent and IAdvertisingEvent interfaces deriving from that.
Then, work in terms of the interfaces.
You could then have separate implementations that don't derive from each other, but implement the appropriate interfaces.
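A sketch of that shape (member names are illustrative):
// Interface inheritance instead of class inheritance.
public interface IEvent
{
    string Description { get; }
}

public interface IAdvertisingEvent : IEvent
{
    // advertising-specific members
}

public interface IPrizeEvent : IEvent
{
    // prize-specific members
}

// The LINQ to SQL classes stay unrelated to each other; since the designer-generated
// classes are partial, the interfaces can be attached in separate files, e.g.:
// public partial class AdvertisingEvent : IAdvertisingEvent { ... }
// public partial class PrizeEvent : IPrizeEvent { ... }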
Also, the nice side effect of working with interface inheritance in LINQ-to-SQL is that if you have methods that operate on IQueryable<T> where the constraint on T is IEvent, you can do something like this:
// Get an IQueryable<AdvertisingEvent>
IQueryable<AdvertisingEvent> events = ...;

// An extension method (it has to live in a static class) that works on anything of type IEvent.
static IQueryable<T> FilteredEvents<T>(this IQueryable<T> query, string description)
    where T : class, IEvent
{
    // Return only the events whose description matches.
    return query.Where(e => e.Description == description);
}
And then make the call like this:
events = events.FilteredEvents("my description");

LINQ To SQL entity objects as domain objects

Clearly separation of concerns is a desirable trait in our code and the first obvious step most people take is to separate data access from presentation. In my situation, LINQ To SQL is being used within data access objects for the data access.
My question is, where should the use of the entity object stop? To clarify, I could pass the entity objects up to the domain layer but I feel as though an entity object is more than just a data object - it's like passing a bit of the DAL up to the next layer too.
Let's say I have a UserDAL class, should it expose an entity User object to the domain when a method GetByID() is called, or should it spit out a plain data object purely for storing the data and nothing more? (seems like wasteful duplication in this case)
What have you guys done in this same situation? Is there an alternative method to this?
Hope that wasn't too vague.
Thanks a lot,
Martin.
I return IQueryable of POCOs from my DAL (which uses LINQ2SQL), so no Linq entity object ever leaves the DAL. These POCOs are returned to the service and UI layers, and are also used to pass data back into the DAL for processing. Linq handles this very well:
IQueryable<MyObjects.Product> products = from p in linqDataContext.Products
                                         select new MyObjects.Product //POCO
                                         {
                                             ProductID = p.ProductID
                                         };
return products;
For most projects, we use LINQ to SQL entities as our business objects.
The LINQ to SQL designer allows you to control the accessibility of the classes and properties that it generates, so you can restrict access to anything that would allow the consumer to violate the business rules and provide suitable public alternatives (that respect the business rules) in partial classes.
There's even an article on implementing your business logic this way on the MSDN.
This saves you from writing a lot of tedious boilerplate code and you can even make your entities serialisable if you want to return them from a web service.
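As a rough illustration of that approach (the Order class, its DiscountPercent property, and the assumption that the generated setter was made internal in the designer are all hypothetical):
using System;

// Business rules layered onto the designer-generated entity via a partial class.
public partial class Order
{
    // Public, rule-respecting alternative to the (now internal) generated setter.
    public void ApplyDiscount(decimal percent)
    {
        if (percent < 0m || percent > 100m)
        {
            throw new ArgumentOutOfRangeException("percent", "Discount must be between 0 and 100.");
        }

        this.DiscountPercent = percent; // assumed generated property
    }
}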
Whether or not you create a separate layer for the business logic really depends on the size of your project (with larger projects typically having greater variation between the business logic and data access layers).
I believe LINQ to Entities attempts to provide a one-stop solution to this conundrum by maintaining two separate models (a conceptual schema for your business logic and a storage schema for your data access).
I personally don't like my entities to spread across the layers. My DAL returns POCOs (of course, it often means extra work, but I find this much cleaner - maybe this will be simpler in the next .NET version ;-)).
The question is not so simple, and there are lots of different schools of thought on the subject (I keep asking myself the same question you are).
Maybe you could take a look at the MVC Storefront sample app: I like the essence of the concept (especially the mapping that occurs in the data layer).
Hope this helps.
There is a similar post here, however, I see your question is more about what you should do, rather than how you should do it.
In small applications I find a second POCO implementation to be wasteful; in larger applications (particularly those that implement web services) the POCO object (usually a Data Transfer Object) is useful.
If your app falls into the latter case, you may want to look at ADO.NET Data Services.
Hope that helps!
I have actually struggled with this as well. Using plain vanilla LINQ to SQL, I quickly abandoned the DBML tooling, because it bound the entities too tightly to the DAL. I was striving for a higher level of persistence ignorance, although Microsoft didn't make it very easy.
What I ended up doing was hand-writing the persistence ignorance layer, by having the DAL classes inherit from my POCOs. The inherited objects exposed the same properties as the POCOs they inherit from, so inside the persistence ignorance layer I could use attributes to map to the objects. The caller could then cast the inherited object back to its base type, or have the DAL do that for them. I preferred the latter, because it lessened the amount of casting that needed to be done. Granted, this was a primarily read-only implementation, so I would have to revisit it for more complex update scenarios.
The amount of manual coding for this is rather large, because I also have to manually maintain (after coding, to begin with) the context and provider for each data source, on top of the object inheritance and mappings. If this project was being deprecated, I would definitely move to a more robust solution.
Looking forward to the Entity Framework, persistence ignorance is a commonly requested feature according to the design blogs for the EF team. In the meantime, if you decide to go the EF route, you could always look at a pre-rolled persistence ignorance tool, like the EFPocoAdapter project on MSDN, to help.
I use a custom LinqToSQL generator, built upon one I found on the Internet, in place of the default MSLinqToSQLGenerator.
To make my upper layers independent of such Linq objects, I create interfaces to represent each one of them and then use such interfaces in these layers.
Example:
public interface IConcept
{
    long Code { get; set; }
    string Name { get; set; }
    bool IsDefault { get; set; }
}

public partial class Concept : IConcept { }

[Table(Name="dbo.Concepts")]
public partial class Concept
{
    private long _Code;
    private string _Name;
    private bool _IsDefault;

    partial void OnCreated();

    public Concept() { OnCreated(); }

    [Column(Storage="_Code", DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true)]
    public long Code
    {
        //***
    }

    [Column(Storage="_Name", DbType="VarChar(50) NOT NULL")]
    public string Name
    {
        //***
    }

    [Column(Storage="_IsDefault", DbType="Bit NOT NULL")]
    public bool IsDefault
    {
        //***
    }
}
Of course there is much more than this, but that's the idea.
Please keep in mind that Linq to SQL is not a forward looking technology. It was released, it's fun to play with, but Microsoft is not taking it anywhere. I have a feeling it won't be supported forever either. Take a look at the Entity Framework (EF) by Microsoft which incorporates some of the Linq to SQL goodness.
