Update a single property of an entity with EF Core - asp.net-web-api

var user = _context.Users.Single(u => u.Id == userId);
user.AssignedInfo = _mapper.Map<AssignedInfo>(assignedInfoDTO);
_context.SaveChanges();
In this case with EF, after the entity is loaded with Single, will all the other properties of the user entity be overwritten on save, or are they skipped so that only the AssignedInfo property is updated?
I ask because there is a good chance other users may update those other columns from other endpoints in the API, so I don't want this particular endpoint to overwrite any property other than AssignedInfo.
Does it work that way, or does it update the entire row with all the fields obtained in the query? I only need to update that one property, and this is the only place where it is updated.

When you use the Single method, EF Core starts tracking the selected row, i.e. the user with the given userId. After you change properties on that tracked user and call SaveChanges(), EF Core generates a query that only updates the fields that have actually changed, which in your case is AssignedInfo. So the generated query will be something like:
UPDATE [Users] SET [AssignedInfo] = @p0
WHERE [Id] = @p1;
This is a parameterized query (which protects against SQL injection), so:
@p0 = _mapper.Map<AssignedInfo>(assignedInfoDTO)
and
@p1 = userId
By the way, you can see the queries EF Core generates just by adding
"Microsoft.EntityFrameworkCore.Database.Command": "Information"
to the LogLevel section under Logging in appsettings.Development.json, so in development mode the queries are shown in the console window or the VS Output window.
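For reference, a minimal appsettings.Development.json sketch with that entry added (the other entries are just the usual template defaults and may differ in your project):
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning",
      "Microsoft.EntityFrameworkCore.Database.Command": "Information"
    }
  }
}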

Related

Spring data update just one entity field

I'm new to Spring. When I try to update just one field of an entity, I noticed in the logs that Hibernate performs two queries: before the update it does a SELECT of all fields. Is that OK? Why does Hibernate perform that SELECT? How can I update a field with just one UPDATE query? Additionally, when I tried to update a single title in an entity that has another nested entity, I ended up with a bunch of SELECTs. I think that's bad for performance, or am I wrong?
Something s = somethingRepository.findById(id);
s.setField1(someData);
somethingRepository.save(s);
On the internet I found a solution: make a custom query with @Modifying and @Query("UPDATE …"), but that way I need to write a custom query for every single field. Is there a better solution?
As per the source code you have pasted in the question:
Something s = somethingRepository.findById(id);
s.setField1(someData);
somethingRepository.save(s);
If the entity Something that you are loading does not already exist in the Hibernate first-level cache, Hibernate will make one SELECT call.
And because you then change the field1 value, it will make another UPDATE call.
It does not matter whether you call save or not, because Hibernate's dirty checking will ensure that all changes to the managed entity are written back.
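As an illustration of that dirty-checking behavior, here is a minimal sketch (the service method is hypothetical; note that in Spring Data JPA 2.x findById returns an Optional):
@Transactional
public void updateField1(UUID id, String someData) {
    // the SELECT happens here (unless the entity is already in the persistence context)
    Something s = somethingRepository.findById(id)
            .orElseThrow(() -> new IllegalArgumentException("Something not found: " + id));
    // no explicit save() needed inside a transaction: dirty checking flushes a single UPDATE at commit
    s.setField1(someData);
}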
Otherwise, you can use a custom repository method with @Modifying and a JPQL query with named parameters, which is more readable than positional parameters like ?1:
@Modifying
@Query("UPDATE Something s SET s.field = :fieldValue WHERE s.id = :id")
void updateField(@Param("fieldValue") String fieldValue, @Param("id") UUID id);
Regarding the multiple calls you see "when I tried to update a single title in an entity that has another nested entity": that depends on how you have modelled the relationships between the entities. It can only be answered accurately if you share the entities and their relationships.
Because internally the repository.save() method does an upsert: if you step into the implementation you will see that it first checks whether the entity is already present in the database and, based on that, performs either an insert or an update. If you don't want the SELECT query to run, you can use a native query on the JPA repository. You can do something like this:
@Modifying
@Transactional
@Query(value = "UPDATE <tableName> SET <columnName> = ?1 WHERE <condition>", nativeQuery = true)
void updateSomething(String value);

Does Entity Framework automatically load all of my data (not a reference and lazy load is off)?

I have a database-first Entity Framework project. For every entity that I add, a collection is added to the DbContext for that entity. I explicitly set LazyLoadingEnabled = false in the DbContext constructor. If I break into the following code and check the count of CustomerDepartments, I get the total row count of the table. If I'm just adding a new record, I expect the count to be 0 before the Add and 1 after. I'm using this in a stateless environment, so loading the whole table just to add a record seems absurd. What am I doing wrong?
using (Model.SupportEntities support = new Model.SupportEntities(_state.Credentials, _handler.ReadWriteConnectionString))
{
    Model.CustomerDepartment department = Json.JsonConvert.DeserializeObject<Model.CustomerDepartment>(_insertObject);
    support.CustomerDepartments.Add(department);
    support.SaveChanges();
    _state.ReturnNewIdAsJson(department.CustomerDepartmentID);
}
It seems you have misinterpreted how DbContext and DbSet work.
It may be best to get hold of a tool for logging Entity Framework SQL calls; try Clutch.Diagnostics.EntityFramework.
When you call Count() on a DbSet<T>, Entity Framework runs the following query:
SELECT COUNT(*) FROM TableName;
But it does not load the whole table.
The actual call you want for the behavior you expected is either
support.CustomerDepartments.Local.Count;
OR
support.ChangeTracker.Entries<Model.CustomerDepartment>().Count()
These will NOT hit the database.
You have to remember that DbSet is an abstraction for the Database table, so calling Count() on it should tell you how many rows there are in the table.
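To make the difference concrete, here is a small sketch using the types from the question (the counts in the comments are what you should observe, assuming the table already holds some rows):
using (var support = new Model.SupportEntities(_state.Credentials, _handler.ReadWriteConnectionString))
{
    var department = new Model.CustomerDepartment();
    support.CustomerDepartments.Add(department);

    // Local only counts entities tracked by this context: 1 here, and no database call is made.
    int trackedCount = support.CustomerDepartments.Local.Count;

    // Count() on the DbSet translates to SELECT COUNT(*) and returns the table's total row count.
    int tableCount = support.CustomerDepartments.Count();
}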
BTW/FYI: the convention is to name your DbContext something like SupportContext; classes or namespaces named Model usually indicate your POCOs.

Entity Framework Caching?

I have a project that is using Entity Framework 4.1. The model has been modeled from a database on a SQL Server 2008 R2 machine.
I have a table that has 3 nvarchar columns, 2 bit columns, and 1 datetime column.
The application is using a repository pattern and dependency injection. All the items in the DI container are set to Transient, except for the context which is set to Hierarchical.
Now, when I run a lambda query against the model in entity framework, it pulls my data as expected. For example:
var someData = _dataRepository.Get(data => data.Name == name && data.IsEnabled);
Basically, the Get method of the repository is a wrapper around a .Where(filter) LINQ expression.
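Presumably the wrapper looks something like this (a sketch based on that description; the actual repository internals are an assumption):
public IEnumerable<TData> Get(Expression<Func<TData, bool>> filter)
{
    // simply forwards the predicate to the underlying set
    return _context.Set<TData>().Where(filter);
}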
So the issue is this:
When I update the bit column or the datetime column in the database (either via SSMS or in code using .SaveChanges) and re-query the data, the returned data is what I expect. However, whenever I update one of the nvarchar columns (SSMS/.SaveChanges) and re-query, it returns the "old" data the nvarchar column contained in the previous query -- not the updated data.
Does Entity Framework have some inherent caching that is causing this to happen?

LINQ and Devexpress Grid Datasource

I have a DevExpress grid (DevExpress.XtraGrid.GridControl 8.2) with a datasource set at runtime like so:
private DataContext db = new DataContext("connection string");

gridControl.DataSource = from t in db.sometable
                         select new
                         {
                             Field1 = t.Name,
                             Field2 = t.Email,
                             Field3 = t.City
                         };
This means that the view has no idea what the data is going to look like at design time. I like being able to set a LINQ query as the datasource, but I'd also like to specify what the view will look like at design time.
Is there a way that I can tell the view that it will be using this query?
Would the best solution be to create a small object for holding the contents of what gets returned from this query?
You will have to define a class for the return type of your LINQ query if you want the DevExpress grid to automatically pick up the columns for the data source. At design time, the WinForms binding engine uses reflection (or ICustomTypeDescriptor, if the source implements it) to automatically discover the properties of the data source, their types, and so on. The DevExpress grid uses this underlying binding mechanism and automatically generates the columns for you at design time based on that property information. In your case, however, you're creating an anonymous type in your LINQ query, which is not known or available at design time, so the DevExpress grid cannot generate the columns automatically. As @Dennis mentioned, you can manually add columns to the grid in the designer; you need to make sure that the FieldName, I believe, of each column matches the corresponding property name on your data source.
If you go with a class, you may also want to implement INotifyPropertyChanged to make the grid aware of data changes in the data source.
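For example, a minimal row class along those lines (the class name is arbitrary and only Field1 shows the notification pattern; requires using System.ComponentModel):
public class GridRow : INotifyPropertyChanged
{
    private string _field1;

    public string Field1
    {
        get { return _field1; }
        set
        {
            _field1 = value;
            OnPropertyChanged("Field1");
        }
    }

    public string Field2 { get; set; }
    public string Field3 { get; set; }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}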
IIRC, the XtraGrid requires that the data source implement a data-binding interface (i.e. IBindingList<T>) for it to auto-generate columns, and the items should implement INotifyPropertyChanged.
With that in mind: if you do create columns via the wizard at design time or in code at runtime, as long as you set the FieldName property of the columns, they will show the data from the datasource with a property of that name.
Notes:
I think it must be a property, auto-implemented or not, as I've found that it sometimes won't bind to public fields.
The property must be assigned something (default or otherwise).
There must be a parameterless constructor for the item.
The fields are known at design time (Field1, Field2, Field3).
According to DevExpress you can use IList, IListSource, ITypedList or IBindingList. The difference between them is whether you can add new rows and whether changes are reflected in the control.
So you can use ToList():
private DataContext db = new DataContext("connection string");

gridControl.DataSource = (from t in db.sometable
                          select new
                          {
                              Field1 = t.Name,
                              Field2 = t.Email,
                              Field3 = t.City
                          }).ToList();
Note: I tested it using DevExpress 10.1, but if it does use the WinForms binding then it should still work according to MSDN.
I haven't worked with the DevExpress grid, but I've done a lot with the .NET DataGridView.
Does the DevExpress grid have the same functionality as the .NET DataGridView that auto generates columns?
If so, then it should display whatever fields are found in your query and will use Field1, Field2 and Field3 (from your example code) as column names.
Or just turn off the auto generate column feature and add the columns at design time. As long as they match what your query returns it should work fine.

Using LINQ with stored procedure that returns multiple instances of the same entity per row

Our development policy dictates that all database accesses are made via stored procedures, and this is creating an issue when using LINQ.
The scenario discussed below has been somewhat simplified, in order to make the explanation easier.
Consider a database that has 2 tables.
Orders (OrderID (PK), InvoiceAddressID (FK), DeliveryAddressID (FK))
Addresses (AddressID (PK), Street, ZipCode)
The resultset returned by the stored procedure has to rename the address related columns, so that the invoice and delivery addresses are distinct from each other.
OrderID InvAddrID DelAddrID InvStreet DelStreet InvZipCode DelZipCode
1 27 46 Main St Back St abc123 xyz789
This, however, means that LINQ has no idea what to do with these columns in the resultset, as they no longer match the property names in the Address entity.
The frustrating thing about this is that there seems to be no way to define which resultset columns map to which Entity properties, even though it is possible (to a certain extent) to map entity properties to stored procedure parameters for the insert/update operations.
Has anybody else had the same issue?
I'd imagine that this would be a relatively common scenario from a schema point of view, but the stored procedure seems to be the key factor here.
Have you considered creating a view like the one below for the stored procedure to select from? It would add complexity, but it would allow LINQ to see the entity the way you want.
Create view OrderAddress as
Select o.OrderID
      ,i.AddressID as InvID
      ,d.AddressID as DelID
      ...
  from Orders o
  left join Addresses i
    on o.InvoiceAddressID = i.AddressID
  left join Addresses d
    on o.DeliveryAddressID = d.AddressID
LINQ is a bit fussy about querying data; it wants the schema to match. I suspect you're going to have to bring that back into an automatically generated type, and do the mapping to your entity type afterwards in LINQ to Objects (i.e. after AsEnumerable() or similar) - as it doesn't like you creating instances of the mapped entities manually inside a query.
Actually, I would recommend challenging the requirement in one respect: rather than SPs, consider using UDFs to query data; they work similarly in terms of being owned by the database, but they are composable at the server (paging, sorting, joinable, etc).
(this bit is a bit random - take it with a pinch of salt)
UDFs can be associated with entity types if the schema matches, so another option (I haven't tried it) would be to have a GetAddress(id) udf, and a "main" udf, and join them:
var qry = from row in ctx.MainUdf(id)
          select new
          {
              Order = ctx.GetOrder(row.OrderId),
              InvoiceAddress = ctx.GetAddress(row.InvoiceAddressId),
              DeliveryAddress = ctx.GetAddress(row.DeliveryAddressId)
          };
(where the main udf just returns the ids - actually, you might end up joining to the other udfs as well, making it even worse).
or something - might be too messy for serious consideration, though.
If you know exactly what columns your result set will include, you should be able to create a new entity type that has properties for each column in the result set. Rather than trying to pack the data into an Order, for example, you can pack it into an OrderWithAddresses, which has exactly the structure your stored procedure would expect. If you're using LINQ to Entities, you should even be able to indicate in your .edmx file that an OrderWithAddresses is an Order with two additional properties. In LINQ to SQL you will have to specify all of the columns as if it were an entirely unrelated data type.
If your columns get generated dynamically by the stored procedure, you will need to try a different approach: Create a new stored procedure that only pulls data from the Orders table, and one that only pulls data from the addresses table. Set up your LINQ mapping to use these stored procedures instead. (Of course, the only reason you're using stored procs is to comply with your company policy). Then, use LINQ to join these data. It should be only slightly less efficient, but it will more appropriately reflect the actual structure of your data, which I think is better programming practice.
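A sketch of what such a result type might look like, with properties mirroring the stored procedure's columns from the question (the class name and the exact property types are assumptions):
public class OrderWithAddresses
{
    public int OrderID { get; set; }
    public int InvAddrID { get; set; }
    public int DelAddrID { get; set; }
    public string InvStreet { get; set; }
    public string DelStreet { get; set; }
    public string InvZipCode { get; set; }
    public string DelZipCode { get; set; }
}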
I think I understand what you're after, but I could be wildly off...
If you mock up classes in a DBML (right-click -> new -> class) that are the same structure as your source tables, you could simply create new objects based on what is read from the stored procedure. Using LINQ to objects, you could still query your selection. It's more code, but it's not that hard to do. For example, mock up your DBML like this:
Pay attention to the associations (screenshot: http://geeksharp.com/screens/orders-dbml.png)
Make sure you pay attention to the associations I added. You can expand "Parent Property" and change the name of those associations to "InvoiceAddress" and "DeliveryAddress." I also changed the child property names to "InvoiceOrders" and "DeliveryOrders" respectively. Notice the stored procedure up top called "usp_GetOrders." Now, with a bit of code, you can map the columns manually. I know it's not ideal, especially if the stored proc doesn't expose every member of each table, but it can get you close:
public List<Order> GetOrders()
{
    // our DBML classes
    List<Order> dbOrders = new List<Order>();

    using (OrderSystemDataContext db = new OrderSystemDataContext())
    {
        // call stored proc
        var spOrders = db.usp_GetOrders();

        foreach (var spOrder in spOrders)
        {
            Order ord = new Order();
            Address invAddr = new Address();
            Address delAddr = new Address();

            // set all the properties
            ord.OrderID = spOrder.OrderID;

            // add the invoice address
            invAddr.AddressID = spOrder.InvAddrID;
            invAddr.Street = spOrder.InvStreet;
            invAddr.ZipCode = spOrder.InvZipCode;
            ord.InvoiceAddress = invAddr;

            // add the delivery address
            delAddr.AddressID = spOrder.DelAddrID;
            delAddr.Street = spOrder.DelStreet;
            delAddr.ZipCode = spOrder.DelZipCode;
            ord.DeliveryAddress = delAddr;

            // add to the collection
            dbOrders.Add(ord);
        }
    }

    // at this point I have a List of orders I can query...
    return dbOrders;
}
Again, I realize this seems cumbersome, but I think the end result is worth a few extra lines of code.
It isn't very efficient at all, but if all else fails you could try making two procedure calls from the application: one to get the invoice address and another to get the delivery address.
