I'm wondering if it is possible to speed up the first query made with EF Code First.
I've made a small test program with one entity containing two fields; the first query takes 2.2 seconds, while the second query (which is exactly the same) takes 0.006 seconds.
I am already pre-compiling the views, so that won't help here.
I think the problem is that it takes some time to construct the model in memory, but should it take that long? And is there a way to pre-compile this model, like there is with the views?
The article "Squash Entity Framework startup time with pre-compiled views" describes a solution in detail.
It involves using the "Optimize Entity Data Model" option in Entity Framework Power Tools to generate a pre-compiled .Views class file.
When you make your first query, EF initializes itself, and that takes some time. I don't think there's much you can do to speed up EF's infrastructure initialization, but if what you are really looking for is to speed up the first query you make, rather than EF's initialization itself, you can try to force EF to initialize before running your first query:
using (var db = new MyContext())
{
    // Forces EF's startup work (model building, view generation)
    // to happen now rather than on the first real query
    db.Database.Initialize(force: true);
}
Related
I've been doing some performance stats on an ASP.NET 4.5 Webforms app, which seems a bit sluggish on initial startup after a fresh deployment.
One of the points I noticed is that creating the AutoMapper maps does take some time.
Since those maps are only used rather rarely, I was wondering if I could possibly "delay" creating those maps until the first time they're needed - sort of a "lazy initialization".
In that case, I would need some "non-destructive" way (i.e. one that doesn't throw an exception) of checking whether or not a given map exists. Is there something like that in AutoMapper?
Thanks!
You can use FindTypeMapFor:
if (Mapper.FindTypeMapFor<TSource, TDestination>() == null)
    Mapper.CreateMap<TSource, TDestination>();
// Map object
There's also an overload that takes type parameters.
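If you want to go a step further than checking on every call, the check-and-create can be wrapped in a Lazy&lt;T&gt; so it runs exactly once and costs nothing at startup. This is only a sketch against the old static Mapper API that the answer above uses; the CustomerDto/CustomerViewModel types are made up for illustration:

```csharp
using System;
using AutoMapper;

// Hypothetical source and destination types.
public class CustomerDto { public string Name { get; set; } }
public class CustomerViewModel { public string Name { get; set; } }

public static class LazyMaps
{
    // The factory runs the first time Value is read, and never again,
    // so map creation is deferred until the map is actually needed.
    private static readonly Lazy<bool> customerMap = new Lazy<bool>(() =>
    {
        if (Mapper.FindTypeMapFor<CustomerDto, CustomerViewModel>() == null)
            Mapper.CreateMap<CustomerDto, CustomerViewModel>();
        return true;
    });

    public static CustomerViewModel Map(CustomerDto source)
    {
        var _ = customerMap.Value; // triggers one-time map creation
        return Mapper.Map<CustomerDto, CustomerViewModel>(source);
    }
}
```

Lazy&lt;T&gt; is also thread-safe by default, which matters if two requests hit the mapping code at the same time.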
I am using MVC3, .NET 4.5, C#, EF 5.0, and MSSQL 2008 R2.
My web application can take between 30 and 60 seconds to warm up, i.e. to get through the first page load. Subsequent page loads are very quick.
I have done a little more analysis, using DotTrace.
I have discovered that some of my LINQ queries, particularly the .Count() and .Any() ones, take a long time to execute the first time, e.g.:
if (!this.db.Class.OfType<RPRC_Product>().Any(c => c.ReportId == (int?)myReportId))
takes around 12 secs on first use.
Can you provide pointers on how I can get these times down? I have heard about precompilation for EF queries.
I have a feeling the answer lies in using precompilation rather than altering this specific query.
Many thanks in advance
EDIT
Just read up on EF5's auto-compile feature: second time round, compiled queries are in the cache, so first time round compilation to EF's intermediate form is still required. Also read up on pre-generation of views, which may help generally as well?
It is exactly as you said: you do not have to worry about compiling queries, as they are cached automatically by EF. Pre-generated views might help you, or they might not; the automatic tools for scaffolding them do not generate all of the required views, and the missing ones still need to be generated at run time.
When I developed my solution and hit the same problem as you, the only thing that helped was to simplify the query. It turns out that if the first query to run is very complex and involves complicated joins, EF needs to generate many views to execute it. So I simplified the queries: instead of loading whole joined entities, I loaded only ids (grouped, filtered out, or whatever), and then when I needed to load single entities I loaded them by id, one by one. This allowed me to avoid the long execution time of my first query.
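The "ids first" approach described above could look roughly like this. The entity names, filter, and context are hypothetical placeholders, not the poster's actual model:

```csharp
// Instead of one complex join that forces EF to generate many views
// on first use, fetch only the keys first...
var orderIds = db.Orders
    .Where(o => o.Customer.Region == "EU")   // hypothetical filter
    .Select(o => o.Id)
    .ToList();

// ...then load the full entities individually (or in small batches).
foreach (var id in orderIds)
{
    var order = db.Orders.Single(o => o.Id == id);
    // process order
}
```

The trade-off is more round trips to the database, so this only pays off when the one-time view-generation cost of the complex query dominates.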
So I'm using Web API to expose data services. Initially I created my DbContext as a static member, and each time I open up my project under IIS Express, the memory balloons to over 100MB. I understand that using a static context isn't recommended, per the accepted answer here:
Entity framework context as static
So I went ahead and converted my application to using regular non-static dbcontext and included a dispose method on my api:
protected override void Dispose(Boolean disposing)
{
    if (provider.Context != null)
    {
        provider.Context.Dispose();
        provider = null;
    }
    base.Dispose(disposing);
}
Now every time I make a call, it goes through this method and disposes the context. I open the application, and it still balloons to 100MB; each time I make a call, I watch the memory of my IIS Express process, and it keeps going up. It's not coming back down after the dispose; it keeps increasing to almost 200MB+.
So static or not, memory explodes whenever I use it.
Initially I thought it was my Web API that was causing it, until I removed all my services and just created the EF object in my API (I'm using BreezeJS, so this code is trivial; the actual implementation is down below, but it makes no difference to memory consumption):
private DistributorLocationEntities context = new DistributorLocationEntities();
And bam, 110MB immediately.
Are there any helpful tips and tweaks on how I can release memory when I use it? Should I add a garbage collect to my Dispose()? Are there any pitfalls to allocating and deallocating memory rapidly like that? For example, I make a call to the service on each keystroke to implement an "autocomplete" feature.
I'm also not certain what will happen if I put this in production and we have dozens of users accessing the db; I wouldn't want the memory to grow to 1 or 2GB without ever being released.
Side note: All my data services for now are searches, so there are no save changes or updates, though there can be later on though. Also, I don't return any linq queries as an array or enumerable, they remain as queryables throughout the service call.
One more thing: I do use BreezeJS, so I wrap up my context as such:
readonly EFContextProvider<DistributorLocationEntities> provider = new EFContextProvider<DistributorLocationEntities>();
and the tidbits that goes along with this:
Doc for Breeze's EFContextProvider
ProxyCreationEnabled = false
LazyLoadingEnabled = false
IDisposable is not implemented
but I still dispose the context anyway, which makes no difference.
I don't know what you're doing. I do know that you should not have any static resources of any kind in your Web API controllers (breeze-flavored or not).
I strongly suspect you've violated that rule.
Adding a Dispose method makes no difference if the object is never disposed ... which it won't be if it is held in a static variable.
I do not believe that Breeze has any role in your problem whatsoever. You've already shown that it doesn't.
I suggest you start from a clean slate: forget Breeze for now, and get a simple Web API controller working that creates a DbContext per request. When you've figured that out, proceed to add some Breeze.
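A minimal per-request controller along those lines might look like the sketch below. The entity set and controller names are placeholders; only the DistributorLocationEntities context name comes from the question:

```csharp
using System.Linq;
using System.Web.Http;

public class ProductsController : ApiController
{
    // One context per request: created with the controller, disposed
    // by Web API when the controller is disposed. No statics anywhere.
    private readonly DistributorLocationEntities db =
        new DistributorLocationEntities();

    public IQueryable<Product> Get()    // Product is a hypothetical entity
    {
        return db.Products;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            db.Dispose();               // releases the context with the request
        base.Dispose(disposing);
    }
}
```

Because the context lives only as long as the request, its change tracker and cached entities become collectible as soon as the response is written, which is exactly the behavior the static version prevents.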
As mentioned in Ward's comment, statics are a big no-no, so I spent time moving my EF objects out of statics. The Dispose method didn't really help either.
I gave this article a good read:
http://msdn.microsoft.com/en-us/data/hh949853.aspx
There are quite a few performance options EF provides that don't come out of the box. So here are a few things I've done:
Added pre-generated views to EF using the T4 templates for generating views for EF4/EF5. The nice thing about this is that it abstracts away from the DB and pre-generates the views to decrease model load time.
Next, I read this post on Contains in EF: Why does the Contains() operator degrade Entity Framework's performance so dramatically?. I saw an attractive answer there: converting my IEnumerable.Contains calls into HashSet.Contains. This boosted my performance considerably.
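The List vs HashSet difference is easy to see even outside of EF: List&lt;T&gt;.Contains is a linear scan while HashSet&lt;T&gt;.Contains is a hash lookup, so repeated membership tests get dramatically cheaper. (Inside a LINQ-to-Entities Where clause the cost is dominated by query translation, but the same containment logic applies to the in-memory half of a query.) A tiny self-contained illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var ids = Enumerable.Range(0, 100_000).ToList();
var idSet = new HashSet<int>(ids);

// Both answer the same membership question...
bool inList = ids.Contains(99_999);    // O(n): scans the list
bool inSet  = idSet.Contains(99_999);  // O(1): hashes the key

Console.WriteLine(inList && inSet);    // both find the value
```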
Finally, reading the Microsoft article, I realized there is an AsNoTracking() extension you can hook onto a query; this turns off change tracking for the entities returned by that specific LINQ query. So you can do something like this:
var query = (from t in db.Context.Table1.AsNoTracking() select new { ... });
Something I didn't have to worry about was compiling queries: EF5 does it automatically, so you don't have to call CompiledQuery.Compile() yourself. Also, if you're using EF6 alpha 2, you don't need to worry about Contains or pre-generated views, since both issues are fixed in that version.
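For reference, the manual compilation that EF5 makes unnecessary looked like this. CompiledQuery works against ObjectContext-derived contexts (not DbContext), and the context and entity names below are hypothetical:

```csharp
using System;
using System.Data.Objects;   // CompiledQuery lives here in EF4/EF5
using System.Linq;

public static class Queries
{
    // Compiled once per AppDomain; every later call reuses the
    // translated query plan instead of re-translating the LINQ tree.
    public static readonly Func<MyObjectContext, int, IQueryable<Product>>
        ProductsByReport = CompiledQuery.Compile(
            (MyObjectContext ctx, int reportId) =>
                ctx.Products.Where(p => p.ReportId == reportId));
}

// usage: var items = Queries.ProductsByReport(context, 42).ToList();
```

With EF5's auto-compilation the same caching happens transparently, which is why the DbContext API never gained a CompiledQuery equivalent.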
So when I start up my application, the first query is a "cold" execution and memory goes high; after recycling IIS, memory is cut in half and subsequent queries use "warm" execution. So that explains a lot!
I am in the process of upgrading an application from EF1 to EF4.1
I created a DbContext and a set of POCOs using the "ADO.NET DbContext Generator" templates.
When I query the generated DbContext the database part of the query takes 4ms to execute (validated with EF Profiler). And then it takes the context about 40 seconds (in words: FORTY!) to do whatever it does before it returns the result to the application.
EF1 handles the same query in less than 2 seconds.
Turning off AutoDetectChanges, LazyLoading and ProxyGeneration wins me 2-3 seconds.
When I use the AsNoTracking() extension method I am able to reduce the total execution time to about 3 seconds.
That indicates that ChangeTracking is the culprit.
But ChangeTracking is what I need. I must be able to eventually persist all changes without having to handpick which entities were modified.
Any ideas how I could solve that performance issue?
Is the technique at the end of this documentation useful? Alternatively, I've avoided many of the performance pitfalls by using a fluent interface to declaratively state which entities in a given transaction definitely won't change vs. which might change (immutable vs. mutable). For example, if the entities I am saving are aggregate roots whose root or children refer to "refdata" items, then this heuristic prevents many writes, because the immutable items don't need to be tracked. The mutable items all get written without a check (a weakness, one which may or may not be acceptable).
I'm using this with a generic repository pattern precisely because I don't want to track changes or implement a specific strategy for each case. If that's not enough, perhaps rolling your own change tracking outside of the context and adding entities in as needed will work.
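Rolling your own tracking outside of the context can be as simple as reading with AsNoTracking and re-attaching only what actually changed just before saving. This is a sketch against the DbContext API from the question; the Orders set and Status property are placeholders:

```csharp
// Read without the change tracker, so the query stays fast...
var order = db.Orders.AsNoTracking().Single(o => o.Id == orderId);

// ...mutate the detached entity outside the context...
order.Status = "Shipped";

// ...then attach it and mark it modified only when persisting.
db.Orders.Attach(order);
db.Entry(order).State = System.Data.EntityState.Modified;
db.SaveChanges();
```

The cost is that EF writes every column of the attached entity (it has no original values to diff against), which is the same "written without check" weakness the answer above mentions.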
Without seeing the query, I can't say for sure what the problem might be. Could this be related?
Why does the Contains() operator degrade Entity Framework's performance so dramatically?
Depending on the LINQ operators being used, it appears that EF has a tough time converting some queries to SQL. Maybe you're running up against a similar situation here.
I'm considering spending time learning and using LINQ to SQL, but after years of best practices advising NOT to embed SQL, I'm having a hard time changing paradigms.
Why does it seem accepted now to embed queries in compiled code? It seems almost a step backwards to me in some ways.
Has anyone had issues with fix query / compile / deploy cycle after switching to LINQ?
I think I still might wait for the finished Entity Framework.
What do you think?
The advantage of LINQ to SQL is that it doesn't really embed queries in compiled code, not really. The LINQ statement means that your .NET code embeds the logic required to build the SQL statement, not the raw SQL itself.
It really makes a lot of sense to have .NET code that converts directly to the SQL to execute, rather than a long list of sprocs with associated documentation. The LINQ way is much easier to maintain and improve.
I don't think I'd switch an existing project to LINQ; really it's a replacement for the entire data layer, and it can change the way all access to that layer is done. Unless you're switching from a very similar model, the cost is going to be far too high for any potential gains.
LINQ to SQL's real power is in quickly creating new applications: it allows you to very rapidly create the data-layer code.
I understand your point; this does indeed seem like a bit of a backward step...
Actually, I would probably steer away from LINQ to SQL and look more at LINQ to Entities. Your entities model your conceptual data model, and I personally feel more comfortable embedding queries against a conceptual model in my code. The actual physical model is abstracted away from you by the Entity Framework.
This link (excuse the pun) discusses LINQ to Entities and the Entity Framework: http://msdn.microsoft.com/en-us/library/bb386992.aspx
This is an interesting article discussing the pros and cons of both approaches: http://dotnetaddict.dotnetdevelopersjournal.com/adoef_vs_linqsql.htm
Edit: Another thought: if you don't want to wait for EF, have a look at NHibernate; you can use LINQ with that too. See http://www.hookedonlinq.com/LINQToNHibernate.ashx
You need to think of LINQ to SQL as an abstraction above writing SQL directly yourself. If you can get your head around this then you’ve made a step in the right direction. You also need to let go of some long held beliefs such as compiled sprocs are always faster and SQL accounts shouldn’t have data reader / writer privileges.
I’ve found that it’s possible to begin gradually moving existing solutions towards LINQ to SQL so long as there is a clear DAL in place and you’re just changing the implementation without affecting the contract it may have with consuming code. Reference lists are an easy candidate as they’re low impact, read only sets of data. The main thing you need to remain conscious of if retrofitting is potential ambiguous class names if you’ve already hand coded them to model the database.
With the value of hindsight in bringing LINQ to SQL into a large enterprise (since CTP days), I’d do it again in a heartbeat. It’s not perfect and there are issues but there are enormous benefits particularly when it comes to development speed and maintainability. It’s a new paradigm and is definitely, definitely a step forward.
There is an implementation of LINQ to SQL not only for SQL Server databases, so non-SQL Server developers can also take advantage of this efficient ORM.
We have already added support for query-level LoadWith() and extended the error processing.
Also we plan to support all three inheritance models (TPH, TPT, TPC) and key field generation.
You can find the list of supported databases here
I don't think of it as embedding SQL in your code any more than embedding a stored proc name in your code is. More often than not, a change to your proc involves a change to your code anyway. For example, you usually need to add a new in/out parameter or update a getter/setter method to reference a new column.
What it does is remove a lot of the legwork of writing twice as much code to align the properties and methods in your code with the procs and columns in your DB.