Lazily initializing AutoMapper - performance

I've been doing some performance stats on an ASP.NET 4.5 Webforms app, which seems a bit sluggish on initial startup after a fresh deployment.
One of the points I noticed is that creating the AutoMapper maps does take some time.
Since those maps are only used rather rarely, I was wondering if I could possibly "delay" creating those maps until the first time they're needed - sort of a "lazy initialization".
In that case, I would have to have some "non-destructive" (i.e. without throwing an exception) way of checking whether or not a given map exists - is there something like that in AutoMapper?
Thanks!

You can use FindTypeMapFor:
if (Mapper.FindTypeMapFor<TSource, TDestination>() == null)
    Mapper.CreateMap<TSource, TDestination>();
// Map object
There's also an overload that takes type parameters.
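For the lazy-initialization scenario in the question, a minimal sketch might look like the following, using the static Mapper API from the answer above; the EnsureMap helper and the Order/OrderDto types are made up for illustration:

using AutoMapper;

public static class LazyMaps
{
    public static void EnsureMap<TSource, TDestination>()
    {
        // FindTypeMapFor returns null when no map has been created yet,
        // so the check is non-destructive (no exception is thrown).
        if (Mapper.FindTypeMapFor<TSource, TDestination>() == null)
        {
            Mapper.CreateMap<TSource, TDestination>();
        }
    }
}

// Usage: make sure the map exists right before the first mapping call.
LazyMaps.EnsureMap<OrderDto, Order>();
var order = Mapper.Map<OrderDto, Order>(dto);

In a web app you may want to guard the check-and-create with a lock, since two requests could hit it at the same time.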

Related

Magento register models for performance gain

I have a large application in Magento which is pretty "heavy" in terms of data collections and actions that are performed. Currently I'm trying to optimize the performance, and I noticed that during a regular page load, around 900 Mage::getModel calls are being performed. Each call itself is quite fast, but when 900 calls are made, this affects performance, as together the calls take around 2 seconds.
I was wondering if it's safe to use Magento's registry functionality to register models that have no construct arguments. Basically if a Mage::getModel('sales/quote') is called, after loading the class I intend to register the instance under a unique key (like 'model_register_'.className) and all subsequent Mage::getModel('sales/quote') calls will no longer create a new instance of the model but return the one in the registry, which should improve performance. This, of course, would only be used for calls that have no $constructArguments in the Mage::getModel call.
Has anyone done this before? I am interested in whether this approach is safe or whether it might cause other issues.
Apparently this does not work because the registry keeps a reference to the object, so when retrieving the model from the registry you would not get a "clean" instance.

EF5 (entity framework) memory leak and doesn't release after dispose

So I'm using Web API to expose data services. Initially I created my DbContext as a static member, and each time I open up my project under IIS Express, the process balloons to over 100MB of memory. I understand that using a static context isn't recommended, per the accepted answer here:
Entity framework context as static
So I went ahead and converted my application to using regular non-static dbcontext and included a dispose method on my api:
protected override void Dispose(Boolean disposing)
{
    if (provider.Context != null)
    {
        provider.Context.Dispose();
        provider = null;
    }
    base.Dispose(disposing);
}
Now every call goes through this method and disposes the context. But when I open the application, memory still balloons to over 100MB, and each time I make a call I watch the memory of my IIS Express process keep going up; it never comes back down after the dispose and climbs to almost 200MB+.
So static or not, memory explodes whenever I use it.
Initially I thought it was my Web API services causing it, until I removed all of them and just created the EF object in my API (I'm using BreezeJS, so this code is trivial; the actual implementation is further down, but it makes no difference to memory consumption):
private DistributorLocationEntities context = new DistributorLocationEntities();
And bam, 110MB immediately.
Are there any helpful tips and tweaks on how I can release memory when I use it? Should I add a garbage collection call to my Dispose()? Are there any pitfalls to allocating and deallocating memory rapidly like that? For example, I call the service on each keystroke to implement an "autocomplete" feature.
I'm also not certain what will happen if I put this in production and we have dozens of users accessing the db; I wouldn't want memory use to climb to 1 or 2GB and never get released.
Side note: All my data services for now are searches, so there are no saves or updates, though there may be later on. Also, I don't return any LINQ queries as an array or enumerable; they remain queryables throughout the service call.
One more thing, I do use breezejs, so I wrap up my context as such:
readonly EFContextProvider<DistributorLocationEntities> provider = new EFContextProvider<DistributorLocationEntities>();
and the tidbits that go along with this:
Doc for Breeze's EFContextProvider
ProxyCreationEnabled = false
LazyLoadingEnabled = false
IDisposable is not implemented
but I still dispose the context anyway, which makes no difference.
I don't know what you're doing. I do know that you should not have any static resources of any kind in your Web API controllers (breeze-flavored or not).
I strongly suspect you've violated that rule.
Adding a Dispose method makes no difference if the object is never disposed ... which it won't be if it is held in a static variable.
I do not believe that Breeze has any role in your problem whatsoever. You've already shown that it doesn't.
I suggest you start from a clean slate, forget Breeze for now, and get a simple Web API controller working that creates a DbContext per request. When you've figured that out, proceed to add some Breeze.
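A minimal sketch of that per-request setup, assuming the DistributorLocationEntities context from the question; the Distributor entity and the Get action are made up for illustration:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class DistributorsController : ApiController
{
    // Web API creates a new controller instance per request, so this context
    // lives only for the duration of the request.
    private readonly DistributorLocationEntities _context = new DistributorLocationEntities();

    // Hypothetical read-only action; AsNoTracking skips change-tracking overhead.
    public IEnumerable<Distributor> Get(string startsWith)
    {
        return _context.Distributors
                       .AsNoTracking()
                       .Where(d => d.Name.StartsWith(startsWith))
                       .Take(20)
                       .ToList();
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _context.Dispose();
        }
        base.Dispose(disposing);
    }
}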
As mentioned in Ward's comment, statics are a big no-no, so I spent time moving my EF objects out of statics. The Dispose method didn't really help either.
I gave this article a good read:
http://msdn.microsoft.com/en-us/data/hh949853.aspx
There are quite a few performance options EF provides (that don't come out of the box). So here are a few things I've done:
Added pre-generated views to EF: T4 templates for generating views for EF4/EF5. The nice thing about this is that it abstracts away from the DB and pre-generates the views to decrease model load time.
Next, I read this post on Contains in EF: Why does the Contains() operator degrade Entity Framework's performance so dramatically?. There I saw an attractive answer suggesting I convert my IEnumerable.Contains into a HashSet.Contains. This boosted my performance considerably.
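The exact query isn't shown here, but a sketch of the kind of change that means, with made-up names, looks like this; the gain is most obvious when the Contains check runs in memory over a large ID list, since a HashSet lookup is O(1) instead of a scan:

// Hypothetical: requestedIds comes from the client, loadedDistributors is a
// list already materialized in memory.
IEnumerable<int> requestedIds = GetRequestedIds();
var idSet = new HashSet<int>(requestedIds);

var matches = loadedDistributors
    .Where(d => idSet.Contains(d.Id))
    .ToList();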
Finally, reading the Microsoft article, I realized there is an AsNoTracking() extension you can apply to a query; it turns off the context's change tracking (its caching of returned entities) for that query. So you can do something like this:
var query = (from t in db.Context.Table1.AsNoTracking() select new { ... });
Something I didn't have to worry about was compiling queries in EF5, since it does that for you automatically, so you don't have to add CompileQuery.Compile(). Also, if you're using EF 6 alpha 2, you don't need to worry about Contains or pre-generating views, since those are fixed in that version.
So when I first start up EF, that is a "cold" query execution and memory goes high; after recycling IIS, memory is cut in half and subsequent requests use "warm" query execution. That explains a lot!

Using Core Data as cache

I am using Core Data for its storage features. At some point I make external API calls that require me to update the local object graph. My current (dumb) plan is to clear out all instances of old NSManagedObjects (regardless of whether they have been updated) and replace them with their new equivalents -- a trump merge policy of sorts.
I feel like there is a better way to do this. I have unique identifiers from the server, so I should be able to match them to my objects in the store. Is there a way to do this without manually fetching objects from the context by their identifiers and resetting each property? Is there a way for me to just create a completely new context, regenerate the object graph, and just give it to Core Data to merge based on their unique identifiers?
Your strategy of matching, based on the server's unique IDs, is a good approach. Hopefully you can get your server to deliver only the objects that have changed since the time of your last update (which you will keep track of, and provide in the server call).
In order to update the Core Data objects, though, you will have to fetch them, instantiate the NSManagedObjects, make the changes, and save them. You can do this all in a background thread (child context, performBlock:), but you'll still have to round-trip your objects into memory and back to store. Doing it in a child context and its own thread will keep your UI snappy, but you'll still have to do the processing.
Another idea: In the last day or so I've been reading about AFIncrementalStore, an NSIncrementalStore implementation which uses AFNetworking to provide Core Data properties on demand, caching locally. I haven't built anything with it yet but it looks pretty slick. It sounds like your project might be a good use of this library. Code is on GitHub: https://github.com/AFNetworking/AFIncrementalStore.

DbContext ChangeTracking kills performance?

I am in the process of upgrading an application from EF1 to EF4.1
I created a DbContext and a set of POCOs using the "ADO.NET DbContext Generator" templates.
When I query the generated DbContext the database part of the query takes 4ms to execute (validated with EF Profiler). And then it takes the context about 40 seconds (in words: FORTY!) to do whatever it does before it returns the result to the application.
EF1 handles the same query in less than 2 seconds.
Turning off AutoDetectChanges, LazyLoading and ProxyGeneration wins me 2-3 seconds.
When I use the AsNoTracking() extension method I am able to reduce the total execution time to about 3 seconds.
That indicates that ChangeTracking is the culprit.
But ChangeTracking is what I need. I must be able to eventually persist all changes without having to handpick which entities were modified.
Any ideas how I could solve that performance issue?
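For reference, a minimal sketch of those switches plus AsNoTracking on the DbContext API (the context and entity names are placeholders):

using (var context = new MyEntities())
{
    // The three settings mentioned above.
    context.Configuration.AutoDetectChangesEnabled = false;
    context.Configuration.LazyLoadingEnabled = false;
    context.Configuration.ProxyCreationEnabled = false;

    // Read-only query with change tracking disabled for its results.
    var orders = context.Orders
                        .AsNoTracking()
                        .Where(o => o.CustomerId == customerId)
                        .ToList();
}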
Is the technique at the end of this documentation useful? Alternatively, I've avoided many of the performance pitfalls by using a fluent interface to declaratively state which entities in a given transaction for sure won't change vs. which might change (immutable vs. mutable). For example, if the entities I am saving are aggregate roots in which the root or its entities refer to "refdata" items, then this heuristic prevents many writes because the immutable items don't need to be tracked. The mutable items all get written without a check (a weakness... one which may or may not be acceptable).
I'm using this with a generic repository pattern precisely because I don't want to track changes or implement a specific strategy for each case. If that's not enough, perhaps rolling your own change tracking outside of the context and adding entities in as needed will work.
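If it helps, here is a rough sketch of that last idea with placeholder names (an Orders set and a Status property): read without tracking, then attach and mark only the entities you actually changed.

// Fast, untracked read.
var order = context.Orders.AsNoTracking().First(o => o.Id == orderId);

// ... modify the detached entity ...
order.Status = "Shipped";

// Attach it and mark it as modified so SaveChanges writes it back.
context.Orders.Attach(order);
context.Entry(order).State = EntityState.Modified;
context.SaveChanges();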
Without seeing the query, I can't say for sure what the problem might be. Could this be related?
Why does the Contains() operator degrade Entity Framework's performance so dramatically?
Depending on the LINQ operators being used, it appears that EF has a tough time converting some queries to SQL. Maybe you're running up against a similar situation here.

How to handle a large amount of mapping files in NHibernate

We're working on a large windows forms .NET application with a very large database. Currently we're reaching 400 tables and business objects but that's maybe 1/4 of the whole application.
My question now is, how to handle this large amount of mapping files with NHibernate with performance and memory usage in mind?
The business objects and their mapping files are already separated into different assemblies. But I believe that an NH SessionFactory with all assemblies will use a lot of memory and performance will suffer. However, if I build different factories with only a subset of assemblies (maybe something like a domain context, which separates the assemblies into logical parts), I can't easily exchange objects between them, and I only have access to a subset of objects.
Our current approach is to separate the business objects with the help of a context attribute. A business object can be part of multiple contexts. When a SessionFactory is created all mapping files of a given context (one or more) are merged into one large mapping file and compiled to a DLL at runtime. The Session itself is then created with this new mapping DLL.
But this approach has some serious drawbacks:
The developer has to take care of the assembly references between the business object assemblies;
The developer has to take care of the contexts, or NHibernate will not find the mapping for a class;
The creation of the new mapping file is slow;
The developer can only access business objects within the current context - any other access will result in an exception at runtime.
Maybe there is a completely different approach? I'd be glad to hear any new thoughts on this.
The first thing you need to know is that you do not need to map everything. I have a similar case at work where I mapped the main subset of objects/tables I work against; the others I either used via ad-hoc mapping or through simple SQL queries in NHibernate (session.CreateSQLQuery). Of the ones I mapped, a few used Automapper, and for the peskier ones, regular Fluent mapping (heck, I even have NHibernate calls which span different databases, like human resources, finances, etc.).
As far as performance goes, I use only one session factory, and I personally haven't seen any drawbacks with this approach. Sure, Application_Start takes longer than in your regular ADO.NET application, but after that it's smooth sailing through and through. It would be even slower to keep opening and closing session factories on demand, since they take a while to build.
Since SessionFactory should be a singleton in your application, the memory cost shouldn't be that important in the application.
Now, if SessionFactory is not a singleton, there's your problem.
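A minimal sketch of a lazily built singleton factory; the assembly names are made up, and hbm.xml mappings are assumed to be embedded in the business-object assemblies:

using System;
using NHibernate;
using NHibernate.Cfg;

public static class NHibernateHelper
{
    // Built once, on first use, and shared by the whole application.
    private static readonly Lazy<ISessionFactory> Factory =
        new Lazy<ISessionFactory>(() =>
        {
            var cfg = new Configuration().Configure();   // reads hibernate.cfg.xml
            cfg.AddAssembly("MyApp.Domain.Sales");        // hypothetical assemblies
            cfg.AddAssembly("MyApp.Domain.Finance");
            return cfg.BuildSessionFactory();
        });

    public static ISession OpenSession()
    {
        return Factory.Value.OpenSession();
    }
}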
