Trying to improve efficiency of MVC3 + Unity project - asp.net-mvc-3

I have an MVC3 project that uses Unity for dependency injection.
There is a main MVC3 project, a “domain” class library that sits between MVC3 and the data tier, and a bunch of class libraries that make up the data tier.
(MVC3) – (domain) – (data tier)
This is an example of one of the service constructors in the domain class:
public DomainModelCacheServices(
Data.Interface.ICountryRepository countryRepository,
Data.Interface.ILanguageRepository languageRepository,
Data.Interface.ISocialNetRepository socialNetRepository
)
Every time a controller is called that has DomainModelCacheServices in its constructor, a new DomainModelCacheServices object is constructed, plus the three repository classes in the constructor of DomainModelCacheServices.
I cannot believe this is efficient!
What makes this worse is that the class DomainModelCacheServices is a cache class. It loads lists of data that never change, and holds them as statics. But it still needs to construct three repository classes for every reference!
If I give DomainModelCacheServices the lifetime of a singleton (forever), I have to ensure it is thread-safe, and if the day comes when I am getting hundreds of hits, there’s going to be a lot of locking.
I could change the constructor to this:
public DomainModelCacheServices(
IServiceLocator serviceLocator
)
I don’t know why, but this doesn’t look right. The constructor becomes meaningless to the eye, and I have to reference Unity in the domain class and somehow make the domain class aware of the ServiceLocator owned by the MVC3 application. Maybe the loose-coupling can be too loose?
Maybe constructing all these classes is not as inefficient as it looks and I shouldn’t worry about it?
What would be nice is if Unity supported “Lazy” constructor parameters. But it doesn’t.
So, any ideas on how to make an MVC3 + Unity project more efficient, specifically in the domain model design?
Thanks for reading!

The cache shouldn't be defined at the domain level but at the repository implementation level (so in the DAL). For example, ICountryRepository should have two implementations in the DAL: CountryRepository and CachedCountryRepository. These should be wired as decorators in Unity (CountryRepository sits inside CachedCountryRepository). CachedCountryRepository would check whether the data is in the cache and, if not, pass the call through to the inner CountryRepository.
Creating objects is not expensive, and I wouldn't worry too much about it as long as the caching is defined correctly.
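A minimal sketch of how that decorator could be wired up with Unity (the GetAll() method, the Country entity and the registration names are assumptions for illustration, not code from the question):

public class CachedCountryRepository : Data.Interface.ICountryRepository
{
    // The real repository that talks to the database.
    private readonly Data.Interface.ICountryRepository _inner;
    // Static cache shared across instances; the country data never changes.
    private static IList<Country> _countries;
    private static readonly object _sync = new object();

    public CachedCountryRepository(Data.Interface.ICountryRepository inner)
    {
        _inner = inner;
    }

    public IList<Country> GetAll()
    {
        lock (_sync)
        {
            if (_countries == null)
                _countries = _inner.GetAll();   // first caller pays the cost, everyone else hits the cache
            return _countries;
        }
    }
}

// Unity wiring (Microsoft.Practices.Unity): a named registration for the inner
// repository, and a default registration for the cached decorator that wraps it.
container.RegisterType<Data.Interface.ICountryRepository, CountryRepository>("inner");
container.RegisterType<Data.Interface.ICountryRepository, CachedCountryRepository>(
    new InjectionConstructor(new ResolvedParameter<Data.Interface.ICountryRepository>("inner")));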

Great reasoning.
However, creating objects is cheap. I would not create a singleton since you already are caching all objects in static fields. The current approach is easy to understand.
I got another question for you:
Why are you not caching in your repository classes?
The repositories are responsible for the data, and all data handling should be transparent to everything else. It also makes everything easier since they are responsible for updating the data sources. How do you keep the cache in sync with changes today? Through domain events?
I would create a cache class which I would use as a private field in the repository.
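A rough sketch of that idea, assuming a hypothetical GetAll() method and EF context (neither appears in the question):

public class CountryRepository : Data.Interface.ICountryRepository
{
    // Thread-safe, loaded once on first access; Lazy<T> handles the locking.
    private static readonly Lazy<List<Country>> _countries =
        new Lazy<List<Country>>(LoadCountries);

    public IList<Country> GetAll()
    {
        return _countries.Value;
    }

    private static List<Country> LoadCountries()
    {
        using (var context = new MyDataContext())   // hypothetical EF context
        {
            return context.Countries.ToList();
        }
    }
}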

Related

Realm Swift2: Best practice for model and Realm model class separation

I have just started working with Realm for Swift. After reading the documentation, I still have a couple of question marks in mind.
My biggest one is whether there is a best practice for separating Model classes from Realm classes.
For example, in Java MVC projects we used DAO classes (Data Access Object classes) which were responsible for the communication with the database layer.
Our corresponding model classes either have the DAO object injected, or we use service classes for this (like CRUD operations).
If I have a Realm “Model” class, this now seems to be everything in one. But after having the object persisted to the database, changing attributes of the object in the UI layer results in
'Attempting to modify object outside of a write transaction - call
beginWriteTransaction on an RLMRealm instance first.'
Which brings me back to my initial thought: shouldn’t this be separated into Realm objects and model objects? Or is it OK to have "realm.write" processes in view classes then?
I did some research on this but the results are very vague.
How do you deal with this in your projects? Do you have some kind of best practice or guidance?
Many thanks in advance
John
Officially (on the iOS side at least), there's no established best practice for abstracting the model class logic away from the actual Realm Object subclasses. That being said, I've definitely heard of apps in the past that do this sort of thing in order to support multiple types of data frameworks.
If I were to do this, I would make a separate object class for each model implementing my own API on how to get/set data properties, and make the Realm object an internal member of this object. This object would then serve as the generic interface between my app's logic and how to save that data to Realm. That way, if I did want to swap out the data framework down the line, I could simply replace my own data object with a new one, but keep the API consistent.
In regards to that error message you posted, in order to ensure data integrity, you cannot modify a Realm object (UI thread or otherwise) unless it's in a write transaction. That being said, you could encapsulate that logic (i.e. open up a Realm write transaction on the current thread) pretty easily within that abstract object.

EF to ADO.NET transition

Need suggestions for migrating a few modules of an ASP.NET MVC 3 application:
Right now we are using EF with POCO classes, but in the future, for some performance-driven modules, we need to move to ADO.NET or some other ORM tool (maybe Dapper.NET).
The issue we are facing as of now is that our views depend on collection classes loaded through EF. What strategy should I use so that these entity classes get loaded exactly the same way by ADO.NET or some other ORM tool as they are by EF?
What I want is to be able to switch between EF and ADO.NET with a minimum of change at the data access layer, as I don't want my views to be affected by that.
You should use the Repository Design Pattern to hide the implementation of your data access layer. The Repository will return the same POCO's and have the same operations contracts no matter what data access layer you are using underneath the hood. Now this works fine if you are using real POCO's that do not have virtual methods for lazy loading in them. You will need to explicitly handle loading of dependent collections on entities to make this work.
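As a rough illustration of that contract (the Product entity, context and queries are invented for the example; the Dapper version is just one possible alternative behind the same interface):

public interface IProductRepository
{
    Product GetById(int id);
    IList<Product> GetAll();
}

// EF-backed implementation
public class EfProductRepository : IProductRepository
{
    private readonly MyDbContext _context;   // hypothetical DbContext with a Products set
    public EfProductRepository(MyDbContext context) { _context = context; }

    public Product GetById(int id) { return _context.Products.Find(id); }
    public IList<Product> GetAll() { return _context.Products.ToList(); }
}

// Dapper-backed implementation with exactly the same contract (using Dapper, System.Data.SqlClient)
public class DapperProductRepository : IProductRepository
{
    private readonly string _connectionString;
    public DapperProductRepository(string connectionString) { _connectionString = connectionString; }

    public Product GetById(int id)
    {
        using (var connection = new SqlConnection(_connectionString))
            return connection.Query<Product>(
                "SELECT * FROM Products WHERE Id = @id", new { id }).SingleOrDefault();
    }

    public IList<Product> GetAll()
    {
        using (var connection = new SqlConnection(_connectionString))
            return connection.Query<Product>("SELECT * FROM Products").ToList();
    }
}

The views only ever see IProductRepository and the POCOs it returns, so swapping the implementation does not touch them.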
From what I've seen so far, a seamless transition from one ORM/DAL to another is an illusion. The repository pattern as suggested by Kevin is a great help, a prerequisite even (+1). But nonetheless, each ORM has a footprint in the domain layer. With EF you probably use virtual properties, maybe data annotations or, easily forgotten, a validation framework that easily fits in (while dismissing others that don't). You may rely on EF's ability to map across join tables. There may be things you don't (but would like to) do because of EF.
(Not to mention tools that execute scaffolding on top of an EF context. That would make the lock-in even tighter.)
Some things I might do in your situation (forgive me if I'm stating the obvious for you)
Use EF optimally (best mappings, best associations, ...). Preparing for the future is good, but it easily degenerates into building things you aren't going to need (YAGNI).
Become proficient in linq + EF: there are many ways to needlessly kill performance with EF. Suppose EF is good enough!
Keep looking for alternatives for high performance, like using stored procedures, parallelization, background processing, and/or reducing data volumes, and choose one when the requirements (functional and non-functional) are clear enough.
Maybe introduce an abstraction layer with DTO's that serve your views now, and in the future can be readily materialized by another ORM or ADO.
Expose POCO's as interfaces, which can be implemented by other objects later.
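A small sketch of the last two points, with invented names; the view depends on the interface, never on the EF entity directly:

// Interface the views (or view models) depend on; it says nothing about how the data is loaded
public interface ICustomerSummary
{
    int Id { get; }
    string Name { get; }
}

// Today: the EF POCO implements it (assuming the entity is partial and already has Id and Name)
public partial class Customer : ICustomerSummary { }

// Tomorrow: a hand-filled DTO materialized by Dapper/ADO.NET can implement the same interface
public class CustomerSummaryDto : ICustomerSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}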
Just to add to both of these answers...
I always structure my solution into these basic projects:
Front end (usually asp.net MVC web app),
Services/Core with business processes,
Objects with application model POCOs and communication interfaces - referenced by all other projects so they serve as data interchange contracts and
Data that implements repositories that return POCO instances from Objects
Front end references Objects and Services/Core
Services Core references Objects and Data
Data references Objects
Objects doesn't reference any of the others, because they're the domain application model
If the front end needs any additional POCOs I create those in the front end as view models, and they are only visible to that project (i.e. the registration process usually uses a separate type with more (and different) properties than the Objects.User entity (POCO)). No need to put these kinds of types in Objects.
The same goes for data-only types. If additional properties are required, these classes usually inherit from the same Objects class and add additional properties and methods, like a generic ToPoco() method that knows exactly how to map from the data type to the application model class.
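For example, a data-side type inheriting from the Objects class might look roughly like this (the property names are made up for illustration):

// Objects (application model) class
public class User
{
    public int Id { get; set; }
    public string Email { get; set; }
}

// Data project type that adds persistence-only members
public class UserDataEntity : User
{
    public byte[] RowVersion { get; set; }      // data-only concern, never leaves the Data project

    // Maps the data type back to the plain application model class
    public User ToPoco()
    {
        return new User { Id = this.Id, Email = this.Email };
    }
}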
Changing DAL
So whenever I need to change my data access code (which, as #GetArnold pointed out, can happen) I only have to provide a different Data project that uses a different library/framework. All additional data-specific types etc. are then part of it as well... But all communication with other layers stays the same.
Instead of creating a new Data project you can also change existing one, by changing repository code and use a different DAL in them.

Best practice for persisting database-stored lookup data at app level in MVC

Slogging through MVC+EF and trying to focus on doing things the right way. Right now I'm looking to add a dropdown to a form, but I'd like to avoid hitting the database every time the page loads, so I'd like to store the data at the app level. I figure creating an application-level variable isn't the best approach. I've read about using the cache and static utility functions but, surprisingly, nothing has sounded terribly definitive. (Static classes bad for unit testing, caching bad ...)
So I have two scenarios that I'm curious about, I'm not sure if the approach would differ between the two.
1) A basic lookup, let's say the fifty states. Small, defined, will never change. Load at application startup. (Not looking for a hard coded solution but retrieval from the database.)
2) A lookup that will very rarely change and only via an admin-like screen. Let's say, cities/stores where your product is being sold. So data would be stored in the model but would be relatively static unless someone made changes via the application. So not looking to hit the database every time I need to populate a dropdown/listbox.
Seems like basic stuff but it's basically the same as this topic that was never answered:
Is it good to use a static EF object context in an MVC application for better perf?
Any help is appreciated.
I will address your question in a few parts. First off: is it inherently bad to use static variables or caching patterns in MVC? The answer is simply no. As long as your architecture supports them it is OK. Just put your cache in the right place and design for testability, as I will explain later.
The second part is what is the "right" way to have this type of persisted data stored so you don't have to make round trips to the DB to populate common UI items. For this, I don't recommend storing EF objects. I would create POCO objects (View models or similar) that you cache. So in the example of your 50 states you might have something like this:
public class State
{
public string Abbreviation { get; set; }
public string Name { get; set; }
}
Then you would do something like this to create your cached list:
List<State> states = Context.StateData.Select(s => new State { Abbreviation = s.Abbreviation, Name = s.Name}).ToList();
Finally, whatever your caching solution is, it should implement an interface so you can mock that caching method for testing.
To do this without running into circular references or using reflection, you will need at least 3 assemblies:
Your MVC application
A class library to define your POCO objects and interfaces
A class library to perform your data access and caching (this can obviously be split into 2 libraries if that makes it easier to maintain and/or test)
That way you could have something like this in your MVC code:
ICache myCache = CacheFactory.CreateCache();
List<State> states = myCache.ListStates();
// populate your view model with states
Where ICache and State are in one library and your actual implementation of ICache is in another.
This is what I do for my standard architecture: splitting POCO objects and interfaces, which are data access agnostic, into a separate library from the data access code, which is in turn separate from my MVC app.
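To make the ICache piece above concrete, one possible shape could be the following (the MemoryCache-based implementation and the MyDbContext name are assumptions, not part of the answer):

public interface ICache
{
    List<State> ListStates();
}

// One possible implementation, using System.Runtime.Caching
public class MemoryCacheStateCache : ICache
{
    public List<State> ListStates()
    {
        var cache = MemoryCache.Default;
        var states = cache.Get("states") as List<State>;
        if (states == null)
        {
            using (var context = new MyDbContext())   // hypothetical EF context
            {
                states = context.StateData
                    .Select(s => new State { Abbreviation = s.Abbreviation, Name = s.Name })
                    .ToList();
            }
            // Keep the list around; states effectively never change
            cache.Set("states", states, DateTimeOffset.Now.AddHours(12));
        }
        return states;
    }
}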
Look into using a Dependency Injection tool such as unity, ninject, structuremap, etc. These will allow for the application level control you are looking for by implementing a kernel which holds on to objects in a very similar way to what you seem to be describing.
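With Unity, for example, registering the cache with a container-controlled lifetime gives you that application-level, hold-on-to-it behaviour (ICache and MemoryCacheStateCache are the hypothetical types sketched above):

var container = new UnityContainer();
// One shared instance for the lifetime of the container, i.e. the application:
container.RegisterType<ICache, MemoryCacheStateCache>(new ContainerControlledLifetimeManager());
// Controllers then take ICache as a constructor dependency instead of newing it up.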

Static? Repositories MVC3, EF4.2 (Code First)

I'm new to MVC, EF and the like so I followed the MVC3 tutorial at http://www.asp.net/mvc and set up an application (not yet finished with everything though).
Here's the "architecture" of my application so far
GenericRepository
PropertyRepository inherits GenericRepository for "Property" Entity
HomeController which has the PropertyRepository as Member.
Example:
public class HomeController
{
private readonly PropertyRepository _propertyRepository
= new PropertyRepository(new ConfigurationDbContext());
}
Now let's consider the following:
I have a method in my GenericRepository that takes quite some time, invoking 6 queries which need to be in one transaction in order to maintain integrity. My Google results yielded that SaveChanges() is considered one transaction - so if I make multiple changes to my context and then call SaveChanges() I can be "sure" that these changes are "atomic" on the SQL Server. Right? Wrong?
Furthermore, there is an action method that calls the _propertyRepository.InvokeLongAndComplex() method.
I just found out: MVC creates a new controller for each request, so I end up with multiple PropertyRepositories which mess up my database integrity. (I have to maintain a linked list of my properties in the database, and if a user moves a property it takes 6 steps to change the list accordingly, but that way I avoid looping through all entities when there are thousands...)
I thought about making my GenericRepository and my PropertyRepository static, so every HomeController is using the same Repository, and synchronizing the InvokeLongAndComplex method to make sure there's only one thread making changes to the DB at a time.
I have the suspicion that this idea is not a good solution, but I fail to find a suitable solution for this problem - some guys say it's okay to have static repositories (what happens with the context though?). Some other guys say use IoC/DI (?), which sounds like a lot of work to set up (not even sure if that solves my problem...), but it seems that I could "tell" the container to always "inject" the same context object and the same Repository, and then it would be enough to synchronize the InvokeLongAndComplex() method to not let multiple threads mess up the integrity.
Why aren't data repositories static?
Answer 2:
2) You often want to have 1 repository instance per request to make it easier to ensure that uncommitted changes from one user don't mess things up for another user.
Why have a repository instance per request? Doesn't it mess up my linked list again?
Can anyone give me an advice or share a best practice which I can follow?
No! You must have a new context for each request, so even if you make your repositories static you will have to pass the current context instance to each of their methods instead of maintaining a single context inside the repository.
What do you mean by integrity in the first place? Are you dealing with transactions, concurrency issues or referential constraints? Handling all of these issues is your responsibility. EF will provide some basic infrastructure for that but the final solution is still up to your implementation.
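On the SaveChanges() part of the question: if all six steps go through one context instance, a single SaveChanges() call applies them in one database transaction; if they are spread over several SaveChanges() calls, a TransactionScope can keep them atomic. A sketch (the step bodies are omitted):

using (var context = new ConfigurationDbContext())
{
    // ... perform all six changes against this one context ...

    // A single SaveChanges() call: EF wraps all pending inserts/updates/deletes
    // in one database transaction, so the six steps succeed or fail together.
    context.SaveChanges();
}

// If the steps need several SaveChanges() calls, wrap them in a TransactionScope (System.Transactions):
using (var scope = new TransactionScope())
using (var context = new ConfigurationDbContext())
{
    // ... step 1 ...
    context.SaveChanges();
    // ... step 2 ...
    context.SaveChanges();
    scope.Complete();   // commit; without this the whole scope rolls back
}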

Business Layer structure, how do you build yours?

I am a big fan of N-Tiers for my development choices; of course it doesn't fit every scenario.
I am currently working on a new project and I am trying to have a play with the way I normally work, trying to see if I can clean it up, as I have been a very bad boy and have been putting too much code in the presentation layer.
My normal business layer structure is this (basic view of it):
Business
Services
FooComponent
FooHelpers
FooWorkflows
BahComponent
BahHelpers
BahWorkflows
Utilities
Common
ExceptionHandlers
Importers
etc...
Now with the above I have great access to directly save a Foo object and a Bah object, via their respective helpers.
The XXXHelpers give me access to Save, Edit and Load the respective objects, but where do I put the logic to save objects with child objects?
For example:
We have the below objects (not very good objects I know)
Employee
EmployeeDetails
EmployeeMembership
EmployeeProfile
Currently I would build these all up in the presentation layer and then pass them to their Helpers. I feel this is wrong; I think the data should be passed to a single point above presentation, somewhere in the business layer, and sorted out there.
But I'm at a bit of a loss as to where I would put this logic and what to call that component. Would it go under Utilities as EmployeeManager or something like this?
What would you do? I know this is all preference.
A more detailed layout
The workflows contain all the calls directly to the DataRepository, for example:
public ObjectName GetById(Guid id)
{
return DataRepository.ObjectNameProvider.GetById(id);
}
And then the helpers provide access to the workflows:
public ObjectName GetById(Guid id)
{
return loadWorkflow.GetById(id);
}
This is to cut down on duplicate code, as you can have one call in the workflow to GetBySomeProperty
and then several calls in the Helper which could do other operations and return the data in different ways; a bad example would be public GetByIdAsc and GetByIdDesc.
By separating the calls to the Data Model using the DataRepository, the thinking was that it would be possible to swap out the model for another instance, but ProviderHelper has not been broken down, so it is not interchangeable, as it is unfortunately hardcoded to EF.
I don't intend to change the access technology, but in the future there might be something better, or just something that all the cool kids are now using, that I might want to implement instead.
projectName.Core
projectName.Business
- Interfaces
- IDeleteWorkflows.cs
- ILoadWorkflows.cs
- ISaveWorkflows.cs
- IServiceHelper.cs
- IServiceViewHelper.cs
- Services
- ObjectNameComponent
- Helpers
- ObjectNameHelper.cs
- Workflows
- DeleteObjectNameWorkflow.cs
- LoadObjectNameWorkflow.cs
- SaveObjectNameWorkflow.cs
- Utilities
- Common
- SettingsManager.cs
- JavascriptManager.cs
- XmlHelper.cs
- others...
- ExceptionHandlers
- ExceptionManager.cs
- ExceptionManagerFactory.cs
- ExceptionNotifier.cs
projectName.Data
- Bases
- ObjectNameProviderBase.cs
- Helpers
- ProviderHelper.cs
- Interfaces
- IProviderBase.cs
- DataRepository.cs
projectName.Data.Model
- Database.edmx
projectName.Entities (Entities that represent the DB tables are created by EF in .Data.Model, this is for others that I may need that are not related to the database)
- Helpers
- EnumHelper.cs
projectName.Presentation
(depends on what the application calls for)
projectName.web
projectName.mvc
projectName.admin
The test Projects
projectName.Business.Tests
projectName.Data.Test
+1 for an interesting question.
So, the problem you describe is pretty common - I'd take a different approach - first with the logical tiers and secondly with the utility and helper namespaces, which I'd try and factor out completely - I'll tell you why in a second.
But first, my preferred approach here is pretty common enterprise architecture which I'll try to highlight in brief, but there's much more depth out there. It does require some radical changes in thinking - using NHibernate or Entity framework to allow you to query your object model directly and let the ORM deal with things like mapping to and from the database and lazy loading relationships etc. Doing this will allow you to implement all of your business logic within a domain model.
First the tiers (or projects in your solution);
YourApplication.Domain
The domain model - the objects representing your problem space. These are plain old CLR objects with all of your key business logic. This is where your example objects would live, and their relationships would be represented as collections. There is nothing in this layer that deals with persistence etc, it's just objects.
YourApplication.Data
Repository classes - these are classes that deal with getting the aggregate root(s) of your domain model.
For instance, it's unlikely in your sample classes that you would want to look at EmployeeDetails without also looking at Employee (an assumption I know, but you get the gist - invoice lines is a better example, you generally will get to invoice lines via an invoice rather than loading them independently). As such, the repository classes, of which you have one class per aggregate root will be responsible for getting initial entities out of the database using the ORM in question, implementing any query strategies (like paging or sorting) and returning the aggregate root to the consumer. The repository would consume the current active data context (ISession in NHibernate) - how this session is created depends on what type of app you are building.
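A skeletal repository for the Employee aggregate root might look like this (NHibernate flavour, since ISession is mentioned; all names and the query strategy are illustrative):

public class EmployeeRepository
{
    private readonly ISession _session;   // the current request-scoped session, injected

    public EmployeeRepository(ISession session)
    {
        _session = session;
    }

    // Load the aggregate root; EmployeeDetails, EmployeeMembership and EmployeeProfile
    // are reached through the Employee object itself and lazy-loaded by the ORM.
    public Employee GetById(Guid id)
    {
        return _session.Get<Employee>(id);
    }

    // An example query strategy: simple paging
    public IList<Employee> GetPage(int page, int pageSize)
    {
        return _session.QueryOver<Employee>()
            .Skip((page - 1) * pageSize)
            .Take(pageSize)
            .List();
    }
}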
YourApplication.Workflow
Could also be called YourApplication.Services, but this can be confused with web services
This tier is all about interrelated, complex atomic operations - rather than have a bunch of things to be called in your presentation tier, and therefore increase coupling, you can wrap such operations into workflows or services.
It's possible you could do without this in many applications.
Other tiers then depend on your architecture and the application you're implementing.
YourApplication.YourChosenPresentationTier
If you're using web services to distribute your tiers, then you would create DTO contracts that represent just the data you are exposing between the domain and the consumers. You would define assemblers that would know how to move data in and out of these contracts from the domain (you would never send domain objects over the wire!)
In this situation, and you're also creating the client, you would consume the operation and data contracts defined above in your presentation tier, probably binding to the DTOs directly as each DTO should be view specific.
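A sketch of a DTO contract and its assembler (the shapes are invented; the point is that only the DTO ever crosses the wire):

// Data contract exposed by the service layer; never the domain object itself
public class EmployeeSummaryDto
{
    public Guid Id { get; set; }
    public string FullName { get; set; }
}

// The assembler knows how to move data between the domain model and the contract
public static class EmployeeAssembler
{
    public static EmployeeSummaryDto ToDto(Employee employee)
    {
        return new EmployeeSummaryDto
        {
            Id = employee.Id,
            FullName = employee.Details.FirstName + " " + employee.Details.LastName
        };
    }
}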
If you have no need to distribute your tiers, remembering the first rule of distributed architectures is don't distribute, then you would consume the workflow/services and repositories directly from within asp.net, mvc, wpf, winforms etc.
That just leaves where the data contexts are established. In a web application, each request is usually pretty self contained, so a request scoped context is best. That means that the context and connection is established at the start of the request and disposed at the end. It's trivial to get your chosen IoC/dependency injection framework to configure per-request components for you.
In a desktop app, WPF or winforms, you would have a context per form. This ensures that edits to domain entities in an edit dialog that update the model but don't make it to the database (eg: Cancel was selected) don't interfere with other contexts or worse end up being accidentally persisted.
Dependency injection
All of the above would be defined as interfaces first, with concrete implementations realised through an IoC and dependency injection framework (my preference is castle windsor). This allows you to isolate, mock and unit test individual tiers independently and in a large application, dependency injection is a life saver!
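For instance, with Castle Windsor the per-request session and the interface-first registrations could be wired roughly like this (the bootstrap method and repository interface are assumptions for illustration, not a prescribed setup):

var container = new WindsorContainer();

container.Register(
    // One session factory for the whole application
    Component.For<ISessionFactory>()
        .UsingFactoryMethod(() => BuildSessionFactory())   // hypothetical NHibernate bootstrap
        .LifestyleSingleton(),

    // One ISession (data context) per web request, disposed when the request ends
    Component.For<ISession>()
        .UsingFactoryMethod(k => k.Resolve<ISessionFactory>().OpenSession())
        .LifestylePerWebRequest(),

    // Repositories and workflows are registered against their interfaces
    Component.For<IEmployeeRepository>()
        .ImplementedBy<EmployeeRepository>()
        .LifestylePerWebRequest());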
Those namespaces
Finally, the reason I'd lose the helpers namespace is, in the model above, you don't need them, but also, like utility namespaces they give lazy developers an excuse not to think about where a piece of code logically sits. MyApp.Helpers.* and MyApp.Utility.* just means that if I have some code, say an exception handler that maybe logically belongs within MyApp.Data.Repositories.Customers (maybe it's a customer ref is not unique exception), a lazy developer can just place it in MyApp.Utility.CustomerRefNotUniqueException without really having to think.
If you have common framework type code that you need to wrap up, add a MyApp.Framework project and relevant namespaces. If you're adding a new model binder, put it in MyApp.Framework.Mvc; if it's common logging functionality, put it in MyApp.Framework.Logging, and so on. In most cases, there shouldn't be any need to introduce a utility or helpers namespace.
Wrap up
So that scratches the surface - hope it's of some help. This is how I'm developing software today, and I've intentionally tried to be brief - if I can elaborate on any specifics, let me know. The final thing to say on this opinionated piece is the above is for reasonably large scale development - if you're writing notepad version 2 or a corporate phone book, the above is probably total overkill!!!
Cheers
Tony
There is a nice diagram and a description on this page about the application layout, although it looks like further down the article the application isn't split into physical layers (separate projects) - Entity Framework POCO Repository
