Cache solution for Dapper when using stored procedures (MSSQL) - asp.net-mvc-3

I'm using Dapper mainly for calling stored procedures in an MSSQL 2008 R2 database. I do not have classes that map to database tables. Most of the data ends up in IEnumerable<dynamic> and is passed to a grid on the screen.
Is there a ready-to-use caching solution that I could use? (I need to use it in ASP.NET MVC.)
The data in the database is both static and dynamic in nature. I use the repository pattern to access the data.

Dapper doesn't include any built-in data-caching features (although it makes extensive use of caching internally for its meta-programming layer): it aims squarely at the ADO.NET side of things. However, you could use pretty much any off-the-shelf caching component, including the HTTP runtime cache (HttpContext.Current.Cache) or the newer ObjectCache implementations. Because these just take objects, it should work fine.
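For instance, a minimal sketch of fronting a Dapper stored-procedure call with MemoryCache (System.Runtime.Caching, .NET 4); the dbo.GetGridData procedure name and the five-minute expiry are placeholders, not anything Dapper prescribes:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Runtime.Caching;
using Dapper;

public class CachedGridRepository
{
    private static readonly ObjectCache Cache = MemoryCache.Default;
    private readonly string _connectionString;

    public CachedGridRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IEnumerable<dynamic> GetGridData(string cacheKey)
    {
        // Serve the cached rows if we have them.
        var cached = Cache.Get(cacheKey) as List<dynamic>;
        if (cached != null)
            return cached;

        using (var connection = new SqlConnection(_connectionString))
        {
            // Hypothetical stored procedure; Query() returns IEnumerable<dynamic>.
            var rows = connection.Query("dbo.GetGridData",
                commandType: CommandType.StoredProcedure).ToList();

            // Cache for five minutes; tune to how static the data really is.
            Cache.Set(cacheKey, rows, DateTimeOffset.Now.AddMinutes(5));
            return rows;
        }
    }
}
```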
If you are using a distributed cache (maybe via AppFabric, redis, or memcached) then you'd need the data to be serializable. In that scenario, I would strongly suggest using formal POCO types for the binding, rather than the dynamic API. As an example, in-house we use Dapper to populate POCOs that are annotated with protobuf-net markers for serialization and stored via BookSleeve in redis, which sounds more complicated than it actually is.
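A rough sketch of that shape, leaving out the BookSleeve/redis plumbing; the Account type, its members and dbo.GetAccounts are hypothetical:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using Dapper;
using ProtoBuf;

// A formal POCO: Dapper materializes it by column-name matching, and the
// protobuf-net markers make it cheap to serialize for a distributed cache.
[ProtoContract]
public class Account
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
    [ProtoMember(3)] public decimal Balance { get; set; }
}

public static class AccountCache
{
    public static byte[] LoadAsBlob(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Bind to the POCO instead of dynamic so the rows are serializable.
            List<Account> accounts = connection
                .Query<Account>("dbo.GetAccounts",
                    commandType: CommandType.StoredProcedure)
                .ToList();

            // protobuf-net produces a compact payload suitable for redis etc.
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, accounts);
                return ms.ToArray();
            }
        }
    }
}
```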

Related

Hibernate using Akka actors

I would like to read and process a whole table of accounts using Akka actors. We have a multi-threaded actor framework that currently does this using simple JDBC queries that read "chunks" of the data. We now want to take advantage of JPA/Hibernate's mapping and object graph. We have a Spring application.
How do I use Hibernate and still take advantage of the multi-threading? My experience with Hibernate is creating DAOs with an EntityManager and calling myDao.getById(...), but how do I work on data that we have ALREADY fetched using JDBC and then manage it using Hibernate?
Of course you can use Hibernate with Akka actors; you just have to be careful and follow this answer.
But for using actors with an ORM, I recommend SORM:
SORM is a Scala ORM framework designed to eliminate boilerplate code and solve the problems of scalability with a high-level abstraction and a functional programming style.
Features:
- Complete abstraction from relational concepts. You work with case classes, collections and other standard Scala data types instead of tables, rows, foreign keys and relations.
- Complete separation of the domain model from the persistence layer. There are no annotations, special types or any other persistence-layer dependencies in the model declaration. This house is clear!
- An intuitive and centralized connection-agnostic API. No tangled implicit constructions polluting your namespace, no functionality scattered across multiple components, and no manual management of connections.
- Concurrency. A single SORM instance can safely be used across multiple threads and seamlessly integrates into actor-based concurrent systems like Akka.
- Integrated connection pooling. Scalable just by setting a "poolSize" parameter.
- Automated schema generation.

POCO entity generator or edmgen entityclassgeneration: Performance issues

I am working on a project where performance is a key factor for success. I need to decide whether to use the POCO entity generator or not. I might use EntityClassGeneration instead, but I'm not sure whether performance would be affected.
Another thing I have to consider is working with stored procedures. I'm not sure whether using the POCO entity generator template will give me problems later in the development phase, especially with stored procedures.
Any advice about choosing between the POCO entity generator and EntityClassGeneration would be helpful. By the way, I'm using Entity Framework 5.0 and a MySQL database.

Entity Framework POCO Serialization

I will start coding a new web application soon. The application will be built using ASP.NET MVC 3 and Entity Framework 4.1 (Database First approach). Instead of using the default EntityObject classes, I will create POCO classes using the ADO.NET POCO Entity Generator.
When I create POCOs using this tool, it automatically adds the virtual keyword to all properties (for change tracking) and to navigation properties (for lazy loading).
I have, however, read and seen from demonstrations that Julie Lerman (EF guru!) seems to turn off lazy loading and also modifies her POCO template so that the virtual keyword is removed from her POCO classes. Julie states that she does this because she writes applications for WCF services, and the virtual keyword causes a serialization issue there: as an object is being serialized, the serializer touches the navigation properties, which triggers lazy loading, and before you know it you are pulling the whole database across the wire.
I think Julie was perhaps exaggerating when she said this could pull the whole database across the wire, but even so, the thought scares me!
My question is (finally): should I also remove the virtual keyword from my POCO classes for my MVC application, and use DetectChanges for change tracking and eager loading to fetch navigation properties?
Your help with this would be greatly appreciated.
Thanks as ever.
Serialization can indeed trigger lazy loading because the getter of the navigation property doesn't have a way to detect if the caller is the serializer or user code.
This is not the only issue: whether you declare only the navigation properties as virtual or all properties as virtual, EF will create a proxy type at runtime for your entities, so the entity instances the serializer has to deal with at runtime will typically be of a type different from the one you defined.
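To illustrate with a hypothetical POCO in the shape the generator emits (the proxy type name shown in the comment is indicative, not exact):

```csharp
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }

    // virtual is what enables lazy loading: any code (including a
    // serializer) that reads this getter can trigger a database query.
    public virtual ICollection<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public int Id { get; set; }
    public string Product { get; set; }
}

// With proxy creation enabled, EF hands you a generated subclass, e.g.:
//   var order = context.Orders.First();
//   order.GetType();  // something like ...DynamicProxies.Order_<hash>,
//                     // not typeof(Order) - which is what trips up serializers.
```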
Julie's recommendations are the simplest and most reasonable way to deal with the issues, but if you still want to work with the capabilities of proxies most of the time and only sometimes serialize them with WCF, there are other workarounds available:
You can use a DataContractResolver to map the proxy types to be serialized as the original types
You can also turn off lazy loading only when you are about to serialize a graph
More details are contained in this blog post: http://blogs.msdn.com/b/adonet/archive/2010/01/05/poco-proxies-part-2-serializing-poco-proxies.aspx
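A minimal sketch of the second workaround on the EF 4.1 DbContext API (MyContext is a hypothetical context; Order and its Lines property are borrowed from the sketch above):

```csharp
using System.Data.Entity;
using System.Linq;

public class MyContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class SerializationPrep
{
    public static Order GetOrderForWire(int id)
    {
        using (var context = new MyContext())
        {
            // Stop the serializer from triggering further loads...
            context.Configuration.LazyLoadingEnabled = false;
            // ...and avoid handing it a dynamic proxy type at all.
            context.Configuration.ProxyCreationEnabled = false;

            // Eager-load exactly the slice of the graph you want to ship.
            return context.Orders
                .Include(o => o.Lines)
                .Single(o => o.Id == id);
        }
    }
}
```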
Besides this, my recommendation would be to use the DbContext template rather than the POCO template. DbContext is the new API we released as part of EF 4.1 with the goal of providing greater productivity. It has several advantages, such as automatically performing DetectChanges so that in general you won't need to worry about calling the method yourself. Also, the POCO entities we generate for DbContext are simpler than the ones we generate with the POCO templates. You should be able to find lots of MVC examples using DbContext.
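As a small illustration of the automatic DetectChanges point (reusing the hypothetical MyContext and OrderLine from the sketch above):

```csharp
public static class DetectChangesExample
{
    public static void RenameProduct(int lineId, string newName)
    {
        using (var context = new MyContext())
        {
            // Find() loads (or returns the already-tracked) entity by key.
            var line = context.Set<OrderLine>().Find(lineId);
            line.Product = newName;  // just mutate the tracked POCO
            context.SaveChanges();   // DetectChanges runs in here for you
        }
    }
}
```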
Well, it depends on your needs. If you are going to serialize your POCO classes, then yes, you should remove the virtual keyword (for example, when using WCF services or basically anything that will serialize your entire object). But if you are just building a web app that accesses these classes directly, then I would leave the keyword in, since you control which objects you access through your code.

ASP.NET MVC developing using an ORM

When developing with MVC and an ORM:
I don't like the idea that the ORM will make changes in my DB.
My application is data-driven and the DB is the first thing I created.
Isn't it an overhead to maintain the data schema both in the model and in the DB?
How do I manage it?
Is there an ORM that is more suitable to this kind of working method?
I don't like the idea that the ORM will make changes in my DB
An ORM doesn't have to make any changes to your database structure. If you have an existing database, you can simply use it without any automated changes.
Isn't it an overhead to maintain the data schema both in the model and in the DB?
How do you want to present your data in MVC? Are you going to use classes representing the data from your database? If yes, then you have the reason ORMs exist. An ORM maps relational data from the database to classes: it loads them for you and persists them for you (so you don't have to deal with database access and SQL yourself). If you are going to use an object-oriented, strongly typed approach, then an ORM will not be overhead for you; see the sketch below.
If you are not going to use such an approach, you don't have to use MVC. Just use ASP.NET with SQL data sources or ASP.NET Dynamic Data.
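To make the strongly typed approach concrete, a minimal sketch against an existing table (the Customer class, the context and the table are hypothetical):

```csharp
using System.Data.Entity;
using System.Linq;

// Plain class mapped (by convention or by the designer) to an existing table.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public static class OrmUsage
{
    public static Customer FindByName(string name)
    {
        using (var context = new ShopContext())
        {
            // Strongly typed query: no hand-written SQL, no data readers.
            return context.Customers.FirstOrDefault(c => c.Name == name);
        }
    }
}
```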
Is there an ORM that is more suitable to this kind of working method?
Your working method is nothing special.
Almost every ORM has support tools or extensions that let you create the basic mapping, and sometimes also the classes, from an existing database. In EF you simply add an Entity Data Model to your project and select the tables you want in the wizard.
Sure, that last paragraph was simplified. Each ORM has a learning curve and its own specialties, so it will not be quite so "simple".
For .NET 4 Entity Framework, the tooling lets you go both directions: generate a database from a model and generate a model from a database. These features give you flexibility when implementing your change-management protocols. I'm not sure what options are available for NHibernate.
Entity Framework references:
http://msdn.microsoft.com/en-us/library/bb386876.aspx
http://msdn.microsoft.com/en-us/library/bb399249.aspx
http://www.simple-talk.com/dotnet/.net-framework/entity-framework-4460---learn-to-create-databases-from-the-model/
A Stack Overflow comparison of the two:
Deciding between NHibernate vs Entity Framework?

How to Implement Database Independence with Entity Framework

I have used the Entity Framework to start a fairly simple sample project. In the project, I have created a new Entity Data Model from a SQL Server 2000 database. I am able to query the data using LINQ to Entities and display values on the screen.
I have an Oracle database with an extremely similar schema (I am trying to be exact, but I do not know all the details of Oracle). I would like my project to be able to run on both the SQL Server and Oracle data stores with minimal effort. I was hoping that I could simply change the connection string of my Entity Data Model and the Entity Framework would take care of the rest. However, it appears that it will not work as seamlessly as I thought.
Has anyone done what I am trying to do? Again, I am trying to write an application that can query (and update) data from a SQL Server or Oracle database with minimal effort, using the Entity Framework. The secondary goal is to not have to re-compile the application when switching between data stores. If I have to "Update Model from Database" that might be OK, because I wouldn't have to recompile, but I'd prefer not to go that route. Does anyone know of any steps that might be necessary?
What is generally understood by the term "Persistence Ignorance" is that your entity classes are not flooded with framework dependencies (important for N-tier scenarios). This is not the case right now, as entity classes must implement certain EF interfaces ("IPOCO"), as opposed to being plain old CLR objects. As another poster has mentioned, there is a solution called the Persistence Ignorance (POCO) Adapter for Entity Framework V1 for that, and EF V2 will support POCO out of the box.
But I think what you really had in mind was database independence. With one big configuration XML that includes the storage model, the conceptual model and the mapping between those two, from which a typed ObjectContext is generated at design time, I also find it hard to imagine how to transparently support two databases.
What looks more promising is using a database-independent ADO.NET provider like the one from DataDirect. DataDirect has also announced EF support for Q3/2008.
http://blogs.msdn.com/jkowalski/archive/2008/09/09/persistence-ignorance-poco-adapter-for-entity-framework-v1.aspx
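To see concretely why a transparent swap is hard, here is a hedged sketch of an EF v1 entity connection string: both the ADO.NET provider and the (provider-specific) storage model metadata have to change together. The model resource names and credentials are illustrative:

```csharp
using System.Data.EntityClient;

public static class EntityConnections
{
    public static string Build(bool useOracle)
    {
        var builder = new EntityConnectionStringBuilder
        {
            // A different ADO.NET provider per database...
            Provider = useOracle
                ? "Oracle.DataAccess.Client"
                : "System.Data.SqlClient",
            ProviderConnectionString = useOracle
                ? "Data Source=ORCL;User Id=app;Password=secret;"
                : "Data Source=.;Initial Catalog=App;Integrated Security=True;",
            // ...and a different ssdl, because the storage model is
            // provider-specific - so separate metadata resources too.
            Metadata = useOracle
                ? "res://*/Model.Oracle.csdl|res://*/Model.Oracle.ssdl|res://*/Model.Oracle.msl"
                : "res://*/Model.Sql.csdl|res://*/Model.Sql.ssdl|res://*/Model.Sql.msl"
        };
        return builder.ToString();
    }
}
```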
The main problem is that the Entity Framework was not designed with persistence ignorance in mind. I would honestly look at using something other than the Entity Framework.
