How to refactor repeating IDbConnection Open/Close?

Maybe it's a stupid question, but I've confused myself.
I use Dapper with Autofac (DI), and in my business layer I always use the following construction:
db.Open(); //db is IDbConnection instance
db.InsertOrDeleteOrUpdate(...);
db.Close();
Would it be a good solution to hide db.Open() and db.Close() behind a method that takes a delegate? For example:
dbHelper.do(db => db.InsertOrDeleteOrUpdate(...));
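Something like this minimal sketch is what I have in mind (DbHelper and Do are just names I made up; do is a C# keyword, so the method is capitalized here):
using System;
using System.Data;

// Hypothetical helper that hides Open/Close around each operation.
public class DbHelper
{
    private readonly Func<IDbConnection> _connectionFactory;

    public DbHelper(Func<IDbConnection> connectionFactory)
    {
        _connectionFactory = connectionFactory;
    }

    // Opens a connection, runs the action, and closes the connection
    // again (via Dispose), even if the action throws.
    public void Do(Action<IDbConnection> action)
    {
        using (var db = _connectionFactory())
        {
            db.Open();
            action(db);
        }
    }
}
A call site would then shrink to dbHelper.Do(db => db.InsertOrDeleteOrUpdate(...));.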
What do you do in such cases?

Redundant code is not the only issue here. Your code is opening and closing a connection against the database for every single action. That will hurt performance if your DB server is deployed on a remote machine. Refer to this answer.
Have you ever come across the UnitOfWork pattern? It is a very good solution for handling connections at a higher level.
A connection factory is another good alternative, and it can be used together with UnitOfWork, but I personally prefer UnitOfWork alone.
Refer to this answer for sample code of a UnitOfWork.
Once you choose from the above, it is easy to reduce the redundancy by using the "connection per request" pattern: identify a centralized location in your code where the DalSession is created and where it is disposed.
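For a flavor of it, here is a minimal sketch of a unit of work over an IDbConnection (the class and member names are illustrative, not the exact code from the linked answer):
using System;
using System.Data;

// Minimal unit-of-work sketch: one open connection and one
// transaction per business operation.
public sealed class UnitOfWork : IDisposable
{
    private readonly IDbConnection _connection;
    private IDbTransaction _transaction;

    public UnitOfWork(IDbConnection connection)
    {
        _connection = connection;
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // Dapper calls should pass this transaction explicitly.
    public IDbTransaction Transaction
    {
        get { return _transaction; }
    }

    public void Commit()
    {
        _transaction.Commit();
        _transaction = null;
    }

    public void Dispose()
    {
        if (_transaction != null)
        {
            _transaction.Rollback(); // nothing was committed
            _transaction = null;
        }
        _connection.Close();
    }
}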

Related

Add multiple key values in context.Context from web services API

I have a web application written in Go with multiple modules: one deals with all database-related things, one deals with reports, one contains all the web services, one holds just the business logic and data-integrity validation, and there are several others. So numerous methods and functions are spread across these modules.
Now, the requirement is to use a session in the web services, and we also need to use a transaction in some APIs. The first approach that came to my mind is to change the signatures of the existing methods to accept the session and the transaction (*sql.Tx) (a painful task, but it has to be done anyway!). But now I'm worried: what if something comes up in the future that needs to be passed through all these methods; would I have to go through this cycle of changing method signatures again? That does not seem like a good approach.
Later, I found that context.Context might be a good approach (you are welcome to suggest other approaches too!): for every method call, pass a context parameter as the first argument, so I have to change the method signatures only once. If I go with this approach, can anyone tell me how I would set/pass multiple keys (session, *sql.Tx) in that context object?
(AFAIK, context.Context provides WithValue, but can I use it for multiple keys? And how would I set a key in a nested function call; is that even possible?)
Actually, this post asks two questions:
Should I consider context.Context for my solution? If not, shed some light on another approach.
How do I set multiple keys and values in context.Context?
For your second question, you can group all your key/values in a struct as follows:
type vars struct {
    lock *sync.Mutex
    db   *sql.DB
}
Use an unexported key type rather than a plain string, so no other package can collide with your key:
type varsKey struct{}
Then you can add this struct to a context:
ctx := context.WithValue(context.Background(), varsKey{}, &vars{lock: mylock, db: mydb})
And you can retrieve it:
ctxVars, ok := r.Context().Value(varsKey{}).(*vars)
if !ok {
    err := errors.New("vars missing from request context")
    log.Println(err)
    return err
}
db := ctxVars.db
lock := ctxVars.lock
I hope it helps you.
Finally, I decided to go with the context package solution, after studying the Go context experience reports. I found Dave Cheney's article especially helpful.
I could roll my own context-like solution, the way gorilla did (well, somewhat!). But since Go already has a solution for this, I'll go with the context package.
Right now, I only need the session and the database transaction in each method: the transaction so a method can participate in one if it has begun, and the session for user authentication and authorization.
Having context.Context in every method of the application might be overhead, since I don't need the cancellation, deadline, or timeout functionality at the moment, but it could be helpful in the future.

Flux Dispatcher - View actions vs. Server Actions

Is there any reason, other than semantics, to create different dispatch methods for view and server actions? All tutorials and examples I’ve seen (most notably this) ignore the source constant entirely when listening to dispatched payloads in favor of switching on the payload's action type.
I suppose there is a reason why this pattern is pervasive in flux examples, but I have yet to see a concrete example as to why this is useful. Presumably one could add an additional if or switch on the payload source to determine whether to act in stores, but no examples I've seen consider this constant at all. Any thoughts on this would be much appreciated.
Yes, this was cruft/cargo-culting that came over from a particular Flux project at Facebook, but there is no real reason to do this. If you do need to differentiate between server and view actions, you can just give them different types, or have another property of the action itself to help differentiate them.
When I get time, I plan to rewrite all the examples and documentation to reflect this.

ASP.NET MVC - Repository pattern with Entity Framework

When you develop an ASP.NET application using the repository pattern, does each of your methods create a new entity container instance (context) with a using block, or do you create a class-level/private instance of the container for use by any of the repository methods until the repository itself is disposed? Other than what I note below, what are the advantages/disadvantages? Is there a way to combine the benefits of each of these that I'm just not seeing? Does your repository implement IDisposable, allowing you to create using blocks for instances of your repo?
Multiple containers (vs. single)
Advantages:
Preventing connections from being auto-closed/disposed (will be closed at the end of the using block).
Helps force you to only pull into memory what you need for a particular view/viewmodel, and in less round-trips (you will get a connection error for anything you attempt to lazy load).
Disadvantages:
Access of child entities within the Controller/View is limited to what you called with Include()
For pages like a dashboard index that shows information gathered from many tables (many different repository method calls), we will add the overhead of creating and disposing many entity containers.
If you are instantiating your context in your repository, then you should always do it locally, and wrap it in a using statement.
If you're using Dependency Injection to inject the context, then let your DI container handle calling dispose on the context when the request is done.
Don't instantiate your context directly as a class member, since this will not dispose of the context's resources until garbage collection occurs. If you do, then you will need to implement IDisposable to dispose the context, and make sure that whatever is using your repository disposes of the repository properly.
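A sketch of the method-local variant (MyDbContext and Customer are placeholder types):
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public class CustomerRepository
{
    // The context lives only as long as the method call and is
    // disposed deterministically by the using block.
    public Customer GetById(int id)
    {
        using (var context = new MyDbContext())
        {
            return context.Customers.FirstOrDefault(c => c.Id == id);
        }
    }
}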
I, personally, put my context at the class level in my repository. My primary reason for doing so is because a distinct advantage of the repository pattern is that I can easily swap repositories and take advantage of a different backend. Remember - the purpose of the repository pattern is that you provide an interface that provides back data to some client. If you ever switch your data source, or just want to provide a new data source on the fly (via dependency injection), you've created a much more difficult problem if you do this on a per-method level.
Microsoft's MSDN site has good information on the repository pattern. Hopefully this helps clarify some things.
I disagree with all four points:
Preventing connections from being auto-closed/disposed (will be closed at the end of the using block).
In my opinion it doesn't matter whether you dispose the context at method level, repository-instance level, or request level, as long as the context is disposed by the end of a single request. You can achieve that by wrapping the repository method in a using statement, by implementing IDisposable on the repository class (as you proposed) and wrapping the repository instance in a using statement in the controller action, by instantiating the repository in the controller constructor and disposing it in the Dispose override of the controller class, or by instantiating the context when the request begins and disposing it when the request ends (some dependency injection containers will help with this). Why should the context be "auto-disposed"? In a desktop application it is possible and common to have a context per window/view which might stay open for hours.
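For instance, the controller-managed variant could look like this sketch (here the repository owns a class-level context and is itself IDisposable, reusing the MyDbContext/Customer placeholders from above):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Class-level context variant: the repository owns the context.
public class CustomerRepository : IDisposable
{
    private readonly MyDbContext _context = new MyDbContext();

    public List<Customer> GetAll()
    {
        return _context.Customers.ToList();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}

public class CustomersController : Controller
{
    private readonly CustomerRepository _repository = new CustomerRepository();

    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }

    // MVC calls Dispose when the request ends, which disposes the
    // repository and, through it, the context.
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _repository.Dispose();
        }
        base.Dispose(disposing);
    }
}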
Helps force you to only pull into memory what you need for a particular view/viewmodel, and in less round-trips (you will get a connection error for anything you attempt to lazy load).
Honestly I would enforce this by disabling lazy loading altogether. I don't see any benefit of lazy loading in a web application where the client is disconnected from the server anyway. In your controller actions you always know what you need to load and can use eager or explicit loading. To avoid memory overhead and improve performance, you can always disable change tracking for GET requests because EF can't track changes on a client's web page anyway.
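A sketch of both settings, assuming an EF5/EF6-style DbContext (ShopContext, Order and Customer are placeholders):
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public virtual Customer Customer { get; set; }
}

public class ShopContext : DbContext
{
    public ShopContext()
    {
        // Lazy loading off: every query states explicitly what it loads.
        Configuration.LazyLoadingEnabled = false;
        Configuration.ProxyCreationEnabled = false;
    }

    public DbSet<Order> Orders { get; set; }
}

public static class OrderQueries
{
    // Read-only query for a GET request: eager loading plus no change tracking.
    public static List<Order> OrdersWithCustomers(ShopContext context)
    {
        return context.Orders
            .Include(o => o.Customer)
            .AsNoTracking()
            .ToList();
    }
}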
Access of child entities within the Controller/View is limited to what you called with Include()
Which is an advantage rather than a disadvantage, because you don't get the unwanted surprises of lazy loading. If you need to populate child entities later in the controller action, depending on some condition, you can load them through additional repository methods (LoadNavigationProperty or something similar) with the same or even a new context.
For pages like a dashboard index that shows information gathered from many tables (many different repository method calls), we will add the overhead of creating and disposing many entity containers.
Creating contexts - and I don't think we are talking about hundreds or thousands of instances - is a cheap operation. I would call this a very theoretical overhead which doesn't play a role in practice.
I've used both approaches you mentioned in web applications and also the third option, namely to create a single context per request and inject this same context into every repository/service I need in a controller action. They all three worked for me.
Of course, if you use multiple contexts you have to be careful to do all the work within the same unit of work, to avoid attaching entities to multiple contexts, which leads to well-known exceptions. It's usually not a problem to avoid these situations, but it requires a bit more attention, especially when processing POST requests.
Lately I use a context per request, because it is easier and I just don't see the benefit of very narrow contexts, nor any reason to use more than a single unit of work for the whole request processing. If I needed multiple contexts - for whatever reason - I could always create specialized methods that work with their own context instead of the "default context" of the request.
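With Autofac's MVC integration, for example, the per-request lifetime can be wired up roughly like this (a sketch; the registered types are the placeholders from the sketches above):
using Autofac;
using Autofac.Integration.Mvc;
using System.Web.Mvc;

public static class IocConfig
{
    public static void Register()
    {
        var builder = new ContainerBuilder();

        // One instance per HTTP request; Autofac disposes them
        // automatically when the request lifetime scope ends.
        builder.RegisterType<ShopContext>().AsSelf().InstancePerRequest();
        builder.RegisterType<CustomerRepository>().AsSelf().InstancePerRequest();
        builder.RegisterControllers(typeof(IocConfig).Assembly);

        var container = builder.Build();
        DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
    }
}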

Entity Framework and ObjectContext n-tier architecture

I have an n-tier application based on fairly classic layers: User Interface, Services (WCF), Business Logic, and Data Access.
The database (SQL Server) is queried through Entity Framework. The problem is basically that every call starts from the user interface and goes through all the layers, but to do that I need to create a new ObjectContext each time, for every operation, and that makes performance very bad, because every time I need to reload the metadata and recompile the query.
The most commonly suggested pattern is the one below, and it is what I'm actually doing: creating and passing a new context through the business layer methods each time the service receives a call.
public BusinessObject GetQuery() {
    using (MyObjectContext context = new MyObjectContext()) {
        // ...do something
    }
}
For easy queries I don't see any particular delay and it works fine, but for complex and heavy queries, what should be a 2-second query ends up taking around 15 seconds on each call.
I could make the ObjectContext static, and that would solve the performance issue, but it seems to be discouraged by everyone, not least because I wouldn't be able to access the context from different threads at the same time: multiple concurrent calls raise an exception. I could make it thread-safe, but keeping the same ObjectContext alive for a long time makes it bigger and bigger (and slower), because of the entity references it accumulates with every query it executes.
The architecture I have is, I think, a very common one, so what is the best-known way to implement and use the ObjectContext?
Thank you,
Marco
In a Web context, it's best to use a stateless approach and create an ObjectContext for each request.
The cost of ObjectContext construction is minimal. The metadata is loaded from a global cache, so only the first call has to load it.
A static context is definitely not a good idea. The ObjectContext is not thread-safe, and this will lead to problems when using it in a WCF service with multiple concurrent calls. Making it thread-safe will cost performance, and reusing it across multiple requests can cause subtle errors.
Check this info: How to decide on a lifetime for your ObjectContext
Working with a static object context is not a good idea. A static context will be shared by all users of the web application, meaning that when one user makes modifications to the context, such as calling SaveChanges, all other users of the context will be affected (a problem if they have added or updated data in the context but have not yet called SaveChanges). The best practice when working with an object context is to keep it alive for the duration of the request and use it to perform any atomic business operations. You would want to check out the UnitOfWork and repository patterns:
uow
uow and repository in EF
If you feel you are having performance issues with your queries, and there is a chance that you will reuse a query, I recommend using precompiled LINQ queries. You can check out the links below for more info:
Precompiled LINQ - Julie Lerman
Precompiled LINQ
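A sketch of what a precompiled query looks like with CompiledQuery.Compile, reusing MyObjectContext and BusinessObject from the question (the BusinessObjects entity set is an assumption):
using System;
using System.Data.Objects;
using System.Linq;

public static class Queries
{
    // The LINQ-to-Entities translation is done once and cached;
    // later invocations skip the expensive compilation step.
    public static readonly Func<MyObjectContext, int, BusinessObject> ById =
        CompiledQuery.Compile((MyObjectContext context, int id) =>
            context.BusinessObjects.FirstOrDefault(b => b.Id == id));
}

// Usage inside the per-call context pattern from the question:
// using (var context = new MyObjectContext())
// {
//     var item = Queries.ById(context, 42);
// }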
What you show is the typical pattern for using a context - one per request, similar to using a database connection.
What makes you think the bad performance is related to recreating the context? This is very, very likely not the case. How did you measure this impact?
If your code is so performance-critical that this overhead truly matters, you should not use Entity Framework at all, since there will always be some overhead, even if it is very little in the general case. I would start by focusing on your data model and the underlying data store, which have a much larger impact on query performance. Have you optimized your queries? Did you put indexes everywhere you need them? Can you denormalize the data to remove joins?

Why use service layer?

I looked at the example on http://solitarygeek.com/java/developing-a-simple-java-application-with-spring/comment-page-1#comment-1639
I'm trying to figure out why the service layer is needed in the first place in the example he provides. If you took it out, then in your client, you could just do:
UserDao userDao = new UserDaoImpl();
Iterator users = userDao.getUsers();
while (…) {
…
}
It seems like the service layer is simply a wrapper around the DAO. Can someone give me a case where things could get messy if the service layer were removed? I just don’t see the point in having the service layer to begin with.
Having the service layer be a mere wrapper around the DAO is a common anti-pattern, and in the example you give it is certainly not very useful. Using a service layer gives you several benefits:
you get to make a clear distinction between web type activity best done in the controller and generic business logic that is not web-related. You can test service-related business logic separately from controller logic.
you get to specify transaction behavior so if you have calls to multiple data access objects you can specify that they occur within the same transaction. In your example there's an initial call to a dao followed by a loop, which could presumably contain more dao calls. Keeping those calls within one transaction means that the database does less work (it doesn't have to create a new transaction for every call to a Dao) but more importantly it means the data retrieved is going to be more consistent.
you can nest services so that if one has different transactional behavior (requires its own transaction) you can enforce that.
you can use the postCommit interceptor to do notification stuff like sending emails, so that doesn't junk up the controller.
Typically I have services that encompass the use cases for a single type of user; each method on the service is a single action (work to be done in a single request-response cycle) that the user would be performing, and unlike your example there is typically more than a simple data access object call going on in each one.
Take a look at the following article:
http://www.martinfowler.com/bliki/AnemicDomainModel.html
It all depends on where you want to put your logic - in your services or your domain objects.
The service layer approach is appropriate if you have a complex architecture and require different interfaces to your DAOs and data. It's also good for providing coarse-grained methods for clients to call, which in turn call out to multiple DAOs to gather data.
However, in most cases what you want is a simple architecture, so skip the service layer and look at a domain-model approach. Domain-Driven Design by Eric Evans and the InfoQ article here expand on this:
http://www.infoq.com/articles/ddd-in-practice
Using a service layer is a well-accepted design pattern in the Java community. Yes, you could use the DAO implementation directly, but what if you want to apply some business rules?
Say you want to perform some checks before allowing a user to log into the system. Where would you put that logic? The service layer is also the place for transaction demarcation.
It's generally good to keep your DAO layer clean and lean. I suggest you read the article "Don't repeat the DAO". If you follow the principles in that article, you won't end up writing any implementation for your DAOs.
Also, kindly note that the scope of that blog post was to help beginners with Spring. Spring is so powerful that you can bend it to suit your needs, with powerful concepts like AOP, etc.
Regards,
James
