How to configure LifeTimeManager for IContainerRegistry registrations in PrismLibrary?

How to configure LifeTimeManager for IContainerRegistry registrations in Prism Library?
Am I missing any using statement?

Prism’s IContainerRegistry is a general abstraction layer meant to decouple the underlying DI container from your app and to simplify service registration for the most common scenarios.
Prism only has a concept of Transient and Singleton lifetimes.
// Registers IFoo with a Transient Lifetime
containerRegistry.Register<IFoo, Foo>();
// Registers IFoo with a Singleton Lifetime
containerRegistry.RegisterSingleton<IFoo, Foo>();
// Registers an instance as a Singleton
containerRegistry.RegisterInstance<IFoo>(new Foo());
As I mentioned, this is meant to cover most of your needs, though it will not cover every possible need. For those scenarios you can access the underlying DI container and use any of its APIs.
containerRegistry.GetContainer().SomeContainerSpecificAPI();
I should also note that this existing API is being considered for some expansion in 7.2, which you can track in Issue 1654.

Related

Spring bean scoping

I've been googling and I just realized something really odd (at least to me): apparently it's recommended to set Spring objects as singletons. I have several questions:
1. Logically, it would affect performance, right? If 100 users (total exaggeration) are using the same @Service object, wouldn't the other 99 have to queue in line to get that @Service? (Thus, a performance issue.)
2. What if, by some diabolical design, those Spring objects have state (a synchronization problem)? This usually happens with ORM, since a BaseDAOImpl usually has a protected injected SessionFactory.
3. Speaking of the injected SessionFactory, I thought annotations are not inherited? Could somebody give an explanation about this?
Thanks
Apparently it's recommended to set Spring objects as singletons.
It's not necessarily recommended, but it is the default.
Logically, it would affect performance, right? If 100 users (total exaggeration) are using the same @Service object, wouldn't the other 99 have to queue in line to get that @Service? (Thus, a performance issue.)
First of all, forget about users. Think about objects, threads, and object monitors. A thread will block and wait if it tries to acquire an object monitor that is owned by another thread. This is the basis of Java's synchronization concept. You use synchronization to achieve mutual exclusion from a critical section of code.
If your bean is stateless, which a @Service bean probably should be (just like @Controller beans), then there is no critical section. No object is shared between threads using the same @Service bean. As such, there are no synchronized blocks involved and no thread waits on any other thread. There would be no performance degradation.
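To make that concrete, here is a minimal plain-Java sketch (the class names are invented and no Spring is involved): a stateless singleton service invoked from many threads at once needs no synchronization, because each call touches only its own parameters and local variables.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StatelessSingletonDemo {

    // A stateless "service": no fields, so no shared mutable state.
    // Many threads can call price() concurrently without locking.
    static class PricingService {
        int price(int quantity, int unitPrice) {
            return quantity * unitPrice; // uses only parameters and locals
        }
    }

    public static void main(String[] args) throws Exception {
        PricingService service = new PricingService(); // the one "singleton" instance
        ExecutorService pool = Executors.newFixedThreadPool(8);

        // 100 concurrent "users" all hitting the same instance.
        List<Future<Integer>> results = IntStream.range(0, 100)
                .mapToObj(i -> pool.submit((Callable<Integer>) () -> service.price(i, 2)))
                .collect(Collectors.toList());

        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get();
        }
        pool.shutdown();

        // Sum of 2*i for i in [0, 100) is 2 * 4950
        System.out.println(total);
    }
}
```

No call ever blocks on another, because there is nothing to contend for.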
What if, by some diabolical design, those Spring objects have state (a synchronization problem)? This usually happens with ORM, since a BaseDAOImpl usually has a protected injected SessionFactory.
A typical design would have you use SessionFactory#getCurrentSession(), which returns a Session bound to the current thread via ThreadLocal. So again, these libraries are well written. There's almost always a way to avoid concurrency issues, either through a ThreadLocal design as above or by playing with bean scopes.
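The thread-binding idea can be sketched in a few lines of plain Java (the names are made up; this is not Hibernate's actual implementation): each thread asking the factory for the "current" session lazily gets its own instance, so no Session is ever shared across threads.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CurrentSessionDemo {

    static class Session {
        final int id;
        Session(int id) { this.id = id; }
    }

    // Mimics SessionFactory#getCurrentSession(): one Session per thread,
    // created lazily the first time that thread asks for it.
    static class SessionFactory {
        private final AtomicInteger counter = new AtomicInteger();
        private final ThreadLocal<Session> current =
                ThreadLocal.withInitial(() -> new Session(counter.incrementAndGet()));

        Session getCurrentSession() {
            return current.get();
        }
    }

    public static void main(String[] args) throws Exception {
        SessionFactory factory = new SessionFactory();

        // Within one thread, repeated calls return the same session.
        Session a = factory.getCurrentSession();
        Session b = factory.getCurrentSession();
        System.out.println("same on one thread: " + (a == b));

        // A different thread gets a different session from the same factory.
        Session[] other = new Session[1];
        Thread t = new Thread(() -> other[0] = factory.getCurrentSession());
        t.start();
        t.join();
        System.out.println("distinct across threads: " + (a != other[0]));
    }
}
```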
If you can't, you should write your code so that the bottleneck is as small as possible, i.e., just the critical section.
Speaking of the injected SessionFactory, I thought annotations are not inherited? Could somebody give an explanation about this?
I'm not sure what you mean by this. You are correct that annotations are not inherited (but methods are inherited along with their annotations). That might not apply to the situation you are asking about, though, so please clarify.
A service object being a singleton doesn't mean that access to it is synchronized; multiple users can invoke it simultaneously, just like a single servlet instance is used by many concurrent users in a web application. You only need to ensure that there is no state in your singleton object.
SessionFactory is a thread-safe object since it is immutable; Session is not thread safe. And since a SessionFactory is a heavy object, it's recommended to share one SessionFactory per application but to use multiple Sessions.
I'm not clear about your third point; can you please elaborate a little?
Hope it helps.

Inject Session object to DAO bean instead of Session Factory?

In our application we are using Spring and Hibernate.
In all the DAO classes we have SessionFactory autowired, and each of the DAO methods calls getCurrentSession().
My question is: why not inject a Session object instead of the SessionFactory, in prototype scope? This would save us the calls to getCurrentSession().
I think the first approach is correct, but I'm looking for concrete scenarios where the second approach would throw errors or perform badly.
When you define a bean with prototype scope, a new instance is created for each place it needs to be injected into. So each DAO will get a different instance of Session, but all invocations of methods on that DAO will end up using the same session. Since a Session is not thread safe and should not be shared across multiple threads, this will be an issue.
For most situations the session should be transaction scope, i.e., a new session is opened when the transaction starts and then is closed automatically once the transaction finishes. In a few cases it might have to be extended to request scope.
If you want to avoid calling SessionFactory.getCurrentSession(), then you will need to define your own scope implementation to achieve that.
This is something that is already implemented for JPA using proxies. In the case of JPA, an EntityManager is injected instead of an EntityManagerFactory. Instead of @Autowired there is a separate @PersistenceContext annotation. A proxy is created and injected during initialization. When any method is invoked, the proxy gets hold of the actual EntityManager implementation (using something similar to SessionFactory.getCurrentSession()) and delegates to it.
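The delegation trick can be sketched with a JDK dynamic proxy (the interface and names here are invented for illustration; Spring's real shared-EntityManager proxy is considerably more involved): the injected object is a proxy, and each call resolves the "current" instance before delegating to it.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

public class EntityManagerProxyDemo {

    // Pared-down stand-in for the EntityManager interface.
    interface EntityManager {
        String find(String id);
    }

    static class RealEntityManager implements EntityManager {
        private final String name;
        RealEntityManager(String name) { this.name = name; }
        public String find(String id) { return name + " found " + id; }
    }

    // Builds a proxy that resolves the "current" EntityManager on every call,
    // roughly what a container does with a thread/transaction-bound lookup.
    static EntityManager proxyFor(Supplier<EntityManager> currentLookup) {
        InvocationHandler handler =
                (proxy, method, args) -> method.invoke(currentLookup.get(), args);
        return (EntityManager) Proxy.newProxyInstance(
                EntityManager.class.getClassLoader(),
                new Class<?>[] { EntityManager.class },
                handler);
    }

    public static void main(String[] args) {
        // In a real container the lookup would consult a ThreadLocal or the
        // active transaction; here it is a plain supplier for brevity.
        EntityManager em = proxyFor(() -> new RealEntityManager("em1"));
        System.out.println(em.find("42"));
    }
}
```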
A similar thing can be implemented for Hibernate as well, but the additional complexity is not worth it. It is much simpler to define a getSession() method in a BaseDAO which internally calls SessionFactory.getCurrentSession(). With this, the code using the session is identical to what it would be with an injected session.
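A minimal sketch of that BaseDAO idea (Session and SessionFactory here are pared-down stand-ins for the Hibernate types, not the real interfaces):

```java
public class BaseDaoDemo {

    // Pared-down stand-ins for org.hibernate.Session / SessionFactory.
    interface Session { String load(String id); }
    interface SessionFactory { Session getCurrentSession(); }

    // Every DAO extends this and calls getSession() instead of having
    // a Session injected directly.
    static abstract class BaseDao {
        private final SessionFactory sessionFactory;
        BaseDao(SessionFactory sessionFactory) { this.sessionFactory = sessionFactory; }
        protected Session getSession() {
            return sessionFactory.getCurrentSession();
        }
    }

    static class PersonDao extends BaseDao {
        PersonDao(SessionFactory sf) { super(sf); }
        String findPerson(String id) {
            // Reads exactly as if a Session had been injected.
            return getSession().load(id);
        }
    }

    public static void main(String[] args) {
        // Stub factory/session wired up with lambdas for the demo.
        SessionFactory sf = () -> id -> "person:" + id;
        System.out.println(new PersonDao(sf).findPerson("7"));
    }
}
```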
Injecting prototype-scoped sessions means that each one of your DAO objects will, by definition, get its own Session... On the other hand, SessionFactory gives you the power to open and share sessions at will.
In fact, getCurrentSession() will not open a new Session on every call... Instead, it will reuse sessions bound to the current session context (e.g., thread, JTA transaction, or externally managed context).
So let's think about it; assume that in your business layer there is an operation that needs to read and update several database tables (which means interacting, directly or indirectly, with several DAOs)... Pretty common scenario, right? Customarily, when this kind of operation fails you will want to roll back everything that happened in the current operation, right? So, for this particular case, which strategy seems appropriate?
Spanning several sessions, each one managing its own kind of objects and bound to different transactions.
Having a single session managing the objects related to this operation, demarcating transactions according to your business needs.
In brief, sharing sessions and demarcating transactions effectively will not only improve your application performance, it is part of the functionality of your application.
I would strongly recommend you read Chapter 2 and Chapter 13 of the Hibernate Core Reference Manual to better understand the roles that SessionFactory, Session, and Transaction play within the framework. It will also teach you about units of work as well as popular session patterns and anti-patterns.

ASP.NET MVC - Repository pattern with Entity Framework

When you develop an ASP.NET application using the repository pattern, do each of your methods create a new entity container instance (context) with a using block for each method, or do you create a class-level/private instance of the container for use by any of the repository methods until the repository itself is disposed? Other than what I note below, what are the advantages/disadvantages? Is there a way to combine the benefits of each of these that I'm just not seeing? Does your repository implement IDisposable, allowing you to create using blocks for instances of your repo?
Multiple containers (vs. single)
Advantages:
Preventing connections from being auto-closed/disposed (will be closed at the end of the using block).
Helps force you to only pull into memory what you need for a particular view/viewmodel, and in less round-trips (you will get a connection error for anything you attempt to lazy load).
Disadvantages:
Access of child entities within the Controller/View is limited to what you called with Include()
For pages like a dashboard index that shows information gathered from many tables (many different repository method calls), we will add the overhead of creating and disposing many entity containers.
If you are instantiating your context in your repository, then you should always do it locally, and wrap it in a using statement.
If you're using Dependency Injection to inject the context, then let your DI container handle calling dispose on the context when the request is done.
Don't instantiate your context directly as a class member, since this will not dispose of the context's resources until garbage collection occurs. If you do, then you will need to implement IDisposable to dispose of the context, and make sure that whatever is using your repository properly disposes of the repository.
I, personally, put my context at the class level in my repository. My primary reason for doing so is because a distinct advantage of the repository pattern is that I can easily swap repositories and take advantage of a different backend. Remember - the purpose of the repository pattern is that you provide an interface that provides back data to some client. If you ever switch your data source, or just want to provide a new data source on the fly (via dependency injection), you've created a much more difficult problem if you do this on a per-method level.
Microsoft's MSDN site has good information on the repository pattern. Hopefully this helps clarify some things.
I disagree with all four points:
Preventing connections from being auto-closed/disposed (will be closed at the end of the using block).
In my opinion it doesn't matter whether you dispose the context at method level, repository-instance level, or request level. You do, of course, have to dispose the context at the end of a single request. You can do that by wrapping the repository method in a using statement; by implementing IDisposable on the repository class (as you proposed) and wrapping the repository instance in a using statement in the controller action; by instantiating the repository in the controller constructor and disposing it in the Dispose override of the controller class; or by instantiating the context when the request begins and disposing it when the request ends (some dependency injection containers will help do this work). Why should the context be "auto-disposed"? In desktop applications it is possible and common to have a context per window/view, which might be open for hours.
Helps force you to only pull into memory what you need for a particular view/viewmodel, and in less round-trips (you will get a connection error for anything you attempt to lazy load).
Honestly I would enforce this by disabling lazy loading altogether. I don't see any benefit of lazy loading in a web application where the client is disconnected from the server anyway. In your controller actions you always know what you need to load and can use eager or explicit loading. To avoid memory overhead and improve performance, you can always disable change tracking for GET requests because EF can't track changes on a client's web page anyway.
Access of child entities within the Controller/View is limited to what you called with Include()
Which is rather an advantage than a disadvantage, because you don't get the unwanted surprises of lazy loading. If you need to populate child entities later in the controller actions, depending on some condition, you can load them through additional repository methods (LoadNavigationProperty or something) with the same or even a new context.
For pages like a dashboard index that shows information gathered from many tables (many different repository method calls), we will add the overhead of creating and disposing many entity containers.
Creating contexts - and I don't think we are talking about hundreds or thousands of instances - is a cheap operation. I would call this a very theoretical overhead which doesn't play a role in practice.
I've used both approaches you mentioned in web applications and also the third option, namely to create a single context per request and inject this same context into every repository/service I need in a controller action. They all three worked for me.
Of course, if you use multiple contexts you have to be careful to do all the work in the same unit of work, to avoid attaching entities to multiple contexts, which leads to well-known exceptions. It's usually not a problem to avoid these situations, but it requires a bit more attention, especially when processing POST requests.
Lately I use a context per request, because it is easier, I just don't see the benefit of having very narrow contexts, and I see no reason to use more than one unit of work for the whole request processing. If I needed multiple contexts - for whatever reason - I could always create specialized methods which work with their own context instead of the "default context" of the request.

spring is it advisable to make all domain classes prototype

In most cases only service classes are managed by Spring, and they are singletons. In some situations domain code needs injection, which won't work unless it's managed by Spring. That being said, is it advisable, and not performance-intensive, to declare all your domain classes as Spring beans with prototype scope, so that anytime you want to do
Person p = new Person();
you instead do
Person p = ctx.getBean("person");
Any help on the pros and cons would be appreciated.
There is obviously more overhead in obtaining a prototype bean than simply instantiating directly via the new keyword (any dependency injection, lifecycle callbacks, etc. performed by the Spring IoC container). While likely not significant for a single instantiation, if you performed this in a loop, you could see performance issues.
If, however, you need any singleton beans (typically services) or resources (such as a DataSource), then you will prefer to use the prototype bean. Any additional dependencies will be wired in automatically.
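The scope semantics behind that trade-off can be sketched with a toy container (a few lines, nothing like the real Spring IoC container; all names are invented): a singleton registration memoizes one instance, while a prototype registration re-runs its factory on every lookup.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ScopeDemo {

    static class Person { }

    // Toy container: a name -> supplier map. Singleton beans are registered
    // as a memoized supplier; prototype beans re-run their supplier per lookup.
    static class TinyContainer {
        private final Map<String, Supplier<?>> beans = new HashMap<>();

        void registerPrototype(String name, Supplier<?> factory) {
            beans.put(name, factory);
        }

        void registerSingleton(String name, Supplier<?> factory) {
            Object instance = factory.get(); // created once, up front
            beans.put(name, () -> instance);
        }

        Object getBean(String name) {
            return beans.get(name).get();
        }
    }

    public static void main(String[] args) {
        TinyContainer ctx = new TinyContainer();
        ctx.registerPrototype("person", Person::new);
        ctx.registerSingleton("service", Object::new);

        // Prototype: a fresh instance per getBean call (plus, in real Spring,
        // dependency injection and lifecycle callbacks each time).
        System.out.println("prototype distinct: "
                + (ctx.getBean("person") != ctx.getBean("person")));
        // Singleton: the same instance every time.
        System.out.println("singleton shared: "
                + (ctx.getBean("service") == ctx.getBean("service")));
    }
}
```

The per-lookup work on the prototype path is exactly where the extra overhead in a tight loop would come from.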
Apart from performance considerations, your choice may also depend on your design. If you follow a "traditional" architecture with a service tier and data access objects that persist domain objects, then everything from a Spring point of view is generally stateless. Your services and data access objects are singletons using domain objects that are POJOs. Here you will rarely need a prototype bean.
If on the other hand you follow a more object-oriented approach where an object has a stateless factory (to allow instances to be fetched or created) and the object is then able to persist itself (say with a 'save' method), then nearly all your domain objects may be prototype beans.
As in nearly all decisions, there will be trade-offs and no one right answer.

Java/JSF/Spring/WebFlow DDD architecture design question

I'm currently porting a massive, ancient Oracle Forms app over to JSF, and I need to make decisions on the domain model.
I'm locked into using the Spring JDBC templates (no ORM) and utilizing a DAO layer to deal with a baffling legacy database schema, which must have been designed by first-year co-ops. For the domain model I would really like to make things highly OO. For instance, presume there is a domain object Plan. The goal would be to OO-ify it so it can do PlanInstance.load(byId("12345")), PlanInstance.save(), .delete(), .create(), etc. But then the situation arises: because these domain objects contain references to stateful beans (like repositories, for instance), they can't be serialized. How does one overcome this?
Initially I started splitting things up like this: PlanData (stateful, session-scoped), which is used by PlanManager (stateless, singleton). This way the common controller code is extracted and is prevented from being duplicated in each session-scoped bean, and, most importantly, it allows the session-scoped beans to be serialized.
At this point I really need to structure it OO-style to minimize complexity, but I just don't know how I can have an object in session scope when it has references to stateful objects (due to serialization errors).
The only possibility I can think of is making the stateful refs transient and devising some sort of mechanism to re-inject the dependencies when a bean is deserialized. Can anyone provide me with any insight into solutions to this dilemma? There must be some sort of pattern/practice that solves this which I am probably just missing.
I would keep the state and management of that state separate (i.e. Plan vs PlanManager.) By using the data access pattern (PlanManager), you keep the door open to using ORM later, (say) should the db schema be reworked in future. Putting state and state management together in the same class (PlanInstance) goes against the OO principle of single responsibility.
The stateful session-scoped beans are not themselves serialized to your persistent store (though you might serialize them to support session failover). The controller and session beans maintain references to your data beans.
The stateful beans decide when to load, invoking logic, change state and save your data objects. They provide the context for your domain objects. In some designs (often cited as the Anemic Domain Model) the domain objects have no behaviour, and all the logic exists in stateless services. If I understand correctly, you want encapsulation of state and behaviour in your domain objects, and that the domain objects need to use stateful session beans to perform their work. Where possible, try to factor the functionality in the domain objects to not rely upon session state (will make testing simpler), or to push that functionality out into a service bean that is invoked with the appropriate session state. If you have no choice but to use references to stateful beans from your domain model behaviour, the stateful beans can provide the necessary state/repository references as parameters to method calls on your domain objects. That way, the domain objects are still stateless, but can implement domain logic using stateful beans.
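The "pass the stateful collaborators in as method parameters" idea can be sketched in plain Java (all names here are invented): the domain object carries only its own serializable state, and any repository it needs arrives per call.

```java
public class StatelessDomainDemo {

    interface PlanRepository { void save(Plan plan); }

    // Domain object holds its own state but no references to stateful beans;
    // collaborators are handed in per call, so the object stays serializable.
    static class Plan implements java.io.Serializable {
        private final String id;
        Plan(String id) { this.id = id; }
        String id() { return id; }

        // Domain logic may run before/after delegating to the repository.
        void saveWith(PlanRepository repository) {
            repository.save(this);
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // Stand-in repository that just records what it was asked to save.
        PlanRepository repo = plan -> log.append("saved ").append(plan.id());
        new Plan("12345").saveWith(repo);
        System.out.println(log);
    }
}
```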
All the while, consider the single responsibility of the domain object. At some point it may become clear that the domain logic can be split into layers (say, low-level and higher-level logic) which may make the need for stateful beans in the domain objects unnecessary.
