For example, I can implement BeanNameAware and its setBeanName() method to change some properties of bean objects.
But when do I need to do this? I want to know the application scenarios for this approach, or the benefits it provides.
After a few days of study I have a new understanding: these Aware interfaces are similar to the hooks in the bean's life cycle, such as init-method or destroy-method. I can use these hook methods to execute different tasks when the bean is initialized or destroyed, for example opening a database connection in the initialization method or closing that connection in the destroy method.
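As a minimal sketch of how these callbacks can be combined (the class below and its behaviour are invented purely for illustration), a bean can implement BeanNameAware to learn the name it was registered under, and the init/destroy callbacks to acquire and release a resource:

import org.springframework.beans.factory.BeanNameAware;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;

// Hypothetical bean, for illustration only.
public class AuditedConnectionHolder implements BeanNameAware, InitializingBean, DisposableBean {

    private String beanName;

    // Spring calls this after setting the bean's properties, passing the id it was registered under.
    @Override
    public void setBeanName(String name) {
        this.beanName = name;
    }

    // Plays the role of init-method: acquire resources once the bean is fully configured.
    @Override
    public void afterPropertiesSet() {
        System.out.println("Opening connection for bean '" + beanName + "'");
    }

    // Plays the role of destroy-method: release resources when the container shuts down.
    @Override
    public void destroy() {
        System.out.println("Closing connection for bean '" + beanName + "'");
    }
}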
I've been googling and I just realized something really odd (at least to me): apparently it's recommended to set up Spring objects as singletons. I have several questions:
Logically, it would affect performance, right? If 100 users (total exaggeration) are using the same @Service object, wouldn't that mean the other 99 must queue in line to get that @Service (thus, a performance issue)?
What if, by some diabolical design, those Spring objects have state (a synchronization problem)? This usually happens with ORM, since a BaseDAOImpl usually has a protected injected SessionFactory.
Speaking of the injected SessionFactory, I thought annotations are not inherited? Could somebody give an explanation of this?
Thanks.
apparently it's recommended to set spring objects as singletons.
It's not necessarily recommended, but it is the default.
Logically, it would affect performance right? If 100 users (total exaggeration) are using the same @Service objects, wouldn't that mean the other 99 must queue in line to get that @Service? (thus, performance issue)
First of all, forget about users. Think about objects, threads, and object monitors. A thread will block and wait if it tries to acquire an object monitor that is owned by another thread. This is the basis of Java's synchronization concept. You use synchronization to achieve mutual exclusion from a critical section of code.
If your bean is stateless, which a @Service bean probably should be (just like @Controller beans), then there is no critical section. No object is shared between threads using the same @Service bean. As such, there are no synchronized blocks involved and no thread waits on any other thread. There wouldn't be any performance degradation.
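To make that concrete, here is a hedged sketch (the service and its method are invented for illustration) of a stateless @Service: every piece of data lives in parameters and local variables, so concurrent callers never share anything and no synchronization is needed:

import org.springframework.stereotype.Service;

// Hypothetical stateless service: no mutable instance fields, hence no critical section.
@Service
public class PriceCalculationService {

    public double applyDiscount(double price, double discountPercent) {
        // Locals are confined to the calling thread, so 100 concurrent users never block each other.
        double discount = price * discountPercent / 100.0;
        return price - discount;
    }
}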
What if by some diabolical design, those Spring objects have state (synchronization problem)? This usually happens with ORM since the BaseDAOImpl usually has a protected injected SessionFactory
A typical design would have you use SessionFactory#getCurrentSession(), which returns a Session bound to the current thread using a ThreadLocal. So again, these libraries are well written. There's almost always a way to avoid any concurrency issue, either through a ThreadLocal design as above or by playing with bean scopes.
If you can't, you should write your code so that the bottleneck is as small as possible, i.e., just the critical section.
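As a small sketch of that last point (the counter below is an invented piece of shared state), keep the synchronized block around the shared mutation only, never around the expensive work:

// Illustration only: the lock protects just the shared counter, not the whole method.
public class ReportService {

    private final Object lock = new Object();
    private long reportsGenerated = 0;

    public String generateReport(String input) {
        // Expensive, thread-confined work happens outside the lock.
        String report = "Report for " + input;

        // Critical section kept as small as possible.
        synchronized (lock) {
            reportsGenerated++;
        }
        return report;
    }
}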
Speaking of injected SessionFactory, I thought annotations are not inherited? Could somebody give explanation about this?
I'm not sure what you mean by this. You are correct that annotations are not inherited (but methods are inherited together with their annotations). That might not apply to the situation you are asking about, though, so please clarify.
A service object being a singleton doesn't mean that access to it is synchronized; multiple users can invoke it simultaneously, just like a single servlet instance is used by many concurrent users in a web application. You only need to ensure that there is no state in your singleton object.
SessionFactory is a thread-safe object because it is immutable; Session is not thread safe. And since the session factory is a heavyweight object, it's recommended to share one SessionFactory per application but to use multiple Sessions.
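A rough sketch of that recommendation (the configuration details are omitted and the class is invented): build one SessionFactory at startup and open a short-lived Session per unit of work:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Illustration only: one shared, thread-safe SessionFactory; a fresh Session per operation.
public class SessionFactoryHolder {

    // Heavyweight, built once, safe to share across threads.
    private static final SessionFactory SESSION_FACTORY =
            new Configuration().configure().buildSessionFactory();

    public void doUnitOfWork() {
        // Sessions are cheap but NOT thread safe, so use one per unit of work and close it.
        Session session = SESSION_FACTORY.openSession();
        try {
            session.beginTransaction();
            // ... load and save entities here ...
            session.getTransaction().commit();
        } finally {
            session.close();
        }
    }
}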
I'm not clear about your third point; can you please elaborate a little?
Hope it helps
In our application we are using Spring and Hibernate.
In all the DAO classes we have the SessionFactory autowired, and each of the DAO methods calls the getCurrentSession() method.
The question I have is: why don't we inject a Session object in prototype scope instead of the SessionFactory? That would save us the call to getCurrentSession().
I think the first approach is correct, but I'm looking for concrete scenarios where the second approach would throw errors or perhaps perform badly.
When you define a bean with prototype scope, a new instance is created for each place it needs to be injected into. So each DAO will get a different instance of Session, but all invocations of methods on that DAO will end up using the same session. Since a Session is not thread safe and should not be shared across multiple threads, this will be an issue.
For most situations the session should be transaction scoped, i.e., a new session is opened when the transaction starts and is then closed automatically once the transaction finishes. In a few cases it might have to be extended to request scope.
If you want to avoid calling SessionFactory.getCurrentSession(), then you will need to define your own scope implementation to achieve that.
This is something that is already implemented for JPA using proxies. In the case of JPA, an EntityManager is injected instead of an EntityManagerFactory, and instead of @Autowired there is the @PersistenceContext annotation. A proxy is created and injected during initialization. When any method is invoked, the proxy gets hold of the actual EntityManager implementation (using something similar to SessionFactory.getCurrentSession()) and delegates to it.
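A minimal sketch of that JPA arrangement (the DAO class itself is invented): the injected EntityManager is a thread-safe proxy that delegates each call to the EntityManager bound to the current transaction:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;

// Illustration: the field holds a proxy, not a real EntityManager; each call is routed
// to the persistence context of the current transaction.
@Repository
public class GenericJpaDao {

    @PersistenceContext
    private EntityManager entityManager;

    public <T> T find(Class<T> entityType, Long id) {
        return entityManager.find(entityType, id);
    }
}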
A similar thing could be implemented for Hibernate as well, but the additional complexity is not worth it. It is much simpler to define a getSession() method in a BaseDAO which internally calls SessionFactory.getCurrentSession(). With this, the code using the session is practically identical to having the session injected.
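A hedged sketch of that BaseDAO idea (class names are illustrative):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;

// Illustration: subclasses call getSession() wherever they would otherwise use an injected Session.
public abstract class BaseDAO {

    @Autowired
    private SessionFactory sessionFactory;

    // Returns the Session bound to the current transaction/thread.
    protected Session getSession() {
        return sessionFactory.getCurrentSession();
    }
}

A concrete DAO then simply calls getSession() inside its methods, and the code reads the same as if a Session had been injected.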
Injecting prototype sessions means that each one of your DAO objects will, by definition, get its own Session... On the other hand, the SessionFactory gives you the power to open and share sessions at will.
In fact, getCurrentSession() will not open a new Session on every call... Instead, it will reuse sessions bound to the current session context (e.g., thread, JTA transaction, or an externally managed context).
So let's think about it; assume that in your business layer there is an operation that needs to read and update several database tables (which means interacting, directly or indirectly, with several DAOs)... Pretty common scenario, right? Customarily, when this kind of operation fails you will want to roll back everything that happened in the current operation, right? So, for this particular case, which strategy seems appropriate?
Spanning several sessions, each one managing its own kind of objects and bound to a different transaction, or
Having a single session manage the objects related to this operation, demarcating the transactions according to your business needs.
In brief, sharing sessions and demarcating transactions effectively will not only improve your application's performance; it is part of the functionality of your application.
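As an illustration of the second strategy (the service, DAOs, and method names are invented; the DAO interfaces stand in for classes that use SessionFactory.getCurrentSession() internally), a single transactional service method lets both DAOs share the current session, so a failure anywhere rolls back the whole operation:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical collaborators; imagine their implementations extend a BaseDAO as sketched earlier.
interface EmployeeDAO { void markUpdated(long employeeId); }
interface SalaryDAO { void increaseSalary(long employeeId, double percent); }

// Illustration: one transaction, one current Session shared by both DAOs.
@Service
public class PayrollService {

    private final EmployeeDAO employeeDAO;
    private final SalaryDAO salaryDAO;

    @Autowired
    public PayrollService(EmployeeDAO employeeDAO, SalaryDAO salaryDAO) {
        this.employeeDAO = employeeDAO;
        this.salaryDAO = salaryDAO;
    }

    @Transactional
    public void applyRaise(long employeeId, double percent) {
        employeeDAO.markUpdated(employeeId);           // uses the current Session
        salaryDAO.increaseSalary(employeeId, percent); // same Session, same transaction
    }
}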
I would strongly recommend that you read Chapter 2 and Chapter 13 of the Hibernate Core Reference Manual to better understand the roles that SessionFactory, Session, and Transaction play within the framework. It will also teach you about units of work, as well as popular session patterns and anti-patterns.
In a Grails app, the default behaviour of service methods is that they are transactional and the transaction is automatically rolled-back if an unchecked exception is thrown. However, in Groovy one is not forced to handle (or rethrow) checked exceptions, so there's a risk that if a service method throws a checked exception, the transaction will not be rolled back. On account of this, it seems advisable to annotate every Grails service class
@Transactional(rollbackFor = Throwable.class)
class MyService {
    void writeSomething() {
    }
}
Assume I have other methods in MyService, one of which only reads the DB and another which doesn't touch the DB at all; are the following annotations correct?
@Transactional(readOnly = true)
void readSomething() {}

// Maybe this should be propagation = Propagation.NOT_SUPPORTED instead?
@Transactional(propagation = Propagation.SUPPORTS)
void dontReadOrWrite() {}
In order to answer this question, I guess you'll need to know what my intention is:
If an exception is thrown from any method and there's a transaction in progress, it will be rolled back. For example, if writeSomething() calls dontReadOrWrite(), and an exception is thrown from the latter, the transaction started by the former will be rolled back. I'm assuming that the rollbackFor class-level attribute is inherited by individual methods unless they explicitly override it.
If there's no transaction in progress, one will not be started for methods like dontReadOrWrite().
If no transaction is in progress when readSomething() is called, a read-only transaction will be started. If a read-write transaction is in progress, it will participate in this transaction.
Your code is right as far as it goes: you do want to use the Spring @Transactional annotation on individual methods in your service class to get the granularity you're looking for. You're right that you want SUPPORTS for dontReadOrWrite() (NOT_SUPPORTED will suspend an existing transaction, which won't buy you anything based on what you've described and will require your software to spend cycles, so there's pain for no gain), and you're right that you want the default propagation behavior (REQUIRED) for readSomething().
But an important thing to keep in mind with Spring transactional behavior is that Spring implements transaction management by wrapping your class in a proxy that does the appropriate transaction setup, invokes your method, and then does the appropriate transaction tear-down when control returns. And (crucially), this transaction-management code is only invoked when you call the method on the proxy, which doesn't happen if writeSomething() directly calls dontReadOrWrite() as in your first bullet.
If you need different transactional behavior on a method that's called by another method, you've got two choices that I know of if you want to keep using Spring's @Transactional annotations for transaction management:
Move the method being called by the other into a different service class, which will be accessed from your original service class via the Spring proxy.
Leave the method where it is. Declare a member variable in your service class to be of the same type as your service class's interface and make it @Autowired, which will give you a reference to your service class's Spring proxy object. Then when you want to invoke your method with the different transactional behavior, do it on that member variable rather than directly, and the Spring transaction code will fire as you want it to.
Approach #1 is great if the two methods really aren't related anyway, because it solves your problem without confusing whoever ends up maintaining your code, and there's no way to accidentally forget to invoke the transaction-enabled method.
Approach #2 is usually the better option, assuming that your methods are all in the same service for a reason and that you wouldn't really want to split them out. But it's confusing to a maintainer who doesn't understand this wrinkle of Spring transactions, and you have to remember to invoke it that way in each place you call it, so there's a price to it. I'm usually willing to pay that price to not splinter my service classes unnaturally, but as always, it'll depend on your situation.
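A hedged sketch of approach #2 (shown in plain Spring/Java rather than Groovy, with an invented interface; whether plain field self-injection works without extra configuration such as @Lazy depends on your Spring version and setup):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service interface; Spring proxies the implementation behind it.
interface MyTransactionalService {
    void writeSomething();
    void dontReadOrWrite();
}

@Service
public class MyTransactionalServiceImpl implements MyTransactionalService {

    // Spring injects the transactional proxy here, not the raw 'this' instance.
    @Autowired
    private MyTransactionalService self;

    @Override
    @Transactional(rollbackFor = Throwable.class)
    public void writeSomething() {
        // Calling through the proxy (self) applies the SUPPORTS propagation of the target method;
        // calling this.dontReadOrWrite() directly would bypass the transaction advice.
        self.dontReadOrWrite();
    }

    @Override
    @Transactional(propagation = Propagation.SUPPORTS)
    public void dontReadOrWrite() {
        // work that does not need its own transaction
    }
}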
I think that what you're looking for is more granular transaction management, and using the @Transactional annotation is the right direction for that. That said, there is a Grails Transaction Handling Plugin that can give you the behavior that you're looking for. The caveat is that you will need to wrap your service method calls in a DomainClass.withTransaction closure and supply the non-standard behavior that you're looking for as a parameter map to the withTransaction() method.
As a note, on the backend this is doing exactly what you're talking about above by using the @Transactional annotation to change the behavior of the transaction at runtime. The plugin documentation is excellent, so I don't think you'll find yourself without sufficient guidance.
Hope this is what you're looking for.
In most cases only service classes are managed by Spring, and they are singletons. In some situations, domain code needs injection, which won't work unless it is managed by Spring. That being said, it is advisable and not performance-intensive to have all your domain classes defined as Spring beans with prototype scope, so that anytime you want to do
Person p = new Person();
just do
Person p = (Person) ctx.getBean("person");
Any help on the pros and cons would be appreciated.
There is obviously more overhead in obtaining a prototype bean than simply instantiating directly via the new keyword (any dependency injection, lifecycle callbacks, etc. performed by the Spring IoC container). While likely not significant for a single instantiation, if you performed this in a loop, you could see performance issues.
If, however, the object needs any singleton beans (typically services) or resources (such as a DataSource), then you will prefer to use the prototype bean, since any additional dependencies will be wired in automatically.
Apart from performance considerations, your choice may also depend on your design. If you follow a "traditional" architecture with a service tier and data access objects that persist domain objects, then everything from a Spring point of view is generally stateless. Your services and data access objects are singletons using domain objects that are POJOs. Here you will rarely need a prototype bean.
If on the other hand you follow a more object-oriented approach where an object has a stateless factory (to allow instances to be fetched or created) and the object is then able to persist itself (say with a 'save' method), then nearly all your domain objects may be prototype beans.
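For illustration (the class and its collaborator are invented), a prototype-scoped domain bean gets a fresh instance on every lookup, with its singleton dependencies wired in each time:

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

// Illustration: every ctx.getBean("person") call returns a new Person,
// with the shared singleton DataSource injected automatically.
@Component("person")
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class Person {

    @Autowired
    private DataSource dataSource;

    private String name;

    public void setName(String name) {
        this.name = name;
    }

    // Active-record style: the domain object can persist itself through the injected resource.
    public void save() {
        // ... use dataSource to insert or update this person ...
    }
}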
As in nearly all decisions, there will be trade-offs and no one right answer.
The question is a bit long since it's conceptual. I hope it's not a bad read :)
I'm working on a performance-critical Spring MVC/Tiles web app (10,000 users typical load). We load an "update employee" screen, where an employee details screen (bound to an employee business object) is loaded for updates via a MultiActionController. There are multiple tabs on this screen, but only tab 1 has the updatable data. The rest of the tabs are read-only stuff, basically for reference.
Needless to say, we've decided to load these read-only tabs lazily, i.e., when each tab is activated, we fire a one-time Ajax call to fetch the data from the server. We don't load everything via the update view's loading method. Remember: this is one-time, read-only data.
Now I'm in a dilemma. I've made another MultiActionController, named "AjaxController", for handling these Ajax calls. Now, my questions:
What should be the best scope for this controller?
Thoughts: If I make it request scoped, then 10,000 users together can create 10,000 instances of this bean: memory problem there. If I make it session scoped, then one will be created per user session. That means, when 10,000 users log in to the app, regardless of whether they hit the AjaxController methods, they will each have a bean in possession.
Then, is singleton the best scope for this controller?
Thoughts: A singleton bean will be created when the Spring context starts up, and this very instance will be provided throughout. Sounds good.
Should the handler methods (like fetchTab7DataInJsonFormat) be static and attached to the class?
Thoughts: In this case, can having static methods semantically conflict with the scope? For example: does scope="session"/"request" + static methods make sense? I ask because even though each user session has its own AjaxController bean, the handler methods are actually attached to the class, not the instances. Also, does scope="singleton" + static handler methods make sense?
Can I implement the singleton design pattern into AjaxController manually?
Thoughts: What if I control the creation and basically implement the GoF singleton? Then what does the scope specification do? Session/request scope surely can't create multiple instances then, can it?
If, by whatever mechanism (bean specification/design pattern/static methods), I do manage to have one single instance of AjaxController: will these STATIC methods need to be synchronized? I think not, because even if the STATIC handler methods talk to services (which talk to the DB/WS/MQ etc.) and take time, each request thread entering the static methods will get its own results back, right? It's not like user1 enters the static method, then user2 enters the static method before user1 has returned, and they both get some garbled data? This is probably silly, but I want to be sure.
I'm confused. I basically want exactly one single instance of the controller bean servicing all requests for all clients.
Critical note: The AjaxController bean is not INJECTED anywhere else; it exists in isolation. Its methods are hit via Ajax calls.
If I were doing this, I would definitely make the LazyLoadController a singleton, without static methods and without any state in it.
Also, you definitely shouldn't instantiate singletons manually, it's better to use Spring's common mechanism and let the framework control everything.
The overall idea is to avoid using any static methods and/or persistent data in controllers. The right mechanism is to use a service bean for generating the data for a request, so the controller acts as a request-parameter dispatcher that fetches the data out into the view. No mutable state or concurrency-unsafe stuff should be allowed in the controller. If some components are user-specific, Spring's AOP system provides injection of components scoped to the session or request.
That's about good practice in doing things like this. There's something to clarify before I can give a more specific answer for your case: did I understand it right that the typical use case will be that the AjaxController passes some requests to the LazyLoadController to get tab data? Please provide details about that in a comment or in your question, so I may update my answer.
The thing that is wrong with having static methods in a controller is that you have to manage concurrency safety by yourself, which is not just error-prone but will also reduce overall performance. Spring runs every request in its own thread, so if two concurrent calls need to use some static method and there are shared resources (so you need a synchronized statement or locks), one of the threads will have to wait for the other to finish working in the protected block. On the other hand, if you use stateless services and avoid having data that may be shared across multiple calls, you get increased performance and no need to deal with concurrent data access.
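A minimal sketch of that arrangement (the controller, service, and URL are invented, and it is shown with annotation-based Spring MVC rather than a MultiActionController): a stateless singleton controller with no static methods, delegating to an injected stateless service:

import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

// Hypothetical stateless service backing the controller.
interface TabDataService {
    Map<String, Object> loadTab7Data(long employeeId);
}

// Illustration: the controller keeps no mutable state, so one singleton instance
// can safely serve every concurrent request without synchronization.
@Controller
public class AjaxController {

    private final TabDataService tabDataService;

    @Autowired
    public AjaxController(TabDataService tabDataService) {
        this.tabDataService = tabDataService;
    }

    @RequestMapping("/employee/tab7.json")
    @ResponseBody
    public Map<String, Object> fetchTab7DataInJsonFormat(@RequestParam("employeeId") long employeeId) {
        // All per-request data lives in parameters and locals; the heavy lifting is delegated.
        return tabDataService.loadTab7Data(employeeId);
    }
}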