I'm studying both of these approaches for adding transactions to my Spring application. For now, I prefer using annotations over the tx namespace, mainly because it reduces the amount of XML and the overall complexity. But this is just my opinion.
I have not had a chance to see what current Spring practitioners use for transactions. Which one is now the preferred approach, and why?
In other words, what are the pros and cons of each approach that ultimately justify the use of one over the other?
<tx:advice> / <tx:attributes> / <tx:method>
Pros
No Spring dependencies in your code
Very flexible, e.g. make all methods with a get prefix transactional and read-only
Easily applies transaction demarcation to a wide range of beans
Cons
Cumbersome and hard-to-maintain XML
more XML
...even more XML
@Transactional and <tx:annotation-driven/>
Pros
Dead simple, just add an annotation over a class or method (see the sketch after the cons list below)
One line of XML (or even none with @EnableTransactionManagement) and it just works
Cons
Spring dependency in your code
Not possible to apply more general rules, such as: all classes within a package whose names end with Dao
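For reference, a minimal sketch of the annotation style (the class and method names are made up for illustration; only the Spring annotations themselves are real):

import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;

// Replaces <tx:annotation-driven/> entirely, no XML needed.
@Configuration
@EnableTransactionManagement
class AppConfig {
}

// Hypothetical service: the class-level default makes every method run in a read-only
// transaction, while the method-level annotation overrides it for writes.
@Service
@Transactional(readOnly = true)
class ProductService {

    @Transactional
    public void save(Object product) {
        // write operations get a read-write transaction
    }

    public Object findByName(String name) {
        // read operations inherit the read-only default
        return null;
    }
}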
I would prefer to use annotations for marking transactions, not because of the ease of configuration or because of concerns about purity of coupling to Spring, but rather because it is typically the case that the code inside the method cannot work correctly without a transaction in place: the annotation is indicating something functional about the implementation as opposed to the way in which the code is managed (which would belong to the Spring configuration file).
Related
I am using Spring AOP to log the DB execution time, but it ends up measuring the entire method execution time.
import java.lang.annotation.*;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface TrackExecutionTime {
}
Is there any possibility that we can use this @TrackExecutionTime not at the method level but inside a method, just above a single statement like below?
@TrackExecutionTime
List<Release> releaseList = releaseRepo.findByProductName(productName.toUpperCase());
That way I could capture only the DB execution time instead of the entire method execution time, since my method contains other business logic that would also be included if the AOP annotation were used at the method level.
Your question is not AOP-specific, because annotations are a Java language feature. The answer is: annotations on arbitrary statements are not part of the Java language, which means you also cannot use them for AOP purposes. This is simply a Java limitation. Neither Spring AOP nor native AspectJ can support a feature which does not exist in Java to begin with.
Friendly suggestion: Please learn more about Java first, then get acquainted with some basic software design and clean code principles. Finally, you shall be able to achieve what you want, albeit in a different way from what you just dreamed up here.
Spring AOP's default configuration uses proxies to execute the aspects; hence only methods can be annotated.
A bit of a detour on the proxies: a proxy wraps the target object, so when you call one of its methods from elsewhere, Spring makes sure the call goes through the proxy, and that invocation then runs the aspect code before, after, or around the call itself (depending on the advice type). Several proxies can wrap a single class.
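For completeness, the around advice behind such an annotation typically looks roughly like this (a sketch; the package in the pointcut and the logging are assumptions):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
class ExecutionTimeAspect {

    // Runs around any Spring-managed bean method annotated with @TrackExecutionTime.
    @Around("@annotation(com.example.TrackExecutionTime)")
    public Object measure(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
            return joinPoint.proceed();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(joinPoint.getSignature() + " took " + elapsedMs + " ms");
        }
    }
}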
Then an option is to add your aspect annotation to the repository method.
If we need to track the execution time for only a subset of calls to the method (which sounds like a slightly odd requirement), then we can add a wrapper method: say, a Spring-managed Metrics class with a time-tracking method that accepts a lambda and is annotated with @TrackExecutionTime. The original call would then become something like
metrics.executeTimed(() -> releaseRepo.findByProductName(productName.toUpperCase()));
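A minimal sketch of such a wrapper, matching the call above (the class and method names come from that snippet; the rest is an assumption):

import java.util.function.Supplier;
import org.springframework.stereotype.Component;

@Component
class Metrics {

    // Because calls from other beans go through the Spring proxy of this bean,
    // the @TrackExecutionTime aspect times exactly the code inside the lambda.
    @TrackExecutionTime
    public <T> T executeTimed(Supplier<T> call) {
        return call.get();
    }
}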
I would like to add an extra attribute to the internal representation of beans in Spring. Is it possible? What mechanism should be applied, if any?
My goal is to define my own beans for my framework. I can do it from scratch or reuse Spring mechanisms.
You could have a look at the documentation Container Extension Points.
To achieve this customization you can create a:
BeanPostProcessor bean, which operates on bean instances. For example, this allows you to build a custom bean registry, to proxy beans...
BeanFactoryPostProcessor, which operates on bean metadata. This allows you to override or add properties, even for eager-initializing beans, or to modify the bean class...
BeanDefinitionRegistryPostProcessor, which operates right after the registry initialization. This allows you to create, remove or update bean definitions.
For example, you can create a new BeanDefinitionRegistryPostProcessor which will register (or modify) beans using a custom implementation of BeanDefinition that carries a custom attribute based on, for example, your own annotation.
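A rough sketch of that last idea (the bean name, class and attribute key are made up; setAttribute comes from the AttributeAccessor interface that bean definitions implement):

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.BeanDefinitionRegistryPostProcessor;
import org.springframework.beans.factory.support.GenericBeanDefinition;

class CustomAttributePostProcessor implements BeanDefinitionRegistryPostProcessor {

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        // Register a new definition carrying a custom attribute.
        GenericBeanDefinition definition = new GenericBeanDefinition();
        definition.setBeanClassName("com.example.MyModelElement");   // hypothetical bean class
        definition.setAttribute("metaModelKind", "node");            // the custom attribute
        registry.registerBeanDefinition("myModelElement", definition);
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
        // Existing definitions could also be enriched here via beanFactory.getBeanDefinition(...).
    }
}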
Could you elaborate a bit on what you are trying to achieve with your framework?
Thank you very much, Nicolas :)
I will study both your answer and the documentation you provided. I had already found the *PostProcessors you mentioned, but I was not sure whether this is the right place, what the nature of their customizations is (subclassing or something different), and what the consequences are. My problem is not as simple as I described (not just adding an attribute): the extended Spring bean should also work in cooperation with Spring+AspectJ (not Spring AOP), especially with the declare-parents construct. I would also like to be able to create proxies for the redefined beans. I will let you know the results of my investigation, and maybe I will ask some more questions.
And the answer to all of you:
My framework is dedicated to defining graph modeling languages (meta-models) at run-time (as a far-reaching extension of the OMG standards), and I am looking for ways around the limits of the current object representation in the JVM, which favours behaviour over structure. This is one of several approaches, but the most promising for me due to the relatively small effort.
I have converted a simple Spring project built with pure aop namespace XML configuration to the same project, but using annotations this time.
I've noticed that now the before part of the around advice comes out before the before advice, which is the exact opposite of the behavior I got when I was using the aop namespace XML configuration.
Is it the default behavior of the annotation style?
See Advice ordering:
When two pieces of advice defined in different aspects both need to run at the same join point, unless you specify otherwise the order of execution is undefined. You can control the order of execution by specifying precedence. This is done in the normal Spring way by either implementing the org.springframework.core.Ordered interface in the aspect class or annotating it with the @Order annotation. Given two aspects, the aspect returning the lower value from Ordered.getValue() (or the annotation value) has the higher precedence.
Since the ordering is undefined, it could possibly vary even between multiple executions (having the same xml config).
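So to get a deterministic order back, give each aspect an explicit precedence, for example (a sketch; the aspect names and pointcuts are made up):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;

// Lower value = higher precedence, so the around advice wraps the before advice.
@Aspect
@Order(1)
class LoggingAspect {

    @Around("execution(* com.example.service..*(..))")
    public Object around(ProceedingJoinPoint pjp) throws Throwable {
        System.out.println("around: before part");
        try {
            return pjp.proceed();
        } finally {
            System.out.println("around: after part");
        }
    }
}

@Aspect
@Order(2)
class BeforeAspect {

    @Before("execution(* com.example.service..*(..))")
    public void logBefore() {
        System.out.println("before advice");
    }
}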
So far in our project, our beans have had their references set through setter injection; recently, a couple of people have started to use the @Autowired annotation to set the references on their beans. Is it good to mix annotations and XML configuration for the context?
There is no problem using the two together, but it is better to choose one for consistency's sake. That makes it easier for all the developers to understand and maintain the code.
My preference is annotations as I like things defined at one place.
Mixing annotations and XML definitions works pretty well to reduce the amount of code in the XML file.
Certainly you can have both coexisting; XML definitions will always override any annotations in Java code without causing trouble.
Is it good or bad? I think it's just a way to balance the size of an XML file. I find it very useful to define my beans in a simple way in XML and then just use the @Autowired annotation, together with @Required, to ensure a bean has been properly injected before being used.
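To illustrate the mix, the Java side of such a bean might look like this (names are hypothetical; the bean itself can still be declared in XML):

import org.springframework.beans.factory.annotation.Autowired;

class OrderService {

    private OrderRepository orderRepository;

    // The reference is wired by annotation instead of a <property> element in the XML definition.
    @Autowired
    public void setOrderRepository(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }
}

interface OrderRepository {
    // hypothetical collaborator
}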
Why do people default to DI versus a global factory with a hashmap that maps interfaces/abstracts to classes? It would seem this is a higher-performance solution to the problem, no?
All the things mentioned so far in this thread can be provided by a global factory with a method like:
class TestGlobalFactory implements GlobalFactoryI { ... }
class ProductionGlobalFactory implements GlobalFactoryI { ... } // configures classes to interfaces
protected GlobalFactoryI gf = GlobalFactoryFactory.getInstance(); // only singleton used in the app, specifies which GlobalFactory to use
protected SportsCarI mySportsCar = gf.create("sportsCarI", constructorVar1, constructorVar2);
The above would be much faster than recursive reflection to detect DI instances.
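Concretely, the kind of registry the question has in mind could be sketched like this (every name here is an assumption):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface GlobalFactoryI {
    <T> T create(String interfaceKey);
}

class ProductionGlobalFactory implements GlobalFactoryI {

    // Maps interface keys to suppliers of concrete implementations.
    private final Map<String, Supplier<Object>> registry = new HashMap<>();

    ProductionGlobalFactory() {
        registry.put("sportsCarI", FerrariSportsCar::new);
    }

    @SuppressWarnings("unchecked")
    public <T> T create(String interfaceKey) {
        return (T) registry.get(interfaceKey).get();
    }
}

class FerrariSportsCar { }   // hypothetical implementation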
However, I admittedly prefer the convention of DI, as it ends up being fewer characters and offers greater flexibility, with the option of third-party containers.
artima.com/forums/flat.jsp?forum=106&thread=238700
Regardless, DI is the superior approach, as its containers are written to specify which implementation belongs in which class. With a Service Locator one would actually have to do gf.create("thisClass", "sportsCarI", constructorVar1)
The main advantage of dependency injection over a factory-based approach is not performance. Its main advantage is testability. The whole point of DI is to be able to inject mock dependencies to implement the unit tests of a component. Doing it with a factory is much more painful.
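A tiny sketch of that point, outside any container (all names are hypothetical):

// The collaborator is handed in, so a test can pass a stub without any factory or container.
interface PriceSource {
    double priceOf(String product);
}

class Checkout {

    private final PriceSource prices;

    Checkout(PriceSource prices) {   // dependency injected via the constructor
        this.prices = prices;
    }

    double total(String product, int quantity) {
        return prices.priceOf(product) * quantity;
    }
}

class CheckoutTest {

    void totalUsesTheInjectedPriceSource() {
        Checkout checkout = new Checkout(product -> 2.50);   // stub instead of the real source
        assert checkout.total("coffee", 4) == 10.0;
    }
}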
Because you get a lot more with Spring. Spring manages the lifecycle of its beans, in addition to caching instances in line with the user-specified scope for the beans (e.g. prototype, singleton, etc.). Also note that Spring's DI is related to its "Inversion of Control", so things are not hard-coded directly into the app (unless you consider configuration code).