Spring transactions with JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport

How are transactions controlled when using JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport? I checked the source code and couldn't find where the transaction is controlled by JdbcTemplate/HibernateTemplate or HibernateDaoSupport/JdbcDaoSupport.
In the source code, HibernateDaoSupport/JdbcDaoSupport uses JdbcTemplate/HibernateTemplate, so what is the role of HibernateDaoSupport/JdbcDaoSupport and what is the role of JdbcTemplate/HibernateTemplate?
Why do we use JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport? It seems all the sample code uses them. What should I use if I don't want to use them, e.g. only Spring + Hibernate?
If I'm using JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport, do I still need to configure a transaction proxy in XML? If I do, does that mean it's OK to put both getHibernateTemplate().saveOrUpdate(user) and getHibernateTemplate().saveOrUpdate(order) together so that they are invoked in the same transaction?

First of all, please forget about HibernateTemplate and HibernateDaoSupport; these classes should be considered deprecated since the release of Hibernate 3.0.1 (which was somewhere in 2006!). You should be creating DAOs/repositories based on the plain Hibernate API, as explained in the Spring Reference Guide. (The same goes for JpaTemplate and JpaDaoSupport.)
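For example, a plain-Hibernate-API DAO plus a @Transactional service could look roughly like the sketch below (UserDao, OrderDao, OrderService and the entity names are made up for illustration; the pattern itself is what the reference guide describes):

@Repository
public class UserDao {

    private final SessionFactory sessionFactory;

    public UserDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void saveOrUpdate(User user) {
        // getCurrentSession() joins the transaction that Spring has already started
        sessionFactory.getCurrentSession().saveOrUpdate(user);
    }
}

@Service
public class OrderService {

    private final UserDao userDao;
    private final OrderDao orderDao;

    public OrderService(UserDao userDao, OrderDao orderDao) {
        this.userDao = userDao;
        this.orderDao = orderDao;
    }

    @Transactional
    public void placeOrder(User user, Order order) {
        // both saves run in the single transaction demarcated by @Transactional
        userDao.saveOrUpdate(user);
        orderDao.saveOrUpdate(order);
    }
}

This also answers the last part of the question: transaction demarcation lives on the service method (via @Transactional or the XML tx configuration), not inside the template or DAO, so both saves above participate in one transaction.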
The intent of JdbcTemplate (and all the other *Template classes) is to make it easier to work with the underlying technology. Once upon a time this was also needed for Hibernate (< 3.0.1); now it isn't.
JdbcTemplate makes it easier to work with plain JDBC code. You don't have to get a connection, create a (Prepared)Statement, add the parameters, execute the query, iterate over the ResultSet and convert each row. With the JdbcTemplate much of this is hidden, and most of it can be written in 1 to 3 lines of code, whereas plain JDBC would require a lot more.
The *Support classes make it easier to gain access to a template, but they aren't a must to use. Creating a JdbcTemplate is quite easy and you don't really need to extend JdbcDaoSupport, but you can if you want. A lot more is explained in the reference guide.
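As a rough sketch (PersonDao, the person table and the Person class are invented for the example), a DAO can simply build its own JdbcTemplate from an injected DataSource and run a query in a couple of lines:

@Repository
public class PersonDao {

    private final JdbcTemplate jdbcTemplate;

    public PersonDao(DataSource dataSource) {
        // no need to extend JdbcDaoSupport; just create the template yourself
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public List<Person> findByLastName(String lastName) {
        // connection handling, statement creation and ResultSet iteration are all hidden
        return jdbcTemplate.query(
                "select id, first_name, last_name from person where last_name = ?",
                (rs, rowNum) -> new Person(rs.getLong("id"), rs.getString("first_name"), rs.getString("last_name")),
                lastName);
    }
}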

Related

How to use an AOP annotation inside a method, not at the method level

I am using Spring AOP to log the DB execution time, but it measures the entire method execution time.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface TrackExecutionTime {
}
Is there any possibility of using this @TrackExecutionTime not at the method level but inside a method, just above a particular statement, like below:
@TrackExecutionTime
List<Release> releaseList = releaseRepo.findByProductName(productName.toUpperCase());
That way I could get only the DB execution time rather than the entire method execution time, since my method contains other business logic which is also included when the AOP annotation is used at the method level.
Your question is not AOP-specific, because annotations are a Java language feature. The answer is: Annotations on arbitrary lines of code are not part of the Java language concept, which for you means you also cannot use them for AOP purposes. This is simply a Java limitation. Neither Spring AOP nor native AspectJ can support a feature which does not exist in Java to begin with.
Friendly suggestion: Please learn more about Java first, then get acquainted with some basic software design and clean code principles. Finally, you shall be able to achieve what you want, albeit in a different way from what you just dreamed up here.
Spring AOP's default configuration uses proxies to execute the aspect, hence only methods can be annotated.
A bit of a detour on the proxies: a proxy wraps a target object, so when you call a method from elsewhere, Spring makes sure to invoke the method on the proxy, and that invocation contains the aspect code which gets executed before, after, or around the call itself (depending on the aspect). There can be several proxies wrapping a single class.
Then an option is to add your aspect annotation to the repository method.
If you need to track the execution time only for a subset of calls to the method (which sounds like a slightly strange requirement), you can add a wrapper method: say, a Spring-managed Metrics class with a time-tracking method that accepts a lambda and is annotated with @TrackExecutionTime. The original call would then be something like
metrics.executeTimed(() -> releaseRepo.findByProductName(productName.toUpperCase()));
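A rough sketch of such a wrapper (the class name Metrics and the Supplier-based signature are just one possible shape, not a prescribed API):

@Component
public class Metrics {

    @TrackExecutionTime
    public <T> T executeTimed(Supplier<T> work) {
        // this method is Spring-managed, so the aspect proxy wraps it;
        // only the lambda body (here, the repository call) is timed
        return work.get();
    }
}

Because the call goes through the Metrics bean's proxy, the aspect fires around executeTimed only, leaving the rest of the business method out of the measured time.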

What strategies exist for using Spring Cache on methods that take an array or collection parameter?

I want to use Spring's Cache abstraction to annotate methods as @Cacheable. However, some methods are designed to take an array or collection of parameters and return a collection. For example, consider this method to find entities:
public Collection<Entity> getEntities(Collection<Long> ids)
Semantically, I need to cache Entity objects individually (keyed by id), not based on the collection of IDs as a whole. Similar to what this question is asking about.
Simple Spring Memcached supports what I want, via its ReadThroughMultiCache, but I want to use Spring's abstraction in order to support easy changing of the cache store implementation (Guava, Coherence, Hazelcast, etc), not just memcached.
What strategies exist for caching this kind of method using Spring Cache?
Spring's Cache Abstraction does not support this behavior out-of-the-box. However, it does not mean it is not possible; it's just a bit more work to support the desired behavior.
I wrote a small example demonstrating how a developer might accomplish this. The example uses Spring's ConcurrentMapCacheManager to demonstrate the customizations. This example will need to be adapted to your desired caching provider (e.g. Hazelcast, Coherence, etc).
In short, you need to override the CacheManager implementation's method for "decorating" the Cache. This varies from implementation to implementation. In the ConcurrentMapCacheManager, the method is createConcurrentMapCache(name:String). In Spring Data GemFire, you would override the getCache(name:String) method to decorate the Cache returned. For Guava, it would be the createGuavaCache(name:String) in the GuavaCacheManager, and so on.
Then your custom, decorated Cache implementation (perhaps/ideally, delegating to the actual Cache impl, from this) would handle caching Collections of keys and corresponding values.
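As a rough illustration only (this is not the answer's actual example; CollectionAwareCacheManager and CollectionAwareCache are invented names, and it assumes the returned values come back in the same order as the keys), the ConcurrentMapCacheManager variant could look something like this:

public class CollectionAwareCacheManager extends ConcurrentMapCacheManager {

    @Override
    protected Cache createConcurrentMapCache(String name) {
        // decorate every cache so Collection keys are stored entry-by-entry
        return new CollectionAwareCache(name);
    }

    static class CollectionAwareCache extends ConcurrentMapCache {

        CollectionAwareCache(String name) {
            super(name);
        }

        @Override
        protected Object lookup(Object key) {
            if (key instanceof Collection) {
                // all-or-nothing: any missing element means a miss for the whole collection
                List<Object> values = new ArrayList<>();
                for (Object singleKey : (Collection<?>) key) {
                    Object value = super.lookup(singleKey);
                    if (value == null) {
                        return null;
                    }
                    values.add(value);
                }
                return values;
            }
            return super.lookup(key);
        }

        @Override
        public void put(Object key, Object value) {
            if (key instanceof Collection && value instanceof Collection) {
                // fan out: cache each returned element under its own id
                Iterator<?> keys = ((Collection<?>) key).iterator();
                Iterator<?> values = ((Collection<?>) value).iterator();
                while (keys.hasNext() && values.hasNext()) {
                    super.put(keys.next(), values.next());
                }
                return;
            }
            super.put(key, value);
        }
    }
}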
There are a few limitations to this approach:
A cache miss is all or nothing; i.e. a partially cached set of keys is still considered a miss if any single key is missing. Spring (OOTB) does not let you simultaneously return the cached values and call the method for just the difference. That would require some very extensive modifications to the Cache Abstraction which I would not recommend.
My implementation is just an example so I chose not to implement the Cache.putIfAbsent(key, value) operation (here).
While my implementation works, it could be made more robust.
Anyway, I hope it provides some insight in how to handle this situation properly.
The test class is self-contained (uses Spring JavaConfig) and can run without any extra dependencies (beyond Spring, JUnit and the JRE).
Cheers!
Worked for me. Here's a link to my answer.
https://stackoverflow.com/a/60992530/2891027
TL;DR
@Cacheable(cacheNames = "test", key = "#p0")
public List<String> getTestFunction(List<String> someIds) {
My example is with String but it also works with Integer and Long, and probably others.

Object creation in Spring

I would like to know whether it is valid practice to use new in Spring to create an object.
You can either use an XML bean definition to create an object or use annotations.
This question came up while I was working on a project where I want to create an object of a property class (which contains properties and setter/getter methods for those properties).
I am able to create the object using new and it works fine, but if Spring has the capability to create and manage the object lifecycle, which way should I go to create the object, and why?
I think the confusion may arise because of the (over)use of Spring as a DI mechanism. Spring is a framework providing many services; bean or dependency injection is just one of those.
I would say that for POJOs which have just setters and getters without much logic in them, you can safely create objects using the new keyword. For example, in the case of value objects and data classes which do not have much configuration or life-cycle events to worry about, go ahead and create those using the new keyword. If you repeatedly create these objects and their fields do not change often, then I would use Spring, because it will lessen some of the repetitive code, and object creation can be considered externalized, i.e. separated from your object usage.
Classes instantiated via a Spring bean definition (XML or annotations) are "Spring-managed" beans, which mostly means that their life cycle, scope, etc. are managed by Spring. Spring manages objects which are beans and which may have some life-cycle methods and APIs. These beans are dependencies of the classes into which they are injected; the parent objects call some API of these dependencies to fulfil some business cases.
Hope this helps.
The dependency injection concept in Spring is more useful when you need to construct an object that depends on many other objects, because it saves you the time and effort of constructing and instantiating the dependent objects.
In your case, since it's a POJO class with only setters and getters, I think it is absolutely safe to instantiate it using the new keyword.
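A small illustration of the split (Address, CustomerService and CustomerRepository are made-up names): the property holder is created with new, while the collaborator with real responsibilities is a Spring-managed bean:

// plain property class: perfectly fine to create with new
public class Address {
    private String street;
    private String city;
    public void setStreet(String street) { this.street = street; }
    public void setCity(String city) { this.city = city; }
    public String getStreet() { return street; }
    public String getCity() { return city; }
}

@Service
public class CustomerService {

    private final CustomerRepository customerRepository; // Spring-managed dependency

    public CustomerService(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public void register(String street, String city) {
        Address address = new Address();   // no need to ask Spring for this
        address.setStreet(street);
        address.setCity(city);
        customerRepository.save(address);  // the repository itself is injected, never new-ed here
    }
}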

Dynamic creation of beans in Spring

Is there a way in Spring to read the fields of a bean from a DB table and create a complete bean class, with getters and setters, on server startup?
I need this to make my application completely configurable: if I have to add a new field in the future, all I would need to do is add a field in the DB, and the bean's setters and getters would be available to me.
Thanks
You could try approaches for dynamically registering beans. You could use the BeanDefinitionBuilder for this purpose. See a sample here. But as @Darren says, it's not a wise idea to create a bean via DB lookup.
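A rough sketch of dynamic registration with BeanDefinitionBuilder (MyConfigurableBean and the property values stand in for whatever your DB lookup returns; note that this configures an existing class, it does not generate new fields or getters at runtime):

public class DynamicBeanRegistrar implements BeanDefinitionRegistryPostProcessor {

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) {
        // in a real setup the bean name and property values would be read from the DB at startup
        BeanDefinition definition = BeanDefinitionBuilder
                .genericBeanDefinition(MyConfigurableBean.class)
                .addPropertyValue("someField", "someValue")
                .getBeanDefinition();
        registry.registerBeanDefinition("myConfigurableBean", definition);
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        // nothing to do here
    }
}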
1: Improve your accept-rate
2: You might benefit from something like an ORM approach (Hibernate or JPA). Another slightly different approach that might suit you is the Active Record pattern as implemented in, for instance, ActiveJDBC.
Spring does not, in itself, offer anything like what you are after, but using spring-jpa together with Hibernate might get you a bit closer towards your goal. If, OTOH, you want auto-generated code you could also look at something like Spring-Roo
You might want to think about this a little more. Even if you made your fields totally configurable, you will still have to write the code that accesses them. And given that you are going to have to write code anyway, might as well keep everything in code. It's much simpler that way.

Spring Transaction Annotations vs Tx Namespace: Which is used more?

I'm studying both of these approaches to include transactions in my Spring application. As for now, I prefer using annotations, as opposed to the tx namespace. The reason is that it sort of clears up the XML/complexity. But this is just my opinion.
I have not had a chance to see what current Spring practitioners use for transactions. Which one is now the preferred approach, and why?
In other words, what are the pros and cons of each approach that ultimately justify the use of one over the other?
<tx:advice> / <tx:attributes> / <tx:method>
Pros
No Spring-dependencies in your code
Very flexible, e.g. make all methods with get prefix transactional but read-only
Easily applying transaction demarcation into wide range of beans
Cons
Cumbersome and hard-to-maintain XML
more XML
...even more XML
@Transactional and <tx:annotation-driven/>
Pros
Dead-simple, just add annotation over class or method
One line of XML (or even none with @EnableTransactionManagement; see the sketch after this list) and it just works
Cons
Spring dependency in your code
Not possible to apply more general rules, like: all classes within a package that end with Dao
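For the annotation-driven side, a minimal Java-config sketch (the bean names, AccountService and the DataSource wiring are illustrative):

@Configuration
@EnableTransactionManagement
public class TxConfig {

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        // the transaction manager that @EnableTransactionManagement / <tx:annotation-driven/> will use
        return new DataSourceTransactionManager(dataSource);
    }
}

@Service
public class AccountService {

    @Transactional
    public void transfer(long fromId, long toId, BigDecimal amount) {
        // everything inside this method runs in one Spring-managed transaction
    }
}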
I would prefer to use annotations for marking transactions, not because of the ease of configuration or because of concerns about purity of coupling to Spring, but rather because it is typically the case that the code inside the method cannot work correctly without a transaction in place: the annotation is indicating something functional about the implementation as opposed to the way in which the code is managed (which would belong to the Spring configuration file).
