Spring transaction propagation: nested Propagation.REQUIRES_NEW inside Propagation.REQUIRED - Spring

I'm using Spring 3.2 integrated with Hibernate 4.
Here is my code:
@Service
public class MyService {

    @Autowired
    private NestedService ns;

    @Autowired
    private MyDao dao; // hypothetical DAO type; the question uses "dao" without declaring it

    @Transactional(propagation = Propagation.REQUIRED)
    public void outer() {
        while (true) {
            dao.findOne(); // finds data in the DB using a Hibernate HQL query
            ns.inner();    // inserts some data, commits, and loops again
        }
    }
}
@Service
public class NestedService {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void inner() {
        // insert some data into the DB using Hibernate
    }
}
Here is the Spring config XML:
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
And here is my question:
Before I run this program there is no data in the DB, so dao.findOne() returns null in the first loop iteration. After ns.inner() executes, some data is inserted and committed (which is where I think REQUIRES_NEW takes effect). But when the second iteration begins, dao.findOne() still returns null; the outer transaction cannot see the data inserted by the inner one. Why?
Thanks!

There is already an ongoing transaction, which effectively has its own view of the data; rows committed by another transaction are not necessarily visible to it, depending on the isolation level. On top of that you have Hibernate in the mix, which uses caching: depending on what is executed, a query may run only once, with subsequent calls simply returning the cached values (within the same transaction/session, for instance).
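If the outer loop genuinely needs to observe the rows committed by inner(), one option is to clear the outer session between iterations so the next query is answered by the database rather than by cached state. This is only a sketch (the dao and sessionFactory references are assumed from the question's code), and it still depends on the outer transaction running at READ_COMMITTED or lower isolation:

```java
while (true) {
    Object row = dao.findOne();          // HQL query against the DB
    if (row != null) break;
    ns.inner();                          // REQUIRES_NEW: commits independently
    // Drop first-level-cache state so re-reads are not served from memory.
    // Visibility of inner()'s commit still depends on the isolation level
    // of the outer transaction (e.g. REPEATABLE_READ keeps its snapshot).
    sessionFactory.getCurrentSession().clear();
}
```

If the database default is REPEATABLE_READ (as in MySQL/InnoDB), clearing the session alone will not help; the isolation level of the outer transaction has to allow the new rows to become visible.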
Links
Can/Should spring reuse hibernate session for sub transaction
Transaction Isolation
Visibility of objects in different hibernate sessions

Related

Are my app's Controllers thread safe? Spring 4.1

I am developing an app in Spring 4.1. I know that Controllers (and any other Spring bean) are singletons by default and therefore not inherently thread safe: the same Controller instance is used to process multiple concurrent requests. Up to here I am clear. What I want to confirm is: do I need to explicitly set @Scope("prototype") or request scope on the Controller class? I read in a previous Stack Overflow post that even if the scope is not request/prototype, the Spring container can process each request individually based on the @RequestParam values passed or the @ModelAttribute associated with the method arguments.
So I want to confirm: is my code below safe for handling multiple requests concurrently?
@Controller
public class LogonController {

    /** Logger for this class and subclasses */
    protected final Log logger = LogFactory.getLog(getClass());

    @Autowired
    SimpleProductManager productManager;

    @Autowired
    LoginValidator validator;

    @RequestMapping("logon")
    public String renderForm(@ModelAttribute("employee") Logon employeeVO) {
        return "logon";
    }

    @RequestMapping(value = "Welcome", method = RequestMethod.POST)
    public ModelAndView submitForm(@ModelAttribute("employee") Logon employeeVO,
                                   BindingResult result) {
        // Check validation errors
        validator.validate(employeeVO, result);
        if (result.hasErrors()) {
            return new ModelAndView("logon");
        }
        if (!productManager.chkUserValidation(employeeVO.getUsername(), employeeVO.getPassword())) {
            return new ModelAndView("logon");
        }
        return new ModelAndView("Welcome");
    }
}
I also have another doubt: since I am using SimpleProductManager productManager, do I need to specify scope="prototype" in its bean declaration in app-servlet.xml?
Below is my configuration XML:
<bean id="mySessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource"><ref bean="dataSource"/></property>
<property name="configLocation" value="classpath:hibernate.cfg.xml" />
</bean>
<bean id="productManager" class="com.BlueClouds.service.SimpleProductManager" >
<property name="productDao" ref="productDao"/>
</bean>
<bean id="productDao" class="com.BlueClouds.dao.HbmProductDao">
<property name="sessionFactory"><ref bean="mySessionFactory"/></property>
</bean>
<bean id="loginValidator" class="com.BlueClouds.service.LoginValidator" >
</bean>
Since the validator is a singleton, a single instance is shared among all requests. Do I need to add scope="request" to its bean configuration, or surround validate() with a synchronized block? Please advise.
Thanks very much.
You can tell whether your code is thread safe by answering the following questions:
Might multiple threads modify a static field that is not thread safe (e.g. an ArrayList) at the same time?
Might multiple threads modify a field of a shared instance that is not thread safe at the same time?
If the answer to either question is yes, then your code is not thread safe.
Since your code doesn't modify any fields, it should be thread safe.
The general idea is that if multiple threads might change or access the same memory at the same time, the code is not thread safe and synchronization is needed.
It is worth learning more about stack memory, heap memory, and global memory in Java, so you can tell whether your code touches the same memory from multiple threads at the same time.
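The distinction above can be shown in plain Java (no Spring involved); the class and field names here are invented for illustration. A mutable instance field on a shared singleton loses updates under concurrency, while a method that only touches local variables is safe because each thread has its own stack:

```java
// Illustration only: shared mutable state vs. thread-confined locals.
class CounterBean {
    private int shared = 0;               // one copy, shared by all threads

    void unsafeIncrement() { shared++; }  // read-modify-write, not atomic

    int safeSum(int a, int b) {           // locals live on each thread's stack
        int result = a + b;
        return result;
    }

    int getShared() { return shared; }
}

public class ThreadSafetyDemo {
    public static void main(String[] args) throws InterruptedException {
        CounterBean bean = new CounterBean();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) bean.unsafeIncrement();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // Typically prints less than 800000 because increments are lost.
        System.out.println(bean.getShared());
        System.out.println(bean.safeSum(2, 3)); // always 5
    }
}
```

The controller in the question is like safeSum: all its state (employeeVO, result, the ModelAndView) is method-local, so no prototype scope or synchronization is needed.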

Spring transactions not working with @Transactional

I have the following code:
@Transactional(rollbackFor = Exception.class)
public String log(AuditRecord aRecord) throws AuditException {
    auditMaster(aRecord);       // 1. assigns an audit id and inserts into the Audit table
    System.out.println(aRecord.getAuditId());
    if (true)
        throw new AuditException(Error.DUMMY_ERROR);
    auditSingleValued(aRecord); // 2.
    auditMultiValued(aRecord);  // 3.
    return null;                // never executed; keeps the snippet compilable
}
The three auditXXX() methods in log() insert into different tables. Since I declared @Transactional on the log method, I expect the insert performed by auditMaster() (at 1.) to be rolled back because of the exception thrown in the if (true) block. But that is not happening: when I check the database, it contains the new audit id entry that System.out.println printed.
What can I do to achieve the rollback and make the log() method an atomic transaction as a whole?
Do I need to declare @Transactional on the individual auditXXX methods as well?
My bean configuration has this entry:
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>

JPA fetching data from cache, not database (EclipseLink)

Hi, I am working on a small project using JPA (EclipseLink). I am running two copies of the same application in different Tomcat instances, both using the same schema. But if I change a value from one application, the other application does not fetch the data from the database; it always returns its local data. I have also tried @Cacheable(false) on the entity class, and
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.query-results-cache" value="false"/>
in the XML file, but I still do not get the latest data from the database. My code is as follows:
EntityManager entityManager = GlobalBean.store.globalEntityManager();
String queryString = "select g from GlobalUrnConf g";
TypedQuery<GlobalUrnConf> globalUrnConfQuery =
        entityManager.createQuery(queryString, GlobalUrnConf.class);
return globalUrnConfQuery.getSingleResult();
I am obtaining the EntityManager and its factory as follows:
public EntityManagerFactory factory() {
    if (this.entityManagerFactory == null) {
        entityManagerFactory = Persistence.createEntityManagerFactory("FileUpload");
    }
    return this.entityManagerFactory;
}

public EntityManager globalEntityManager() {
    if (this.entityManager == null) {
        this.entityManager = factory().createEntityManager();
    }
    return this.entityManager;
}
Please help me. Thanks in advance.
You are caching the EntityManager, which is required to cache and hold onto the managed entities read through it, both for identity purposes and to manage changes. You can get around this by using a refresh query hint (or other query hints) so that queries bypass the caches, but it is probably better to manage the EntityManager's lifecycle and caching more directly: obtain one only as needed, or clear it at logical points when the returned entities can be released. Try calling EntityManager.clear(), for instance.
return globalUrnConfQuery
        .setHint(QueryHints.REFRESH, HintValues.TRUE) // re-read from the database
        .getSingleResult();
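A more direct variant of the "manage the EntityManager's life" advice is to stop caching a single EntityManager for the application's lifetime and open a short-lived one per unit of work instead. A sketch (the method name loadConf() is hypothetical; the query and persistence unit come from the question's code):

```java
// Cache only the EntityManagerFactory; create an EntityManager per call.
public GlobalUrnConf loadConf() {
    EntityManager em = factory().createEntityManager();
    try {
        return em.createQuery("select g from GlobalUrnConf g", GlobalUrnConf.class)
                 .getSingleResult();
    } finally {
        em.close(); // releases managed entities, so the next call re-reads the DB
    }
}
```

With the shared cache already disabled via eclipselink.cache.shared.default, each fresh EntityManager starts with an empty persistence context and sees the other Tomcat instance's committed changes.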

How to use low-level driver APIs with Spring Data MongoDB

I am using Spring Data MongoDB, but I don't want to map my result to a domain class. I also want to access the low-level MongoDB APIs in a few cases, while still letting Spring manage connection pooling etc.
How can I get an instance of com.mongodb.MongoClient to perform low-level operations?
Here is what I am trying to do:
MongoClient mongoClient = new MongoClient();
DB local = mongoClient.getDB("local");
DBCollection oplog = local.getCollection("oplog.$main");
DBCursor lastCursor = oplog.find().sort(new BasicDBObject("$natural", -1)).limit(1);
Or I simply want a JSON object / DBCursor / DBObject.
You can do it this way:
@Autowired MongoDbFactory factory;

DB local = factory.getDB("local");
DBCollection oplog = local.getCollection("oplog.$main");
DBCursor lastCursor = oplog.find().sort(new BasicDBObject("$natural", -1)).limit(1);
Here MongoDbFactory is an interface provided by spring-data-mongodb that can obtain a com.mongodb.DB object, giving access to all the functionality of a specific MongoDB database instance.
Your configuration file should contain this information:
<bean id="mongoFactoryBean"
class="org.springframework.data.mongodb.core.MongoFactoryBean">
<property name="host" value="127.0.0.1"/>
<property name="port" value="27017"/>
</bean>
<bean id="mongoDbFactory"
class="org.springframework.data.mongodb.core.SimpleMongoDbFactory">
<constructor-arg name="mongo" ref="mongoFactoryBean"/>
<constructor-arg name="databaseName" value="local"/>
</bean>
Configured this way, Spring will still manage your connection pool.
You usually perform low level access through MongoTemplate's execute(…) methods that take callbacks giving you access to the native Mongo driver API.
class MyClient {

    private final MongoOperations operations;

    @Autowired
    public MyClient(MongoOperations mongoOperations) {
        this.operations = mongoOperations;
    }

    void yourMethod() {
        operations.execute("oplog.$main", new CollectionCallback<YourDomainClass>() {
            public YourDomainClass doInCollection(DBCollection collection) {
                // here goes your low-level code
            }
        });
    }
}
The advantage of this template approach is that the MongoTemplate instance backing the MongoOperations interface will still take care of all resource management and exception translation (converting all Mongo specific exceptions into Spring's DataAccessException hierarchy).
However, for your concrete example you could just go ahead and do the following directly:
Query query = new Query().with(new Sort(DESC, "$natural")).limit(1);
DBObject result = operations.find(query, DBObject.class, "oplog.$main");
Here you can mix and match the type you pass into the find(…) method to let the template convert the result into a Map or a domain object if needed. As indicated above you also get resource management and exception translation which your sample code above is missing.

Why does Hibernate flush on select queries (EmptyInterceptor)?

I would like to understand a counterintuitive Hibernate behaviour I am seeing. I always thought that "flush" meant Hibernate had an in-memory data structure that had to be written to the DB. This is not what I am seeing.
I have created the following Interceptor:
public class FeedInterceptor extends EmptyInterceptor {

    @Override
    public void postFlush(Iterator entities) {
        System.out.println("postFlush");
    }
}
I registered it in my ApplicationContext:
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="entityInterceptor">
<bean class="interceptor.FeedInterceptor"/>
</property>
</bean>
But, strangely enough, I see "postFlush" written to the console for every row retrieved from the DB by my DAO:
Session session = sessionFactory.getCurrentSession();
Query query = session.createQuery("from Feed feed");
query.list();
Why is that?
Let's assume Hibernate didn't flush the session; then you could end up in the following situation:
Person p = new Person();
p.setName("Pomario");
dao.create(p);

Person pomario = dao.findPersonByName("Pomario");
//pomario is null?
When finding a person by name, Hibernate issues a select statement to the database. If it didn't send the previous statements first, the returned result might not be consistent with changes made earlier in the session: the database hasn't received the create statement yet, so it returns an empty result set.
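If that automatic flush before queries is undesirable in a particular unit of work, Hibernate lets you postpone it by changing the session's flush mode, at the cost of flushing explicitly yourself. A sketch against the Hibernate 3 API used in the question:

```java
Session session = sessionFactory.getCurrentSession();
session.setFlushMode(FlushMode.COMMIT); // flush at commit time only, not before queries
Query query = session.createQuery("from Feed feed");
query.list();      // no flush (and no postFlush callback) triggered here
session.flush();   // flush explicitly when queries must see pending changes
```

Note that with FlushMode.COMMIT, queries within the session may return stale results relative to unflushed changes, which is exactly the inconsistency the default AUTO mode exists to prevent.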
