Spring: HikariCP vs. c3p0, same code, different results

Environment
HikariCP version: HikariCP-java7 2.4.13
JDK version : 1.7.0_080
Database : PostgreSQL
Driver version : 9.1-901.jdbc3
@Transactional
public Integer enableItem(Long id) {
    // change item status from 0 to 1
    Integer result = itemDao.enableItem(id);
    // load item
    // with c3p0, the item status is the new value 1
    // but with Hikari, the item status is still 0
    Item item = itemDao.findItemById(id);
    return result;
}
Within the same transaction, I first change the item status from 0 to 1 and then read the item back. With c3p0 the item status is the new value 1, but with Hikari the status is still 0.
Hikari config:
<property name="driverClassName" value="#{meta['dataSource.driverClassName']}" />
<property name="jdbcUrl" value="#{meta['dataSource.url']}" />
<property name="username" value="#{meta['dataSource.username']}" />
<property name="password" value="#{meta['dataSource.password']}" />
<property name="readOnly" value="false" />
<property name="idleTimeout" value="#{meta['dataSource.maxIdleTime']}" />
<property name="connectionTimeout" value="30000" />
<property name="maxLifetime" value="1800000" />
<property name="maximumPoolSize" value="#{meta['dataSource.maxPoolSize']}" />
<property name="minimumIdle" value="#{meta['dataSource.minPoolSize']}" />
</bean>
I expect to get the latest value with Hikari. Is there any problem with the configuration?
see https://github.com/brettwooldridge/HikariCP/issues/1522

These two connection pools probably have different default values for the transaction isolation level.
Try adding
<property name="transactionIsolation" value="TRANSACTION_READ_COMMITTED" />
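To check which default isolation level each pool actually hands out, you can call getTransactionIsolation() on a borrowed Connection and compare the result against the java.sql.Connection constants. A minimal sketch of that mapping (pure JDBC constants, no pool required; the pool lookup shown in the comment is only indicative):

```java
import java.sql.Connection;

public class IsolationNames {

    // Map a java.sql.Connection isolation constant to its name.
    static String isolationName(int level) {
        switch (level) {
            case Connection.TRANSACTION_NONE:             return "TRANSACTION_NONE";
            case Connection.TRANSACTION_READ_UNCOMMITTED: return "TRANSACTION_READ_UNCOMMITTED";
            case Connection.TRANSACTION_READ_COMMITTED:   return "TRANSACTION_READ_COMMITTED";
            case Connection.TRANSACTION_REPEATABLE_READ:  return "TRANSACTION_REPEATABLE_READ";
            case Connection.TRANSACTION_SERIALIZABLE:     return "TRANSACTION_SERIALIZABLE";
            default:                                      return "UNKNOWN(" + level + ")";
        }
    }

    public static void main(String[] args) {
        // In real code: int level = dataSource.getConnection().getTransactionIsolation();
        System.out.println(isolationName(Connection.TRANSACTION_READ_COMMITTED));
        // prints TRANSACTION_READ_COMMITTED
    }
}
```

If the two pools report different levels for an idle connection, pinning transactionIsolation explicitly (as in the property above) makes the behavior consistent.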

Related

Spring Hibernate: Identify which database connection

NOTE: I know I can fix this using scripts or a database field, but I am curious about accessing the connection string.
I have two testing environments. Each has its own database, one for Chinese and one for English data. Otherwise the two databases are identical.
The only difference is the connection string in my beans.xml (ctest vs ctestzh).
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close">
<property name="driverClass" value="org.postgresql.Driver" />
<property name="jdbcUrl" value="jdbc:postgresql://localhost/ctest?useUnicode=true&amp;characterEncoding=utf8" />
<property name="user" value="testuser" />
<property name="password" value="xxxx" />
....
</bean>
<!-- Hibernate
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close">
<property name="driverClass" value="org.postgresql.Driver" />
<property name="jdbcUrl" value="jdbc:postgresql://localhost/ctestzh?useUnicode=true&amp;characterEncoding=utf8" />
<property name="user" value="testuser" />
<property name="password" value="xxxx" />
.....
</bean>-->
I use an XML configuration file that configures my application to process either English or Chinese data. If I forget to change beans.xml to point at the appropriate database, however, I corrupt a database (i.e., put Chinese data into the English database).
Can I access the connection string in code so I can fail if I am connected to the wrong database? I have looked at SessionFactory, but saw nothing obvious.
public String getConnectionString(DataSource dataSource) throws SQLException {
    try (Connection conn = dataSource.getConnection()) {
        return conn.getMetaData().getURL();
    }
}
See the DatabaseMetaData class for more info.
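A possible fail-fast guard built on that URL: parse the database name out of the JDBC URL and throw at startup if it is not the expected one. DatabaseGuard and its method names are hypothetical, for illustration only:

```java
public class DatabaseGuard {

    // Extract the database name from a JDBC URL such as
    // jdbc:postgresql://localhost/ctest?useUnicode=true
    static String databaseName(String jdbcUrl) {
        int slash = jdbcUrl.lastIndexOf('/');
        int q = jdbcUrl.indexOf('?', slash);
        return (q >= 0) ? jdbcUrl.substring(slash + 1, q) : jdbcUrl.substring(slash + 1);
    }

    // Fail fast if we are connected to the wrong database.
    static void requireDatabase(String jdbcUrl, String expectedDb) {
        String actual = databaseName(jdbcUrl);
        if (!actual.equals(expectedDb)) {
            throw new IllegalStateException(
                "Expected database '" + expectedDb + "' but URL points at '" + actual + "'");
        }
    }

    public static void main(String[] args) {
        // In real code the URL would come from getConnectionString(dataSource).
        requireDatabase("jdbc:postgresql://localhost/ctest?useUnicode=true", "ctest");
        System.out.println("database check passed");
    }
}
```

Calling this once when the application context starts would catch a ctest/ctestzh mix-up before any writes happen.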

Can we use DBCP 2 or the Tomcat connection pool for distributed transactions in Spring? Can these connection pools be used along with JOTM or Atomikos?

Initially I was using a different transaction manager for each data source, but I had trouble rolling back all data sources when a transaction failed on one of them. I want to manage multiple data sources with a single transaction manager in Spring, so I opted for JOTM or Atomikos. Both of these transaction managers use an XA connection pool (org.enhydra.jdbc.pool.StandardXAPoolDataSource), but in my project I am only allowed to use DBCP 2 (org.apache.commons.dbcp.BasicDataSource) or the Tomcat connection pool (org.apache.tomcat.jdbc.pool.DataSource). Is it possible to use either of these connection pools with JOTM or Atomikos? Please help me with this, along with a configuration example. Below are my configuration details:
<bean id="jotm" class="org.springframework.transaction.jta.JotmFactoryBean"/>
<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="userTransaction" ref="jotm" />
</bean>
<bean id="dataSource1" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
<property name="dataSource">
<bean class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
<property name="transactionManager" ref="jotm" />
<property name="driverName" value="${jdbc.d1.driver}" />
<property name="url" value="${jdbc.d1.url}" />
</bean>
</property>
<property name="user" value="${jdbc.d1.username}" />
<property name="password" value="${jdbc.d1.password}" />
</bean>
<bean id="dataSource2" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
<property name="dataSource">
<bean class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
<property name="transactionManager" ref="jotm" />
<property name="driverName" value="${jdbc.d2.driver}" />
<property name="url" value="${jdbc.d2.url}" />
</bean>
</property>
<property name="user" value="${jdbc.d2.username}" />
<property name="password" value="${jdbc.d2.password}" />
</bean>
Any other possible way to achieve this is also welcome.
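One caveat: BasicDataSource and the Tomcat pool are plain (non-XA) pools, so they cannot enlist in a two-phase commit by themselves. Atomikos does ship a last-resource wrapper for non-XA drivers, which may be acceptable when at most one such resource participates per transaction. A hedged sketch, assuming Atomikos is on the classpath (the bean id and placeholders mirror the configuration above and are illustrative only):

```xml
<bean id="dataSource1" class="com.atomikos.jdbc.nonxa.AtomikosNonXADataSourceBean"
      init-method="init" destroy-method="close">
    <property name="uniqueResourceName" value="ds1" />
    <property name="driverClassName" value="${jdbc.d1.driver}" />
    <property name="url" value="${jdbc.d1.url}" />
    <property name="user" value="${jdbc.d1.username}" />
    <property name="password" value="${jdbc.d1.password}" />
</bean>
```

With this wrapper the rollback guarantees are weaker than with a true XADataSource, so the safest route is still an XA-capable driver and pool.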

JDBCBatchItemWriter not receiving the List for batch update

I'm a newbie to Spring Batch. My requirement is to fetch records from a DB table, process them (each record can be processed independently, so I'm partitioning and using a task executor), and then update the status column in the same table based on the processing status.
A simplified version of my code is below.
Item Reader (my custom column partitioner decides the min and max values below):
<bean name="databaseReader" class="org.springframework.batch.item.database.JdbcCursorItemReader" scope="step">
<property name="dataSource" ref="dataSource"/>
<property name="sql">
<value>
<![CDATA[
select id,user_login,user_pass,age from users where id >= #{stepExecutionContext['minValue']} and id <= #{stepExecutionContext['maxValue']}
]]>
</value>
</property>
<property name="rowMapper">
<bean class="com.springapp.batch.UserRowMapper" />
</property>
<property name="verifyCursorPosition" value="false"/>
</bean>
Item Processor:
<bean id="itemProcessor" class="com.springapp.batch.UserItemProcessor" scope="step"/>
....
public class UserItemProcessor implements ItemProcessor<Users, Users> {
    @Override
    public Users process(Users users) throws Exception {
        // do some processing here..
        // update user status
        //users.setStatus(users.getId() + ": Processed by :" + Thread.currentThread().getName() + ": Processed at :" + new GregorianCalendar().getTime().toString());
        //System.out.println("Processing user :" + users + " :" + Thread.currentThread().getName());
        return users;
    }
}
Item Writer:
<bean id="databaseWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
update users set status = :status where id= :id
]]>
</value>
</property>
<property name="itemSqlParameterSourceProvider">
<bean class="org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider" />
</property>
</bean>
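For the :status and :id parameters in that SQL to bind, BeanPropertyItemSqlParameterSourceProvider reads matching JavaBean getters on the Users item. A minimal sketch of the bean (field names assumed from the reader's select list, not shown in the question):

```java
public class Users {
    private Long id;
    private String status;
    // user_login, user_pass, age omitted for brevity

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    // getStatus() backs the :status named parameter in the writer's update SQL
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}
```

If a getter is missing or misnamed, the named parameter cannot be resolved at write time.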
Step configuration:
<batch:job id="usersJob">
<batch:step id="stepOne">
<batch:partition step="worker" partitioner="myColumnRangepartitioner" handler="partitionHandler" />
</batch:step>
</batch:job>
<batch:step id="worker" >
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="databaseReader" writer="databaseWriter" commit-interval="5" processor="itemProcessor" />
</batch:tasklet>
</batch:step>
<bean id="asyncTaskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
<bean id="partitionHandler" class="org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler" scope="step">
<property name="taskExecutor" ref="asyncTaskExecutor"/>
<property name="step" ref="worker" />
<property name="gridSize" value="3" />
</bean>
Since I have specified a commit interval of 5, my understanding is that once 5 items have been processed by a partition, the JdbcBatchItemWriter will be called with a List of 5 Users objects to perform a batch JDBC update. However, with the current setup I'm receiving 1 Users object at a time during the batch update.
Is my understanding correct, or am I missing a step or some configuration?
Note: I'm using HSQL file based database for testing.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="org.hsqldb.jdbc.JDBCDriver"/>
<property name="url" value="jdbc:hsqldb:file:C://users.txt"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</bean>

Spring Transaction Manager: Rollback doesn't work

I wish to execute a few insert queries within a transaction block where, if there is any error, all the inserts will be rolled back.
I am using a MySQL database and Spring's TransactionManager for this.
The table type is InnoDB.
I have done my configuration by following the steps mentioned here.
Following is my code (for now, only one query):
TransactionDefinition def = new DefaultTransactionDefinition();
TransactionStatus status = transactionManager.getTransaction(def);
jdbcTemplate.execute(sqlInsertQuery);
transactionManager.rollback(status);
Spring config xml:
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource">
<ref bean="dataSource" />
</property>
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
Datasource config:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="${jdbc.driverClassName}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="initialSize" value="${jdbc.initialSize}" />
<property name="maxActive" value="${jdbc.maxActive}" />
<property name="minIdle" value="${jdbc.minIdle}" />
<property name="maxIdle" value="${jdbc.maxIdle}" />
<property name="testOnBorrow" value="${jdbc.testOnBorrow}" />
<property name="testWhileIdle" value="${jdbc.testWhileIdle}" />
<property name="testOnReturn" value="${jdbc.testOnReturn}" />
<property name="validationQuery" value="${jdbc.validationQuery}" />
<property name="timeBetweenEvictionRunsMillis" value="${jdbc.timeBetweenEvictionRunsMillis}" />
<!--<property name="removeAbandoned" value="true"/> <property name="removeAbandonedTimeout"
value="10"/> <property name="logAbandoned" value="false"/> -->
<property name="numTestsPerEvictionRun" value="${jdbc.numTestsPerEvictionRun}" />
</bean>
This code works perfectly fine and the record gets inserted.
But the rollback doesn't work! The rollback statement executes without any error, but has no effect.
Can anyone tell me where I am going wrong?
It appears that the issue is that your datasource is not set to have autocommit off.
<property name="defaultAutoCommit" value="false"/>
Give that a try. I have never used the TransactionManager outside of a proxy, so I am not sure if there are any other gotchas when using it directly like this, but I would recommend you look at either AOP transactions or the convenience AOP proxy annotation @Transactional, just because it is more common.
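For the @Transactional route, a sketch of the extra configuration, assuming the tx namespace is declared in the beans file and reusing the existing transactionManager bean:

```xml
<!-- enables @Transactional scanning; requires xmlns:tx="http://www.springframework.org/schema/tx" -->
<tx:annotation-driven transaction-manager="transactionManager" />
```

A method annotated with @Transactional then rolls back automatically when it throws a RuntimeException, with no manual getTransaction/rollback calls.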
EDIT:
I was finally able to resolve this by doing the following:
dmlDataSource.setDefaultAutoCommit(false); // set autocommit to false explicitly
Exception ex = (Exception) transactionTemplate.execute(new TransactionCallback() {
    public Object doInTransaction(TransactionStatus ts) {
        try {
            dmlJdbcTemplate.execute(sqlInsertQuery);
            ts.setRollbackOnly();
            dmlDataSource.setDefaultAutoCommit(true); // set autocommit back to true
            return null;
        } catch (Exception e) {
            ts.setRollbackOnly();
            LOGGER.error(e);
            dmlDataSource.setDefaultAutoCommit(true); // set autocommit back to true
            return e;
        }
    }
});
I am not using the transaction manager directly now. I am using the transactionTemplate and doing the following:
Exception ex = (Exception) transactionTemplate.execute(new TransactionCallback() {
    public Object doInTransaction(TransactionStatus ts) {
        try {
            dmlJdbcTemplate.execute(sqlInsertQuery);
            ts.setRollbackOnly();
            return null;
        } catch (Exception e) {
            ts.setRollbackOnly();
            LOGGER.error(e);
            return e;
        }
    }
});
After using @Moles-JWS's answer I am now able to roll back successfully. But I want to handle this only in this method, not change the global configuration of the datasource.
Can I do it here programmatically?

Spring Batch - Issue with PageSize in JdbcPagingItemReader

Hi, we are working on a Spring Batch job which processes all the SKUs in a SKU table and sends a request to an inventory system to get the inventory details. We need to send 100 SKU IDs at a time, so we have set the pageSize to 100.
In the reader log we see:
SELECT * FROM (SELECT S_ID, S_PRNT_PRD, SQ, ROWNUM as TMP_ROW_NUM FROM XXX_SKU WHERE SQ >= :min and SQ <= :max ORDER BY SQ ASC) WHERE ROWNUM <= 100
But we observe in the writer that for some requests 100 SKUs are sent and for other requests only 1 SKU is sent.
public void write(List<? extends XXXPagingBean> pItems) throws XXXSkipItemException {
    if (mLogger.isLoggingDebug()) {
        mLogger.logDebug("XXXInventoryServiceWriter.write() method STARTING, ItemsList size:{0}" + pItems.size());
    }
    ....
    ....
}
pageSize and commitInterval are both set to 100 (are these supposed to be the same?)
Should the sortKey (SEQ_ID) be the same column as the one used in the partitioner?
Bean configurations in XML:
<!-- InventoryService Writer configuration -->
<bean id="inventoryGridService" class="atg.nucleus.spring.NucleusResolverUtil" factory-method="resolveName">
<constructor-arg value="/com/XXX/gigaspaces/inventorygrid/service/InventoryGridService" />
</bean>
<bean id="inventoryWriter" class="com.XXX.batch.integrations.XXXXXX.XXXXInventoryServiceWriter" scope="step">
<property name="jdbcTemplate" ref="batchDsTemplate"></property>
<property name="inventoryGridService" ref="inventoryGridService" />
</bean>
<bean id="pagingReader" class="org.springframework.batch.item.database.JdbcPagingItemReader" xmlns="http://www.springframework.org/schema/beans" scope="step">
<property name="dataSource" ref="dataSource" />
<property name="queryProvider">
<bean id="productQueryProvider" class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="selectClause" value="select S_ID ,S_PRNT_PRD" />
<property name="fromClause" value="from XXX_SKU" />
<property name="sortKey" value="SEQ_ID" />
<property name="whereClause" value="SEQ_ID>=:min and SEQ_ID <=:max"></property>
</bean>
</property>
<property name="parameterValues">
<map>
<entry key="min" value="#{stepExecutionContext[minValue]}"></entry>
<entry key="max" value="#{stepExecutionContext[maxValue]}"></entry>
</map>
</property>
<property name="pageSize" value="100" />
<property name="rowMapper">
<bean class="com.XXX.batch.integrations.endeca.XXXPagingRowMapper"></bean>
</property>
</bean>
Please suggest.
Remove the whereClause from the productQueryProvider bean definition and get rid of the parameterValues, and it should work. The PagingQueryProvider takes care of paging automatically for you; there is no need to do that manually yourself.
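In other words, the query provider would reduce to something like the sketch below, with the reader deriving each page's bounds from the sortKey itself (based on the beans shown above; a sketch, not a drop-in replacement for the partitioned setup):

```xml
<bean id="productQueryProvider" class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="selectClause" value="select S_ID, S_PRNT_PRD" />
    <property name="fromClause" value="from XXX_SKU" />
    <property name="sortKey" value="SEQ_ID" />
</bean>
```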
