Camel: Aggregator doesn't persist Exchange properties - jdbc

I'm using camel:aggregate backed by JDBC, and it seems it doesn't save Exchange properties. For instance, suppose I configure the following route, stop the execution once aggregation has completed but just before camel:to(log) runs, and then restart, forcing the aggregation to retrieve its data from the database. In that case camel:to(log) won't print the property myProperty:
<camel:route id="myRoute">
<camel:from uri="direct:in"/>
<camel:setProperty propertyName="myProperty">
<camel:constant>myPropertyValue</camel:constant>
</camel:setProperty>
<camel:aggregate strategyRef="myStrategy" aggregationRepositoryRef="myAggregationRepo" discardOnCompletionTimeout="true" completionTimeout="86400000" >
<camel:correlationExpression>
<camel:simple>${property.partlastcorrelationkey}</camel:simple>
</camel:correlationExpression>
<camel:completionPredicate>
<camel:simple>${property.partlastcorrelationwaitmore} == false</camel:simple>
</camel:completionPredicate>
<camel:to uri="log:com.test?showAll=true"/>
</camel:aggregate>
</camel:route>
My aggregation repository is configured this way:
<bean id="myAggregationRepo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository" init-method="start" destroy-method="stop">
<property name="transactionManager" ref="transactionManager"/>
<property name="repositoryName" value="PROC_AGG"/>
<property name="dataSource" ref="oracle-ds"/>
<property name="lobHandler">
<bean class="org.springframework.jdbc.support.lob.OracleLobHandler">
<property name="nativeJdbcExtractor">
<bean class="org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor"/>
</property>
</bean>
</property>
</bean>
How can I save properties when using the Aggregator?

I'll answer my own question. As seen in the code, JdbcCamelCodec doesn't allow saving properties when backing the Aggregator with a database:
public final class JdbcCamelCodec {

    public byte[] marshallExchange(CamelContext camelContext, Exchange exchange) throws IOException {
        // use DefaultExchangeHolder to marshal to a serialized object
        DefaultExchangeHolder pe = DefaultExchangeHolder.marshal(exchange, false);
        // add the aggregated size property as the only property we want to retain
        DefaultExchangeHolder.addProperty(pe, Exchange.AGGREGATED_SIZE, exchange.getProperty(Exchange.AGGREGATED_SIZE, Integer.class));
        // add the aggregated completed by property to retain
        DefaultExchangeHolder.addProperty(pe, Exchange.AGGREGATED_COMPLETED_BY, exchange.getProperty(Exchange.AGGREGATED_COMPLETED_BY, String.class));
        // add the aggregated correlation key property to retain
        DefaultExchangeHolder.addProperty(pe, Exchange.AGGREGATED_CORRELATION_KEY, exchange.getProperty(Exchange.AGGREGATED_CORRELATION_KEY, String.class));
        // persist the from endpoint as well
        if (exchange.getFromEndpoint() != null) {
            DefaultExchangeHolder.addProperty(pe, "CamelAggregatedFromEndpoint", exchange.getFromEndpoint().getEndpointUri());
        }
        return encode(pe);
    }
    // ... (rest of the class omitted)
}
Basically, the problem lies in this line, where false means "don't save properties":
DefaultExchangeHolder pe = DefaultExchangeHolder.marshal(exchange, false);
Only the headers and the body are stored in the database.
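Since headers do survive persistence, one workaround (a hedged sketch, not an official fix; it assumes the Java DSL, reuses the bean names from the XML above, and invents the header name myPropertyHeader) is to stash the property in a header before the aggregator and restore it afterwards. Note that your AggregationStrategy must also carry that header across aggregated exchanges:
import org.apache.camel.builder.RouteBuilder;

public class PropertyViaHeaderRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:in")
            .setProperty("myProperty", constant("myPropertyValue"))
            // headers are marshalled by JdbcCamelCodec, so stash the property in one
            .setHeader("myPropertyHeader", simple("${property.myProperty}"))
            .aggregate(simple("${property.partlastcorrelationkey}"))
                .aggregationStrategyRef("myStrategy")
                .aggregationRepositoryRef("myAggregationRepo")
                .completionPredicate(simple("${property.partlastcorrelationwaitmore} == false"))
                .completionTimeout(86400000)
                .discardOnCompletionTimeout()
                // after recovery from the repository, restore the property from the header
                .setProperty("myProperty", simple("${header.myPropertyHeader}"))
                .to("log:com.test?showAll=true")
            .end();
    }
}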

Related

Hibernate Transaction Manager not committing data changes

I'm using Hibernate 4 to write data to an H2 embedded in-memory database, and there seems to be a problem with transactions. The application already uses Oracle, and H2 has been added with a separate DataSource, SessionFactory, and TransactionManager. The original TransactionManager is marked as default, and the H2 TransactionManager has the qualifier memTransactions.
The following code - specifically the load function - correctly populates the memEvents variable at the end with the written data.
@Repository
@Transactional(transactionManager = "memTransactions")
public class EventMemDaoHibernate implements EventMemDao {

    @Autowired
    @Qualifier(value = "memSessionFactory")
    private SessionFactory memSessionFactory;

    @Override
    public List<EventMem> getEvents() {
        return memSessionFactory.getCurrentSession().createCriteria(EventMem.class).list();
    }

    @Override
    public void load(List<Event> allEvents) {
        Session session = memSessionFactory.getCurrentSession();
        for (Event e : allEvents) {
            EventMem memEvent = new EventMem(e);
            session.save(memEvent);
        }
        List<EventMem> memEvents = getEvents(); // correct
    }
}
However, the following code produces an empty memEvents list:
@Autowired
private EventMemDao eventMemDao;

List<Event> allEvents = eventDao.getAllEvents();
eventMemDao.load(allEvents); // calls the load function shown above
List<EventMem> memEvents = eventMemDao.getEvents(); // empty
I assume this is related to transaction management (e.g. data is not auto-committed after the call to .save()). However, when I tried explicitly beginning and committing a transaction within EventMemDaoHibernate#load, I received this error:
nested transactions not supported
So, from what I can tell, the TransactionManager is working.
My TransactionManager and related bean definitions are shown below.
<bean
    id="memTransactionManager"
    class="org.springframework.orm.hibernate4.HibernateTransactionManager">
    <property name="sessionFactory" ref="memSessionFactory" />
    <qualifier value="memTransactions"/>
</bean>

<bean id="hDataSource" class="org.h2.jdbcx.JdbcDataSource">
    <property name="url" value="jdbc:h2:mem:db1;DB_CLOSE_DELAY=-1;INIT=RUNSCRIPT FROM 'classpath:scripts/init-h2.sql'" />
    <property name="user" value="sa" />
    <property name="password" value="" />
</bean>

<bean
    id="memSessionFactory"
    class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource" ref="hDataSource" />
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop>
        </props>
    </property>
</bean>
This was due to a configuration error on my part (of course). I didn't fully grasp that the connection URL is evaluated every time a session is opened against H2, which means init-h2.sql was executed repeatedly. Since init-h2.sql included a truncate followed by an insert, it was dropping and recreating the data every time Hibernate opened a session.
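One way to avoid re-running the script on every connection is to keep INIT=RUNSCRIPT out of the URL and build the embedded database once at startup. A minimal hedged sketch, assuming a reasonably recent spring-jdbc on the classpath (bean wiring omitted; the factory class name is illustrative):
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

public class MemDataSourceFactory {

    /** Runs init-h2.sql once at startup, not on every new session/connection. */
    public static DataSource createDataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .setName("db1")
                .addScript("classpath:scripts/init-h2.sql")
                .build();
    }
}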

Spring batch MongoItemReader to read as json

I am trying to write a Spring Batch job that reads documents from a Mongo DB and writes them to a CMS (for now I will attempt to test this with WireMock). Can I set up the job and ItemReader without specifying the exact structure of the document? I would just like to read each document as JSON and then send that JSON through to the CMS. Is this even possible?
Since JSON is just a String, you should configure your MongoItemReader for the String type and provide the MongoTemplate with a simple custom converter:
import org.springframework.core.convert.converter.Converter;
import com.mongodb.DBObject;

public class DBObjectToStringConverter implements Converter<DBObject, String> {

    @Override
    public String convert(DBObject source) {
        return source == null ? null : source.toString();
    }
}
This one just returns the JSON String representation of the DBObject.
Then the configuration:
<mongo:db-factory/>

<mongo:mapping-converter id="mappingConverter">
    <mongo:custom-converters>
        <mongo:converter>
            <bean class="com.my.batch.mongo.DBObjectToStringConverter"/>
        </mongo:converter>
    </mongo:custom-converters>
</mongo:mapping-converter>

<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg ref="mongoDbFactory"/>
    <constructor-arg ref="mappingConverter"/>
</bean>

<bean class="org.springframework.batch.item.data.MongoItemReader">
    <property name="template" ref="mongoTemplate"/>
    <property name="query" value="..."/>
    <property name="targetType" value="java.lang.String"/>
</bean>
And voilà! Each item is returned as a JSON String.
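For completeness, a hedged Java-config sketch of the same reader, assuming a Spring Batch version whose MongoItemReader accepts a JSON query string. The match-all query and the sort on _id are illustrative (the reader pages through results, so a sort is needed):
import java.util.Collections;
import org.springframework.batch.item.data.MongoItemReader;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;

public class JsonReaderFactory {

    public static MongoItemReader<String> jsonReader(MongoTemplate template) throws Exception {
        MongoItemReader<String> reader = new MongoItemReader<>();
        reader.setTemplate(template);                    // the template with the custom converter
        reader.setQuery("{}");                           // illustrative: match all documents
        reader.setSort(Collections.singletonMap("_id", Sort.Direction.ASC)); // paging needs a sort
        reader.setTargetType(String.class);              // each item comes back as a JSON String
        reader.afterPropertiesSet();
        return reader;
    }
}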

Transactions not working with Spring 3.1 – H2 – JUnit 4 – Hibernate 3.2

I have the following test...
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/schedule-agents-config-context.xml"})
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
@Transactional
public class H2TransactionNotWorkingTest extends SubmitAgentIntegratedTestBase {

    private static final int FIVE_SUBMISSIONS = 5;

    @Autowired
    private ApplicationSubmissionInfoDao submissionDao;

    private FakeApplicationSubmissionInfoRepository fakeRepo;

    @Before
    public void setUp() {
        fakeRepo = fakeRepoThatNeverFails(submissionDao, null);
        submitApplication(FIVE_SUBMISSIONS, fakeRepo);
    }

    @Test
    @Rollback(true)
    public void shouldSaveSubmissionInfoWhenFailureInDatabase() {
        assertThat(fakeRepo.retrieveAll(), hasSize(FIVE_SUBMISSIONS));
    }

    @Test
    @Rollback(true)
    public void shouldSaveSubmissionInfoWhenFailureInXmlService() {
        assertThat(fakeRepo.retrieveAll().size(), equalTo(FIVE_SUBMISSIONS));
    }
}
...and the following config...
<jdbc:embedded-database id="dataSource" type="H2">
<jdbc:script location="classpath:/db/h2-schema.sql" />
</jdbc:embedded-database>
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionalSessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop>
<prop key="hibernate.show_sql">false</prop>
<prop key="hibernate.cache.use_second_level_cache">false</prop>
<prop key="hibernate.cache.use_query_cache">false</prop>
</props>
</property>
<property name="namingStrategy">
<bean class="org.hibernate.cfg.ImprovedNamingStrategy"/>
</property>
<property name="configurationClass" value="org.hibernate.cfg.AnnotationConfiguration"/>
<property name="packagesToScan" value="au.com.mycomp.life.snapp"/>
</bean>
<bean id="regionDependentProperties" class="org.springframework.core.io.ClassPathResource">
<constructor-arg value="region-dependent-service-test.properties"/>
</bean
>
I have also set autocommit to false in the SQL script:
SET AUTOCOMMIT FALSE;
There is no REQUIRES_NEW in the code.
Why is the rollback not working in the test?
Cheers,
Prabin
I have faced the same problem, but I finally solved it, albeit without Hibernate (which shouldn't really matter).
The key to making it work was extending the proper Spring unit test class, i.e. AbstractTransactionalJUnit4SpringContextTests. Note the "Transactional" in the class name. The skeleton of a working transactional unit test class thus looks like:
@ContextConfiguration(locations = {"classpath:/com/.../testContext.xml"})
public class Test extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    @Transactional
    public void test() {
    }
}
The associated XML context file contains the following items:
<jdbc:embedded-database id="dataSource" type="H2" />
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"></property>
</bean>
Using this setup, the modifications made by each test method are properly rolled back.
Regards, Ola
I'm experiencing similar problems; I'm also using TestNG + Spring test support and Hibernate. What happens is that Hibernate disables autocommit on the connection before the transaction begins, and it remembers the original autocommit setting:
org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction:
@Override
protected void doBegin() {
    try {
        if ( managedConnection != null ) {
            throw new TransactionException( "Already have an associated managed connection" );
        }
        managedConnection = transactionCoordinator().getJdbcCoordinator().getLogicalConnection().getConnection();
        wasInitiallyAutoCommit = managedConnection.getAutoCommit();
        LOG.debugv( "initial autocommit status: {0}", wasInitiallyAutoCommit );
        if ( wasInitiallyAutoCommit ) {
            LOG.debug( "disabling autocommit" );
            managedConnection.setAutoCommit( false );
        }
    }
    catch( SQLException e ) {
        throw new TransactionException( "JDBC begin transaction failed: ", e );
    }
    isDriver = transactionCoordinator().takeOwnership();
}
Later on, after rolling back the transaction, it releases the connection. In doing so, Hibernate also restores the original autocommit setting on the connection (so that others who might be handed the same connection start with the original setting). However, setting autocommit during a transaction triggers an explicit commit; see the JavaDoc.
In the code below you can see this happening: the rollback is issued, and finally the connection is released in releaseManagedConnection. There the autocommit is re-set, which triggers a commit:
org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction:
@Override
protected void doRollback() throws TransactionException {
    try {
        managedConnection.rollback();
        LOG.debug( "rolled JDBC Connection" );
    }
    catch( SQLException e ) {
        throw new TransactionException( "unable to rollback against JDBC connection", e );
    }
    finally {
        releaseManagedConnection();
    }
}

private void releaseManagedConnection() {
    try {
        if ( wasInitiallyAutoCommit ) {
            LOG.debug( "re-enabling autocommit" );
            managedConnection.setAutoCommit( true );
        }
        managedConnection = null;
    }
    catch ( Exception e ) {
        LOG.debug( "Could not toggle autocommit", e );
    }
}
This should not normally be a problem, because AFAIK the transaction should have ended after the rollback. Moreover, even if I issue a commit after a rollback, it should not commit any changes if there were no changes between the rollback and the commit. From the JavaDoc on commit:
Makes all changes made since the previous commit/rollback permanent
and releases any database locks currently held by this Connection
object. This method should be used only when auto-commit mode has been
disabled.
In this case there were no changes between rollback and commit, since the commit (triggered indirectly by re-setting autocommit) happens only a few statements later.
A workaround seems to be to disable autocommit. This avoids restoring autocommit (since it was not enabled in the first place) and thus prevents the commit from happening. You can do this by manipulating the id of the embedded-datasource bean. The id is used not only to identify the datasource, but also as the database name:
<jdbc:embedded-database id="dataSource;AUTOCOMMIT=OFF" type="H2"/>
This creates a database named "dataSource". The extra parameter is interpreted by H2. Spring will also create a bean named "dataSource;AUTOCOMMIT=OFF". If you depend on bean names for injection, you can create an alias to make it cleaner:
<alias name="dataSource;AUTOCOMMIT=OFF" alias="dataSource"/>
(There isn't a cleaner way to manipulate the embedded-database namespace config; I wish the Spring team had made this more configurable.)
Note: disabling autocommit via the script (<jdbc:script location="...") might not work, since there is no guarantee that the same connection will be re-used for your test.
Note: this is not a real fix but merely a workaround. There is still something wrong that causes the data to be committed after a rollback has occurred.
----EDIT----
After more searching I found the real problem. If you are using HibernateTransactionManager (as I was) and you use your database both via the SessionFactory (Hibernate) and directly via the DataSource (plain JDBC), you should pass both the SessionFactory and the DataSource to the HibernateTransactionManager. From the JavaDoc:
Note: To be able to register a DataSource's Connection for plain JDBC code, this instance needs to be aware of the DataSource (setDataSource(javax.sql.DataSource)). The given DataSource should obviously match the one used by the given SessionFactory.
So eventually I did this:
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
<property name="dataSource" ref="dataSource" />
</bean>
And everything worked for me.
Note: the same goes for JpaTransactionManager! If you both use the EntityManager and perform raw JDBC access via the DataSource, you should supply the DataSource separately next to the EMF. Also, don't forget to use DataSourceUtils to obtain a connection (or JdbcTemplate, which uses DataSourceUtils internally to obtain the connection).
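To illustrate that last point, a minimal hedged sketch of transaction-aware plain JDBC access (the table and SQL are made up; the connection participates in the current Spring-managed transaction):
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class RawJdbcAccess {

    public static void runInTransaction(DataSource dataSource) throws SQLException {
        // transaction-aware: returns the connection bound to the current transaction, if any
        Connection con = DataSourceUtils.getConnection(dataSource);
        try {
            // plain JDBC work; commit/rollback is driven by the transaction manager
            con.createStatement().execute("UPDATE my_table SET processed = 1"); // illustrative SQL
        } finally {
            // hands the connection back to the transaction context instead of closing it outright
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}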
----EDIT----
Alright, while the above did solve my problem, it was not the real cause after all :)
In normal cases, when using Spring's LocalSessionFactoryBean, setting the datasource has no effect since it's done for you:
If the SessionFactory was configured with LocalDataSourceConnectionProvider, i.e. by Spring's LocalSessionFactoryBean with a specified "dataSource", the DataSource will be auto-detected: You can still explicitly specify the DataSource, but you don't need to in this case.
In my case the problem was that we had created a caching factory bean extending LocalSessionFactoryBean. We only use this during testing, to avoid booting the SessionFactory multiple times. As mentioned before, Spring test support boots multiple application contexts if the resource key differs. This caching mechanism mitigates that overhead completely and ensures only one SF is loaded.
This means that the same SessionFactory is returned for different booted application contexts. Also, the datasource passed to the SF is the datasource from the first context that booted the SF. This is all fine, but the DataSource itself is a new object in each new application context. This creates a discrepancy:
The transaction is started by the HibernateTransactionManager. The datasource used for transaction synchronization is obtained from the SessionFactory (so again: the cached SessionFactory, with the DataSource instance from the application context the SessionFactory was initially loaded in). When using the DataSource directly in your test (or production code), you'll be using the instance belonging to the app context active at that point. This instance does not match the instance used for transaction synchronization (the one extracted from the SF). This results in problems, as the connection obtained will not properly participate in the transaction.
Explicitly setting the datasource on the transaction manager appeared to solve this, since post-initialization then does not obtain the datasource from the SF but uses the injected one instead. The appropriate fix for me was to adjust the caching mechanism and replace the datasource in the cached SF with the one from the current app context each time the SF is returned from the cache.
Conclusion: you can ignore my post :) As long as you're using HibernateTransactionManager or JtaTransactionManager in combination with some kind of Spring support factory bean for the SF or EM, you should be fine, even when mixing vanilla JDBC with Hibernate. In the latter case, don't forget to obtain connections via DataSourceUtils (or use JdbcTemplate).
Try this:
Remove the org.springframework.jdbc.datasource.DataSourceTransactionManager
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>
and replace it with the org.springframework.orm.jpa.JpaTransactionManager
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>
or inject an EntityManagerFactory instead...
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
You then need an EntityManagerFactory, like the following:
<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="jpaVendorAdapter">
        <bean id="jpaAdapter"
              class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
            <property name="showSql" value="true" />
            <property name="generateDdl" value="true" />
        </bean>
    </property>
</bean>
You haven't shown all the pieces of the puzzle. My guess at this point would be that your ApplicationSubmissionInfoDao is transactional and committing on its own, though I'd think that would conflict with the test transactions if everything were configured properly. To get more of an answer, ask a more complete question. The best thing would be to post an SSCCE.
Thanks, Ryan.
The test code is something like this:
@Test
@Rollback(true)
public void shouldHave5ApplicationSubmissionInfo() {
    for (int i = 0; i < 5; i++) {
        hibernateTemplate.saveOrUpdate(new ApplicationSubmissionInfoBuilder()
                .with(NOT_PROCESSED)
                .build());
    }
    assertThat(repo.retrieveAll(), hasSize(5));
}

@Test
@Rollback(true)
public void shouldHave5ApplicationSubmissionInfoAgainButHas10() {
    for (int i = 0; i < 5; i++) {
        hibernateTemplate.saveOrUpdate(new ApplicationSubmissionInfoBuilder()
                .with(NOT_PROCESSED)
                .build());
    }
    assertThat(repo.retrieveAll(), hasSize(5));
}
I figured out that an embedded DB defined using jdbc:embedded-database doesn't have transaction support. When I used Commons DBCP to define the datasource and set default auto-commit to false, it worked.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" scope="singleton">
<property name="driverClassName" value="org.h2.Driver" />
<property name="url" value="jdbc:h2:~/snappDb"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
<property name="defaultAutoCommit" value="false" />
<property name="connectionInitSqls" value=""/>
</bean>
None of the above worked for me!
However, the stack I am using is [spring-test 4.2.6.RELEASE, spring-core 4.2.6.RELEASE, spring-data 1.10.1.RELEASE].
The thing is, any unit test class run with [SpringJUnit4ClassRunner.class] gets auto-rollback behavior by Spring library design;
check org.springframework.test.context.transaction.TransactionalTestExecutionListener >> isDefaultRollback.
To overcome this behavior, just annotate the unit test class with:
@Rollback(value = false)
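A hedged sketch of what that looks like on a test class, assuming Spring 4.2+ (class name and config location are illustrative):
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.annotation.Rollback;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "/test-context.xml") // illustrative config location
@Transactional
@Rollback(value = false) // commit test changes instead of rolling them back
public class CommitInsteadOfRollbackTest {

    @Test
    public void changesAreCommitted() {
        // inserts performed here survive the test method
    }
}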

Loading Schema in Spring XML

I'm trying to load an XSD file as a Schema instance in my application context XML file. We're currently building it in Java code, but I'd like to inject it instead.
Here's what the consuming class looks like, if that clarifies things:
class XmlBuilder {
    ...
    Schema schema; // set by Spring

    public String createXml(Object param1) {
        // create xml
        Validator validator = schema.newValidator();
        try {
            validator.validate(documentSS);
        } catch (SAXException | IOException e) { // validate() also declares IOException
            // log and convert to a proper exception
        }
        return xml;
    }
}
The schema is on the app's classpath; I'm just having trouble loading the file and creating the Schema object. I can get this far:
<bean id="schemaFactory" class="javax.xml.validation.SchemaFactory" factory-method="newInstance">
<constructor-arg value="http://www.w3.org/2001/XMLSchema"/>
</bean>
<bean id="xmlBuilder" class="XmlBuilder">
<property name="schema">
<bean factory-bean="schemaFactory" factory-method="newSchema">
<!-- missing bit -->
</bean>
</property>
</bean>
But I'm stuck on loading the file and passing it into the factory method.
I ended up using the ResourceSource class from 'org.springframework.ws:spring-xml':
<property name="schema">
<bean class="org.springframework.xml.transform.ResourceSource">
<constructor-arg value="classpath:/schema.xsd"/>
</bean>
</property>
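If you'd rather skip the spring-xml helper, a plain Java sketch of the same idea (a hedged alternative, assuming schema.xsd sits at the classpath root):
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import org.xml.sax.SAXException;

public class SchemaLoader {

    public static Schema loadSchema() throws SAXException {
        // equivalent of the schemaFactory bean above
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // equivalent of passing the classpath resource into newSchema
        return factory.newSchema(new StreamSource(
                SchemaLoader.class.getResourceAsStream("/schema.xsd")));
    }
}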

Does order matter while injecting properties in ProxyFactoryBean

I am trying to inject aspects into a service. For this service I am creating a proxied object the classic way.
I have written a bean, baseProxy, of type ProxyFactoryBean, which contains a list of all the required advices:
<bean id="baseProxy" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="interceptorNames">
<list>
<value>methodInvocationAdvice</value>
</list>
</property>
</bean>
I am creating a proxy for the service like this :
<bean id="singproxy" parent="baseProxy">
<property name="target" ref="singtarget" />
<property name="targetClass" value="com.spring.learning.SingingService"></property>
</bean>
This doesn't work, but when I reverse these two properties and write this:
<bean id="singproxy" parent="baseProxy">
<property name="targetClass" value="com.spring.learning.SingingService"></property>
<property name="target" ref="singtarget" />
</bean>
To my surprise, it works fine. In Spring, does the order of a bean's properties matter? Or is this a special case with ProxyFactoryBean?
I tried this with Spring 3.0; I am not sure whether the same behavior exists in previous versions.
Concerning target and targetClass: it's one or the other, not both. Here's the relevant source (from org.springframework.aop.framework.AdvisedSupport, a parent class of ProxyFactoryBean):
public void setTarget(Object target) {
    setTargetSource(new SingletonTargetSource(target));
}

public void setTargetSource(TargetSource targetSource) {
    this.targetSource = (targetSource != null ? targetSource : EMPTY_TARGET_SOURCE);
}

public void setTargetClass(Class targetClass) {
    this.targetSource = EmptyTargetSource.forClass(targetClass);
}
As you can see, both setTarget() and setTargetClass() write to the same field, so the last assignment wins.
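A tiny hedged demo of that effect (SingingService is stubbed locally; in the XML case the property setters run in document order, so the same overwrite applies):
import org.springframework.aop.framework.ProxyFactoryBean;

public class OrderDemo {

    static class SingingService {} // stand-in for com.spring.learning.SingingService

    public static void main(String[] args) {
        ProxyFactoryBean pfb = new ProxyFactoryBean();
        // Order used by the broken XML: target first, then targetClass.
        pfb.setTarget(new SingingService());       // targetSource = SingletonTargetSource
        pfb.setTargetClass(SingingService.class);  // overwrites it with EmptyTargetSource: target lost
        System.out.println(pfb.getTargetSource()); // EmptyTargetSource -> the proxy has no target
        // Swapping the two calls leaves the SingletonTargetSource in place,
        // which is why the second XML ordering works.
    }
}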
