I have run into an issue where the Actuator health probe reports JMS as down even though my routes can connect and produce messages to JMS. In short, Actuator says it is down while it is clearly working.
Tech stack and tech notes:
Spring-boot: 2.3.1.RELEASE
Camel: 3.4.1
Artemis: 2.11.0
Artemis has been set up to use a username and password (artemis/artemis).
Using org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory as the connection factory.
My route is as simple as chips:
<route id="timer-cluster-producer-route">
  <from uri="timer:producer-ticker?delay=5000"/>
  <setBody>
    <groovy>
      result = ["Name":"Johnny"]
    </groovy>
  </setBody>
  <marshal>
    <json library="Jackson"/>
  </marshal>
  <to uri="ref:jms-producer-cluster-event"/>
</route>
XML Based Artemis Configuration
With Spring-boot favoring Java-based configuration, I am busy migrating our XML beans accordingly. Thus I took a working beans.xml file, pasted it into the project, fired up the route, and messages flowed while the health check returned OK.
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
       http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

  <bean id="jmsConnectionFactory"
        class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616" />
    <property name="user" value="artemis"/>
    <property name="password" value="artemis"/>
    <property name="connectionLoadBalancingPolicyClassName" value="org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy"/>
  </bean>

  <!--org.messaginghub.pooled.jms.JmsPoolConnectionFactory-->
  <!--org.apache.activemq.jms.pool.PooledConnectionFactory-->
  <bean id="jmsPooledConnectionFactory"
        class="org.apache.activemq.jms.pool.PooledConnectionFactory"
        init-method="start" destroy-method="stop">
    <property name="maxConnections" value="64" />
    <property name="maximumActiveSessionPerConnection" value="500" />
    <property name="connectionFactory" ref="jmsConnectionFactory" />
  </bean>

  <bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
    <property name="connectionFactory" ref="jmsPooledConnectionFactory" />
    <property name="concurrentConsumers" value="1" />
    <property name="artemisStreamingEnabled" value="true"/>
  </bean>

  <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="configuration" ref="jmsConfig"/>
  </bean>

  <!-- <bean id="activemq"
        class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="configuration" ref="jmsConfig" />
  </bean>-->
</beans>
Spring-boot Auto (Black)Magic Configuration
I then used the application.yaml file to configure the Artemis connection, as outlined in the Spring-boot documentation. This also worked when my application.yaml file contained the following configuration:
spring:
  artemis:
    user: artemis
    host: localhost
    password: artemis
    pool:
      max-sessions-per-connection: 500
      enabled: true
      max-connections: 16
This worked like a charm.
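One note I only worked out later (see the observations further down): the pool.* settings only take effect when a JMS connection pool implementation is on the classpath. If I read the Spring-boot 2.3 auto-configuration right, that means adding the pooled-jms dependency (version managed by the Boot BOM):
<dependency>
  <groupId>org.messaginghub</groupId>
  <artifactId>pooled-jms</artifactId>
</dependency>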
Brave Attempt At Java Configuration.
So I then went for gold and tried the Java based configuration as outlined below:
@SpringBootApplication
@ImportResource("classpath:/camel/camel.xml")
public class ClusterProducerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClusterProducerApplication.class, args);
    }

    @Bean
    public JmsComponent jms() throws JMSException {
        // Create the connection factory which will be used to connect to Artemis
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory();
        cf.setBrokerURL("tcp://localhost:61616");
        cf.setUser("artemis");
        cf.setPassword("artemis");
        // Create the connection pool using the connection factory
        PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
        pooledConnectionFactory.setMaxConnections(2);
        pooledConnectionFactory.setConnectionFactory(cf);
        // Create the configuration which uses the pooled connection factory
        JmsConfiguration jmsConfiguration = new JmsConfiguration();
        jmsConfiguration.setConcurrentConsumers(2);
        jmsConfiguration.setArtemisStreamingEnabled(true);
        jmsConfiguration.setConnectionFactory(pooledConnectionFactory);
        // Create the Camel JMS component and wire it to our Artemis configuration
        JmsComponent jms = new JmsComponent();
        jms.setConfiguration(jmsConfiguration);
        return jms;
    }
}
When Camel starts up I see the following warning logged:
2020-07-28 12:33:38.631 WARN 25329 --- [)-192.168.1.158] o.s.boot.actuate.jms.JmsHealthIndicator : JMS health check failed
javax.jms.JMSSecurityException: AMQ229031: Unable to validate user from /127.0.0.1:42028. Username: null; SSL certificate subject DN: unavailable
After the 5 second delay the timer kicks in and messages are produced. I logged into the Artemis console, where I can browse the messages and see them being created. However, when I run a GET on the actuator health endpoint I see the following:
"jms": {
"status": "DOWN",
"details": {
"error": "javax.jms.JMSSecurityException: AMQ229031: Unable to validate user from /127.0.0.1:42816. Username: null; SSL certificate subject DN: unavailable"
}
},
This feels like a bit of a bug to me.
Observations about connection pooling implementations.
I noticed that AMQ connection pooling has been moved into the following maven dependency:
<dependency>
  <groupId>org.messaginghub</groupId>
  <artifactId>pooled-jms</artifactId>
</dependency>
I thought I would give that a try as well. It shows the same behaviour as outlined above, with one more interesting thing: when using org.messaginghub:pooled-jms as the connection pool (recommended by the Spring-boot docs as well), the following is logged on startup.
2020-07-28 12:41:37.255 INFO 26668 --- [ main] o.m.pooled.jms.JmsPoolConnectionFactory : JMS ConnectionFactory on classpath is not a JMS 2.0+ version.
This is weird, as according to the official repo the connector is JMS 2.0 compliant.
Quick Summary:
It appears that Actuator does not pick up the credentials of the connection factory when the JMS component is configured via Java. While a workaround exists at the moment (using the Spring-boot application.yaml configuration), it limits the way you can configure JMS clients in Camel.
So after some digging and reaching out to the Spring-boot people on GitHub, I found what the issue is. When using Java configuration, I am configuring the JMS component of Camel with a connection factory. However, Spring-boot is completely unaware of this, as it is a Camel component. Thus the connection factory used by the JMS component needs to be exposed to Spring-boot for the health check to work.
The fix is relatively simple. See code below:
@Configuration
public class ApplicationConfiguration {

    private ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory();

    @Bean
    public JmsComponent jms() throws JMSException {
        // Create the connection factory which will be used to connect to Artemis
        cf.setBrokerURL("tcp://localhost:61616");
        cf.setUser("artemis");
        cf.setPassword("artemis");
        // Set up connection pooling
        PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
        pooledConnectionFactory.setMaxConnections(2);
        pooledConnectionFactory.setConnectionFactory(cf);
        JmsConfiguration jmsConfiguration = new JmsConfiguration();
        jmsConfiguration.setConcurrentConsumers(2);
        jmsConfiguration.setArtemisStreamingEnabled(true);
        jmsConfiguration.setConnectionFactory(pooledConnectionFactory);
        // Create the Camel JMS component and wire it to our Artemis connection factory
        JmsComponent jms = new JmsComponent();
        jms.setConfiguration(jmsConfiguration);
        return jms;
    }

    /*
     * This bean exposes the connection factory to Spring-boot, so the JMS
     * health indicator checks the same (credentialed) factory.
     */
    @Bean
    public ConnectionFactory jmsConnectionFactory() {
        return cf;
    }
}
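For what it is worth, here is a minimal alternative sketch of the same fix (broker URL and credentials taken from the post above) that avoids sharing the mutable cf field between bean methods, by letting Spring inject the exposed ConnectionFactory bean into the component:
@Configuration
public class ApplicationConfiguration {

    @Bean
    public ConnectionFactory jmsConnectionFactory() {
        // Exposed as a bean so Spring-boot's JmsHealthIndicator sees the
        // same credentialed factory the Camel component uses
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory();
        cf.setBrokerURL("tcp://localhost:61616");
        cf.setUser("artemis");
        cf.setPassword("artemis");
        return cf;
    }

    @Bean
    public JmsComponent jms(ConnectionFactory jmsConnectionFactory) {
        // The pool wraps the injected factory; Camel talks JMS through the pool
        PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
        pooledConnectionFactory.setMaxConnections(2);
        pooledConnectionFactory.setConnectionFactory(jmsConnectionFactory);
        JmsConfiguration jmsConfiguration = new JmsConfiguration();
        jmsConfiguration.setConcurrentConsumers(2);
        jmsConfiguration.setArtemisStreamingEnabled(true);
        jmsConfiguration.setConnectionFactory(pooledConnectionFactory);
        JmsComponent jms = new JmsComponent();
        jms.setConfiguration(jmsConfiguration);
        return jms;
    }
}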
The Log4j2 JDBC appender can be set up using a pooled connection factory that is defined by a class and method (see the Log4j2 Appenders documentation):
<ConnectionFactory class="net.example.db.ConnectionFactory" method="getDatabaseConnection" />
Using Spring, I already have a defined datasource that provides pooled connections:
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${db_driver_class}" />
<property name="url" value="${db_jdbc_url}" />
<property name="username" value="${db_username}" />
<property name="password" value="${db_password}" />
<property name="initialSize" value="10" />
<property name="maxActive" value="100" />
<property name="maxIdle" value="50" />
<property name="minIdle" value="10" />
<property name="validationQuery" value="select 1" />
<property name="testOnBorrow" value="true" />
I would like to use the Spring connection pool for the JDBC appender. Any idea how this can be done?
Thanks,
Raz
I managed to create a three-step solution:
1. Define a bean in the Spring context that provides access to a data source.
2. Build an implementation of the bean that provides the desired connection.
3. Build a static wrapper that can be accessed by the Log4j JDBC appender.
1st step - bean declaration:
<bean id="springConnection" class="com.dal.entities.SpringConnection" scope="singleton">
  <property name="dataSource" ref="myDataSource" />
</bean>
The 2nd step - the bean implementation - is also simple:
class SpringConnection {
    private DataSource dataSource;

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    Connection getConnection() throws Exception {
        return dataSource.getConnection();
    }
}
The 3rd part - a wrapper with a static method - is a bit more complex:
public class SpringAccessFactory {
    private final SpringConnection springCon;
    private static ApplicationContext context;

    private interface Singleton {
        final SpringAccessFactory INSTANCE = new SpringAccessFactory();
    }

    private SpringAccessFactory() {
        this.springCon = context.getBean(SpringConnection.class);
    }

    public static Connection getConnection() throws Exception {
        return Singleton.INSTANCE.springCon.getConnection();
    }

    public static void setContext(ApplicationContext context) {
        SpringAccessFactory.context = context;
    }
}
There are - however - 2 issues I found so far:
1. You need to initialize the Spring context and pass it into the wrapper (SpringAccessFactory.setContext) before you start using the logger; see the sketch below.
2. Initializing the Spring context early in the program may trigger @PostConstruct methods (if any exist) before you plan to do so.
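To make the initialization order concrete, here is a minimal usage sketch under the assumptions above (the context file name is hypothetical, and I am assuming SpringAccessFactory sits in com.dal.entities alongside SpringConnection):
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        // 1. Boot the Spring context before anything logs through the JDBC appender
        ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml"); // hypothetical file name
        // 2. Hand the context to the static wrapper used by the appender
        SpringAccessFactory.setContext(context);
        // 3. Only now is it safe to log via the JDBC appender
        Logger log = LogManager.getLogger(Main.class);
        log.info("logging through the Spring-managed pool");
    }
}
The appender configuration would then point at the wrapper:
<ConnectionFactory class="com.dal.entities.SpringAccessFactory" method="getConnection" />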
We are using Spring Integration adapters for file FTP in our project. The problem we are facing is that the adapters are not closing the open socket connections.
As a result, other modules in the same managed server are failing with a "Too many open files" socket connection exception. Is there a way to close the unused open socket connections from the channel adapters? Or can we get the underlying JSch connections and close the sockets from the SFTP channel adapters?
We have tried the caching session factory and it did not close the open sockets; the file handles kept piling up. Thanks in advance for the input.
We have two XMLs, one with an outbound adapter and the other with an inbound adapter. These are in different XMLs as they are different jobs run using Spring Batch. We are expected to send files to a location.
We are using Spring Batch 2.2.0 and Spring Integration 2.1.6.
Here is the configuration:
We have one session factory and it is wrapped by a caching session factory:
<beans:bean id="sftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<beans:property name="host" value="hostname"/>
<beans:property name="privateKey" value="somepath"/>
<beans:property name="port" value="22"/>
</beans:bean>
<bean id="cachingSessionFactory"
class="org.springframework.integration.file.remote.session.CachingSessionFactory">
<constructor-arg ref="sftpSessionFactory"/>
<constructor-arg value="10"/>
<property name="sessionWaitTimeout" value="1000"/>
</bean>
And then we have a channel:
<int:channel id="ftpChannel" />
And then we have the following outbound channel adapter:
<int-sftp:outbound-channel-adapter id="sftpOutboundAdapter"
    session-factory="cachingSessionFactory"
    channel="inputChannel"
    charset="UTF-8"
    use-temporary-filename="false"/>
With the above configuration we are using the ftpChannel to send the files by constructing a payload like this:
message = MessageBuilder.withPayload(f).build(); // MessageBuilder is org.springframework.integration.support.MessageBuilder and f is the file
ftpChannel.send(message);
In another inbound job, the following is the configuration of the adapters.
Session factory:
<beans:bean id="sftpSessionFactory2"
    class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
  <beans:property name="host" value="hostname"/>
  <beans:property name="privateKey" value="somepath"/>
  <beans:property name="port" value="22"/>
</beans:bean>
Caching session factory:
<bean id="cachingSessionFactory2"
    class="org.springframework.integration.file.remote.session.CachingSessionFactory">
  <constructor-arg ref="sftpSessionFactory2"/>
  <constructor-arg value="10"/>
  <property name="sessionWaitTimeout" value="1000"/>
</bean>
And another channel:
<int:channel id="ftpChannel2" />
Now we have the following adapter in this XML:
<int-sftp:outbound-channel-adapter id="sftpInboundAdapter"
    session-factory="cachingSessionFactory2"
    channel="inputChannel"
    charset="UTF-8"
    use-temporary-filename="false"/>
With this configuration in the above XML, we are trying to get a session from the cachingSessionFactory configured in the first XML, getting a list of files, then sending some files with ftpChannel2.send() and doing session.close() in the finally block. When I call session.isOpen() after session.close(), I see true being returned.
With these two jobs, I see a lot of open file handles, which are socket connections, and I am absolutely clueless as to how I can close those open sockets.
The session will be closed when the operation is complete as long as you don't use the caching session factory - that is intended to keep the session open for the next use.
If you turn on DEBUG logging, you should get some insight into what is wrong.
EDIT
Just ran this with no problems:
@Test
public void test() throws Exception {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost("10.0.0.3");
    sf.setUsername("ftptest");
    sf.setPassword("ftptest");
    FtpSession session = sf.getSession();
    Thread.sleep(10000);
    session.close();
    assertFalse(session.isOpen());
    System.out.println("closed");
    Thread.sleep(10000);
}
During the first sleep netstat -ntp shows the socket open; socket is gone after the close.
The session is the socket...
public void disconnect() throws IOException
{
    closeQuietly(_socket_);
    ...
}
EDIT2
I had forgotten that with 2.1.x there was the cache-sessions attribute (2.1.x is very old).
I just tested with this (and 2.1.6) ...
<bean id="sftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="10.0.0.3" />
<property name="privateKey" value="file:/somPathTo/.ssh/id_rsa" />
<property name="port" value="22" />
<property name="user" value="ftptest" />
</bean>
<int:channel id="inputChannel" />
<int-sftp:outbound-channel-adapter id="sftpOutboundAdapter"
session-factory="sftpSessionFactory"
channel="inputChannel"
charset="UTF-8"
cache-sessions="false"
use-temporary-file-name="false"
remote-directory="." />
public class Main {
    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = new ClassPathXmlApplicationContext("context.xml");
        File f = new File("foo.txt");
        FileOutputStream fos = new FileOutputStream(f);
        fos.write("bar".getBytes());
        fos.close();
        context.getBean("inputChannel", MessageChannel.class).send(MessageBuilder.withPayload(f).build());
        System.out.println("Sleeping - check socket");
        Thread.sleep(60000); // check socket
        context.close();
        System.exit(0);
    }
}
With no problems (the socket is closed); if I set the cache-sessions to true, the socket remains open as expected.
I do notice you don't have a remote-directory attribute - that's illegal:
exactly one of 'remote-directory' or 'remote-directory-expression' is required on a remote file outbound adapter
I am trying to put a message on a queue in WebLogic JMS via a Camel route.
My aim is to eventually configure a route to consume the messages from the JMS queue to which I publish the data from the earlier route.
Here is my config:
<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
<property name="environment">
<props>
<prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
<prop key="java.naming.provider.url">t3://localhost:7001</prop>
<!-- opional ... -->
<prop key="java.naming.security.principal">weblogic</prop>
<prop key="java.naming.security.credentials">weblogic</prop>
</props>
</property>
</bean>
<!-- Gets a WebLogic JMS connection factory object from the JNDI server by jndiName -->
<bean id="webLogicJmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiTemplate" ref="jndiTemplate" />
  <property name="jndiName" value="jms/TestConnectionFactory" /> <!-- the connection factory object is stored under this name -->
</bean>

<!-- Create a new WebLogic JMS Camel component -->
<bean id="wmq" class="org.apache.camel.component.jms.JmsComponent">
  <property name="connectionFactory" ref="webLogicJmsConnectionFactory"/>
</bean>
My Route looks like this:
from("cxfrs:bean:rsServer")
.setBody().body(TestRequest.class)
.process(new Processor(){
#Override
public void process(Exchange exchange) throws Exception {
TestRequest request = exchange.getIn().getBody(TestRequest.class);
TestResponse response = new TestResponse();
response.setAddress(request.getAddress());
response.setName(request.getName());
}
}).to("wmq:queue:TestJMSQueue");
I am getting this exception when I try to execute this Route:
May 27, 2013 6:37:47 PM org.apache.cxf.jaxrs.impl.WebApplicationExceptionMapper toResponse
WARNING: javax.ws.rs.WebApplicationException: org.springframework.jms.UncategorizedJmsException: Uncategorized exception occured during JMS processing; nested exception is weblogic.jms.common.JMSException: [JMSExceptions:045101]The destination name passed to createTopic or createQueue "TestJMSModule!TestJMSQueue" is invalid. If the destination name does not contain a "/" character then it must be the name of a distributed destination that is available in the cluster to which the client is attached. If it does contain a "/" character then the string before the "/" must be the name of a JMSServer or a ".". The string after the "/" is the name of a the desired destination. If the "./" version of the string is used then any destination with the given name on the local WLS server will be returned.
at org.apache.camel.component.cxf.jaxrs.CxfRsInvoker.returnResponse(CxfRsInvoker.java:149)
at org.apache.camel.component.cxf.jaxrs.CxfRsInvoker.asyncInvoke(CxfRsInvoker.java:104)
at org.apache.camel.component.cxf.jaxrs.CxfRsInvoker.performInvocation(CxfRsInvoker.java:57)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:167)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:94)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor
...
Caused by: weblogic.jms.common.JMSException: [JMSExceptions:045101]The destination name passed to createTopic or createQueue "TestJMSModule!TestJMSQueue" is invalid. If the destination name does not contain a "/" character then it must be the name of a distributed destination that is available in the cluster to which the client is attached. If it does contain a "/" character then the string before the "/" must be the name of a JMSServer or a ".". The string after the "/" is the name of a the desired destination. If the "./" version of the string is used then any destination with the given name on the local WLS server will be returned.
at weblogic.jms.frontend.FEManager.destinationCreate(FEManager.java:202)
at weblogic.jms.frontend.FEManager.invoke(FEManager.java:544)
at weblogic.messaging.dispatcher.Request.wrappedFiniteStateMachine(Request.java:961)
at weblogic.messaging.dispatcher.DispatcherImpl.syncRequest(DispatcherImpl.java:184)
at weblogic.messaging.dispatcher.DispatcherImpl.dispatchSyncNoTran(DispatcherImpl.java:287)
at weblogic.jms.dispatcher.DispatcherAdapter.dispatchSyncNoTran(DispatcherAdapter.java:59)
at weblogic.jms.client.JMSSession.createDestination(JMSSession.java:3118)
at weblogic.jms.client.JMSSession.createQueue(JMSSession.java:2514)
I followed the procedure to create a Queue mentioned here: https://blogs.oracle.com/soaproactive/entry/how_to_create_a_simple
I am creating a JMS Module (TestJMSModule), and in that I am creating a Queue (TestJMSQueue) and a connection factory.
I am new to JMS and I know I am doing something wrong with the configuration, either on the Camel side or the WebLogic side, but I am not able to figure out what. Any help would be greatly appreciated.
Thanks in advance.
You need to create a JMS Server. Then you need to create a subdeployment in your JMS Module and then target the subdeployment to the JMS Server.
Then the syntax needs to be
JMSServer/JMSModule!Queue
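Applied to the names in this question, and assuming the JMS Server you create is called TestJMSServer, the Camel endpoint would then look something like:
// assumed names: TestJMSServer (JMS Server), TestJMSModule (module), TestJMSQueue (queue)
.to("wmq:queue:TestJMSServer/TestJMSModule!TestJMSQueue");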
Unfortunately I'm not an expert in WebLogic configuration.
Client side config looks correct.
The exception says the object name is not right.
In the example you mentioned, the JNDI name of the queue is "jms/TestJMSQueue", not just "TestJMSQueue".
To me it means you should be using .to("wmq:queue:jms/TestJMSQueue"); instead.
I was able to integrate Spring (4.1.6) + Apache Camel (2.15.2), consuming messages from a JMS queue hosted on Oracle WebLogic (11g).
applicationContext.xml
<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
<property name="environment">
<props>
<prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
<prop key="java.naming.provider.url">t3://localhost:7001</prop>
<prop key="java.naming.security.principal">weblogic</prop>
<prop key="java.naming.security.credentials">welcome1</prop>
</props>
</property>
</bean>
<bean id="webLogicJmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<!-- Connection factory JNDI name -->
<property name="jndiName" value="jms/TestConnectionFactory" />
</bean>
<bean id="weblogicJmsComponent" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="webLogicJmsConnectionFactory" />
</bean>
<camel:camelContext id="camel" xmlns:camel="http://camel.apache.org/schema/spring">
  <!-- Route to copy files -->
  <camel:route startupOrder="1">
    <camel:from uri="file:data/inbox?noop=true" />
    <camel:process ref="loggingProcessor" />
    <camel:to uri="file:data/outbox" />
  </camel:route>
  <!-- Route to read from JMS and process messages in jmsReaderProcessor -->
  <camel:route startupOrder="2">
    <camel:from uri="weblogicJmsComponent:queue:TestJMSServer/TestJMSModule!TestJMSQueue" />
    <camel:process ref="jmsReaderProcessor" />
  </camel:route>
</camel:camelContext>
loggingProcessor and jmsReaderProcessor are two Camel Processors that just log the in/out messages from the Exchange object.
public void process(Exchange exchange) throws Exception {
    LOG.info("begin process()");
    LOG.info("process() -- Got exchange: {}", exchange);
    Message messageIn = exchange.getIn();
    LOG.info("process() -- Got messageIn: {}", messageIn);
    LOG.info("process() -- Got messageIn.getBody(): {}", messageIn.getBody());
    Message messageOut = exchange.getOut();
    LOG.info("process() -- Got messageOut: {}", messageOut);
    LOG.info("end process()");
}
Kind Regards,
Cristian Manoliu
I have the following test...
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/schedule-agents-config-context.xml"})
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
@Transactional
public class H2TransactionNotWorkingTest extends SubmitAgentIntegratedTestBase {
    private static final int FIVE_SUBMISSIONS = 5;

    @Autowired
    private ApplicationSubmissionInfoDao submissionDao;

    private FakeApplicationSubmissionInfoRepository fakeRepo;

    @Before
    public void setUp() {
        fakeRepo = fakeRepoThatNeverFails(submissionDao, null);
        submitApplication(FIVE_SUBMISSIONS, fakeRepo);
    }

    @Test
    @Rollback(true)
    public void shouldSaveSubmissionInfoWhenFailureInDatabase() {
        assertThat(fakeRepo.retrieveAll(), hasSize(FIVE_SUBMISSIONS));
    }

    @Test
    @Rollback(true)
    public void shouldSaveSubmissionInfoWhenFailureInXmlService() {
        assertThat(fakeRepo.retrieveAll().size(), equalTo(FIVE_SUBMISSIONS));
    }
}
...and the following config...
<jdbc:embedded-database id="dataSource" type="H2">
  <jdbc:script location="classpath:/db/h2-schema.sql" />
</jdbc:embedded-database>

<tx:annotation-driven transaction-manager="transactionManager"/>

<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
  <property name="dataSource" ref="dataSource"/>
</bean>

<bean id="transactionalSessionFactory"
      class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
  <property name="dataSource" ref="dataSource"/>
  <property name="hibernateProperties">
    <props>
      <prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop>
      <prop key="hibernate.show_sql">false</prop>
      <prop key="hibernate.cache.use_second_level_cache">false</prop>
      <prop key="hibernate.cache.use_query_cache">false</prop>
    </props>
  </property>
  <property name="namingStrategy">
    <bean class="org.hibernate.cfg.ImprovedNamingStrategy"/>
  </property>
  <property name="configurationClass" value="org.hibernate.cfg.AnnotationConfiguration"/>
  <property name="packagesToScan" value="au.com.mycomp.life.snapp"/>
</bean>

<bean id="regionDependentProperties" class="org.springframework.core.io.ClassPathResource">
  <constructor-arg value="region-dependent-service-test.properties"/>
</bean>
I have also set autocommit to false in the SQL script:
SET AUTOCOMMIT FALSE;
There is no REQUIRES_NEW in the code.
Why is the rollback not working in the test?
Cheers
Prabin
I faced the same problem but finally solved it, albeit without Hibernate (it shouldn't really matter).
The key item making it work was to extend the proper Spring unit test class, i.e. AbstractTransactionalJUnit4SpringContextTests. Note the "Transactional" in the class name. Thus the skeleton of a working transactional unit test class looks like:
@ContextConfiguration(locations = {"classpath:/com/.../testContext.xml"})
public class Test extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    @Transactional
    public void test() {
    }
}
The associated XML context file contains the following items:
<jdbc:embedded-database id="dataSource" type="H2" />

<tx:annotation-driven transaction-manager="transactionManager"/>

<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
  <property name="dataSource" ref="dataSource"/>
</bean>
Using this setup, the modifications made by each test method are properly rolled back.
Regards, Ola
I'm experiencing similar problems; I'm also using TestNG + Spring test support and Hibernate. What happens is that Hibernate disables autocommit on the connection before the transaction begins, and it remembers the original autocommit setting:
org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction:
@Override
protected void doBegin() {
    try {
        if ( managedConnection != null ) {
            throw new TransactionException( "Already have an associated managed connection" );
        }
        managedConnection = transactionCoordinator().getJdbcCoordinator().getLogicalConnection().getConnection();
        wasInitiallyAutoCommit = managedConnection.getAutoCommit();
        LOG.debugv( "initial autocommit status: {0}", wasInitiallyAutoCommit );
        if ( wasInitiallyAutoCommit ) {
            LOG.debug( "disabling autocommit" );
            managedConnection.setAutoCommit( false );
        }
    }
    catch( SQLException e ) {
        throw new TransactionException( "JDBC begin transaction failed: ", e );
    }
    isDriver = transactionCoordinator().takeOwnership();
}
Later on, after rolling back the transaction, it will release the connection. In doing so, Hibernate will also restore the original autocommit setting on the connection (so that others who might be handed the same connection start with the original setting). However, setting autocommit during a transaction triggers a commit; see the JavaDoc.
In the code below you can see this happening. The rollback is issued and finally the connection is released in releaseManagedConnection. Here the autocommit setting will be restored, which triggers a commit:
org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction:
@Override
protected void doRollback() throws TransactionException {
    try {
        managedConnection.rollback();
        LOG.debug( "rolled JDBC Connection" );
    }
    catch( SQLException e ) {
        throw new TransactionException( "unable to rollback against JDBC connection", e );
    }
    finally {
        releaseManagedConnection();
    }
}

private void releaseManagedConnection() {
    try {
        if ( wasInitiallyAutoCommit ) {
            LOG.debug( "re-enabling autocommit" );
            managedConnection.setAutoCommit( true );
        }
        managedConnection = null;
    }
    catch ( Exception e ) {
        LOG.debug( "Could not toggle autocommit", e );
    }
}
Normally this should not be a problem, because AFAIK the transaction should have ended after the rollback. Moreover, a commit issued after a rollback should not commit any changes if there were none between the rollback and the commit; from the JavaDoc on commit:
Makes all changes made since the previous commit/rollback permanent
and releases any database locks currently held by this Connection
object. This method should be used only when auto-commit mode has been
disabled.
In this case there were no changes between rollback and commit, since the commit (triggered indirectly by re-setting autocommit) happens only a few statements later.
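To spell out the sequence, here is a condensed sketch of the JDBC calls made around such a rolled-back test transaction (the table name is hypothetical; dataSource stands for the embedded H2 datasource):
import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class AutoCommitTimeline {
    static void run(DataSource dataSource) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);                         // doBegin(): autocommit disabled
            try (Statement st = con.createStatement()) {
                st.executeUpdate("insert into t values (1)"); // hypothetical test data
            }
            con.rollback();                                   // doRollback(): insert undone
            con.setAutoCommit(true);                          // releaseManagedConnection():
                                                              // implicitly commits the now-empty transaction
        }
    }
}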
A workaround seems to be to disable autocommit up front. This avoids restoring autocommit (since it was not enabled in the first place) and thus prevents the commit from happening. You can do this by manipulating the id of the embedded datasource bean; the id is used not only to identify the datasource but also as the database name:
<jdbc:embedded-database id="dataSource;AUTOCOMMIT=OFF" type="H2"/>
This will create a database with the name "dataSource". The extra parameter will be interpreted by H2. Spring will also create a bean with the name "dataSource;AUTOCOMMIT=OFF". If you depend on the bean names for injection, you can create an alias to make it cleaner:
<alias name="dataSource;AUTOCOMMIT=OFF" alias="dataSource"/>
(There isn't a cleaner way to manipulate the embedded-database namespace config; I wish the Spring team had made this a bit more configurable.)
Note: disabling the autocommit via the script (<jdbc:script location="...") might not work, since there is no guarantee that the same connection will be re-used for your test.
Note: this is not a real fix but merely a workaround. There is still something wrong that causes the data to be committed after a rollback occurred.
----EDIT----
After searching I found out the real problem. If you are using HibernateTransactionManager (as I was doing) and you use your database via the SessionFactory (Hibernate) and directly via the DataSource (plain JDBC), you should pass both the SessionFactory and the DataSource to the HibernateTransactionManager. From the Javadoc:
Note: To be able to register a DataSource's Connection for plain JDBC code, this instance needs to be aware of the DataSource (setDataSource(javax.sql.DataSource)). The given DataSource should obviously match the one used by the given SessionFactory.
So eventually I did this:
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
<property name="dataSource" ref="dataSource" />
</bean>
And everything worked for me.
Note: the same goes for JpaTransactionManager! If you both use the EntityManager and perform raw JDBC access using the DataSource, you should supply the DataSource separately next to the EMF. Also, don't forget to use DataSourceUtils to obtain a connection (or JdbcTemplate, which uses DataSourceUtils internally to obtain the connection).
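For illustration, a minimal sketch of such transaction-aware plain JDBC access via DataSourceUtils (the class name and statement are hypothetical):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class RawJdbcDao {
    private final DataSource dataSource;

    public RawJdbcDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void touch() throws SQLException {
        // Joins the Spring-managed transaction instead of grabbing a fresh connection
        Connection con = DataSourceUtils.getConnection(dataSource);
        try (Statement st = con.createStatement()) {
            st.execute("select 1"); // hypothetical statement
        } finally {
            // Only really closes the connection if it is not transactional
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}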
----EDIT----
Aight, while the above did solve my problem, it is not the real cause after all :)
In normal cases when using Spring's LocalSessionFactoryBean, setting the datasource will have no effect since it's done for you.
If the SessionFactory was configured with LocalDataSourceConnectionProvider, i.e. by Spring's LocalSessionFactoryBean with a specified "dataSource", the DataSource will be auto-detected: You can still explicitly specify the DataSource, but you don't need to in this case.
In my case the problem was that we created a caching factory bean that extended LocalSessionFactoryBean. We only use this during testing to avoid booting the SessionFactory multiple times. As told before, Spring test support does boot multiple application contexts if the resource key is different. This caching mechanism mitigates the overhead completely and ensures only 1 SF is loaded.
This means that the same SessionFactory is returned for different booted application contexts. Also, the datasource passed to the SF will be the datasource from the first context that booted the SF. This is all fine, but the DataSource itself is a new "object" for each new application context. This creates a discrepancy:
The transaction is started by the HibernateTransactionManager. The datasource used for transaction synchronization is obtained from the SessionFactory (so again: the cached SessionFactory with the DataSource instance from the application context the SessionFactory was initially loaded from). When using the DataSource directly in your test (or production code), you'll be using the instance belonging to the app context active at that point. This instance does not match the instance used for the transaction synchronization (extracted from the SF). This results in problems, as the connection obtained will not properly participate in the transaction.
Explicitly setting the datasource on the transaction manager appeared to solve this, since post-initialization will not obtain the datasource from the SF but use the injected one instead. The appropriate fix for me was to adjust the caching mechanism and replace the datasource in the cached SF with the one from the current app context each time the SF was returned from the cache.
Conclusion: you can ignore my post :) As long as you're using HibernateTransactionManager or JtaTransactionManager in combination with some kind of Spring support factory bean for the SF or EM, you should be fine, even when mixing vanilla JDBC with Hibernate. In the latter case, don't forget to obtain connections via DataSourceUtils (or use JdbcTemplate).
Try this:
remove the org.springframework.jdbc.datasource.DataSourceTransactionManager
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
and replace it with the org.springframework.orm.jpa.JpaTransactionManager
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
or you inject an EntityManagerFactory instead ...
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
You then need an EntityManagerFactory, like the following:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="jpaVendorAdapter">
<bean id="jpaAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true" />
<property name="generateDdl" value="true" />
</bean>
</property>
</bean>
You haven't shown all the pieces to the puzzle. My guess at this point would be that your ApplicationSubmissionInfoDao is transactional and is committing on its own, though I'd think that would conflict with the test transactions if everything were configured properly. To get more of an answer, ask a more complete question. The best thing would be to post an SSCCE.
Thanks Ryan.
The test code is something like this:
@Test
@Rollback(true)
public void shouldHave5ApplicationSubmissionInfo() {
    for (int i = 0; i < 5; i++) {
        hibernateTemplate.saveOrUpdate(new ApplicationSubmissionInfoBuilder()
                .with(NOT_PROCESSED)
                .build());
    }
    assertThat(repo.retrieveAll(), hasSize(5));
}

@Test
@Rollback(true)
public void shouldHave5ApplicationSubmissionInfoAgainButHas10() {
    for (int i = 0; i < 5; i++) {
        hibernateTemplate.saveOrUpdate(new ApplicationSubmissionInfoBuilder()
                .with(NOT_PROCESSED)
                .build());
    }
    assertThat(repo.retrieveAll(), hasSize(5));
}
I figured out that an embedded DB defined using jdbc:embedded-database doesn't have transaction support. When I used Commons DBCP to define the datasource and set default auto-commit to false, it worked.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" scope="singleton">
<property name="driverClassName" value="org.h2.Driver" />
<property name="url" value="jdbc:h2:~/snappDb"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
<property name="defaultAutoCommit" value="false" />
<property name="connectionInitSqls" value=""/>
</bean>
None of the above worked for me!
However, the stack I am using is [spring-test 4.2.6.RELEASE, spring-core 4.2.6.RELEASE, spring-data 1.10.1.RELEASE].
The thing is, any unit test class run with [SpringJUnit4ClassRunner.class] will roll back automatically by Spring library design;
check org.springframework.test.context.transaction.TransactionalTestExecutionListener >> isDefaultRollback.
To overcome this behavior, just annotate the unit test class with:
@Rollback(value = false)