Make OracleDataSource robust against database restarts and hiccups - Spring

So I got an Oracle Advanced Queue working with a ConnectionFactory:
ConnectionFactory jmsQueueConnectionFactory() throws JMSException, SQLException {
    final OracleDataSource dataSource = new OracleDataSource();
    dataSource.setUser(username);
    dataSource.setPassword(password);
    dataSource.setURL(url);
    dataSource.setImplicitCachingEnabled(true);
    dataSource.setFastConnectionFailoverEnabled(true);
    return AQjmsFactory.getConnectionFactory(dataSource);
}
This runs against a shared database which might be restarted, and sometimes the network just has a short hiccup, after which no more messages arrive from the queue.
I use a Spring MessageListener to retrieve messages, and there is no indicator whatsoever that the listener is no longer connected to the queue. After restarting the application I then get a load of older messages that should have been processed already.
Is there a way, or a specific data source implementation, that reconnects automatically?
Update: Listener Impl
@Bean
OracleAqQueueFactoryBean etlQueueFactory() throws JMSException, SQLException {
    final OracleAqQueueFactoryBean bean = new OracleAqQueueFactoryBean();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setOracleQueueUser("USER");
    bean.setOracleQueueName("QUEUE");
    return bean;
}

@Bean
DefaultMessageListenerContainer jmsContainer() throws JMSException, SQLException {
    final DefaultMessageListenerContainer bean = new DefaultMessageListenerContainer();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setDestination(etlQueueFactory().getObject());
    bean.setMessageListener(new MyListener());
    bean.setSessionTransacted(false);
    return bean;
}
public class MyListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        ...
    }
}

I guess you have to handle this at the JMS level, not the DB level.
Not sure what type of listener you are using, but Spring's DefaultMessageListenerContainer is implemented with a consumer.receive(timeout) loop. It's more robust than a plain listener because it will attempt to reconnect on each poll cycle (if needed).
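As a hedged sketch (reusing the bean names from the question's update; the 5-second interval is an arbitrary choice), the container could be given an explicit recovery interval, plus a transacted session so in-flight messages are redelivered rather than lost when the connection drops:

@Bean
DefaultMessageListenerContainer jmsContainer() throws JMSException, SQLException {
    final DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(jmsQueueConnectionFactory());
    container.setDestination(etlQueueFactory().getObject());
    container.setMessageListener(new MyListener());
    // Redeliver messages whose processing fails instead of dropping them.
    container.setSessionTransacted(true);
    // Wait 5 seconds between reconnection attempts after a connection failure
    // (5000 ms is also the default; shown here to make the behavior explicit).
    container.setRecoveryInterval(5000L);
    return container;
}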

Related

How can I test that I have configured ChainedKafkaTransactionManager correctly in my Spring Boot service

My Spring Boot service needs to consume Kafka events off one topic, do some processing (including writing to the db with JPA) and then produce some events on a new topic. No matter what happens, I cannot have a situation where I have published events without updating the database, and if anything goes wrong I want the next poll of the consumer to retry the event. My processing logic, including the db update, is idempotent, so retrying it is fine.
I think I have achieved exactly-once semantics as described at https://docs.spring.io/spring-kafka/reference/html/#exactly-once by using a ChainedKafkaTransactionManager like so:
@Bean
public ChainedKafkaTransactionManager chainedTransactionManager(JpaTransactionManager jpa, KafkaTransactionManager<?, ?> kafka) {
    kafka.setTransactionSynchronization(SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    return new ChainedKafkaTransactionManager(kafka, jpa);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        ChainedKafkaTransactionManager chainedTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.getContainerProperties().setTransactionManager(chainedTransactionManager);
    return factory;
}
The relevant Kafka config in my application.yaml file looks like:
kafka:
  ...
  consumer:
    group-id: myGroupId
    auto-offset-reset: earliest
    properties:
      isolation.level: read_committed
  ...
  producer:
    transaction-id-prefix: ${random.uuid}
  ...
Because the commit order is critical to my application, I would like to write an integration test to prove that the commits happen in the desired order and that, if an error occurs during the commit to Kafka, the original event is consumed again. However, I am struggling to find a good way of causing a failure between the db commit and the Kafka commit.
Any suggestions or alternative ways I could do this?
Thanks
You could use a custom ProducerFactory to return a MockProducer (provided by kafka-clients).
Set the commitTransactionException so that it is thrown when the KTM tries to commit the transaction.
EDIT
Here is an example; it doesn't use the chained TM, but that shouldn't make a difference.
@SpringBootApplication
public class So66018178Application {

    public static void main(String[] args) {
        SpringApplication.run(So66018178Application.class, args);
    }

    @KafkaListener(id = "so66018178", topics = "so66018178")
    public void listen(String in) {
        System.out.println(in);
    }
}
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.consumer.auto-offset-reset=earliest
@SpringBootTest(classes = { So66018178Application.class, So66018178ApplicationTests.Config.class })
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class So66018178ApplicationTests {

    @Autowired
    EmbeddedKafkaBroker broker;

    @Test
    void kafkaCommitFails(@Autowired KafkaListenerEndpointRegistry registry, @Autowired Config config)
            throws InterruptedException {
        registry.getListenerContainer("so66018178").stop();
        AtomicReference<Exception> listenerException = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        ((ConcurrentMessageListenerContainer<String, String>) registry.getListenerContainer("so66018178"))
                .setAfterRollbackProcessor(new AfterRollbackProcessor<>() {

            @Override
            public void process(List<ConsumerRecord<String, String>> records, Consumer<String, String> consumer,
                    Exception exception, boolean recoverable) {
                listenerException.set(exception);
                latch.countDown();
            }

        });
        registry.getListenerContainer("so66018178").start();
        Map<String, Object> props = KafkaTestUtils.producerProps(this.broker);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);
        template.send("so66018178", "test");
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(listenerException.get()).isInstanceOf(ListenerExecutionFailedException.class)
                .hasCause(config.exception);
    }

    @Configuration
    public static class Config {

        RuntimeException exception = new RuntimeException("test");

        @Bean
        public ProducerFactory<Object, Object> pf() {
            return new ProducerFactory<>() {

                @Override
                public Producer<Object, Object> createProducer() {
                    MockProducer<Object, Object> mockProducer = new MockProducer<>();
                    mockProducer.commitTransactionException = Config.this.exception;
                    return mockProducer;
                }

                @Override
                public Producer<Object, Object> createProducer(String txIdPrefix) {
                    Producer<Object, Object> producer = createProducer();
                    producer.initTransactions();
                    return producer;
                }

                @Override
                public boolean transactionCapable() {
                    return true;
                }

            };
        }

    }

}
Do not use ChainedKafkaTransactionManager anymore; it is deprecated.
According to the docs (https://docs.spring.io/spring-kafka/reference/html/#container-transaction-manager):
"The ChainedKafkaTransactionManager is now deprecated, since version 2.7; see the javadocs for its super class ChainedTransactionManager for more information. Instead, use a KafkaTransactionManager in the container to start the Kafka transaction and annotate the listener method with @Transactional to start the other transaction."
In my tests, where I tried to simulate an exception in the producer after the DB transaction committed, I simply left a mandatory field empty in the Kafka event (it used an Avro schema), and in a second test I deleted the target topic with the help of the Kafka Admin. Then I wrote some asserts to verify that the Kafka listener was called again on retry.

Only send JMS message once the JPA transaction commits

I'm working on a project that makes use of Spring's JmsTemplate, ActiveMQ and Hibernate. I have a method wrapped in a transaction which sends a message through the JmsTemplate, does a bit more work and then returns so that the transaction can commit. I want the message to only be sent once the transaction commits, i.e. the JmsListener should only trigger once the aforementioned method returns.
Take the following example sender and receiver:
@Service
@Transactional
public class TestService {

    @Autowired
    private JmsTemplate jmsTemplate;

    public void test() throws InterruptedException {
        jmsTemplate.convertAndSend("test_queue", "Test");
        Thread.sleep(1000L);
        System.out.println("This should run first");
    }
}
@Service
@Transactional
public class Listener {

    @JmsListener(destination = "test_queue", containerFactory = "jmsListenerContainerFactory")
    public void onMessage() {
        System.out.println("This should run last.");
    }
}
I want the text "This should run first" to print before "This should run last", but because of the Thread.sleep it never does! I tried a number of changes to the configuration on my jmsListenerContainerFactory, but none make any difference.
Not sure if XA is involved in this case. Is the actual send of the message part of a separate transaction? If so the issue is probably that the two transactions aren't synchronizing, but I don't know how to solve that.
I had to set sessionTransacted on the JmsTemplate instead of on the JmsListenerContainerFactory. With a transacted session, Spring synchronizes the local JMS transaction with the surrounding transaction, so the send is only committed once the outer transaction commits:
@Bean
public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
    jmsTemplate.setSessionTransacted(true);
    return jmsTemplate;
}

Closing Sessions in Spring Boot JMS CachingConnectionFactory

I have my JMS configuration like below (Spring Boot 1.3.8):
@Configuration
@EnableJms
public class JmsConfig {

    @Autowired
    private AppProperties properties;

    @Bean
    TopicConnectionFactory topicConnectionFactory() throws JMSException {
        return new TopicConnectionFactory(properties.getBrokerURL(), properties.getBrokerUserName(),
                properties.getBrokerPassword());
    }

    @Bean
    CachingConnectionFactory connectionFactory() throws JMSException {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(topicConnectionFactory());
        connectionFactory.setSessionCacheSize(50);
        return connectionFactory;
    }

    @Bean
    JmsTemplate jmsTemplate() throws JMSException {
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory());
        jmsTemplate.setPubSubDomain(Boolean.TRUE);
        return jmsTemplate;
    }

    @Bean
    DefaultJmsListenerContainerFactory defaultContainerFactory() throws JMSException {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        factory.setPubSubDomain(Boolean.TRUE);
        factory.setRecoveryInterval(30 * 1000L);
        return factory;
    }
}
This should work fine, but I am worried about what's written in the docs of CachingConnectionFactory.
Specifically, these parts:
NOTE: This ConnectionFactory requires explicit closing of all Sessions obtained from its shared Connection
Note also that MessageConsumers obtained from a cached Session won't get closed until the Session will eventually be removed from the pool. This may lead to semantic side effects in some cases.
I thought the framework handled the session- and connection-closing part? If it does not, how should I close them properly?
Or maybe I am missing something?
Any help is appreciated :)
FYI: I use SonicMQ as the broker.
Yes, the JmsTemplate will close the session; the javadocs refer to direct use outside of the framework.
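In other words, closing is only your job if you take sessions from the CachingConnectionFactory yourself, outside JmsTemplate or a listener container, and even then close() does not physically close anything. A minimal sketch of that direct-use case:

Connection con = connectionFactory().createConnection();
Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
try {
    // ... use the session directly ...
} finally {
    session.close(); // returns the session to the cache, not a physical close
    con.close();     // effectively a no-op on the shared cached connection
}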

Atomikos transaction management with Spring Boot / Spring JMS

I have a Spring Boot application with Spring JMS using DefaultMessageListenerContainer. I am using Atomikos for transaction management.
On exception, the message queue rollback works fine and messages do move to the backout queue, but the database updates do not roll back. I have set the auto-configured JtaTransactionManager on the DefaultMessageListenerContainer bean. Are there any other configurations required here to get true global transaction management? I am using MyBatis for database access.
public class CusListener implements MessageListener {
    public void onMessage(Message message) {
        try {
            // Database call
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
@Configuration
public class ListenerContainer {

    @Bean
    public DefaultMessageListenerContainer defaultMessageListenerContainer(ConnectionFactory queueConnectionFactory, MQQueue queue, MessageListener listener,
            JtaTransactionManager jtaTransactionManager) {
        DefaultMessageListenerContainer defaultMessageListenerContainer =
                new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setConnectionFactory(queueConnectionFactory);
        defaultMessageListenerContainer.setDestination(queue);
        defaultMessageListenerContainer.setMessageListener(listener);
        defaultMessageListenerContainer.setTransactionManager(jtaTransactionManager);
        defaultMessageListenerContainer.setSessionTransacted(true);
        defaultMessageListenerContainer.setConcurrency("3-10");
        return defaultMessageListenerContainer;
    }

    // other bean declarations passed into the method above
}
@Configuration
public class PlanListenerSqlSessFac {

    @Bean(name = "sqlSessionFactory")
    public SqlSessionFactory sqlSessionFactory(@Qualifier("dataSource") NMCryptoDataSourceWrapper dataSource) throws Exception {
        // ... (body elided in the question)
    }

    @Bean(name = "driverManagerDataSource")
    public DriverManagerDataSource driverManagerDataSource() {
        DriverManagerDataSource driverManagerDataSource = new DriverManagerDataSource();
        return driverManagerDataSource;
    }
}
You should use an AtomikosDataSourceBean as the dataSource; a plain DriverManagerDataSource is not XA-aware, so the MyBatis work never enlists in the JTA transaction.
See the documentation: https://www.atomikos.com/bin/view/Documentation/ConfiguringJdbc
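A hedged sketch of such a bean, per the linked docs (the XADataSource class name, URL, credentials and pool size are illustrative placeholders):

@Bean(initMethod = "init", destroyMethod = "close")
public AtomikosDataSourceBean dataSource() {
    AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
    ds.setUniqueResourceName("xaOracle"); // must be unique per resource
    ds.setXaDataSourceClassName("oracle.jdbc.xa.client.OracleXADataSource");
    Properties xaProps = new Properties();
    xaProps.setProperty("URL", "jdbc:oracle:thin:@//host:1521/service"); // placeholder
    xaProps.setProperty("user", "user");
    xaProps.setProperty("password", "password");
    ds.setXaProperties(xaProps);
    ds.setMaxPoolSize(10);
    return ds;
}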

How to pause @JmsListener in my Spring Boot application?

Here is my HornetQ configuration in Spring Boot:
spring.hornetq.mode=embedded
spring.hornetq.embedded.enabled=true
spring.hornetq.embedded.persistent=true
spring.hornetq.port=5445
spring.hornetq.embedded.queues=jms.testqueue
Here is my Producer
public class Producer {

    @Inject
    private JmsTemplate jmsTemplate;

    public void resolveError(String message) {
        try {
            jmsTemplate.convertAndSend(DATA_QUEUE, message);
        } catch (Exception e) {
            // log error
        }
    }
}
Here is my Consumer
@JmsListener(destination = DATA_QUEUE)
public void consume(String message) throws InterruptedException {
    log.info("Receiving event: {}", message);
    try {
        // do stuff with message
    } catch (Exception e) {
        log.error(e.toString());
    }
}
Here is my config file
@Configuration
@EnableJms
public class JmsConfig {
    public static final String LOGGING_SCRAPPER_KEY = "DATA_SYNC_ERROR";
    public static final String DATA_QUEUE = "jms.testqueue";
}
I want to slow down the consuming process of the @JmsListener; I don't want the JMS listener to hit the queue all the time. Any help is appreciated, thanks!
The listeners that are created under the covers for each @JmsListener-annotated method are held in a registry, as explained in the documentation.
If you want to pause your listener, it is very easy to look it up and stop it. Let's assume you have a way to invoke the following bean (JMX endpoint, secure REST mapping, whatever):
static class YourService {

    private final JmsListenerEndpointRegistry registry;

    @Autowired
    public YourService(JmsListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void stopListener() {
        this.registry.getListenerContainer("myListener").stop();
    }

    public void startListener() {
        this.registry.getListenerContainer("myListener").start();
    }
}
Then you need to associate the proper id with your listener (myListener in the example above):
@JmsListener(id = "myListener", destination = DATA_QUEUE)
public void consume(String message) throws InterruptedException { ... }
I was not able to slow down the consuming side of the JmsListener, but I found an alternative: set a delivery delay on the JmsTemplate instead, using setDeliveryDelay, which delays the message becoming visible on the queue. Either way the processing is delayed; the difference is that if you delay the consuming process the message sits in the queue, while with this approach it is not delivered until the delivery delay has elapsed.
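A minimal sketch of that alternative (requires a JMS 2.0 provider; the 5-second delay is an arbitrary example value):

@Bean
public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
    // Messages only become visible to consumers 5 seconds after send.
    jmsTemplate.setDeliveryDelay(5000L);
    return jmsTemplate;
}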
