Trying to redesign JMS configuration as Spring Integration: Redelivery policies

My legacy configuration exposes a ConnectionFactory @Bean of type ActiveMQConnectionFactory, with custom redelivery configured through activeMQConnectionFactory.setRedeliveryPolicy(..).
I found out that the Spring Integration DSL also allows retries on the handle operation by means of RequestHandlerRetryAdvice, which can be configured, for instance, with an ExponentialBackOffPolicy.
I am wondering whether they trigger the same code at the lower level (I'm not sure if it's a client-side thing or a signal to the broker), and if not, whether they are equivalent and whether I can safely switch to the abstract version without losing any configurability.

No; it's completely different and has nothing to do with JMS redelivery of incoming messages.
The retry advice is generally used to retry outgoing requests, e.g. an HTTP request, or a send to JMS.
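To illustrate the difference, here is a rough sketch of both mechanisms side by side (the broker URL, delays and attempt counts are just assumptions): the RedeliveryPolicy is applied by the ActiveMQ client when a consumed message is rolled back, while the RequestHandlerRetryAdvice retries the outbound call made by a Spring Integration handler.

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class RetryVsRedeliverySketch {

    // Redelivery of incoming messages: an ActiveMQ client feature
    public ActiveMQConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setMaximumRedeliveries(5);
        policy.setInitialRedeliveryDelay(1000);
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);
        cf.setRedeliveryPolicy(policy);
        return cf;
    }

    // Retry of an outbound handler call: a Spring Integration / Spring Retry feature
    public RequestHandlerRetryAdvice retryAdvice() {
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1000);
        backOff.setMultiplier(2.0);
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setBackOffPolicy(backOff);
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        advice.setRetryTemplate(retryTemplate);
        return advice;
    }
}

In the Java DSL the advice would typically be attached to a handler via the endpoint configurer, e.g. .handle(..., e -> e.advice(retryAdvice())).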

Related

When an Exception is thrown in a provider-side ActiveMQ BrokerFilter send method, how can the JMS sender subscribe as a listener to that exception?

We implemented a filter, or plugin, in the ActiveMQ broker that intercepts inbound messages and validates them from a security standpoint.
We need a programmer-friendly way of receiving these exceptions on the producer side (ideally not at connection level, but at session or producer level, since they may need a session-specific reaction).
We are doing message-level authorization on the broker side in the following way: in the ActiveMQ provider (server) we implement a BrokerFilter (plugin) in order to intercept the incoming JMS message and validate a JWT access token attached to the message as a property. If the JWT token is valid, the message is let through to the downstream chain; if it is not valid, a SecurityException is thrown.
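For context, the plugin is roughly of the following shape (the property name "jwt" and the isValid(..) helper are placeholders, not the actual implementation):

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ProducerBrokerExchange;
import org.apache.activemq.command.Message;

public class JwtValidatingBrokerFilter extends BrokerFilter {

    public JwtValidatingBrokerFilter(Broker next) {
        super(next);
    }

    @Override
    public void send(ProducerBrokerExchange producerExchange, Message messageSend) throws Exception {
        Object token = messageSend.getProperty("jwt"); // property name is a placeholder
        if (!isValid(token)) {
            // this is the exception that eventually surfaces on the sending JVM
            throw new SecurityException("Message rejected: invalid JWT");
        }
        super.send(producerExchange, messageSend); // valid messages continue down the broker chain
    }

    private boolean isValid(Object token) {
        return token != null; // placeholder for real JWT validation
    }
}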
We notice that the error does reach the sending JVM, which reports that no ExceptionListener instances are registered for the specific exception.
Our question is: where can we best register an ExceptionListener in Spring JMS for this scenario? We have direct access to the producer and the JMS session, but not to the JMS connection.
It is true that registering an ExceptionListener on the connection would be useful for connection-level events, but for session-level events it may make the code more understandable and cohesive if we could register such exception listeners locally on the session or producer, since they are effectively direct responses to a message send attempt.
Of course it would also be possible to implement local exception listeners via the connection level and a thread-local structure of local listeners, but I am wondering if JMS or Spring already provides a way for the session or producer to find out directly that their message was not authorized, so that they can answer upstream to the calling microservice rather than, for instance, retrying the send.
We are using persistent messages but are unsure whether we do synchronous or asynchronous sends. I believe that on an asynchronous send, an ExceptionListener of some kind will be called back on such an event (an exception thrown in the BrokerFilter.send method), while on a synchronous send the exception will perhaps be thrown there directly (but the thread blocking may decrease the robustness of the microservice).
This is solvable with connection.setExceptionListener, but a session.setExceptionListener, or even a per-message/request-level listener, would be more convenient for us.
We would like to know whether Spring JMS offers any other options besides registering an exception listener at the connection level and besides synchronous send, if any such options exist.
Since Spring JMS uses the JMS API, you're pretty much limited to what the JMS API provides, and it doesn't provide a session- or request-level exception listener. It provides a connection-level exception listener for exceptions which are reported asynchronously, and normal Java checked exceptions for the synchronous use case.
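For completeness, a minimal sketch of the connection-level option wired through Spring's CachingConnectionFactory (the broker URL is an assumption; correlating the callback back to a particular session or producer is left to application code):

import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;

@Configuration
public class JmsExceptionListenerConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        // Broker URL is an assumption
        ActiveMQConnectionFactory targetCf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        CachingConnectionFactory cachingCf = new CachingConnectionFactory(targetCf);
        // Connection-level callback; the JMS API offers nothing finer-grained
        cachingCf.setExceptionListener((JMSException ex) ->
                System.err.println("Async JMS exception: " + ex.getMessage()));
        return cachingCf;
    }
}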

Spring JMS Consumers to a TIBCO EMS Server expire on their own

We have built a Spring Boot messaging service that listens to a JMS queue hosted on a TIBCO EMS (Enterprise Message Service) server. It is a fairly straightforward application that receives a JMS message, does some data manipulation and updates a database.
The issue is that, occasionally, there are no JMS consumers on the queue, and incoming messages are not processed. However, the Spring Boot app is up and running (verified with ps -ef). Restarting the app restores the consumer, but unfortunately this is not a feasible solution in production.
Other facts of interest:
We have observed this to happen when the JMS server accepts SSL traffic and is deployed as a Fault Tolerant pair (although this has not been a conclusive observation yet).
There is absolutely no indication in the log (like an error) when the consumer goes down.
We are using Spring-JMS (4.1.0) and TIBCO EMS (8.3.0)
Code Snippet of instantiating a DefaultJmsListenerContainerFactory:
@Bean
public DefaultJmsListenerContainerFactory listenerJmsContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // TIBCO EMS connection factory (credentials hard-coded here for brevity)
    TibjmsQueueConnectionFactory cf = new TibjmsQueueConnectionFactory("tcp://localhost:7222");
    cf.setUserName("admin");
    cf.setUserPassword("");
    factory.setConnectionFactory(cf);
    return factory;
}
The JMS Listener:
@JmsListener(destination = "queue.sample", containerFactory = "listenerJmsContainerFactory")
public void listen(TextMessage message, Session session) throws JMSException {
    System.out.println("Received Message: " + message.getJMSMessageID());
    System.out.println("Acknowledgement Mode: " + session.getAcknowledgeMode());
    // Some more application specific stuff
}
While we are trying to set up additional logging on both the Spring Boot and TIBCO side, we would like to check some points:
Can there be a situation where a consumer that has been idle for more than a certain time automatically expires?
Is this something that is governed by DMLC settings like idleConsumerLimit, idleTaskExecutionLimit etc.?
Can these properties be viewed in the Spring Boot code mentioned above? In the code above, the JMS listener container is being created under the hood by the DefaultJmsListenerContainerFactory. So how can we access the DMLC object so that we can invoke methods like getIdleConsumerLimit(), getIdleTaskExecutionLimit() etc. (see the sketch after the answer below)?
Thanks for the inputs,
Prabal
Most likely, something in the network (a router, firewall etc.) is silently dropping idle connections.
While not part of the JMS spec, most vendors implement some kind of heartbeat mechanism so that the client and server exchange pings from time to time, to prevent such actions by network components and/or to detect such conditions.
Look at the TIBCO documentation to figure out how to configure heartbeats (they might call it something else).
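Regarding the last point about reaching the underlying DefaultMessageListenerContainer created by the factory, a minimal sketch using the JmsListenerEndpointRegistry (the listener id "sampleListener" is an assumption; it would have to be set via @JmsListener(id = "sampleListener", ...)):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class ContainerInspector {

    @Autowired
    private JmsListenerEndpointRegistry registry;

    public void printContainerSettings() {
        // Look up the container by the id given on @JmsListener (assumed "sampleListener" here)
        DefaultMessageListenerContainer container =
                (DefaultMessageListenerContainer) registry.getListenerContainer("sampleListener");
        if (container != null) {
            System.out.println("idleConsumerLimit: " + container.getIdleConsumerLimit());
            System.out.println("idleTaskExecutionLimit: " + container.getIdleTaskExecutionLimit());
            System.out.println("isRunning: " + container.isRunning());
        }
    }
}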

Spring Integration: does QueueChannel guarantee no data loss?

I want my system to guarantee there is no data loss even if the system is shutting down.
What this means is that the system must not miss the request message. So I will change the way it accepts HTTP requests. Right now I am using the HTTP gateway/web service gateway in Spring Integration, but this does not retain the message if the system dies. So I want to add a queue between the HTTP client and the HTTP receiver, and I want to use a queue channel. Here is the question:
① Do I have to install a separate queue program such as ActiveMQ or RabbitMQ and connect it to the queue channel in Spring Integration?
② And which one is the best combination with Spring Integration? I heard that RabbitMQ is the best one.
Please give me an elaborate explanation. Thanks.
First of all, your description isn't clear...
If you don't want to lose messages from the QueueChannel, use a persistent MessageStore, like the JdbcChannelMessageStore:
http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/system-management-chapter.html#message-store
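A minimal sketch of a JdbcChannelMessageStore-backed QueueChannel (the DataSource, the H2 query provider and the channel name are assumptions):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.jdbc.store.JdbcChannelMessageStore;
import org.springframework.integration.jdbc.store.channel.H2ChannelMessageStoreQueryProvider;
import org.springframework.integration.store.MessageGroupQueue;
import org.springframework.messaging.PollableChannel;

@Configuration
public class PersistentChannelConfig {

    @Bean
    public JdbcChannelMessageStore messageStore(DataSource dataSource) {
        JdbcChannelMessageStore store = new JdbcChannelMessageStore(dataSource);
        // Pick the query provider matching your database (H2 used here as an example)
        store.setChannelMessageStoreQueryProvider(new H2ChannelMessageStoreQueryProvider());
        return store;
    }

    @Bean
    public PollableChannel requestQueueChannel(JdbcChannelMessageStore messageStore) {
        // Messages sent to this channel are persisted until a poller consumes them
        return new QueueChannel(new MessageGroupQueue(messageStore, "requestQueueChannel"));
    }
}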
On the other hand, there are channel wrappers for AMQP as well as for JMS:
http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/amqp.html#d4e5846
http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/jms.html#jms-channel
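And a sketch of a JMS-backed channel via JmsChannelFactoryBean (the connection factory and destination name are assumptions):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jms.config.JmsChannelFactoryBean;

@Configuration
public class JmsBackedChannelConfig {

    @Bean
    public JmsChannelFactoryBean requestJmsChannel(ConnectionFactory connectionFactory) {
        // true = message-driven (subscribable) channel backed by a JMS destination
        JmsChannelFactoryBean factoryBean = new JmsChannelFactoryBean(true);
        factoryBean.setConnectionFactory(connectionFactory);
        factoryBean.setDestinationName("request.channel.queue");
        return factoryBean;
    }
}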
These provide the same persistence, durability, and fault-tolerance options for your use case.
Re. ActiveMQ vs. RabbitMQ: I can say from my own experience that the latter is better, both in configuration and in usage from Spring Integration (Spring AMQP is under the hood), and its performance is really better.
All other info you can find on the Internet.

Spring Integration message redelivery best practice

I am currently working on an application with Spring Integration. The application requires guaranteed delivery and the ability to keep functioning, without losing messages, for a specific amount of time while the external systems are unavailable. Channels will be JMS-backed with an expiration time. I would like to understand which is the best practice for redelivery with Spring Integration. We have the following options:
The application's integration flow has a number of outbound messaging gateways that make RPC calls to external systems. A stateful retry advice can be used. After the max attempts are reached for specific runtime exceptions, the message will be routed to a recovery channel. The recovery channel will use a delayer and will then route the message back to the original channel. After the message has reached the recovery channel X times, it will be routed to the error channel, where it will simply be logged without further processing. The delayer component in this case should use the JDBC message store option.
Another option would be to use the standard JMS mechanism for redelivery. In this case the redelivery policy would not be implemented on the Spring Integration side but on the JMS provider side.
Which is the best practice for message redelivery with Spring Integration?
I'd say it like this: don't reinvent the wheel!
If there is already a similar solution for the matter, just use it as is with its specific configuration.
Right, if JMS has that solution, just go ahead.
Of course, you will need to deal with a DLQ in case of message expiration or redelivery exhaustion. But the concept is there.
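That said, if the first option (stateful retry routing to a recovery channel) were implemented on the Spring Integration side, it could look roughly like this sketch (the JMS message-id header expression, the attempt count and the recovery channel are assumptions):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.core.ErrorMessageSendingRecoverer;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.integration.handler.advice.SpelExpressionRetryStateGenerator;
import org.springframework.messaging.MessageChannel;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class StatefulRetryConfig {

    @Bean
    public RequestHandlerRetryAdvice retryAdvice(MessageChannel recoveryChannel) {
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        // Stateful retry: the retry state is keyed on the JMS message id header
        advice.setRetryStateGenerator(new SpelExpressionRetryStateGenerator("headers['jms_messageId']"));
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
        advice.setRetryTemplate(retryTemplate);
        // After retries are exhausted, send an ErrorMessage to the recovery channel
        advice.setRecoveryCallback(new ErrorMessageSendingRecoverer(recoveryChannel));
        return advice;
    }
}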

About Session.rollback() in JMS

All,
I am new to JMS and I have a question about the Session.rollback() method in JMS. AFAIK, this method is used to roll back all operations against the JMS server (sending/receiving) performed by the session when using the SESSION_TRANSACTED acknowledge mode. Now suppose I call this method in the catch block of a receiving/processing operation (is that reasonable?) to tell the JMS server to redeliver the message for processing. But even if the message is redelivered, the processing still throws the same exception, which causes the JMS server to redeliver the message again, so it seems like an infinite process. How do I handle this problem? Or are there any other JMS features designed for it? Thanks in advance!
The rollback method in JMS will roll back any message sends and receives in that "transaction". Transaction here is local to the JMS session.
Whether a redelivery will cause a problem really depends on why the exception occurred. If it was due to some transitory issue, then a redelivery may work. If you have the kind of problem that, once it occurs, will always occur (an example of this would be a JMS TextMessage whose body should contain XML but doesn't), then redelivery alone won't help and the message will keep failing.
The JMS API doesn't provide any solution to this itself. This is typically taken care of by the JMS provider, and how it behaves will depend on which one you use. WebSphere MQ, for instance, will redeliver up to a configurable maximum, at which point it will move the message off to a queue for bad messages. The Service Integration Bus in WebSphere Application Server has similar behaviour. I suggest you consult your JMS provider's documentation to determine exactly how it behaves in this situation.
If you are running in an application server, calling rollback yourself typically doesn't do anything, because the application server will be managing transactions for you.
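For reference, a minimal sketch of the transacted receive/rollback pattern described above (the queue name, timeout and process(..) helper are placeholders):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class TransactedConsumer {

    public void consumeOnce(ConnectionFactory connectionFactory) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            connection.start();
            // transacted = true: receives are only settled when commit() is called
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(session.createQueue("orders"));
            Message message = consumer.receive(5000); // timeout is a placeholder
            if (message != null) {
                try {
                    process(message);   // application-specific processing (placeholder)
                    session.commit();   // acknowledges the receive
                } catch (Exception e) {
                    session.rollback(); // the provider will redeliver, up to its configured limit
                }
            }
        } finally {
            connection.close();
        }
    }

    private void process(Message message) throws Exception {
        // placeholder
    }
}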
