We're using spring-amqp 1.5.2 with RabbitMQ 3.5.3. All queues work fine and we have consumers listening on them with no issues, except one consumer, which keeps dropping connections mysteriously. spring-amqp auto-recovers, but after a few hours the consumers are disconnected and never come back up.
The queue is declared as
@Bean
public Queue analyzeTransactionsQueue() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-message-ttl", 60000);
    return new Queue("analyze.txns", true, false, false, args);
}
Other queues are declared in a similar fashion, and have no issues.
The consumer (listener) is declared as
@Bean
public SimpleRabbitListenerContainerFactory analyzeTransactionListenerContainerFactory(ConnectionFactory connectionFactory, AsyncTaskExecutor asyncTaskExecutor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(2);
    factory.setMaxConcurrentConsumers(4);
    factory.setTaskExecutor(asyncTaskExecutor);
    ConsumerTagStrategy consumerTagStrategy = new ConsumerTagStrategy() {
        @Override
        public String createConsumerTag(String queue) {
            return queue;
        }
    };
    factory.setConsumerTagStrategy(consumerTagStrategy);
    return factory;
}
Again, other consumers having no issues are declared in a similar fashion.
The code that runs after a message is received throws no exceptions. Even after turning on DEBUG logging for SimpleMessageListenerContainer, there are no errors in the logs.
LogLevel=DEBUG; category=org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer; msg=Cancelling Consumer: tags=[{}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@10.17.1.13:5672/,47), acknowledgeMode=AUTO local queue size=0;
LogLevel=DEBUG; category=org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer; msg=Idle consumer terminating: Consumer: tags=[{}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@10.17.1.13:5672/,47), acknowledgeMode=AUTO local queue size=0;
Any ideas on why this would be happening? I have tried DEBUG logging, but to no avail.
One thing I have observed is that the consumer will disconnect if there is an exception during parsing, and it doesn't always log the problem, depending on your logging config.
Since then, I always wrap the handleDelivery method in a try/catch to get better logging and no connection drop:
consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag,
                               Envelope envelope,
                               AMQP.BasicProperties properties,
                               byte[] body) throws IOException {
        log.info("processing message - content : " + new String(body, "UTF-8"));
        try {
            MyEvent myEvent = objectMapper.readValue(new String(body, "UTF-8"), MyEvent.class);
            processMyEvent(myEvent);
        } catch (Exception exp) {
            log.error("couldn't process " + MyEvent.class + " message : ", exp);
        }
    }
};
Looking at the way you have configured things, it is pretty obvious that you have enabled dynamic scaling of consumers.
factory.setConcurrentConsumers(2);
factory.setMaxConcurrentConsumers(4);
There was a threading issue, which I submitted a fix for, that caused the number of consumers to drop to zero. This happened while consumers were scaling down.
By the looks of it, you have been a victim of that problem. The fix has been back-ported, I believe, and can be seen here.
Try using the latest version and see whether you get the same problem.
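If upgrading right away is not an option, a possible stopgap (my assumption, not something confirmed for this exact case) is to avoid the scale-down path entirely by pinning the consumer count, since the bug was triggered while consumers were scaling down:
@Bean
public SimpleRabbitListenerContainerFactory analyzeTransactionListenerContainerFactory(ConnectionFactory connectionFactory, AsyncTaskExecutor asyncTaskExecutor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setTaskExecutor(asyncTaskExecutor);
    // Fixed concurrency: with no maxConcurrentConsumers set, the container never
    // scales down, so the scale-down code path that triggered the bug is never hit.
    factory.setConcurrentConsumers(4);
    return factory;
}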
Related
I'm trying to utilise RabbitMQ Streams (available as of 3.9+) with Spring Boot's org.springframework.boot:spring-boot-starter-amqp starter.
First I started RabbitMQ via Docker Compose:
rabbitmq:
  image: rabbitmq:management
  ports:
    - "5672:5672"
    - "15672:15672"
and then ran a command to enable the stream plugin:
docker exec docker_rabbitmq_1 rabbitmq-plugins enable rabbitmq_stream
By declaring the queue
@Bean
Queue queue2() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-queue-type", "stream");
    return new Queue("test3-queue", true, false, false, args);
}
I configure the queue as a stream, and the broker logs
2021-09-22 21:06:46.400520+00:00 [warn] <0.1090.0> rabbit_stream_coordinator: started writer __test3-queue_1632344806396284388 on rabbit#1115e1f022e2 in 1
This should be enough config to start using Rabbit's new Streams feature with the given queue.
When I send a message with the rabbitTemplate
template.convertAndSend("test3-stream", request.getMessage());
all my listeners
#RabbitListener(id = "listener1", queues = "#{queue2}")
public void listen1(String in) {
log.info("AMQP listener 1: {}", in);
}
#RabbitListener(id = "listener2", queues = "#{queue2}")
public void listen2(String in) {
log.info("AMQP listener 2: {}", in);
}
#RabbitListener(id = "listener3", queues = "#{queue2}")
public void listen3(String in) {
log.info("AMQP listener 3: {}", in);
}
receive the message and print its log. According to the docs, referencing the queue by SpEL picks up the queue configured with x-queue-type: stream.
As Spring Boot uses the AMQP starter with the AMQP 0-9-1 client, it should work, but it does not: when I add a new listener, old messages are not processed by it.
Is it even possible to use @RabbitListener with the new Streams feature, or am I too early in using the append-only log with the Rabbit broker?
Should I use Kafka instead just because Spring has not yet implemented support for RabbitMQ Streams?
Rabbit comes with its own Java library for handling stream events, which works well, but it is missing the simplicity of Spring's underlying heavy lifting.
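For reference, the native stream client mentioned above looks roughly like this (a sketch only; it assumes the com.rabbitmq:stream-client dependency and that the broker's stream listener is on its default port):
import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.OffsetSpecification;

public class NativeStreamConsumerSketch {
    public static void main(String[] args) {
        // Connects over the stream protocol (default port 5552), not AMQP 5672.
        Environment environment = Environment.builder().build();
        environment.consumerBuilder()
                .stream("test3-queue")
                .offset(OffsetSpecification.first()) // read from the start of the log
                .messageHandler((context, message) ->
                        System.out.println(new String(message.getBodyAsBinary())))
                .build();
    }
}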
EDIT:
I've got a bit further by configuring the RabbitListenerContainerFactory with a customizer:
@Bean
RabbitListenerContainerFactory rabbitListenerContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setContainerCustomizer(c -> {
        Map<String, Object> args = new HashMap<>(c.getConsumerArguments());
        args.put("x-stream-offset", "first");
        c.setConsumerArguments(args);
    });
    return factory;
}
This causes the new listener to read the messages from the beginning (offset 0). The odd thing is that ALL listeners now read from offset 0.
I guess it's because the consumers (listeners) do not have their names set correctly?
Working versions in the app:
IBM AllClient version : 'com.ibm.mq:com.ibm.mq.allclient:9.1.1.0'
org.springframework:spring-jms : 4.3.9.RELEASE
javax.jms:javax.jms-api : 2.0.1
My requirement is that if message processing fails because, say, something the consumer depends on is unavailable (e.g. the DB is down), the message remains in the queue or is put back on the queue (if that is even possible). This is because message order matters: messages have to be consumed in the same order they are received. The Java app is single-threaded.
I have tried the following
@Override
public void onMessage(Message message)
{
    try {
        if (message instanceof TextMessage)
        {
        }
        // ...
        throw new Exception("Test"); // Just to test the retry
    }
    catch (Exception ex)
    {
        try
        {
            int temp = message.getIntProperty("JMSXDeliveryCount");
            throw new RuntimeException("Redelivery attempted");
            // At this point, I am expecting JMS to put the message back into the queue.
            // But it is actually put into the Backout queue.
        }
        catch (JMSException ef)
        {
            String temp = ef.getMessage();
        }
    }
}
I have set this in my spring.xml for the jmsContainer bean.
<property name="sessionTransacted" value="true" />
What is wrong with the code above?
And if putting the message back in the queue is not practical, how can one browse the message, process it and, if successful, pull the message (so it is consumed and no longer on the queue)? Is this scenario supported in the IBM provider for JMS?
The IBM MQ Local queue has BOTHRESH(1).
To preserve message ordering, one approach might be to stop the message listener temporarily as part of your rollback strategy. Looking at the Spring documentation for DefaultMessageListenerContainer, there is a stop(Runnable callback) method. I've experimented with using this in a rollback as follows.
To ensure my listener is single-threaded, on my DefaultJmsListenerContainerFactory I set containerFactory.setConcurrency("1").
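A minimal sketch of what such a factory bean might look like (my reconstruction, not the answer's actual code; the bean name listenerTwoFactory simply matches the listener below):
@Bean
public DefaultJmsListenerContainerFactory listenerTwoFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // A single consumer thread preserves message ordering.
    factory.setConcurrency("1");
    // The rollback strategy described here relies on a transacted session.
    factory.setSessionTransacted(true);
    return factory;
}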
In my Listener, I set an id
@JmsListener(destination = "DEV.QUEUE.2", containerFactory = "listenerTwoFactory", concurrency = "1", id = "listenerTwo")
And retrieve the DefaultMessageListenerContainer instance.
JmsListenerEndpointRegistry reg = context.getBean(JmsListenerEndpointRegistry.class);
DefaultMessageListenerContainer mlc = (DefaultMessageListenerContainer) reg.getListenerContainer("listenerTwo");
For testing, I check JMSXDeliveryCount and throw an exception to rollback.
retryCount = Integer.parseInt(msg.getStringProperty("JMSXDeliveryCount"));
if (retryCount < 5) {
    throw new Exception("Rollback test " + retryCount);
}
In the Listener's catch processing, I call stop(Runnable callback) on the DefaultMessageListenerContainer instance and pass in a new class ContainerTimedRestart as defined below.
// catch processing here and decide to roll back
mlc.stop(new ContainerTimedRestart(mlc, delay));
System.out.println("#### " + getClass().getName() + " Unable to process message.");
throw new Exception();
ContainerTimedRestart implements Runnable, and DefaultMessageListenerContainer is responsible for invoking its run() method when the stop call completes.
public class ContainerTimedRestart implements Runnable {

    // Container instance to restart.
    private DefaultMessageListenerContainer theMlc;

    // Default delay before restart in millis.
    private long theDelay = 5000L;

    // Basic constructor for testing.
    public ContainerTimedRestart(DefaultMessageListenerContainer mlc, long delay) {
        theMlc = mlc;
        theDelay = delay;
    }

    public void run() {
        // Validate container instance.
        try {
            System.out.println("#### " + getClass().getName() + " Waiting for " + theDelay + " millis.");
            Thread.sleep(theDelay);
            System.out.println("#### " + getClass().getName() + " Restarting container.");
            theMlc.start();
            System.out.println("#### " + getClass().getName() + " Container started!");
        } catch (InterruptedException ie) {
            ie.printStackTrace();
            // Further checks and ensure container is in correct state.
            // Report errors.
        }
    }
}
I loaded my queue with three messages with payloads "a", "b", and "c" respectively and started the listener.
Checking DEV.QUEUE.2 on my queue manager, I see IPPROCS(1), confirming only one application handle has the queue open. The messages are processed in order, after each is rolled back five times, with a 5-second delay between rollback attempts.
IBM MQ classes for JMS has poison message handling built in. This handling is based on the QLOCAL setting BOTHRESH, which stands for Backout Threshold. Each IBM MQ message has a "header" called the MQMD (MQ Message Descriptor). One of the fields in the MQMD is BackoutCount. The default value of BackoutCount on a new message is 0. Each time a message is rolled back to the queue, this count is incremented by 1. A rollback can come either from a specific call to rollback(), or from the application being disconnected from MQ before commit() is called (due to a network issue, for example, or the application crashing).
Poison message handling is disabled if you set BOTHRESH(0).
If BOTHRESH is >= 1, then poison message handling is enabled. When IBM MQ classes for JMS reads a message from a queue, it checks whether the BackoutCount is >= BOTHRESH. If the message is eligible for poison message handling, it is moved to the queue specified in the BOQNAME attribute. If that attribute is empty, or the application does not have access to PUT to that queue for some reason, it instead attempts to put the message to the queue specified in the queue manager's DEADQ attribute; if it can't put to either of these locations, the message is rolled back to the queue.
You can find more detailed information on IBM MQ classes for JMS poison message handling in the IBM MQ v9.1 Knowledge Center page Developing applications>Developing JMS and Java applications>Using IBM MQ classes for JMS>Writing IBM MQ classes for JMS applications>Handling poison messages in IBM MQ classes for JMS
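To connect this to the listener code above (my illustration, not part of the original answer): the MQMD BackoutCount is surfaced to a JMS application through the JMSXDeliveryCount property, which is the backout count plus one, so the application can see how close a message is to the BOTHRESH limit:
// Inside onMessage(Message message): JMSXDeliveryCount is 1 on first delivery and
// increments on each redelivery, i.e. deliveryCount == BackoutCount + 1.
int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
if (deliveryCount > 1) {
    // The message has been rolled back at least once; once BackoutCount reaches
    // BOTHRESH, the MQ classes for JMS will requeue it to BOQNAME (or the DEADQ).
}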
In Spring JMS you can define your own container. One container is created per JMS destination. To maintain message ordering, run a single-threaded JMS listener; to make this work, set the concurrency to 1.
We can design our container to return null once it encounters an error; after a failure, all receive calls return null so that no messages are polled from the destination until it is considered active again. We can track the active state with a simple millisecond timestamp. The sample JMS config below should be sufficient to add backoff. Instead of continuously returning null from receiveMessage, you could also sleep briefly, for example for 10 seconds before the next call, to save some CPU.
@Configuration
@EnableJms
public class JmsConfig {

    @Bean
    public JmsListenerContainerFactory<?> jmsContainerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory() {

            @Override
            protected DefaultMessageListenerContainer createContainerInstance() {
                return new DefaultMessageListenerContainer() {

                    private long deactivatedTill = 0;

                    @Override
                    protected Message receiveMessage(MessageConsumer consumer) throws JMSException {
                        if (deactivatedTill < System.currentTimeMillis()) {
                            return receiveFromConsumer(consumer, getReceiveTimeout());
                        }
                        logger.info("Disabled due to failure :(");
                        return null;
                    }

                    @Override
                    protected void doInvokeListener(MessageListener listener, Message message)
                            throws JMSException {
                        try {
                            super.doInvokeListener(listener, message);
                        } catch (Exception e) {
                            handleException(message);
                            throw e;
                        }
                    }

                    private long getDelay(int retryCount) {
                        if (retryCount <= 1) {
                            return 20;
                        }
                        return (long) (20 * Math.pow(2, retryCount));
                    }

                    private void handleException(Message msg) throws JMSException {
                        if (msg.propertyExists("JMSXDeliveryCount")) {
                            int retryCount = msg.getIntProperty("JMSXDeliveryCount");
                            deactivatedTill = System.currentTimeMillis() + getDelay(retryCount);
                        }
                    }

                    @Override
                    protected void doInvokeListener(SessionAwareMessageListener listener, Session session,
                            Message message) throws JMSException {
                        try {
                            super.doInvokeListener(listener, session, message);
                        } catch (Exception e) {
                            handleException(message);
                            throw e;
                        }
                    }
                };
            }
        };
        // This provides all of Boot's defaults to this factory, including the message converter
        configurer.configure(factory, connectionFactory);
        // You could still override some of Boot's defaults if necessary.
        return factory;
    }
}
We are using RabbitMQ, and our producers and consumers were developed using spring-boot-starter-amqp (spring-rabbit 1.5.6.RELEASE).
We've had some deadlocks in our consumers in production, and when that happened we needed to restart our consumers to process new messages again. This problem is probably happening because some messages are blocked by another transaction in one of our consumers. We started an investigation to find these deadlocks, but we have a complex context and it is a little hard to track down deadlock problems, so I'd like to ask:
Is there a way to set a timeout on message processing in our Spring Boot application, so that we can abort just one message without blocking all the other messages in our consumers?
Our code:
public class AmqpConfig {

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueues(consumerQueue());
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    public MessageListenerAdapter listenerAdapter(Receiver receiver) {
        MessageListenerAdapter listener = new MessageListenerAdapter(receiver);
        listener.addQueueOrTagToMethodName(consumerQueue().getName(), "consumerListener");
        return listener;
    }

    @Bean
    public Queue consumerQueue() {
        return new Queue("some.consumer", true, false, false, null);
    }
}

public class Receiver {

    // Method where I'd like to put some timeout on processing.
    public String consumerListener(String message) {
        // code probably causing the deadlock
    }
}
Thanks for your help.
If the deadlocks are on synchronized blocks or methods, you are out of luck; they are not interruptible.
You have to solve the deadlocks.
If you can change them to use java.util.concurrent.locks.Lock objects instead, you can use tryLock() with a timeout.
There is nothing the framework can do to help you with this problem.
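For illustration, a minimal sketch of the tryLock approach (reusing the Receiver/consumerListener names from the question; the process(...) helper is hypothetical):
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Receiver {

    private final Lock lock = new ReentrantLock();

    public String consumerListener(String message) {
        try {
            // Give up after 30 seconds instead of blocking forever on a deadlocked lock.
            if (!lock.tryLock(30, TimeUnit.SECONDS)) {
                throw new IllegalStateException("Could not acquire lock; rejecting message");
            }
            try {
                return process(message); // hypothetical processing method
            } finally {
                lock.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("Interrupted while waiting for the lock", e);
        }
    }

    private String process(String message) {
        return message; // placeholder
    }
}
Throwing from the listener returns the consumer thread to the container instead of leaving it blocked indefinitely.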
1. Background
We are evaluating Spring JMS and testing out the JMSTemplate for various scenarios - queues, topics (durable, non-durable).
We experienced message loss for non-durable topic subscribers and would like to seek clarifications here.
2. Problem Statement
a) We wrote a standalone Java program that calls the JMSTemplate.receive method every n seconds to receive messages synchronously from a non-durable topic.
b) We noticed that there is always message loss after the 1st invocation of the JMSTemplate.receive method. This was due to the JMSTemplate.receive method stopping the connection when it reaches ConnectionFactoryUtils.releaseConnection(...).
JMSTemplate:
public <T> T execute(SessionCallback<T> action, boolean startConnection) throws JmsException {
    Assert.notNull(action, "Callback object must not be null");
    Connection conToClose = null;
    Session sessionToClose = null;
    try {
        Session sessionToUse = ConnectionFactoryUtils.doGetTransactionalSession(
                getConnectionFactory(), this.transactionalResourceFactory, startConnection);
        if (sessionToUse == null) {
            conToClose = createConnection();
            sessionToClose = createSession(conToClose);
            if (startConnection) {
                conToClose.start();
            }
            sessionToUse = sessionToClose;
        }
        if (logger.isDebugEnabled()) {
            logger.debug("Executing callback on JMS Session: " + sessionToUse);
        }
        return action.doInJms(sessionToUse);
    }
    catch (JMSException ex) {
        throw convertJmsAccessException(ex);
    }
    finally {
        JmsUtils.closeSession(sessionToClose);
        ConnectionFactoryUtils.releaseConnection(conToClose, getConnectionFactory(), startConnection); // the connection is stopped here
    }
}
ConnectionFactoryUtils.releaseConnection(...):
public static void releaseConnection(Connection con, ConnectionFactory cf, boolean started) {
    if (con == null) {
        return;
    }
    if (started && cf instanceof SmartConnectionFactory && ((SmartConnectionFactory) cf).shouldStop(con)) {
        try {
            con.stop(); // connection was stopped here
        }
        catch (Throwable ex) {
            logger.debug("Could not stop JMS Connection before closing it", ex);
        }
    }
    try {
        con.close();
    }
    catch (Throwable ex) {
        logger.debug("Could not close JMS Connection", ex);
    }
}
3. Validation with Spring Documentation
The Spring JMS documentation advises using pooled connections, so we made sure we did.
Our Java program obtains the JMS connection factories from the WLS JMS and MQ JMS (LDAP) providers and decorates them with SingleConnectionFactory and CachingConnectionFactory in the respective test cases.
This is what we observed during testing:
a) SingleConnectionFactory - Connection was stopped (Consumer/Session were closed as well).
b) CachingConnectionFactory - Connection was also stopped (although Consumer/Session were cached and not closed)
4. Questions:
a) Has anybody hit the same issue as us?
b) Would you consider this as a defect of Spring JMS for the use case of Non-Durable Subscriptions?
c) We are considering customizing a CachingConnectionFactory that won't stop the connection. Any downsides?
Note: We are aware that Async MessageListeners like DMLC/SMLC and Sync Durable Topic Subscribers using JMSTemplate would not have this issue. We just wish to clarify for Sync Non-Durable Topic Subscribers using JMSTemplate.
Would greatly appreciate any comments and thoughts.
Thanks!
Victor
ActiveMQ 5.10.0
Spring 4.1.2
I'm using Spring to access ActiveMQ and trying to peek at the queue before adding a new message onto it. The message is added successfully, but the queue browser does not show anything in the queue. Through the web interface, I see my messages are pending in the queue.
Thanks!
@Service
public class MessageQueueService {

    private static final Logger logger = LoggerFactory.getLogger(MessageQueueService.class);

    @Inject
    JmsTemplate jmsTemplate;

    @SuppressWarnings({ "rawtypes", "unchecked" })
    public void testAddJob() {
        jmsTemplate.send(new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                IndexJob j1 = new IndexJob();
                j1.setOperation("post");
                ObjectMessage om = session.createObjectMessage();
                om.setObject(j1);

                QueueBrowser qb = session.createBrowser((javax.jms.Queue) jmsTemplate.getDefaultDestination());
                Enumeration<Message> messages = qb.getEnumeration();
                logger.info("browsing " + qb.getQueue().getQueueName());
                int i = 0;
                while (messages.hasMoreElements()) {
                    i++;
                    Message message = messages.nextElement();
                    logger.info(message + "");
                }
                logger.info("total record:" + i);
                return om;
            }
        });
    }
}
output:
2014-12-07 00:03:43.874 [main] INFO c.b.b.s.MessageQueueService - browsing indexJob
2014-12-07 00:03:43.878 [main] INFO c.b.b.s.MessageQueueService - total record:0
UPDATE: execute has a not-yet-well-documented parameter, boolean startConnection. When it is set to true, it seems to work. This is not a solution though -
String result = jms.execute(new SessionCallback<String>() {
    @Override
    public String doInJms(Session session) throws JMSException {
        QueueBrowser queue = session.createBrowser((Queue) session.createQueue("indexJob"));
        Enumeration<Message> messages = queue.getEnumeration();
        String result = "";
        logger.info("Browse Queue: " + queue.getQueue().getQueueName());
        while (messages.hasMoreElements()) {
            Message message = messages.nextElement();
            result += message;
        }
        logger.info(result);
        return result;
    }
}, true);
Looking at the org.springframework.jms.core.JmsTemplate source, most of the send methods use the execute() method with startConnection=false.
If the connection was not started, then how did the messages get added to the queue?
Does anyone know what the Javadoc @param startConnection whether to start the Connection means?
This can be a somewhat confusing bit of JMS. The Connection start only refers to consumption of messages from the connection, not to producing. You are free to produce messages whenever you like, started or not, but if you want to consume or browse a destination you need to start the connection, otherwise you will not get any messages dispatched to your consumers.
The purpose behind this is to allow you to create all your JMS resources prior to receiving any messages, which might otherwise catch you in a state where your app isn't quite ready for them.
So in short, if you want to browse that message, you need to ensure the connection gets started.
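As a sketch of that advice (using the indexJob queue from the question): JmsTemplate.browse(...) runs its BrowserCallback with a started connection, so you can browse without calling execute(..., true) yourself:
// Browse the queue via JmsTemplate.browse(), which takes care of starting the
// connection for the duration of the callback, then log the pending count.
int pending = jmsTemplate.browse("indexJob", (session, browser) -> {
    int count = 0;
    Enumeration<?> messages = browser.getEnumeration();
    while (messages.hasMoreElements()) {
        messages.nextElement();
        count++;
    }
    return count;
});
logger.info("total record:" + pending);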