Background
I have a JMS message queue on Apache Artemis 2.7.0.redhat-00056. The broker is configured with a redelivery-delay of 10 minutes. If I publish a message to the queue and it fails on the consumer, it goes back to the queue as a scheduled message to be delivered in 10 minutes' time. Any subsequent messages that are published are processed straight away, so the queue is not blocked by the scheduled message.
If a number of messages are sent in quick succession, they all fail and all get scheduled for redelivery in 10 minutes' time. In this case it looks like Artemis is trying to preserve the message order.
Documentation
The docs on redelivery say the following:
Other subsequent messages will be delivered regularly, only the cancelled message will be sent asynchronously back to the queue after the delay.
Redelivery documentation
Problem
It seems inconsistent to me that if you publish the messages in close succession Artemis appears to preserve the order, whereas if there is a slight delay between messages the queue does not block and only the failed messages are scheduled with a delay (as per the docs).
I'm trying to find a solution so that if one message fails and needs to be redelivered in 10 minutes that it doesn't block subsequent messages.
Example
Nothing special is needed to recreate this. As described, you just need to send some messages in quick succession to a queue that has a redelivery policy configured on the broker. I've been testing with a basic example as follows:
Spring Boot app that produces five messages on startup.
@SpringBootApplication
public class ArtemisTestApplication
{
    private Logger logger = LoggerFactory.getLogger(ArtemisTestApplication.class);

    @Autowired
    private JmsTemplate jmsTemplate;

    @PostConstruct
    public void init()
    {
        send("Message1");
        send("Message2");
        send("Message3");
        send("Message4");
        send("Message5");
    }

    public void send(String msg)
    {
        logger.debug("Sending message :{}", msg);
        jmsTemplate.convertAndSend("jms.queue.TestQueue", msg);
    }

    public static void main(String[] args)
    {
        SpringApplication.run(ArtemisTestApplication.class, args);
    }
}
Consume messages and throw an error to trigger the redelivery policy.
@Component
public class TestConsumer
{
    private Logger logger = LoggerFactory.getLogger(TestConsumer.class);

    @JmsListener(destination = "jms.queue.TestQueue")
    public void receive(TextMessage message) throws JMSException
    {
        logger.debug("Message received: {}", message.getText());
        throw new RuntimeException("Force redelivery policy");
    }
}
The app was generated using the Spring Boot Initializr. Other than giving it a name, the only thing of note selected was the Artemis dependency under Messaging.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-artemis</artifactId>
</dependency>
In application.properties I have configured the connection properties to the locally running instance of Artemis.
spring.artemis.mode=native
spring.artemis.host=localhost
spring.artemis.port=61616
spring.artemis.user=
spring.artemis.password=
And on the broker I have configured the queue with a redelivery policy. Note: I set the delay to 0 here and the problem still occurs: all messages are blocked until the first message has had three attempts and been moved to the DLQ. If you change the delay to a positive number then you see all five messages get scheduled for later delivery.
<address-settings>
    <address-setting match="jms.queue.TestQueue">
        <dead-letter-address>DLQ</dead-letter-address>
        <redelivery-delay>0</redelivery-delay>
        <max-delivery-attempts>3</max-delivery-attempts>
    </address-setting>
</address-settings>

<addresses>
    <address name="DLQ">
        <anycast>
            <queue name="DLQ" />
        </anycast>
    </address>
    <address name="jms.queue.TestQueue">
        <anycast>
            <queue name="jms.queue.TestQueue" />
        </anycast>
    </address>
</addresses>
I have come to the conclusion that this is a bug in Artemis. I raised a ticket for it and a comment has been left by somebody else experiencing the same issue.
https://issues.apache.org/jira/browse/ARTEMIS-2417
In the meantime I have had to change our client application to handle the redelivery policy itself. If there is an error reading the message then we increment a counter on the message and write it back as a new message with the required delay. The message being consumed is then acknowledged to unblock the queue and allow the other messages to be read. I have left the redelivery policy configured on the broker as a fallback in case there is an error outside this logic or something is not caught. It's not ideal, but it does at least now meet the requirements.
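For reference, here is a minimal sketch of that client-side retry (not our exact production code). It assumes a hypothetical deliveryAttempts property for the counter and relies on Artemis's scheduled-delivery message property _AMQ_SCHED_DELIVERY to delay the re-published copy:

import javax.jms.JMSException;
import javax.jms.TextMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class RetryingConsumer
{
    private static final long REDELIVERY_DELAY_MS = 10 * 60 * 1000L;
    private static final int MAX_ATTEMPTS = 3;

    private final Logger logger = LoggerFactory.getLogger(RetryingConsumer.class);

    @Autowired
    private JmsTemplate jmsTemplate;

    @JmsListener(destination = "jms.queue.TestQueue")
    public void receive(TextMessage message) throws JMSException
    {
        String body = message.getText();
        try
        {
            process(body);
        }
        catch (RuntimeException e)
        {
            // "deliveryAttempts" is a hypothetical property used to track retries
            int attempts = message.propertyExists("deliveryAttempts")
                    ? message.getIntProperty("deliveryAttempts") : 0;
            if (attempts + 1 >= MAX_ATTEMPTS)
            {
                throw e; // give up and let the broker policy / DLQ take over
            }
            logger.debug("Rescheduling message: {}", body);
            jmsTemplate.convertAndSend("jms.queue.TestQueue", body, m -> {
                m.setIntProperty("deliveryAttempts", attempts + 1);
                m.setLongProperty("_AMQ_SCHED_DELIVERY",
                        System.currentTimeMillis() + REDELIVERY_DELAY_MS);
                return m;
            });
            // returning normally acknowledges the original message, so the
            // queue is not blocked while the copy waits to be redelivered
        }
    }

    private void process(String text)
    {
        // business logic that may fail
    }
}

Because the listener returns normally after re-publishing, the original message is acknowledged and later messages on the queue are processed immediately.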
Related
I am using ActiveMQ Artemis 2.19.1. I created producer and consumer apps using Spring Boot. I need multiple instances of the consumer to receive all the messages (multicast). I configured a Last Value Queue like this (broker.xml):
<address-settings>
    <address-setting match="quote.#">
        <max-size-bytes>1000000000</max-size-bytes> <!-- 1GB -->
        <address-full-policy>BLOCK</address-full-policy>
        <default-last-value-key>symbol</default-last-value-key>
        <default-last-value-queue>true</default-last-value-queue>
        <default-non-destructive>true</default-non-destructive>
    </address-setting>
    ...
</address-settings>
Sending is done like this and appears to work correctly. "symbol" is the LVQ key.
import org.springframework.jms.core.JmsTemplate;

@Service
public class DispatcherService {

    @Autowired
    JmsTemplate jmsTemplate;

    private String jmsQueue; // destination name, configured elsewhere

    public void sendMessageA(String message) {
        jmsTemplate.convertAndSend(jmsQueue, message, m -> {
            m.setStringProperty("symbol", "ABC");
            return m;
        });
    }
}
If the Spring Boot application.properties has:
spring.jms.pub-sub-domain=true
...then all clients receive all messages when published (good). However, the most recent message is not delivered to new clients when they start and subscribe to the topic.
If instead using:
spring.jms.pub-sub-domain=false
I can see the last message remains in the Last Value Queue (good) and connecting consumers get the last message. However, as messages are published they're distributed round-robin (anycast), so not all messages go to all consumers.
How can I make sure clients connecting to a LVQ receive the most recent message then all future messages, not just a round-robin distribution of future messages?
EDIT:
Doing this works. Just leave spring.jms.pub-sub-domain=true and set retroactive-message-count greater than the number of symbols that may be encountered, otherwise some will not be retained:
<address-setting match="quotes">
    <retroactive-message-count>100000</retroactive-message-count>
</address-setting>

<address-setting match="*.*.*.quotes.*.retro">
    <default-last-value-key>symbol</default-last-value-key>
</address-setting>
It sounds to me like everything is working as designed. I believe your expectations are being thwarted because you're using pub/sub (i.e. JMS topics).
Let me provide a bit of background. When a JMS client creates a subscription on a topic the broker responds by creating a multicast queue on the address with the same name. The queue is named according to the kind of subscription it is. If it is a non-durable subscription then the queue is named with a UUID. If it is a durable subscription then the queue is named according to the subscription name provided by the client and the client ID (if available). When a message is sent to the address it is put in all the multicast queues bound to that address.
Therefore, when a new non-durable subscription is created a new queue for that subscription is also created which means that the subscriber will receive none of the messages sent to the topic prior to the creation of the subscription. This is the expected behavior for JMS topics (i.e. normal pub/sub semantics). Also, since the queue for a non-durable subscription is only available while the subscriber is connected that means there's no way to enforce LVQ semantics since any message which arrives in the queue will be immediately dispatched to the consumer. In short, LVQ with JMS topics doesn't make a lot of sense.
The behavior changes when you use a JMS queue because the queue is always there to receive messages. Consumers can come and go as they please while the broker enforces LVQ semantics.
One possible solution would be to create a special "initialization" queue where consumers could initially connect to get the latest information and after that they could subscribe to the JMS topic to get the pub/sub semantics you need. You could use a divert to make this transparent for the applications sending the messages so they can continue to just send to the JMS topic. Here's sample configuration:
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <core xmlns="urn:activemq:core">
        ...
        <diverts>
            <divert name="myDivert">
                <address>myTopic</address>
                <forwarding-address>initQueue</forwarding-address>
                <exclusive>false</exclusive>
            </divert>
        </diverts>
        ...
        <addresses>
            <address name="myTopic">
                <multicast/>
            </address>
            <address name="initQueue">
                <anycast>
                    <queue name="initQueue" last-value-key="symbol" non-destructive="true" />
                </anycast>
            </address>
            ...
        </addresses>
    </core>
</configuration>
Using this configuration, every message sent to the JMS topic myTopic will be transparently sent to initQueue as well. This queue will keep only the most up-to-date messages since it uses last-value semantics. Also, those up-to-date messages will stay in the queue for any subsequent consumer since the queue is non-destructive.
The only difficulty I anticipate here is with Spring, which may not provide you with the flexibility to create the initial queue consumer and then create a topic subscriber. If you used the JMS API directly this would be a relatively simple matter.
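For what it's worth, here's a rough sketch of that flow using the plain JMS API. The connectionFactory and the handle(Message) callback are assumed to exist in your application; the queue and topic names come from the configuration above:

Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
connection.start();

// 1. Drain the non-destructive init queue to pick up the retained last values.
MessageConsumer initConsumer = session.createConsumer(session.createQueue("initQueue"));
Message snapshot;
while ((snapshot = initConsumer.receive(500)) != null) {
    handle(snapshot); // apply the last-value snapshot
}
initConsumer.close();

// 2. Subscribe to the topic for live updates (normal pub/sub semantics).
MessageConsumer liveConsumer = session.createConsumer(session.createTopic("myTopic"));
liveConsumer.setMessageListener(message -> handle(message));

Note there is a small window between draining the init queue and subscribing to the topic in which a newly published message could be missed, so this is only a sketch of the general idea.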
Another potential solution would be to use retroactive addresses. The main thing to do here would be to ensure the internal ring queues were LVQs. You can do that with the default-last-value-key address-setting. See the documentation for details on the match to use.
I'm using Spring Boot with spring-amqp and an annotation-based listener to consume messages from a RabbitMQ broker.
I have a Spring component which contains a method like this:
@RabbitListener(queues = "tasks")
public void receiveMessage(@Payload Task task) {...}
I'm using the AUTO mode to acknowledge messages after successful execution of receiveMessage(...). If I detect a special error, I throw AmqpRejectAndDontRequeueException to get the message into a configured dead letter queue. Now I need to nack a message only, so that the message gets requeued into the main RabbitMQ queue and another consumer has the possibility to work on that message again.
Which exception should I throw for that? I would prefer not to use channel.basicNack(...) as described here (http://docs.spring.io/spring-integration/reference/html/amqp.html) if possible.
As long as defaultRequeueRejected is true (the default) in the container factory, throwing any exception other than AmqpRejectAndDontRequeueException will cause the message to be rejected and requeued.
The exception must not have an AmqpRejectAndDontRequeueException in its cause chain (the container traverses the causes to ensure there is no such exception).
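For example, a sketch of the listener under those defaults (isTransientError and isPermanentError are just illustrative helpers):

@RabbitListener(queues = "tasks")
public void receiveMessage(@Payload Task task) {
    if (isPermanentError(task)) {
        // rejected without requeue -> routed to the configured dead letter queue
        throw new AmqpRejectAndDontRequeueException("permanent failure");
    }
    if (isTransientError(task)) {
        // any other runtime exception -> rejected and requeued, so another
        // consumer can pick the message up again
        throw new IllegalStateException("transient failure, please requeue");
    }
    // normal return -> message is acknowledged (AUTO mode)
}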
I made a simple JMS project with two Java files, MessageSender.java and MessageConsumer.java: one sends messages to an ActiveMQ queue and the other consumes messages from it. I deployed this project in Apache Tomcat. The following is the consumer code:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("admin", "admin",
        "tcp://localhost:61617?jms.prefetchPolicy.queuePrefetch=1");
Connection connection = connectionFactory.createConnection();
final Session session = connection.createSession(true, Session.CLIENT_ACKNOWLEDGE);
Queue queue = session.createQueue("ThermalMap");
javax.jms.MessageConsumer consumer = session.createConsumer(queue);

// anonymous class
MessageListener listener = new MessageListener() {
    @Override
    public void onMessage(Message msg) {
        // My business code
    }
};

// register the listener and start delivery
consumer.setMessageListener(listener);
connection.start();
Later, if I want to change the consumer code, I don't want to stop Tomcat, because if I stop Tomcat the whole JMS project stops working and clients can't send messages to the ActiveMQ queue. So I don't want to follow this approach.
I am thinking that if I stop the consumers through the ActiveMQ console page, I don't need to stop Tomcat, so clients can still send messages normally. I checked the ActiveMQ console page for this, but I didn't see any consumers.
Is this the correct way to do it? If it is, how can I do it?
Can anyone suggest an approach?
Thanks.
Call the .close() method on your MessageConsumer.
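For example, a rough sketch assuming you keep references to the session, queue, listener and connection created in your consumer code (the startListening()/stopListening() names are just illustrative, e.g. invoked from a servlet or a JMX bean):

private javax.jms.MessageConsumer consumer;

public void startListening() throws JMSException {
    consumer = session.createConsumer(queue);
    consumer.setMessageListener(listener);
    connection.start();
}

public void stopListening() throws JMSException {
    if (consumer != null) {
        consumer.close(); // broker stops dispatching; messages simply stay on the queue
        consumer = null;
    }
}

After close() the broker stops dispatching to that consumer and messages accumulate on the queue until a new consumer is created.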
Is there a way to get Spring DMLC (DefaultMessageListenerContainer) to consume messages on a schedule (say every 10 minutes) using cron?
I don't want the messages to be picked up by the Spring DMLC all the time.
Let's say a message is produced and dropped off at the JMS broker; I'd like the consumer (Spring DMLC) to pick it up after some time for processing.
I am wondering if there is a way to configure Spring DMLC and Quartz?
Why do you need a DMLC in that case? If you use Spring, a JmsTemplate might be what you are looking for.
void readOneMessageAndProcess() throws JmsException {
    Message msg = jmsTemplate.receive("SOME.QUEUE");
    // Process.
}
Then have Quartz, a Java timer, or a simple public static void main(String[] args) triggered by a cron job run the method.
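For example, a minimal sketch using Spring's @Scheduled support instead of a full Quartz setup (it assumes @EnableScheduling is active and that a receive timeout has been set on the JmsTemplate so receive() returns null once the queue is empty):

@Scheduled(cron = "0 */10 * * * *") // every 10 minutes
public void readPendingMessages() {
    Message msg;
    // drain whatever is currently waiting, then do nothing until the next run
    while ((msg = jmsTemplate.receive("SOME.QUEUE")) != null) {
        // process msg
    }
}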
I'm using Spring 2.5 with my custom class that implements MessageListener. If a JmsException is thrown in my onMessage() method, what happens to the state of the queue?
Is the message considered "delivered" by the queue the moment onMessage is called? Or does the JmsException trigger some kind of rollback and the message is re-entered on the queue?
Thanks in advance!
From the JMS 1.1 spec...
4.5.2 Asynchronous Delivery
A client can register an object that implements the JMS MessageListener interface with a MessageConsumer. As messages arrive for the consumer, the provider delivers them by calling the listener’s onMessage method.
It is possible for a listener to throw a RuntimeException; however, this is considered a client programming error. Well-behaved listeners should catch such exceptions and attempt to divert messages causing them to some form of application-specific ‘unprocessable message’ destination.
The result of a listener throwing a RuntimeException depends on the session’s acknowledgment mode.
AUTO_ACKNOWLEDGE or DUPS_OK_ACKNOWLEDGE - the message will be immediately redelivered. The number of times a JMS provider will redeliver the same message before giving up is provider-dependent. The JMSRedelivered message header field will be set for a message redelivered under these circumstances.
CLIENT_ACKNOWLEDGE - the next message for the listener is delivered. If a client wishes to have the previous unacknowledged message redelivered, it must manually recover the session.
Transacted Session - the next message for the listener is delivered. The client can either commit or roll back the session (in other words, a RuntimeException does not automatically rollback the session).
JMS providers should flag clients with message listeners that are throwing RuntimeExceptions as possibly malfunctioning.
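As a practical follow-up (a sketch, not something taken from the spec quote above): if you want a thrown exception to put the message back on the queue, one option is to run the listener in a transacted session via Spring's DefaultMessageListenerContainer, which rolls the session back when the listener throws and so triggers redelivery. The names below are illustrative:

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory); // assumed to be available
container.setDestinationName("SOME.QUEUE");        // illustrative queue name
container.setMessageListener(myMessageListener);   // the custom MessageListener
container.setSessionTransacted(true);              // roll back (and redeliver) on exception
container.afterPropertiesSet();
container.start();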