Multi-threaded JMS message consumption

I need to concurrently consume messages from a queue.
Fortunately, the order of the messages is not important.
I have come up with a solution, but I am not sure how correct it is.
The solution consumes messages in multiple threads, reuses the threads via a pool, and eventually lets idle threads die.
But isn't consuming messages like this thread-unsafe?
In addition, the current solution uses AUTO_ACKNOWLEDGE; ideally I would like to replace it with CLIENT_ACKNOWLEDGE.
Unfortunately, as I am using a single session, all the messages will get acknowledged at once, so it seems there is no easy way of doing it.
ActiveMQ Artemis has no option for INDIVIDUAL_ACKNOWLEDGE like ActiveMQ "Classic" did.
Does anyone have any ideas if this solution is okay and/or any ideas on how to improve this so that I can use CLIENT_ACKNOWLEDGE?
private void createConsumer(Connection connection, String queueName, int maxPoolSize) {
    try {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(queueName);
        MessageConsumer messageConsumer = session.createConsumer(queue);
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                0, maxPoolSize,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        messageConsumer.setMessageListener(message -> {
            try {
                // hand the message off to a pooled thread for processing
                executor.submit(() -> messageHandler.handleMessage(message));
                // note: acknowledged immediately, before handling has completed
                message.acknowledge();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    } catch (JMSException e) {
        e.printStackTrace();
    }
}

ActiveMQ Artemis does have an INDIVIDUAL_ACKNOWLEDGE mode for consumers. You just need to use ActiveMQJMSConstants.INDIVIDUAL_ACKNOWLEDGE as noted in the documentation.
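For example, a minimal sketch (connection is the question's javax.jms.Connection; the constant lives in org.apache.activemq.artemis.api.jms.ActiveMQJMSConstants):
// Individual acknowledgement: message.acknowledge() acks only that message,
// not every message consumed so far on the session.
Session session = connection.createSession(false, ActiveMQJMSConstants.INDIVIDUAL_ACKNOWLEDGE);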
Also, basically the only JMS object that is really thread-safe is javax.jms.Connection so you can't use sessions, consumers, producers, etc. concurrently. However, in your code you're not actually using any of these concurrently.
That said, you're still just creating a single consumer here. I think you'd get even better performance and perhaps simpler code as well if you just created multiple sessions, consumers, listeners, etc. The client implementation will take care of invoking each of your listeners in their own thread as messages arrive so you won't have to manage any of the threads yourself.
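A minimal sketch of that multi-consumer approach, reusing the queueName and messageHandler from the question and combining it with Artemis's individual-acknowledge mode so each listener can safely acknowledge its own message:
private void createConsumers(Connection connection, String queueName, int consumerCount) throws JMSException {
    for (int i = 0; i < consumerCount; i++) {
        // one Session/MessageConsumer pair per listener; the client invokes
        // each listener on its own thread as messages arrive
        Session session = connection.createSession(false, ActiveMQJMSConstants.INDIVIDUAL_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        consumer.setMessageListener(message -> {
            try {
                messageHandler.handleMessage(message);
                message.acknowledge(); // acks only this message in this mode
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }
}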

Related

Spring RabbitMQ Retry policy only on a specific listener

I would like to have a retry policy only on a specific listener that listens to a specific queue (a DLQ in this specific case).
@RabbitListener(queues = "my_queue_dlq", concurrency = "5")
public void listenDLQ(Message dlqMessage) {
    // here implement logic for resending the message to the original queue (my_queue)
    // n times with a certain interval, and after n times... push to the Parking Lot Queue
}
But if I am not mistaken, when I specify a retry policy (for example in application.yaml), all @RabbitListeners use it.
I suppose the only way would be to create a dedicated container factory, but it would be identical to the default one with ONLY one extra retry policy ... and that doesn't seem like the best way to do it.
Something like that :
@RabbitListener(containerFactory = "container-factory-with-retrypolicy", queues = "myDLQ", concurrency = "5")
public void listenDLQ(Message dlqMessage) {
    // here implement logic for resending the message to the original queues n times with a certain interval
}
Do you see alternatives ?
Thank you in advance.
The ListenerContainer instances are registered with the RabbitListenerEndpointRegistry. You can obtain the desired one by its @RabbitListener(id) value. There you can get access to the MessageListener (casting it to AbstractAdaptableMessageListener) and set its retryTemplate property.
Or you can implement a ContainerCustomizer<AbstractMessageListenerContainer>, check its getListenerId() and do the same manipulation against its getMessageListener().
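A rough sketch of that customizer approach, assuming the listener was given id = "dlqListener" and a RetryTemplate bean exists (both names are hypothetical; this relies on the retryTemplate property mentioned above):
import org.springframework.amqp.rabbit.config.ContainerCustomizer;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener;
import org.springframework.context.annotation.Bean;
import org.springframework.retry.support.RetryTemplate;

@Bean
ContainerCustomizer<AbstractMessageListenerContainer> dlqRetryCustomizer(RetryTemplate retryTemplate) {
    return container -> {
        // only touch the DLQ listener; every other listener keeps the defaults
        if ("dlqListener".equals(container.getListenerId())) {
            Object listener = container.getMessageListener();
            if (listener instanceof AbstractAdaptableMessageListener) {
                ((AbstractAdaptableMessageListener) listener).setRetryTemplate(retryTemplate);
            }
        }
    };
}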

Stop consumption of message if it cannot be completed

I'm new to MassTransit and have a question regarding how I should handle a failure to consume a message. Given the code below, I am consuming INotificationRequestContract messages. As you can see, the code will break and not complete.
public class NotificationConsumerWorker : IConsumer<INotificationRequestContract>
{
    private readonly ILogger<NotificationConsumerWorker> _logger;
    private readonly INotificationCreator _notificationCreator;

    public NotificationConsumerWorker(ILogger<NotificationConsumerWorker> logger, INotificationCreator notificationCreator)
    {
        _logger = logger;
        _notificationCreator = notificationCreator;
    }

    public Task Consume(ConsumeContext<INotificationRequestContract> context)
    {
        try
        {
            throw new Exception("Horrible error");
        }
        catch (Exception e)
        {
            // >>>>> insert code here to put message back for later consumption. <<<<<
            _logger.LogError(e, "Failed to consume message");
            throw;
        }
    }
}
How do I best handle a scenario such as this where the consumption fails? In my specific case this is likely to occur if a required external service is unavailable.
I can see two solutions:
Put the message back, or cancel the consumption, so that it will be tried again later.
Store it locally in a database and wrap it in my own retry method (but I would prefer not to, for the sake of simplicity).
The exceptions section of the documentation provides sufficient guidance for dealing with consumer exceptions.
There are two retry approaches, which can be used in combination:
Message Retry, which waits while the message is locked, in-process, for the next retry. Therefore, these should be short, to deal with transient issues.
Message Redelivery, which delays the message using either the broker's delayed delivery or a message scheduler, so that it is redelivered to the receive endpoint at some point in the future.
Once all retry/redelivery attempts are exhausted, the message is moved to the _error queue.

How to tell RSocket to read a data stream via a Java 8 Stream backed by a blocking queue

I have the following scenario: my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream that is bound to the Flux will emit it. I have tried to implement this requirement as below, but the client doesn't receive any response, even though I can see the Stream supplier getting triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener:" + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(() -> Stream.generate(() -> streamValue(listenerQ)))
            .map(q -> {
                System.out.println("I got an event : " + q.getResult());
                return q;
            });
}
}
private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should come up with a clean solution to turn the BlockingQueue into a Flux first. Then the spring-boot part becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like the ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in How can I create reactor Flux from a blocking queue?
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        // take() blocks until an element arrives, then emit it to the Flux
        send(listenerQ.take())
    }
}.subscribeOn(Schedulers.elastic())
With an API like flux you should definitely avoid any side effects before the subscribe, so don't register your listener until inside the body of the method. But you will need to improve this example to handle cancellation, or however you cancel the listener and interrupt the thread doing the take.
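For reference, a hedged Java sketch of the same bridge using Flux.create, which also handles cancellation by interrupting the draining thread (class and method names are illustrative):
import java.util.concurrent.BlockingQueue;
import reactor.core.publisher.Flux;

public final class QueueBridge {
    // Bridge a BlockingQueue to a Flux: a background thread drains the queue
    // and feeds the sink; downstream cancellation interrupts the blocking take().
    public static <T> Flux<T> fromQueue(BlockingQueue<T> queue) {
        return Flux.create(sink -> {
            Thread drainer = new Thread(() -> {
                try {
                    while (!sink.isCancelled()) {
                        sink.next(queue.take()); // blocks until an element arrives
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // cancelled while blocked
                }
            });
            drainer.setDaemon(true);
            sink.onCancel(drainer::interrupt);
            drainer.start();
        });
    }
}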

Reading and removing Exchange from SEDA queue in Camel

Regarding the SEDA component in Camel: does anybody know whether a router removes the Exchange object from the queue when routing it? My router is working properly, but I'm afraid it keeps the Exchange objects in the queue, so my queue will grow continuously...
This is my router:
public class MyRouter extends RouteBuilder {
    @Override
    public void configure() {
        from("seda:input")
            .choice()
                .when(someValue)
                    .to("bean:someBean?method=whatever")
                .when(anotherValue)
                    .to("bean:anotherBean?method=whatever");
    }
}
If not, does anybody know how to remove the Exchange object from the queue once it has been routed or processed? (I am routing the messages to some beans in my application, and they are working correctly; the only problem is the queue.)
Another question is, what happens if my input Exchange does not match any of the choice conditions? Is it kept in the queue as well?
Thanks a lot in advance.
Edited: after reading Claus' answer, I have added the end() method to the router. But my problem persists, at least when testing the seda and the router together. I put some messages in the queue, mocking the endpoints (which are receiving the messages), but the queue is getting full every time I execute the test. Maybe I am missing something. This is my test:
@Test
public void test() throws Exception {
    setAdviceConditions(); // this method sets the advices for mocking the endpoints
    Message message = createMessage("text", "text", "text"); // body for the Exchange
    for (int i = 0; i < 10; i++) {
        template.sendBody("seda:aaa?size=10", message);
    }
    template.sendBody("seda:aaa?size=10", message); // java.lang.IllegalStateException: Queue full
}
Thanks!!
Edited again: after checking my router, I realised the problem: I was writing to a different endpoint than the one the router was reading from (facepalm).
Thank you Claus for your answer.
1)
Yes, when an Exchange is routed from a SEDA queue it is removed immediately. The code uses poll() to take the top message from the SEDA queue.
SEDA is in-memory based, so yes, the Exchanges are stored on the SEDA queue in memory. You can configure a queue size so the queue can only hold X messages. See the SEDA docs at: http://camel.apache.org/seda
There are also JMX operations with which you can purge the queue (i.e. empty it) from a management console.
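For example, bounding the queue via the size option mentioned above (the value 1000 is illustrative; once the queue is full, producers get the same IllegalStateException the question's test shows):
from("seda:input?size=1000")
    .to("bean:someBean?method=whatever");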
2)
When the choice has no predicates that matches, then nothing happens. You can have an otherwise to do some logic in these cases if you want.
Also note that you can continue the route after the choice, e.g.:
@Override
public void configure() {
    from("seda:input")
        .choice()
            .when(someValue)
                .to("bean:someBean?method=whatever")
            .when(anotherValue)
                .to("bean:anotherBean?method=whatever")
        .end()
        .to("bean:allGoesHere");
}
E.g. in the example above we have end() to indicate where the choice ends, so after that all the messages go there (also the ones that didn't match any predicates).
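A minimal sketch of the otherwise() case mentioned above (the unmatchedHandler bean name is hypothetical):
@Override
public void configure() {
    from("seda:input")
        .choice()
            .when(someValue)
                .to("bean:someBean?method=whatever")
            .otherwise()
                // receives every Exchange that matched no predicate
                .to("bean:unmatchedHandler?method=whatever")
        .end();
}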

Spring integration concurrency - detecting completion

I have a Spring Integration workflow that embeds task executors in its channels so as to enable concurrent processing. I manually fire off processing via a gateway and need to block the main thread until all asynchronous processes have completed. Is there a way to accomplish this? I have tried thinking along the lines of barriers, latches, and channel interceptors, but no solution is forthcoming. Any ideas, anyone?
Have a look at the Aggregator section from the reference manual:
http://static.springsource.org/spring-integration/docs/latest-ga/reference/htmlsingle/#aggregator
If an aggregator is downstream from the gateway, the gateway caller can block (or use a Future if that's defined as the return type on the gateway interface) until the aggregator has received and released the correlated group of messages, even if those were processed on different threads asynchronously.
Essentially the Aggregator is a barrier itself, and its default release-strategy is essentially a countdown-latch based on the sequence-size of the message group.
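For illustration, a hedged sketch of such a gateway (the channel name and payload types are hypothetical); calling get() on the returned Future blocks the caller until the aggregator releases the correlated group:
import java.util.List;
import java.util.concurrent.Future;
import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

@MessagingGateway
public interface WorkflowGateway {
    // Returning a Future makes the gateway asynchronous; the caller blocks
    // only when it invokes get() on the result.
    @Gateway(requestChannel = "workflowInput")
    Future<List<Object>> process(List<Object> items);
}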
Hope that helps.
-Mark
To answer my own question, here's what I ended up doing:
Create a customized ExecutorService that knows when to shut down - in my case this was simply when releasing the last active thread, i.e. after executing the last piece of the workflow:
public class WorkflowThreadPoolExecutor extends ScheduledThreadPoolExecutor {

    public WorkflowThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // the current thread is the only one still active, so the
        // last piece of the workflow has just finished executing
        if (getActiveCount() == 1) {
            shutdown();
        }
    }
}
Await executor termination in the main thread as follows:
try {
    executorService.awaitTermination(Integer.MAX_VALUE, TimeUnit.SECONDS);
} catch (InterruptedException ex) {
    LOG.error("message=Error awaiting termination of executor", ex);
}
Hope this helps someone else facing a similar issue.
