How to synchronize onMessage() of MessageListener - spring

I have a Spring 3 web application. I use org.springframework.jms.listener.DefaultMessageListenerContainer to configure my message listener, and I have an MDPOJO registered. What I want is that while onMessage() is executing for a particular request, the others wait until the first one finishes. In other words, onMessage() kicks off a further workflow that takes time to complete, and other messages in the queue should not be picked up by onMessage() until the previous request is confirmed complete.
Is it possible to synchronize the processing in onMessage()? I need the following:
Users will be posting any number of messages into the queue.
I should have an interface where a user can remove a message from the queue.
While one message is being processed, no other message should be picked up.
Users should be able to change the priority of message processing.

I could programmatically list the messages in the queue using the code below:
public void listAllJMS_Messages()
{
    try {
        // JMX name of the JBoss Messaging queue MBean
        ObjectName objectName = new ObjectName("jboss.messaging.destination:name=DLQ,service=Queue");
        @SuppressWarnings("unchecked")
        List<javax.jms.Message> messages =
                (List<javax.jms.Message>) server.invoke(objectName, "listAllMessages", null, null);
        int count = 0;
        for (javax.jms.Message msg : messages) {
            System.out.println((++count) + "\t" + msg.getJMSMessageID());
            if (msg.getJMSType() != null && msg.getJMSType().equalsIgnoreCase("Text")) {
                TextMessage text = (TextMessage) msg;
                System.out.println(text.getText());
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
But the above code only lists messages from queues that have no consumers so far. In my case an MDPOJO is the message consumer, and it processes messages synchronously. I still want to list the messages in the queue, so that a user can delete one if he wishes; the code above returns a null list once the queue has a consumer.
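One way to meet the one-message-at-a-time requirement is to restrict the DefaultMessageListenerContainer to a single consumer, so the container itself serializes onMessage() invocations and no explicit locking is needed. A minimal sketch (the queue name and the MessageListenerAdapter wiring are assumptions for illustration):

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);                  // your JMS ConnectionFactory
container.setDestinationName("requestQueue");                       // assumed queue name
container.setMessageListener(new MessageListenerAdapter(myMdPojo)); // wraps the MDPOJO
container.setConcurrentConsumers(1);                                // exactly one consumer thread
container.setMaxConcurrentConsumers(1);                             // never scale up

With this setup the container only fetches the next message after the listener has returned from the previous one.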

Related

What does "Too Many Messages without acknowledgement in topic" meaning for Quarkus consummer?

I see this message in my logs: Too Many Messages without acknowledgement in topic my-topic-retry ... amount 1 ... The connector cannot commit as a record processing has not completed.
Does this affect that specific topic/connector or does it affect all the topics/connectors registered in my Quarkus application? I have 3 topics configured with mp.messaging.incoming.[some-name-here].topic=some-topic-here. They are not connected to each other.
Can my code below somehow cause this issue?
How is one message that is not acked considered "too many" (see amount 1 above)?
#Incoming("my-topic-retry")
#Outgoing("my-topic-back")
public Uni<Message<MyRequest>> retry(Message<MyRequest> in) {
try {
// Check if the message should be reprocessed immediately or delay it.
if (delayTimeSecs == 0)
return Uni
.createFrom()
.item(in.addMetadata(metadataOut)
.withPayload(in.getPayload()));
else
return Uni
.createFrom()
.item(in.addMetadata(metadataOut)
.withPayload(in.getPayload()))
.onItem().delayIt()
.by(Duration.ofSeconds(delayTimeSecs)); // Setting is 300 seconds.
} catch(Exception ex) {
in.nack(new IllegalStateException("An error occurred while trying to process the retry.", ex));
return Uni.createFrom().nullItem();
}
}

Stop consumption of message if it cannot be completed

I'm new to MassTransit and have a question regarding how I should handle a failure to consume a message. Given the code below I am consuming INotificationRequestContract messages. As you can see, the code will throw and never complete.
public class NotificationConsumerWorker : IConsumer<INotificationRequestContract>
{
    private readonly ILogger<NotificationConsumerWorker> _logger;
    private readonly INotificationCreator _notificationCreator;

    public NotificationConsumerWorker(ILogger<NotificationConsumerWorker> logger, INotificationCreator notificationCreator)
    {
        _logger = logger;
        _notificationCreator = notificationCreator;
    }

    public Task Consume(ConsumeContext<INotificationRequestContract> context)
    {
        try
        {
            throw new Exception("Horrible error");
        }
        catch (Exception e)
        {
            // >>>>> insert code here to put message back for later consumption. <<<<<
            _logger.LogError(e, "Failed to consume message");
            throw;
        }
    }
}
How do I best handle a scenario such as this, where consumption fails? In my specific case it is likely to occur when a required external service is unavailable.
I can see two solutions:
Put the message back, or cancel the consumption, so that it will be tried again.
Store the message locally in a database and wrap it in my own retry mechanism (but I would prefer not to, for simplicity's sake).
The exceptions section of the documentation provides sufficient guidance for dealing with consumer exceptions.
There are two retry approaches, which can be used in combination:
Message Retry, which waits while the message is locked, in-process, for the next retry. Therefore, these should be short, to deal with transient issues.
Message Redelivery, which delays the message using either the broker delayed delivery, or a message scheduler, so that it is redelivered to the receive endpoint at some point in the future.
Once all retry/redelivery attempts are exhausted, the message is moved to the _error queue.

Is it safe to unsubscribe while consuming, once some condition is reached?

I want to stop subscribing to a queue while consuming.
But my ack mode is AcknowledgeMode.AUTO, so the container issues the ack/nack based on whether the listener returns normally or throws an exception.
So if I unsubscribe inside the consume method, the method then returns and the container tries to ack, but the consumer has already been unsubscribed. What would happen then, and is it safe to unsubscribe as follows?
unsubscribe way 1
DirectMessageListenerContainer container = getContainer();
container.setMessageListener(message -> {
    // do something with the message
    // if some condition is reached, unsubscribe
    if (reachEnd()) {
        container.removeQueueNames(message.getMessageProperties().getConsumerQueue());
    }
});
unsubscribe way 2
container.setMessageListener(new ChannelAwareMessageListener() {
    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        // do something with the message
        // if some condition is reached, unsubscribe
        if (reachEnd()) {
            channel.basicCancel(message.getMessageProperties().getConsumerTag());
        }
    }
});
I would do neither; stop the container instead. Either way causes the consumer to be cancelled.
You should call stop() on a new thread, not on the listener thread - calling it there would cause a deadlock.
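A minimal sketch of that advice, reusing the container and reachEnd() from the question (the hand-off to a new thread is the important part):

DirectMessageListenerContainer container = getContainer();
container.setMessageListener(message -> {
    // do something with the message
    if (reachEnd()) {
        // Stop on a fresh thread: the listener thread can then return normally,
        // the container acks the current message, and stop() cannot deadlock.
        new Thread(container::stop).start();
    }
});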

How to call the KafkaConsumer API from a partition assignor's implementation

I have implemented my own partition assignment strategy by extending RangeAssignor in my Spring Boot application.
I have overridden its subscriptionUserData method to add some user data. Whenever this data changes I want to trigger a partition rebalance by invoking the KafkaConsumer API below:
KafkaConsumer#enforceRebalance()
I am not sure how I can get hold of the KafkaConsumer object to invoke this API.
Please suggest a way to do this.
You can call the consumer.wakeup() method.
consumer.wakeup() is the only consumer method that is safe to call from a different thread. Calling wakeup will cause poll() to exit with a WakeupException, or, if consumer.wakeup() was called while the thread was not waiting on poll, the exception will be thrown on the next call to poll(). The WakeupException doesn't need to be handled, but before exiting the thread you must call consumer.close(). Closing the consumer will commit offsets if needed and will send the group coordinator a message that the consumer is leaving the group. The consumer coordinator will trigger rebalancing immediately.
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        System.out.println("Starting exit...");
        consumer.wakeup(); // 1
        try {
            mainThread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
});

...

Duration timeout = Duration.ofMillis(100);
try {
    // looping until ctrl-c, the shutdown hook will clean up on exit
    while (true) {
        ConsumerRecords<String, String> records = movingAvg.consumer.poll(timeout);
        System.out.println(System.currentTimeMillis() + "-- waiting for data...");
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s\n",
                    record.offset(), record.key(), record.value());
        }
        for (TopicPartition tp : consumer.assignment())
            System.out.println("Committing offset at position:" + consumer.position(tp));
        movingAvg.consumer.commitSync();
    }
} catch (WakeupException e) {
    // ignore for shutdown. // 2
} finally {
    consumer.close(); // 3
    System.out.println("Closed consumer and we are done");
}
1. The ShutdownHook runs in a separate thread, so the only safe action we can take is to call wakeup to break out of the poll loop.
2. Another thread calling wakeup will cause poll to throw a WakeupException. You'll want to catch the exception to make sure your application doesn't exit unexpectedly, but there is no need to do anything with it.
3. Before exiting the consumer, make sure you close it cleanly.
full example at:
https://github.com/gwenshap/kafka-examples/blob/master/SimpleMovingAvg/src/main/java/com/shapira/examples/newconsumer/simplemovingavg/SimpleMovingAvgNewConsumer.java

Consuming from Camel queue every x minutes

I'm attempting to implement a way to have my consumer receive messages from a queue every 30 minutes or so.
For context: messages build up in my error queue until x minutes have passed; then my route should consume all messages on the queue and 'sleep' until another 30 minutes have passed.
I'm not sure of the best way to implement this. I've tried Spring's @Scheduled, the Camel timer, etc., and none of it does what I'm hoping for. I've also tried a route policy, but had no luck getting the correct behaviour - it just seems to consume from the queue immediately.
Is a route policy the correct path, or is there something else to use?
The route that reads from the queue will always read any message as quickly as it can.
One thing you could do is start/stop or suspend the route that consumes the messages, so you'd have this sort of setup:
route 1: error_q_reader, which goes from("jms").
route 2: a timed route that fires every 20 mins
route 2 can use the control bus component to start route 1:
from("timer:check?period=20m") // or whatever the timer syntax is...
    .to("controlbus:route?routeId=route1&action=start");
The tricky part here is knowing when to stop the route. Do you let it run for 5 minutes? Do you want to stop it once all the messages are consumed? There's probably a way to run another route that checks the queue depth (say every minute or so) and shuts down route 1 when it reaches 0; you might get it to work, but I can assure you it will get messy as you try to deal with a number of async operations.
You could also try something more exotic, like a custom QueueBrowseStrategy that fires an event to shut down route 1 when there are no messages on the queue.
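For illustration, a rough sketch of the start/stop pairing described above (route ids, queue names and timings are assumptions, not a tested recipe):

// start the consumer route every 20 minutes
from("timer:startDrain?period=1200000")
    .to("controlbus:route?routeId=route1&action=start");

// stop it again 5 minutes after each start (offset via the initial delay)
from("timer:stopDrain?period=1200000&delay=300000")
    .to("controlbus:route?routeId=route1&action=stop");

// route 1: the actual queue consumer, not started automatically
from("jms:queue:errorQueue").routeId("route1").autoStartup(false)
    .to("direct:process");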
I built a custom bean to drain a queue and close, but it's not a very elegant solution, and I'd love to find a better one.
public class TriggeredPollingConsumer {

    private ConsumerTemplate consumer;
    private Endpoint consumerEndpoint;
    private String endpointUri;
    private ProducerTemplate producer;
    private static final Logger logger = Logger.getLogger(TriggeredPollingConsumer.class);

    public TriggeredPollingConsumer() {}

    public TriggeredPollingConsumer(ConsumerTemplate consumer, String endpoint, ProducerTemplate producer) {
        this.consumer = consumer;
        this.endpointUri = endpoint;
        this.producer = producer;
    }

    public void setConsumer(ConsumerTemplate consumer) {
        this.consumer = consumer;
    }

    public void setProducer(ProducerTemplate producer) {
        this.producer = producer;
    }

    public void setConsumerEndpoint(Endpoint endpoint) {
        consumerEndpoint = endpoint;
    }

    public void pollConsumer() throws Exception {
        long count = 0;
        try {
            if (consumerEndpoint == null) consumerEndpoint = consumer.getCamelContext().getEndpoint(endpointUri);
            logger.debug("Consuming: " + consumerEndpoint.getEndpointUri());
            consumer.start();
            producer.start();
            while (true) {
                logger.trace("Awaiting message: " + ++count);
                Exchange exchange = consumer.receive(consumerEndpoint, 60000);
                if (exchange == null) break;
                logger.trace("Processing message: " + count);
                producer.send(exchange);
                consumer.doneUoW(exchange);
                logger.trace("Processed message: " + count);
            }
            producer.stop();
            consumer.stop();
            logger.debug("Consumed " + (count - 1) + " message" + (count == 2 ? "." : "s."));
        } catch (Throwable t) {
            logger.error("Something went wrong!", t);
            throw t;
        }
    }
}
You configure the bean, and then call the bean method from your timer, and configure a direct route to process the entries from the queue.
from("timer:...")
.beanRef("consumerBean", "pollConsumer");
from("direct:myRoute")
.to(...);
It will then read everything in the queue and stop as soon as no entries arrive within a minute. You might want to reduce that minute, but I found that with one second, if JMS was a bit slow, it would time out halfway through draining the queue.
I've also been looking at the sjms-batch component and how it might be used with a pollEnrich pattern, but so far I haven't been able to get that to work.
I solved this by running my application as a CronJob in a microservices approach; to give it the power to shut itself down gracefully, you can set the property camel.springboot.duration-max-idle-seconds. That way your JMS consumer route stays simple.
Another approach would be to declare a route to control the "lifecycle" (start, sleep and resume) of your JMS consumer route.
I would strongly suggest that you use the first approach.
If you use ActiveMQ you can leverage its scheduler feature.
You can delay the delivery of a message on the broker simply by setting the JMS property AMQ_SCHEDULED_DELAY to the number of milliseconds to delay. That is very easy in a Camel route:
.setHeader("AMQ_SCHEDULED_DELAY", constant(60000))
It is not exactly what you're looking for, because it does not drain the queue every 30 minutes, but instead delays every individual message by 30 minutes.
Note that you have to enable schedulerSupport in your broker configuration; otherwise the delay properties are ignored.
<broker brokerName="localhost" dataDirectory="${activemq.data}" schedulerSupport="true">
...
</broker>
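For completeness, a hedged sketch of the whole hop (the queue names are assumptions): every message leaving the error queue is re-enqueued with a 30-minute broker-side delay instead of being drained in timed batches.

from("activemq:queue:errorQueue")
    .setHeader("AMQ_SCHEDULED_DELAY", constant(30 * 60 * 1000L)) // 30 minutes in ms
    .to("activemq:queue:retryQueue");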
You should consider the Aggregation EIP:
from(URI_WAITING_QUEUE)
    .aggregate(new GroupedExchangeAggregationStrategy())
    .constant(true)
    .completionInterval(TIMEOUT)
    .to(URI_PROCESSING_BATCH_OF_EXCEPTIONS);
This example expresses the following rules: all objects arriving on URI_WAITING_QUEUE are grouped into a List; constant(true) is the grouping condition (i.e. no condition - everything is grouped together); and every TIMEOUT period (in millis) the grouped objects are passed on to the URI_PROCESSING_BATCH_OF_EXCEPTIONS queue.
So the URI_PROCESSING_BATCH_OF_EXCEPTIONS queue deals with a List of objects to process. You can apply the Split EIP to split them and process them one by one, as sketched below.
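A minimal sketch of that splitting step (the per-message endpoint is an assumption; with GroupedExchangeAggregationStrategy the aggregated message body is the grouped List):

from(URI_PROCESSING_BATCH_OF_EXCEPTIONS)
    .split(body())                      // one element of the List at a time
    .to("direct:processSingleMessage"); // assumed per-message route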
