Specifying a timeout for reading messages from an ActiveMQ queue using Camel JMS

I am using Camel to read messages from an ActiveMQ queue, process them, and post them to another queue. The route looks as follows:
from("jms:incoming.queue")
    .process(new MyProcessor())
    .to("jms:outgoing.queue");
I need to specify a timeout such that if there are no messages in "incoming.queue" for more than 3 minutes, the route is stopped. I could use onCompletion(), but it gets called after each message. I can specify a timeout for sending messages to "outgoing.queue", but is there a way to specify a timeout such that if there are no messages in "incoming.queue" for more than 3 minutes, I can stop the route?
Thanks in advance for your help.

Two options I can think of...
Use a CronScheduledRoutePolicy to start/stop your route automatically at specified times:
CronScheduledRoutePolicy myPolicy = new CronScheduledRoutePolicy();
myPolicy.setRouteStartTime("0 20 * * * ?");
myPolicy.setRouteStopTime("0 0 * * * ?");

from("jms:incoming.queue")
    .routePolicy(myPolicy).noAutoStartup()
    .process(new MyProcessor())
    .to("jms:outgoing.queue");
Or use a camel-quartz route and a polling consumer to drain the queue on a schedule:
MyCoolBean cool = new MyCoolBean();
cool.setProducer(template);
cool.setConsumer(consumer);

from("quartz://myGroup/myTimerName?cron=0+20+*+*+*+?")
    .bean(cool);

// MyCoolBean snippet
while (true) {
    // receive a message from the queue, waiting at most 60s
    Object msg = consumer.receiveBody("jms:incoming.queue", 60000);
    if (msg == null) {
        // no message within the timeout: queue is drained, stop polling
        break;
    }
    producer.sendBody("jms:outgoing.queue", msg);
}

Based on your comment above, it appears you are just looking to start and stop the route on a schedule. You can use a Quartz job to call the start and stop methods on your JMS route. You could even make the Quartz logic a route as well, using the quartz endpoint, if you like.
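The drain-until-idle pattern from the second option can be sketched in plain Java. This is a minimal sketch, not the Camel API: a BlockingQueue stands in for the Camel ConsumerTemplate, and the 3-minute idle window from the question becomes the poll timeout.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class DrainUntilIdle {
    /**
     * Polls the incoming queue until no message arrives within idleTimeoutMs,
     * forwarding each message to the outgoing sink -- the same shape as
     * ConsumerTemplate.receiveBody(uri, timeout) returning null on timeout.
     */
    static int drain(BlockingQueue<String> incoming,
                     List<String> outgoing,
                     long idleTimeoutMs) throws InterruptedException {
        int forwarded = 0;
        while (true) {
            String msg = incoming.poll(idleTimeoutMs, TimeUnit.MILLISECONDS);
            if (msg == null) {
                break; // idle for the full timeout: stop consuming
            }
            outgoing.add(msg);
            forwarded++;
        }
        return forwarded;
    }
}
```

The key point the answer relies on is that a null result from a timed receive is a reliable "queue went idle" signal, so no separate queue-depth check is needed.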

Related

Is there a way to find out if the queue is empty using rabbit-template

I have a subscriber which collects messages until it reaches a specified limit and then passes the collected messages to the processor to perform some operations. The code works fine; the problem is that the subscriber waits until it has collected the specified number of messages, so if there are fewer messages, program control never passes to the processor.
For example, let's say my chunk size is 100. If I have 100 messages, or a multiple of 100, the program works fine. But if I have fewer than 100 messages, or say 150, some of the messages are read by the subscriber but never passed to the processor. Is there a way I can figure out whether the queue is empty using RabbitTemplate, so that I can check that condition and break the loop?
@RabbitListener(id = "messageListener", queues = "#{rabbitMqConfig.getSubscriberQueueName()}", containerFactory = "queueListenerContainer")
public void receiveMessage(Message message, Channel channel, @Header("id") String messageId,
        @Header("amqp_deliveryTag") Long deliveryTag) {
    LOGGER.info(" Message:" + message.toString());
    if (messageList.size() < appConfig.getSubscriberChunkSize()) {
        messageList.add(message);
        deliveryTagList.add(deliveryTag);
        if (messageList.size() == appConfig.getSubscriberChunkSize()) {
            LOGGER.info("------------- Calling Message processor --------------");
            Message[] messageArry = new Message[messageList.size()];
            messageArry = messageList.toArray(messageArry);
            LOGGER.info("message Array Length: " + messageArry.length);
            messageProcessor.process(messageArry);
            messageList = new ArrayList<Message>(Arrays.asList(messageArry));
            LOGGER.info("message Array to List conversion Size: " + messageList.size());
            LOGGER.info("-------------- Completed Message processor -----------");
            eppQ2Publisher.sendMessages(messageList, channel, deliveryTagList);
            messageList.clear();
            deliveryTagList.clear();
        }
    } else {
        // do nothing..
    }
}
There are two ways to achieve this.
Add an @EventListener to listen for ListenerContainerIdleEvents, which are published when no messages have been received for some time; set the container's idleEventInterval property. The source of the event is the listener container; it contains the @RabbitListener's id. See Detecting Idle Consumers.
Use RabbitAdmin.getQueueProperties().
You can use RabbitAdmin.getQueueInfo("queue name").getMessageCount(), which will be 0 for an empty queue.
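The idle-event approach can be sketched in plain Java. This is a minimal, framework-free sketch (a BlockingQueue stands in for the listener container): a partial chunk is flushed when no new message arrives within an idle timeout, which is exactly what fixes the "fewer than chunk-size messages are never processed" problem.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ChunkBuffer {
    /**
     * Collects messages into chunks of chunkSize, but also flushes a partial
     * chunk when no message arrives within idleTimeoutMs -- mirroring the
     * idle-event idea: a timeout, not an empty-queue check, triggers the flush.
     */
    static List<List<String>> collect(BlockingQueue<String> source,
                                      int chunkSize,
                                      long idleTimeoutMs) throws InterruptedException {
        List<List<String>> chunks = new ArrayList<>();
        List<String> buffer = new ArrayList<>();
        while (true) {
            String msg = source.poll(idleTimeoutMs, TimeUnit.MILLISECONDS);
            if (msg == null) {                  // idle: flush the leftovers and stop
                if (!buffer.isEmpty()) {
                    chunks.add(new ArrayList<>(buffer));
                }
                return chunks;
            }
            buffer.add(msg);
            if (buffer.size() == chunkSize) {   // full chunk: flush immediately
                chunks.add(new ArrayList<>(buffer));
                buffer.clear();
            }
        }
    }
}
```

In Spring AMQP the same shape falls out naturally: the @RabbitListener fills the buffer, and the @EventListener for ListenerContainerIdleEvent performs the partial flush.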

Scheduling camel route for google-pubsub component

Apache Camel CronScheduledRoutePolicy not stopping route?
I am trying to start and stop my route, which reads from the google-pubsub component and pushes to a JDBC datasource (Oracle). I want to do this only at certain times of day, as the Oracle database would be down from, say, 10 PM to 12 AM every night, during which time I don't want my route to keep processing incoming Pub/Sub messages; I want it to stop. But when I try it, at stop time the route says:
'{"severity":"WARN","message":"o.a.c.r.q.ScheduledRoutePolicy | Route is not in a started/suspended state and cannot be stopped. The current route state is Stopped"}'
@Override
public void configure() {
    CronScheduledRoutePolicy routePolicy = new CronScheduledRoutePolicy();
    routePolicy.setRouteStartTime("0 15 00 * * ?");
    routePolicy.setRouteStopTime("0 00 22 * * ?");
    System.out.println("am here!!");

    onException(Exception.class)
        .log(LoggingLevel.ERROR, " Error processing message: ${header['CamelGooglePubsub.MessageId']} : ${exception}")
        .to("log:app_error.log?level=DEBUG&showAll=true&showException=true")
        .markRollbackOnlyLast()
        .end();

    CamelContext camelContext = getContext();
    System.out.println("Route Status is " + camelContext.getRouteStatus("{{routeID}}"));

    from("google-pubsub:{{google_project_name}}:{{google_pubsub_subscription}}" +
            "?concurrentConsumers={{concurrent_consumers}}" +
            "&maxMessagesPerPoll={{max_messages_per_poll}}" +
            "&connectionFactory=#googlePubsubConnectionFactory")
        .routeId("{{routeID}}")
        .routePolicy(routePolicy)
        .noAutoStartup();
I wanted the route to start at, say, 12:15 AM:
routePolicy.setRouteStartTime("0 15 00 * * ?");
and stop at 10 PM:
routePolicy.setRouteStopTime("0 00 22 * * ?");
Am I doing this right, or should I use process() on the route to stop it forcefully? How can I do that?
Thank you
After going through the comments, I think this is more of a Spring Boot question than a Camel question.
I don't think the Camel cron component alone can do the job. You may consider calling a service from outside to shut down the whole Spring Boot application, e.g. by running a cron script from your Docker image that sends a shutdown command to the Spring Boot application.
Here is an example you can use as a reference.
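Whichever mechanism triggers the start/stop, the desired run window (00:15 to 22:00 in the question) reduces to a simple time check that a scheduled job could evaluate before acting on the route. A minimal, framework-free sketch of that window logic:

```java
import java.time.LocalTime;

public class RunWindow {
    static final LocalTime START = LocalTime.of(0, 15);  // 12:15 AM
    static final LocalTime STOP  = LocalTime.of(22, 0);  // 10:00 PM, Oracle goes down

    /** Returns true if the route should be running at the given time. */
    static boolean shouldRun(LocalTime now) {
        return !now.isBefore(START) && now.isBefore(STOP);
    }
}
```

A scheduled task could call shouldRun(LocalTime.now()) and start or stop the route (or the whole application) accordingly; the actual start/stop call depends on your setup and is not shown here.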

How to continuously read JMS messages in a thread and acknowledge them based on their JMSMessageID in another thread?

I've written a continuous JMS message receiver:
Here, I'm using CLIENT_ACKNOWLEDGE because I don't want this thread to acknowledge the messages.
(...)
connection.start();
// non-transacted session (first argument must be false, otherwise the session
// is transacted and the acknowledge mode is ignored)
session = connection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
queue = session.createQueue(QueueId);
receiver = session.createReceiver(queue);

while (true) {
    message = receiver.receive(1000);
    if (message != null) {
        // NB: I can only pass Strings to the other thread
        sendMessageToOtherThread(message.getText(), message.getJMSMessageID());
    }
    // TODO Implement criteria to exit the loop here
}
In another thread, I'll do something as follows (after successful processing):
This is in a distinct JMS connection, executed simultaneously.
public void AcknowledgeMessage(String messageId) {
    if (this.first) {
        this.connection.start();
        this.session = this.connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        this.queue = this.session.createQueue(this.QueueId);
    }
    QueueReceiver receiver = this.session.createReceiver(this.queue, "JMSMessageID='" + messageId + "'");
    Message AckMessage = receiver.receive(2000);
    receiver.close();
}
It appears that the message is not found (AckMessage is null after the timeout) even though it does exist in the queue.
I suspect the message is being held by the continuous input thread; indeed, when firing AcknowledgeMessage() alone, it works fine.
Is there a cleaner way to retrieve one message based on its queue ID and message ID?
Also, I feel there could be a risk of a memory leak in the continuous reader if it has to hold on to the messages or IDs for a long time. Is that concern justified?
If I use a QueueBrowser to avoid impacting the acknowledge thread, it looks like I cannot have this continuous input feed, right?
More context: I'm using ActiveMQ, and the two threads are two custom "Steps" of a Pentaho Kettle transformation.
NB: Code samples are simplified to focus on the issue.
Well, you can't read that message twice, since you have already read it in the first thread.
ActiveMQ will not delete the message, as you have not acknowledged it, but it won't be visible until you drop the JMS connection (I'm not sure if there is a long timeout here as well in ActiveMQ).
So you will have to use the original message and do message.acknowledge();.
Note, however, that sessions are not thread safe, so be careful if you do this in two different threads.
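One way to respect the "sessions are not thread safe" constraint is to keep all session access on the reader thread and have the worker thread merely post acknowledgment requests back. A minimal sketch of that hand-off, with a hypothetical FakeMessage standing in for javax.jms.Message (the real code would keep the received Message objects, not just their IDs):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class AckHandoff {
    /** Stand-in for javax.jms.Message; acknowledge() must run on the session's thread. */
    static class FakeMessage {
        final String id;
        volatile boolean acknowledged;
        FakeMessage(String id) { this.id = id; }
        void acknowledge() { acknowledged = true; }
    }

    // messages awaiting acknowledgment, keyed by JMSMessageID (reader-owned)
    final Map<String, FakeMessage> pending = new ConcurrentHashMap<>();
    // IDs the worker thread has finished processing
    final BlockingQueue<String> ackRequests = new LinkedBlockingQueue<>();

    /** Called on the reader thread between receives: drain pending ack requests. */
    void drainAcks() {
        String id;
        while ((id = ackRequests.poll()) != null) {
            FakeMessage m = pending.remove(id);
            if (m != null) {
                m.acknowledge();   // session is only ever touched by the reader thread
            }
        }
    }
}
```

Removing acknowledged entries from the pending map also bounds the memory held by the reader, addressing the memory-leak worry in the question.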

Issue or confusion with JMS/spring/AMQ not processing messages asynchronously

We have a situation where we set up a component to run batch jobs using Spring Batch remotely. We send a JMS message with the job XML path, name, parameters, etc., and we wait on the calling batch client for a response from the server.
The server reads the queue and calls the appropriate method to run the job and return the result, which our messaging framework does by:
this.jmsTemplate.send(queueName, messageCreator);
this.LOGGER.debug("Message sent to '" + queueName + "'");
try {
    final Destination replyTo = messageCreator.getReplyTo();
    final String correlationId = messageCreator.getMessageId();
    this.LOGGER.debug("Waiting for the response '" + correlationId + "' back on '" + replyTo + "' ...");
    final BytesMessage message = (BytesMessage) this.jmsTemplate.receiveSelected(replyTo,
            "JMSCorrelationID='" + correlationId + "'");
    this.LOGGER.debug("Response received");
Ideally, we want to be able to call our runJobSync method twice and have two jobs operate simultaneously. We have a unit test that does something similar, without jobs. I realize this code isn't very great, but here it is:
final List<String> result = Collections.synchronizedList(new ArrayList<String>());

Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(1000);
        result.add(Thread.currentThread().getName());
    }
}, "thread1");

Thread thread2 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(500);
        result.add(Thread.currentThread().getName());
    }
}, "thread2");

thread1.start();
Thread.sleep(250);
thread2.start();
thread1.join();
thread2.join();

Assert.assertEquals("both threads finished", 2, result.size());
Assert.assertEquals("thread2 finished first", "thread2", result.get(0));
Assert.assertEquals("thread1 finished second", "thread1", result.get(1));
When we run that test, thread 2 completes first since it just has a 500 millisecond wait, while thread 1 does a 1 second wait:
Thread.sleep(delayInMs);
return result;
That works great.
When we run two remote jobs in the wild, one which takes about 50 seconds to complete and one which is designed to fail immediately and return, this does not happen.
We start the 50 second job, then immediately start the instant-fail job. The client prints that we sent a message requesting that the job run, and the server prints that it received the 50 second request, but it waits until that 50 second job is completed before handling the second message at all, even though we use a ThreadPoolExecutor.
We are running transactional with Auto acknowledge.
Doing some remote debugging, the Consumer from AbstractPollingMessageListenerContainer shows no unhandled messages (so consumer.receive() obviously just returns null over and over). The web GUI for the AMQ broker shows 2 enqueues, 1 dequeue, 1 dispatched, and 1 in the dispatched queue. This suggests to me that something is preventing AMQ from letting the consumer "have" the second message. (Prefetch is 1000, btw.)
This shows as the only consumer for the particular queue.
A few other developers and I have poked around for the last few days and are pretty much getting nowhere. Any suggestions on either what we have misconfigured (if this is expected behavior) or what could be broken here?
Does the method that is being remotely called matter at all? Currently the job handler method uses an executor to run the job in a different thread and does a future.get() (the extra thread is for reasons related to logging).
Any help is greatly appreciated
Not sure I follow completely, but off the top, you should try the following:
set concurrentConsumers/maxConcurrentConsumers greater than the default (1) on the MessageListenerContainer
set the prefetch to 0 to better promote balancing messages between consumers, etc.
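The effect of raising the consumer count can be illustrated with a framework-free sketch (this is only a simulation, not Spring's listener container): with a single consumer, a slow message blocks everything behind it; with two or more, the fast message completes independently, which is exactly the behavior missing in the 50-second-job scenario above.

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentConsumers {
    /**
     * Drains the queue with nConsumers concurrent workers. Each entry is a
     * {name, simulated processing time in ms} pair; the returned list records
     * completion order.
     */
    static List<String> drain(Queue<String[]> work, int nConsumers) throws InterruptedException {
        List<String> completionOrder = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(nConsumers);
        for (int i = 0; i < nConsumers; i++) {
            pool.submit(() -> {
                String[] msg;
                while ((msg = work.poll()) != null) {
                    try {
                        Thread.sleep(Long.parseLong(msg[1])); // simulated job duration
                    } catch (InterruptedException e) {
                        return;
                    }
                    completionOrder.add(msg[0]);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return completionOrder;
    }
}
```

With nConsumers = 1 the slow job always completes first; with 2, the instant-fail job finishes while the slow one is still running.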

Azure Worker: Read a message from the Azure queue in a mutex way

The run method of my worker role is:
public override void Run()
{
    Message msg = null;
    while (true)
    {
        msg = queue.GetMessage();
        if (msg != null && msg.DequeueCount == 1)
        {
            // delete message
            ...
            // execute operations
            ...
        }
        else if (msg != null && msg.DequeueCount > 1)
        {
            // delete message
            ...
        }
        else
        {
            int randomTime = ...
            Thread.Sleep(randomTime);
        }
    }
}
For my performance tests I would like a message to be analysed by only one worker (I'm not considering worker failures here).
But it seems from my tests that two workers can pick up the same message and both read DequeueCount equal to 1. Is that possible?
Is there a way to allow just one worker to read a message, in a "mutex" way?
How is your "getAMessage(queue)" method defined? If you do PeekMessage(), a message will be visible to all workers. If you do GetMessage(), the message will be received only by the worker that gets it first, but only for the invisibility timeout, either the one specified or the default (30 sec.). You have to delete the message before the invisibility timeout expires.
Check out the Queue Service API for more information. I am sure there is something wrong in your code. I use queues and they behave as documented, in both dev storage and production storage. You may want to explicitly set a higher visibility timeout when you do GetMessage, and make sure you do not sleep longer than the visibility timeout.
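The GetMessage semantics described above can be modeled with a small in-memory sketch (this is not the Azure SDK, just an illustration of the visibility-timeout behavior): once one worker retrieves a message, it is hidden from all other workers until the timeout elapses, and DequeueCount increments on each retrieval.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class VisibilityQueue {
    static class Msg {
        final String body;
        int dequeueCount;
        long invisibleUntil;   // millis timestamp; 0 = currently visible
        Msg(String body) { this.body = body; }
    }

    private final Deque<Msg> messages = new ArrayDeque<>();

    void add(String body) { messages.add(new Msg(body)); }

    /**
     * Returns the first visible message, hiding it for visibilityTimeoutMs.
     * A second caller within the timeout gets null -- this is what makes
     * GetMessage (unlike PeekMessage) effectively mutually exclusive.
     */
    Msg getMessage(long now, long visibilityTimeoutMs) {
        for (Msg m : messages) {
            if (m.invisibleUntil <= now) {
                m.invisibleUntil = now + visibilityTimeoutMs;
                m.dequeueCount++;
                return m;
            }
        }
        return null;
    }
}
```

So two workers both seeing DequeueCount == 1 should only be possible if the first worker's visibility timeout expired before it deleted the message... except that the count would then be 2, which is why the answer suspects the retrieval code rather than the queue.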
