How to continuously read JMS Messages in a thread and acknowledge them based on their JMSMessageID in another thread? - jms

I've written a continuous JMS message receiver:
Here, I'm using CLIENT_ACKNOWLEDGE because I don't want this thread to acknowledge the messages.
(...)
connection.start();
session = connection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
queue = session.createQueue(QueueId);
receiver = session.createReceiver(queue);
while (true) {
    message = receiver.receive(1000);
    if (message != null) {
        // NB: I can only pass Strings to the other thread
        sendMessageToOtherThread(((TextMessage) message).getText(), message.getJMSMessageID());
    }
    // TODO Implement criteria to exit the loop here
}
In another thread, I do something like the following (after successful processing).
This runs in a distinct JMS connection, executing simultaneously:
public void AcknowledgeMessage(String messageId) {
    if (this.first) {
        this.connection.start();
        this.session = this.connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        this.queue = this.session.createQueue(this.QueueId);
    }
    QueueReceiver receiver = this.session.createReceiver(this.queue, "JMSMessageID='" + messageId + "'");
    Message AckMessage = receiver.receive(2000);
    receiver.close();
}
It appears that the message is not found (AckMessage is null after the timeout) even though it does exist in the queue.
I suspect the message is being held by the continuous input thread; indeed, when I fire AcknowledgeMessage() on its own, it works fine.
Is there a cleaner way to retrieve a single message based on its queue ID and message ID?
Also, I feel there could be a risk of a memory leak in the continuous reader if it has to hold on to the messages or IDs for a long time. Is that concern justified?
If I use a QueueBrowser to avoid impacting the acknowledge thread, it looks like I cannot have this continuous input feed. Right?
More context: I'm using ActiveMQ, and the two threads are two custom "Steps" of a Pentaho Kettle transformation.
NB: Code samples are simplified to focus on the issue.

Well, you can't read that message twice, since you have already read it in the first thread.
ActiveMQ will not delete the message as you have not acknowledged it, but it won't be visible to other consumers until you drop the JMS connection (I'm not sure if there is a long timeout here as well in ActiveMQ).
So you will have to use the original message and call message.acknowledge() on it.
Note, however, that sessions are not thread safe, so be careful if you do this in two different threads.
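To make that concrete, here is a minimal sketch of one way to keep all session work on the receiving thread: the processing thread hands back only the String message ID, and the receiver thread, which owns the Session, looks up the original Message and acknowledges it. The class and helper names (AckOnReceiverThread, requestAcknowledge, sendMessageToOtherThread) are made up for illustration, not taken from your code.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.TextMessage;

public class AckOnReceiverThread {

    // The processing thread only ever hands back plain String message IDs.
    private final BlockingQueue<String> ackRequests = new LinkedBlockingQueue<>();

    // Touched only by the receiver thread, which also owns the Session.
    private final Map<String, Message> pendingAcks = new HashMap<>();

    // Called from the processing thread after successful processing.
    public void requestAcknowledge(String messageId) {
        ackRequests.offer(messageId);
    }

    // Runs on the receiver thread (the one that created the Session and receiver).
    public void receiveLoop(MessageConsumer receiver) throws JMSException {
        while (true) {
            Message message = receiver.receive(1000);
            if (message != null) {
                pendingAcks.put(message.getJMSMessageID(), message);
                sendMessageToOtherThread(((TextMessage) message).getText(), message.getJMSMessageID());
            }
            // Acknowledge on THIS thread, using the original Message objects.
            String doneId;
            while ((doneId = ackRequests.poll()) != null) {
                Message done = pendingAcks.remove(doneId);
                if (done != null) {
                    done.acknowledge(); // with CLIENT_ACKNOWLEDGE this also acks earlier messages on the session
                }
            }
            // TODO: bound or expire pendingAcks so it cannot grow forever (the memory concern above is real)
        }
    }

    private void sendMessageToOtherThread(String body, String messageId) {
        // hand off to the processing step (placeholder)
    }
}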

Related

how to read messages by packet with JmsTemplate of Spring

I get an out-of-memory error while reading messages from a queue with 2M messages,
and I am trying to find a way to read the messages in batches of 1000, for example.
Here is my code:
List<TextMessage> messages = jmsTemplate.browse(JndiQueues.BACKOUT, (session, browser) -> {
    Enumeration<?> browserEnumeration = browser.getEnumeration();
    List<TextMessage> messageList = new ArrayList<TextMessage>();
    while (browserEnumeration.hasMoreElements()) {
        messageList.add((TextMessage) browserEnumeration.nextElement());
    }
    return messageList;
});
thanks
Perform the browse on a different thread and pass the result subsets to the main thread via a BlockingQueue<List<TextMessage>>, e.g. a LinkedBlockingQueue with a small capacity.
The browsing thread will block when the queue is full. When the main thread removes an entry from the queue, the browser can add a new one.
It probably makes sense to have a capacity of at least 2 so that the next list is available immediately.
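As a rough sketch of that idea (the class and method names, the chunk size handling, and the queue capacity of 2 are illustrative, not a drop-in implementation):

import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.TextMessage;
import org.springframework.jms.core.JmsTemplate;

public class ChunkedBrowser {

    private static final List<TextMessage> DONE = new ArrayList<>();            // end-of-stream marker
    private final BlockingQueue<List<TextMessage>> chunks = new LinkedBlockingQueue<>(2);

    // Browses the queue on a separate thread, publishing chunks of at most chunkSize messages.
    public void startBrowsing(JmsTemplate jmsTemplate, String queueName, int chunkSize) {
        Thread browserThread = new Thread(() -> {
            jmsTemplate.browse(queueName, (session, browser) -> {
                Enumeration<?> messages = browser.getEnumeration();
                List<TextMessage> chunk = new ArrayList<>(chunkSize);
                while (messages.hasMoreElements()) {
                    chunk.add((TextMessage) messages.nextElement());
                    if (chunk.size() == chunkSize) {
                        putChunk(chunk);                 // blocks while the consumer is behind
                        chunk = new ArrayList<>(chunkSize);
                    }
                }
                if (!chunk.isEmpty()) {
                    putChunk(chunk);
                }
                putChunk(DONE);
                return null;
            });
        }, "queue-browser");
        browserThread.start();
    }

    // Called by the consuming thread; returns null when browsing has finished.
    public List<TextMessage> nextChunk() throws InterruptedException {
        List<TextMessage> chunk = chunks.take();
        return chunk == DONE ? null : chunk;
    }

    private void putChunk(List<TextMessage> chunk) {
        try {
            chunks.put(chunk);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("Interrupted while handing over a chunk", e);
        }
    }
}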

Is there a way to find out whether the queue is empty using RabbitTemplate

I have a subscriber which collects messages until it reaches a specified limit and then passes the collected messages to the processor to perform some operations. The code works fine; the problem is that the subscriber waits until it has collected the specified number of messages. If there are fewer messages, control never passes to the processor.
For example, let's say my chunk size is 100. If I have 100 messages, or a multiple of 100, the program works fine. But if I have fewer than 100 messages, or 150, some of the messages are read by the subscriber but never passed to the processor. Is there a way I can figure out whether the queue is empty using RabbitTemplate, so that I can check that condition and break the loop?
@RabbitListener(id = "messageListener", queues = "#{rabbitMqConfig.getSubscriberQueueName()}", containerFactory = "queueListenerContainer")
public void receiveMessage(Message message, Channel channel, @Header("id") String messageId,
        @Header("amqp_deliveryTag") Long deliveryTag) {
    LOGGER.info(" Message:" + message.toString());
    if (messageList.size() < appConfig.getSubscriberChunkSize()) {
        messageList.add(message);
        deliveryTagList.add(deliveryTag);
        if (messageList.size() == appConfig.getSubscriberChunkSize()) {
            LOGGER.info("------------- Calling Message processor --------------");
            Message[] messageArry = new Message[messageList.size()];
            messageArry = messageList.toArray(messageArry);
            LOGGER.info("message Array Length: " + messageArry.length);
            messageProcessor.process(messageArry);
            messageList = new ArrayList<Message>(Arrays.asList(messageArry));
            LOGGER.info("message Array to List conversion Size: " + messageList.size());
            LOGGER.info("-------------- Completed Message processor -----------");
            eppQ2Publisher.sendMessages(messageList, channel, deliveryTagList);
            messageList.clear();
            deliveryTagList.clear();
        }
    } else {
        // do nothing..
    }
}
There are two ways to achieve this.
Add an @EventListener to listen for ListenerContainerIdleEvents, which are published when no messages have been received for some time; set the container's idleEventInterval property. The source of the event is the listener container; it contains the @RabbitListener's id. See Detecting Idle Consumers.
Use RabbitAdmin.getQueueProperties().
You can use RabbitAdmin.getQueueInfo("queue name").getMessageCount() that will be 0 for empty queue.
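A rough sketch of the idle-event approach is below; the queue name, the listener-id check, and the "flush" step are placeholders for your own beans, and idleEventInterval has to be configured on the listener container factory for the event to be published at all.

import org.springframework.amqp.core.QueueInformation;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.listener.ListenerContainerIdleEvent;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class PartialChunkFlusher {

    @Autowired
    private RabbitAdmin rabbitAdmin;

    // Fired when the container has been idle for idleEventInterval milliseconds.
    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        if ("messageListener".equals(event.getListenerId())) {
            // Optionally double-check the backlog before flushing a partial chunk.
            QueueInformation info = rabbitAdmin.getQueueInfo("my.subscriber.queue"); // placeholder name
            if (info == null || info.getMessageCount() == 0) {
                // Flush whatever is currently collected in messageList/deliveryTagList:
                // call messageProcessor.process(...), publish, ack, then clear the lists.
            }
        }
    }
}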

NetMQ PUSH socket blocks indefinitely when it reaches HWM

I'm using NetMQ (Nuget 3.3.2.2) on .NET 4.5 and I have a single fast generator process with a PUSH socket, and a single slow consumer process using a PULL socket. If I send enough messages to hit the sending HWM, the sending process blocks the thread indefinitely.
Some contrived (generator) code which illustrates the problem:
using (var ctx = NetMQContext.Create())
using (var pushSocket = ctx.CreatePushSocket())
{
    pushSocket.Connect("tcp://127.0.0.1:42404");
    var template = GenerateMessageBody(1);
    for (int i = 1; i <= 100000; i++)
    {
        pushSocket.SendMoreFrame("SampleMessage").SendFrame(Messages.SerializeToByteArray(template));
        if (i % 1000 == 0)
            Console.WriteLine("Sent " + i + " messages");
    }
    Console.WriteLine("All finished");
    Console.ReadKey();
}
On my configuration, this will usually report that it has sent about 5000 or 6000 messages, and will then simply block. If I set the send HWM to a large value (or 0), then it sends all of the messages as expected.
It looks like it's waiting to receive another command before it tries again, here: (SocketBase.TrySend)
// Oops, we couldn't send the message. Wait for the next
// command, process it and try to send the message again.
// If timeout is reached in the meantime, return EAGAIN.
while (true)
{
ProcessCommands(timeoutMillis, false);
From what I've read in the 0MQ guide, blocking on a full PUSH socket is the correct behaviour (and is what I want it to do); however, I would expect it to recover once the consumer has cleared its queue.
Short of using some sort of TrySend pattern and dealing with the block myself, is there some option I can set or some other facility I can use to have the PUSH socket attempt to resend blocked messages periodically?

HornetQ JMSBridge acknowledge

I have been learning the HornetQ code recently, and I have a question about the JMS bridge.
As you can see, there is a function named sendMessages() in JMSBridgeImpl.java. It sends messages to the remote JMS server, but without acknowledging them.
But in the function named sendBatchNonTransacted(), only the last message is acknowledged, as in messages.getLast().acknowledge();.
So the question is: why not acknowledge each message inside sendMessages()?
Apologies, my English is poor.
I'm online waiting for your help! Thank you!
--------------------------------------------
oh, thanks "Moj Far" very much for the frist question, i got it.
but i have a other question: i modifid the hornetQ source codes,
i want to use ClientConsumer(have successful init) to get msg in the local HornetQ
and use JMS producer to send msg to the remote JMSServer.
In the "Run" function of the "SourceReceiver" class, i modyfy as this:
if (bridgeType == JMSBridgeImpl.ALL_NETTY_MODE) {
    msg = sourceConsumer.receive(1000);
} else { /* core client receive msg */
    cmsg = localClientConsumer.receive(1000);
    if (cmsg != null) {
        hq_msg = HornetQMessage.createMessage(cmsg, localClientSession);
        //hq_msg = HornetQMessage.createMessage(cmsg, null);
        hq_msg.doBeforeReceive();
        //cmsg.acknowledge(); do ack after send
        msg = hq_msg;
    }
}
But after 2 hours of running, the JVM runs out of memory.
I also tried not using localClientSession to create the HornetQMessage, but memory still overflows.
Is there something wrong with my code?
We have two ways to guarantee delivery of sent messages:
Transactional (used in sendBatchLocalTx() and sendBatchXA())
Non-transactional, i.e. acknowledgement (used in sendBatchNonTransacted())
All of sendBatchLocalTx(), sendBatchXA(), and sendBatchNonTransacted() use sendMessages() for sending messages, and then guarantee delivery in their own way (committing a transaction or acknowledging messages).
sendBatchNonTransacted() uses client acknowledgement mode; in this mode, acknowledging a message (here, the last message) acknowledges all messages consumed so far in the current session.
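To illustrate the semantics this relies on, here is a small standalone plain-JMS sketch (not bridge code; the method and queue names are made up): in CLIENT_ACKNOWLEDGE mode, acknowledging the last message of a batch acknowledges everything consumed on that session, so one acknowledge() call per batch is enough.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public final class ClientAckBatchExample {

    // Receives up to batchSize messages and acknowledges them all with one call.
    public static void drainBatch(Connection connection, String queueName, int batchSize) throws Exception {
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue(queueName);
        MessageConsumer consumer = session.createConsumer(queue);

        Message last = null;
        for (int i = 0; i < batchSize; i++) {
            Message m = consumer.receive(1000);
            if (m == null) {
                break;              // queue drained before the batch filled up
            }
            last = m;
        }
        if (last != null) {
            // In CLIENT_ACKNOWLEDGE mode this single call acknowledges every message
            // consumed on this session so far, not just 'last' -- which is exactly
            // what sendBatchNonTransacted() relies on.
            last.acknowledge();
        }
        consumer.close();
        session.close();
    }
}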

Issue or confusion with JMS/spring/AMQ not processing messages asynchronously

We have a situation where we set up a component to run batch jobs using spring batch remotely. We send a JMS message with the job xml path, name, parameters, etc. and we wait on the calling batch client for a response from the server.
The server reads the queue and calls the appropriate method to run the job and return the result, which our messaging framework does by:
this.jmsTemplate.send(queueName, messageCreator);
this.LOGGER.debug("Message sent to '" + queueName + "'");
try {
    final Destination replyTo = messageCreator.getReplyTo();
    final String correlationId = messageCreator.getMessageId();
    this.LOGGER.debug("Waiting for the response '" + correlationId + "' back on '" + replyTo + "' ...");
    final BytesMessage message = (BytesMessage) this.jmsTemplate.receiveSelected(replyTo,
            "JMSCorrelationID='" + correlationId + "'");
    this.LOGGER.debug("Response received");
Ideally, we want to be able to call our runJobSync method twice and have two jobs operate simultaneously. We have a unit test that does something similar, without jobs. I realize this code isn't very great, but here it is:
final List result = Collections.synchronizedList(new ArrayList());
Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(1000);
        result.add(Thread.currentThread().getName());
    }
}, "thread1");
Thread thread2 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(500);
        result.add(Thread.currentThread().getName());
    }
}, "thread2");
thread1.start();
Thread.sleep(250);
thread2.start();
thread1.join();
thread2.join();
Assert.assertEquals("both threads finished", 2, result.size());
Assert.assertEquals("thread2 finished first", "thread2", result.get(0));
Assert.assertEquals("thread1 finished second", "thread1", result.get(1));
When we run that test, thread 2 completes first since it just has a 500 millisecond wait, while thread 1 does a 1 second wait:
Thread.sleep(delayInMs);
return result;
That works great.
When we run two remote jobs in the wild, one which takes about 50 seconds to complete and one which is designed to fail immediately and return, this does not happen.
We start the 50 second job, then immediately start the instant-fail job. The client prints that it sent a message requesting that the job run, and the server prints that it received the 50 second request, but it waits until that 50 second job is completed before handling the second message at all, even though we use a ThreadPoolExecutor.
We are running transactional with Auto acknowledge.
Doing some remote debugging, the Consumer from AbstractPollingMessageListenerContainer shows no unhandled messages (so consumer.receive() obviously just returns null over and over). The web GUI for the AMQ broker shows 2 enqueued, 1 dequeued, 1 dispatched, and 1 in the dispatched queue. This suggests to me that something is preventing AMQ from letting the consumer "have" the second message. (Prefetch is 1000, by the way.)
This shows as the only consumer for the particular queue.
A few other developers and I have poked around for the last few days and are pretty much getting nowhere. Any suggestions on either what we have misconfigured (if this is expected behavior) or what could be broken here?
Does the method that is being remotely called matter at all? Currently the job handler method uses an executor to run the job in a different thread and does a future.get() (the extra thread is for reasons related to logging).
Any help is greatly appreciated
Not sure I follow completely, but off the top of my head, you should try the following:
Set concurrentConsumers/maxConcurrentConsumers to something greater than the default (1) on the MessageListenerContainer.
Set the prefetch to 0 to better promote balancing of messages between consumers, etc.
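For illustration, a sketch of what that could look like with a plain DefaultMessageListenerContainer and an ActiveMQ connection factory; the broker URL, queue name, listener bean, and consumer counts are placeholders, and the prefetch is lowered via the jms.prefetchPolicy.queuePrefetch URI option.

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class JobListenerConfig {

    @Bean
    public ConnectionFactory connectionFactory() {
        // Low prefetch so the broker does not pre-assign waiting messages
        // to the one consumer that is already busy with the long-running job.
        return new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
    }

    @Bean
    public DefaultMessageListenerContainer jobListenerContainer(
            ConnectionFactory connectionFactory, MessageListener jobRequestListener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("job.requests");    // placeholder queue name
        container.setMessageListener(jobRequestListener);
        container.setConcurrentConsumers(5);             // default is 1, which serializes the jobs
        container.setMaxConcurrentConsumers(10);
        container.setSessionTransacted(true);            // matches the transactional setup described above
        return container;
    }
}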
