I need to pick up messages off an IBM MQ queue without polling, so I am looking at some async sample code from IBM. I see sample code for asynchronous gets with JMS/XMS, but not with MQI (amqmdnet).
The closest I have come to not polling continuously with MQI is a get with wait, but long wait intervals are of course not advised, so you are still effectively polling:
requestMessage = new MQMessage();
MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_WAIT;
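For context, the wait-based loop looks roughly like this (just a sketch; queue is assumed to come from MQQueueManager.AccessQueue, and keepRunning/Process are placeholders for my own code):

// Still effectively polling: each Get blocks for at most WaitInterval.
var gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_WAIT | MQC.MQGMO_FAIL_IF_QUIESCING;
gmo.WaitInterval = 5000; // wait up to 5 seconds per Get call

while (keepRunning)
{
    var requestMessage = new MQMessage();
    try
    {
        queue.Get(requestMessage, gmo);
        Process(requestMessage);
    }
    catch (MQException e) when (e.Reason == MQC.MQRC_NO_MSG_AVAILABLE)
    {
        // nothing arrived within WaitInterval; loop and wait again
    }
}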
Is there a way to asynchronously get messages off a queue using MQI? Or is XMS the only way to go?
I am facing a scenario where the reply queue I connect to runs out of handles. I have traced it to the fact that my JMS producers are being cached but my JMS consumers are not. I am able to send and receive messages just fine, so there is no problem connecting, sending, or receiving to/from the queues. I am using the CachingConnectionFactory (sessionCacheSize = 10) with com.ibm.mq.jms.MQQueueConnectionFactory as the target factory when instantiating the jmsTemplate. The code snippet is as follows:
:
:
String replyQueue = "MyQueue"; // replyQueue which runs out of handles
messageCreator.setReplyToQueue(new MQQueue(replyQueue));
jmsTemplate.setReceiveTimeout(receiveTimeout);
jmsTemplate.send(destination, messageCreator); // Send to destination queue
Message message = jmsTemplate.receiveSelected(replyQueue,
        String.format("JMSCorrelationID = '%s'", messageCreator.getMessageId()));
:
:
From the logs (JMS TRACE is enabled), the producer is cached, so the destination queue's handle count does not increase.
// The first time around (for Producer)
Registering cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
// Second time around, the cached producer is reused
Found cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
However, the handle count for the replyQueue keeps increasing because, for every call to that queue, I see a new JMS consumer being registered. Ultimately the calls to open the replyQueue fail with MQRC_HANDLE_NOT_AVAILABLE.
// First time around
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#b3ytd25b
// Second time around, another MessageConsumer is registered!
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#re25b
My memory is a bit dim on this, but here is what is happening: you are receiving messages based on a message selector, and that selector changes on every call. When caching/pooling is keyed on connection/session/consumer, a consumer whose selector keeps changing can never be reused, so each receive creates a new cache entry. As a test, either remove the selector or make it a constant and see what happens.
After you go through your 10 sessions, a new connection is created, but the existing one is not closed. Increase your session cache size to 100, for example, and the connection count on the MQ broker should climb ten times more slowly.
You need to create a new consumer for every receive because your correlation ID is always changing, so cache just the connection and session. No matter what you do, you will always have a round trip to the broker for each new correlation ID.
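In other words, a rough sketch of that pattern in plain JMS (connection, correlationId and receiveTimeout are assumed to already exist):

// Reuse the cached connection/session, but open a consumer per receive,
// because the selector changes with every correlation ID, and close it
// afterwards so the queue handle is released instead of leaking.
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue replyQueue = session.createQueue("MyQueue");
String selector = String.format("JMSCorrelationID = '%s'", correlationId);
MessageConsumer consumer = session.createConsumer(replyQueue, selector);
try {
    Message reply = consumer.receive(receiveTimeout);
    // ... process reply ...
} finally {
    consumer.close(); // releases the reply queue handle
    session.close();  // with a CachingConnectionFactory this returns the session to the cache
}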
I'm sending video as a sequence of images (one ZMQ message per image), but sometimes, perhaps when the network is slow, they are received at a slower rate than they are sent, and a growing latency appears, seemingly up to about a minute of video, hundreds of images, or megabytes of data. It usually clears itself eventually, with the subscriber receiving messages at a faster rate than the publisher sends them.
Instead, I want it to discard missed messages the same way it is supposed to when the subscriber is too slow to recv them. I hoped zmq.CONFLATE=1 would do this, but it doesn't. How then? I suspect the messages are being buffered at the publisher, which is not supposed to have any ZMQ buffer, or somewhere in the network stack.
Simplified server code
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:12345")
camera = PiCamera()
stream = io.BytesIO()
for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
    stream.truncate()
    stream.seek(0)
    socket.send(stream.read())
    stream.seek(0)
Simplified client code
# Initialization
self.context = zmq.Context()
self.video_socket = self.context.socket(zmq.SUB)
self.video_socket.setsockopt(zmq.CONFLATE, 1)
self.video_socket.setsockopt(zmq.SUBSCRIBE, b"")
self.video_socket.connect("tcp://" + ip_address + ":12345")
def get_image(self):
    # Receive the latest image
    poll_result = self.video_socket.poll(timeout=0)
    if poll_result == zmq.POLLIN:
        return self.video_socket.recv()
    else:
        return None
The publisher is on a Raspberry Pi and the subscriber is on Windows.
I am not sure which version of Python ZMQ you are using, but based on the underlying C++ libzmq you need to:
Set the ZMQ_SNDHWM socket option on the server socket
Set the ZMQ_RCVHWM socket option on the client socket.
These options limit the number of messages queued per completed connection in the pub/sub case. If a queue grows larger than the HWM (high water mark), messages will be discarded.
Also turn off conflate as that will interfere with these options.
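A sketch of those settings in pyzmq (the HWM value of 2 is only illustrative; context and ip_address are as in the question's code):

# Publisher side -- set the send HWM before bind:
socket = context.socket(zmq.PUB)
socket.setsockopt(zmq.SNDHWM, 2)        # limit the outgoing queue per subscriber
socket.bind("tcp://*:12345")

# Subscriber side -- set the receive HWM before connect:
video_socket = context.socket(zmq.SUB)
video_socket.setsockopt(zmq.RCVHWM, 2)  # limit the incoming queue as well
video_socket.setsockopt(zmq.SUBSCRIBE, b"")
video_socket.connect("tcp://" + ip_address + ":12345")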
Alternatively, set zmq.CONFLATE=1 on the server to keep only the latest message in the send queue. Before binding the server socket:
socket.setsockopt(zmq.CONFLATE, 1)
For some reason I mistakenly thought the PUB socket didn't have a send queue but it does.
I am using C# and the IBM MQ client (9.1.5), and I want to use the syncpoint functionality.
var getMessageOptions = new MQGetMessageOptions();
getMessageOptions.Options += MQC.MQGMO_WAIT + MQC.MQGMO_SYNCPOINT;
getMessageOptions.WaitInterval = 20000; // 20 seconds wait

Hashtable props = new Hashtable();
props.Add(MQC.HOST_NAME_PROPERTY, "localhost");
props.Add(MQC.CHANNEL_PROPERTY, "DOTNET.SVRCONN");
props.Add(MQC.PORT_PROPERTY, 3636);

MQQueueManager qm = new MQQueueManager("QM", props);
MQQueue queue = qm.AccessQueue("Q1", MQC.MQOO_INPUT_AS_Q_DEF);

try
{
    var message = new MQMessage();
    queue.Get(message, getMessageOptions);
    string messageStr = message.ReadString(message.DataLength);
    SaveTheMessageToAFile(messageStr);
    //qm.Commit();
}
catch (MQException e) when (e.Reason == 2033)
{
    // 2033 (MQRC_NO_MSG_AVAILABLE): no message arrived within the wait interval
    Log.Information("No messages in the queue");
}
catch (Exception ex)
{
    Log.Error($"Exception when trying to capture a message from the queue: {ex}");
}
I would have expected to see the same message each time if I didn't call Commit. Is there something that needs to be enabled on the queue manager?
In your example you are not issuing a Commit() or a Backout(), so at that point the message is just in an uncommitted state. If you were to then kill off your process, the message would get rolled back to the queue. As you mentioned in the comments, if you call Disconnect(), in most cases this will implicitly commit uncommitted messages.
This is documented in the IBM MQ Knowledge Center on a few pages:
Reference>Developing applications reference>The IBM MQ .NET classes and interfaces>MQQueueManager.NET class
Disconnect();
...
Generally, any work performed as part of a unit of work is committed. However, if the unit of work is managed by .NET, the unit of work might be rolled back.
NOTE: "managed by .NET" means distributed transactions, not what you are doing.
Developing applications>Developing .NET applications
When you use the procedural interface, you disconnect from a queue manager by using the call MQDISC( Hconn, CompCode, Reason), where Hconn is a handle to the queue manager.
In the .NET interface, the queue manager is represented by an object of class MQQueueManager. You disconnect from the queue manager by calling the Disconnect() method on that class.
Developing applications>Developing MQI applications with IBM MQ>Writing a procedural application for queuing>Committing and backing out units of work>Syncpoint considerations in IBM MQ applications
Except on z/OS batch with RRS, if a program issues the MQDISC call while there are uncommitted requests, an implicit syncpoint occurs. If the program ends abnormally, an implicit backout occurs.
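So if you want the same message to be redelivered, issue a Backout() (or kill the process) instead of disconnecting. A sketch using the objects from your code (qm, queue and getMessageOptions as in the question):

var message = new MQMessage();
try
{
    queue.Get(message, getMessageOptions);   // MQGMO_SYNCPOINT is set
    SaveTheMessageToAFile(message.ReadString(message.DataLength));
    qm.Commit();                             // message is now permanently removed
}
catch (MQException e) when (e.Reason == MQC.MQRC_NO_MSG_AVAILABLE)
{
    // nothing to get within the wait interval
}
catch (Exception)
{
    qm.Backout();                            // message becomes visible on the queue again
    throw;
}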
So I have request/response queues that I am putting messages on and reading messages off of.
The problem is that I have multiple local instances reading from and feeding the same queues, and sometimes one instance reads another instance's reply message.
So is there a way to configure my JMS setup, using Spring, so that each instance reads only the replies to its own requests and not other instances' messages?
I have very little knowledge about JMS and related topics, so if the above question needs more info I can dig around and provide it.
Thanks
It's easy!
A JMS message has two properties you can use: JMSMessageID and JMSCorrelationID.
The JMSMessageID is supposed to be unique for each message, so you could do something like this:
Let the client send a request, then start to listen for responses where the correlation ID equals the sent message ID. The server side is then responsible for copying the message ID of the request into the correlation ID of the response, something like: responseMsg.setJMSCorrelationID(requestMsg.getJMSMessageID());
Example client-side code:
Session session = getSession();
Message msg = createRequest();
MessageProducer mp = session.createProducer(session.createQueue("REQUEST.QUEUE"));
mp.send(msg, DeliveryMode.NON_PERSISTENT, 0, TIMEOUT);
// If the session is transactional - commit now.
String msgID = msg.getJMSMessageID();
MessageConsumer mc = session.createConsumer(session.createQueue("REPLY.QUEUE"),
        "JMSCorrelationID='" + msgID + "'");
Message response = mc.receive(TIMEOUT);
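And a sketch of the matching server side (createResponse is an illustrative helper, not a JMS API):

// Receive a request, build the response, copy message ID -> correlation ID,
// and send the response to the reply queue.
MessageConsumer requestConsumer = session.createConsumer(session.createQueue("REQUEST.QUEUE"));
Message requestMsg = requestConsumer.receive(TIMEOUT);
Message responseMsg = createResponse(requestMsg);
responseMsg.setJMSCorrelationID(requestMsg.getJMSMessageID());
MessageProducer replyProducer = session.createProducer(session.createQueue("REPLY.QUEUE"));
replyProducer.send(responseMsg);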
A more performant solution would be to use a dedicated reply queue per instance. Simply set message.setJMSReplyTo(session.createQueue("REPLY.QUEUE." + getInstanceId())); and make sure the server side sends the response to requestMsg.getJMSReplyTo() and not to a hard-coded queue.
I use the following URL to create an ActiveMQConnectionFactory:
failover:(tcp://server1:port,tcp://server2:port,tcp://server2:port)
What I want to do is to create multiple message consumers from this network of brokers.
The following is not real code, but it helps to understand how I do it:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("BROKER_URL");
connection = connectionFactory.createConnection();
connection.start();

for (int i = 0; i < 10; i++) {
    session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    Destination queue = session.createQueue("QUEUE_NAME");
    consumer = session.createConsumer(queue);
    consumer.setMessageListener(new MessageListener());
}
The problem is that all consumers will be connected to one randomly chosen broker.
But I want them to be balanced across the network of brokers.
I believe it is possible to do that by creating multiple connections with the factory.
But what are the best practices for that?
And is what I want even a good thing? :)
Actually, the consumers would not each be connected to a randomly chosen broker.
A connection is the part that connects to a broker. With the connection string you have provided, you will have ONE connection to ONE randomly chosen broker. All consumers have their own sessions, but these all use that same ONE connection to that ONE broker.
The only setting I know of is that you can disable the randomizing behavior of the failover protocol by setting ?randomize=false on the connection string. Your connection will then try the first broker, then the second, then the third, and so on.
But to achieve your requirement, I would give each consumer its own connection. Together with the randomize feature of the failover protocol, this would kind of load-balance the consumers; not truly, though, since there is no intelligence behind it and it merely randomizes which broker each connection goes to.
This means I would do the following (based on your code):
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("BROKER_URL");

for (int i = 0; i < 10; i++) {
    connection = connectionFactory.createConnection();
    connection.start();
    session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    Destination queue = session.createQueue("QUEUE_NAME");
    consumer = session.createConsumer(queue);
    consumer.setMessageListener(new MessageListener());
}
This way, each consumer will have its own connection to one of the brokers in your failover connection string.
UPDATED AFTER QUESTION CHANGE:
If you want to let ActiveMQ randomly choose a broker for each consumer, the above-mentioned solution is the way to go.
The best practice would be to put your consumers and producers as close to each other as possible. For this, I would recommend lowering the network consumer priority, so that the local consumers and producers have the highest priority. Only when the local consumer is busy would messages be distributed further over the network to other consumers.
In addition, if the operation on the consumer side is long-running, it is a good idea to set a lower prefetch value so that messages get load-balanced around the network of brokers, instead of one consumer snatching up 1,000 messages while other consumers sit idle.
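For example (the value 1 is only illustrative), the prefetch can be lowered either on the connection factory URL or as a destination option:

// On the connection factory URL (applies to all queue consumers created from it):
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
        "failover:(tcp://server1:port,tcp://server2:port)?jms.prefetchPolicy.queuePrefetch=1");

// Or per destination, via a destination option:
Destination queue = session.createQueue("QUEUE_NAME?consumer.prefetchSize=1");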