WMQ Message Group Parallel execution - ibm-mq

I have a requirement to store a large number of messages in a queue while the consumer is down, and to process them sequentially when the consumer is restored. I want to group logically related messages together and process all the groups in parallel.
E.g.: Consider the queue below with 3 groups A, B, C. I have assigned sequence numbers to the messages and unique group IDs to the groups correctly.
Queue-1
A1,A2,A3-last,B1,B2-last,C1,C2,C3,C4-last
Is it possible to fetch A1, B1, C1 in parallel?
Also, currently the consumer is fetching A1, A2 and A3 correctly in order, but it is not able to fetch any messages from group B or C. What might be wrong?
Any suggestions would help greatly.

First, read this page from the IBM MQ Knowledge Center.
I converted the pseudocode (from the web page above) to MQ classes for Java and changed it from a browse to a destructive get.
Also, I prefer to process each group of messages under a syncpoint (assuming reasonably sized groups).
MQGetMessageOptions gmo = new MQGetMessageOptions();
MQMessage rcvMsg = new MQMessage();

/* Get the first message in a group, or a message not in a group */
gmo.options = CMQC.MQGMO_COMPLETE_MSG | CMQC.MQGMO_LOGICAL_ORDER | CMQC.MQGMO_ALL_MSGS_AVAILABLE | CMQC.MQGMO_WAIT | CMQC.MQGMO_SYNCPOINT;
gmo.matchOptions = CMQC.MQMO_MATCH_MSG_SEQ_NUMBER;
rcvMsg.messageSequenceNumber = 1;
rcvMsg.groupId = CMQC.MQGI_NONE;
inQ.get(rcvMsg, gmo);

/* Examine the first or only message */
...

/* Get the remaining messages in the group, in logical order */
gmo.options = CMQC.MQGMO_COMPLETE_MSG | CMQC.MQGMO_LOGICAL_ORDER | CMQC.MQGMO_SYNCPOINT;
while ((rcvMsg.messageFlags & CMQC.MQMF_MSG_IN_GROUP) == CMQC.MQMF_MSG_IN_GROUP)
{
    rcvMsg.clearMessage();
    inQ.get(rcvMsg, gmo);
    /* Examine each remaining message in the group */
    ...
}
qMgr.commit();
There are a couple of coding details that must be handled: (1) using the right options and (2) clearing the groupId field before you get the next group. Failing to reset groupId after finishing a group is the most likely reason your consumer reads group A correctly but then fetches nothing from groups B and C.
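On the parallel part of the question: because MQGMO_ALL_MSGS_AVAILABLE together with MQMO_MATCH_MSG_SEQ_NUMBER = 1 claims the first message of the next complete group, several consumer threads can each run the same loop and drain different groups concurrently. Below is a minimal sketch of one such worker, assuming each thread has its own MQQueueManager connection and queue handle (units of work are per connection); process() is a placeholder for your handling logic:

import com.ibm.mq.*;
import com.ibm.mq.constants.CMQC;
import java.io.IOException;

/* Sketch: one worker thread repeatedly claiming and draining a complete group. */
void drainGroups(MQQueueManager qMgr, MQQueue inQ) throws MQException, IOException {
    while (true) {
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        MQMessage rcvMsg = new MQMessage();

        /* Claim the first message of the next complete group (or an ungrouped message) */
        gmo.options = CMQC.MQGMO_COMPLETE_MSG | CMQC.MQGMO_LOGICAL_ORDER
                | CMQC.MQGMO_ALL_MSGS_AVAILABLE | CMQC.MQGMO_WAIT | CMQC.MQGMO_SYNCPOINT;
        gmo.waitInterval = CMQC.MQWI_UNLIMITED;
        gmo.matchOptions = CMQC.MQMO_MATCH_MSG_SEQ_NUMBER;
        rcvMsg.messageSequenceNumber = 1;
        rcvMsg.groupId = CMQC.MQGI_NONE;   /* must be reset before every new group */
        inQ.get(rcvMsg, gmo);
        process(rcvMsg);                   /* placeholder for your business logic */

        /* Fetch the rest of this group in logical order */
        gmo.options = CMQC.MQGMO_COMPLETE_MSG | CMQC.MQGMO_LOGICAL_ORDER | CMQC.MQGMO_SYNCPOINT;
        while ((rcvMsg.messageFlags & CMQC.MQMF_MSG_IN_GROUP) == CMQC.MQMF_MSG_IN_GROUP) {
            rcvMsg.clearMessage();
            inQ.get(rcvMsg, gmo);
            process(rcvMsg);
        }
        qMgr.commit();                     /* one unit of work per group */
    }
}

Each thread blocks in the first get until a complete group is available, and the syncpoint ensures a group is either fully processed or rolled back as a unit.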

Related

XMS.NET - Error while sending response back to reply queue/out queue

Regarding: "Sending the response back to the out/reply queue."
There is a requirement to send the response back to a different queue (the reply queue).
While sending the response, we have to take the correlation ID and message ID from the request message and pass them to the reply queue in the header. I suspect the format of the correlation/message ID is wrong.
When reading the request message, the correlation ID and message ID look like this:
MessageId = "ID:616365323063633033343361313165646139306638346264"
CorrelationId = "ID:36626161303030305f322020202020202020202020202020"
While sending the reply back to the out/reply queue, we are passing these IDs as below:
ITextMessage txtReplyMessage = sessionOut.CreateTextMessage();
txtReplyMessage.JMSMessageID = "616365323063633033343361313165646139306638346264";
txtReplyMessage.JMSCorrelationID = "36626161303030305f322020202020202020202020202020";
txtReplyMessage.Text = sentMessage.Contents;
txtReplyMessage.JMSDeliveryMode = DeliveryMode.NonPersistent;
txtReplyMessage.JMSPriority = sentMessage.Priority;
messageProducerOut.Send(txtReplyMessage);
Please note:
With the XMS.NET library, we need to pass the correlation and message IDs in string format, as shown above.
With the MQ APIs (which we were using earlier), we used to pass the correlation and message IDs in byte format, like below:
MQMessage queueMessage = new MQMessage();
string[] parms = document.name.Split('-');
queueMessage.MessageId = StringToByte(parms[1]);
queueMessage.CorrelationId = StringToByte(parms[2]);
queueMessage.CharacterSet = 1208;
queueMessage.Encoding = MQC.MQENC_NATIVE;
queueMessage.Persistence = 0; // Do not persist the reply message.
queueMessage.Format = "MQSTR ";
queueMessage.WriteString(document.contents);
queueOut.Put(queueMessage);
queueManagerOut.Commit();
Please help to troubleshoot the problem.
Troubleshooting is a bit difficult because you haven't clearly specified the trouble (is there an exception, or is the message just not being correlated successfully?).
In your code you have omitted the "ID:" prefix. However, to address the requirement, you should not need to worry about what is in this field at all, because you simply need to copy one value to the other:
txtReplyMessage.JMSCorrelationID = txtRequestMessage.JMSMessageID
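In JMS terms (the XMS.NET classes mirror the JMS API), a minimal responder sketch looks like this; the session and the request message are assumed to exist, and all names are illustrative:

import javax.jms.*;

/* Build a reply whose correlation ID is simply the request's message ID. */
TextMessage buildReply(Session session, Message request, String payload) throws JMSException {
    TextMessage reply = session.createTextMessage(payload);
    /* Copy the value verbatim, "ID:" prefix and all; no parsing or reformatting needed */
    reply.setJMSCorrelationID(request.getJMSMessageID());
    return reply;
}

The requester can then correlate by comparing the reply's JMSCorrelationID with the JMSMessageID of the request it sent.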
It is a bit unclear what the issue is. Are you able to run the examples provided in the MQ tools/examples? That approach uses temporary queues (AMQ.*) as the JMSReplyTo destination.
Start the "server" application first.
Request/Response Client: "SimpleRequestor"
Request/Response Server: "SimpleRequestorServer"
You can find the examples at the default install location (Windows):
"C:\Program Files\IBM\MQ\tools\dotnet\samples\cs\xms\simple\wmq"
The "SimpleMessageSelector" sample shows how to use the selector pattern.
Note the format of the selector: "JMSCorrelationID = '00010203040506070809'"
IBM MQ SELECTOR
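As a sketch of that selector pattern in JMS (XMS.NET offers an equivalent CreateConsumer overload taking a selector string; names and the timeout here are illustrative):

import javax.jms.*;

/* Receive only the reply correlated to the request we sent. */
Message awaitReply(Session session, Destination replyQueue, String requestMsgId) throws JMSException {
    MessageConsumer consumer =
            session.createConsumer(replyQueue, "JMSCorrelationID = '" + requestMsgId + "'");
    try {
        return consumer.receive(30000);   /* wait up to 30 seconds; returns null on timeout */
    } finally {
        consumer.close();
    }
}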

How to read messages in batches with JmsTemplate of Spring

I get an OutOfMemoryError while reading messages from a queue holding 2 million messages, and I am trying to find a way to read the messages 1000 at a time, for example.
Here is my code:
List<TextMessage> messages = jmsTemplate.browse(JndiQueues.BACKOUT, (session, browser) -> {
    Enumeration<?> browserEnumeration = browser.getEnumeration();
    List<TextMessage> messageList = new ArrayList<TextMessage>();
    while (browserEnumeration.hasMoreElements()) {
        messageList.add((TextMessage) browserEnumeration.nextElement());
    }
    return messageList;
});
Thanks.
Perform the browse on a different thread and pass subsets of the results to the main thread via a BlockingQueue<List<TextMessage>>, e.g. a LinkedBlockingQueue with a small capacity.
The browsing thread will block when the queue is full; when the main thread removes an entry from the queue, the browser can add a new one.
It probably makes sense to use a capacity of at least 2 so that the next list is available immediately. A sketch of this pattern follows.
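A minimal sketch, assuming a batch size of 1000 and that jmsTemplate and JndiQueues.BACKOUT are available as in the question; the empty-list poison pill marking end of browse is an illustrative choice:

import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.TextMessage;

int batchSize = 1000;
BlockingQueue<List<TextMessage>> batches = new LinkedBlockingQueue<>(2);
List<TextMessage> endMarker = new ArrayList<>();      /* poison pill: signals end of browse */

Thread browserThread = new Thread(() -> {
    jmsTemplate.browse(JndiQueues.BACKOUT, (session, browser) -> {
        try {
            Enumeration<?> e = browser.getEnumeration();
            List<TextMessage> batch = new ArrayList<>(batchSize);
            while (e.hasMoreElements()) {
                batch.add((TextMessage) e.nextElement());
                if (batch.size() == batchSize) {
                    batches.put(batch);               /* blocks while both slots are full */
                    batch = new ArrayList<>(batchSize);
                }
            }
            if (!batch.isEmpty()) {
                batches.put(batch);
            }
            batches.put(endMarker);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        return null;
    });
});
browserThread.start();

List<TextMessage> batch;
/* take() throws InterruptedException; handle or declare it in the enclosing method */
while ((batch = batches.take()) != endMarker) {
    /* process up to 1000 messages at a time */
}

With capacity 2, the browser stays at most two batches ahead of the consumer, so memory holds on the order of 2000 messages instead of 2 million.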

google.cloud.pubsub - Streaming Pull hogging PubSub Messages

I'm currently running some tests with the latest google-cloud-pubsub==0.35.4 release. My intention is to process a never-ending stream (varying in load) using a dynamic number of subscriber clients.
However, when I have a queue of, say, 600 messages and 1 client running, and then add additional clients:
Expected: all remaining messages get distributed evenly across all clients
Observed: only new messages are distributed across clients; any older messages are sent to pre-existing clients
Below is a simplified version of what I use for my clients (for reference, we'll only be running the low-priority topic).
I won't include the publisher, since it is not relevant here.
PRIORITY_HIGH = 1
PRIORITY_MEDIUM = 2
PRIORITY_LOW = 3

MESSAGE_LIMIT = 10
ACKS_PER_MIN = 100.00

ACKS_RATIO = {
    PRIORITY_LOW: 100,
}

PRIORITY_TOPICS = {
    PRIORITY_LOW: 'test_low',
}

PRIORITY_SEQUENCES = {
    PRIORITY_LOW: [PRIORITY_LOW, PRIORITY_MEDIUM, PRIORITY_HIGH],
}


class Subscriber:
    subscriber_client = None
    subscriptions = {}
    priority_queue = defaultdict(Queue.Queue)
    priorities = []

    def __init__(self):
        logging.basicConfig()
        self.subscriber_client = pubsub_v1.SubscriberClient()
        for option, percentage in ACKS_RATIO.iteritems():
            self.priorities += [option] * percentage

    def subscribe_to_topic(self, topic, max_messages=10):
        self.subscriptions[topic] = self.subscriber_client.subscribe(
            BASE_TOPIC_PATH.format(project=PROJECT, topic=topic,),
            self.process_message,
            flow_control=pubsub_v1.types.FlowControl(
                max_messages=max_messages,
            ),
        )

    def un_subscribe_from_topic(self, topic):
        subscription = self.subscriptions.get(topic)
        if subscription:
            subscription.cancel()
            del self.subscriptions[topic]

    def process_message(self, message):
        json_message = json.loads(message.data.decode('utf8'))
        self.priority_queue[json_message['priority']].put(message)

    def retrieve_message(self):
        message = None
        priority = random.choice(self.priorities)
        ack_priorities = PRIORITY_SEQUENCES[priority]
        for ack_priority in ack_priorities:
            try:
                message = self.priority_queue[ack_priority].get(block=False)
                break
            except Queue.Empty:
                pass
        return message


if __name__ == '__main__':
    messages_acked = 0
    pub_sub = Subscriber()
    pub_sub.subscribe_to_topic(PRIORITY_TOPICS[PRIORITY_LOW], MESSAGE_LIMIT * 3)
    while True:
        msg = pub_sub.retrieve_message()
        if msg:
            json_msg = json.loads(msg.data.decode('utf8'))
            msg.ack()
            print("%s - Acked Priority %s, High %s, Medium %s, Low %s" % (
                datetime.datetime.now().strftime('%H:%M:%S'),
                json_msg['priority'],
                pub_sub.priority_queue[PRIORITY_HIGH].qsize(),
                pub_sub.priority_queue[PRIORITY_MEDIUM].qsize(),
                pub_sub.priority_queue[PRIORITY_LOW].qsize(),
            ))
        time.sleep(60.0 / ACKS_PER_MIN)
I'm wondering whether this behaviour is inherent to how streaming pull functions, or whether there are configurations that can alter it.
Cheers!
According to the Cloud Pub/Sub documentation, Cloud Pub/Sub delivers each published message at least once for every subscription. Nevertheless, there are some exceptions to this behavior:
A message that cannot be delivered within the maximum retention time of 7 days is deleted.
A message published before a given subscription was created will not be delivered.
In other words, the service delivers messages to the subscriptions that were created before the message was published; therefore, old messages will not be available to new subscriptions. As far as I know, Cloud Pub/Sub does not offer a feature to change this behavior.

Filtering log files in Flume using interceptors

I have an HTTP server writing log files, which I then load into HDFS using Flume.
First, I want to filter data according to the data I have in the header or body. I read that I can do this using an interceptor with a regex; can someone explain exactly what I need to do? Do I need to write Java code that overrides the Flume code?
Also, I would like to take data and, according to the header, send it to a different sink (i.e. source=1 goes to sink1 and source=2 goes to sink2). How is this done?
Thank you,
Shimon
You don't need to write Java code to filter events. Use the Regex Filtering Interceptor to filter out events whose body text matches a regular expression:
agent.sources.logs_source.interceptors = regex_filter_interceptor
agent.sources.logs_source.interceptors.regex_filter_interceptor.type = regex_filter
agent.sources.logs_source.interceptors.regex_filter_interceptor.regex = <your regex>
agent.sources.logs_source.interceptors.regex_filter_interceptor.excludeEvents = true
To route events based on headers, use the Multiplexing Channel Selector:
a1.sources = r1
a1.channels = c1 c2 c3 c4
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = state
a1.sources.r1.selector.mapping.CZ = c1
a1.sources.r1.selector.mapping.US = c2 c3
a1.sources.r1.selector.default = c4
Here, events with header "state"="CZ" go to channel "c1", events with "state"="US" go to "c2" and "c3", and all others go to "c4".
This way you can also filter events by header: just route the specific header value to a channel that points to a Null Sink, as sketched below.
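For example, a minimal sketch building on the selector above (the header value XX, channel c_null, and sink name discard are illustrative): events whose "state" header is XX land in a channel drained by a Null Sink and are thereby dropped:

a1.channels = c1 c2 c3 c4 c_null
a1.sinks = discard
a1.sources.r1.selector.mapping.XX = c_null
a1.sinks.discard.type = null
a1.sinks.discard.channel = c_null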
You can use Flume channel selectors to simply route events to different destinations, or you can chain several Flume agents together to implement more complex routing.
However, chained Flume agents become a little hard to maintain (resource usage and Flume topology).
You can have a look at the flume-ng router sink; it may provide the functionality you want.
First, add specific fields to the event header with a Flume interceptor:
a1.sources = r1 r2
a1.channels = c1 c2
a1.sources.r1.channels = c1
a1.sources.r1.type = seq
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = datacenter
a1.sources.r1.interceptors.i1.value = NEW_YORK
a1.sources.r2.channels = c2
a1.sources.r2.type = seq
a1.sources.r2.interceptors = i2
a1.sources.r2.interceptors.i2.type = static
a1.sources.r2.interceptors.i2.key = datacenter
a1.sources.r2.interceptors.i2.value = BERKELEY
Then you can set up your Flume channel selector like:
a2.sources = r2
a2.sources.r2.channels = c1 c2 c3 c4
a2.sources.r2.selector.type = multiplexing
a2.sources.r2.selector.header = datacenter
a2.sources.r2.selector.mapping.NEW_YORK = c1
a2.sources.r2.selector.mapping.BERKELEY = c2 c3
a2.sources.r2.selector.default = c4
Or you can set up the avro-router sink like:
agent.sinks.routerSink.type = com.datums.stream.AvroRouterSink
agent.sinks.routerSink.hostname = test_host
agent.sinks.routerSink.port = 34541
agent.sinks.routerSink.channel = memoryChannel
# Set sink name
agent.sinks.routerSink.component.name = AvroRouterSink
# Set header name for routing
agent.sinks.routerSink.condition = datacenter
# Set routing conditions
agent.sinks.routerSink.conditions = east,west
agent.sinks.routerSink.conditions.east.if = ^NEW_YORK
agent.sinks.routerSink.conditions.east.then.hostname = east_host
agent.sinks.routerSink.conditions.east.then.port = 34542
agent.sinks.routerSink.conditions.west.if = ^BERKELEY
agent.sinks.routerSink.conditions.west.then.hostname = west_host
agent.sinks.routerSink.conditions.west.then.port = 34543

How to check whether there are messages in the queue

I am using IBM WebSphere MQ. I have the queue manager and queue name. Now I want to check whether the queue has any messages in it.
I have not worked on this before, so please help.
Please let me know if you need further information!
Thanks
The code below is .NET / amqmdnet, but you might try to convert it in the meantime, until a Java dev sees your post.
To see if there is a message on the queue without actually taking it off the queue, open the queue with MQC.MQOO_BROWSE and use IBM.WMQ.MQC.MQGMO_BROWSE_FIRST as the get option.
You'll get MQRC_NO_MSG_AVAILABLE if the queue is empty.
MQMessage queueMessage = new MQMessage();
MQQueueManager queueManager = new MQQueueManager(qmName, channelName, connName);
MQQueue queue = queueManager.AccessQueue(qName,
    MQC.MQOO_BROWSE + MQC.MQOO_FAIL_IF_QUIESCING);
MQGetMessageOptions opt = new MQGetMessageOptions();
opt.Options = IBM.WMQ.MQC.MQGMO_BROWSE_FIRST;
queueMessage.CorrelationId = IBM.WMQ.MQC.MQCI_NONE;
queueMessage.MessageId = IBM.WMQ.MQC.MQMI_NONE;
queue.Get(queueMessage, opt);
String sMessage = queueMessage.ReadString(queueMessage.DataLength);
To peek at the next message, use IBM.WMQ.MQC.MQGMO_BROWSE_NEXT.
To actually read the message OFF the queue, use MQC.MQOO_INPUT_SHARED on the AccessQueue call.
The answer didn't show how to check for MQRC_NO_MSG_AVAILABLE. Here is my solution; if there are better ones, please let me know.
try
{
    queue.Get(queueMessage, opt);
    String sMessage = queueMessage.ReadString(queueMessage.DataLength);
}
catch (MQException err)
{
    if (err.ReasonCode.CompareTo(MQC.MQRC_NO_MSG_AVAILABLE) == 0)
        return true;
}
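If all you need is whether the queue currently holds any messages, another option is to open the queue for inquiry and check its current depth. A minimal sketch in MQ classes for Java, since the question hoped for a Java answer; the queue manager name and connection setup are assumed to be configured elsewhere, and current depth can only be inquired on local queues (not remote or alias queues):

import com.ibm.mq.MQException;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

/* Returns true if the local queue currently has at least one message. */
boolean queueHasMessages(String qmName, String qName) throws MQException {
    MQQueueManager qMgr = new MQQueueManager(qmName);
    try {
        MQQueue queue = qMgr.accessQueue(qName,
                CMQC.MQOO_INQUIRE | CMQC.MQOO_FAIL_IF_QUIESCING);
        try {
            return queue.getCurrentDepth() > 0;
        } finally {
            queue.close();
        }
    } finally {
        qMgr.disconnect();
    }
}

Note that the depth is a point-in-time snapshot: a message may arrive or be consumed immediately after the check.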
For Windows machines:
It depends on where your queue manager is.
You could use the MQ utilities in the IH03 SupportPac, which includes rfhUtil.exe (for a local QM) and rfhUtilC.exe (for a remote QM).
For a local QM it is straightforward: fill in the appropriate values and hit Browse, and it will show you the queue depth.
For a remote QM, enter /TCP/(PortNo) for the queue manager name and the queue for the queue name. Hit Browse and you will see the queue depth.
For Unix/Ubuntu/Linux, there is a product called MQ Visual Edit which is similar to this one.
