We use IBM MQ 6.1. Can we send messages to a queue using multithreading? - ibm-mq

We use IBM MQ 6.1 and use the IBM API to send messages to a queue.
As we need to send a huge number of messages, we were wondering if we can use multithreading to put messages to the same queue.
Any suggestions?

Absolutely. You can have as many threads as memory and CPU will allow. Each thread requires its own connection handle if you want to see a performance improvement. If you use a single connection handle, all GET and PUT activity from multiple threads is serialized through that one handle. If you use many connection handles, the GET and PUT activity can occur in parallel. Please see the "Multithreaded programs" topic in the WMQ Using Java manual for additional details.
Just make sure that OS kernel settings such as MAXUPROCS are set to allow WMQ to run enough threads to receive the connections, and that WMQ settings such as MAXHANDS and MAXCHANNELS are bumped up to accommodate the load. Keep in mind that units of work are thread-scoped, so each thread is independent of the others for syncpoint: when you issue a COMMIT inside a thread, it commits only the messages put by that particular thread, assuming the thread has a dedicated connection. Queues do have a DEFSOPT (default share options) attribute, but this relates only to how many input handles can be active. Even if you open the queue for exclusive input, you can still have many threads writing to it.
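As a structural sketch of that fan-out, assuming one dedicated connection per thread: the "put" below is just a stub counter standing in for an MQQueue.put on a per-thread MQQueueManager handle, so this is not working MQ code, only the threading shape.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelPutSketch {

    // Fan puts out across `threads` workers, each owning its own "connection".
    // Returns the total number of messages "put", for verification.
    static int runProducers(int threads, int messagesPerThread) {
        AtomicInteger delivered = new AtomicInteger();      // stands in for queue depth
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                // Real code would open a dedicated MQQueueManager + MQQueue here,
                // so this thread's puts do not serialize behind other threads.
                for (int m = 0; m < messagesPerThread; m++) {
                    delivered.incrementAndGet();            // stands in for queue.put(msg)
                }
                // Real code would close the queue and disconnect the QMgr here.
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return delivered.get();
    }

    public static void main(String[] args) {
        System.out.println(runProducers(4, 1000)); // prints 4000
    }
}
```

The key design point is that the connection object lives entirely inside the worker, never shared across threads, which is what lets the puts run in parallel.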
Also, v6.x of WMQ will be end-of-life as of September 2011. Start planning now to get to v7 to remain on a supported version. If you are using WMQ Client connections, upgrade the client as well as the QMgr. The v7 Client works fine with a v6 QMgr and can be upgraded independently. Of course it supports only the v6 functionality when used with a v6.x QMgr. You can download the v7 client as SupportPac MQC7.

Related

Moving IBM MQ away from mainframe - best practice?

Today we have our MQ installations primarily on the mainframe, but we are considering moving them to Windows or Linux instead.
We have three queue managers (qmgrs) in most environments: two in a queue-sharing group across two LPARs, and a stand-alone qmgr for applications that don't need to run 24/7. We have many smaller applications which share the few qmgrs.
When I read up on building qmgrs on Windows and Linux, I get the impression that most designs favor a qmgr or a cluster per application. Is it a no-go to build a general-purpose qmgr for a hundred small applications on Windows/Linux?
I have considered a multi-instance qmgr (active/passive) or a clustered solution.
What is considered best practice in a scenario where I have several hundred different applications that need MQ communication?
First off, you are asking for an opinion, which is not allowed on Stack Overflow.
Secondly, if you have z/OS (mainframe) applications using MQ, then you cannot move MQ off the mainframe, because there is no client-mode connectivity for mainframe applications. That is, mainframe MQ applications cannot connect in client mode to a queue manager running on a distributed platform (i.e. Linux, Windows, etc.). You will need queue managers on both mainframe and distributed platforms to allow messages to flow between the platforms. So your title, "Moving IBM MQ away from mainframe", is a no-go unless ALL mainframe MQ applications are moving off the mainframe too.
I get the impression that most designs favor a qmgr or a cluster per application.
I don't know where you read that but it sounds like information from the 90's. Normally, you should only isolate a queue manager if you have a very good reason.
I have considered a Multi instance Qmgr (active/passive) or a clustered solution.
I think you need to read up on MQ MI and MQ clustering because they are not mutually exclusive. MQ clustering has nothing to do with fail-over or HA (high-availability).
Here is a description of MQ clustering from the MQ Knowledge Center.
You need to understand and document your requirements.
What failover time can you tolerate?
With shared queues on z/OS, if you kill one MQ, another MQ can continue processing the messages within seconds. Applications will have to detect that the connection is broken and reconnect.
If you go for a mid range solution, it may take longer to detect a queue manager has gone down, and for the work to switch to an alternative. During this time the in-transit messages will not be available.
If you have 10 midrange queue managers and kill one of them, applications which were connected to the dead queue manager can detect the outage and reconnect to a different queue manager within seconds, so new messages will see a short blip in throughput. Applications connected to the other 9 queue managers will not be affected, so a smaller "blip" overall.
Do you have response-time criteria? Some enterprises have a "budget": no more than 5 ms in MQ, no more than 15 ms in DB2, etc.
Will moving to midrange affect response time? For example, is there more or less network latency between clients and servers?
Are you worried about confidentiality of data? On z/OS you can have disk encryption enabled by default.
Are you worried about security of data, for example the use of keystores, and having stash files (containing the keystore passwords) sitting next to the keystore? z/OS is better than midrange in this area.
You can have tamper proof keystores on z/OS and midrange.
Scaling
How many distributed queue managers will you need to handle the current workload, any growth, and any unexpected peaks? Does this change your operational model?
If you are using AMS, the maintenance of keystores is challenging. If you add one more recipient, you need to update the keystore for every userid that uses the queue, on every queue manager. With z/OS you update one keyring per queue manager.
How would the move to midrange affect your disaster recovery? It may be easier (just spin up a new queue manager) with midrange, or harder - you need to create the environment before you can spin up a new queue manager.
What is the worst-case scenario? For example, if the systems you talk to went down for a day, can your queue managers hold/buffer the workload?
If you had a day's worth of data, how long would it take to drain it and send it?
Where are your applications? If you have applications running on z/OS (for example batch, CICS, IMS, WAS), they all need a z/OS queue manager on the same LPAR. Applications that are not on z/OS can all use client mode to access MQ.
How does security change? (Command access, access to queues)

Two consumers on same Websphere MQ JMS Queue, both receiving same message

I am working with someone who is trying to achieve load-balancing behavior using JMS queues with IBM WebSphere MQ. As such, they have multiple Camel JMS consumers configured to read from the same queue. Although this behavior is undefined according to the JMS spec (last time I looked, anyway), they expect a sort of round-robin / load-balancing behavior. And while the spec leaves this undefined, I'm led to believe that the normal behavior of WebSphere MQ is to deliver the message to only one of the consumers, and that it may do some type of load balancing. See here, for example: When multi MessageConsumer connect to same queue(Websphere MQ),how to load balance message-consumer?
But in this particular case, it appears that both consumers are receiving the same message.
Can anyone who is more of an expert with Websphere MQ shed any light on this? Is there any situation where this behavior is expected? Is there any configuration change that can alleviate this?
I'm leaning towards telling everyone here to use the native Websphere MQ clustering facility and go away from having multiple consumers pointing at the same Queue, but that will be a big change for them, so I'd love to discover a way to make this work.
Not that I'm a fan of relying on anything that's undefined, but if they're willing to rely on IBM specific behavior, I'll leave that up to them.
The only ways for them both to receive the same messages are:
1. There are multiple copies of the message.
2. The apps are browsing the message without a lock, then circling back to delete it.
3. The apps are backing out a transaction, making the message available again.
4. The connection is severed before the app acknowledges the message.
Having multiple apps compete for messages in a queue is a recommended practice. If one app goes down, the queue is still served. In a cluster this is crucial, because the cluster will continue to direct messages to the now-unserved queue instance until it fills up.
If it's a Dev system, install SupportPac MA0W and tell it to trace just that one queue and you will be able to see exactly what is happening.
See the JMS spec, section 4.4: the provider must never deliver a second copy of an acknowledged message. An exception is made for session recovery in section 4.4.13, which I cover in #4 above. That's pretty unambiguous, and since it is part of the official spec, it is not an IBM-specific behavior.

MQ (Websphere 7) persist message to file system

How would I set up MQ so that every message received is immediately written to file system?
I have the "redbooks", but at least need someone at least point me to a chapter or heading in the book to figure it out.
We are a .NET shop. I have written C# against the API to read the queue, and we currently use the BizTalk MQ adapter. Our ultimate goal is to write the same message to multiple directories in the file system, to "clone" the feed for our various test environments (DEV, STAGE, TRAINING, etc.). The problem with BizTalk is that when we consume the message, we map it at the same time to a new message, so the message is already changed, and we want the original raw message to be cloned, not the morphed one. Our vendors don't offer multiple copies of the feed; for example, they offer DEV and PROD, but we have 4 systems internally.
I suppose I could do a C# Windows Service to do it, but I would rather use built-in features of MQ if possible.
There is no configuration required. If the message is persistent, WMQ writes it to disk. However, I don't think that's going to help you, because messages are not written to disk as discrete files. There's no disk file to copy, and replication only works if the replicated QMgr is identical to the primary and is offline during the replication.
There are a number of solutions to this problem but as of WMQ V7, the easiest one is to use the built-in Pub/Sub functionality. This assumes that the messages are arriving over a QMgr-to-QMgr channel and landing on a queue where you then consume them.
In that case, it is possible to delete the queue and create an alias of the same name over a topic. You then create a new queue and define an administrative subscription that delivers messages on the topic into the new queue. Your app consumes from the new queue.
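In MQSC terms, that swap might look something like the following; all object names and the topic string here are illustrative, not taken from any real configuration, and the DELETE assumes the original queue has already been drained.

```
* Replace the original queue with an alias resolving to a topic
DELETE QLOCAL(APP.IN) PURGE
DEFINE TOPIC(APP.IN.TOPIC) TOPICSTR('app/in')
DEFINE QALIAS(APP.IN) TARGET(APP.IN.TOPIC) TARGTYPE(TOPIC)

* New queue for the original consumer, fed by an administrative subscription
DEFINE QLOCAL(APP.IN.CONSUMER)
DEFINE SUB(APP.IN.CONSUMER.SUB) TOPICSTR('app/in') DEST(APP.IN.CONSUMER)

* Each additional feed (DEV, STAGE, ...) is just another subscription
DEFINE QLOCAL(APP.IN.DEV)
DEFINE SUB(APP.IN.DEV.SUB) TOPICSTR('app/in') DEST(APP.IN.DEV)
```

Because the alias keeps the original queue name, the sending QMgr and the putting applications need no changes at all.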
When you need to send a feed to another QMgr or application, define a new subscription and point it at the new destination queue. Since this is Pub/Sub, MQ will replicate the original message as many times as there are subscriptions and the first application and its messages are not affected. If the destination you need to send to isn't accessible over MQ channels (perhaps DEV and QA are not connected, for example), you can deliver the messages to the new queue, use QLoad from SupportPac MO03 to write them to a file and then use another instance of QLoad to load them onto a different QMgr. If you wanted to move them in real time, you could set up the Q program from SupportPac MA01 to move them direct from the new subscription queue on QMgr1 to the destination queue on QMgr2. And you can replicate across as many systems as you need.
The SupportPacs main page is here.
If all you are using is the Redbooks, you might want to have a look at the Infocenters. Be sure to use the Infocenter matching the version of WMQ you are using.
WMQ V7.0 Infocenter
WMQ V7.1 Infocenter
WMQ V7.5 Infocenter

Clear messages from mq using java

What is the best approach to connect to WebSphere MQ v7.1 and clear all the messages from one or more specified queues using Java and JMS? Do I need to use the WebSphere MQ specific Java API? Thanks.
Like all good questions, "it depends."
The queue can be cleared with a command only if there are no open handles on the queue. In that case, sending a PCF command to clear the queue is quite effective, but if there are open handles you get back an error. PCF commands are, of course, a Java feature and not JMS, because they are proprietary to WebSphere MQ.
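For comparison, the same clear can be issued administratively from runmqsc, with the same caveat that it fails if any handles are open on the queue. The queue name is illustrative:

```
* From runmqsc on the queue manager; fails if the queue is open anywhere
CLEAR QLOCAL(TEST.Q)
```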
On the other hand, any program authorized to perform destructive gets on a queue can clear it. In this case, just loop over a get until you receive the 2033 reason code indicating the queue is empty. This can be done using JMS or the Java API, but both of these manage the input buffer for you. If the queue is REALLY deep, then you end up moving all that data, and if the app is client-connected, you are moving it at network speed instead of in memory.
To get around this, specify a minimal buffer and include MQGMO_ACCEPT_TRUNCATED_MSG among the get options. This moves little or none of the message body during the get calls and can be significantly faster.
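The drain loop itself is simple. The sketch below stubs out the transport (a real implementation would call MQQueue.get with MQGMO_ACCEPT_TRUNCATED_MSG and catch an MQException with reason code 2033) so that the control flow can be shown self-contained:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DrainSketch {

    // Stand-in for MQException with reason code 2033 (MQRC_NO_MSG_AVAILABLE).
    static class NoMessageException extends Exception {}

    // Stand-in for a queue opened for destructive gets with a tiny buffer and
    // the accept-truncated-message option, so message bodies are not transferred.
    interface DestructiveQueue {
        void destructiveGet() throws NoMessageException;
    }

    // Loop until the "2033" signal says the queue is empty; returns count removed.
    static int drain(DestructiveQueue q) {
        int removed = 0;
        while (true) {
            try {
                q.destructiveGet();
                removed++;
            } catch (NoMessageException e) {
                return removed;                 // queue is empty
            }
        }
    }

    // Builds a stub queue holding n messages and drains it.
    static int drainCount(int n) {
        Deque<Integer> backing = new ArrayDeque<>();
        for (int i = 0; i < n; i++) backing.add(i);
        return drain(() -> {
            if (backing.poll() == null) throw new NoMessageException();
        });
    }

    public static void main(String[] args) {
        System.out.println(drainCount(5)); // prints 5
    }
}
```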
Finally, if you are doing this programmatically, and regardless of which method you use, spin off several threads and don't use syncpoint. You actually have to go out of your way to get exclusive input on a queue, so once you get a session, just spawn many threads off of it. Close each thread gracefully and shut down the session once all the threads are closed.

IBM MQ Message Throttling

We are using IBM MQ and we are facing some serious problems controlling its asynchronous delivery to recipients. We have some Java listeners configured. The problem is that we need to control the messages coming towards the listener, because the messages arriving at the server number in the millions and the server machine doesn't have the capacity to process that many threads at a time. So is there any way, such as throttling on the IBM MQ side, to configure a prefetch limit like Apache ActiveMQ does?
Or is there any other way to achieve this?
Currently we are closing the connection to IBM MQ when some limit X has been reached on the listener, but that doesn't seem to be an efficient way.
Please help us out with this issue.
Generally with message queueing technologies like MQ the point of the queue is that the sender is decoupled from the receiver. If you're having trouble with message volumes then the answer is to let them queue up on the receiver queue and process them as best you can, not to throttle the sender.
The obvious answer is to limit the maximum number of threads that your listeners are allowed to take up. I'm assuming you're using some sort of MQ threadpool? What platform are you using that provides unlimited listener threads?
From your description, it almost sounds like you have some process running that - as soon as it detects a message in the queue - it reads the message, starts up a new thread and goes back and looks at the queue again. This is the WRONG approach.
You should have a defined number of processing threads running (start with one and scale up as required, within the limits of your server) which read from the queue themselves. They would each open the queue in shared mode and either get-with-wait, or do an immediate get followed by a sleep whenever they get MQRC 2033 (no messages in queue).
Hope that helps.
If you are running in an application server environment, then the maxPoolDepth property on the activation spec defines the maximum ServerSessionPool size for the MDB; decreasing this will throttle the number of messages being delivered concurrently.
Of course, if your MDB (or javax.jms.MessageListener in the JSE environment) does nothing but hand the message to something else (or, worse, just spawns an unmanaged Thread and starts it), onMessage will spin rapidly and you can still encounter problems. So in that case you need to limit other resources too, e.g. via thread-pool configuration.
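Outside a container, the same bound can be imposed by hand. The sketch below is simulation only (no MQ involved, all names made up): a Semaphore ensures that at most maxInFlight messages are processed concurrently, which is roughly the effect maxPoolDepth gives you for free.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottleSketch {

    // Push `total` simulated deliveries through a handler while allowing at
    // most `maxInFlight` to run concurrently. Returns the peak concurrency seen.
    static int run(int total, int maxInFlight) {
        Semaphore permits = new Semaphore(maxInFlight);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService listeners = Executors.newFixedThreadPool(8); // eager listener threads
        for (int i = 0; i < total; i++) {
            listeners.submit(() -> {
                try {
                    permits.acquire();                        // block until a slot frees up
                    try {
                        int now = inFlight.incrementAndGet();
                        peak.accumulateAndGet(now, Math::max);
                        Thread.sleep(2);                      // stands in for real message work
                    } finally {
                        inFlight.decrementAndGet();
                        permits.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        listeners.shutdown();
        try {
            listeners.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println(run(40, 3) <= 3); // prints true
    }
}
```

Because the listener threads block on acquire rather than disconnecting, the connection stays up and the excess messages simply wait on the queue, which is what the queue is for.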
Closing the connection to the QMgr is never an efficient approach, as the MQCONN/MQDISC cycle is expensive.
