Moving IBM MQ away from mainframe - best practice?

Today we have our MQ installations primarily on mainframe, but we are considering moving them to Windows or Linux instead.
We have three queue managers (qmgrs) in most environments: two in a queue sharing group across two LPARs, and a stand-alone qmgr for applications that don't need to run 24/7. We have many smaller applications, which share the few qmgrs.
When I read up on building qmgrs on Windows and Linux, I get the impression that most designs favor a qmgr or a cluster per application. Is it a no-go to build a general-purpose qmgr for a hundred small applications on Windows/Linux?
I have considered a multi-instance qmgr (active/passive) or a clustered solution.
What is considered best practice in a scenario where I have several hundred different applications that need MQ communication?

First off, you are asking for an opinion, which is not allowed on Stack Overflow.
Secondly, if you have z/OS (mainframe) applications using MQ then you cannot move MQ off the mainframe because there is no client-mode connectivity for mainframe applications. What I mean is that the mainframe MQ applications cannot connect in client mode to a queue manager running on a distributed platform (i.e. Linux, Windows, etc.). You will need to have queue managers on both mainframe and distributed platforms to allow messages to flow between the platforms. So, your title of "Moving IBM MQ away from mainframe" is a no go unless ALL mainframe MQ applications are moving off the mainframe too.
I get the impression that most designs favor a qmgr or a cluster per application.
I don't know where you read that but it sounds like information from the 90's. Normally, you should only isolate a queue manager if you have a very good reason.
I have considered a multi-instance qmgr (active/passive) or a clustered solution.
I think you need to read up on MQ MI and MQ clustering because they are not mutually exclusive. MQ clustering has nothing to do with fail-over or HA (high-availability).
Here is a description of MQ clustering from the MQ Knowledge Center.

You need to understand and document your requirements.
What failover time can you tolerate?
With shared queues on z/OS, if you kill one queue manager, another can continue processing the messages within seconds. Applications will have to detect that the connection is broken and reconnect.
If you go for a midrange solution, it may take longer to detect that a queue manager has gone down and for the work to switch to an alternative. During this time the in-transit messages will not be available.
If you have 10 midrange queue managers and kill one of them, applications which were connected to the dead queue manager can detect the outage and reconnect to a different queue manager within seconds (for example via a CCDT queue manager group, sketched after this list), so new messages will see only a short blip in throughput. Applications connected to the other 9 queue managers will not be affected, so a smaller "blip" overall.
Do you have response time criteria? Some enterprises have a "budget": no more than 5 ms in MQ, no more than 15 ms in DB2, etc.
Will moving to midrange affect response time, for example is there more or less network latency between clients and servers?
Are you worried about confidentiality of data? On z/OS you can have disk encryption enabled by default.
Are you worried about security of data, for example use of keystores, and having stash files (with the passwords of the keystore) sitting next to the keystore? z/OS is better than midrange in this area.
You can have tamper-proof keystores on z/OS and midrange.
Scaling
How many distributed queue managers will you need to handle the current workload, and any growth (and any unexpected peaks). Does this change your operational model?
If you are using AMS, the maintenance of keystores is challenging. If you add one more recipient, you need to update the keystore for all userids that use the queue, on all queue managers. On z/OS you update one keyring per queue manager.
How would the move to midrange affect your disaster recovery? It may be easier (just spin up a new queue manager) with midrange, or harder - you need to create the environment before you can spin up a new queue manager.
What is the worst-case environment? For example, if the systems you talk to went down for a day, could your queue managers hold/buffer the workload?
If you had a day's worth of data, how long would it take to drain it and send it?
Where are your applications? If you have applications running on z/OS (for example batch, CICS, IMS, WAS), they all need a z/OS queue manager on the same LPAR. If none of the applications are on z/OS, then they can all use client mode to access MQ.
How does security change? (Command access, access to queues)
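On the reconnection point above: on distributed platforms the usual mechanism is a client channel definition table (CCDT) containing a queue manager group. The following is only a hedged sketch; the channel names, host names and the GATEWAY group name are all hypothetical. Each queue manager contributes a client channel carrying the same QMNAME, and applications connect to *GATEWAY (the leading asterisk meaning "any member of the group"):
* One CLNTCONN entry per queue manager, all sharing the group name GATEWAY
DEFINE CHANNEL(GW.QM1.CLNT) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('qm1host(1414)') QMNAME(GATEWAY)
DEFINE CHANNEL(GW.QM2.CLNT) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('qm2host(1414)') QMNAME(GATEWAY)
A client that loses its connection simply reconnects to whichever member of the group is still available.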

Related

Can MQ Support Multiple Separate Clients for the Same Queue While Maintaining Independent Messaging?

We have multiple application environments (development, QA, UAT, etc) that need to connect to fewer provider environments through MQ. For example, the provider only has one test (we'll call it TEST1) environment to which all of the client application environments need to interact. It is imperative that each client environment only receives MQ responses to the messages sent by that respective environment. This is a high volume scenario so correlating message IDs has been ruled out.
Right now TEST1 has a queue set up and it is functional, but if one of the client application environments wants to use it, the others have to be shut off so that messaging doesn't overlap.
Does MQ support a model where multiple clients connect to a single queue while preserving client-specific messaging? If so, where is that controlled (i.e. the channel, queue manager, etc.)? If not, is the only solution to set up additional queues for each corresponding client?
Over the many years I have worked with IBM MQ, I have gone back and forth on this issue. I've come to the conclusion that sharing a queue just makes life more difficult. Queues should be handed out like candy on Halloween. If an application team says that they have 10 components to their application then the MQAdmin should give them 10 queues. To the queue manager or server or CPU or hard disk, there is no difference in resource usage.
Also, use an MQ naming standard that makes sense and is easy to apply security to (see the sketch after the list below), e.g. for the HR (Human Resources) department:
HR.PAYROLL.SALARY
HR.PAYROLL.DEDUCTIONS
HR.PAYROLL.BENEFITS
HR.EMPLOYEE.DETAILS
HR.EMPLOYEE.REVIEWS
etc...
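To show why such a standard makes security easy to apply, here is a minimal sketch on a distributed queue manager (the group name is hypothetical, and SET AUTHREC needs MQ V7.1 or later; setmqaut does the same job on older versions). A single generic profile covers the whole branch of the tree:
DEFINE QLOCAL(HR.PAYROLL.SALARY)
DEFINE QLOCAL(HR.PAYROLL.DEDUCTIONS)
SET AUTHREC PROFILE('HR.PAYROLL.**') OBJTYPE(QUEUE) +
    GROUP('hrpayroll') AUTHADD(PUT, GET, BROWSE, INQ, DSP)
Handing out more queues to the payroll team later costs nothing extra on the security side, because they fall under the same profile.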
You could use a selector such as MQGET(where applname="myapp"), or one based on a specific user-defined property (assuming the sender populates such a property), but that is likely to perform worse than any retrieval by msgid or correlid. Though you've not given any information to demonstrate that get-by-correlid is actually problematic.
And of course any difference between a test and production environment - whether it involves code or configuration - is going to be very risky.
You would not normally share a single destination queue between multiple different application types - multiple queues is far more standard.

How is performance impacted when multiple queue managers are created?

We have IBM MQ V7.1.0 installed and working in production. Can I create a queue manager in the production environment and perform stress testing on that? Will that impact the other existing queue managers? Is there any way I can create a separate environment from the installed IBM MQ (without installing MQ again)?
MQ is I/O intensive, so performance mostly depends on the storage where the log and data directories of your queue managers are, and on the capacity of the network interface(s) of your host.
If you configure the queue managers to have separate (both logically and physically) storage for the data and log directories of each, and you provide enough bandwidth for the machine to handle the message load of multiple queue managers, the performance impact of the queue managers on each other should be minimal.
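For example (the paths and queue manager name are hypothetical), on a distributed installation you can place the data and log directories on separate file systems when the queue manager is created:
crtmqm -md /mqdata/QMTEST -ld /mqlog/QMTEST QMTEST
Putting the test queue manager's log and data on storage that the production queue managers do not share keeps their I/O paths apart.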
That said, I don't think it's a good idea to do stress testing on an in use production environment, as the goal of stress testing is to find the load which "breaks" the system.

MQ (Websphere 7) persist message to file system

How would I set up MQ so that every message received is immediately written to file system?
I have the "redbooks", but I need someone to at least point me to a chapter or heading in the book to figure it out.
We are a .NET shop. I have written C# via API to read the queue, and we currently use BizTalk MQ adapter. Our ultimate goal is to write same message to multiple directories in file system to "clone" the feed for our various test environments (DEV, STAGE, TRAINING, etc..). The problem with BizTalk is that when we consume the message, we map it at the same time to a new message, so the message is already changed, and we want the original raw message to be cloned, not the morphed one. Our vendors don't offer multiple copies of the feed, for example, they offer DEV and PROD, but we have 4 systems internally.
I suppose I could do a C# Windows Service to do it, but I would rather use built-in features of MQ if possible.
There is no configuration required. If the message is persistent, WMQ writes it to disk. However, I don't think that's going to help you because they are not written as discrete messages. There's no disk file to copy and replication only works if the replicated QMgr is identical to the primary and is offline during the replication.
There are a number of solutions to this problem but as of WMQ V7, the easiest one is to use the built-in Pub/Sub functionality. This assumes that the messages are arriving over a QMgr-to-QMgr channel and landing on a queue where you then consume them.
In that case, it is possible to delete the queue and create an alias of the same name over a topic. You then create a new queue and define an administrative subscription that delivers messages on the topic into the new queue. Your app consumes from the new queue.
When you need to send a feed to another QMgr or application, define a new subscription and point it at the new destination queue. Since this is Pub/Sub, MQ will replicate the original message as many times as there are subscriptions and the first application and its messages are not affected. If the destination you need to send to isn't accessible over MQ channels (perhaps DEV and QA are not connected, for example), you can deliver the messages to the new queue, use QLoad from SupportPac MO03 to write them to a file and then use another instance of QLoad to load them onto a different QMgr. If you wanted to move them in real time, you could set up the Q program from SupportPac MA01 to move them direct from the new subscription queue on QMgr1 to the destination queue on QMgr2. And you can replicate across as many systems as you need.
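A hedged MQSC sketch of that alias-over-topic pattern (all object names and the topic string are hypothetical):
* Topic the incoming feed will be published on
DEFINE TOPIC(FEED.TOPIC) TOPICSTR('vendor/feed/raw')
* Once drained, replace the original queue with an alias of the same name over the topic
DELETE QLOCAL(VENDOR.FEED.IN)
DEFINE QALIAS(VENDOR.FEED.IN) TARGTYPE(TOPIC) TARGET(FEED.TOPIC)
* New queue plus administrative subscription for the existing consumer
DEFINE QLOCAL(VENDOR.FEED.MAIN)
DEFINE SUB(VENDOR.FEED.MAIN.SUB) TOPICSTR('vendor/feed/raw') DEST(VENDOR.FEED.MAIN)
* Each additional environment that needs a copy of the feed gets its own subscription
DEFINE QLOCAL(VENDOR.FEED.DEV)
DEFINE SUB(VENDOR.FEED.DEV.SUB) TOPICSTR('vendor/feed/raw') DEST(VENDOR.FEED.DEV)
Messages still arrive on the name VENDOR.FEED.IN, but the alias publishes them and MQ delivers one copy per subscription, so the original consumer is unaffected.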
The SupportPacs main page is here.
If all you are using is the Redbooks, you might want to have a look at the Infocenters. Be sure to use the Infocenter matching the version of WMQ you are using.
WMQ V7.0 Infocenter
WMQ V7.1 Infocenter
WMQ V7.5 Infocenter

Websphere MQ and High Availability

When I read about HA in WebSphere MQ, I always come to the point where the best practice is to create two queue managers handling the same queue and to use the out-of-the-box load balancing, so that when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belong to the queue manager that went down? I mean, do these messages reside there (when the queue is persistent, of course) until the QM is up and running again?
Furthermore, is it possible to create common storage for these doubled queue managers? Then no message would wait for the QM to be up, and every message would be delivered in the proper order. Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In the hardware cluster model, the disk is mounted on only one server at a time and the cluster software monitors for failure and swaps the disk, IP address and possibly other resources to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFS 4 shared disk mount. Both instances compete for locks on the files. The first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using multi-instance CONNAME where you can supply a comma-separated list of IP or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work and CCDT continues to be supported in current versions of WMQ.
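As a hedged illustration of that comma-separated CONNAME (the channel name and host names are hypothetical), the client-connection channel for a multi-instance queue manager simply lists both servers and the client tries them in order:
DEFINE CHANNEL(QM1.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP)
DEFINE CHANNEL(QM1.SVRCONN) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('nodeA.example.com(1414),nodeB.example.com(1414)') QMNAME(QM1)
Whichever instance currently holds the file locks accepts the connection; after a failover the client retries the list and lands on the standby that has become active.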
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.

We use IBM MQ6.1. Can we send messages to Queue using multi threading?

We use IBM MQ 6.1 and use the IBM API to send messages to a queue.
As we need to send a huge number of messages, I was wondering if we can use multithreading to send messages to the same queue.
Any suggestions?
Absolutely. You can have as many threads as memory and CPU will allow. Each thread requires its own connection handle if you want to see a performance improvement. If you use a single connection handle, all GET and PUT activity from multiple threads is synchronized through the connection handle. If you use many connection handles, the GET and PUT activity can occur in parallel. Please see Multithreaded programs in the WMQ Using Java manual for additional details.
Just make sure that OS kernel settings like MAXUPROCS are set to allow WMQ to run sufficient threads to receive the connections and that WMQ settings like MAXHANDS and MAXCHANNELS are bumped up to accommodate the load. Keep in mind that units of work are thread-scoped so each thread is independent of the others for syncpoint - for example, when you issue COMMIT inside a thread it only commits the messages put by that particular thread assuming it has a dedicated connection. The queues have an attribute for DEFSOPT (default share options) but this relates to how many input handles can be active. Even if you open the queue for exclusive input, you can have many threads writing to it.
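As a rough sketch of those limits on a distributed queue manager (the values are placeholders, size them for your own load), the handle limit is a queue manager attribute changed in runmqsc:
ALTER QMGR MAXHANDS(1024)
while the channel limits live in the CHANNELS stanza of qm.ini:
CHANNELS:
   MaxChannels=500
   MaxActiveChannels=500
The queue manager needs a restart to pick up qm.ini changes.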
Also, v6.x of WMQ will be end-of-life as of September 2011. Start planning now to get to v7 to remain on a supported version. If you are using WMQ Client connections, upgrade the client as well as the QMgr. The v7 Client works fine with a v6 QMgr and can be upgraded independently. Of course it supports only the v6 functionality when used with a v6.x QMgr. You can download the v7 client as SupportPac MQC7.
