How is performance impacted when multiple queue managers are created? - ibm-mq

We have IBM MQ V7.1.0 installed and working in production. Can I create a queue manager in the production environment and perform stress testing on it? Will that impact the existing queue managers? Is there any way to create a separate environment from the installed IBM MQ (without installing MQ again)?

MQ is I/O intensive, so performance mostly depends on the storage that holds the log and data directories of your queue managers, and on the capacity of the network interface(s) of your host.
If you configure the queue managers to have separate (both logically and physically) storage for their data and log directories, and you provide enough bandwidth for the machine to handle the message load of multiple queue managers, the performance impact of the queue managers on each other should be minimal.
That said, I don't think it's a good idea to do stress testing in an in-use production environment, as the goal of stress testing is to find the load which "breaks" the system.
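If you do set up a dedicated test queue manager, a minimal JMS load generator is enough to exercise its log and queue-file I/O. The sketch below is an illustration only, assuming the IBM MQ classes for JMS are on the classpath and that "cf" and "testQueue" point at the test queue manager rather than a production one; the names and message size are placeholders.

```java
import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Minimal stress-test driver: pumps fixed-size persistent messages at a queue
// so the queue manager's log and data storage become the limiting factor.
public class LoadGenerator {
    public static void pump(ConnectionFactory cf, Queue testQueue, int count)
            throws JMSException {
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(testQueue);
            byte[] payload = new byte[1024]; // 1 KB test payload (placeholder size)
            for (int i = 0; i < count; i++) {
                BytesMessage msg = session.createBytesMessage();
                msg.writeBytes(payload);
                producer.send(msg); // persistent by default, so it drives log I/O
            }
        } finally {
            conn.close();
        }
    }
}
```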

Related

Can MQ Support Multiple Separate Clients for the Same Queue While Maintaining Independent Messaging?

We have multiple application environments (development, QA, UAT, etc.) that need to connect to fewer provider environments through MQ. For example, the provider only has one test environment (we'll call it TEST1) with which all of the client application environments need to interact. It is imperative that each client environment only receives the MQ responses to the messages sent by that respective environment. This is a high volume scenario, so correlating message IDs has been ruled out.
Right now TEST1 has a queue set up and it is functional, but if one of the client application environments wants to use it, the others have to be shut off so that messaging doesn't overlap.
Does MQ support a model where multiple clients connect to a single queue while preserving the client-specific messaging? If so, where is that controlled (e.g. the channel, queue manager, etc.)? If not, is the only solution to set up additional queues for each corresponding client?
Over the many years I have worked with IBM MQ, I have gone back and forth on this issue. I've come to the conclusion that sharing a queue just makes life more difficult. Queues should be handed out like candy on Halloween. If an application team says that they have 10 components to their application then the MQAdmin should give them 10 queues. To the queue manager or server or CPU or hard disk, there is no difference in resource usage.
Also, use an MQ naming standard that makes sense and is easy to apply security to, e.g. for the HR (Human Resources) department:
HR.PAYROLL.SALARY
HR.PAYROLL.DEDUCTIONS
HR.PAYROLL.BENEFITS
HR.EMPLOYEE.DETAILS
HR.EMPLOYEE.REVIEWS
etc...
You could use a selector such as MQGET (where applname = "myapp"), or one based on a specific user-defined property, assuming the sender populates such a property, but that is likely to perform worse than any retrieval by msgid or correlid. You have not given any information to demonstrate that get-by-correlid is actually problematic, though.
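As a rough illustration of the selector approach (not something from the original answer), here is a minimal sketch of a consumer that filters a shared reply queue on a user-defined property; the property name "clientEnv" and the objects "cf" and "sharedReplyQueue" are hypothetical, and the sender would have to set the property on every reply.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Each client environment only sees messages whose clientEnv property matches.
public class EnvironmentFilteredConsumer {
    public static Message receiveForEnvironment(ConnectionFactory cf,
                                                Queue sharedReplyQueue,
                                                String environment) throws JMSException {
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // JMS message selector, evaluated by the provider before delivery
            String selector = "clientEnv = '" + environment + "'";
            MessageConsumer consumer = session.createConsumer(sharedReplyQueue, selector);
            conn.start();
            return consumer.receive(5000); // wait up to 5 seconds
        } finally {
            conn.close();
        }
    }
}
```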
And of course any difference between a test and production environment - whether it involves code or configuration - is going to be very risky.
You would not normally share a single destination queue between multiple different application types - multiple queues is far more standard.

Moving IBM MQ away from mainframe - best practice?

Today we have our MQ installations primarily on the mainframe, but we are considering moving them to Windows or Linux instead.
We have three queue managers (qmgrs) in most environments: two in a queue sharing group across two LPARs, and a stand-alone qmgr for applications that don't need to run 24/7. We have many smaller applications which share the few qmgrs.
When I read up on building qmgrs on Windows and Linux, I get the impression that most designs favor a qmgr or a cluster per application. Is it a no-go to build a general-purpose qmgr for a hundred small applications on Windows/Linux?
I have considered a multi-instance qmgr (active/passive) or a clustered solution.
What is considered best practice in a scenario where I have several hundred different applications that need MQ communication?
First off, you are asking for an opinion, which is not allowed on Stack Overflow.
Secondly, if you have z/OS (mainframe) applications using MQ then you cannot move MQ off the mainframe because there is no client-mode connectivity for mainframe applications. What I mean is that the mainframe MQ applications cannot connect in client mode to a queue manager running on a distributed platform (i.e. Linux, Windows, etc.). You will need to have queue managers on both mainframe and distributed platforms to allow messages to flow between the platforms. So, your title of "Moving IBM MQ away from mainframe" is a no go unless ALL mainframe MQ applications are moving off the mainframe too.
"I get the impression that most designs favor a qmgr or a cluster per application."
I don't know where you read that but it sounds like information from the 90's. Normally, you should only isolate a queue manager if you have a very good reason.
"I have considered a Multi instance Qmgr (active/passive) or a clustered solution."
I think you need to read up on MQ MI and MQ clustering because they are not mutually exclusive. MQ clustering has nothing to do with fail-over or HA (high-availability).
See the description of MQ clustering in the MQ Knowledge Center.
You need to understand and document your requirements.
What failover time can you tolerate?
With shared queues on z/OS, if you kill one queue manager, another can continue processing the messages within seconds. Applications will have to detect that the connection is broken and reconnect.
If you go for a mid range solution, it may take longer to detect that a queue manager has gone down and for the work to switch to an alternative. During this time the in-transit messages will not be available.
If you have 10 mid range queue managers and kill one of them, applications which were connected to the dead queue manager can detect the outage and reconnect to a different queue manager within seconds, so new messages will see only a short blip in throughput. Applications connected to the other 9 queue managers will not be affected, so a smaller "blip" overall.
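As a rough sketch of that detect-and-reconnect behaviour on the application side (an illustration, not part of the original answer), a JMS client can register an ExceptionListener and switch to another queue manager when the connection breaks; the two connection factories below are placeholders for two of the midrange queue managers.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

// Detects a broken connection and fails over to an alternative queue manager.
public class ReconnectingClient implements ExceptionListener {
    private final ConnectionFactory primary;
    private final ConnectionFactory alternate;
    private volatile Connection connection;

    public ReconnectingClient(ConnectionFactory primary, ConnectionFactory alternate) {
        this.primary = primary;
        this.alternate = alternate;
    }

    public void start() throws JMSException {
        connection = primary.createConnection();
        connection.setExceptionListener(this); // invoked when the connection fails
        connection.start();
    }

    @Override
    public void onException(JMSException broken) {
        try {
            // The queue manager is gone: reconnect to a different one within seconds
            connection = alternate.createConnection();
            connection.setExceptionListener(this);
            connection.start();
        } catch (JMSException e) {
            // Both queue managers unavailable; the application must retry later
        }
    }
}
```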
Do you have response time criteria? Some enterprises have a "budget": no more than 5 ms in MQ, no more than 15 ms in DB2, etc.
Will moving to midrange affect response time? For example, is there more or less network latency between clients and servers?
Are you worried about confidentiality of data? On z/OS you can have disk encryption enabled by default.
Are you worried about security of data, for example the use of keystores, and having stash files (with the passwords of the keystore) sitting next to the keystore? z/OS is better than midrange in this area.
You can have tamper-proof keystores on both z/OS and midrange.
Scaling
How many distributed queue managers will you need to handle the current workload, and any growth (and any unexpected peaks)? Does this change your operational model?
If you are using AMS, the maintenance of keystores is challenging. If you add one more recipient, you need to update the keystore for every userid that uses the queue, on every queue manager. With z/OS you update one key ring per queue manager.
How would the move to midrange affect your disaster recovery? It may be easier (just spin up a new queue manager) with midrange, or harder - you need to create the environment before you can spin up a new queue manager.
What is the worst case? For example, if the systems you talk to went down for a day, can your queue managers hold/buffer the workload?
If you had a day's worth of data, how long would it take to drain it and send it?
Where are your applications? If you have applications running on z/OS (for example batch, CICS, IMS, WAS), they all need a z/OS queue manager on the same LPAR. Applications that are not on z/OS can all use client mode to access MQ.
How does security change? (Command access, access to queues)

How to create a cluster for TIBCO EMS?

I've created an administration domain on a Windows node; now I need to create a cluster and add that node into that cluster. How do I go about it?
There is no concept of "Cluster" (in a WMQ sense) with EMS, but there are notions of "connected EMS servers" (routes), and bridging of queues (bridges, not unlike WMQ remote queues). There is also the notion of "Multi-instance local HA".
You may want to:
Plan your EMS deployment strategy, to properly balance latency, availability, license cost and performance
Execute the plan by:
Setting up the local HA, with a shared FS if shared state is needed (here is my article on that)
Making use of the "Routes" feature of EMS to link your multiple EMS HA instances together (see page 569 of the user guide)
Note: if you have a limited number of instances but a MASSIVE number of clients, consider multicast OR another product (like FTL or RabbitMQ).

Multiple Websphere Application Servers attached to a single Websphere MQ failing

Issue:
Having multiple consumer applications' activation specifications attached to a single queue on distributed VM servers is causing a null payload in an MQ message.
Note: see the notes at the bottom. There was no issue with MQ itself.
Details:
I have 3 Websphere applications deployed across 2 VM servers. 1 application is a publisher and the other 2 applications are consumers attached to a single queue manager and queue.
The 2 consumer applications are pulling off the messages and processing them. The consumer application on the separate server receives a null payload. I have confirmed that it seems to be an issue with having multiple application server instances attached to MQ: when I deploy the publisher on server 2 with consumer 2, consumer 1 fails.
Question:
Has anyone tried attaching multiple MDB applications, deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
Websphere 7, EJB 3.0 MDBs, transactions turned off, queue on a queue manager installed on another machine.
Goal:
Distributed computing, scaling up against a large number of messages.
I'm thinking this is a configuration issue, but I'm not 100% sure where to look. I had read that you could use an MQ link, but I don't see why I would need to use the service integration bus.
Supporting Documentation:
[MQ Link][1]
UPDATE: I fixed the problem; it was related to a combination of a class loader issue and duplicate classes. See the Solution Notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified question and added overall goal.
- Referenced notes to the solution.
"Has anyone tried attaching multiple MDB applications deployed on separate server instances bind to one Local MQ?"
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all the applications work fine.
But I suspect what you are doing is: multiple MDB applications deployed on separate servers, connecting to one queue manager and listening on the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each to the topic being published by your publisher.
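Outside the container, the same idea looks roughly like the sketch below: each consuming application owns its own durable subscription on the publisher's topic, so each one receives its own copy of every published message. The client ID, subscription name, "cf" and "eventsTopic" are hypothetical placeholders; in WebSphere the equivalent settings would live on each MDB's activation specification.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

// One durable subscription per application, so consumers never steal each
// other's messages from a shared queue.
public class ApplicationSubscriber {
    public static MessageConsumer subscribe(ConnectionFactory cf, Topic eventsTopic,
                                            String applicationName) throws JMSException {
        Connection conn = cf.createConnection();
        conn.setClientID("CLIENT_" + applicationName); // one client ID per application
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createDurableSubscriber(eventsTopic, "SUB_" + applicationName);
        conn.start();
        return consumer; // the connection stays open while the consumer is in use
    }
}
```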
Addition:
I suspect that, for load balancing, the problem you may be facing is that when your first application gets the message, it doesn't issue a commit. So there will be an uncommitted message in the queue, which may be stopping your other application from getting a message from the queue. When your first application finishes its processing, it issues a commit, but then it is again ready to pick up a message and hence it issues another get.
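For illustration (not from the original answer), here is a minimal sketch of a consumer working under a transacted session that commits as soon as processing finishes, so the message does not sit uncommitted on the queue while the application is busy; "cf" and "inQueue" are placeholders.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Gets one message inside a unit of work and commits promptly.
public class TransactedConsumer {
    public static void consumeOne(ConnectionFactory cf, Queue inQueue) throws JMSException {
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(inQueue);
            conn.start();
            Message msg = consumer.receive(5000);
            if (msg != null) {
                // ... process the message ...
                session.commit(); // releases the message from the unit of work
            }
        } finally {
            conn.close();
        }
    }
}
```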
In my architecture, we have implemented load balancing using multiple queue managers like below:
You create 3 queue managers, say GatewayQM, App1QM and App2QM.
Keep the three queue managers in the same cluster.
Create an alias queue (shared in the cluster) in GatewayQM and ask your putting application to put messages on the gateway queue.
Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation provides you with better security and serves as a perfect load balancer.
This specific problem was caused by a code issue combined with class loading being set to "Parent First" in the Websphere console. It would work on one node while the other nodes in the cluster failed; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration, binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However, "points" do go to the solution referenced above by "nitgeek" if you are looking for an extremely high volume solution. It's important to understand that a single queue can have a very high depth and it takes a lot to fully utilize one. My current configuration is a good starting point for quick configuration and distributed processing using multiple MDBs.

Websphere MQ and High Availability

When I read about HA in Websphere MQ I always come to the point where the best practice is said to be to create two Queue Managers handling the same queue and to use the out-of-the-box load balancing. Then, when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belong to the Queue Manager that went down? I mean, do these messages reside there (when the queue is persistent, of course) until the QM is up and running again?
Furthermore, is it possible to create a common storage for these doubled Queue Managers? Then no message would have to wait for the QM to come back up, and every message would be delivered in the proper order. Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In hardware cluster model the disk is mounted to only one server and the cluster software monitors for failure and swaps the disk, IP address and possibly other resources to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFS 4 shared disk mount. Both instances compete for locks on the files. The first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using multi-instance CONNAME where you can supply a comma-separated list of IP or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work and CCDT continues to be supported in current versions of WMQ.
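As a rough sketch of the multi-instance CONNAME approach from the JMS side (assuming the IBM MQ classes for JMS, com.ibm.mq.jms and com.ibm.msg.client.wmq, are available), a client connection factory can be given the comma-separated list of instances and asked to reconnect automatically; the host names, channel and queue manager name below are placeholders.

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

// Client connection that can fail over between the active and standby
// instances of a multi-instance queue manager.
public class MiqmClient {
    public static Connection connect() throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Comma-separated list: both instances of the same queue manager
        cf.setConnectionNameList("mqhost1(1414),mqhost2(1414)");
        cf.setQueueManager("QM1");
        cf.setChannel("APP.SVRCONN");
        // Ask the client to reconnect automatically to any instance in the list
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
        return cf.createConnection();
    }
}
```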
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.
