Can MQ Support Multiple Separate Clients for the Same Queue While Maintaining Independent Messaging?

We have multiple application environments (development, QA, UAT, etc.) that need to connect to fewer provider environments through MQ. For example, the provider has only one test environment (we'll call it TEST1) with which all of the client application environments need to interact. It is imperative that each client environment receives only the MQ responses to the messages sent by that environment. This is a high-volume scenario, so correlating message IDs has been ruled out.
Right now TEST1 has a queue set up and functional, but if one client environment wants to use it, the others have to be shut off so that messaging doesn't overlap.
Does MQ support a model where multiple clients connect to a single queue while preserving client-specific messaging? If so, where is that controlled (e.g., the channel, the queue manager)? If not, is the only solution to set up an additional queue for each client?

Over the many years I have worked with IBM MQ, I have gone back and forth on this issue. I've come to the conclusion that sharing a queue just makes life more difficult. Queues should be handed out like candy on Halloween. If an application team says they have 10 components in their application, then the MQAdmin should give them 10 queues. To the queue manager, the server, the CPU, or the hard disk, there is no meaningful difference in resource usage.
Also, use an MQ naming standard that makes sense and is easy to apply security to, e.g. for the HR (Human Resources) department:
HR.PAYROLL.SALARY
HR.PAYROLL.DEDUCTIONS
HR.PAYROLL.BENEFITS
HR.EMPLOYEE.DETAILS
HR.EMPLOYEE.REVIEWS
etc...
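As a sketch, a standard like the one above makes both object definition and security administration straightforward in MQSC (the group name below is illustrative; one generic authority record can cover a whole branch of the naming tree):

```mqsc
* Define the department's queues following the HR.* naming standard
DEFINE QLOCAL('HR.PAYROLL.SALARY')
DEFINE QLOCAL('HR.PAYROLL.DEDUCTIONS')
DEFINE QLOCAL('HR.PAYROLL.BENEFITS')

* One generic profile grants the payroll group access to every payroll queue
SET AUTHREC PROFILE('HR.PAYROLL.**') OBJTYPE(QUEUE) +
    GROUP('hrpayroll') AUTHADD(PUT,GET,BROWSE,INQ)
```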

You could use a selector on the MQGET (for example, selecting where the application name equals "myapp"), or select on a user-defined message property, assuming the sender populates such a property. However, selector-based retrieval is likely to perform worse than any retrieval by message ID or correlation ID. And you haven't given any information demonstrating that get-by-correlation-ID is actually problematic at your volumes.
And of course any difference between a test and production environment - whether it involves code or configuration - is going to be very risky.
You would not normally share a single destination queue between multiple different application types - multiple queues is far more standard.
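In practice the multiple-queues approach for this scenario usually means one reply queue per client environment on the provider's queue manager; each client names its own queue in the ReplyToQ field of the request, and the provider replies to whatever ReplyToQ it receives, so responses can never cross environments. A minimal MQSC sketch (queue names are illustrative):

```mqsc
* On the TEST1 queue manager: one reply queue per client environment
DEFINE QLOCAL('APP.REPLY.DEV')
DEFINE QLOCAL('APP.REPLY.QA')
DEFINE QLOCAL('APP.REPLY.UAT')
```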

Related

Rabbit MQ performance

We have this use case for implementing data synchronization between two environments (envs for short): an active (and very busy) env and a fail-over env.
The two envs have multiple servers with multiple Web Services (WS) on each server, but only one database (DB) per env. The idea is that if any WS on any server in the active env sends a message, exactly one WS on one of the servers in the fail-over env receives it and updates the DB accordingly.
We do not care much WHEN the message is delivered. What we care about is:
if RabbitMQ accepts the message, it must deliver it at some point
if RabbitMQ's status prevents it from delivering a message, it should reject it right away
in both cases above, there should be minimal performance impact to the WS
We think we can use a RabbitMQ broker with a quorum queue to make this possible (we did some initial experiments).
But we have this question regarding configuration: can we achieve this in synchronous mode without much of a performance penalty, or in async mode without running out of resources (threads, memory) waiting for task cancellation?
What would the configuration look like in each case?

Moving IBM MQ away from mainframe - best practice?

Today we have our MQ installations primarily on the mainframe, but we are considering moving them to Windows or Linux instead.
We have three queue managers (qmgrs) in most environments: two in a queue-sharing group across two LPARs, and a stand-alone qmgr for applications that don't need to run 24/7. We have many smaller applications which share the few qmgrs.
When I read up on building qmgrs on Windows and Linux, I get the impression that most designs favor a qmgr or a cluster per application. Is it a no-go to build a general-purpose qmgr for a hundred small applications on Windows/Linux?
I have considered a Multi instance Qmgr (active/passive) or a clustered solution.
What is considered best practice in a scenario where I have several hundred different applications that need MQ communication?
First off, you are asking for an opinion, which is not allowed on Stack Overflow.
Secondly, if you have z/OS (mainframe) applications using MQ then you cannot move MQ off the mainframe because there is no client-mode connectivity for mainframe applications. What I mean is that the mainframe MQ applications cannot connect in client mode to a queue manager running on a distributed platform (i.e. Linux, Windows, etc.). You will need to have queue managers on both mainframe and distributed platforms to allow messages to flow between the platforms. So, your title of "Moving IBM MQ away from mainframe" is a no go unless ALL mainframe MQ applications are moving off the mainframe too.
I get the impression that most designs favor a qmgr or a cluster per application.
I don't know where you read that but it sounds like information from the 90's. Normally, you should only isolate a queue manager if you have a very good reason.
I have considered a Multi instance Qmgr (active/passive) or a clustered solution.
I think you need to read up on MQ MI and MQ clustering because they are not mutually exclusive. MQ clustering has nothing to do with fail-over or HA (high-availability).
Here is a description of MQ clustering from the MQ Knowledge Center.
You need to understand and document your requirements.
What failover time can you tolerate?
With shared queues on z/OS, if you kill one queue manager, another can continue processing the messages within seconds. Applications will have to detect that the connection is broken and reconnect.
If you go for a mid range solution, it may take longer to detect a queue manager has gone down, and for the work to switch to an alternative. During this time the in-transit messages will not be available.
If you have 10 mid-range queue managers and kill one of them, applications which were connected to the dead queue manager can detect the outage and reconnect to a different queue manager within seconds, so new messages will see only a short blip in throughput. Applications connected to the other 9 queue managers will not be affected, so a smaller "blip" overall.
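Much of that reconnection blip can be hidden from applications by MQ's automatic client reconnection. A hedged MQSC sketch (channel name and connection list are illustrative; DEFRECON(YES) lets the client reconnect to any queue manager in the list, not just the original one):

```mqsc
* Client-connection channel with automatic reconnection enabled
DEFINE CHANNEL('APP.CLIENTS') CHLTYPE(CLNTCONN) +
       CONNAME('host1(1414),host2(1414)') +
       DEFRECON(YES)
```

With this in place, the MQ client itself drives the reconnect, so applications do not each need explicit detect-and-reconnect logic.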
Do you have response-time criteria? Some enterprises have a "budget": no more than 5 ms in MQ, no more than 15 ms in DB2, etc.
Will moving to mid-range affect response time? For example, is there more or less network latency between clients and servers?
Are you worried about confidentiality of data? On z/OS you can have disk encryption enabled by default.
Are you worried about security of data, for example the use of keystores, and having stash files (with the passwords of the keystore) sitting next to the keystore? z/OS is better than mid-range in this area.
You can have tamper-proof keystores on both z/OS and mid-range.
Scaling
How many distributed queue managers will you need to handle the current workload and any growth (and any unexpected peaks)? Does this change your operational model?
If you are using AMS, the maintenance of keystores is challenging. If you add one more recipient, you need to update the keystore for every userid that uses the queue, on every queue manager. On z/OS you update one key ring per queue manager.
How would the move to mid-range affect your disaster recovery? It may be easier with mid-range (just spin up a new queue manager), or harder (you need to create the environment before you can spin up a new queue manager).
What is the worst case? For example, if the systems you talk to went down for a day, can your queue managers hold/buffer the workload?
If you had a day's worth of data, how long would it take to drain and send it?
Where are your applications? If you have applications running on z/OS (for example batch, CICS, IMS, WAS), they all need a z/OS queue manager on the same LPAR. If none of the applications are on z/OS, then they can all use client mode to access MQ.
How does security change? (Command access, access to queues)

Multiple WebSphere Application Servers attached to a single WebSphere MQ failing

Issue:
Having multiple consumer applications (activation specifications) attached to a single queue on distributed VM servers is causing a null payload in an MQ message.
Note: See the solution notes at the bottom. There was no issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. 1 application is a publisher, and the other 2 applications are consumers attached to a single queue manager and a single queue.
The 2 consumer applications are pulling off the messages and processing them. The consumer application on the separate server receives a null payload. It seems to be an issue with having multiple application server instances attached to the queue; I confirmed this by deploying the publisher on server 2 with consumer 2, after which consumer 1 fails.
Question:
Has anyone tried attaching multiple MDB applications, deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue on a queue manager installed on another machine.
Goal:
Distributed computing, scaling up against large number of messages.
I'm thinking this is a configuration issue, but I'm not 100% sure where to look. I had read that you could use an MQ link, but I don't see why I would need service integration bus messaging here.
Supporting documentation:
MQ Link
UPDATE: I fixed the problem. It was related to a combination of a class loader issue and duplicate classes. See the solution notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified the question, and added the overall goal.
- Added reference notes to the solution.
Has anyone tried attaching multiple MDB applications deployed on separate server instances bind to one Local MQ?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But I suspect what you are doing is: multiple MDB applications deployed on separate servers, connecting to one queue manager and listening on the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each to the topic being published by your publisher.
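That layout can be sketched in MQSC (topic string, queue, and subscription names are illustrative): the publisher puts to the topic, and each consumer reads its own queue fed by an administrative subscription.

```mqsc
DEFINE TOPIC('APP.EVENTS') TOPICSTR('app/events')

* One queue and one subscription per consumer application
DEFINE QLOCAL('APP.CONSUMER1.Q')
DEFINE QLOCAL('APP.CONSUMER2.Q')
DEFINE SUB('APP.CONSUMER1.SUB') TOPICOBJ('APP.EVENTS') DEST('APP.CONSUMER1.Q')
DEFINE SUB('APP.CONSUMER2.SUB') TOPICOBJ('APP.EVENTS') DEST('APP.CONSUMER2.Q')
```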
Addition:
I suspect the load-balancing problem you may be facing is that when your first application gets the message, it doesn't issue a commit. So there is an uncommitted message in the queue, which may be stopping your other application from getting a message from the queue. When your first application finishes its processing it issues a commit, but then it is immediately ready to pick up the next message, and hence it again issues a get.
In my architecture, we have implemented load balancing using multiple queue managers like below:
You create 3 queue managers, say GatewayQM, App1QM and App2QM.
Keep the three queue managers in the same cluster.
Create an alias queue (shared in the cluster) in GatewayQM and have your putting application put messages to that gateway queue.
Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation gives you better security and acts as an effective load balancer.
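A hedged MQSC sketch of that three-queue-manager layout (cluster and object names are illustrative, and the three queue managers are assumed to already be members of the cluster):

```mqsc
* On App1QM and on App2QM: a same-named local queue, advertised to the cluster
DEFINE QLOCAL('APP.WORK') CLUSTER('APPCLUS') DEFBIND(NOTFIXED)

* On GatewayQM: an alias the putting application opens; the cluster
* workload algorithm spreads messages across the two APP.WORK instances
DEFINE QALIAS('APP.GATEWAY') TARGET('APP.WORK')
```

DEFBIND(NOTFIXED) is what allows per-message workload balancing rather than binding all messages to one instance on open.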
This specific problem was caused by a code issue in combination with class loading being set to "Parent First" in the WebSphere console. It would work on one node while the other nodes in the cluster failed; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration, binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However, points do go to nitgeek's solution referenced above if you are looking for an extremely high-volume solution. It's important to understand that a single queue can have a very high depth, and it takes a lot to fully utilize one. My current configuration is a good starting point for quick configuration and distributed processing using multiple MDBs.

MQ (Websphere 7) persist message to file system

How would I set up MQ so that every message received is immediately written to file system?
I have the "redbooks", but I need someone to at least point me to a chapter or heading in the book to figure it out.
We are a .NET shop. I have written C# code against the API to read the queue, and we currently use the BizTalk MQ adapter. Our ultimate goal is to write the same message to multiple directories in the file system to "clone" the feed for our various test environments (DEV, STAGE, TRAINING, etc.). The problem with BizTalk is that when we consume the message, we map it at the same time to a new message, so the message is already changed, and we want the original raw message to be cloned, not the morphed one. Our vendors don't offer multiple copies of the feed; for example, they offer DEV and PROD, but we have 4 systems internally.
I suppose I could do a C# Windows Service to do it, but I would rather use built-in features of MQ if possible.
There is no configuration required: if the message is persistent, WMQ writes it to disk. However, I don't think that's going to help you, because messages are not written as discrete files. There's no disk file to copy, and replication only works if the replicated QMgr is identical to the primary and is offline during the replication.
There are a number of solutions to this problem but as of WMQ V7, the easiest one is to use the built-in Pub/Sub functionality. This assumes that the messages are arriving over a QMgr-to-QMgr channel and landing on a queue where you then consume them.
In that case, it is possible to delete the queue and create an alias of the same name over a topic. You then create a new queue and define an administrative subscription that delivers messages on the topic into the new queue. Your app consumes from the new queue.
When you need to send a feed to another QMgr or application, define a new subscription and point it at the new destination queue. Since this is Pub/Sub, MQ will replicate the original message as many times as there are subscriptions, and the first application and its messages are not affected.
If the destination you need to send to isn't accessible over MQ channels (perhaps DEV and QA are not connected, for example), you can deliver the messages to the new queue, use QLoad from SupportPac MO03 to write them to a file, and then use another instance of QLoad to load them onto a different QMgr. If you wanted to move them in real time, you could set up the Q program from SupportPac MA01 to move them directly from the new subscription queue on QMgr1 to the destination queue on QMgr2. And you can replicate across as many systems as you need.
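The alias-over-topic steps described above can be sketched in MQSC (object names are illustrative; the original queue here is assumed to be called APP.IN):

```mqsc
* Replace the original queue with an alias that resolves to a topic
DELETE QLOCAL('APP.IN')
DEFINE TOPIC('APP.IN.TOPIC') TOPICSTR('app/in')
DEFINE QALIAS('APP.IN') TARGET('APP.IN.TOPIC') TARGTYPE(TOPIC)

* New queue for the original consumer, fed by an administrative subscription
DEFINE QLOCAL('APP.IN.MAIN')
DEFINE SUB('APP.IN.MAIN.SUB') TOPICOBJ('APP.IN.TOPIC') DEST('APP.IN.MAIN')

* Each additional feed (DEV, STAGE, ...) is just one more subscription
DEFINE QLOCAL('APP.IN.DEV')
DEFINE SUB('APP.IN.DEV.SUB') TOPICOBJ('APP.IN.TOPIC') DEST('APP.IN.DEV')
```

The sending application is unchanged: it still opens the name APP.IN, which now resolves to the topic, and every subscription gets its own copy of each message.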
The SupportPacs main page is here.
If all you are using is the Redbooks, you might want to have a look at the Infocenters. Be sure to use the Infocenter matching the version of WMQ you are using.
WMQ V7.0 Infocenter
WMQ V7.1 Infocenter
WMQ V7.5 Infocenter

When to use persistence with Java Messaging and Queuing Systems

I'm performing a trade study on (Java) Messaging & Queuing systems for an upcoming re-design of a back-end framework for a major web application (on Amazon's EC2 Cloud, x-large instances). I'm currently evaluating ActiveMQ and RabbitMQ.
The plan is to have 5 different queues, with one being a dead-letter queue. The number of messages sent per day will be anywhere between 40K and 400K. As I plan for the message content to be a pointer to an XML file location on a data store, I expect the messages to be about 64 bytes. However, for evaluation purposes, I would also like to consider sending raw XML in the messages, with an average file size of 3KB.
My main questions: When/how many messages should be persisted on a daily basis? Is it reasonable to persist all messages, considering the amounts I specified above? I know that persisting will decrease performance, perhaps by a lot. But, by not persisting, a lot of RAM is being used. What would some of you recommend?
Also, I know that there is a lot of information online regarding ActiveMQ (JMS) vs RabbitMQ (AMQP). I have done a ton of research and testing. It seems like either implementation would fit my needs. Considering the information that I provided above (file sizes and # of messages), can anyone point out a reason(s) to use a particular vendor that I may have missed?
Thanks!
When/how many messages should be persisted on a daily basis? Is it reasonable to persist all messages, considering the amounts I specified above?
JMS persistence doesn't replace a database; it should be considered a short-lived buffer between producers and consumers of data. That said, the volume and size of messages you mention won't tax the persistence adapters on any modern JMS system (configured properly, anyway), and persistence can be used to buffer messages for extended durations as necessary (just use a reliable message-store architecture).
I know that persisting will decrease performance, perhaps by a lot. But, by not persisting, a lot of RAM is being used. What would some of you recommend?
In my experience, enabling message persistence isn't a significant performance hit, and it is almost always done to guarantee messages. For most applications, the processes upstream (producers) or downstream (consumers) end up being the bottlenecks (especially database I/O), not the JMS persistence stores.
Also, I know that there is a lot of information online regarding ActiveMQ (JMS) vs RabbitMQ (AMQP). I have done a ton of research and testing. It seems like either implementation would fit my needs. Considering the information that I provided above (file sizes and # of messages), can anyone point out a reason(s) to use a particular vendor that I may have missed?
I have successfully used ActiveMQ on many projects for both low- and high-volume messaging. I'd recommend using it along with a routing engine like Apache Camel to streamline integration and complex routing patterns.
A messaging system should be used as temporary storage, and applications should be designed to pull messages off as soon as possible: the more messages that accumulate, the lower the performance, while draining promptly gives better performance and lower memory usage. Whether or not a message is persistent, memory is still used, because messages are kept in memory for performance; persistent messages are additionally backed up on disk.
The decision on message persistence depends on how critical a message is and whether it needs to survive a messaging-provider restart.
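In IBM MQ terms, that decision is commonly expressed as the queue's default persistence, which messages inherit unless the application sets persistence explicitly per message (queue names below are illustrative):

```mqsc
* Critical feed: messages survive a queue manager restart by default
DEFINE QLOCAL('APP.CRITICAL') DEFPSIST(YES)

* Non-critical feed: losing messages on restart is acceptable
DEFINE QLOCAL('APP.METRICS') DEFPSIST(NO)
```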
You may want to have a look at IBM WebSphere MQ. It can meet your requirements. It has JMS as well as proprietary APIs for developing applications.
ActiveMQ is a good choice for open-source JMS; more expensive options I can recommend are TIBCO EMS or maybe Solace.
But JMS is actually built for once-only delivery, and longer-term persistence is left out of the specification. You could of course go with a database, but that's heavyweight and possibly expensive.
What I would recommend (note: I work for CodeStreet) is our ReplayService for JMS. It lets you store any type of JMS message (or native WebSphere MQ messages) in a high-performance, file-based disk store. Each message is automatically assigned a nanosecond timestamp and a globalMsgID that you can overwrite on publication. So the XML messages could be recorded by the ReplayServer, and your actual message could just contain the globalMsgID as a reference, and perhaps some properties.
Once a receiver receives the globalMsgID, it can then replay that message from the ReplayServer if needed.
But on the other hand, 400K × 3KB XML messages per day should be easily doable for ActiveMQ or others. Also, you should compress your XML messages before sending.
