I have a cluster named inventory with 4 queue managers defined: london, paris, newyork and tokyo. If london and paris are the default queue managers which get messages, how can I make newyork and tokyo the defaults to receive messages, and prevent clients from putting messages on london and paris while still keeping them in the cluster? Can this be achieved with workload management?
If there are any other solutions please let me know. All this should be done without making any changes on the client side.
Thanks
Adding to T.Rob's suggestions, here are a couple of options you could take a look at:
Disable put on the cluster queue instances in london and paris, so that messages are distributed only between newyork and tokyo (a minimal MQSC sketch follows after these options).
Write a cluster workload exit that skips the cluster queue instances in london and paris and puts messages only to the instances in newyork and tokyo.
See Writing and compiling cluster workload exits from the MQ 7 Documentation.
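For the first option, a minimal MQSC sketch might look like the following, assuming the clustered queue is named APP.QUEUE (an example name only); run it in runmqsc against london and paris only:
* Inhibit puts on the local instance of the clustered queue
ALTER QLOCAL(APP.QUEUE) PUT(DISABLED)
* Verify the change
DISPLAY QLOCAL(APP.QUEUE) PUT CLUSTER
The cluster workload algorithm skips put-inhibited instances, so new messages then resolve only to the put-enabled instances on newyork and tokyo.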
It is important to remember that WebSphere MQ clusters provide a context that tells queue managers how to talk amongst themselves. Clients, on the other hand, are completely unaware of clusters. Clients must be told specifically which queue managers to connect to.
In your case, when configuring the clients, provide them with the connection details for the newyork and tokyo QMgrs only, and they won't connect to london or paris. Note, though, that since all of these QMgrs are in the same WebSphere MQ cluster, messages put by those clients can still resolve to the clustered queue instances residing on london and paris unless those instances are put-disabled.
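As a sketch of that client-side setup (the channel names, hostnames and ports below are made up), you could generate a CCDT that only contains client-connection channels for newyork and tokyo, for example:
* Define CLNTCONN channels only for the QMgrs the clients are allowed to use
DEFINE CHANNEL(TO.NEWYORK) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
CONNAME('newyork.example.com(1414)') QMNAME('newyork') REPLACE
DEFINE CHANNEL(TO.TOKYO) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
CONNAME('tokyo.example.com(1414)') QMNAME('tokyo') REPLACE
Clients that are pointed at the resulting channel table (for example via the MQCHLLIB and MQCHLTAB environment variables) can then only ever connect to newyork or tokyo, assuming matching SVRCONN channels exist on those queue managers.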
Related
Today we have our MQ installations primarily on the mainframe, but we are considering moving them to Windows or Linux instead.
We have three queue managers (qmgrs) in most environments: two in a queue sharing group across two LPARs, and a stand-alone qmgr for applications that don't need to run 24/7. We have many smaller applications which share the few qmgrs.
When I read up on building qmgrs in Windows and Linux, I get the impression that most designs favor a qmgr or a cluster per application. Is it a no-go to build a general purpose qmgr for a hundred small applications on Windows/Linux?
I have considered a multi-instance qmgr (active/passive) or a clustered solution.
What is considered best practice in a scenario where I have several hundred different applications that need MQ communication?
First off, you are asking for an opinion, which is not allowed on Stack Overflow.
Secondly, if you have z/OS (mainframe) applications using MQ then you cannot move MQ off the mainframe, because there is no client-mode connectivity for mainframe applications. What I mean is that mainframe MQ applications cannot connect in client mode to a queue manager running on a distributed platform (e.g. Linux, Windows). You will need queue managers on both mainframe and distributed platforms to allow messages to flow between the platforms. So, your title of "Moving IBM MQ away from mainframe" is a no-go unless ALL mainframe MQ applications are moving off the mainframe too.
I get the impression that most designs favor a qmgr or a cluster per application.
I don't know where you read that, but it sounds like information from the '90s. Normally, you should only isolate a queue manager if you have a very good reason.
I have considered a multi-instance qmgr (active/passive) or a clustered solution.
I think you need to read up on MQ MI and MQ clustering because they are not mutually exclusive. MQ clustering has nothing to do with fail-over or HA (high-availability).
Here is a description of MQ clustering from the MQ Knowledge Center.
You need to understand and document your requirements.
What failover time can you tolerate?
With shared queues on z/OS, if you kill one queue manager, another queue manager can continue processing the messages within seconds. Applications will have to detect that the connection is broken and reconnect.
If you go for a midrange solution, it may take longer to detect that a queue manager has gone down and for the work to switch to an alternative. During this time the in-transit messages will not be available.
If you have 10 midrange queue managers and kill one of them, applications which were connected to the dead queue manager can detect the outage and reconnect to a different queue manager within seconds, so new messages will see only a short blip in throughput. Applications connected to the other 9 queue managers will not be affected, so a smaller "blip" overall.
Do you have response time criteria? Some enterprises have a "budget": no more than 5 ms in MQ, no more than 15 ms in DB2, etc.
Will moving to midrange affect response time, for example is there more or less network latency between clients and servers?
Are you worried about confidentiality of data? On z/OS you can have disk encryption enabled by default.
Are you worried about security of data, for example the use of keystores, and having stash files (containing the keystore passwords) sitting next to the keystore? z/OS is better than midrange in this area.
You can have tamper-proof keystores on both z/OS and midrange.
Scaling
How many distributed queue managers will you need to handle the current workload and any growth (and any unexpected peaks)? Does this change your operational model?
If you are using AMS, the maintenance of keystores is challenging. If you add one more recipient, you need to update the keystore for every userid that uses the queue, on every queue manager. With z/OS you update one key ring per queue manager.
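To illustrate the AMS point: on distributed MQ, adding one more recipient means re-issuing the protection policy on every queue manager that hosts the queue, as well as updating the keystores. A rough sketch of the policy part only (queue manager, queue and DNs are invented):
# Re-define the AMS policy, now listing the extra recipient DN
setmqspl -m QMGR1 -p PROTECTED.QUEUE -s SHA256 -e AES256 \
  -a "CN=sender,O=Example" \
  -r "CN=existing.recipient,O=Example" -r "CN=new.recipient,O=Example"
Each affected user's keystore then also needs the new recipient's certificate added (for example with runmqakm), which is the per-userid, per-queue-manager effort described above.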
How would the move to midrange affect your disaster recovery? It may be easier with midrange (just spin up a new queue manager), or harder (you need to create the environment before you can spin up a new queue manager).
What is the worst-case scenario, for example if the systems you talk to went down for a day? Can your queue managers hold/buffer the workload?
If you had a day's worth of data, how long would it take to drain and send it?
Where are your applications? If you have applications running on z/OS (for example batch, CICS, IMS, WAS) they all need a z/OS queue manager on the same LPAR. If the applications are not on z/OS then they can all use client mode to access MQ.
How does security change? (Command access, access to queues)
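On the queue-access point, the distributed-platform equivalent of your RACF profiles is the OAM; a small example of what granting access looks like there (queue manager, queue and group names are invented):
# Grant an application group access to one queue and let it connect to the QMgr
setmqaut -m QMGR1 -n APP.REQUEST.QUEUE -t queue -g appgroup +put +get +browse +inq
setmqaut -m QMGR1 -t qmgr -g appgroup +connect +inq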
We are stuck in a difficult scenario in our new MQ infrastructure implementation with multi-instance queue managers, using WebSphere MQ v7.5 on the Linux platform.
The concern is that our network team is not able to configure NFSv4, hence we are still on NFSv3. We understand that multi-instance queue managers will not function properly with NFSv3. But are there any issues if we define the queue managers in multi-instance fashion on NFSv3 and expect them to work correctly in single-instance mode?
Thanks
I would not expect you to have issues running single-instance queue managers with NFSv3; we do so on a regular basis. The requirement for NFSv4 was for the file locking mechanism required by multi-instance queue managers to determine when the primary instance has lost control and a secondary instance should take over.
If you do define the queue manager as multi-instance and it attempts to fail over, it may not do so successfully; at worst it may corrupt your queue manager files.
If you control the failover yourself - that is, shut down the queue manager on one node and start it again on another node - that should work for you, as there is no file sharing taking place and all files would be closed on the primary node before being opened on the secondary node. You would have to make sure the secondary queue manager is NOT running in standby mode -- ever.
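A minimal sketch of that manual switch, assuming a queue manager named QM1 whose data and logs live on the NFSv3 share:
# On node A (currently running QM1): stop it and release its files
endmqm -i QM1
# On node B: start QM1 as a plain single instance
# (no "-x" flag, so standby instances are never involved)
strmqm QM1
# Confirm where it is running
dspmq -m QM1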
I hope this helps.
Dave
If I want to set up WebSphere MQ in a distributed environment (across 2 machines in a cluster), will the queues and topics (which I understand as physical storage spaces for messages) be created on both machines?
Or will the queues and topics be created on one machine only, while the program (I guess it is called the WebSphere MQ broker) is deployed on both machines, with both instances accessing the same queue and topic?
The cluster concept in WebSphere MQ is different from traditional high availability (HA) clusters. In a traditional HA cluster, two systems access the same storage/data to provide the HA feature. Both systems can be active at any time, processing requests, or you can have an active/passive type of HA configuration.
In a WebSphere MQ cluster, by contrast, the two queue managers do not share the same storage/data; each queue manager is unique. A WebSphere MQ cluster is more suitable for workload balancing than for HA. You can have a queue with the same name in multiple queue managers in an MQ cluster, and when messages are put, the cluster will load balance them across all instances of that queue. It should be noted that the messages in each instance of a clustered queue are independent and are not shared. If for some reason one of the queue managers in the cluster goes down, the messages on that queue manager become unavailable until it comes back.
Are you aiming for workload balancing or HA? If your aim is to achieve HA, then you could look at the multi-instance queue manager feature of MQ or other HA solutions. If you are aiming for workload balancing, then you can go for MQ clustering. You can also combine multi-instance queue managers with MQ clustering to achieve both HA and workload balancing.
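As a rough MQSC sketch of the workload-balancing case (cluster and queue names are placeholders), you define a queue with the same name on each of the queue managers and advertise it to the cluster:
* Run on both queue managers that should host an instance of the queue
DEFINE QLOCAL(APP.REQUEST) CLUSTER('MYCLUSTER') DEFBIND(NOTFIXED) REPLACE
With DEFBIND(NOTFIXED), messages put to APP.REQUEST from elsewhere in the cluster are spread across the instances; each instance still stores its own messages, which is why a failed queue manager strands the messages held on it.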
No, MQ does not create duplicate queues in the cluster unless you create them manually.
Further, check whether your queue manager is a Partial repository or a Full repository for the cluster.
A partial repository will only contain information about its own objects whereas a full repository will have information about the objects of all queue managers in the cluster.
A cluster needs at least one full repository in it, and the partial repositories can use this full repository to access information about the objects of other queue managers.
But the object information in the full repository is just a list; the actual physical object exists only on the queue manager where it was created.
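Two MQSC display commands (run in runmqsc on the queue manager in question) show which role a queue manager plays:
* Non-blank REPOS/REPOSNL values mean this QMgr is a full repository
DISPLAY QMGR REPOS REPOSNL
* QMTYPE(REPOS) marks full repositories, QMTYPE(NORMAL) partial ones
DISPLAY CLUSQMGR(*) QMTYPE STATUS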
When I read about HA in WebSphere MQ I always come to the point where the best practice is to create two queue managers hosting the same queue and to use the out-of-the-box load balancing. Then, when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belong to the queue manager that went down? I mean, do these messages reside there (when the queue is persistent, of course) until the QM is up and running again?
Furthermore, is it possible to create a common storage for these two queue managers, so that no message would have to wait for the QM to come back up and every message would be delivered in the proper order? Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In the hardware cluster model the disk is mounted on only one server, and the cluster software monitors for failure and swaps the disk, IP address and possibly other resources over to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFSv4 shared disk mount. Both instances compete for locks on the files; the first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover, this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using a multi-instance CONNAME, where you can supply a comma-separated list of IP addresses or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work, and CCDT continues to be supported in current versions of WMQ.
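For illustration, with the two instances started against the shared NFSv4 mount using "strmqm -x QM1" on each node, a client channel using a multi-instance CONNAME might look like this (channel, host and queue manager names are assumptions):
* Two addresses, tried in order: the usual active node first, then the standby
DEFINE CHANNEL(APP.CLNTCONN) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
CONNAME('hosta.example.com(1414),hostb.example.com(1414)') +
QMNAME(QM1) REPLACE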
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.
As always, IBM documentation is great for what it tells you but leaves important details out. Apologies if this is already answered here - the search terms are unfortunately mostly generic or at least ambiguous, and I have looked through a few hundred questions with no luck.
I have two (IBM i) servers, each with a single WMQ 7.0 queue manager. I have two channels running between them - one each direction.
I have a topic defined on "server A" with Publication and Subscription Scopes of "All" and Proxy Subscription Behaviour of "Force".
I have a subscription defined on "server B" with Scope "All".
Everything is up and running but when I drop a message into the topic on server A (using MQ Explorer), nothing appears on server B.
I have read about the "proxy subscriptions" required to make this work but I cannot for the life of me figure out how these get created.
Any assistance appreciated. I've got this far (never used pub/sub before) in a few hours only to trip at this hurdle.
You have to set up a hierarchy between these two queue managers for publications to flow to the queue manager on B.
Assuming the queue manager on A is the parent and the queue manager on B is the child, you have to issue "ALTER QMGR PARENT()" (naming the parent queue manager) at a RUNMQSC prompt on queue manager B. This creates the hierarchy between the two queue managers. Once a subscription is created on queue manager B, a proxy subscription will automatically flow to queue manager A.
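As a sketch using the queue manager names from the edit below, the child-side commands and the check would be:
* In runmqsc on the child queue manager (A2)
ALTER QMGR PARENT('A7.QUEUE.MANAGER')
DISPLAY PUBSUB TYPE(ALL)
Once the hierarchy is active, DISPLAY PUBSUB should show a TYPE(PARENT) entry on the child and a TYPE(CHILD) entry on the parent, in addition to TYPE(LOCAL); if only TYPE(LOCAL) appears, as in the output further down, the parent/child connection has not been established.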
EDIT: More detail on my configuration (with slightly more meaningful - to me - server names)
On server A7:
Queue manager A7.QUEUE.MANAGER
Sender channel A7.TO.A2 with transmission queue A7.TO.A2
Alias queue A2.QUEUE.MANAGER pointing to A7.TO.A2
Receiver channel A2.TO.A7
On server A2:
Queue manager A2.QUEUE.MANAGER
Sender channel A2.TO.A7 with transmission queue A2.TO.A7
Alias queue A7.QUEUE.MANAGER pointing to A2.TO.A7
Receiver channel A7.TO.A2
I then issued ALTER QMGR PARENT('A7.QUEUE.MANAGER')
I have the topic on A7 and after issuing the ALTER (above) I added a subscription on A2 to the topic.
On A2:
display pubsub type(ALL)
3 : display pubsub type(ALL)
AMQ8723: Display pub/sub status details.
QMNAME(A2.QUEUE.MANAGER) TYPE(LOCAL)
On A7:
display pubsub type(ALL)
1 : display pubsub type(ALL)
AMQ8723: Display pub/sub status details.
QMNAME(A7.QUEUE.MANAGER) TYPE(LOCAL)
Cluster the two QMgrs and advertise the topic to the cluster. WMQ will then make publications available across the cluster.
On QMGR01
* Make the QMgr a cluster repository
ALTER QMGR REPOS('CLUSTERNAME')
* Create CLUSRCVR and advertise to cluster
* Substitute your CONNAME parms, QMgr name, channel names, etc.
DEF CHL(CLUSTERNAME.QMGR01) CHLTYPE(CLUSRCVR) +
TRPTYPE(TCP) +
CONNAME('127.0.0.1(1414)') +
CLUSTER('CLUSTERNAME') +
REPLACE
* Create topic object and advertise to cluster
DEF TOPIC('ROOT') TOPICSTR('ROOT') CLUSTER('CLUSTERNAME') REPLACE
On QMGR02
* Always create CLUSRCVR first and advertise to cluster
* Substitute your CONNAME parms, QMgr name, channel names, etc.
DEF CHL(CLUSTERNAME.QMGR02) CHLTYPE(CLUSRCVR) +
TRPTYPE(TCP) +
CONNAME('127.0.0.1(1415)') +
CLUSTER('CLUSTERNAME') +
REPLACE
* Then connect to the repos and advertise the CLUSSDR to the cluster too
* Substitute your CONNAME parms, QMgr name, channel names, etc.
DEF CHL(CLUSTERNAME.QMGR01) CHLTYPE(CLUSSDR) +
TRPTYPE(TCP) +
CONNAME('127.0.0.1(1414)') +
CLUSTER('CLUSTERNAME') +
REPLACE
Now you can publish to the topic that was advertised to the cluster:
On QMgr01
amqspub ROOT/Whatever QMGR01
On QMgr02
amqssub ROOT/Whatever QMGR02
You don't have to name your object ROOT or use that as the top of the topic namespace. This was an arbitrary example and you can use anything you want. In Production you would probably have some topic objects at the 2nd or 3rd level in the topic hierarchy to hang access control lists from. Generally it is these objects that are used to advertise the topics to the cluster.
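For example (topic object, topic string and principal below are invented), a second-level topic object advertised to the cluster, which you could then hang permissions from, might be:
* A 2nd-level topic object advertised to the cluster
DEF TOPIC('APP.EVENTS') TOPICSTR('ROOT/Events') CLUSTER('CLUSTERNAME') REPLACE
Access can then be granted on that branch with something like "setmqaut -m QMGR01 -n APP.EVENTS -t topic -p appuser +pub +sub".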
Some additional notes:
You cannot advertise SYSTEM.BASE.TOPIC or SYSTEM.DEFAULT.TOPIC to the cluster.
A clustered topic need be defined on only one node in the cluster. It can be any node but it's a good practice to advertise it on the primary full repository. That way you know where all your clustered topic objects are defined in one place and the repository is (or should be) highly available.
If there are local and clustered topic objects that overlap, the local one takes precedence.
Please see Creating a new cluster topic for more info. Also, Creating and configuring a queue manager cluster has tasks to create the cluster and add QMgrs to it. I tested the above on my Windows host, and in this minimal cluster the pub/sub worked great.