Application with multiple Queue Managers - jms

I need to send messages to different Queue Managers (different QM name, host-port pairs).
What is the best way to handle this scenario? Do I need to create a separate ConnectionFactory for each Queue Manager?
Use Case: a Java service that sends command messages to distributed agents (FTE agents). However, those agents can listen on different queues on different Queue Managers.

Yes, you need a different ConnectionFactory for each JMS server. You can reuse a ConnectionFactory only for different Queues on the same message broker.

I think you are better off connecting the queue managers that host the queues to each other and accessing the remote queues via remote queue definitions created on the queue manager your JMS application connects to.
I would guess these queue managers are already connected if you are moving files between the FTE agents.

FTE has the concept of the Command Queue Manager. This node is designated as the one QMgr to which the application should connect when communicating with the FTE agents. If the network has been properly defined, connectivity between all FTE agents and the Command QMgr will be present.
The design is one of the ways in which FTE can enforce security. It is possible on the agent QMgrs for the channel from the command queue manager to be authorized very specifically to a subset of the queues and the messages monitored. The design provides a policy enforcement point. Since the FTE security model is weak to begin with, it is important to try to properly use the security controls that are provided.
This design also means that it is not necessary to manage application service account credentials across all the agent QMgrs.
So, yes, use the Command QMgr for its intended purpose. Have one and only one connection factory and properly design and secure the FTE network using the Command QMgr and its channels as a policy enforcement point.
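As a rough illustration of that single connection factory, here is a minimal sketch using the IBM MQ classes for JMS pointing at the Command QMgr; the host, port, channel and queue manager names below are placeholders, not values from the question.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class CommandQmgrConnection {
        // Builds the one connection factory for the FTE Command QMgr.
        public static Connection connect() throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // client (TCP) connection
            cf.setHostName("cmdqmgr.example.com");           // placeholder Command QMgr host
            cf.setPort(1414);
            cf.setQueueManager("CMDQM");                     // placeholder Command QMgr name
            cf.setChannel("FTE.CMD.SVRCONN");                // placeholder SVRCONN channel
            return cf.createConnection();                    // commands to all agents flow via this QMgr
        }
    }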

Related

IBM MQ - MQ Client Connecting to a MQ Cluster

I have an MQ cluster which has two full repository queue managers and two partial repository queue managers. A Java client needs to connect to this cluster and should PUT/GET messages. Which queue manager should I expose to the Java client: a full repository queue manager, or one of the partial repositories? Is there a pattern/anti-pattern or advantage/disadvantage to exposing any of those?
Any advice would be very much appreciated.
One pattern that is so often used that it has been adopted as a full-fledged IBM MQ feature is known as a Uniform Cluster. It is so called because every queue manager in the cluster hosts the same queues, and so it does not matter which queue manager the client application connects to.
However, to answer your direct question, the client should be given a CCDT which contains entries for each queue manager that hosts a copy of the queue the client application is using. This could be EVERY queue manager in the cluster, as per the Uniform Cluster pattern, or just some of them. It does not need to be only partials or only full repository queue managers, but instead you are looking for those hosting the queue.
If you are not sure what that set of queue managers is, you could go to one of your full repository queue managers and type in the following MQSC command (using runmqsc or your favourite other MQSC tool):
DISPLAY QCLUSTER(name-of-queue-used-by-application) CLUSQMGR
and the resultant list will show the hosting queue manager names in the CLUSQMGR field.
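If it helps, here is a rough sketch of a JMS client using a CCDT with the IBM MQ classes for JMS; the CCDT path is a placeholder and your MQ administrator would generate the actual table containing the eligible queue managers.

    import java.net.URL;
    import javax.jms.Connection;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class CcdtClient {
        public static Connection connect() throws Exception {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            // CCDT produced by the MQ administrator; the path is a placeholder.
            cf.setCCDTURL(new URL("file:///var/mqm/ccdt/AMQCLCHL.TAB"));
            // "*" asks the client not to insist on one specific queue manager name,
            // so any of the queue managers defined in the CCDT may be chosen.
            cf.setQueueManager("*");
            return cf.createConnection();
        }
    }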

more than one listener for the queue manager

Can there be more than one listener for a queue manager? I have used one listener/queue manager combination so far and wonder if this is possible. I ask because we have 2 applications connecting to the same queue manager and seem to have a problem with that.
There are a couple meanings for the term listener in an MQ context. Let's see if we can clear up some confusion over the terminology and then answer the question as it relates to each.
As defined in the spec, a JMS listener is an object that implements a callback mechanism. It listens on destinations for messages and calls onMessage when they arrive. The destinations may be queues or topics hosted by any JMS-compliant transport provider.
In IBM MQ terms, a listener is a process (runmqlsr) that handles inbound connection requests on a server. Although these can handle a variety of protocols, in practice they are almost exclusively TCP listeners that bind a port (1414 by default) and negotiate connection requests on sockets.
TCP Ports
Tim's answer applies to the second of these contexts. MQ can listen for sockets on multiple ports and indeed it is quite common to do so. Each listener listens on one and only one port. It may listen on that port across all network interfaces or can be bound to a specific network interface. No two listeners can bind to the same combination of interface and port though.
In a B2B context the best practice is to run a dedicated listener for each external business partner to isolate each of their connections across dedicated access paths. Internally I usually recommend separate ports for QMgr-to-QMgr, app-to-QMgr and interactive user connections.
In that sense it is possible to run multiple listeners on a given QMgr. Each of those listeners can accept many connections. Their job is to negotiate the connection then hand the socket off to a Message Channel Agent which talks to the QMgr on behalf of the remotely connected client or QMgr.
JMS Listeners
Based on comments, Ulab refers to JMS listeners. These objects establish a connection to a queue manager and then wait in GET mode for new messages arriving on a destination. On arrival of a message, they call the onMessage method which is an asynchronous callback routine.
As to the question "can there be more than one (JMS) listener to a queue manager?" the answer is definitely yes. A multi-threaded application can have multiple listeners connected, multiple application instances can connect at the same time, and many thousands of application connections can be handled by a single queue manager with sufficient memory, disk and CPU available.
Of course, each of these applications is ultimately connected to one or more queues so then the question becomes one of whether they can connect to the same queue.
Many listeners can listen on the same queue so long as they do not get exclusive access to it. Each will receive a portion of the messages arriving.
Listeners on QMgr-managed subscriptions are exclusively attached to a dynamic queue but multiple instances on the same topic will all receive the same messages.
If the queue is clustered and there is more than one instance of it, multiple listeners will be required to get all the messages, since messages will normally be distributed by MQ workload distribution across those instances.
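A minimal JMS listener sketch, assuming placeholder connection details and a hypothetical queue name, just to show the asynchronous onMessage callback described above:

    import javax.jms.*;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class RequestQueueListener {
        public static void main(String[] args) throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            cf.setHostName("mqhost.example.com");  // placeholder host
            cf.setPort(1414);
            cf.setChannel("APP.SVRCONN");          // placeholder channel
            cf.setQueueManager("QM1");             // placeholder queue manager

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                session.createConsumer(session.createQueue("APP.REQUEST")); // placeholder queue
            // The JMS listener: onMessage is invoked asynchronously as messages arrive.
            consumer.setMessageListener(msg -> System.out.println("Received: " + msg));
            conn.start(); // delivery to the listener begins only after start()
            // A real application would keep the JVM alive here rather than returning.
        }
    }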
Yes, you can create as many listeners as you wish:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.explorer.doc/e_listener.htm
However, there is no reason why two applications can't connect to the queue manager via the same listener (on the same port). What problem have you run into?

IBM WAS7 Queue Factory Configuration to an MQ Cluster

I'm trying to configure a clustered websphere application server that connects to a clustered MQ.
However, the information I have is details for two instances of MQ with different host names, server channels and queue managers, which belong to the same MQ cluster name.
On the WebSphere console I can see input fields for host name, queue manager and server channel, but I cannot find anywhere to specify multiple MQ details.
If I pick one of the MQ details, will MQ clustering still work? If not, how will I enable MQ clustering given the details I have?
WebSphere MQ clustering affects the behavior of how queue managers talk amongst themselves. It does not change how an application connects or talks to a queue manager so the question as asked seems to be assuming some sort of clustering behavior that is not present in WMQ.
To set up the app server with two addresses, please see Configuring multi-instance queue manager connections with WebSphere MQ messaging provider custom properties in the WAS v7 Knowledge Center for instructions on how to configure a connection factory with a multi-instance CONNAME value.
If you specify a valid QMgr name in the Connection Factory and the QMgr to which the app connects doesn't have that specific name then the connection is rejected. Normally a multi-instance CONNAME is used to connect to a multi-instance QMgr. This is a single highly available queue manager that can be at one of two different IP addresses so using a real QMgr name works in that case. But if the QMgrs to which your app is connecting are two distinct and different-named queue managers, which is what you described, you should specify an asterisk (a * character) as the queue manager name in your connection factory as described here. This way the app will not check the name of the QMgr when it gets a connection.
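For what it's worth, here is a rough programmatic equivalent of those connection factory settings using the IBM MQ classes for JMS (in WAS you would set the same values through the admin console custom properties); the host names and channel below are placeholders:

    import javax.jms.JMSException;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class MultiInstanceConnameFactory {
        public static MQConnectionFactory build() throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            // Both endpoints are tried in order; host names are placeholders.
            cf.setConnectionNameList("mqhost01.example.com(1414),mqhost02.example.com(1414)");
            cf.setChannel("APP.SVRCONN"); // the channel name must exist on both QMgrs
            // The asterisk means "do not check the queue manager name", which is needed
            // because the two endpoints are distinct, differently named queue managers.
            cf.setQueueManager("*");
            return cf;
        }
    }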
If I pick one of the MQ details, will MQ clustering still work? If not, how will I enable MQ clustering given the details I have?
Depends on what you mean by "clustering". If you believe that the app will see one logical queue which is hosted by two queue managers, then no. That's not how WMQ clustering works. Each queue manager hosting a clustered queue gets a subset of messages sent to that queue. Any apps getting from that queue will therefore only ever see the local subset.
But if by "clustering" you intend to connect alternately to one or the other of the two queue managers and transmit messages to a queue that is in the same cluster but not hosted on either of the two QMgrs to which you connect, then yes it will work fine. If your Connection Factory knows of only one of the two QMgrs you will only connect to that QMgr, and sending messages to the cluster will still work. But set it up as described in the links I've provided and your app will be able to connect to either of the two QMgrs and you can easily test that by stopping the channel on the one it connects to and watching it connect to the other one.
Good luck!
UPDATE:
To be clear, the details provided are similar to hostname01, qmgr01, queueA, serverchannel01. And the other is hostname02, qmgr02, queueA, serverchannel02.
WMQ Clients will connect to two different QMgrs using a multi-instance CONNAME only when...
The channel name used on both QMgrs is exactly the same
The application uses an asterisk (a * character) or a space for the QMgr name when the connection request is made (i.e. in the Connection Factory).
It is possible to have WMQ connect to one of several different queue managers where the channel name differs on each by using a Client Connection Definition Table, also known as a CCDT. The CCDT is a compiled artifact that you create using MQSC commands to define CLNTCONN channels. It contains entries for each of the QMgrs the client is eligible to connect to. Each can have a different QMgr name, host, port and channel. However, when defining the CCDT the administrator defines all the entries such that the QMgr name is replaced with the application High Level Qualifier. For example, the Payroll app wants to connect to any 1 of 3 different QMgrs. The WMQ Admin defines a CCDT with three entries but uses PAY01, PAY02, and PAY03 for the QMgr names. Note this does not need to match the actual QMgr names. The application then specifies the QMgr name as PAY* which selects all three QMgrs in the CCDT.
Please see Using a client channel definition table with WebSphere MQ classes for JMS for more details on the CCDT.
Is MQ cluster not similar to application server clusters?
No, not at all.
Wherein two child nodes are connected to a cluster, and an F5 URL will be used to distribute the load to each node. Does WMQ not come with a cluster URL / F5 that we can just send messages to, with the partitioning of messages being transparent?
No. The WMQ cluster provides a namespace within which applications and QMgrs can resolve non-local objects such as queues and topics. The only thing that ever connects to a WebSphere MQ cluster is a queue manager. Applications and human users always connect to specific queue managers. There may be a set of interchangeable queue managers such as with the CCDT, but each is independent.
With WAS the messaging engine may run on several nodes, but it provides a single logical queue from which applications can get messages. With WMQ each node hosting that queue gets a subset of the messages and any application consuming those messages sees only that subset.
HTTP is stateless and so an F5 URL works great. When it does maintain a session, that session exists mainly to optimize away connection overhead and tends to be short lived. WMQ client channels are stateful and coordinate both single-phase and two-phase units of work. If an application fails over to another QMgr during a UOW, it has no way to reconcile that UOW.
Because of the nature of WMQ connections, F5 is never used between QMgrs. It is only used between client and QMgr for connection balancing and not message traffic balancing. Furthermore, the absence or presence of an MQ cluster is entirely transparent to the application which, in either case, simply connects to a QMgr to get and/or put messages. Use of a Multi-Instance CONNAME or a CCDT file makes that connection more robust by providing multiple equivalent QMgrs to which the client can connect, but that has nothing whatever to do with WMQ clustering.
Does that help?
Please see:
Clustering
How Clusters Work
Queue manager groups in the CCDT
Connecting WebSphere MQ MQI client applications to queue managers

Intended use of Transmission Queue

This is very basic question about IBM WebSphere MQ V7.
Regarding the Transmission Queue, my understanding is that it is only used with a remote queue that resides in the same queue manager. Therefore, if I want to put a message to the queue, I need to put it to the remote queue.
It is like this.
App --> Remote queue --> Transmission Queue
My question is:
Is it possible to put the message directly into transmission queue like this?
App --> Transmission Queue
--Modified on 2014.03.17 --
I found a way to put a message directly into the transmission queue. I do not know whether this is a common use, but in order to do that I needed to prepend an MQXQH header to the message. I tried it and confirmed it works. See the Infocenter reference here.
Do not ever put directly to a transmission queue. It is dangerous if you do not know what you are doing.
You should put your message to a remote queue. A remote queue is not the same as a local queue. A remote queue is simply a pointer to a queue on another queue manager.
Although it is possible to put messages directly on the XMitQ, there is considerable risk in allowing that to occur, so most admins will prevent applications from directly accessing that queue. As you have found, it is possible to construct a message with the transmission queue header and, behind that, a normal message with the MQMD and payload. (This is, in fact, exactly how the MCA works.)
The problem here is that the QMgr does not check the values in the MQMD residing in the payload so you can put mqm as the MQMD.UserID and then address the message to the remote command queue and grant yourself admin access to that remote QMgr.
Security-conscious administrators typically use two security controls to prevent this. First, they disallow direct access to the XMitQ. That helps for outbound messages. More importantly, they set the MCAUSER of their RCVR/RQSTR/CLUSRCVR channels to a non-admin user ID that is not authorized to put messages onto any sensitive queues.
The other issue is, of course, that what you describe completely defeats WMQ's name resolution. By embedding routing into the app, you prevent the administrator from adjusting channel weights, cluster settings, failover and load distribution at the network level. Need to redistribute traffic? Redeploy the code. Not a good plan.
So for security reasons and because you paid a lot of money to get WMQ's reliability - much of which comes from dynamic addressing and name resolution features - coding apps to write directly to the XMitQ is strongly discouraged.
You should not directly be using the transmission queue. It is used by the message channel agent (MCA) as temporary storage when sending messages across to a remote queue manager.
This is distributed queuing - i.e. you put a message to Queue Manager A and want it routed to a local queue on Queue Manager B. So you define a reference on QM-A referring to the local queue on QM-B. This reference is the 'remote queue definition'.
The remote queue definition specifies the transmission queue name. The transmission queue is bound to the MCA, which in turn knows about the remote QM.
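To illustrate the point that the application never names the transmission queue, here is a hedged JMS sketch where the app simply puts to a hypothetical remote queue definition on its local queue manager and lets MQ name resolution route the message via the XMitQ; all names below are placeholders:

    import javax.jms.*;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class PutViaRemoteQueueDefinition {
        public static void main(String[] args) throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            cf.setHostName("qma.example.com"); // placeholder host of QM-A
            cf.setPort(1414);
            cf.setChannel("APP.SVRCONN");      // placeholder channel
            cf.setQueueManager("QMA");         // placeholder local queue manager

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // "ORDERS.ON.QMB" is a placeholder remote queue definition on QM-A that points
            // to a local queue on QM-B; the app never names the XMitQ or QM-B directly.
            MessageProducer producer = session.createProducer(session.createQueue("ORDERS.ON.QMB"));
            producer.send(session.createTextMessage("order payload"));
            conn.close();
        }
    }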

Websphere MQ clustering

I'm pretty new to websphere MQ, so please pardon me if I am not using the right terms. We are doing a project in which we need to setup a MQ cluster for high availability.
The client application maintains a pool of connection with the Queue Manager for subscribers and publishers. Suppose we have two Queue Managers in a cluster hosting the queues with the same names. Each of the queue has its own set of subscribers and publishers which are cached by the client application. Suppose one of the queue manager goes down, the subscribers and publishers of the queues on that queue manager will die making the objects on client application defunct.
In this case, can the following scenarios be taken care of?
1] When the first Queue Manager crashes, the messages on its queues are transferred to the other queue manager in the cluster.
2] When the Queue Manager comes up again, is there any mechanism to restore the publishers and subscribers? Currently we have written an automated recovery thread in the client application which tries to reconnect the failed publishers and subscribers. But in the case of a cluster setup, we fear that the publishers and subscribers will reconnect to the other running queue manager, and when the crashed queue manager is restored, there will be no publishers or subscribers connected to it.
Can anybody please explain how to take care of above two scenarios?
WMQ Clustering is an advanced topic. You must first do a good amount of read up of WMQ and understand what clustering in WMQ world means before attempting anything.
WMQ Cluster differs in many ways from the traditional clusters. Unlike the traditional clusters, say in a Active/Passive cluster, data will be shared between active and passive instances of an application. At any point in time, the active instance of application will be processing data. When the active instance goes down, the passive instance takes over and starts processing. This is not the case in WMQ clusters where queue managers in a cluster are unique and hence queues/topics hosted by those queue managers are not shared. You might have the same queues/topics in both queue managers but since queue managers are different, messages, topics, subscriptions etc won't be shared.
Answering to your questions.
1) No. Messages, if persistent, will remain in the crashed queue manager. They will not be transferred to the other queue manager. Since the queue manager itself is not available, nothing can be done till the queue manager is brought up.
2) No. The queue manager can't do that. It's the duty of the application to check for queue manager availability and reconnect. WMQ provides an automatic client reconnection feature whereby the WMQ client libraries automatically reconnect to the queue manager when they detect connection broken errors. This feature is available from WMQ v7.x and above with C and Java clients. The C# client supports the feature from v7.1.
For your high availability requirement, you could look at using the Multi instance queue manager feature of WMQ. This feature runs an Active/Passive pair of instances of the same queue manager on two different machines. The active instance of the queue manager handles client connections while the passive instance is in standby mode. Both instances share data and logs. Once the active instance goes down, the passive instance becomes active, and you will have access to all the persistent messages that were in the queues before the active queue manager went down.
Read through the WMQ InfoCenter for more on Multi instance queue manager.
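As a minimal sketch of the client-side settings described above (automatic client reconnection plus a multi-instance style connection name list), using the IBM MQ classes for JMS; the host names, channel and queue manager name are placeholders:

    import javax.jms.JMSException;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class AutoReconnectFactory {
        public static MQConnectionFactory build() throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            // Both machines hosting the multi-instance QMgr; host names are placeholders.
            cf.setConnectionNameList("nodeA.example.com(1414),nodeB.example.com(1414)");
            cf.setChannel("APP.SVRCONN");  // placeholder channel
            cf.setQueueManager("QM1");     // same queue manager name on both machines
            // Let the client libraries reconnect automatically when the connection breaks.
            cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
            cf.setClientReconnectTimeout(1800); // seconds to keep retrying before giving up
            return cf;
        }
    }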
To add to Shashi's answer, to get the most out of WMQ clustering you need to have a network hop between senders and receivers of messages. WMQ clustering is about how QMgrs talk among themselves. It has nothing to do with how client apps talk to QMgrs and does not replicate messages. In a cluster when a message has to get from one QMgr to another, the cluster figures out where to route it. If there are multiple clustered instances of a single destination queue, the message is eligible to be routed to any of them. If there is no network hop between senders and receivers, then messages don't need to leave the local QMgr and therefore WMQ clustering behavior is never invoked, even though the QMgrs involved may participate in the cluster.
In a conventional WMQ cluster architecture, the receivers all listen on multiple instances of the same queue, with the same name, spread across multiple QMgrs. The senders have one or more QMgrs where they can connect and send requests (fire-and-forget), possibly awaiting replies (request-reply). Since the receivers of the messages provide some service, I call their QMgrs "Service Provider QMgrs." The QMgrs where the senders of messages live are "Service Consumer" QMgrs because these apps are consumers of services.
The slide below is from a presentation I use on WMQ Architecture consulting engagements.
Note that consumers of services - the things sending request messages - fail over. Things listening on service endpoint queues and providing services do NOT fail over. This is because of the need to make sure every active service endpoint queue is always served. Typically each app instance holds an input handle on two or more queue instances. This way a QMgr can go down and all app instances remain active. If an app instance goes down, some other app instance continues to serve its queues. This affinity of service providers to specific QMgrs also enables XA transactionality if needed.
The best way I've found to explain WMQ HA is a slide from the IMPACT conference:
A WebSphere MQ cluster ensures that a service remains available, even though an instance of a clustered queue may be unavailable. New messages in the cluster will route to the remaining queue instances. A hardware cluster or multi-instance QMgr (MIQM) provides access to existing messages. When one side of the active/passive pair goes down, there is a brief outage on that QMgr only while the failover occurs, then the secondary node takes over and makes any messages on the queues available again. A network that combines both WMQ clusters and hardware clusters/MIQM provides the highest level of availability.
Keep in mind that in none of these configurations are messages replicated across nodes. A WMQ message always has a single physical location. For more on this aspect, please see Thoughts on Disaster Recovery.
