How to set up an IBM MQ topic so that a message is shared only once within the same application

This is what I want to achieve with IBM MQ:
I have a topic in place, and 2 different applications (A1, A2) subscribing to the topic. Each application has 2 instances, i.e. (A1-I1, A1-I2, A2-I1, A2-I2).
When a message (M1) is published to the topic, it will be received by both applications A1 & A2, but within A1 only one instance (A1-I1 or A1-I2) should receive M1, and the same goes for the instances of A2.
Is this possible with IBM MQ topics and JMS?

I have looked up the shared subscriptions mentioned in the comments, but I found an IBM MQ limitation: a shared subscription can only be accessed by one JVM, i.e. two application instances can't access the same shared subscription at the same time, so I gave up on the purely pub/sub approach.
I managed to make it work by having the applications listen to the destination queue of an administrative subscription instead of subscribing directly. In the queue manager I defined subscription SUB.A1 with destination queue SUB.A1.QUEUE and subscription SUB.A2 with destination queue SUB.A2.QUEUE. All instances of application A1 listen to SUB.A1.QUEUE and all instances of A2 to SUB.A2.QUEUE, and everything works as expected.
The above setup has all applications connecting to the same queue manager. If there is a queue manager cluster of multiple QMs, just define the subscription destination queues as cluster queues and it will work as well.
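For illustration, here is a minimal sketch of one instance of A1 in that setup, assuming the subscriptions were defined administratively as described above; the topic string, host, port, channel and queue manager name are assumptions:
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

// Assumed administrative definitions on the queue manager (MQSC):
//   DEFINE QLOCAL(SUB.A1.QUEUE)
//   DEFINE SUB(SUB.A1) TOPICSTR('some/topic') DEST(SUB.A1.QUEUE)
// Every instance of A1 consumes from SUB.A1.QUEUE, so each publication is
// delivered once to A1 as a whole and picked up by only one of its instances.
MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
cf.setHostName("localhost");                 // assumption
cf.setPort(1414);                            // assumption
cf.setQueueManager("QM1");                   // assumption
cf.setChannel("SYSTEM.DEF.SVRCONN");         // assumption
cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

Connection connection = cf.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue subQueue = session.createQueue("queue:///SUB.A1.QUEUE");
MessageConsumer consumer = session.createConsumer(subQueue);
connection.start();
System.out.println("Got: " + consumer.receive(30000));
connection.close();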

Related

Difference between Queue Manager and Queue in MQ

Looking at the sample code it seems I need a queue manager name and a queue name to set up MQ through code. What is the difference between those, and where can I get those values from? Any suggestions?
MQTopicConnectionFactory cf = new MQTopicConnectionFactory();
// Config: host/port of the queue manager's listener, the queue manager name,
// and the server-connection channel to use
cf.setHostName("localhost");
cf.setPort(1414);
cf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
cf.setQueueManager("QM_thinkpad");
cf.setChannel("SYSTEM.DEF.SVRCONN");
MQTopicConnection connection = (MQTopicConnection) cf.createTopicConnection();
MQTopicSession session = (MQTopicSession) connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
MQTopic topic = (MQTopic) session.createTopic("topic://foo");
MQTopicPublisher publisher = (MQTopicPublisher) session.createPublisher(topic);
MQTopicSubscriber subscriber = (MQTopicSubscriber) session.createSubscriber(topic);
connection.start(); // without this the subscriber will not receive any messages
You connect to a queue manager, which may host many different queues. So yes, an application generally needs access to a queue manager and then to specific queues on that queue manager. I suggest looking at the Stack Overflow info for the websphere-mq tag to help get you started. The names of those objects should be known to your application architect/developer or can be confirmed with the MQ admin.
A Queue is a container for messages. Business applications that are connected to the Queue Manager that hosts the queue can retrieve messages from the queue or can put messages on the queue. There are different types of Queues, as follows:
Local Queue: A local queue is a definition of both a queue and the set of messages that are associated with the queue. The queue manager that hosts the queue receives messages in its local queues.
Remote Queue: Remote queue definitions are definitions on the local Queue Manager of queues that belong to another queue manager. To send a message to a queue on a remote queue manager, the sender queue manager must have a remote definition of the target queue.
Alias Queue: Alias queues are not actually queues; they are additional definitions of existing queues. You create alias queue definitions that refer to actual local queues but you can name the alias queue definition differently from the local queue (the base queue). This means that you can change the queues that an application uses without needing to change the application; you just create an alias queue definition that points to the new local queue.
Source
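To make the distinction between the two names concrete, here is a minimal point-to-point sketch in the same style as the topic example above; the queue manager name, channel, host, port and queue name (FOO.QUEUE) are assumptions:
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;

MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
cf.setHostName("localhost");                       // where the queue manager's listener runs
cf.setPort(1414);
cf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
cf.setQueueManager("QM_thinkpad");                 // the queue manager you connect to
cf.setChannel("SYSTEM.DEF.SVRCONN");

QueueConnection connection = cf.createQueueConnection();
QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
// FOO.QUEUE is a local queue hosted by that queue manager
Queue queue = session.createQueue("queue:///FOO.QUEUE");
QueueSender sender = session.createSender(queue);
sender.send(session.createTextMessage("hello"));
connection.close();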
Before connecting to a queue, we must start a queue manager. The queue manager has a name, and applications can connect to it using this name. The queue manager owns and manages the set of resources that are used by WebSphere MQ, including:
Page sets that hold the WebSphere MQ object definitions and message data
Logs that are used to recover messages and objects in the event of queue manager failure
Processor storage
Connections through which different application environments (CICS®, IMS™, and Batch) can access the WebSphere MQ API
The WebSphere MQ channel initiator, which allows communication between WebSphere MQ on your z/OS system and other systems
Source

How SYSTEM.INTER.QMGR.PUBS queue works with Local & Cluster Topics

I am seeing an issue in an MQ cluster infrastructure where messages pile up in SYSTEM.INTER.QMGR.PUBS, which is present in a queue manager with cluster topics defined.
The Infocenter says "In a publish/subscribe cluster, those publications are targeted at the SYSTEM.INTER.QMGR.PUBS queue on the remote queue managers that host the subscriptions."
I would like to know how the message flow works with the SYSTEM.INTER.QMGR.PUBS queue in a queue manager with topics & subscriptions defined as:
Cluster topics & local subscriptions
Local topics & local subscriptions
Can anyone help me understand whether messages flow through the SYSTEM.INTER.QMGR.PUBS queue if topics are defined local to a queue manager?
I would expect messages to flow through the SYSTEM.INTER.QMGR.PUBS queue of a queue manager when the queue manager has local subscriptions to a topic that is published to by publishers connected to a remote queue manager, when the queue manager is a topic host in a cluster using topic host routing, or when the queue manager is a member of a publish/subscribe hierarchy.
It doesn't matter whether the topic object is defined remotely or locally, messages will flow through this queue when the queue manager is on the route between remote publishers and subscribers.
http://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.mon.doc/q005262_.htm%23q005262_?lang=en
SYSTEM.INTER.QMGR.PUBS is used if the topics are defined in the cluster and there are no locally defined queue objects in the same queue manager for remote publishers to publish to; in that case all published messages pass through this queue to reach the topics.
But if the topics are defined as local topics and there are alias queue objects defined in the same queue manager that are mapped to the topics, then all remote publishers will publish to those alias queues and skip SYSTEM.INTER.QMGR.PUBS. In that case this queue is not used for publishing.
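If it helps while investigating the pile-up described in the question, here is a minimal sketch using the MQ PCF classes to check the current depth of SYSTEM.INTER.QMGR.PUBS on the queue manager hosting the cluster topic; the host, port and channel are assumptions:
import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;
import com.ibm.mq.headers.pcf.PCFMessage;
import com.ibm.mq.headers.pcf.PCFMessageAgent;

// Connect directly to the queue manager hosting the cluster topic (details are assumptions)
PCFMessageAgent agent = new PCFMessageAgent("qmhost", 1414, "SYSTEM.DEF.SVRCONN");
PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
request.addParameter(CMQC.MQCA_Q_NAME, "SYSTEM.INTER.QMGR.PUBS");
PCFMessage[] responses = agent.send(request);
// A steadily growing depth means publications arrive faster than they are delivered onwards
System.out.println("CURDEPTH = " + responses[0].getIntParameterValue(CMQC.MQIA_CURRENT_Q_DEPTH));
agent.disconnect();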

IBM WAS7 Queue Factory Configuration to an MQ Cluster

I'm trying to configure a clustered WebSphere Application Server that connects to a clustered MQ.
However, the information I have is details for two instances of MQ with different host names, server channels and queue managers, all belonging to the same MQ cluster.
On the WebSphere console I can see input fields for hostname, queue manager and server channel, but I cannot find anywhere to specify multiple MQ details.
If I pick one of the MQ details, will MQ clustering still work? If not, how can I enable MQ clustering given the details I have?
WebSphere MQ clustering affects the behavior of how queue managers talk amongst themselves. It does not change how an application connects or talks to a queue manager, so the question as asked seems to be assuming some sort of clustering behavior that is not present in WMQ.
To set up the app server with two addresses, please see Configuring multi-instance queue manager connections with WebSphere MQ messaging provider custom properties in the WAS v7 Knowledge Center for instructions on how to configure a connection factory with a multi-instance CONNAME value.
If you specify a valid QMgr name in the Connection Factory and the QMgr to which the app connects doesn't have that specific name then the connection is rejected. Normally a multi-instance CONNAME is used to connect to a multi-instance QMgr. This is a single highly available queue manager that can be at one of two different IP addresses so using a real QMgr name works in that case. But if the QMgrs to which your app is connecting are two distinct and different-named queue managers, which is what you described, you should specify an asterisk (a * character) as the queue manager name in your connection factory as described here. This way the app will not check the name of the QMgr when it gets a connection.
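As a sketch of what that looks like when set programmatically on the MQ JMS connection factory (host names, port and channel are assumptions; the same channel name must exist on both queue managers):
import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

MQConnectionFactory cf = new MQConnectionFactory();
cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
// Multi-instance style CONNAME: the client tries each address in turn
cf.setConnectionNameList("hostname01(1414),hostname02(1414)");
cf.setChannel("SVRCONN.CHANNEL");   // assumption; must be the same name on both QMgrs
cf.setQueueManager("*");            // '*' = do not check the actual queue manager name
Connection connection = cf.createConnection();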
If I pick one of the MQ details, will MQ clustering still work? If not, how can I enable MQ clustering given the details I have?
Depends on what you mean by "clustering". If you believe that the app will see one logical queue which is hosted by two queue managers, then no. That's not how WMQ clustering works. Each queue manager hosting a clustered queue gets a subset of messages sent to that queue. Any apps getting from that queue will therefore only ever see the local subset.
But if by "clustering" you intend to connect alternately to one or the other of the two queue managers and transmit messages to a queue that is in the same cluster but not hosted on either of the two QMgrs to which you connect, then yes it will work fine. If your Connection Factory knows of only one of the two QMgrs you will only connect to that QMgr, and sending messages to the cluster will still work. But set it up as described in the links I've provided and your app will be able to connect to either of the two QMgrs and you can easily test that by stopping the channel on the one it connects to and watching it connect to the other one.
Good luck!
UPDATE:
To be clear, the details provided are similar to hostname01, qmgr01, queueA, serverchannel01. And the other is hostname02, qmgr02, queueA, serverchannel02.
WMQ Clients will connect to two different QMgrs using a multi-instance CONNAME only when...
The channel name used on both QMgrs is exactly the same
The application uses an asterisk (a * character) or a space for the QMgr name when the connection request is made (i.e. in the Connection Factory).
It is possible to have WMQ connect to one of several different queue managers where the channel name differs on each by using a Client Channel Definition Table, also known as a CCDT. The CCDT is a compiled artifact that you create using MQSC commands to define CLNTCONN channels. It contains entries for each of the QMgrs the client is eligible to connect to. Each can have a different QMgr name, host, port and channel. However, when defining the CCDT the administrator defines all the entries such that the QMgr name is replaced with the application High Level Qualifier. For example, the Payroll app wants to connect to any 1 of 3 different QMgrs. The WMQ Admin defines a CCDT with three entries but uses PAY01, PAY02, and PAY03 for the QMgr names. Note this does not need to match the actual QMgr names. The application then specifies the QMgr name as PAY* which selects all three QMgrs in the CCDT.
Please see Using a client channel definition table with WebSphere MQ classes for JMS for more details on the CCDT.
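A minimal sketch of the client side of that CCDT approach, following the PAY* example above (the CCDT file path is an assumption, and the table itself must be generated by the MQ admin):
import java.net.URL;
import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

MQConnectionFactory cf = new MQConnectionFactory();
cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
// Point the factory at the CCDT produced from the CLNTCONN channel definitions (path is an assumption)
cf.setCCDTURL(new URL("file:///var/mqm/ccdt/AMQCLCHL.TAB"));
// Select the PAY* entries as described in the answer above, rather than one specific QMgr
cf.setQueueManager("PAY*");
Connection connection = cf.createConnection();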
Is MQ cluster not similar to application server clusters?
No, not at all.
Wherein two child nodes are connected to a cluster, and an F5 URL will be used to distribute the load to each node. Doesn't WMQ come with a cluster URL / F5 that we just send messages to, with the partitioning of messages being transparent?
No. The WMQ cluster provides a namespace within which applications and QMgrs can resolve non-local objects such as queues and topics. The only thing that ever connects to a WebSphere MQ cluster is a queue manager. Applications and human users always connect to specific queue managers. There may be a set of interchangeable queue managers such as with the CCDT, but each is independent.
With WAS the messaging engine may run on several nodes, but it provides a single logical queue from which applications can get messages. With WMQ each node hosting that queue gets a subset of the messages and any application consuming those messages sees only that subset.
HTTP is stateless and so an F5 URL works great. When it does maintain a session, that session exists mainly to optimize away connection overhead and tends to be short lived. WMQ client channels are stateful and coordinate both single-phase and two-phase units of work. If an application fails over to another QMgr during a UOW, it has no way to reconcile that UOW.
Because of the nature of WMQ connections, F5 is never used between QMgrs. It is only used between client and QMgr for connection balancing and not message traffic balancing. Furthermore, the absence or presence of an MQ cluster is entirely transparent to the application which, in either case, simply connects to a QMgr to get and/or put messages. Use of a Multi-Instance CONNAME or a CCDT file makes that connection more robust by providing multiple equivalent QMgrs to which the client can connect but that has nothing whatever to do with WMQ clustering.
Does that help?
Please see:
Clustering
How Clusters Work
Queue manager groups in the CCDT
Connecting WebSphere MQ MQI client applications to queue managers

Websphere MQ clustering

I'm pretty new to WebSphere MQ, so please pardon me if I am not using the right terms. We are doing a project in which we need to set up an MQ cluster for high availability.
The client application maintains a pool of connections with the queue manager for subscribers and publishers. Suppose we have two queue managers in a cluster hosting queues with the same names. Each of the queues has its own set of subscribers and publishers, which are cached by the client application. Suppose one of the queue managers goes down; the subscribers and publishers of the queues on that queue manager will die, making the corresponding objects in the client application defunct.
In this case, can the following scenarios be taken care of?
1] When the first queue manager crashes, are the messages on its queues transferred to the other queue manager in the cluster?
2] When the queue manager comes up again, is there any mechanism to restore the publishers and subscribers? Currently we have written an automated recovery thread in the client application which tries to reconnect the failed publishers and subscribers. But in the case of a cluster setup, we fear that the publishers and subscribers will reconnect to the other running queue manager, and when the crashed queue manager is restored, there will be no publishers and subscribers for it.
Can anybody please explain how to take care of above two scenarios?
WMQ clustering is an advanced topic. You must first do a good amount of reading up on WMQ and understand what clustering in the WMQ world means before attempting anything.
WMQ clusters differ in many ways from traditional clusters. In a traditional cluster, say an Active/Passive cluster, data is shared between the active and passive instances of an application: at any point in time the active instance is processing data, and when it goes down the passive instance takes over and starts processing. This is not the case in WMQ clusters, where the queue managers in a cluster are unique, and hence the queues/topics hosted by those queue managers are not shared. You might have the same queues/topics defined in both queue managers, but since the queue managers are different, messages, topics, subscriptions etc. won't be shared.
Answering your questions:
1) No. Messages, if persistent, will remain in the crashed queue manager. They will not be transferred to the other queue manager. Since the queue manager itself is not available, nothing can be done until it is brought up again.
2) No. The queue manager can't do that. It is the duty of the application to check for queue manager availability and reconnect. WMQ provides an automatic client reconnection feature, wherein the WMQ client libraries automatically reconnect to the queue manager when they detect connection broken errors. This feature is available from WMQ v7.x onwards with the C and Java clients; the C# client supports it from v7.1.
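As a sketch of that automatic reconnection feature with the MQ JMS classes (the addresses, channel and timeout shown are assumptions):
import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

MQConnectionFactory cf = new MQConnectionFactory();
cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
cf.setConnectionNameList("hosta(1414),hostb(1414)"); // active and standby addresses (assumption)
cf.setChannel("SVRCONN.CHANNEL");                    // assumption
cf.setQueueManager("*");
// Ask the client libraries to reconnect automatically when they detect a broken connection
cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
cf.setClientReconnectTimeout(600);                   // keep retrying for up to 600 seconds (assumption)
Connection connection = cf.createConnection();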
For your high availability requirement, you could look at using the multi-instance queue manager feature of WMQ. This feature runs active and passive instances of the same queue manager on two different machines. The active instance of the queue manager handles client connections while the passive instance stays in standby mode. Both instances share data and logs. Once the active instance goes down, the passive instance becomes active, and you will have access to all the persistent messages that were in the queues before the active instance went down.
Read through the WMQ InfoCenter for more on Multi instance queue manager.
To add to Shashi's answer, to get the most out of WMQ clustering you need to have a network hop between senders and receivers of messages. WMQ clustering is about how QMgrs talk among themselves. It has nothing to do with how client apps talk to QMgrs and does not replicate messages. In a cluster when a message has to get from one QMgr to another, the cluster figures out where to route it. If there are multiple clustered instances of a single destination queue, the message is eligible to be routed to any of them. If there is no network hop between senders and receivers, then messages don't need to leave the local QMgr and therefore WMQ clustering behavior is never invoked, even though the QMgrs involved may participate in the cluster.
In a conventional WMQ cluster architecture, the receivers all listen on multiple instances of the same queue, with the same name, spread across multiple QMgrs. The senders have one or more QMgrs where they can connect and send requests (fire-and-forget), possibly awaiting replies (request-reply). Since the receivers of the messages provide some service, I call their QMgrs "Service Provider QMgrs." The QMgrs where the senders of messages live are "Service Consumer" QMgrs because these apps are consumers of services.
The slide below is from a presentation I use on WMQ Architecture consulting engagements.
Note that consumers of services - the things sending request messages - fail over. Things listening on service endpoint queues and providing services do NOT fail over. This is because of the need to make sure every active service endpoint queue is always served. Typically each app instance holds an input handle on two or more queue instances. This way a QMgr can go down and all app instances remain active. If an app instance goes down, some other app instance continues to serve its queues. This affinity of service providers to specific QMgrs also enables XA transactionality if needed.
The best way I've found to explain WMQ HA is a slide from the IMPACT conference:
A WebSphere MQ cluster ensures that a service remains available, even though an instance of a clustered queue may be unavailable. New messages in the cluster will route to the remaining queue instances. A hardware cluster or multi-instance QMgr (MIQM) provides access to existing messages. When one side of the active/passive pair goes down, there is a brief outage on that QMgr only while the failover occurs, then the secondary node takes over and makes any messages on the queues available again. A network that combines both WMQ clusters and hardware clusters/MIQM provides the highest level of availability.
Keep in mind that in none of these configurations are messages replicated across nodes. A WMQ message always has a single physical location. For more on this aspect, please see Thoughts on Disaster Recovery.

All JMS Messages from a Distributed Queue across the Cluster

Currently using WebLogic and Distributed Queues. I know from the documentation that Distributed Queues allow you to retrieve a connection to any of the Queues across a cluster by using the global JNDI name. It seems one of the main pieces of functionality Distributed Queues give you is load-balanced connections across multiple managed servers. We have 4 Managed Servers (two on each physical machine, communicating over multicast), and each Managed Server has an individual JMS Server which is configured with its own data store.
I am 99% certain I already know the answer to this, but it appears that if you want to consume a message off a Queue, and that Queue exists on each Managed Server in the Cluster, you cannot technically pull a message off any of the Queues (you can only pull messages off the Queue to which you are connected). So if I have a message on Managed Server 4 and I connect to Managed Server 1, I won't see the messages on the Queue on Managed Server 4.
So is there a way in Java EE or WLS to consume a message from all the nodes of a Queue (across the cluster)? Like a view into every instance of the Queue on each Managed Server? It doesn't appear so, and the documentation makes it seem like this is not possible, as does this video (around minute 5):
http://www.youtube.com/watch?v=HAKixK_wp0Q
No, you cannot consume a message that is delivered to one managed server when your client is connected to another managed server of the same cluster.
Here's how it works.
When using a UDQ, WLS provides a JNDI name that resolves internally into 4 distinct JNDI names, one for each of the managed servers; the JMS servers on each of the managed servers are distinct.
When you post a message using the UDQ JNDI name, it gets to one of the 4 managed servers according to the algorithm you chose and other configuration done in your connection factory.
When a message consumer listens to the UDQ it gets pinned to the JMS server on one of the managed servers. It has no visibility of messages on the other servers.
Usually a UDQ is used in scenarios where you want messages to be consumed concurrently by more than one managed server. You would normally deploy an MDB to the cluster, meaning the MDB is deployed to each of the managed servers and each of these can consume the messages from its local JMS server.
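A minimal sketch of such an MDB, assuming the distributed queue's JNDI name is jms/MyUDQ (the bean name and listener body are illustrative):
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Deployed to the cluster, so every managed server runs instances that consume
// from their local member of the uniform distributed queue.
@MessageDriven(mappedName = "jms/MyUDQ")
public class UdqListenerBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (Exception e) {
            throw new RuntimeException("Failed to process message", e);
        }
    }
}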
I believe you can if your message store is config'd to use a database. If so, then I would think removing an item from the queue would remove it from the shared db table. I.e. all JMS servers are pointing to the same db instance and table. That should be pretty easy to test, too.

Resources