Like Websphere HA queue manager, is there any such WMQFTE cluster agent - ibm-mq

In the case of failover on one server, is there any concept in WMQFTE v8.0 of a failover MQFTE agent?
I have two Linux servers in a RHEL cluster. If the first one fails, how can I manage the MQFTE transfers from the second server with the same agent name?

WMQFTE HA is strictly active/passive. WMQFTE cannot be deployed active/active. You cannot advertise the SYSTEM.FTE.* queues to an MQ cluster, and you cannot balance the load of a transfer across multiple agents at the source or destination.
WMQFTE HA can be configured either with multi-instance queue managers or with HA clustering software.

Related

IBM MQ - MQ Client Connecting to a MQ Cluster

I have an MQ cluster which has two full repository queue managers and two partial repository queue managers. A Java client needs to connect to this cluster and PUT/GET messages. Which queue manager should I expose to the Java client: one of the full repositories, or one of the partial repositories? Is there a pattern/anti-pattern, or an advantage/disadvantage, to exposing either of them?
Any advice would be very much appreciated.
Regards,
Yasothar
One pattern that is so often used that it has been adopted as a full-fledged IBM MQ feature is known as a Uniform Cluster. It is so called because every queue manager in the cluster hosts the same queues, and so it does not matter which queue manager the client application connects to.
However, to answer your direct question, the client should be given a CCDT which contains entries for each queue manager that hosts a copy of the queue the client application is using. This could be EVERY queue manager in the cluster, as per the Uniform Cluster pattern, or just some of them. It does not need to be only partial or only full repository queue managers; what matters is which queue managers host the queue.
If you are not sure what that set of queue managers is, you could go to one of your full repository queue managers and type in the following MQSC command (using runmqsc or your favourite other MQSC tool):
DISPLAY QCLUSTER(name-of-queue-used-by-application) CLUSQMGR
and the resultant list will show the hosting queue manager names in the CLUSQMGR field.
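If it helps, here is a minimal sketch of a JMS client that connects through such a CCDT, assuming the IBM MQ classes for JMS are on the classpath. The CCDT file path and the queue name (APP.REQUEST.QUEUE) are placeholders rather than anything from your environment; with the queue manager name set to an asterisk, the client accepts whichever eligible queue manager the CCDT resolves to.

```java
import java.net.URL;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CcdtClient {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Point the client at the CCDT; the path/file name is a placeholder.
        cf.setCCDTURL(new URL("file:///var/mqm/ccdt/AMQCLCHL.TAB"));
        // "*" means: accept whichever queue manager the CCDT connects me to.
        cf.setQueueManager("*");

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // APP.REQUEST.QUEUE stands in for the clustered queue the app uses.
        Queue queue = session.createQueue("queue:///APP.REQUEST.QUEUE");
        MessageProducer producer = session.createProducer(queue);

        conn.start();
        TextMessage msg = session.createTextMessage("hello from the CCDT-based client");
        producer.send(msg);
        conn.close();
    }
}
```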

JMS Cluster x Distributed Queue

Does anybody know the difference between having a single queue pointing to a JMS Server targeted at a cluster, and having a distributed queue whose member queues each point to a JMS Server targeted at an individual server in the cluster?
I also posted this question on the Oracle Community and found the answer there.

IBM WAS7 Queue Factory Configuration to an MQ Cluster

I'm trying to configure a clustered websphere application server that connects to a clustered MQ.
However, the information I have consists of details for two MQ instances with different host names, server connection channels, and queue managers, all belonging to the same MQ cluster.
On the WebSphere console I can see input fields for hostname, queue manager, and server channel, but I cannot find anywhere to specify the details of multiple MQ instances.
If I pick one of the MQ details, will MQ clustering still work? If not, how will I enable MQ clustering given the details I have?
WebSphere MQ clustering affects the behavior of how queue managers talk amongst themselves. It does not change how an application connects or talks to a queue manager so the question as asked seems to be assuming some sort of clustering behavior that is not present in WMQ.
To set up the app server with two addresses, please see Configuring multi-instance queue manager connections with WebSphere MQ messaging provider custom properties in the WAS v7 Knowledge Center for instructions on how to configure a connection factory with a multi-instance CONNAME value.
If you specify a valid QMgr name in the Connection Factory and the QMgr to which the app connects doesn't have that specific name, then the connection is rejected. Normally a multi-instance CONNAME is used to connect to a multi-instance QMgr. That is a single highly available queue manager that can be at one of two different IP addresses, so using a real QMgr name works in that case. But if the QMgrs to which your app is connecting are two distinct and differently named queue managers, which is what you described, you should specify an asterisk (a * character) as the queue manager name in your connection factory, as described in the documentation. This way the app will not check the name of the QMgr when it gets a connection.
If I pick one of the MQ details, will MQ clustering still work? If not, how will I enable MQ clustering given the details I have?
Depends on what you mean by "clustering". If you believe that the app will see one logical queue which is hosted by two queue managers, then no. That's not how WMQ clustering works. Each queue manager hosting a clustered queue gets a subset of messages sent to that queue. Any apps getting from that queue will therefore only ever see the local subset.
But if by "clustering" you intend to connect alternately to one or the other of the two queue managers and transmit messages to a queue that is in the same cluster but not hosted on either of the two QMgrs to which you connect, then yes it will work fine. If your Connection Factory knows of only one of the two QMgrs you will only connect to that QMgr, and sending messages to the cluster will still work. But set it up as described in the links I've provided and your app will be able to connect to either of the two QMgrs and you can easily test that by stopping the channel on the one it connects to and watching it connect to the other one.
Good luck!
UPDATE:
To be clear, the details provided are similar to hostname01, qmgr01, queueA, serverchannel01. And the other set is hostname02, qmgr02, queueA, serverchannel02.
WMQ Clients will connect to two different QMgrs using a multi-instance CONNAME only when...
The channel name used on both QMgrs is exactly the same
The application uses an asterisk (a * character) or a space for the QMgr name when the connection request is made (i.e. in the Connection Factory).
It is possible to have WMQ connect to one of several different queue managers where the channel name differs on each by using a Client Channel Definition Table, also known as a CCDT. The CCDT is a compiled artifact that you create using MQSC commands to define CLNTCONN channels. It contains entries for each of the QMgrs the client is eligible to connect to. Each can have a different QMgr name, host, port and channel. However, when defining the CCDT the administrator defines all the entries such that the QMgr name is replaced with the application's high-level qualifier. For example, the Payroll app wants to connect to any one of three different QMgrs. The WMQ admin defines a CCDT with three entries but uses PAY01, PAY02, and PAY03 for the QMgr names. Note that these do not need to match the actual QMgr names. The application then specifies the QMgr name as PAY*, which selects all three QMgrs in the CCDT.
Please see Using a client channel definition table with WebSphere MQ classes for JMS for more details on the CCDT.
Is an MQ cluster not similar to application server clusters?
No, not at all.
Wherein two child nodes are connected to a cluster, and an F5 URL will be used to distribute the load to each node. Does WMQ not come with a cluster URL / F5 equivalent that we can just send messages to, with the partitioning of messages being transparent?
No. The WMQ cluster provides a namespace within which applications and QMgrs can resolve non-local objects such as queues and topics. The only thing that ever connects to a WebSphere MQ cluster is a queue manager. Applications and human users always connect to specific queue managers. There may be a set of interchangeable queue managers such as with the CCDT, but each is independent.
With WAS the messaging engine may run on several nodes, but it provides a single logical queue from which applications can get messages. With WMQ each node hosting that queue gets a subset of the messages and any application consuming those messages sees only that subset.
HTTP is stateless and so an F5 URL works great. When it does maintain a session, that session exists mainly to optimize away connection overhead and tends to be short lived. WMQ client channels are stateful and coordinate both single-phase and two-phase units of work. If an application fails over to another QMgr during a UOW, it has no way to reconcile that UOW.
Because of the nature of WMQ connections, F5 is never used between QMgrs. It is only used between client and QMgr for connection balancing and not message traffic balancing. Furthermore, the absence or presence of an MQ cluster is entirely transparent to the application which, in either case, simply connects to a QMgr to get and/or put messages. Use of a multi-instance CONNAME or a CCDT file makes that connection more robust by providing multiple equivalent QMgrs to which the client can connect, but that has nothing whatever to do with WMQ clustering.
Does that help?
Please see:
Clustering
How Clusters Work
Queue manager groups in the CCDT
Connecting WebSphere MQ MQI client applications to queue managers

Is a HornetQ cluster designed to cope with losing 1 or more nodes?

I know there's HornetQ HA with Master/Backup setups. But I would like to run HornetQ in a non-master setup and handle duplicate messages myself.
The cluster setup looks perfect for this, but nowhere do I see any indication that it can meet these requirements. What happens to clients of a failed node? Do they connect to other servers?
Will a rebooted/repaired node be able to rejoin the cluster and continue distribution of its persistent messages?
Failover on clients requires a backup node at the moment. You would have to reconnect manually to another node in case of a failure.
For example: look up the connection factory on another node and connect there.
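As a rough sketch of that manual reconnection, here is a plain JMS/JNDI loop that tries each node in turn until one answers. The JNDI provider URLs, the naming context factory class, and the /ConnectionFactory binding are assumptions that would need to match your HornetQ/JBoss configuration:

```java
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ManualFailover {
    // Hypothetical JNDI provider URLs, one per cluster node.
    private static final String[] NODES = {
        "jnp://nodeA:1099", "jnp://nodeB:1099", "jnp://nodeC:1099"
    };

    /** Try each node in turn and return the first connection that succeeds. */
    public static Connection connectToAnyNode() throws Exception {
        Exception lastFailure = null;
        for (String url : NODES) {
            try {
                Properties env = new Properties();
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "org.jnp.interfaces.NamingContextFactory");
                env.put(Context.PROVIDER_URL, url);
                Context ctx = new InitialContext(env);
                ConnectionFactory cf =
                        (ConnectionFactory) ctx.lookup("/ConnectionFactory");
                Connection conn = cf.createConnection();
                conn.start();
                return conn; // connected to a live node
            } catch (Exception e) {
                lastFailure = e; // node unavailable, try the next one
            }
        }
        throw lastFailure;
    }
}
```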

Websphere MQ clustering

I'm pretty new to WebSphere MQ, so please pardon me if I am not using the right terms. We are doing a project in which we need to set up an MQ cluster for high availability.
The client application maintains a pool of connections with the queue manager for subscribers and publishers. Suppose we have two queue managers in a cluster hosting queues with the same names. Each of the queues has its own set of subscribers and publishers, which are cached by the client application. If one of the queue managers goes down, the subscribers and publishers of the queues on that queue manager will die, leaving the corresponding objects in the client application defunct.
In this case, can the following scenarios be taken care of?
1] When the first queue manager crashes, are the messages on its queues transferred to the other queue manager in the cluster?
2] When the queue manager comes up again, is there any mechanism to restore the publishers and subscribers? Currently we have written an automated recovery thread in the client application which tries to reconnect the failed publishers and subscribers. But in the cluster setup, we fear that the publishers and subscribers will reconnect to the other running queue manager, and when the crashed queue manager is restored there will be no publishers and subscribers connected to it.
Can anybody please explain how to take care of the above two scenarios?
WMQ clustering is an advanced topic. You must first do a good amount of reading up on WMQ and understand what clustering means in the WMQ world before attempting anything.
A WMQ cluster differs in many ways from traditional clusters. In a traditional active/passive cluster, data is shared between the active and passive instances of an application. At any point in time the active instance of the application is processing data; when it goes down, the passive instance takes over and starts processing. This is not the case in WMQ clusters, where the queue managers in a cluster are unique and hence the queues/topics hosted by those queue managers are not shared. You might have queues/topics with the same names in both queue managers, but since the queue managers are different, messages, topics, subscriptions etc. are not shared.
Answering to your questions.
1) No. Messages, if persistent, will remain in the crashed queue manager. They will not be transferred to the other queue manager. Since the queue manager itself is not available, nothing can be done until the queue manager is brought up.
2) No. The queue manager can't do that. It's the duty of the application to check for queue manager availability and reconnect. WMQ provides an automatic client reconnection feature, in which the WMQ client libraries automatically reconnect to the queue manager when they detect connection-broken errors. This feature is available from WMQ v7.x and above with the C and Java clients; the C# client supports the feature from v7.1.
For your high availability requirement, you could look at using the multi-instance queue manager feature of WMQ. This feature enables active/passive instances of the same queue manager to run on two different machines. The active instance of the queue manager handles client connections while the passive instance is on standby. Both instances share data and logs. Once the active instance goes down, the passive instance becomes active, and you will have access to all the persistent messages that were in the queues before the active instance went down.
Read through the WMQ InfoCenter for more on Multi instance queue manager.
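As an illustration of the automatic client reconnection and multi-instance setup described above, here is a minimal sketch using the IBM MQ classes for JMS; the host names, port, channel, and queue manager name are placeholders for an active/standby pair:

```java
import javax.jms.Connection;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ReconnectingClient {
    public static Connection connect() throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Active and standby hosts of the multi-instance queue manager.
        cf.setConnectionNameList("activehost(1414),standbyhost(1414)");
        cf.setChannel("APP.SVRCONN");
        // The multi-instance QMgr has one real name at either address.
        cf.setQueueManager("QM1");
        // Ask the client libraries to reconnect automatically if the connection breaks.
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
        cf.setClientReconnectTimeout(300); // seconds to keep retrying
        return cf.createConnection();
    }
}
```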
To add to Shashi's answer, to get the most out of WMQ clustering you need to have a network hop between senders and receivers of messages. WMQ clustering is about how QMgrs talk among themselves. It has nothing to do with how client apps talk to QMgrs and does not replicate messages. In a cluster when a message has to get from one QMgr to another, the cluster figures out where to route it. If there are multiple clustered instances of a single destination queue, the message is eligible to be routed to any of them. If there is no network hop between senders and receivers, then messages don't need to leave the local QMgr and therefore WMQ clustering behavior is never invoked, even though the QMgrs involved may participate in the cluster.
In a conventional WMQ cluster architecture, the receivers all listen on multiple instances of the same queue, with the same name, spread across multiple QMgrs. The senders have one or more QMgrs where they can connect and send requests (fire-and-forget), possibly awaiting replies (request-reply). Since the receivers of the messages provide some service, I call their QMgrs "Service Provider QMgrs." The QMgrs where the senders of messages live are "Service Consumer" QMgrs because these apps are consumers of services.
Note that consumers of services - the things sending request messages - fail over. Things listening on service endpoint queues and providing services do NOT fail over. This is because of the need to make sure every active service endpoint queue is always served. Typically each app instance holds an input handle on two or more queue instances. This way a QMgr can go down and all app instances remain active. If an app instance goes down, some other app instance continues to serve its queues. This affinity of service providers to specific QMgrs also enables XA transactionality if needed.
The best way I've found to explain WMQ HA is summed up in a slide from the IMPACT conference:
A WebSphere MQ cluster ensures that a service remains available, even though an instance of a clustered queue may be unavailable. New messages in the cluster will route to the remaining queue instances. A hardware cluster or multi-instance QMgr (MIQM) provides access to existing messages. When one side of the active/passive pair goes down, there is a brief outage on that QMgr only while the failover occurs, then the secondary node takes over and makes any messages on the queues available again. A network that combines both WMQ clusters and hardware clusters/MIQM provides the highest level of availability.
Keep in mind that in none of these configurations are messages replicated across nodes. A WMQ message always has a single physical location. For more on this aspect, please see Thoughts on Disaster Recovery.
