JMS Cluster vs. Distributed Queue

Does anybody know the difference between having a single queue on a JMS server that is targeted to the cluster, and having a distributed queue whose member queues each point to a JMS server targeted to an individual server in the cluster?

I also posted this question on the Oracle Community and found the answer there.

Related

IBM MQ - MQ Client Connecting to a MQ Cluster

I have an MQ cluster with two full repository queue managers and two partial repository queue managers. A Java client needs to connect to this cluster and PUT/GET messages. Which queue manager should I expose to the Java client: one of the full repositories, or one of the partial repositories? Is there a pattern/anti-pattern, or an advantage/disadvantage, to exposing either of them?
Any advice would be very much appreciated.
Regards,
Yasothar
One pattern that is so often used that it has been adopted as a full-fledged IBM MQ feature is known as a Uniform Cluster. It is so called because every queue manager in the cluster hosts the same queues, and so it does not matter which queue manager the client application connects to.
However, to answer your direct question: the client should be given a CCDT which contains entries for each queue manager that hosts a copy of the queue the client application is using. This could be EVERY queue manager in the cluster, as per the Uniform Cluster pattern, or just some of them. The set does not need to be only partial or only full repository queue managers; you are simply looking for the queue managers hosting the queue.
If you are not sure what that set of queue managers is, you could go to one of your full repository queue managers and type in the following MQSC command (using runmqsc or your favourite other MQSC tool):
DISPLAY QCLUSTER(name-of-queue-used-by-application) CLUSQMGR
and the resultant list will show the hosting queue manager names in the CLUSQMGR field.
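For illustration, here is a minimal sketch of a JMS client connecting through a CCDT using the IBM MQ classes for JMS. The CCDT path and the queue manager group name (*ANYQM) are assumptions for this example; a queue manager name beginning with * tells the client it may connect to any matching queue manager.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class CcdtClient {
        public static void main(String[] args) throws JMSException {
            JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
            JmsConnectionFactory cf = ff.createConnectionFactory();
            // Point the client at a CCDT holding one entry per hosting queue manager
            // (the path is hypothetical).
            cf.setStringProperty(WMQConstants.WMQ_CCDTURL, "file:///var/mqm/ccdt/AMQCLCHL.TAB");
            // '*' prefix: connect to any queue manager in the group (name is hypothetical).
            cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "*ANYQM");
            cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
            Connection conn = cf.createConnection();
            conn.start();
            // ... create a session and PUT/GET messages as usual ...
            conn.close();
        }
    }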

How can I know when an MQ queue manager was restarted?

I have a set of IBM MQ queue managers and would like to know when one of them was restarted or when it automatically failed over to the standby instance.
The queue managers are running on AIX.
Regards,
You can find this information in the queue manager's AMQERR01.LOG, or by running DIS QMSTATUS ALL.
Queue Manager START events are emitted (provided they are enabled), and include information about whether it was a multi-instance failover. See https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.1.0/com.ibm.mq.ref.doc/q049990_.htm
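As a sketch, checking the last start time and enabling start/stop events could look like this in runmqsc (standard MQSC; the exact fields shown vary by MQ version):

    DISPLAY QMSTATUS STARTDA STARTTI
    ALTER QMGR STRSTPEV(ENABLED)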

JMS with clustered nodes

I have two clustered managed servers running on WebLogic, with separate JMS servers (JMS server1 and JMS server2) running on each managed server. The problem is that in the application properties file we hardcoded only the JNDI name of JMS server1 and passed that to the application. So the applications running on both nodes actually use one fixed JMS server, which is not truly distributed and clustered: if JMS server1 is down, the whole application is down.
My question is: how can the application dynamically find a JMS server in the above scenario? Can you please point me in a direction? Thanks!
It's in the Weblogic docs at: http://docs.oracle.com/cd/E14571_01/web.1111/e13738/best_practice.htm#CACDDFJD
Basically you create a comma-separated list of servers, and the JMS connection logic should automatically be able to handle the case when one of the servers is down:
e.g.
t3://hostA:7001,hostB:7001
When you use a property like jms.jndi.provider.url=t3://hostA:31122,hostA:31124, it tells WLS to connect to either hostA:31122 or hostA:31124.
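A minimal sketch of a standalone client using such a URL list follows; the connection factory JNDI name jms/MyCF is a hypothetical name for this example:

    import java.util.Hashtable;
    import javax.jms.ConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class JndiLookup {
        public static ConnectionFactory lookupConnectionFactory() throws NamingException {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            // Comma-separated URL list: JNDI tries the first host, then the next.
            env.put(Context.PROVIDER_URL, "t3://hostA:31122,hostA:31124");
            InitialContext ctx = new InitialContext(env);
            return (ConnectionFactory) ctx.lookup("jms/MyCF"); // hypothetical JNDI name
        }
    }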
Note that your JMS client is connected to only one host at any given time.
When you shut down hostA, the connection between the JMS client and the server is cut abruptly, resulting in an exception. Your code will have to handle this exception gracefully and periodically attempt to reconnect to WLS, so that it eventually connects to hostB.
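As a rough, non-WebLogic-specific sketch, that retry logic could hang off a standard JMS ExceptionListener (the retry interval and threading model here are arbitrary choices):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;

    public class ReconnectingClient {
        private final ConnectionFactory cf; // looked up via JNDI as above
        private volatile Connection connection;

        public ReconnectingClient(ConnectionFactory cf) { this.cf = cf; }

        public void connect() {
            while (true) {
                try {
                    connection = cf.createConnection();
                    // On abrupt disconnect, reconnect; the URL list in the JNDI
                    // environment lets the new connection land on the surviving host.
                    connection.setExceptionListener(e -> new Thread(this::connect).start());
                    connection.start();
                    return;
                } catch (JMSException e) {
                    try { Thread.sleep(5000); } // retry periodically
                    catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }
    }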
WLS will internally round-robin the requests if more than one instance of the JMS client is running.
When you use an MDB as the JMS client and deploy it to a cluster with such a URL, one MDB instance will connect to one host and the other instance to the other host. MDBs also inherently have the ability to reconnect to the JMS destination periodically.
An easy solution to your problem could be to:
1) Set jms.jndi.provider.url=t3://hostA:31122,hostA:31124
2) Run two instances of the JMS client code, so one connects to port 31122 and the other to 31124
3) Set Forward-Delay on the JMS queue so that messages don't sit unconsumed in a queue for long, and instead get forwarded to the other queue, which has an active consumer (a config sketch follows).
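A hedged sketch of what the Forward-Delay setting might look like in a JMS module descriptor; the element names follow my reading of the weblogic-jms schema, and the queue name and JNDI name are assumptions:

    <uniform-distributed-queue name="MyUDQ">
        <jndi-name>jms/MyUDQ</jndi-name>
        <!-- Forward messages to a member with active consumers after 10 seconds -->
        <forward-delay>10</forward-delay>
    </uniform-distributed-queue>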
I am updating my progress here instead of adding more comments. I have tested using a standalone JMS client, changing the JMS provider URL in the properties file from t3://hostA:7001 to t3://hostA:7001,hostB:7001. The failover is handled automatically by WLS, with no code change. The exception I got above was caused by using wlclient.jar; it works after changing to wlfullclient.jar.
I followed this link to generate the wlfullclient.jar.
Thanks everyone!

Is a HornetQ cluster designed to cope with losing one or more nodes?

I know there's HornetQ HA with Master/Backup setups. But I would like to run HornetQ in a non-master setup and handle duplicate messages myself.
The cluster setup looks perfect for this, but nowhere do I see any indication that it can handle these requirements. What happens to clients of a failed node? Do they connect to the other servers?
Will a rebooted/repaired node be able to rejoin the cluster and continue distribution of its persistent messages?
Client failover currently requires a backup node. Without one, you would have to reconnect manually to another node in case of a failure.
For example: get a connection factory for another node and connect to it.
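A minimal sketch of such manual failover with the HornetQ JMS API follows; the node hostnames and port are assumptions for the example:

    import java.util.HashMap;
    import java.util.Map;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import org.hornetq.api.core.TransportConfiguration;
    import org.hornetq.api.jms.HornetQJMSClient;
    import org.hornetq.api.jms.JMSFactoryType;
    import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

    public class ManualFailover {
        // Hypothetical cluster nodes.
        private static final String[] HOSTS = {"nodeA", "nodeB"};

        public static Connection connectToAnyNode() throws JMSException {
            JMSException last = null;
            for (String host : HOSTS) {
                Map<String, Object> params = new HashMap<>();
                params.put("host", host);
                params.put("port", 5445);
                TransportConfiguration tc =
                        new TransportConfiguration(NettyConnectorFactory.class.getName(), params);
                ConnectionFactory cf =
                        HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, tc);
                try {
                    return cf.createConnection(); // first reachable node wins
                } catch (JMSException e) {
                    last = e; // node is down: try the next one
                }
            }
            throw last;
        }
    }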

All JMS Messages from a Distributed Queue across the Cluster

We are currently using WebLogic and distributed queues. I know from the documentation that distributed queues allow you to retrieve a connection to any of the queues across a cluster by using the global JNDI name. It seems one of the main pieces of functionality a distributed queue gives you is load-balanced connections across multiple managed servers. We have 4 managed servers (two on each physical machine, communicating over multicast), and each managed server has an individual JMS server configured with its own data store.
I am 99% certain I already know the answer to this, but it appears that if you want to consume a message off a queue that exists on each managed server in the cluster, you can only pull messages off the queue you are connected to. So if there is a message on managed server 4 and I connect to managed server 1, I won't see the message sitting on managed server 4's queue.
So is there a way in Java EE or WLS to consume messages from all the nodes of a queue (across the cluster), like a view into every instance of the queue on each managed server? It doesn't appear so; the documentation makes it seem like this is not possible, as does this video (around minute 5):
http://www.youtube.com/watch?v=HAKixK_wp0Q
No, you cannot consume a message that was delivered to one managed server while your client is connected to another managed server of the same cluster.
Here's how it works.
When using a UDQ, WLS provides a JNDI name that resolves internally into 4 distinct JNDI names, one per managed server; the JMS servers on each of the managed servers are distinct.
When you post a message using the UDQ JNDI name, it goes to one of the 4 managed servers according to the algorithm you chose and other configuration on your connection factory.
When a message consumer listens to the UDQ, it gets pinned to the JMS server on one of the managed servers. It has no visibility of the messages on the other servers.
Usually a UDQ is used in scenarios where you want messages to be consumed concurrently by more than one managed server. You would normally deploy an MDB to the cluster, meaning the MDB is deployed to each managed server, and each of these consumes messages from its local JMS server, as the sketch below shows.
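A sketch of that pattern (the destination JNDI name jms/MyUDQ is a hypothetical example; on WebLogic the destination is often mapped in weblogic-ejb-jar.xml instead):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Deployed to the cluster, so each managed server gets an MDB pool
    // that drains its local member of the distributed queue.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/MyUDQ") // hypothetical JNDI name
    })
    public class UdqListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    System.out.println("Received: " + ((TextMessage) message).getText());
                }
            } catch (JMSException e) {
                throw new RuntimeException(e); // trigger redelivery
            }
        }
    }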
I believe you can if your message store is configured to use a database. If so, I would think removing an item from the queue would remove it from the shared DB table, i.e. all the JMS servers point to the same DB instance and table. That should be pretty easy to test, too.
