JMS with clustered nodes

I have two clustered managed servers running on WebLogic, with a separate JMS server (JMS server 1 and JMS server 2) running on each managed server. The problem is that in the application properties file we hardcoded only the JNDI name of JMS server 1, so the applications running on both nodes actually use one fixed JMS server, which is not truly distributed or clustered. If JMS server 1 goes down, the whole application goes down.
My question is: how can the application dynamically find a JMS server in the above scenario? Can you please point me in a direction? Thanks!

It's in the WebLogic docs at: http://docs.oracle.com/cd/E14571_01/web.1111/e13738/best_practice.htm#CACDDFJD
Basically you create a comma-separated list of servers, and the JMS connection logic should automatically handle the case when one of the servers is down, e.g.:
t3://hostA:7001,hostB:7001

When you use a property like jms.jndi.provider.url=t3://hostA:31122,hostA:31124
it tells WLS to connect to either hostA:31122 or hostA:31124.
Note that your JMS client is connected to only one host at any given time.
When you shut down hostA, the connection between the JMS client and the server is cut abruptly, resulting in an exception; your code will have to handle this exception gracefully and attempt to connect to WLS again periodically to ensure it connects to hostB.
WLS internally will round-robin the requests if more than one instance of the JMS client is running.
When using an MDB as the JMS client, deploying it to a cluster, and using such a URL, one MDB instance will connect to one host and the other instance to the other host. An MDB also inherently has the ability to reconnect periodically to the JMS destination.
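For illustration, a minimal standalone-client sketch of that reconnect handling; the connection factory JNDI name (jms/MyCF) is a placeholder, not from the original setup:

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class FailoverJmsClient {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Comma-separated list: the lookup succeeds against whichever host is reachable
        env.put(Context.PROVIDER_URL, "t3://hostA:7001,hostB:7001");
        Context ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF"); // placeholder JNDI name
        Connection con = cf.createConnection();
        // Fires when the host this client is pinned to goes down;
        // re-running the lookup/connect sequence here implements the failover
        con.setExceptionListener(e -> System.err.println("Connection lost, reconnecting: " + e));
        con.start();
    }
}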
An easy solution to your problem could be to:
1) Set jms.jndi.provider.url=t3://hostA:31122,hostA:31124
2) Have two instances of the JMS client code running, so one connects to port 31122 and the other to 31124
3) Set a Forward Delay on the JMS queue so that messages don't sit unconsumed in a queue for long and instead get forwarded to the other queue, which has an active consumer

I am updating my progress here instead of adding more comments. I have tested with a standalone JMS client by changing the JMS provider URL in the properties file from t3://hostA:7001 to t3://hostA:7001,hostB:7001. The failover is handled automatically by WLS, with no code change. The exception I mentioned above was caused by using wlclient.jar; it works after switching to wlfullclient.jar.
I followed this link to generate wlfullclient.jar.
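(For reference, wlfullclient.jar is built with the JarBuilder tool shipped under the server's lib directory, as described in the WebLogic client documentation:

cd WL_HOME/server/lib
java -jar wljarbuilder.jar
)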
Thanks everyone!

Related

Payara 4 JMS - Any way to log when a connection to MQ Server is created or dropped?

We currently have issues after connecting a new MQ Server (IBM MQ v9) to our service.
The current expectation is that we have two issues:
The connection establishment at the MQ server takes too much time
We assume that on every message sent, a new physical connection is established instead of one being taken from the Payara pool
Especially to prove point 2: is there any way to find out or log every connection establishment done by the resource adapter?
You can use the Application activity trace of IBM MQ.
There is also a very nice presentation available to find out more about how to use the Activity Trace.
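Complementary to the broker-side trace, drops can also be logged from the application with a plain JMS ExceptionListener. A minimal sketch, assuming an unpooled connection (with a pooled factory the physical connections belong to the resource adapter, so this only observes the logical connection; the lookup name is a placeholder):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.InitialContext;

public class MqConnectionMonitor {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Placeholder lookup name; in Payara this would be the pool's JNDI name
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/mqConnectionFactory");
        Connection con = cf.createConnection();
        System.out.println("Connection created at " + System.currentTimeMillis());
        // Invoked by the provider when the connection breaks outside an API call
        con.setExceptionListener(e -> System.err.println("Connection dropped: " + e.getMessage()));
        con.start();
    }
}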

JMS Listener Not Picking Up Message From the Queue

I am planning to make a code change to an existing application which has a JMS listener.
To test whether the listener works on my local server, I deployed the application to my localhost and shut down the other containers that run the same application.
But my local listener won't pick up any messages. It is confirmed that the other containers work fine and can pick up and process new messages from the queue.
Can you think of any possible cause of this?
Way too general, too many missing points... but some things to look at:
if the message queue is on a different server, can you ping it from the local machine? It could be that the development environment can't see the production server, for example
does a netstat -n show the correct port number? You should see a remote connection to the port on which the message provider itself is listening
can you verify that the messaging provider sees you as a consumer? I use ActiveMQ; I can look at the management console, dive into a specific queue, and view the active consumers. Most providers will have something similar
are you running in an identical environment? Running a listener in a JEE environment where the queue is a JNDI reference might differ from running in a debugger where you need the actual queue name
is any JMS filtering going on, where the selector for your local environment doesn't match what's already on the queue?
is any transaction manager stuff getting in the way?
Again, just throwing stuff to see what sticks to the wall, but these are the really obvious things.
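If all of that checks out, a bare-bones standalone consumer can at least confirm the queue is reachable outside the container. A sketch with placeholder JNDI names (jndi.properties is assumed to point at the broker):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class QueueSmokeTest {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // placeholder
        Queue queue = (Queue) ctx.lookup("jms/TestQueue"); // placeholder
        Connection con = cf.createConnection();
        try {
            con.start();
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Deliberately no message selector, so a selector mismatch in the real listener is bypassed
            Message m = session.createConsumer(queue).receive(5000);
            System.out.println(m == null ? "No message within 5s" : "Got: " + m);
        } finally {
            con.close();
        }
    }
}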
Thanks Scott for answering my question.
I finally found that Eclipse had somehow created another container and my listener was deployed to it. That's why I couldn't see it working in my current container.

JMS connection is timing out in Spring Batch remote partitioning on the master side

I am trying to implement Spring Batch remote partitioning using Spring Integration. I trigger the master from a standalone app, and the slaves run on a JBoss EAP 6.1 cluster. I am able to trigger the job and I can see the slaves also get triggered, but after some time the master's JMS connection times out. Can someone shed some light on how to configure these timeout settings?
2636699 21/11 16:59:55,580[org.springframework.jms.listener.DefaultMessageListenerContainer#0-378] WARN jms.listener.DefaultMessageListenerContainer.handleListenerSetupFailure - Setup of JMS message listener invoker failed for destination 'queue-screening-replies-partitioning' - trying to recover. Cause: Session is closed
2636699 21/11 16:59:55,580[org.springframework.jms.listener.DefaultMessageListenerContainer#0-378] INFO jms.listener.DefaultMessageListenerContainer.refreshConnectionUntilSuccessful - Successfully refreshed JMS Connection
I am getting these kinds of errors.
Thanks in advance..
--M K
Either the network or your broker is timing out the connection.
Consult your broker documentation to enable some kind of heartbeats/timeouts to keep the connection open.
Or, it would be better to reduce your partition sizes (increase the number of partitions) so they complete in a reasonable time.
EDIT:
Also, configure the outbound gateway with a <reply-listener/> and use a fixed, named, reply queue and the gateway will be able to recover from the dropped connection.
But I would still recommend smaller partitions in general.
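A sketch of that wiring in the Spring Integration JMS namespace; the reply queue name is taken from the log above, while the request queue name and bean ids are placeholders:

<int-jms:outbound-gateway id="partitionGateway"
        connection-factory="connectionFactory"
        request-destination-name="queue-screening-requests-partitioning"
        reply-destination-name="queue-screening-replies-partitioning">
    <int-jms:reply-listener />
</int-jms:outbound-gateway>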

ActiveMQ Failover Transport

I am using the active mq failover configuration in my Spring Web based application. There are 4 production active mq boxes and my connection URL for a message producer looks like this
failover:(tcp://hosta:61616,tcp://hostb:61616,tcp://hostc:61616,tcp://hostd:61616)
The failover piece works fine: when a producer tries to send a message, a connection gets established to any of the 4 nodes, and if that fails, it moves on to another node.
All is fine here. But if the second host also fails, the next one does not get picked up and an exception is sent to the client.
There is only one level of failover that I am able to witness. Do we have to make any additional configuration to ensure all ActiveMQ hosts are tried before an exception is thrown to the client?
Any help is appreciated. Thanks.
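For completeness, the failover transport accepts reconnection options directly on the URL. The option names below come from the ActiveMQ failover transport reference, though their defaults and semantics vary by client version, so verify against yours (in newer clients -1 means retry indefinitely):

failover:(tcp://hosta:61616,tcp://hostb:61616,tcp://hostc:61616,tcp://hostd:61616)?maxReconnectAttempts=-1&initialReconnectDelay=100&randomize=true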

All JMS Messages from a Distributed Queue across the Cluster

Currently using WebLogic and distributed queues. I know from the documentation that distributed queues allow you to retrieve a connection to any of the queues across a cluster by using the global JNDI name. It seems one of the main pieces of functionality a distributed queue gives you is load-balanced connections across multiple managed servers. So we have 4 managed servers (two on each physical machine, communicating over multicast), and each managed server has an individual JMS server configured with its own data store.
I am 99% certain I already know the answer to this, but it appears that if you want to consume a message off a queue that exists on each managed server in the cluster, you cannot technically pull a message off any of the queues, only off the queue to which you are connected. So if I have a message on managed server 4 and I connect to managed server 1, I won't see the messages on the queue on managed server 4.
So is there a way in Java EE or WLS to consume messages from all the nodes of a queue across the cluster, like a view into every instance of the queue on each managed server? It doesn't appear so, and the documentation makes it seem like this is not possible, as does this video (around minute 5):
http://www.youtube.com/watch?v=HAKixK_wp0Q
No, you cannot consume a message that is delivered to one managed server when your client is connected to another managed server of the same cluster.
Here's how it works.
When using a UDQ, WLS provides a JNDI name that resolves internally into 4 distinct JNDI names, one for each managed server; the JMS servers on each of the managed servers are distinct.
When you post a message using the UDQ JNDI name, it gets to one of the 4 managed servers according to the algorithm you chose and the other configuration done in your connection factory.
When a message consumer listens to the UDQ, it gets pinned to the JMS server on one of the managed servers. It has no visibility of messages on the other servers.
Usually a UDQ is used in scenarios where you want messages to be consumed concurrently by more than one managed server. You would normally deploy an MDB to the cluster, meaning the MDB is deployed to each of the managed servers, and each of these consumes messages from its local JMS server.
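A sketch of that MDB approach; the JNDI name and class name are placeholders, and on WebLogic the destination is often wired via weblogic-ejb-jar.xml rather than via mappedName:

import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(mappedName = "jms/MyDistributedQueue") // placeholder UDQ JNDI name
public class UdqListener implements MessageListener {
    public void onMessage(Message message) {
        // Each managed server's MDB instance consumes from its local member of the UDQ
        System.out.println("Received: " + message);
    }
}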
I believe you can if your message store is configured to use a database. If so, then I would think removing an item from the queue would remove it from the shared DB table, i.e. all JMS servers pointing to the same DB instance and table. That should be pretty easy to test, too.
