ActiveMQ crashes after some time - jms

I am using ActiveMQ 5.10.0. I have many consumers that connect to ActiveMQ over STOMP, about 2000 of them, each subscribed to its own queue. My problem is that ActiveMQ always crashes or hangs after a couple of hours, and the log shows no errors, which makes it difficult to trace where the problem is. As far as I can tell, new consumers cannot create connections or subscriptions to queues after ActiveMQ has crashed, and I can't open the web console either. Is there any way to improve ActiveMQ's ability to handle many connections, or any other performance tuning I could apply?

In order to debug the exact issue, try:
Enable DEBUG logging in ActiveMQ.
Add a TransportListener to the ActiveMQ connection or connection factory, logging connection interrupt, resume and exception events (see the sketch below).
Refer to http://activemq.apache.org/maven/apidocs/org/apache/activemq/transport/TransportListener.html
Use JMX monitoring via JConsole or JVisualVM; a thread dump might also help in debugging this.
We faced a similar issue in a production environment, where producers were able to produce but consumers stopped consuming after 24-48 hours of everything running fine. Restarting ActiveMQ got the consumers consuming messages again. We haven't found the exact cause/fix yet, but recently added the debugging steps above.
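For reference, here is a minimal sketch of attaching a TransportListener to an ActiveMQ JMS connection. The broker URL and the System.err logging are placeholders; swap in your own broker address and logging framework:

    import java.io.IOException;
    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnection;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.transport.TransportListener;

    public class DebugTransportListener {
        public static void main(String[] args) throws Exception {
            // Placeholder broker URL
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();

            // Attach the listener to the concrete ActiveMQ connection so transport
            // interrupts, resumes and exceptions are logged as they happen.
            ((ActiveMQConnection) connection).addTransportListener(new TransportListener() {
                public void onCommand(Object command) {
                    // Called for every inbound command; usually left quiet
                }
                public void onException(IOException error) {
                    System.err.println("Transport exception: " + error);
                }
                public void transportInterupted() { // single 'r' as in the ActiveMQ API
                    System.err.println("Transport interrupted");
                }
                public void transportResumed() {
                    System.err.println("Transport resumed");
                }
            });

            connection.start();
        }
    }

The listener can also be set on the ActiveMQConnectionFactory so that every connection it creates is covered.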

Related

how to limit number of connections to IBM MQ

I have a Spring Boot based messaging app sending/receiving JMS messages to/from an IBM MQ queue manager.
Basically, it uses MQConnectionFactory to set up the connection to IBM MQ, plus a JmsPoolConnectionFactory from messaginghub:pooledjms to enable a JMS connection pool, since pooling was removed from MQConnectionFactory in IBM MQ 7.x.
The app uses two different approaches to work with JMS. A "correct" one runs a JMSListener to receive messages and then sends a response to each message using JmsTemplate.send(). And there is a second, "troubling", approach where the app sends requests using JmsTemplate.send() and waits for the response using JmsTemplate.readByCorrelId() until it is received or times out.
I say troubling because this makes JMS sessions last longer when the response is delayed, and could easily exhaust the IBM MQ connection limit. Unfortunately, I cannot rewrite the app to the first approach at the moment to resolve the issue.
Now I want to restrict the number of connections in the pool. Of course, delayed requests will then fail, but the IBM MQ connection limit is more important at the moment, so this is acceptable. The problem is that even if I disable the JmsPoolConnectionFactory, it seems that MQConnectionFactory still opens multiple connections to the queue manager.
While profiling the app I see multiple threads RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12433875[...] created by the JMSCCMasterThreadPool, and the corresponding connections to the queue manager in MQ Explorer. I wonder why there are so many of them even though connection pooling has been removed from MQConnectionFactory? I would expect it to open and reuse a single connection, but that is not what happens in my test.
Disabling "troubling" JmsTemplate.readByCorrelId() and leaving only "correct" way in the app removes these multiple connections (and the waiting threads of course).
Replacing JmsPoolConnectionFactory with SingleConnectionFactory has not effect on the issue.
Is there any way to limit those connections? Is it possible to control max threads in the JMSCCMasterThreadPool as a workaround?
Because it affects other applications, your MQ admins probably want you not to exhaust the overall queue manager's connection limit (the MaxChannels and MaxActiveChannels parameters in qm.ini). They can help you by defining an MQ channel used exclusively by your application. With that, they can limit the number of connections from your application using the channel's MAXINST / MAXINSTC parameters. You will get an exception when this number is exhausted, which, as you say, is appropriate. Other applications won't be affected anymore.
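On the application side, here is a rough sketch of capping the pool, assuming the messaginghub pooled-jms JmsPoolConnectionFactory wraps the MQConnectionFactory (host, port, queue manager and channel names are placeholders):

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

    public class PooledMqFactory {
        public static JmsPoolConnectionFactory create() throws Exception {
            MQConnectionFactory mq = new MQConnectionFactory();
            mq.setHostName("mqhost");      // placeholder host
            mq.setPort(1414);              // placeholder port
            mq.setQueueManager("QM1");     // placeholder queue manager
            mq.setChannel("APP.SVRCONN");  // placeholder channel
            mq.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
            pool.setConnectionFactory(mq);
            pool.setMaxConnections(5);     // hard cap on pooled JMS connections
            return pool;
        }
    }

Note that this only caps what the pool hands out; as observed in the question, the MQ client can still open additional TCP conversations of its own, so the channel-level MAXINST / MAXINSTC limit described above remains the firmer guard.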

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better way to confirm availability than any administrative command, because it also confirms that your application, and its security context, is correct and able to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if it works, the queue manager is up and running.
Your point about increasing the timer before attempting to reconnect after a failure is well made. It doesn't help anyone if you hammer the queue manager with lots of repeated, closely spaced connection attempts until it is ready to accept your connection. Still, anything that is going to test the availability of the queue manager needs to ultimately connect to it, so, very simply, just connect.
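A minimal sketch of "just connect" in Java, assuming any JMS ConnectionFactory (for example an MQConnectionFactory); on failure, the JMSException's linked exception normally carries the MQ reason code:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;

    public class MqAvailabilityCheck {
        // Returns true if a connection could be opened and started.
        public static boolean canConnect(ConnectionFactory factory) {
            Connection connection = null;
            try {
                connection = factory.createConnection();
                connection.start();
                return true;
            } catch (JMSException e) {
                System.err.println("Connection attempt failed: " + e.getMessage());
                if (e.getLinkedException() != null) {
                    // For IBM MQ this is usually an MQException with an MQRC_* reason code
                    System.err.println("Reason: " + e.getLinkedException());
                }
                return false;
            } finally {
                if (connection != null) {
                    try { connection.close(); } catch (JMSException ignore) { }
                }
            }
        }
    }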

OSB Proxy Service using MQ Transport abruptly stops polling MQ

I have a proxy service of the Messaging Service type which polls messages from an MQ queue. This service sometimes abruptly gets stuck. There are no error or warning logs in the log files. If we change the polling interval of the service, it starts running again.
Did anybody face a similar kind of issue? Please advise how to fix this bug.

jms order of message delivery with high availability

I have set up a uniform distributed queue with WebLogic Server 12c. I am trying to achieve ordered delivery and high availability with the JMS distributed queue. In my prototype test deployment I have two managed servers in the cluster, say managed_server1 and managed_server2. Each managed server hosts a JMS server, namely jms server1 and jms server2. I have configured the JMS servers with a JDBC persistent store. I have enabled server affinity.
I have a producer running, e.g. java queuproducer t3://managed_server1. I send out 4 messages. From the WebLogic monitoring console I see there are 4 messages in the queue, since there are no consumers on the queue yet.
Now I shut down managed_server1.
I bring up a consumer to listen on java queuconsumer t3://managed_server2. This consumer cannot consume the messages, since the producer sent all of them to jms server1, and it is down.
I bring managed_server1 back up, start a consumer listening on t3://managed_server1, and I can get all the messages.
Here is my problem: if managed_server1 goes down and never comes back up, do I lose all my messages? Also, if there is another producer sending messages to java queuproducer t3://managed_server2, then the order of messages by time across these producers is not guaranteed.
I am a little lost; am I missing something? Can unit-of-order help me overcome this? Or should I use a distributed topic instead of a distributed queue, where all the JMS servers receive all the messages from the producers? There is only one consumer in my application, though, and if the JMS server my consumer is listening on fails and I switch over to the other JMS server, I might start getting messages from the beginning rather than from where I left off.
Any suggestions regarding this would be helpful.
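For reference, a rough sketch of the kind of producer described above, assuming a JNDI-registered connection factory and distributed queue (the t3 URL and JNDI names are placeholders):

    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class QueueProducer {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://managed_server1:7001"); // placeholder member URL

            InitialContext ctx = new InitialContext(env);
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // placeholder JNDI name
            Queue udq = (Queue) ctx.lookup("jms/MyDistributedQueue");                         // placeholder JNDI name

            Connection connection = cf.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(udq);
            for (int i = 1; i <= 4; i++) {
                producer.send(session.createTextMessage("message " + i));
            }
            connection.close();
        }
    }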
Good question!
"Here is my problem: if managed_server1 goes down and never comes back up, do I lose all my messages?"
Ans - No, you do not lose all your messages; they are stored in the JDBC store configured for the JMS server deployed on managed_server1. If you want the messages sent to managed_server1 to be consumed from managed_server2, you need to configure JMS migration.
"Also, if there is another producer sending messages to java queuproducer t3://managed_server2, then the order of messages by time across these producers is not guaranteed. Can unit-of-order help me overcome this?"
Ans - If you want the messages to be consumed strictly in a certain order, then you will have to make use of unit-of-order (UOO). When messages are sent using UOO, they are sent to one of the several UDQ destinations; if that destination fails midway and migration is enabled, the messages are migrated to the next UDQ destination, and new UOO messages are also delivered to the new destination.
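A minimal sketch of tagging messages with a unit-of-order via WebLogic's WLMessageProducer extension (the UOO name "order-group-1" is a placeholder):

    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import weblogic.jms.extensions.WLMessageProducer;

    public class UooSender {
        // Sends a message under a named unit-of-order so that all messages in that
        // unit are routed to the same UDQ member and delivered in send order.
        public static void sendInOrder(Session session, Queue udq, String text) throws Exception {
            MessageProducer producer = session.createProducer(udq);
            ((WLMessageProducer) producer).setUnitOfOrder("order-group-1"); // placeholder UOO name
            TextMessage message = session.createTextMessage(text);
            producer.send(message);
            producer.close();
        }
    }

Setting the JMS_BEA_UnitOfOrder string property on the message is another way to assign the unit, if casting the producer is not convenient.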
Useful links -
http://www.youtube.com/watch?v=B9J7q5NbXag
http://www.youtube.com/watch?v=_W3EJ8p35lI
Hope this helps.

NServiceBus DBMS connection timeout

I use NServiceBus with Oracle queues (OAQ) instead of MSMQ.
I have a problem working with a DBMS server that is shut down every day at the same time.
In particular, when my NServiceBus host can't get the DBMS connection, it starts logging errors.
When the DBMS is restarted, my host restarts or not, seemingly at random. However, restarting my host manually makes everything OK again.
Another detail: when my NServiceBus host can't restart, it logs a 'connection timeout' message every 15 seconds.
What is the behavior of NServiceBus when it is reading from a queue and the DBMS crashes? What could I do to solve this problem?
thank you,
R
I'm afraid the problem you're facing is the result of the design of your system. By having the queues in the DB, when the DB becomes unavailable, so do the queues. NServiceBus assumes that it is always able to communicate with its queues, as is the case when using a distributed/federated queuing system like MSMQ.
You can look at what some people in the community have done to combat this same problem when they were using IBM MQ (http://code.google.com/p/nservicebuswmq/) - ultimately falling back to MSMQ under those conditions and then syncing back up with MQ when it came back online.
