OSB Proxy Service using MQ Transport abruptly stops polling MQ - oracle

I have a proxy service of the Messaging Service type which polls messages from an MQ queue. This service sometimes gets stuck abruptly. There are no error or warning entries in the log files. If we change the Polling Interval of the service, it starts running again.
Has anybody faced a similar kind of issue? Please advise how to fix this.

Related

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better way to confirm than any administrative command, because it also confirms that your application, and its security context, is correct and able to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application: if it works, the queue manager is up and running.
Your point about increasing the timer before attempting to reconnect after a failure is well made. It does not help anyone if you hammer the queue manager with lots of closely spaced connection attempts until it is ready to accept your connection. Still, anything that is going to test the availability of the queue manager ultimately needs to connect to it, so very simply, just connect.
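For illustration, a minimal sketch of that approach using the IBM MQ classes for JMS; the host, port, channel and queue manager names are placeholders. The availability check is simply the connection attempt itself, with the MQ reason code pulled from the linked exception on failure:

```java
import javax.jms.Connection;
import javax.jms.JMSException;

import com.ibm.mq.MQException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class QmgrAvailabilityCheck {

    // Returns true if a connection to the queue manager can be established.
    // Host, port, channel and queue manager names below are placeholders.
    public static boolean canConnect() {
        try {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setHostName("mqhost.example.com");       // placeholder host
            cf.setPort(1414);                           // placeholder port
            cf.setQueueManager("QM1");                  // placeholder queue manager
            cf.setChannel("APP.SVRCONN");               // placeholder channel
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            Connection conn = cf.createConnection();    // the actual availability test
            conn.start();
            conn.close();
            return true;
        } catch (JMSException e) {
            // The linked exception usually carries the MQ reason code, e.g.
            // 2059 MQRC_Q_MGR_NOT_AVAILABLE or 2538 MQRC_HOST_NOT_AVAILABLE.
            Throwable linked = e.getLinkedException();
            if (linked instanceof MQException) {
                System.err.println("MQ reason code: " + ((MQException) linked).reasonCode);
            } else {
                System.err.println("Connection failed: " + e.getMessage());
            }
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Queue manager reachable: " + canConnect());
    }
}
```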

SpringBoot MQ web app stops consuming messages

I noticed a strange problem in my web app.
The web app is hosted on AWS, is built with the Spring Boot framework, and its main responsibility is consuming messages from MQ. Messages are consumed one by one; there is no concurrency.
I am using the MQQueueConnectionFactory class to prepare my connection,
and I set parameters such as:
host
port
queue manager
channel
int property
client reconnect options
and all ssl required parameters.
Everything seems to work well until, at some point, messages from MQ are no longer consumed. I can see messages waiting to be consumed, but the web app does not consume them. After the web app is restarted, everything works fine again. I think it may be a problem with the connection, which seems to break after some time (several days). I have set the reconnect options on the connection factory class, and at this point I do not know what else to do to fix it.
Have you ever met a similar problem? Do you have any clues?
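For reference, a minimal sketch of the kind of factory configuration described above, with hypothetical connection details; registering an ExceptionListener is one way to make a silently broken connection show up in the logs rather than just stalling:

```java
import javax.jms.JMSException;
import javax.jms.QueueConnection;

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqConnectionSetup {

    public static QueueConnection connect() throws JMSException {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("mqhost.example.com");                 // placeholder host
        cf.setPort(1414);                                     // placeholder port
        cf.setQueueManager("QM1");                            // placeholder queue manager
        cf.setChannel("APP.SVRCONN");                         // placeholder channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Automatic client reconnect, as mentioned in the question.
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
        cf.setClientReconnectTimeout(1800);                   // seconds; placeholder value
        // SSL parameters would also be set here, e.g. cf.setSSLCipherSuite(...).

        QueueConnection conn = cf.createQueueConnection();
        // Surfaces connection failures that would otherwise go unnoticed.
        conn.setExceptionListener(e ->
                System.err.println("JMS connection problem: " + e.getMessage()));
        conn.start();
        return conn;
    }
}
```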

ActiveMQ crashes after some time

I am using ActiveMQ 5.10.0. I have many consumers that connect through STOMP connections to ActiveMQ; about 2000 consumers, each connecting to a unique queue. My problem is that ActiveMQ always crashes/hangs after a couple of hours, and the log does not show any error, which makes it difficult to trace where the problem is. As far as I can tell, new consumers cannot create connections and subscriptions to queues after ActiveMQ has crashed, and I cannot open the web console either. Is there any way to improve the performance of ActiveMQ so it can handle many connections, or any other performance tuning I could do?
In order to debug the exact issue, try the following:
Enable ActiveMQ debug logs.
Add a TransportListener to the ActiveMQ connection/connection factory, logging connection interrupt, resume and exception events (see the sketch after this answer).
Refer to http://activemq.apache.org/maven/apidocs/org/apache/activemq/transport/TransportListener.html
JMX monitoring via JConsole or JVisualVM, plus a thread dump, may also help in debugging this.
We faced a similar issue in a production environment, where producers were able to produce but consumers stopped consuming after 24-48 hours of everything running fine. Restarting ActiveMQ led to the consumers consuming messages again. We have not found the exact cause/fix yet, but recently added the debugging steps above.
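For illustration, a rough sketch of wiring a TransportListener onto an ActiveMQ connection; the broker URL is a placeholder:

```java
import java.io.IOException;

import javax.jms.JMSException;

import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class MonitoredConnection {

    public static ActiveMQConnection connect() throws JMSException {
        // Placeholder broker URL.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://broker.example.com:61616)");
        ActiveMQConnection connection = (ActiveMQConnection) factory.createConnection();

        connection.addTransportListener(new TransportListener() {
            @Override
            public void onCommand(Object command) {
                // Called for every inbound command; usually left quiet.
            }

            @Override
            public void onException(IOException error) {
                System.err.println("Transport exception: " + error);
            }

            @Override
            public void transportInterupted() {   // spelling as in the ActiveMQ API
                System.err.println("Transport interrupted");
            }

            @Override
            public void transportResumed() {
                System.err.println("Transport resumed");
            }
        });

        connection.start();
        return connection;
    }
}
```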

AMQ9504: A protocol error was detected for channel

I am unable to connect remotely from WebSphere Application Server to the Queue Manager in WebSphere MQ. However, it does connect to the Queue Manager from a WAS that is installed on the same machine. I am using version 7.5 of WebSphere MQ and version 7.0 of WebSphere Application Server.
While attempting to connect WAS remotely to the Queue Manager, the following error messages were logged.
Error Message from WebSphere MQ:
1/30/2013 21:12:09 - Process(3624.6) User(MUSR_MQADMIN)
Program(amqrmppa.exe)
Host(KHILT-269) Installation(Installation1)
VRMF(7.5.0.0) QMgr(QM.TEST)
AMQ9504: A protocol error was detected for channel 'TEST_CHANNEL'. EXPLANATION: During communications with the
remote queue manager, the channel program detected a protocol error.
The failure type was 11 with associated data of 0. ACTION: Contact the
systems administrator who should examine the error logs to determine
the cause of the failure.
Error Message at WebSphere Application Server:
A connection could not be made to WebSphere MQ for the following
reason: CC=2;RC=2009
As can be seen from the logs, I have created the Queue Manager as QM.TEST and the channel as TEST_CHANNEL. The listener port defined for the Queue Manager is 1417, with protocol TCP.
I did a lot of googling but did not find any appropriate solution. I would appreciate any help in this regard.
Thanks in adv, KAmeer
I had a similar issue where I have WAS 7 and WMQ 7.5. I was able to make a connection to my existing WMQ 7.0 queue manager but not to my new WMQ 7.5 queue manager. Apparently there was a change to the WMQ components bundled with WAS 7 after the initial release 7.0.0.0. After updating the resource adapter I was able to make a successful connection to both queue managers.
The queue manager generates a protocol error and terminates the connection immediately when it receives an unexpected TSH flow from the client; as a result the client receives the 2009 error. Technically, a lower-level MQ client will be able to communicate with a higher-version MQ queue manager and vice versa, unless there are known restrictions and/or there is an MQ defect/APAR. The error message indicates the queue manager is running at MQ 7500, which is the MQ 7.5 base version. It is recommended to upgrade the queue manager to the latest fix pack available to rule out any known problems. You could also try disabling shared conversations on the SVRCONN channel (i.e. setting SHARECNV to 0) and check whether that works around the problem until it is resolved.
Open a PMR with IBM as this sounds like a bug.
The cause of this is that the MQ 7 client cannot talk to MQ 7.5; the client needs to use the MQ 7.5 jar files.
I had this problem. In my case it was the MQ library that was doing an MQGET with an unlimited wait, so the library was blocked on the MQGET while I called kill, generated an event and tried to disconnect while the get was still running. As MQGET does not support being unblocked via a signal, I had to change the code not to wait indefinitely on the get and to add some flags to the kill handling, so the app could detect that it was time to die when it returned from the get.
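For illustration, a minimal sketch of that bounded-wait pattern using the IBM MQ classes for Java; the queue manager name, queue name and shutdown flag are hypothetical, but the idea is to replace an unlimited MQGET wait with a short wait interval and re-check a stop flag on each pass:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class BoundedGetLoop {

    // Set from a shutdown hook instead of killing the process mid-MQGET.
    private static final AtomicBoolean stopRequested = new AtomicBoolean(false);

    public static void main(String[] args) throws Exception {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> stopRequested.set(true)));

        MQQueueManager qmgr = new MQQueueManager("QM1");           // placeholder qmgr name
        MQQueue queue = qmgr.accessQueue("APP.IN",                 // placeholder queue name
                CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 5000;                                   // wait at most 5 seconds

        while (!stopRequested.get()) {
            MQMessage msg = new MQMessage();
            try {
                queue.get(msg, gmo);
                // ... process the message ...
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                    continue;   // nothing arrived within the wait interval; re-check the flag
                }
                throw e;        // anything else is a real error
            }
        }

        queue.close();
        qmgr.disconnect();
    }
}
```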

NServiceBus DBMS connection timeout

I use NServiceBus with Oracle Queues (OAQ) instead of MSMQ.
I have a problem working with a DBMS server that is shut down every day at the same time.
In particular, when my NServiceBus host can't get a DBMS connection, it starts logging errors.
When the DBMS is restarted, my host sometimes recovers and sometimes does not, seemingly at random. However, after restarting my host everything is OK.
Another detail is that when my NServiceBus host can't recover, it logs a 'connection timeout' message every 15 seconds.
What is the behavior of NServiceBus when it is reading from a queue and the DBMS crashes? What could I do to solve this problem?
thank you,
R
I'm afraid the problem you're facing is the result of the design of your system. By having the queues in the DB, when the DB becomes unavailable, so do the queues. NServiceBus assumes that it is always able to communicate with its queues, as is the case when using a distributed/federated queuing system like MSMQ.
You can look at what some people in the community have done to combat this same problem when they were using IBM MQ (http://code.google.com/p/nservicebuswmq/) - ultimately falling back to MSMQ under those conditions and then syncing back up with MQ when it came back online.
