MQJMS2000 Failed to close MQ queue in Mule - jms

I have a flow in Mule which puts an order message on a WMQ queue. I am using the JMS connector to connect to the WMQ queue. This WMQ queue belongs to the target system, which has a flow that pulls messages from it. When a burst of messages hits this WMQ queue, one of the messages gets cut in half. If I check my logs, I can see the warning "WARN org.mule.transport.jms.JmsConnector - Failed to close jms message producer: MQJMS2000: failed to close MQ queue". The target system is using WMQ version 6.0.2.3. The IBM support documentation says: "JMS attempted to close a WebSphere MQ queue, but encountered an error. The queue may already be closed, or another thread may be performing an MQGET while close() is called". What can be done to resolve this?
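Mule's JMS connector ultimately drives an ordinary JMS producer against WMQ, and MQJMS2000 is reported when the producer close fails on the underlying MQ queue handle. A minimal hand-rolled sketch of that put-and-close sequence (host, port, queue manager, channel and queue names here are hypothetical) shows where the failing close sits:

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;

public class OrderPutter {
    public static void main(String[] args) throws JMSException {
        // Hypothetical connection details; Mule configures these on the connector.
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        cf.setHostName("target-host");
        cf.setPort(1414);
        cf.setQueueManager("TARGET.QM");
        cf.setChannel("SYSTEM.DEF.SVRCONN");

        Connection connection = cf.createConnection();
        Session session = null;
        MessageProducer producer = null;
        try {
            connection.start();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("ORDER.QUEUE");
            producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("<order>...</order>");
            producer.send(message);
        } finally {
            // MQJMS2000 is reported if this close fails, e.g. because the
            // underlying MQ queue handle is already closed or another thread
            // is performing an MQGET while close() is called.
            if (producer != null) producer.close();
            if (session != null) session.close();
            connection.close();
        }
    }
}
```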

Related

Websphere MQ XMS Event Poll Time

I use the WebSphere MQ XMS .NET infrastructure for asynchronous message listening. My problem is that sometimes there are messages in the queue but XMS cannot read them in time; it waits 1, 2... up to 5 minutes and only then gets the message. Sometimes I get the message immediately, sometimes not. Is there any configuration on XMS for this, something like an event poll interval time?
You are using XMS .NET version 8.0.0.8. I am not sure what value you have set for the XMSC.WMQ_PROVIDER_VERSION property. The default value is "unspecified" as documented here.
Update:
Apologies. I just checked the documentation and IBM.XMS.XMSC.WMQ_POLLING_INTERVAL is valid for MQ 7 and above.
However, the XMSC.RTT_BROKER_PING_INTERVAL property is not valid when connecting to an MQ queue manager. It is valid only for the Real Time Transport of Message Broker, and RTT is no longer supported.
Are you connecting to an MQ v6 queue manager by any chance?
I suggest you do not set the IBM.XMS.XMSC.WMQ_POLLING_INTERVAL property. Messages should be delivered as soon as they arrive on the queue and the application is ready to receive.
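The question is about XMS .NET, so the following is only an analogue: a minimal Java JMS sketch (hypothetical queue manager and queue names, client connection details omitted) of the listener-driven model the answer describes, where the provider pushes messages as they arrive and there is no application-level polling interval to tune:

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

import com.ibm.mq.jms.MQQueueConnectionFactory;

public class AsyncListener {
    public static void main(String[] args) throws JMSException, InterruptedException {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setQueueManager("QM1");   // hypothetical queue manager name

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("IN.QUEUE");   // hypothetical queue name
        MessageConsumer consumer = session.createConsumer(queue);

        // With an async listener the provider delivers each message to onMessage()
        // as soon as it arrives; the application does not poll.
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                System.out.println("Received: " + message);
            }
        });
        connection.start();

        Thread.sleep(60_000);   // keep the process alive while messages arrive
        connection.close();
    }
}
```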

WebSphere MQ Cluster Controller

Is it possible in WAS, or on the MQ side, to only ever have one active connection to a queue and keep the other connections passive?
That way I can avoid competing consumers for a queue, but if the active consumer fails, another consumer would still be allowed to connect. Like a heartbeat monitor for consumer health.
Scenario:
Multiple WAS Server Consumers in a cluster
Only 1 MQ Queue Manager
Require only 1 active consumer due to message processing correlation. Order of messages is not important.
Is there any way to send messages to only one active consumer while keeping the other consumers passive?
I thought there might be something like a max-open-input-count property on a queue, but I did not see one.

AMQ9504: A protocol error was detected for channel

I'm unable to connect remotely from WebSphere Application Server to a queue manager in WebSphere MQ. However, WAS connects to the queue manager fine when it is installed on the same machine. I'm using version 7.5 of WebSphere MQ and version 7.0 of WebSphere Application Server.
While attempting to connect WAS remotely to the queue manager, the following error messages were logged.
Error Message from WebSphere MQ:
1/30/2013 21:12:09 - Process(3624.6) User(MUSR_MQADMIN)
Program(amqrmppa.exe)
Host(KHILT-269) Installation(Installation1)
VRMF(7.5.0.0) QMgr(QM.TEST)
AMQ9504: A protocol error was detected for channel 'TEST_CHANNEL'. EXPLANATION: During communications with the
remote queue manager, the channel program detected a protocol error.
The failure type was 11 with associated data of 0. ACTION: Contact the
systems administrator who should examine the error logs to determine
the cause of the failure.
Error Message at WebSphere Application Server:
A connection could not be made to WebSphere MQ for the following
reason: CC=2;RC=2009
As can be seen from the logs, I have created the queue manager as QM.TEST and the channel as TEST_CHANNEL. The listener port defined for the queue manager is 1417, with protocol TCP.
I did a lot of googling but didn't find an appropriate solution. I'd appreciate any help in this regard.
Thanks in advance, KAmeer
I had a similar issue where I have WAS 7 and WMQ 7.5. I was able to make a connection to my existing WMQ 7.0 queue manager but not my new WMQ 7.5 queue manager. Apparently there was a change to the WMQ components bundled with WAS 7 after the initial 7.0.0.0 release. After updating the resource adapter I was able to make a successful connection to both queue managers.
The queue manager generates a protocol error and terminates the connection immediately when it receives an unexpected TSH flow from the client; as a result the client receives the 2009 error. Technically, a lower-level MQ client is able to communicate with a higher-version MQ queue manager and vice versa, unless there are known restrictions and/or an MQ defect/APAR. The error message indicates the queue manager is running MQ 7.5.0.0, which is the base 7.5 version. It is recommended to upgrade the queue manager to the latest fix pack available to rule out any known problems. You could also try disabling shared conversations on the SVRCONN channel (i.e. setting SHARECNV to 0) and check whether that works around the problem until it is resolved.
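If you want to see the MQ detail behind the generic "CC=2;RC=2009" on the application side, the linked exception of the JMSException carries it. A minimal standalone sketch, reusing the queue manager, channel and port from the question (the hostname is hypothetical):

```java
import javax.jms.Connection;
import javax.jms.JMSException;

import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;

public class ConnectionCheck {
    public static void main(String[] args) {
        try {
            MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
            cf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
            cf.setHostName("remote-host");   // hypothetical hostname
            cf.setPort(1417);
            cf.setQueueManager("QM.TEST");
            cf.setChannel("TEST_CHANNEL");

            Connection connection = cf.createConnection();
            connection.start();
            System.out.println("Connected OK");
            connection.close();
        } catch (JMSException e) {
            // The generic JMS error wraps the MQ completion/reason code
            // (e.g. reason code 2009 = MQRC_CONNECTION_BROKEN).
            System.err.println("JMS error: " + e.getMessage());
            Exception linked = e.getLinkedException();
            if (linked != null) {
                System.err.println("Linked MQ exception: " + linked);
            }
        }
    }
}
```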
Open a PMR with IBM as this sounds like a bug.
The cause of this is that an MQ 7 client cannot talk to MQ 7.5; the client needs to use the MQ 7.5 jar files.
I had this problem. In my case it was the MQ library that was doing an MQGET with an infinite wait, so the library was blocked on the MQGET when I issued the kill; this generated an event and tried to disconnect while the get was still running. As MQGET does not support being unblocked via a signal, I had to change the code so that it does not wait indefinitely on the get, and add some flags to the kill handling so the app could detect that it was time to shut down when it returned from the get.
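A minimal sketch of that change using the MQ classes for Java (queue manager and queue names, and the stop flag, are hypothetical): instead of a single MQGET with an unlimited wait, loop on a bounded wait interval and re-check a shutdown flag on every pass:

```java
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class BoundedGetLoop {
    // Set by the shutdown handling (e.g. a kill/signal hook) instead of
    // trying to interrupt a blocked MQGET.
    private static volatile boolean stopRequested = false;

    public static void main(String[] args) throws MQException {
        MQQueueManager qMgr = new MQQueueManager("QM1");      // hypothetical QMgr
        MQQueue queue = qMgr.accessQueue("IN.QUEUE",          // hypothetical queue
                CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 5000;   // wait at most 5 seconds per MQGET

        while (!stopRequested) {
            MQMessage message = new MQMessage();
            try {
                queue.get(message, gmo);
                // ... process the message ...
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                    continue;   // wait expired; loop round and re-check the stop flag
                }
                throw e;
            }
        }
        queue.close();
        qMgr.disconnect();
    }
}
```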

jms order of message delivery with high availability

I have set up a uniform distributed queue (UDQ) with WebLogic Server 12c. I am trying to achieve ordered delivery and high availability with the JMS distributed queue. In my prototype deployment I have two managed servers in the cluster, say managed_server1 and managed_server2. Each managed server hosts a JMS server, namely jms server1 and jms server2. I have configured the JMS servers with a JDBC persistent store. I have enabled server affinity.
I have a producer running, such as java queuproducer t3://managed_server1. I send out 4 messages. From the WebLogic monitoring console I see there are 4 messages in the queue, since there are no consumers on the queue yet.
Now I shut down managed_server1.
Bring up a consumer to listen on java queuconsumer t3://managed_server2. This consumer cannot consume messages, since the producer sent all the messages to jms server1, and it is down.
Bring up managed_server1 and start a consumer to listen on t3://managed_server1; I can get all the messages.
Here is my problem: if managed_server1 went down and never came back up, do I lose all my messages? Also, if there is another producer sending messages via java queuproducer t3://managed_server2, then the ordering of messages by time across these producers is not guaranteed.
I am a little lost; am I missing something? Can unit-of-order help me overcome this? Or should I use a distributed topic instead of a distributed queue, where all the JMS servers receive all the messages from producers? But there is only one consumer in my application, and if the JMS server where my consumer is listening fails, then when I switch over to the other JMS server I might start getting messages from the beginning, not from where I left off.
Any suggestions regarding the same will be helpful.
Good question!
"Here is my problem: if managed_server1 went down and never came back up, do I lose all my messages?"
Ans - No, you do not lose all your messages; they are stored in the JDBC store configured for the JMS server deployed on managed_server1. If you want the messages sent to managed_server1 to be consumed from managed_server2, you need to configure JMS migration.
"Also, if there is another producer sending messages via java queuproducer t3://managed_server2, then the ordering of messages by time across these producers is not guaranteed. Can unit-of-order help me to overcome this?"
Ans - If you want the messages to be consumed strictly in a certain order, then you will have to make use of unit-of-order (UOO). When messages are sent using UOO, they are sent to one of the several UDQ member destinations; if that destination fails midway and migration is enabled, the messages are migrated to the next UDQ member, and new messages for that UOO are also delivered to the new destination.
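As a rough illustration, here is a minimal producer-side sketch of stamping a unit-of-order using the WebLogic JMS extensions (the JNDI names and the UOO name are hypothetical; UOO can also be configured on the connection factory or destination instead):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

import weblogic.jms.extensions.WLMessageProducer;

public class UooProducer {
    public static void main(String[] args) throws Exception {
        // Assumes the WebLogic initial context factory and t3:// provider URL
        // are supplied via jndi.properties or system properties.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // hypothetical JNDI name
        Destination udq = (Destination) ctx.lookup("jms/MyDistributedQueue");             // hypothetical JNDI name

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(udq);

        // All messages carrying the same unit-of-order name are routed to the
        // same UDQ member and consumed strictly in the order they were sent.
        ((WLMessageProducer) producer).setUnitOfOrder("ORDER-12345");

        TextMessage message = session.createTextMessage("order line 1");
        producer.send(message);

        producer.close();
        session.close();
        connection.close();
    }
}
```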
Useful links -
http://www.youtube.com/watch?v=B9J7q5NbXag
http://www.youtube.com/watch?v=_W3EJ8p35lI
Hope this helps.

Where is a message under syncpoint control stored prior to COMMIT?

With WebSphere MQ, where is a message addressed to a remote queue manager and PUT under syncpoint control stored until MQCMIT is issued?
Messages that are addressed to a remote queue manager resolve to a transmit queue. Which transmit queue they resolve to depends on how the message will be sent to the remote QMgr. The message will resolve locally to either a user-defined transmit queue for a SDR or SVR channel, or it will resolve to SYSTEM.CLUSTER.TRANSMIT.QUEUE for a clustered channel.
For any message that is put under syncpoint, the message is written to the transaction logs and, if the message is persistent, to the queue file. The queue depth increases to reflect that the message is in the queue but the message is not made available to other programs (such as the channel agent) until the COMMIT occurs.
So if your message is going to a clustered queue manager and you PUT under syncpoint, you will see the depth of the cluster transmit queue increase. At that point the message is at least in the transaction log and possibly also written to the queue file. As soon as the message is committed, it becomes available to the channel.
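A minimal sketch of that behaviour using the MQ classes for Java (queue manager and queue names are hypothetical): the message is put under syncpoint and only becomes available to the channel after the commit:

```java
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SyncpointPut {
    public static void main(String[] args) throws Exception {
        MQQueueManager qMgr = new MQQueueManager("QM1");            // hypothetical QMgr
        // A remote or clustered queue; the put actually resolves to a transmit queue.
        MQQueue queue = qMgr.accessQueue("REMOTE.ORDER.QUEUE",      // hypothetical queue
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage message = new MQMessage();
        message.writeString("hello");

        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_SYNCPOINT;   // put inside a unit of work

        queue.put(message, pmo);
        // At this point the transmit queue depth increases, but the message is
        // not yet available to the channel agent.

        qMgr.commit();   // MQCMIT: the message now becomes available to the channel

        queue.close();
        qMgr.disconnect();
    }
}
```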
