com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2007: Failed to send a message to destination 'Queue Name'.
JMS attempted to perform an MQPUT or MQPUT1; however WebSphere MQ reported an error.
Use the linked exception to determine the cause of this error.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:498)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:216)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer.checkJmqiCallSuccess(WMQMessageProducer.java:1086)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer.checkJmqiCallSuccess(WMQMessageProducer.java:1044)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer.access$800(WMQMessageProducer.java:71)
Is 'Queue Name' the queue you are trying to put a message to? That queue does not exist on the queue manager your application is connected to. A queue must exist before you can put messages to it. Create the queue (or use an existing one) on your queue manager and try again.
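For illustration, here is a minimal client-mode put that fails with exactly this exception when the queue is not defined. All connection details (host, port, channel, queue manager, and queue name) are placeholders; adjust them for your environment.

import javax.jms.*;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class PutOneMessage {
    public static void main(String[] args) throws JMSException {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("mqhost.example.com");      // placeholder host
        cf.setPort(1414);                          // placeholder port
        cf.setQueueManager("QM1");                 // placeholder queue manager
        cf.setChannel("SYSTEM.DEF.SVRCONN");       // placeholder channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // createQueue() does NOT define the queue on the queue manager;
            // it only builds a local reference. The queue itself must exist.
            Destination dest = session.createQueue("ORDERS.IN");
            MessageProducer producer = session.createProducer(dest);
            producer.send(session.createTextMessage("hello"));
        } catch (InvalidDestinationException ide) {
            // Surfaces as DetailedInvalidDestinationException (JMSWMQ2007)
            // when the queue does not exist on the connected queue manager.
            System.err.println("Queue not defined on the queue manager: " + ide);
        } finally {
            conn.close();
        }
    }
}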
I have a flow in Mule which puts order messages on a WMQ queue, using the JMS connector to connect. This queue lives on the target system, which has a flow that pulls messages from it. When a burst of messages hits this WMQ queue, the occasional message gets cut in half. In my logs I can see the warning "WARN org.mule.transport.jms.JmsConnector - Failed to close jms message producer: MQJMS2000: failed to close MQ queue". The target system is using WMQ version 6.0.2.3. The IBM support documentation says "JMS attempted to close a WebSphere MQ queue, but encountered an error. The queue may already be closed, or another thread may be performing an MQGET while close() is called". What can be done to resolve this?
I'm using WAS 7 and WebSphere MQ 6 for a JMS application in Java.
I'm facing this error while connecting to the queue.
com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ0018: Failed to connect to queue manager 'Test_QManager' with connection mode 'Client' and host name '172.21.136.72'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
Caused by: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2063' ('MQRC_SECURITY_ERROR').
Any help, please.
As a general rule, the most detail for any security error is provided at the queue manager. The reason for this is that the administrator needs as much information as possible but an attacker should get as little information as possible.
This gives us a great diagnostic tool for this kind of error. When at the client you get a very sparse "security error" with little explanation, look at the queue manager's logs. If they record a detailed error at the same time as your client did, then you know the request made it to MQ and why MQ rejected it.
However, if the QMgr logs do not record the error then you know to concentrate your efforts on the client side.
If this was an authorization error, you would get back a 2035. A 2063 has something to do with security but not authorization. That leaves things like the client cannot find or open its keystore, or the file permissions on the keystore allow world-read. It might be that the client JSSE provider isn't compatible with MQ.
The recommended diagnostic is to use the sample programs that come with MQ to perform verification tests. If these recreate the problem, it lies in the configuration or the environment. If they work, then the issue is likely in the code, app server config, or managed objects. Turning on client-side trace should help tremendously; just remember to disable it afterwards.
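As a practical illustration of "find the real reason code": walk the JMS linked-exception chain down to the underlying com.ibm.mq.MQException and print its completion and reason codes. This is a sketch; the class and method names are mine, not part of the MQ API.

import javax.jms.JMSException;
import com.ibm.mq.MQException;

public final class MqReasonDumper {
    private MqReasonDumper() {}

    // Walks the linked-exception chain and prints any MQ codes it finds.
    public static void dump(JMSException top) {
        Throwable t = top;
        while (t != null) {
            if (t instanceof MQException) {
                MQException mqe = (MQException) t;
                // 2035 = MQRC_NOT_AUTHORIZED (an authorization failure);
                // 2063 = MQRC_SECURITY_ERROR (keystore, permissions, JSSE, ...)
                System.err.println("MQ compcode " + mqe.completionCode
                        + ", reason " + mqe.reasonCode);
            } else {
                System.err.println(t.getClass().getName() + ": " + t.getMessage());
            }
            t = (t instanceof JMSException)
                    ? ((JMSException) t).getLinkedException()
                    : t.getCause();
        }
    }
}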
I've a question about the DEADQ in MQ. I know the DEADQ is used when a message cannot be delivered to the target queue, for example when the queue is full or put-inhibited. But if a client application connects to the QMGR through a SVRCONN channel and the target queue is full at that moment, will the message sent by the client application go to the DEADQ, or will the put operation just return a failure saying the queue is full?
If it works as the latter, does that mean the DEADQ is not used in client-server mode, i.e. through a SVRCONN channel?
Thanks
The DLQ is used by QMgr-to-QMgr channels because at that point the message has been entrusted to WMQ to deliver and/or persist as necessary. There is no option to directly notify the sending application in the API call that something went wrong. The best the QMgr can do is send back a report message, if the app has requested one and specified a reply-to queue.
When an application is connected over a client channel, the QMgr can tell the application during the PUT API call "hey, the destination queue is full," and let the application decide the appropriate course of action. Usually that course of action does NOT include putting the message onto the DLQ. In fact, routing this type of message to a secondary queue is VERY unusual. Conventional wisdom says it's better for the app to raise an alert than to provision overflow queues.
The one exception to this is when using JMS classes. These will move a poison message (one where the backout count exceeds the BOTHRESH value on the queue) to a backout queue. If the backout queue is not specified or the app is not authorized to it or if it is full, the app will then try to use the DLQ. Again, allowing an app to directly put messages to the DLQ is very unusual and not considered good practice, but it can be done.
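To make the "let the application decide" idea concrete, here is a sketch of a client-side put that detects a full destination queue and raises an alert instead of provisioning an overflow queue. It assumes the com.ibm.mq.constants.CMQC constants class shipped with the v7+ MQ classes (on older clients, compare against the literal 2053), and alert() is a hypothetical stand-in for your monitoring hook.

import javax.jms.*;
import com.ibm.mq.MQException;
import com.ibm.mq.constants.CMQC;

public class GuardedSender {
    public static void sendOrAlert(Session session, MessageProducer producer,
                                   String payload) throws JMSException {
        try {
            producer.send(session.createTextMessage(payload));
        } catch (JMSException e) {
            Exception linked = e.getLinkedException();
            if (linked instanceof MQException
                    && ((MQException) linked).reasonCode == CMQC.MQRC_Q_FULL) {
                alert("Destination queue is full; message was NOT delivered");
            }
            throw e;   // let the caller decide whether to retry or park the message
        }
    }

    private static void alert(String msg) {
        System.err.println("ALERT: " + msg);   // stand-in for a real alerting hook
    }
}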
I was wondering if you can help me solve the following problem.
After a while with no messages being received, the Queue Manager goes to 'sleep', and it stays that way unless you use IBM WebSphere MQ Explorer to 'start the queue' with the relevant command.
On the other hand, if you send a message and expect a response, it fails with 'Cannot connect'.
Then, if you send the same message again, expecting the response, I've noticed the Queue Manager wakes up.
So to summarize, my question is:
Does anyone know of a command to 'wake up' the Queue Manager before sending an actual message (as above)?
Thanks in Advance,
An IBM MQ queue manager does not go to sleep. If the queue manager is running, then it's awake; there is no 'wake up'. There must be some reason the queue manager shut down. Check the logs in the "errors" folder.
Can you please explain what you mean by 'start the queue'? There is nothing like that; there is only starting the queue manager.
What MQ reason code are you getting when it says "cannot connect"?
OK, so I spoke to the admin of the MQ service, and there is a parameter they can set (some timeout, I think) to 0 so that it never 'sleeps'.
This will fix the problem.
With WebSphere MQ, where is a message addressed to a remote queue manager and PUT under syncpoint control stored until MQCMIT is issued?
Messages that are addressed to a remote queue manager resolve to a transmit queue. Which transmit queue they resolve to depends on how the message will be sent to the remote QMgr. The message will resolve locally to either a user-defined transmit queue for a SDR or SVR channel, or it will resolve to SYSTEM.CLUSTER.TRANSMIT.QUEUE for a clustered channel.
For any message that is put under syncpoint, the message is written to the transaction logs and, if the message is persistent, to the queue file. The queue depth increases to reflect that the message is in the queue but the message is not made available to other programs (such as the channel agent) until the COMMIT occurs.
So if your message is going to a clustered queue manager and you PUT under syncpoint, you will see the depth of the cluster transmit queue increase. At that point the message is at least in the transaction log and possibly also written to the queue file. As soon as the message is committed, it becomes available to the channel.
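In JMS terms, a PUT under syncpoint is a send on a transacted session, and MQCMIT corresponds to session.commit(). A minimal sketch (the queue name and method name are illustrative):

import javax.jms.*;

public class SyncpointPut {
    public static void putUnderSyncpoint(Connection conn, String payload)
            throws JMSException {
        // 'true' makes the session transacted; the ack-mode argument is
        // then ignored, so SESSION_TRANSACTED is just documentation.
        Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer producer =
                    session.createProducer(session.createQueue("REMOTE.ORDERS"));
            producer.send(session.createTextMessage(payload));
            // At this point the message is in the transaction log (and in the
            // queue file if persistent); the transmit queue depth has increased,
            // but the channel agent cannot see the message yet.
            session.commit();   // MQCMIT: the message becomes available
        } finally {
            session.close();
        }
    }
}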