MQ: Queue is not able to accept a 4 MB message - ibm-mq

The queue is not able to accept messages greater than 4 MB even though MAXMSGL is set to 100 MB on the queue manager, the queue, the DLQ, and the SVRCONN and CLNTCONN channels, and an updated channel table was provided to the application team. This didn't resolve the issue (the queue manager was restarted as well).
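For reference, the changes I made look something like this in runmqsc; the object names here are placeholders, not the actual names from this environment (104857600 bytes = 100 MB, the maximum MQ allows):
ALTER QMGR MAXMSGL(104857600)
ALTER QLOCAL(APP.QUEUE) MAXMSGL(104857600)
ALTER QLOCAL(SYSTEM.DEAD.LETTER.QUEUE) MAXMSGL(104857600)
ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) MAXMSGL(104857600)
ALTER CHANNEL(APP.CLNTCONN) CHLTYPE(CLNTCONN) MAXMSGL(104857600)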
The application team posts the messages using a service/job.
The same job is able to connect and post 4+ MB messages to a different queue manager running on a different server.
MQ Client Version: 7.5.0.2
MQ Server Version: 7.5.0.3
I tried posting a 4+ MB message to the queue using RFHUtil and it succeeded, but sending the message from the application/job still fails.

The issue was found once we enabled trace on the client machine: the application was using the old channel table even though the environment variable entries referred to the new one.
We restarted the MQ client machine and retested the flow with trace enabled; this time it worked as expected and the application/job was able to post messages greater than 4 MB.
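For anyone hitting the same symptom: the client locates the channel table through the MQCHLLIB (directory) and MQCHLTAB (file name) environment variables, so it is worth double-checking both before restarting anything. On Windows the entries look something like the following; the path is just an example:
SET MQCHLLIB=C:\mqm\tables
SET MQCHLTAB=AMQCLCHL.TAB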

Related

Websphere MQ XMS Event Poll Time

I use the WebSphere MQ XMS .NET infrastructure for asynchronous message listening. My problem is that sometimes there are messages in the queue but XMS does not read them promptly; it waits 1, 2... up to 5 minutes and then gets the message. Is there any XMS configuration for this, something like an event poll interval? Sometimes I get the message immediately, sometimes not.
You are using XMS .NET version 8.0.0.8. I am not sure what value you have set for the XMSC.WMQ_PROVIDER_VERSION property. The default value is "unspecified" as documented here.
Update:
Apologies. I just checked the documentation, and IBM.XMS.XMSC.WMQ_POLLING_INTERVAL is valid for MQ 7 and above.
However, the XMSC.RTT_BROKER_PING_INTERVAL property is not valid when connecting to an MQ queue manager. It is valid only for the Real Time Transport of Message Broker, and RTT is no longer supported.
Are you connecting to an MQ v6 queue manager by any chance?
I suggest you do not set the IBM.XMS.XMSC.WMQ_POLLING_INTERVAL property. Messages should be delivered as soon as they arrive in the queue and the application is ready to receive them.

HUGE number of MQGET and MQINQ requests logged against an MQ channel

We have a BatchJob application configured in WebSphere Application Server (8.0.0.7). The application processes requests put on a source MQ queue. Through WAS, the MQ queue is polled to see if there are any new requests available for processing.
We were recently notified by an MQ resource that there is high CPU utilization due to the MQ channel used by our application. Looking at the numbers, the MQGET and MQINQ request counts are humongous. This is not a one-off incident; it has been like this since the day our application was installed. So I believe some configuration in WebSphere is causing this high volume of MQGET and MQINQ requests.
Can somebody give any pointers on which configs need to be checked? I am from the application development side, so I don't have in-depth knowledge of WAS.
Thanks in advance.

More than 4 MB response from IBM MQ in JMeter

I am trying to run some load tests in JMeter by connecting to IBM MQ. It works fine except for one particular message, which gets a response of close to 5 MB.
So here's the setup: I push a message from JMeter to a request queue on IBM MQ. My WAS app picks it up and sends a response to the response queue, where JMeter picks it up. This setup works fine until I send the particular message whose response is close to 5 MB.
JMeter simply doesn't pick up this response, and that in turn holds up all the other messages on the response queue.
I think the default receive size for JMeter is 4 MB; can I change it for the MQ response somewhere in the JMeter properties?
I tried making changes to user.properties and jmeter.properties in the JMeter bin directory, but nothing worked.
The exception thrown is related to MQJMS, so check the linked exception for the MQ reason code returned. If you are connecting to the queue manager using a client mode connection, check the MAXMSGL attribute of the server connection channel you are using. By default MAXMSGL is set to 4 MB; you will need to increase the attribute value to allow larger messages.
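A minimal runmqsc sketch of that check and change, assuming a server connection channel named JMETER.SVRCONN (substitute your actual channel name) and a 10 MB limit:
DISPLAY CHANNEL(JMETER.SVRCONN) MAXMSGL
ALTER CHANNEL(JMETER.SVRCONN) CHLTYPE(SVRCONN) MAXMSGL(10485760)
Keep in mind that MAXMSGL on the queue manager and on the target queue must also be at least as large as the biggest message you expect.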

MQ Input/Output count increasing when DataPower client connects using MQ front-side handler

I am using MQ 7.5.0.2 and a DataPower IDG7 client.
When MQ sends messages to DataPower, DataPower receives those messages using MQ front-side handlers, and it sends messages the same way using the backend URL.
The problem I am facing is that whenever DataPower connects to MQ, the queue Input/Output count increases to 10-20 and stays there, and the handle state is INACTIVE.
When I look at the queue details using the command below, it displays the following:
display qstatus(******) type(handle)
QUEUE(********) TYPE(HANDLE)
APPLDESC(WebSphere MQ Channel)
APPLTAG(WebSphere Datapower MQClient)
APPLTYPE(SYSTEM) BROWSE(NO)
CHANNEL(*****) CONNAME(******)
ASTATE(NONE) HSTATE(INACTIVE)
INPUT(SHARED) INQUIRE(NO)
OUTPUT(NO) PID(25391)
QMURID(0.1149) SET(NO)
TID(54)
URID(XA_FORMATID[] XA_GTRID[] XA_BQUAL[])
URTYPE(QMGR)
Can anyone help me with this? It only clears whenever I restart the queue manager, but I don't want to restart the qmgr every time.
HSTATE(INACTIVE) indicates "No API call from a connection is currently in progress for this object. For a queue, this condition can arise when no MQGET WAIT call is in progress." This is likely to happen if the application (DataPower in this case) opened the queue and then did not issue any API calls on the opened object. PID 25391 - is this an amqrmppa process? Is DataPower expected to consume messages from this queue continuously?
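If you want to map those handles back to their connections without restarting the queue manager, something like this in runmqsc should list them, filtering on the APPLTAG shown in your output:
DISPLAY CONN(*) TYPE(ALL) WHERE(APPLTAG EQ 'WebSphere Datapower MQClient')
An individual connection can then be ended with STOP CONN(connid), although that only clears the symptom; the question of why DataPower holds the handles open still needs answering.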

AMQ9504: A protocol error was detected for channel

I'm unable to connect remotely from WebSphere Application Server to a queue manager on WebSphere MQ. However, it does connect to the queue manager from a WAS that is installed on the same machine. I'm using version 7.5 of WebSphere MQ and version 7.0 of WebSphere Application Server.
While attempting to connect WAS remotely to the queue manager, the following error messages were logged.
Error Message from WebSphere MQ:
1/30/2013 21:12:09 - Process(3624.6) User(MUSR_MQADMIN)
Program(amqrmppa.exe)
Host(KHILT-269) Installation(Installation1)
VRMF(7.5.0.0) QMgr(QM.TEST)
AMQ9504: A protocol error was detected for channel 'TEST_CHANNEL'. EXPLANATION: During communications with the remote queue manager, the channel program detected a protocol error. The failure type was 11 with associated data of 0. ACTION: Contact the systems administrator who should examine the error logs to determine the cause of the failure.
Error Message at WebSphere Application Server:
A connection could not be made to WebSphere MQ for the following reason: CC=2;RC=2009
As can be seen from the logs, I have created the queue manager as QM.TEST and the channel as TEST_CHANNEL. The listener port defined for the queue manager is 1417, with protocol TCP.
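(For reference, a TCP listener on that port would typically be defined and started along these lines in runmqsc; the listener name here is a placeholder:
DEFINE LISTENER(TEST.LISTENER) TRPTYPE(TCP) PORT(1417) CONTROL(QMGR)
START LISTENER(TEST.LISTENER)
)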
I did a lot of googling but didn't find any appropriate solution. I'd appreciate any help in this regard.
Thanks in advance, KAmeer
I had a similar issue where I have WAS 7 and WMQ 7.5. I was able to connect to my existing WMQ 7.0 queue manager but not my new WMQ 7.5 queue manager. Apparently there was a change to the WMQ components bundled with WAS 7 after the initial 7.0.0.0 release. After updating the resource adapter I was able to make a successful connection to both queue managers.
The queue manager generates a protocol error and terminates the connection immediately when it receives an unexpected TSH flow from the client; as a result the client receives the 2009 error. Technically, a lower-level MQ client is able to communicate with a higher-version MQ queue manager and vice versa, unless there are known restrictions and/or an MQ defect/APAR. The error message indicates the queue manager is running at MQ 7500, which is the MQ 7.5 base version. It is recommended to upgrade the queue manager to the latest fix pack available to rule out any known problems. You could also try disabling shared conversations on the SVRCONN channel (i.e. setting SHARECNV to 0) and check whether that works around the problem until it is resolved.
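The suggested workaround would be something along these lines in runmqsc, using the channel name from the error log and assuming TEST_CHANNEL is the SVRCONN channel the client uses:
ALTER CHANNEL(TEST_CHANNEL) CHLTYPE(SVRCONN) SHARECNV(0)
Note that SHARECNV(0) makes the channel behave like a pre-v7 channel, so v7 features such as read-ahead and asynchronous message consumption are not available over it.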
Open a PMR with IBM as this sounds like a bug.
The cause of this is that an MQ 7 client cannot talk to MQ 7.5; the client needs to use the MQ 7.5 jar files.
I had this problem. In my case it was the MQ library doing an MQGET in an infinite loop, so the lib was blocked in the MQGET while I called kill; this generated an event and tried to disconnect while the get was still running. As MQGET does not support unblocking via a signal, I had to change the code not to wait indefinitely on the get, and to add some flags on the kill command so the app could detect that it was time to die when it returned from the get.
