How to avoid MQRC 2033 (NO_MSG_AVAILABLE) - ibm-mq

I have a simple program to process messages from a queue.
My intention is to process all the available messages in the queue and then keep listening to the queue for incoming messages.
I have written the processing part inside an infinite loop because I want it to listen to the queue continuously and process messages.
After it has processed all the messages it tries to get another message (since it is inside an infinite loop), and because there are no messages left it throws an MQRC 2033 NO_MSG_AVAILABLE exception (which is technically correct) and my program exits.
Can someone give me an idea of how to listen to this queue continuously and avoid this exception?

When you execute the MQGET API call, there is an option to have the program wait for messages. You can specify a wait time (in milliseconds) or specify to wait forever. Just make sure that if you have the app wait for more than a few seconds, also specify 'Fail if Quiescing'. This allows the queue manager to be stopped cleanly. Without 'Fail if Quiescing' the administrator will need to issue a preemptive shutdown which can cause problems.
There is a section specifically for this question in the Programmer's Guide, in the Waiting for Messages chapter. Depending on the language you are writing in, the actual value to specify is in the Programmer's Reference, the Using Java manual or the Using .NET manual. Each of these will be visible in the navigation panel when you click the link above.
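For example, using the WebSphere MQ classes for Java, the get loop might look something like the sketch below. This is only an illustration: the listenForever method name and the 5-second wait interval are invented for the example, and the queue is assumed to have been opened for input elsewhere in the program.

import com.ibm.mq.MQC;
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;

// Sketch: poll the queue forever, waiting up to 5 seconds per MQGET.
void listenForever(MQQueue queue) throws MQException {
    MQGetMessageOptions gmo = new MQGetMessageOptions();
    gmo.options = MQC.MQGMO_WAIT | MQC.MQGMO_FAIL_IF_QUIESCING;
    gmo.waitInterval = 5000;   // milliseconds; MQC.MQWI_UNLIMITED waits forever

    while (true) {
        MQMessage msg = new MQMessage();
        try {
            queue.get(msg, gmo);
            // ... process the message here ...
        } catch (MQException e) {
            if (e.reasonCode == MQException.MQRC_NO_MSG_AVAILABLE) {
                continue;      // wait interval expired with no message; keep listening
            }
            throw e;           // anything else (e.g. queue manager quiescing) ends the loop
        }
    }
}

With MQGMO_WAIT the 2033 reason code only shows up when the wait interval expires, so the loop can treat it as "nothing to do yet" rather than as an error.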

Related

Way to automatically clear all applications connected to the queue

We have an environment where MQ acts as an interface between
Websites and Micro Focus. Sometimes a message gets stuck in a queue,
thereby blocking all the communications over that particular queue. If
the queue depth increases greatly, all the communication stops in the
queue manager.
When we check the status of the queue, we see that the Micro Focus process is present there.
Is there a way to automatically clear all applications connected to the queue?
I don't think it's possible to close an application's handle on a given queue, but you could have a script that runs a couple of MQSC commands against the queue manager: first get the connection identifier using the DISPLAY CONN command and then close the connection using the STOP CONN command. You could then set up a trigger on the queue that executes the script once a certain queue depth has been reached.
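As a rough sketch of the MQSC side of that script (the queue name STUCK.QUEUE and the connection identifier are placeholders; the commands would normally be piped into runmqsc):

* Find the connections that have the queue open
DISPLAY CONN(*) TYPE(HANDLE) WHERE(OBJNAME EQ STUCK.QUEUE) ALL
* Note the CONN() identifier in the output, then break that connection
STOP CONN(CONNID-FROM-DISPLAY-OUTPUT)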

WebSphere MQ v7.1 Channels going down

The sender and receiver channels that I have configured between two queue managers (WebSphere MQ v7.1 running on Red Hat Linux) are going down pretty frequently. Any idea why? How can I debug this? Thanks.
Channels are expected to go down. The idea is that they stay active as long as there is traffic and then time out. Assuming they've been configured to trigger, the presence of a message on the XMitQ causes the channel to start up again.
The reason for this is that a triggered channel will generally restart if interrupted by a network failure or other adverse event. However if a channel is configured to stay running 24x7 then the only way it stops is due to one of these adverse events and that increases the likelihood that human intervention will be required to restart the channel. On the other hand, a channel that times out can survive all sorts of nasty network events that occur while it is inactive. Allowing it to time out when not in use thus improves overall reliability of the channel.
So how do you cause a channel to trigger? Make sure the transmission queue contains the TRIGGER, TRIGTYPE, TRIGDATA and INITQ attributes. For example, to define a transmission queue to the JUPITER QMgr:
DEF QL(JUPITER) +
USAGE(XMITQ) +
TRIGGER +
TRIGTYPE(FIRST) +
TRIGDATA('MYQMGR.JUPITER') +
INITQ(SYSTEM.CHANNEL.INITQ) +
REPLACE
The only variable of the bunch is TRIGDATA which contains the name of the channel serving this XMitQ.
Of course, the channel initiator must be running, but in modern versions of WMQ it starts by default (based on the value of the queue manager's SCHINIT attribute) so it generally will in fact be running.
A channel that is in STOPPED state cannot be triggered. By default the STOP CHL command uses STATUS(STOPPED), so most of the time manually stopping a channel prevents triggering. If you want to stop a channel in such a way that it will restart (for example, to test triggering), use the STOP CHL(CHLNAME) STATUS(INACTIVE) command. If the channel is already in STOPPED state, either issue the START CHL command to make it start immediately or use STOP CHL(CHLNAME) STATUS(INACTIVE) to change the status from STOPPED to INACTIVE without starting it.
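For example, using the channel name from the transmission queue definition above:

* Stop the channel but leave it able to trigger again
STOP CHL('MYQMGR.JUPITER') STATUS(INACTIVE)
* Or start a STOPPED channel immediately
START CHL('MYQMGR.JUPITER')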
Once the channels are up, the DISCINT attribute of the channel determines how long it will run without traffic before timing out. The value is in seconds and defaults to 6000, which is 100 minutes. DISCINT, KAINT and HBINT combine to determine when the channel comes down. Note that the TCP specification calls for keepalive to be disabled by default, so if you want to use keepalive on your channels you must enable it in the QMgr's tuning (the KeepAlive setting in the TCP stanza of qm.ini).
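A hedged example of setting these on the sender channel defined earlier (the interval values here are purely illustrative):

ALTER CHANNEL('MYQMGR.JUPITER') CHLTYPE(SDR) +
      DISCINT(1800) +
      HBINT(300) +
      KAINT(AUTO)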
Please see Triggering Channels in the Infocenter for more on the configuration details. Take a look at SupportPac MD0C WebSphere MQ - Keeping Channels Up and Running if you want to know more about the internals and tuning. (The SupportPac is a bit dated but the principles of tuning mostly still apply. Where there are discrepancies, the Infocenter is the authoritative version.)
If you want to keep channels up continuously, set DISCINT(0), but remember that triggering remains the preferred option. Some shops need to minimize response times during the business day and so set DISCINT to a value that allows the channels to time out at night but generally keeps them running all day. If you have triggering set up correctly and the channels still go down before DISCINT expires, you should be able to find the reason in the error logs. These reside in the QMgr's directory under errors. For example, on UNIX/Linux they are in /var/mqm/qmgrs/qmgrname/errors and on Windows the default location is C:\Program Files (x86)\WebSphere MQ\QMgrs\qmgrname\errors. Look for the files named AMQERR??.LOG where ?? = 01, 02 or 03. The logs rotate: 01 is current, 02 is next and so on. If you have a very busy QMgr you need to capture these as soon as the channel goes down or they could roll off.

WebSphere MQ: Message keeps toggling between input queue and backout queue

The logic flow is like this
A message is sent to an input queue
A ProcessorMDB's onMessage() is invoked. Within this method several operations/validations are done
In case of a poison message (a message that the application code cannot handle), a RuntimeException is thrown.
This should roll back the transaction. We are seeing evidence of the rollback in the log file.
There is a backout threshold defined with a backout queue name
Once the threshold is reached, the message is sent to the backout queue.
But immediately it starts going back and forth between the input queue and the backout queue.
We are using the MQMON tool to observe this weird behavior. It continues almost forever, even after the app server (where the MDB is running) is shut down.
We are using WebLogic 10.3.1 and WebSphere MQ 6.0.2.
Any help will be much appreciated; it looks like we are running out of ideas.
This sounds like a syncpoint issue. If the QMgr were to issue a COMMIT when a message is requeued inside of a unit of work it would affect all messages under syncpoint inside of that thread. This would cause serious problems if an application had performed several PUT or GET calls prior to hitting the poison message. Rather than issue a COMMIT outside of the program's control, the QMgr just leaves the message on the backout queue inside the unit of work and waits for the program to issue the COMMIT. This can lead to some unexpected behavior such as what you are seeing where a message lands back on the input queue.
If another message is in the queue behind the "bad" one and it is processed successfully by the same thread, everything works out perfectly. The app issues a COMMIT on the new message and this also affects the poison message on the Backout Queue. However if the thread were to exit uncleanly (without an explicit disconnect or COMMIT) then the transaction is rolled back and the poison message is returned to the input queue.
The usual way of dealing with this is that the next good message (or batch of messages if transactions are batched) in the input queue will force the COMMIT. However in some cases where the owning thread gets no new work (perhaps it was performing a GET by Correlation ID) there is nothing to push the bad message through. In these cases, it is important to make sure that the application issues a COMMIT before ending. One way to do this is to write the code to perform the GET by CORRELID with a wait interval. If the wait interval expires, the application would get a return code of 2033 and then issue a COMMIT before closing the thread. If the reply message is legitimately late for whatever reason, the COMMIT will have no effect. But if the message arrived and had been backed out and requeued, the COMMIT will cause it to stay in the Backout Queue.
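As a hedged sketch of that approach using the WebSphere MQ classes for Java (the getReplyAndCommit name, its signature and the 30-second wait are invented for illustration; the reply queue is assumed to be open for input):

import com.ibm.mq.MQC;
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

// Sketch: wait for the reply by correlation ID and make sure a COMMIT is
// issued even when the wait expires with MQRC 2033.
void getReplyAndCommit(MQQueueManager qMgr, MQQueue replyQueue, byte[] correlId) throws MQException {
    MQGetMessageOptions gmo = new MQGetMessageOptions();
    gmo.options = MQC.MQGMO_WAIT | MQC.MQGMO_SYNCPOINT | MQC.MQGMO_FAIL_IF_QUIESCING;
    gmo.matchOptions = MQC.MQMO_MATCH_CORREL_ID;
    gmo.waitInterval = 30000;              // illustrative 30-second wait

    MQMessage reply = new MQMessage();
    reply.correlationId = correlId;
    try {
        replyQueue.get(reply, gmo);
        // ... process the reply ...
        qMgr.commit();                     // commits this GET and anything requeued in the same UOW
    } catch (MQException e) {
        if (e.reasonCode == MQException.MQRC_NO_MSG_AVAILABLE) {
            qMgr.commit();                 // nothing arrived, but the COMMIT pins any backed-out
                                           // poison message onto the backout queue
        } else {
            qMgr.backout();
            throw e;
        }
    }
}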
One way to see exactly what is going on is to run a trace against the queue in question. You can use the built-in trace function - strmqtrc - which has a few more options in V7 than does the V6 version. However if you want very fine grained control you can use the trace exit in SupportPac MA0W. With MA0W you can see exactly what API calls are made by the program and those made on its behalf.
[EDIT] Updating the response with some info from the PMR:
The following is from the WMQ V7 Infocenter:
MessageConsumers are single threaded below the Session level, and any requeuing of poison messages takes place within the current unit of work. This does not affect the operation of the application, however when poison messages are requeued under a transacted or Client_acknowledge Session, the requeue action itself will not be committed until the current unit of work is committed by the application code or, if appropriate, the application container code.

Hence, if it is important for the customer to have poison messages committed immediately after they are backed out, it is recommended they either make use of the Application Server Facilities (ConnectionConsumer) which can commit the message immediately, or use another mechanism to move poison messages from the queue.
Here is the link to this information in the V6 and V7 Information Centers. Since you are using the V6 client, you would want to refer to the V6 Infocenter. Note that with the V6 client, there is no mention in the Infocenter of ASF being able to commit the poison message immediately, even when using a ConnectionConsumer. The way I read it, this means you probably will need to upgrade to the V7 client to get the behavior you are looking for. I will be interested to see whether the PMR results in a similar recommendation.

Windows Messages in Library Code

I am porting a library to Windows. In a function I need to block on the arrival of a WM_DEVICECHANGE message.
What options are available for doing this? As my code resides in a library, I have little to no information about the current set-up (whether it is a console application or a regular GUI application, whether my code is being run in a spawned thread, and so on). Given that, what is the best way to wait for the arrival of a specific message?
Blocking and receiving Windows messages are mutually incompatible. You get messages by pumping a message loop. Since you cannot rely on the app pumping one, you'll need to do this yourself.
You will need to create a thread. Create a hidden window in that thread, then run the standard message loop. The window procedure for that window can see the WM_DEVICECHANGE message and do whatever it needs to do, within the constraints of running inside a separate thread, such as setting an event to signal that a function should stop blocking.
The message is probably sent using BroadcastSystemMessage(). You could create a hidden top-level window and its window proc would probably get this message. I'm not sure; but that's what I'd try first.

How to check which point is the cause of a problem with MQ?

I use MQ to send/receive messages between my system and another system. Sometimes I find that there is no response message in the response queue, yet the other system has already put a response message into the response queue (according to its log). How can I check where the problem lies, and how can I prove that the message never arrived in my response queue?
In addition, when a message arrives in my queue it is written to a log file.
You can view this in real time using the QStats interface. The MO71 SupportPac is a desktop client that you can configure to connect in a similar way to WebSphere MQ Explorer. One of the options it has is queue statistics. Each time you view the queue stats, WMQ resets them to zero. So the procedure is this:
Start MO71 and browse the queues.
Filter on the one queue of interest.
View the queue stats a couple of times.
You will see them reset to zero.
Now run your test.
View the queue stats again.
If the remote program actually put a message, you will see that the queue now shows one or more messages PUT.
If your program successfully executed a GET of the message, you will see GET counts equal to the number of PUT counts.
If GET and PUT both zero, the remote program never PUT the response message.
There are a few other approaches to this, but this is the easiest. The opposite end of the spectrum is SupportPac MA0W, which will show you every API call against that queue, or by PID, or whatever. It shows all the options, so if a program tries to open the queue with the wrong options (e.g. opening a remote queue for input) it shows that. But MA0W is installed as an exit and requires the QMgr to be bounced, so it's a little invasive.
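If you prefer the command line to MO71, the same counters can be read (and reset) with the MQSC RESET QSTATS command; a quick sketch, where MY.REPLY.QUEUE is a placeholder for your response queue:

RESET QSTATS('MY.REPLY.QUEUE')

The output shows MSGSIN and MSGSOUT since the last reset, but remember that asking for the stats also resets them, so coordinate with any monitoring tools that use the same interface.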
