Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection? - ibm-mq

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?

The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better confirmation than any administrative command, because it also verifies that your application and its security context are correct and able to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if that works, the queue manager is up and running.
Your point about increasing the timer before attempting to reconnect after a failure is well made. It helps no one to hammer the queue manager with rapid, closely spaced connection attempts until it is ready to accept them. Still, anything that tests the availability of the queue manager ultimately needs to connect to it, so, very simply, just connect.
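For illustration, here is a minimal sketch of that approach; it assumes the IBM MQ classes for JMS, and the host, port, channel and queue manager values are placeholders rather than anything from the question:

import javax.jms.JMSException;
import javax.jms.TopicConnection;

import com.ibm.mq.jms.MQTopicConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqAvailabilityCheck {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own.
        MQTopicConnectionFactory cf = new MQTopicConnectionFactory();
        cf.setHostName("mqhost.example.com");
        cf.setPort(1414);
        cf.setQueueManager("QM1");
        cf.setChannel("APP1.SVRCONN");
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        try {
            // The only reliable availability test: actually connect.
            // Depending on queue manager security you may need
            // createTopicConnection(user, password) instead.
            TopicConnection connection = cf.createTopicConnection();
            connection.close();
            System.out.println("Queue manager is up and this application can connect.");
        } catch (JMSException e) {
            // The linked exception normally carries the MQ reason code,
            // e.g. 2059 MQRC_Q_MGR_NOT_AVAILABLE or 2035 MQRC_NOT_AUTHORIZED.
            System.err.println("Connect failed: " + e.getMessage());
            if (e.getLinkedException() != null) {
                System.err.println("Linked exception: " + e.getLinkedException());
            }
            // Back off before retrying rather than looping tightly.
        }
    }
}

If the connect fails, wait (with an increasing back-off, as the question already does) before the next attempt rather than retrying in a tight loop.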

Related

how to limit number of connections to IBM MQ

I have a Spring Boot based messaging app sending/receiving JMS messages to/from IBM MQ queue manager.
Basically, it uses MQConnectionFactory to connect to IBM MQ and a JmsPoolConnectionFactory from messaginghub:pooledjms to provide JMS connection pooling, which was removed from MQConnectionFactory in IBM MQ 7.x.
The app uses two different approaches to work with JMS. The "correct" one runs a JMSListener to receive messages and then sends a response to each message using JmsTemplate.send(). The second, "troubling" approach has the app send requests using JmsTemplate.send() and then wait for the response using JmsTemplate.readByCorrelId() until it is received or times out.
I say troubling because this keeps JMS sessions open longer if the response is delayed and could easily exhaust the IBM MQ connection limit. Unfortunately, I cannot rewrite the app to the first approach at the moment to resolve the issue.
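Roughly, the troubling request/reply pattern looks like this; this is only a simplified sketch, not the real code, and the queue names are placeholders (with Spring's JmsTemplate the wait ends up being a receiveSelected() on the correlation ID):

import java.util.UUID;

import javax.jms.Message;
import javax.jms.TextMessage;

import org.springframework.jms.core.JmsTemplate;

public class RequestReplyByCorrelId {

    private final JmsTemplate jmsTemplate;

    public RequestReplyByCorrelId(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public String sendAndWait(String payload) throws Exception {
        // Placeholder queue names for the sketch.
        String requestQueue = "APP.REQUEST";
        String replyQueue = "APP.REPLY";
        String correlationId = UUID.randomUUID().toString();

        // Send the request with a correlation ID the responder echoes back.
        jmsTemplate.convertAndSend(requestQueue, payload, message -> {
            message.setJMSCorrelationID(correlationId);
            return message;
        });

        // Block until the matching reply arrives or the receive timeout fires.
        // This is what keeps the JMS session (and its connection) busy.
        Message reply = jmsTemplate.receiveSelected(
                replyQueue, "JMSCorrelationID = '" + correlationId + "'");

        return reply instanceof TextMessage ? ((TextMessage) reply).getText() : null;
    }
}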
Now I want to restrict the number of connections in the pool. Of course, the delayed requests will fail, but the IBM MQ connection limit is more important at the moment, so this is kind of appropriate. The problem is that even if I disable the JmsPoolConnectionFactory, it seems that MQConnectionFactory still opens multiple connections to the queue manager.
While profiling the app I see multiple threads named RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12433875[...] created by JMSCCMasterThreadPool, and corresponding connections to the queue manager in MQ Explorer. I wonder why there are so many of them, given that connection pooling was removed from MQConnectionFactory? I would expect it to open and reuse a single connection, but that is not what happens in my test.
Disabling the "troubling" JmsTemplate.readByCorrelId() and leaving only the "correct" approach in the app removes these multiple connections (and the waiting threads, of course).
Replacing JmsPoolConnectionFactory with SingleConnectionFactory has no effect on the issue.
Is there any way to limit those connections? Is it possible to control max threads in the JMSCCMasterThreadPool as a workaround?
Because it affects other applications, your MQ admins probably want you not to exhaust the queue manager's overall connection limit (the MaxChannels and MaxActiveChannels parameters in qm.ini). They can help you by defining an MQ channel used exclusively by your application. With that, they can limit the number of connections your application makes using the MAXINST / MAXINSTC channel parameters. You will get an exception when this limit is reached, which is appropriate, as you say, and other applications will no longer be affected.
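On the application side, the pool cap the question asks about would look roughly like this; a minimal sketch assuming the messaginghub pooled-jms JmsPoolConnectionFactory wrapping an MQConnectionFactory, with placeholder connection details and arbitrary limits:

import javax.jms.ConnectionFactory;

import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class PooledMqConnectionFactoryConfig {

    public static ConnectionFactory pooledConnectionFactory() throws Exception {
        // Placeholder connection details -- substitute your own.
        MQConnectionFactory mqcf = new MQConnectionFactory();
        mqcf.setHostName("mqhost.example.com");
        mqcf.setPort(1414);
        mqcf.setQueueManager("QM1");
        mqcf.setChannel("APP1.SVRCONN");
        mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        // Cap the number of JMS connections the pool will create; further
        // createConnection() calls reuse the pooled connections rather than
        // opening new ones to the queue manager.
        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        pool.setConnectionFactory(mqcf);
        pool.setMaxConnections(5);              // arbitrary example limit
        pool.setMaxSessionsPerConnection(10);   // arbitrary example limit
        return pool;
    }
}

This only bounds what the application requests from the pool; the channel-level MAXINST / MAXINSTC limit described above remains the authoritative guard on the queue manager side.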

WebSphere MQ: "Object is open". How to force the release and reconnect?

I'm migrating a queue from one QM to another. I stopped the application reading the queue, but I don't have control over the application putting to it.
What I want to do is:
Create a new queue with the same name on another QM but shared in the MQ cluster where both QMs belong to.
Install a new application that will read from this new queue.
Remove the old queue so that the putting application starts putting to the new queue, thanks to MQ cluster queue-name resolution.
For this to work, I need to stop the application making the PUT, because it keeps the old queue open (when I try to delete the queue I get the "Object is open" error). However, the application in question cannot easily be stopped, due to some SLA constraints.
I would like to find a command that would force the release of this queue and remove it, thus forcing the client (the application doing the PUT) to reconnect. Or is there any other way to achieve this at runtime?
A queue that has an open handle cannot be deleted. I suggest stopping the application's connection and then deleting the queue before the application makes a new connection. Another option would be to stop the channel instance, delete the queue, and then start the channel instance again. However, for a SVRCONN channel, this would affect other applications if they use the same channel.
Identify the connection using DISPLAY CONN:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.ref.adm.doc/q086140_.htm
Stop the connection using STOP CONN:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.ref.adm.doc/q086790_.htm
If this is not enough (e.g. the application might initiate a new connection before the queue is deleted), then you could try STOP CHL. However, as noted, other applications might be affected, depending on whether they use the same channel or a different one. Moreover, if you are running the channel process in FASTPATH mode (trusted listener), MODE(FORCE) cannot be used.

JMS Listener Not Picking Up Message From the Queue

I am planning to do code change for an existing application which has a JMS listener.
To test whether the listener works on my local server, I deploy the application to my localhost and shut down the other containers that are running the same application.
But my local listener won't pick up any message. It is confirmed that other containers work fine and can pick up and process new messages in the queue.
Can you think of any possible cause of this?
Way too general, too many missing points...but some things to look at:
If the message queue is on a different server, can you ping it from the local machine? It could be that the development environment can't see the production server.
Does netstat -n show the correct port number? You should see a remote connection to the port on which the messaging provider is listening.
Can you verify that the messaging provider sees you as a consumer? I use ActiveMQ; I can look at the management console, dive into a specific queue, and view its active consumers. Most providers will have something similar.
Are you running in an identical environment? Running a listener in a JEE environment where the queue is a JNDI reference can be different from running in a debugger where you need the actual queue name (a standalone test consumer, like the sketch after this list, can help rule that out).
Is any JMS filtering going on, where the selector for your local environment doesn't match what's already on the queue?
Is any transaction manager stuff getting in the way?
Again, just throwing stuff to see what sticks to the wall, but these are the really obvious things.
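On the environment point, a bare-bones standalone consumer can help rule the container out. A minimal sketch, assuming the IBM MQ classes for JMS; every connection detail and the queue name are placeholders:

import javax.jms.Message;
import javax.jms.QueueConnection;
import javax.jms.QueueReceiver;
import javax.jms.QueueSession;
import javax.jms.Session;

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class StandaloneQueueCheck {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own.
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("mqhost.example.com");
        cf.setPort(1414);
        cf.setQueueManager("QM1");
        cf.setChannel("APP1.SVRCONN");
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        QueueConnection connection = cf.createQueueConnection();
        connection.start();
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        // Use the actual queue name here, not the JNDI alias the container uses.
        QueueReceiver receiver = session.createReceiver(session.createQueue("APP.INPUT.QUEUE"));

        // Wait up to 5 seconds; null means nothing was delivered to this consumer.
        Message message = receiver.receive(5000);
        System.out.println(message != null ? "Got: " + message : "No message received");

        connection.close();
    }
}

If this standalone consumer receives messages but the deployed listener does not, the problem is in the container or listener configuration rather than the queue itself.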
Thanks Scott for answering my question.
I finally found that Eclipse had somehow created another container and my listener was deployed to it. That's why I couldn't see it working in my current container.

Can't connect Websphere MQ Queue Manager

I'm a beginner with WebSphere MQ. I was working with MQ 6 and it was working fine, but now I've installed MQ 7.1; I can create a new queue manager, but I can't connect to it and I get the following error:
Do you have any idea about that? Thank you :)
If either the WebSphere MQ Client or Server is installed, you can look up any WebSphere MQ error code using the mqrc command. In this case:
C:\Users\MUSR_MQADMIN>mqrc 2059
2059 0x0000080b MQRC_Q_MGR_NOT_AVAILABLE
A 2059 usually indicates that the listener is not running or the queue manager is down. There's a different error code if the listener is running but the QMgr name is wrong, and another one if the connection is made to the right QMgr but the channel name is wrong. Sometimes you can get a 2059 if the channel was closed on the server side by an exit, but since you didn't mention any exits, I'm assuming in this case that it's a listener problem.
Hopefully by now you are defining a listener object rather than using inetd or the runmqlsr command. Defining an object and setting it to start and stop under QMgr control is the most reliable way to configure it.
Once you get past the 2059, you should be aware that as of WMQ V7.1, the queue managers are secure by default and won't accept any remote client connections unless you explicitly authorize them. This is the opposite of the behavior of V6 where on a newly defined queue manager running a listener, anyone with a TCP route to it could administer it and remotely execute OS code as the mqm user. So I expect that the next problem you run into will be 2035 errors.
I've been told this means more work for the WMQ administrator. The only case in which that's true is if the V6 or earlier queue manager had been configured without security. If the tasks to secure a V7.0 QMgr are compared with the tasks to provision access on a V7.1 or higher QMgr, provisioning access turns out to be easier. However, if you liked the V7.0 behavior, you can always alter the QMgr to disable CHLAUTH rules. Needless to say, leaving security enabled is highly encouraged.
To debug security errors, alter the QMgr to enable authorization events using the runmqsc command ALTER QMGR AUTHOREV(ENABLED). Next, download and install SupportPac MS0P into WebSphere MQ Explorer. Then when you do get a security error, use WebSphere MQ Explorer to look at the queue. Right-click on the queue and select the option to parse the event messages. This will tell you in excruciating detail all the information you need to debug the authorization error.
Finally, if you wish to read up on the new security features, go to t-rob.net/links and look at the conference presentations there. There are also some articles indexed if you scroll down.
In the screenshot, I see hostname "127.0.0.1" and port 1414. If it is a local queue manager, then connect to it directly (in bindings mode) rather than as a client.
Also, each queue manager MUST use a unique port number. If you had it working with a WMQ v6 queue manager, is this the same queue manager? If not, then make sure each queue manager uses a different port number (e.g. 1415, 1416, etc.).
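If it really is a local queue manager, a bindings-mode connection avoids the listener and channel entirely. A minimal sketch, assuming the IBM MQ classes for JMS on a machine with a local MQ installation; the queue manager name is a placeholder:

import javax.jms.Connection;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class LocalBindingsConnect {

    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setQueueManager("QM1");                          // placeholder name
        cf.setTransportType(WMQConstants.WMQ_CM_BINDINGS);  // no host, port or channel needed

        // Connects through the local installation's shared libraries, so a
        // listener that is not running cannot be the cause of a failure here.
        Connection connection = cf.createConnection();
        connection.close();
        System.out.println("Connected to the local queue manager in bindings mode.");
    }
}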
I got the same problem, but I resolved it by:
1. creating a listener manually (define lstr(lstr1) port(xxxx) control(qmgr))
2. setmqaut mcauser('mqm')

NServiceBus DBMS connection timeout

I use NServiceBus with Oracle queues (OAQ) instead of MSMQ.
I have a problem working with a DBMS server that is shut down every day at the same time.
In particular, when my NServiceBus host can't get the DBMS connection, it starts logging errors.
When the DBMS is restarted, my host sometimes restarts and sometimes doesn't, seemingly at random! However, if I restart my host myself, everything is OK.
Another detail is that when my NServiceBus host can't restart, it logs a 'connection timeout' message every 15 seconds!
What is the behavior of NServiceBus when it's reading from a queue and the DBMS crashes? What could I do to solve this problem?
thank you,
R
I'm afraid the problem you're facing is the result of the design of your system. Because the queues live in the DB, when the DB becomes unavailable, so do the queues. NServiceBus assumes that it is always able to communicate with its queues, as is the case when using a distributed/federated queuing system like MSMQ.
You can look at what some people in the community have done to combat this same problem when they were using IBM MQ (http://code.google.com/p/nservicebuswmq/) - ultimately falling back to MSMQ under those conditions and then syncing back up with MQ when it came back online.
