How to limit the number of connections to IBM MQ - Spring

I have a Spring Boot based messaging app sending/receiving JMS messages to/from IBM MQ queue manager.
It uses an MQConnectionFactory to connect to IBM MQ and a JmsPoolConnectionFactory from messaginghub:pooled-jms to provide JMS connection pooling, since built-in pooling was removed from MQConnectionFactory in IBM MQ 7.x.
The app uses two different approaches to work with JMS. A "correct" one runs a JMSListener to receive messages and then sends a response to each message using JmsTemplate.send(). And there is a second, "troubling" approach, where the app sends requests using JmsTemplate.send() and waits for the response using JmsTemplate.readByCorrelId() until it is received or times out.
I say troubling because it keeps JMS sessions open longer when a response is delayed and could easily exhaust the IBM MQ connection limit. Unfortunately, I cannot rewrite the app to use the first approach at the moment to resolve the issue.
Now I want to restrict the number of connections in the pool. The delayed requests will fail, of course, but the IBM MQ connection limit is more important at the moment, so that is acceptable. The problem is that even if I disable the JmsPoolConnectionFactory, MQConnectionFactory still seems to open multiple connections to the queue manager.
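For reference, capping the pool itself can be sketched like this. This is a minimal sketch assuming pooled-jms 1.x; the configuration class and bean wiring are illustrative, not taken from the app:

import javax.jms.ConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.ibm.mq.jms.MQConnectionFactory;

@Configuration
public class PooledMqConfig {
    @Bean
    public ConnectionFactory pooledConnectionFactory(MQConnectionFactory mqcf) {
        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        pool.setConnectionFactory(mqcf);          // wrap the underlying MQ factory
        pool.setMaxConnections(5);                // cap pooled JMS connections
        pool.setMaxSessionsPerConnection(10);     // cap sessions (conversations) per connection
        pool.setBlockIfSessionPoolIsFull(false);  // fail fast instead of parking callers
        return pool;
    }
}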
While profiling the app I see multiple threads named RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12433875[...] created by the JMSCCMasterThreadPool, and corresponding connections to the queue manager in MQ Explorer. Why are there so many of them, given that connection pooling was removed from MQConnectionFactory? I would expect it to open and reuse a single connection, but that is not what happens in my test.
Disabling "troubling" JmsTemplate.readByCorrelId() and leaving only "correct" way in the app removes these multiple connections (and the waiting threads of course).
Replacing JmsPoolConnectionFactory with SingleConnectionFactory has not effect on the issue.
Is there any way to limit those connections? Is it possible to control max threads in the JMSCCMasterThreadPool as a workaround?

Because it affects other applications, your MQ admins probably want you not to exhaust the queue manager's overall connection limit (the MaxChannels and MaxActiveChannels parameters in qm.ini). They can help you by defining an MQ channel used exclusively by your application. That way they can limit your application's number of connections with the MAXINST / MAXINSTC channel parameters. You will get an exception when this limit is exhausted, which is appropriate, as you say, and other applications won't be affected anymore.
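For illustration, the channel-level limits might be set in runmqsc roughly like this (the channel name APP1.SVRCONN and the numbers are hypothetical):

DEFINE CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE
ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) MAXINST(50) MAXINSTC(10)

MAXINST caps the total simultaneous instances of the channel; MAXINSTC caps the instances started from a single client.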

Related

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better confirmation than any administrative command, because it also verifies that your application and its security context are correct and able to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if that works, the queue manager is up and running.
Your comment about increasing the timer before attempting to reconnect after a failure is well made. It doesn't help anyone if you hammer the queue manager with lots of repeated, closely spaced connection attempts before it is ready to accept them. Still, anything that tests the availability of the queue manager ultimately needs to connect to it, so, very simply, just connect.
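As a rough sketch of "just connect" using the IBM MQ classes for JMS (host, port, channel, and queue manager names are placeholders):

import javax.jms.Connection;
import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqProbe {
    public static boolean queueManagerReachable() {
        MQConnectionFactory cf = new MQConnectionFactory();
        try {
            cf.setHostName("mqhost.example.com");      // placeholder
            cf.setPort(1414);
            cf.setQueueManager("QM1");                 // placeholder
            cf.setChannel("APP1.SVRCONN");             // placeholder
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            try (Connection conn = cf.createConnection()) {
                conn.start();  // success: QMgr, channel, and our security context all work
                return true;
            }
        } catch (JMSException e) {
            // the linked exception normally carries the MQ reason code (MQRC_...)
            System.err.println("Connect failed: " + e + " linked=" + e.getLinkedException());
            return false;
        }
    }
}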

JMS connection number confuse me in Netstat/MQ explorer

I am encountering some issues when using IBM MQ with Spring JMS. I am using CachingConnectionFactory to improve performance, but I noticed something "strange" both in the MQ Explorer application connection table and in the netstat output.
The session cache size is configured to 5 on the CachingConnectionFactory.
1. Create one or multiple connections in a single thread: I see 1 row in both MQ Explorer and netstat. (OK, no problem.)
2. Create one connection and then 5 sessions in a single thread: I see 1 row in the netstat output but 6 rows in MQ Explorer.
3. Test a concurrent scenario with 5 producer threads, creating one connection in each thread: I see 1 row in netstat and 1 row in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I see 5 rows in the netstat output and (5 + 1) rows in MQ Explorer. Doing connection.close() in each thread => the row counts are unchanged. Doing session.close() in each thread => the row counts are unchanged.
I am confused by case 4.
Here is the same analysis when using a plain MQConnectionFactory:
1. Create one or multiple connections in a single thread: I see 1 row in both MQ Explorer and netstat. (OK, no problem.)
2. Create one connection and then 5 sessions in a single thread: I see 1 row in the netstat output but 6 rows in MQ Explorer.
3. Test a concurrent scenario with 5 producer threads, creating one connection in each thread: I see 5 rows in netstat and 5 rows in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I see 5 rows in the netstat output and 10 rows in MQ Explorer. Doing connection.close() in each thread => both row counts drop to 0. Doing session.close() in each thread => the netstat count is unchanged, but the MQ Explorer count decreases to 5.
I hope you can explain this, thank you.
IBM MQ level details
netstat will show you the network-level TCP connections to the MQ queue manager's listener port; these match 1 for 1 with the number of running channel instances.
IBM MQ v7.0 and later support multiplexing multiple conversations over one MQ channel instance. The SVRCONN channel setting that controls the number of conversations per channel instance is SHARECNV, which defaults to 10. The value is negotiated down to the smaller of the server-side and client-side settings; for JMS this generally matches the server-side setting, unless a CCDT is used, in which case it is negotiated against the CLNTCONN channel's SHARECNV value.
If you run the following command in runmqsc, you can see both the negotiated SHARECNV value (MAXSHCNV) and the current number of conversations (CURSHCNV):
DIS CHS(CHL.NAME) CURSHCNV MAXSHCNV
When you look at connections from TCP clients in MQ, using either the following command in runmqsc or the application connection table in MQ Explorer, you will see one connection per conversation:
DIS CONN(*) TYPE(ALL)
JMS details
At the JMS layer each connection to the queue manager is a conversation.
At the JMS layer each session is another conversation.
What you likely see is that if you open enough sessions to exceed the SHARECNV limit, another channel instance will start, and you will see another "row" in the netstat output.
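A small sketch of the arithmetic (assuming a client-mode ConnectionFactory cf and SHARECNV(10) on the channel; names are illustrative):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

public class ConversationDemo {
    public static void demo(ConnectionFactory cf) throws Exception {
        Connection conn = cf.createConnection();  // conversation 1: starts a channel instance (1 netstat row)
        Session[] sessions = new Session[5];
        for (int i = 0; i < 5; i++) {
            // conversations 2..6, multiplexed on the same channel instance
            // while the negotiated SHARECNV (assumed 10 here) is not exceeded
            sessions[i] = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        }
        // Expect: netstat shows 1 TCP connection; DIS CONN(*) / MQ Explorer show 6 entries.
        conn.close();  // also closes the sessions, ending all 6 conversations
    }
}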
Notes on your observations:
You observed:
3. Test a concurrent scenario with 5 producer threads, creating one connection in each thread: I see 5 rows in netstat and 5 rows in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I see 5 rows in the netstat output and 10 rows in MQ Explorer. Doing connection.close() in each thread => both row counts drop to 0. Doing session.close() in each thread => the netstat count is unchanged, but the MQ Explorer count decreases to 5.
Comments below are based on the assumption that the SVRCONN channel has SHARECNV(10) and that you are not using a CCDT with CLNTCONN channels with SHARECNV lower than 10.
Connection created: From what I have read (and your observations agree), each connection is a new channel instance, no matter the SHARECNV setting.
Session created for each connection: Each session is an additional conversation on the same channel instance that was created by the connection it is associated with. With SHARECNV(1) you would instead see a new channel instance for each session.
connection.close(): Closing the 5 connections also closes the sessions created on top of each connection, so seeing no more running channels or connections listed in Explorer is normal.
session.close(): This is also normal: since you closed the 5 sessions, 5 of the connections you observe in Explorer go away, but the JMS connections still keep the 5 channel instances running.
Additional details
Note also that other channel-level improvements were added in MQ v7.0, bidirectional heartbeats being one of them. If an MQ v6.0 or earlier client connects to an MQ v7.0 or later queue manager, the channel does not turn on the new features (multiplexing and bidirectional heartbeats). If an MQ v7.0 or later client connects to an MQ v7.0 or later queue manager, the v6.0 behavior can be forced by setting SHARECNV(0).
Setting SHARECNV(1) keeps the new channel features but turns off multiplexing.
If SHARECNV is set to either 0 or 1, the number of channel instances and netstat "rows" will increase with each JMS connection and session.
Non-default performance enhancing setting
If you are on IBM MQ v8.0 or later, IBM recommends SHARECNV(1), which avoids the average 15% performance penalty of the default 10 shared conversations (an example command follows the excerpt below). You can read about it here: IBM MQ 9.1.x Knowledge Center > Monitoring and performance > Tuning your IBM MQ network > Tuning client and server connection channels
The default settings for client and server connection channels changed in Version 7.0 to use shared conversations. Performance enhancements for distributed servers were then introduced in Version 8.0. To benefit from the new features that were introduced alongside shared conversations, without the performance impact on the distributed server, set SHARECNV to 1 on your Version 8.0 or later server connection channels.
From Version 7.0, each channel is defined by default to run up to 10 client conversations per channel instance. Before Version 7.0, each conversation was allocated to a different channel instance. The enhancements added in Version 7.0 also include the following features:
Bi-directional heartbeats
Administrator stop-quiesce
Read-ahead
Asynchronous-consume by client applications
For some configurations, using shared conversations brings significant benefits. However, for distributed servers, processing messages on channels that use the default configuration of 10 shared conversations is on average 15% slower than on channels that do not use shared conversations. On an MQI channel instance that is sharing conversations, all of the conversations on a socket are received by the same thread. If the conversations sharing a socket are all busy, the conversational threads contend with one another to use the receiving thread. The contention causes delays, and in this situation using a smaller number of shared conversations is better.
You use the SHARECNV parameter to specify the maximum number of conversations to be shared over a particular TCP/IP client channel instance. For details of all possible values, and of the new features added in Version 7.0, see MQI client: Default behavior of client-connection and server-connection. If you do not need shared conversations, there are two settings that give best performance in Version 8.0 or later:
SHARECNV(1). Use this setting whenever possible. It eliminates contention to use the receiving thread, and your client applications can take advantage of the new features added in Version 7.0. For this setting, distributed server performance is significantly improved in Version 8.0 or later. The performance improvements apply to Version 8.0 or later client applications that issue non-read-ahead synchronous get-wait calls, for example C client MQGET wait calls. When these client applications are connected, the distributed server uses fewer threads and less memory, and throughput is increased.
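For illustration, applying that recommendation in runmqsc might look like this (the channel name is hypothetical):

ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) SHARECNV(1)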

more than one listener for the queue manager

Can there be more than one listener for a queue manager? I have used one listener/queue manager combination so far and wonder whether this is possible. I ask because we have 2 applications connecting to the same queue manager and seem to have problems with that.
There are a couple of meanings for the term listener in an MQ context. Let's see if we can clear up some confusion over the terminology and then answer the question as it relates to each.
As defined in the spec, a JMS listener is an object that implements a callback mechanism. It listens on destinations for messages and calls onMessage when they arrive. The destinations may be queues or topics hosted by any JMS-compliant transport provider.
In IBM MQ terms, a listener is a process (runmqlsr) that handles inbound connection requests on a server. Although these can handle a variety of protocols, in practice they are almost exclusively TCP listeners that bind a port (1414 by default) and negotiate connection requests on sockets.
TCP Ports
Tim's answer applies to the second of these contexts. MQ can listen for sockets on multiple ports, and indeed it is quite common to do so. Each listener listens on one and only one port. It may listen on that port across all network interfaces or be bound to a specific network interface. No two listeners can bind to the same combination of interface and port, though.
In a B2B context the best practice is to run a dedicated listener for each external business partner, isolating each of their connections on a dedicated access path. Internally, I usually recommend separate ports for QMgr-to-QMgr, app-to-QMgr, and interactive user connections.
In that sense it is possible to run multiple listeners on a given QMgr. Each of those listeners can accept many connections. Their job is to negotiate the connection and then hand the socket off to a Message Channel Agent, which talks to the QMgr on behalf of the remotely connected client or QMgr.
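As an illustration of running an additional listener (the listener name and port are hypothetical), in runmqsc:

DEFINE LISTENER(PARTNER1.LISTENER) TRPTYPE(TCP) PORT(1415) CONTROL(QMGR)
START LISTENER(PARTNER1.LISTENER)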
JMS Listeners
Based on the comments, Ulab is referring to JMS listeners. These objects establish a connection to a queue manager and then wait in GET mode for new messages arriving on a destination. On arrival of a message, they call the onMessage method, an asynchronous callback routine (a minimal sketch follows the points below).
As to the question "can there be more than one (JMS) listener to a queue manager?", the answer is definitely yes. A multi-threaded application can have multiple listeners connected, multiple application instances can connect at the same time, and a single queue manager with sufficient memory, disk, and CPU can handle many thousands of application connections.
Of course, each of these applications is ultimately connected to one or more queues so then the question becomes one of whether they can connect to the same queue.
Many listeners can listen on the same queue so long as they do not get exclusive access to it. Each will receive a portion of the messages arriving.
Listeners on QMgr-managed subscriptions are exclusively attached to a dynamic queue but multiple instances on the same topic will all receive the same messages.
If the queue is clustered and there is more than one instance of it, multiple listeners will be required to get all the messages, since MQ workload distribution will normally spread them across those instances.
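Here is the minimal JMS listener sketch mentioned above (plain JMS API; the class name is illustrative):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class OrderListener implements MessageListener {
    @Override
    public void onMessage(Message message) {  // asynchronous callback on message arrival
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            e.printStackTrace();  // real code would log and possibly trigger recovery
        }
    }
}

Several such listeners can be registered against consumers on the same queue (consumer.setMessageListener(new OrderListener())), and each will receive a share of the arriving messages.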
Yes, you can create as many listeners as you wish:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.explorer.doc/e_listener.htm
However, there is no reason why two applications can't connect to the queue manager via the same listener (on the same port). What problem have you run into?

Restricting EMS client connections

Hi. Our EMS server is used by other clients for putting messages, but sometimes they don't close their connections and the number of connections reaches the server's maximum limit. Is there any way to restrict the number of connections for a client, based either on the EMS username provided to the client or on the host name from which the client connects? Is there any configuration we can apply for client-specific connection restrictions?
No, there is no such provision in the EMS server or client libraries to restrict the number of consumer/producer clients based on their user names or other properties. You can have a look at the JAAS and JACI support in EMS, which can be used to write your own custom Java authentication modules that run in the JVM inside the EMS server. You can find more information about JAAS and JACI on Oracle's documentation site.
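As a rough skeleton only: this shows the standard javax.security.auth.spi.LoginModule shape such a module would take. How EMS wires it in via JAAS/JACI, and how you would actually count connections per user, are product-specific and not shown here:

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class ConnectionLimitLoginModule implements LoginModule {
    private CallbackHandler handler;

    @Override
    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.handler = callbackHandler;
    }

    @Override
    public boolean login() throws LoginException {
        NameCallback name = new NameCallback("user:");
        try {
            handler.handle(new Callback[] { name });
        } catch (Exception e) {
            throw new LoginException("cannot read user name: " + e);
        }
        // hypothetical policy check: reject when this user already holds too many connections
        if (tooManyConnectionsFor(name.getName())) {
            throw new LoginException("connection limit reached for " + name.getName());
        }
        return true;
    }

    private boolean tooManyConnectionsFor(String user) {
        return false;  // placeholder: consult your own connection-count bookkeeping here
    }

    @Override public boolean commit() { return true; }
    @Override public boolean abort()  { return true; }
    @Override public boolean logout() { return true; }
}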
Have you looked into the server_timeout_client_connection setting?
From the docs:
server_timeout_client_connection = limit
In a server-to-client connection, if the server does not receive a heartbeat for a period exceeding this limit (in seconds), it closes the connection.
We recommend setting this value to approximately 3 times the heartbeat interval, as specified in client_heartbeat_server.
Zero is a special value, which disables heartbeat detection in the server (although clients still send heartbeats).
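For example, with a 5-second client heartbeat, the corresponding tibemsd.conf entries might look like this (values illustrative, following the 3x rule above):

client_heartbeat_server = 5
server_timeout_client_connection = 15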

Build durable architecture with Websphere MQ clients

How can you create a durable architecture environment using MQ client and server if the clients don't allow you to persist messages and don't provide assured delivery?
I am just trying to figure out how you can build a scalable, durable architecture if the clients don't appear to contain any of the components required to persist data.
Thanks,
S
Middleware messaging was born of the need to persist data locally to mitigate the effects of failures of the remote node or of the network. The idea at the time was that the queue manager was installed locally on the box where the application lives and was treated as part of the transport stack. For instance, you might install both TCP and WMQ as transports; some apps would use TCP while others used WMQ.
In the intervening 20 years, the original problems that led to the creation of MQSeries (now WebSphere MQ) have largely been solved. Networks have improved by several nines of availability, and high-availability hardware and software clustering provide options to keep the different components available 24x7.
So the practices in widespread use today to address your question follow two basic approaches. Either make the components highly available so that the client can always find a messaging server, or put a QMgr where the application lives in order to provide local queueing.
The default operation of MQ is that when a message is sent (MQPUT, or in JMS terms producer.send), the application does not get a response to the MQPUT call until the message has reached a queue on a queue manager. That is, MQPUT is a synchronous call, and a completion code of OK means that the queue manager to which the client application is connected has received the message successfully. It may not yet have reached its ultimate destination, but it is under the protection of an MQ server, and you can therefore rely on MQ to look after the message and forward it to where it needs to go.
Whether client connected, or locally bound to the queue manager, applications sending messages are responsible for their data until an MQPUT call returns successfully. Similarly, receiving applications are responsible for their data once they get it from a successful MQGET (or JMS consumer.receive) call.
There are multiple levels of message protection available.
If you are using non-persistent messages and asynchronous PUTs, then you are effectively saying it doesn't matter too much whether the messages reach their destination (although they generally will).
If you want MQ to really look after your messages, use synchronous PUTs as described above, use persistent messages, and perform your PUTs and GETs within transactions (also known as syncpoint) so that you have full application control over the commit points.
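A brief sketch of that level of protection in JMS terms (assuming a configured ConnectionFactory; the queue name is a placeholder):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class ProtectedSend {
    public static void send(ConnectionFactory cf) throws JMSException {
        try (Connection conn = cf.createConnection()) {
            Session session = conn.createSession(true, Session.SESSION_TRANSACTED); // unit of work
            Queue queue = session.createQueue("APP.REQUEST");                       // placeholder name
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);  // message survives a QMgr restart
            producer.send(session.createTextMessage("payload"));
            session.commit();  // the message is only protected/visible once the transaction commits
        }
    }
}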
If your networks are so unreliable that you regularly expect to fail to get messages to a server, and expect to need regular retries such that you need client-side message protection, one option you could investigate is MQ Telemetry (e.g. in WebSphere MQ V7.1), which is designed for low-bandwidth and/or unreliable network communications as a route into the wider MQ network.
