JMS connection count confuses me in netstat / MQ Explorer - ibm-mq

I am encountering some issues when using IBM MQ along with Spring JMS. I am using CachingConnectionFactory to improve performance, but I noticed something "strange" both in the MQ Explorer application connection table and in the netstat output.
The session cache size is configured to 5 on the CachingConnectionFactory.
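Roughly, the setup behind these tests looks like the following sketch (the host, port, channel and queue manager names are placeholders, not my real configuration):

import javax.jms.JMSException;
import org.springframework.jms.connection.CachingConnectionFactory;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqTestConfig {
    // Builds the factory pair used in the tests below.
    static CachingConnectionFactory buildFactory() throws JMSException {
        MQConnectionFactory mqCf = new MQConnectionFactory();
        mqCf.setHostName("mqhost.example.com");            // placeholder host
        mqCf.setPort(1414);                                // placeholder port
        mqCf.setChannel("APP.SVRCONN");                    // placeholder SVRCONN channel
        mqCf.setQueueManager("QM1");                       // placeholder queue manager
        mqCf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // client (TCP) connection

        CachingConnectionFactory cachingCf = new CachingConnectionFactory(mqCf);
        cachingCf.setSessionCacheSize(5);                  // the session cache size of 5
        return cachingCf;
    }
}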
1. Create one or multiple connections in a single thread: I see 1 row in both MQ Explorer and netstat. (OK, no problem.)
2. Create one connection and then 5 sessions in a single thread: I see 1 row in the netstat output but 6 rows in MQ Explorer.
3. Test a concurrent scenario with 5 producer threads, creating one connection in each thread: I can see 1 row in netstat and 1 row in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I can see 5 rows in the netstat output and (5 + 1) rows in MQ Explorer. Calling connection.close() in each thread => the row counts do not change. Calling session.close() in each thread => the row counts do not change.
I am confused by case 4.
For comparison, here are the same observations when using MQConnectionFactory directly:
1. Create one or multiple connections in a single thread: I see 1 row in both MQ Explorer and netstat. (OK, no problem.)
2. Create one connection and then 5 sessions in a single thread: I see 1 row in the netstat output but 6 rows in MQ Explorer.
3. Test a concurrent scenario with 5 producer threads, creating one connection in each thread: I can see 5 rows in netstat and 5 rows in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I can see 5 rows in the netstat output and 10 rows in MQ Explorer. Calling connection.close() in each thread => the row counts drop to 0 in both. Calling session.close() in each thread => the netstat count does not change, but MQ Explorer decreases to 5 rows. (A sketch of this per-thread test follows the list.)
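For clarity, a minimal sketch of the per-thread test in case 4, assuming the connection factory is one of the two described above (the producer logic and the close() variants are elided or commented out):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Session;

public class Case4Test {
    // Each of the 5 producer threads creates one connection and one session.
    // The commented close() calls are the variants tested in case 4.
    static void runCase4(ConnectionFactory cf) {
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    Connection connection = cf.createConnection();
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    // ... produce messages, then check netstat and MQ Explorer ...
                    // session.close();    // variant: close only the session
                    // connection.close(); // variant: close the connection
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}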
I hope you can explain this, thank you.

IBM MQ level details
netstat shows the network-level TCP connections to the MQ queue manager's listener port; these match 1 for 1 with the number of channel instances running.
IBM MQ v7.0 and later support multiplexing multiple conversations over a single MQ channel instance. The SVRCONN channel setting that controls the number of conversations per channel is SHARECNV, which defaults to 10. The value is negotiated down to the smaller of the server-side and client-side settings; for JMS this will generally match the server-side setting, unless a CCDT is used, in which case it is negotiated against the CLNTCONN channel's SHARECNV value.
If you run the following command in runmqsc you can see both the negotiated SHARECNV value (MAXSHCNV) and the current number of conversations (CURSHCNV).
DIS CHS(CHL.NAME) CURSHCNV MAXSHCNV
When you look at connections related to TCP clients in MQ using either the following command in runmqsc or the application connection table in MQ Explorer, you will see one connection per conversation.
DIS CONN(*) TYPE(ALL)
JMS details
At the JMS layer each connection to the queue manager is a conversation.
At the JMS layer each session is another conversation.
What you will likely see is that if you open enough sessions to exceed the SHARECNV limit, another channel instance will start, and thus you will see another "row" in the netstat output.
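As a rough illustration of that point, assuming a plain client-mode connection factory and the default SHARECNV(10) on the SVRCONN channel: one connection plus twelve sessions gives 13 conversations, so a second channel instance (and a second netstat "row") should appear:

import java.util.ArrayList;
import java.util.List;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Session;

public class ShareCnvOverflowSketch {
    // 1 connection + 12 sessions = 13 conversations. With SHARECNV(10) these
    // should spread over two channel instances (10 + 3), i.e. two netstat rows.
    static void demo(ConnectionFactory cf) throws JMSException {
        Connection connection = cf.createConnection();
        List<Session> sessions = new ArrayList<>();
        for (int i = 0; i < 12; i++) {
            sessions.add(connection.createSession(false, Session.AUTO_ACKNOWLEDGE));
        }
        // At this point DIS CHS(CHL.NAME) CURSHCNV MAXSHCNV should list two
        // channel instances, and netstat should show two TCP connections.
    }
}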
Notes on your observations:
You observed:
Now I want to test a concurrent scenario with 5 producer threads, creating one connection in each thread: I can see 5 rows in netstat and 5 rows in MQ Explorer.
In the same context as 3, create one connection and then one session in each thread: I can see 5 rows in the netstat output and 10 rows in MQ Explorer. Calling connection.close() in each thread => the row counts drop to 0 in both. Calling session.close() in each thread => the netstat count does not change, but MQ Explorer decreases to 5 rows.
The comments below are based on the assumption that the SVRCONN channel has SHARECNV(10) and that you are not using a CCDT with CLNTCONN channels that have a SHARECNV lower than 10.
Connection created: From what I have read (and your observations agree), each connection starts a new channel instance no matter what the SHARECNV setting is.
Session created for each connection: Each session is an additional conversation on the same channel instance that was created by the connection it is associated with. With SHARECNV(1) you would also see a new channel instance for each session.
connection.close(): By closing the 5 connections you also close the sessions that were created on top of each connection, so seeing no more channels running and no connections listed in Explorer is normal.
session.close(): This is also normal. Since you closed the 5 sessions, the 5 corresponding rows you observe in Explorer go away, but the still-open connections keep the 5 channel instances running.
Additional details
Note also that other channel-level improvements were added in MQ v7.0, bidirectional heartbeats being one of them. If an MQ client at v6.0 or lower connects to an MQ v7.0 or later queue manager, the channel does not turn on the new features (multiplexing and bidirectional heartbeats). If an MQ v7.0 or later client connects to an MQ v7.0 or later queue manager, the v6.0 behavior can be forced by setting SHARECNV(0).
Setting SHARECNV(1) keeps the new channel features but turns off multiplexing.
If SHARECNV is set either to 0 or 1 then the number of channels and netstat "rows" will increase for each JMS connection and session.
Non-default performance-enhancing setting
If you are at IBM MQ v8.0 or later, IBM recommends using SHARECNV(1) for roughly a 15% performance improvement over the default. You can read about it here: IBM MQ 9.1.x Knowledge Center > Monitoring and performance > Tuning your IBM MQ network > Tuning client and server connection channels
The default settings for client and server connection channels changed in Version 7.0 to use shared conversations. Performance enhancements for distributed servers were then introduced in Version 8.0. To benefit from the new features that were introduced alongside shared conversations, without the performance impact on the distributed server, set SHARECNV to 1 on your Version 8.0 or later server connection channels.
From Version 7.0, each channel is defined by default to run up to 10 client conversations per channel instance. Before Version 7.0, each conversation was allocated to a different channel instance. The enhancements added in Version 7.0 also include the following features:
Bi-directional heartbeats
Administrator stop-quiesce
Read-ahead
Asynchronous-consume by client applications
For some configurations, using shared conversations brings significant benefits. However, for distributed servers, processing messages on channels that use the default configuration of 10 shared conversations is on average 15% slower than on channels that do not use shared conversations. On an MQI channel instance that is sharing conversations, all of the conversations on a socket are received by the same thread. If the conversations sharing a socket are all busy, the conversational threads contend with one another to use the receiving thread. The contention causes delays, and in this situation using a smaller number of shared conversations is better.
You use the SHARECNV parameter to specify the maximum number of conversations to be shared over a particular TCP/IP client channel instance. For details of all possible values, and of the new features added in Version 7.0, see MQI client: Default behavior of client-connection and server-connection. If you do not need shared conversations, there are two settings that give best performance in Version 8.0 or later:
SHARECNV(1). Use this setting whenever possible. It eliminates contention to use the receiving thread, and your client applications can take advantage of the new features added in Version 7.0. For this setting, distributed server performance is significantly improved in Version 8.0 or later. The performance improvements apply to Version 8.0 or later client applications that issue non read ahead synchronous get wait calls; for example C client MQGET wait calls. When these client applications are connected, the distributed server uses less threads and less memory and the throughput is increased.

Related

How to limit the number of connections to IBM MQ

I have a Spring Boot based messaging app sending/receiving JMS messages to/from an IBM MQ queue manager.
Basically, it uses MQConnectionFactory to set up the connection to IBM MQ and a JmsPoolConnectionFactory from messaginghub:pooledjms to provide a JMS connection pool, since pooling was removed from MQConnectionFactory in IBM MQ 7.x.
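Roughly, the factory setup looks like the sketch below (host, port, channel and queue manager are placeholders, I am assuming the org.messaginghub.pooled.jms API, and the pool limit shown is only illustrative):

import javax.jms.JMSException;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class PooledMqConfig {
    static JmsPoolConnectionFactory buildPooledFactory() throws JMSException {
        MQConnectionFactory mqCf = new MQConnectionFactory();
        mqCf.setHostName("mqhost.example.com");            // placeholder host
        mqCf.setPort(1414);                                // placeholder port
        mqCf.setChannel("APP.SVRCONN");                    // placeholder SVRCONN channel
        mqCf.setQueueManager("QM1");                       // placeholder queue manager
        mqCf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        JmsPoolConnectionFactory poolCf = new JmsPoolConnectionFactory();
        poolCf.setConnectionFactory(mqCf);
        poolCf.setMaxConnections(5);   // the kind of limit I am trying to enforce
        return poolCf;
    }
}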
The app uses two different approaches to work with JMS. A "correct" one runs a JmsListener to receive messages and then sends a response to each message using JmsTemplate.send(). And there is a second, "troubling" approach, where the app sends requests using JmsTemplate.send() and waits for the response using JmsTemplate.readByCorrelId() until it is received or times out.
I say troubling because this makes JMS sessions last longer when the response is delayed and could easily exhaust the IBM MQ connection limit. Unfortunately, I cannot rewrite the app to the first approach at the moment to resolve the issue.
Now I want to restrict the number of connections in the pool. Of course, the delayed requests will fail, but the IBM MQ connection limit is more important at the moment, so this is acceptable. The problem is that even if I disable the JmsPoolConnectionFactory, it seems that MQConnectionFactory still opens multiple connections to the queue manager.
While profiling the app I see multiple threads named RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12433875[...] created by the JMSCCMasterThreadPool, and corresponding connections to the queue manager in MQ Explorer. I wonder why there are so many of them, given that connection pooling was removed from MQConnectionFactory. I assumed it would open and reuse a single connection, but that is not the case in my test.
Disabling the "troubling" JmsTemplate.readByCorrelId() path and leaving only the "correct" approach in the app removes these multiple connections (and the waiting threads, of course).
Replacing JmsPoolConnectionFactory with SingleConnectionFactory has no effect on the issue.
Is there any way to limit those connections? Is it possible to control max threads in the JMSCCMasterThreadPool as a workaround?
Because it affects other applications, your MQ admins probably want you not to exhaust the overall queue manager's connection limit (the MaxChannels and MaxActiveChannels parameters in qm.ini). They can help you by defining an MQ channel used exclusively by your application. With that, they can limit the number of connections from your application using the MAXINST / MAXINSTC channel parameters. You will get an exception when this number is exhausted, which is appropriate, as you say. Other applications will no longer be affected.

What is the use of server conn in WebSphere MQ

What is the use of a server conn in WebSphere MQ, and why do we go for it?
What is the difference between a client conn and a server conn?
In some respects these are two opposite things, but they need to match to make a client connection to a queue manager. It's quite a generic topic, but fortunately there is lots of useful documentation about this in Google / the IBM Knowledge Center, e.g. https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q016480_.htm
As a queue manager, if you are going to let clients connect to you, you need to be able to provide some configuration details (heartbeat intervals, max message sizes, user exits); these are configured on a SVRCONN channel.
As an application, if you want to connect to a queue manager via client bindings (usually to reach another machine), you need some information about the configuration to use; this is configured on a CLNTCONN channel.
The application 'provides' a CLNTCONN channel, and once the connection attempt is made, an equivalent SVRCONN channel is looked up, the configuration values are negotiated, and the connection is established.
An application can 'provide' a CLNTCONN channel in at least 3 common ways (a rough Java sketch of the last option follows the list):
- As part of an MQSERVER environment variable
- Via a client channel table (MQCHLLIB/MQCHLTAB environment variables)
- During an MQCONNX call it can provide the channel details
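For example, here is a rough sketch of the last option as it looks from the IBM MQ classes for Java, where the application supplies the channel details itself before connecting (host, port, channel and queue manager names are placeholders):

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class ClientConnectSketch {
    public static void main(String[] args) throws MQException {
        // The application supplies the client connection details itself,
        // rather than relying on MQSERVER or a client channel table.
        MQEnvironment.hostname = "mqhost.example.com"; // placeholder host
        MQEnvironment.port = 1414;                     // placeholder port
        MQEnvironment.channel = "APP.SVRCONN";         // placeholder SVRCONN channel name

        MQQueueManager qMgr = new MQQueueManager("QM1"); // placeholder queue manager name
        qMgr.disconnect();
    }
}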
More details here:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q027440_.htm

WebSphere MQ DISC vs KAINT on SVRCONN channels

We have a major problem with many of our applications making improper connections (SVRCONN) to the queue manager and not issuing MQDISC when the connection is no longer required. This causes a lot of idle, stale connections and prevents applications from making new connections, which then fail with a CONNECTION BROKEN (2009) error. We have been restricting application connections with the ClientIdle parameter in our Windows MQ installation at version 7.0.1.8, but now that we have migrated to MQ v7.5.0.2 on Linux we are deciding on the best option available in the new version. We no longer have ClientIdle in the ini file for v7.5, but there are DISCINT & KAINT on SVRCONN channels. I have been going through the advantages and disadvantages of both for our scenario of applications making connections through SVRCONN channels and leaving connections open without issuing a disconnect. Which of these channel attributes is ideal for us? Any suggestions? Does either of them take precedence over the other?
First off, KAINT controls TCP functions, not MQ functions. That means for it to take effect, the TCP Keepalive function must be enabled in the qm.ini TCP stanza. Nothing wrong with this, but the native HBINT and DISCINT are more responsive than delegating to TCP. This addresses the problem that the OS hasn't recognized that a socket's remote partner is gone and cleaned up the socket. As long as the socket exists and MQ's channel is idle, MQ won't notice. When TCP cleans the socket up, MQ's exception callback routine sees it immediately and closes the channel.
Of the remaining two, DISCINT controls the interval after which MQ will terminate an idle but active socket whereas HBINT controls the interval after which MQ will shut down an MCA attached to an orphan socket. Ideally, you will have a modern MQ client and server so you can use both of these.
The DISCINT should be a value longer than the longest expected interval between messages if you want the channel to stay up during the Production shift. So if a channel should have message traffic at least once every 5 minutes by design, then a DISCINT longer than 5 minutes would be required to avoid channel restart time.
The HBINT actually flows a small heartbeat message over the channel, but it will only do so if HBINT seconds have passed without a message. This catches the case where the socket is dead but TCP hasn't yet cleaned it up. HBINT allows MQ to discover this before the OS does and to take care of it, including tearing down the socket.
In general, really low values for HBINT can cause lots of unnecessary traffic. For example, HBINT(5) would flow a heartbeat in every five-second interval in which no other channel traffic is passed. Chances are you don't need to terminate orphan channels within 5 seconds of the loss of the socket, so a larger value is perhaps more useful. That said, HBINT(5) would cause zero extra traffic in a system with a sustained message rate of 1/second - until the app died, in which case the orphan socket would be killed pretty quickly.
For more detail, please go to the SupportPacs page and look for Morag's "Keeping Channels Running" presentation.

MQ Load Testing using JMETER for JMS Point-to-Point Messaging

I want to perform MQ load testing using JMeter for JMS point-to-point messaging. I am able to connect and send a request message over a single connection to a single remote queue. Can we establish multiple channel connections using the same connection factory and send messages to different queues? I have to establish approximately 1500 channel connections with 1500 dedicated remote queues. I am using JMeter version 2.11.
If you mean using different, uniquely named SVRCONN channels, then no. The channel specified in the connection factory can't be changed. To simulate one channel per connection you would need to create a connection factory for each channel.
However, there is no technical reason you cannot use the same channel for multiple queues, simply by referencing the same connection factory for each test. Performance-wise, there really won't be a difference between using 1500 instances of the same channel and 1500 individually named channels.
You may need to adjust the number of instances allowed for that given channel (MAXINST) and/or the number that can be started from a single client (MAXINSTC) if you expect all 1500 to run at the same time.

IBM WAS7 Queue Factory Configuration to an MQ Cluster

I'm trying to configure a clustered WebSphere Application Server that connects to a clustered MQ.
However, the information I have consists of details for two instances of MQ with different host names, server channels and queue managers, which belong to the same MQ cluster.
On the WebSphere console, I can see input fields for hostname, queue manager and server channel, but I cannot find anywhere to specify multiple sets of MQ details.
If I pick one of the MQ detail, will MQ clustering still work? If not, how will I enable MQ clustering given the details I have?
WebSphere MQ clustering affects how queue managers talk amongst themselves. It does not change how an application connects or talks to a queue manager, so the question as asked seems to assume some sort of clustering behavior that is not present in WMQ.
To set up the app server with two addresses, please see Configuring multi-instance queue manager connections with WebSphere MQ messaging provider custom properties in the WAS v7 Knowledge Center for instructions on how to configure a connection factory with a multi-instance CONNAME value.
If you specify a valid QMgr name in the Connection Factory and the QMgr to which the app connects doesn't have that specific name then the connection is rejected. Normally a multi-instance CONNAME is used to connect to a multi-instance QMgr. This is a single highly available queue manager that can be at one of two different IP addresses so using a real QMgr name works in that case. But if the QMgrs to which your app is connecting are two distinct and different-named queue managers, which is what you described, you should specify an asterisk (a * character) as the queue manager name in your connection factory as described here. This way the app will not check the name of the QMgr when it gets a connection.
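As a rough JMS-level illustration of that combination (in WAS you would set the equivalent custom properties on the connection factory; the host names, port and channel below are placeholders):

import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MultiInstanceCfSketch {
    static MQConnectionFactory build() throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Two host(port) entries, tried in order: the multi-instance CONNAME.
        cf.setConnectionNameList("hostname01(1414),hostname02(1414)");
        cf.setChannel("APP.SVRCONN"); // the same channel name must exist on both QMgrs
        cf.setQueueManager("*");      // '*' so the QMgr name is not checked
        return cf;
    }
}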
If I pick one of the MQ detail, will MQ clustering still work? If not,
how will I enable MQ clustering given the details I have?
Depends on what you mean by "clustering". If you believe that the app will see one logical queue which is hosted by two queue managers, then no. That's not how WMQ clustering works. Each queue manager hosting a clustered queue gets a subset of messages sent to that queue. Any apps getting from that queue will therefore only ever see the local subset.
But if by "clustering" you intend to connect alternately to one or the other of the two queue managers and transmit messages to a queue that is in the same cluster but not hosted on either of the two QMgrs to which you connect, then yes it will work fine. If your Connection Factory knows of only one of the two QMgrs you will only connect to that QMgr, and sending messages to the cluster will still work. But set it up as described in the links I've provided and your app will be able to connect to either of the two QMgrs and you can easily test that by stopping the channel on the one it connects to and watching it connect to the other one.
Good luck!
UPDATE:
To be clear, the details provided are similar to hostname01, qmgr01, queueA, serverchannel01. And the other is hostname02, qmgr02, queueA, serverchannel02.
WMQ Clients will connect to two different QMgrs using a multi-instance CONNAME only when...
The channel name used on both QMgrs is exactly the same
The application uses an asterisk (a * character) or a space for the QMgr name when the connection request is made (i.e. in the Connection Factory).
It is possible to have WMQ connect to one of several different queue managers where the channel name differs on each by using a Client Connection Definition Table, also known as a CCDT. The CCDT is a compiled artifact that you create using MQSC commands to define CLNTCONN channels. It contains entries for each of the QMgrs the client is eligible to connect to. Each can have a different QMgr name, host, port and channel. However, when defining the CCDT the administrator defines all the entries such that the QMgr name is replaced with the application High Level Qualifier. For example, the Payroll app wants to connect to any 1 of 3 different QMgrs. The WMQ Admin defines a CCDT with three entries but uses PAY01, PAY02, and PAY03 for the QMgr names. Note this does not need to match the actual QMgr names. The application then specifies the QMgr name as PAY* which selects all three QMgrs in the CCDT.
Please see Using a client channel definition table with WebSphere MQ classes for JMS for more details on the CCDT.
Is MQ cluster not similar to application server clusters?
No, not at all.
Wherein two child nodes are connected to a cluster, and an F5 URL will be used to distribute the load to each node. Does WMQ not come with a cluster URL / F5 that we just send messages to, with the partitioning of messages being transparent?
No. The WMQ cluster provides a namespace within which applications and QMgrs can resolve non-local objects such as queues and topics. The only thing that ever connects to a WebSphere MQ cluster is a queue manager. Applications and human users always connect to specific queue managers. There may be a set of interchangeable queue managers such as with the CCDT, but each is independent.
With WAS the messaging engine may run on several nodes, but it provides a single logical queue from which applications can get messages. With WMQ each node hosting that queue gets a subset of the messages and any application consuming those messages sees only that subset.
HTTP is stateless and so an F5 URL works great. When it does maintain a session, that session exists mainly to optimize away connection overhead and tends to be short lived. WMQ client channels are stateful and coordinate both single-phase and two-phase units of work. If an application fails over to another QMgr during a UOW, it has no way to reconcile that UOW.
Because of the nature of WMQ connections, F5 is never used between QMgrs. It is only used between client and QMgr, for connection balancing and not message traffic balancing. Furthermore, the absence or presence of an MQ cluster is entirely transparent to the application which, in either case, simply connects to a QMgr to get and/or put messages. Use of a multi-instance CONNAME or a CCDT file makes that connection more robust by providing multiple equivalent QMgrs to which the client can connect, but that has nothing whatever to do with WMQ clustering.
Does that help?
Please see:
Clustering
How Clusters Work
Queue manager groups in the CCDT
Connecting WebSphere MQ MQI client applications to queue managers
