We are getting this error every other day. I know that increasing the channel max depth would resolve the issue, but is there any option to close the connection after the messages have been pushed to the queues? Can we do it that way?
Assuming the connections that DataPower is using are MQI connections (i.e. a client connecting to a SVRCONN channel), you can try setting a disconnect interval on the SVRCONN channel that DataPower is using. This forces connections that haven't been used for the specified interval to close.
The MQSC command to alter the interval would be this:
ALTER CHANNEL(channel_name) CHLTYPE(SVRCONN) DISCINT(number_of_seconds)
Further information about DISCINT is available at the IBM Knowledge Center:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.1.0/com.ibm.mq.ref.con.doc/q081860_.htm
Related
I have a Spring Boot-based messaging app sending and receiving JMS messages to/from an IBM MQ queue manager.
Basically, it uses MQConnectionFactory to establish the connection to IBM MQ, and a JmsPoolConnectionFactory from messaginghub:pooledjms to enable JMS connection pooling, which was removed from MQConnectionFactory in IBM MQ 7.x.
The app uses two different approaches to work with JMS. The "correct" one runs a JMSListener to receive messages and then sends a response to each message using JmsTemplate.send(). There is also a second, "troubling" approach, where the app sends a request using JmsTemplate.send() and waits for the response using JmsTemplate.readByCorrelId() until it is received or times out.
I say "troubling" because this makes JMS sessions last longer when the response is delayed, which could easily exhaust the IBM MQ connection limit. Unfortunately, I cannot rewrite the app to use the first approach at the moment.
Now I want to restrict the number of connections in the pool. Of course, delayed requests will then fail, but the IBM MQ connection limit is more important at the moment, so this is acceptable. The problem is that even if I disable the JmsPoolConnectionFactory, it seems that MQConnectionFactory still opens multiple connections to the queue manager.
While profiling the app I see multiple threads RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12433875[...] created by the JMSCCMasterThreadPool, and corresponding connections to the queue manager in MQ Explorer. I wonder why there are so many of them given that connection pooling was removed from MQConnectionFactory? I would expect it to open and reuse a single connection, but that is not what happens in my test.
Disabling "troubling" JmsTemplate.readByCorrelId() and leaving only "correct" way in the app removes these multiple connections (and the waiting threads of course).
Replacing JmsPoolConnectionFactory with SingleConnectionFactory has not effect on the issue.
Is there any way to limit those connections? Is it possible to control max threads in the JMSCCMasterThreadPool as a workaround?
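For reference, the "troubling" request/reply path can be approximated by the sketch below (the queue names and the timeout are placeholders, and receiveSelected() with a correlation-ID selector stands in for the readByCorrelId() call):

import javax.jms.JMSException;
import javax.jms.TextMessage;
import org.springframework.jms.core.JmsTemplate;

public class RequestReplySketch {

    private final JmsTemplate jmsTemplate;

    public RequestReplySketch(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
        this.jmsTemplate.setReceiveTimeout(30_000); // the session is held until the reply arrives or this expires
    }

    public String sendAndWait(String requestBody, String correlationId) throws JMSException {
        // send the request with a correlation ID the responder is expected to echo back
        jmsTemplate.convertAndSend("REQUEST.QUEUE", requestBody, m -> {
            m.setJMSCorrelationID(correlationId);
            return m;
        });
        // block on the reply queue until a matching reply arrives or the timeout expires;
        // this wait is what keeps the JMS session (and its connection) busy
        TextMessage reply = (TextMessage) jmsTemplate.receiveSelected(
                "REPLY.QUEUE", "JMSCorrelationID = '" + correlationId + "'");
        return reply != null ? reply.getText() : null;
    }
}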
Because it affects other applications, your MQ admins probably want you not to exhaust the queue manager's overall connection limit (the MaxChannels and MaxActiveChannels parameters in qm.ini). They can help you by defining an MQ channel that is used exclusively by your application. That way they can limit the number of connections your application opens with the MAXINST / MAXINSTC channel parameters. You will get an exception when this limit is exhausted, which, as you say, is appropriate. Other applications won't be affected anymore.
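On the application side you could additionally cap the pool itself so the app never asks for more than a fixed number of connections. A rough sketch, assuming the messaginghub pooled-jms JmsPoolConnectionFactory wrapping MQConnectionFactory (all connection details and the limit below are placeholders):

import javax.jms.ConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class PooledMqConfig {

    public ConnectionFactory pooledConnectionFactory() throws Exception {
        MQConnectionFactory mqcf = new MQConnectionFactory();
        mqcf.setHostName("mqhost.example.com");   // placeholder
        mqcf.setPort(1414);
        mqcf.setQueueManager("QM1");              // placeholder
        mqcf.setChannel("APP.SVRCONN");           // placeholder
        mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        pool.setConnectionFactory(mqcf);
        pool.setMaxConnections(5);                // hard cap on pooled JMS connections
        return pool;
    }
}

Keep in mind that each pooled JMS Connection is a channel instance, and sessions created from it become additional conversations on that instance, so the number of channel instances you see also depends on the SVRCONN's SHARECNV setting.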
I am encountering some issues when using IBM MQ with Spring JMS. I am using CachingConnectionFactory to improve performance, but I noticed something "strange" in both the MQ Explorer application connection table and the netstat results.
The cached session size is configured to 5 for CachingConnectionFactory.
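The factory setup is roughly as follows (simplified sketch; the MQConnectionFactory details below are placeholders):

import org.springframework.jms.connection.CachingConnectionFactory;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CachingConfig {

    public CachingConnectionFactory cachingConnectionFactory() throws Exception {
        MQConnectionFactory mqcf = new MQConnectionFactory();
        mqcf.setHostName("mqhost.example.com");   // placeholder
        mqcf.setPort(1414);
        mqcf.setQueueManager("QM2");              // placeholder
        mqcf.setChannel("APP.SVRCONN");           // placeholder
        mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        CachingConnectionFactory ccf = new CachingConnectionFactory(mqcf);
        ccf.setSessionCacheSize(5);               // the cached session size of 5 mentioned above
        return ccf;
    }
}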
1. Create one or multiple connections in a single thread: I see 1 row in both MQ Explorer and netstat. (OK, no problem.)
2. Create one connection and then 5 sessions in a single thread: I see 1 row in the netstat results but 6 rows in MQ Explorer.
3. Now I test a concurrent scenario with 5 producer threads, creating one connection in each thread: I see 1 row in netstat and 1 row in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I see 5 rows in the netstat results and (5 + 1) rows in MQ Explorer. Doing connection.close() in each thread does not change the row counts. Doing session.close() in each thread does not change the row counts either.
I am confused by case 4.
I would like to show the same analysis when using MQConnectionFactory:
1. Create one or multiple connections in a single thread: I see 1 row in both MQ Explorer and netstat. (OK, no problem.)
2. Create one connection and then 5 sessions in a single thread: I see 1 row in the netstat results but 6 rows in MQ Explorer.
3. Now I test a concurrent scenario with 5 producer threads, creating one connection in each thread: I see 5 rows in netstat and 5 rows in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I see 5 rows in the netstat results and 10 rows in MQ Explorer. Doing connection.close() in each thread brings both counts down to 0. Doing session.close() in each thread leaves the netstat count unchanged but decreases the MQ Explorer count to 5.
I hope you can explain this, thank you.
IBM MQ level details
netstat will show you the network-level TCP connections to the MQ queue manager's listener port; these will match 1 for 1 with the number of channel instances running.
IBM MQ v7.0 and later support multiplexing multiple conversations over a single MQ channel instance. The setting on a SVRCONN channel that controls the number of conversations per channel instance is SHARECNV, which defaults to 10. The value is negotiated down to the smaller of the server-side and client-side settings; for JMS it will generally match the server-side setting unless a CCDT is used, in which case it is negotiated against the CLNTCONN channel's SHARECNV value.
If you run the following command in runmqsc you can see both the negotiated SHARECNV value (MAXSHCNV) and the current number of conversations (CURSHCNV).
DIS CHS(CHL.NAME) CURSHCNV MAXSHCNV
When you look at connections related to TCP clients in MQ using either the following command in runmqsc or the application connection table in MQ Explorer, you will see one connection per conversation.
DIS CONN(*) TYPE(ALL)
JMS details
At the JMS layer each connection to the queue manager is a conversation.
At the JMS layer each session is another conversation.
What you likely see is that if you open enough sessions to go over the SHARECNV limit, another channel instance will start, and you will see another "row" in the netstat output.
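As a rough sketch, assuming an already configured client-mode ConnectionFactory (such as MQConnectionFactory) pointing at a SVRCONN with SHARECNV(10): one JMS Connection plus five Sessions is six conversations, which all fit on a single channel instance, so netstat shows one TCP connection while MQ Explorer / DIS CONN show six rows.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

public class ConversationSketch {

    static void demo(ConnectionFactory cf) throws Exception {
        Connection conn = cf.createConnection();            // conversation 1 (starts the channel instance)
        Session[] sessions = new Session[5];
        for (int i = 0; i < sessions.length; i++) {
            sessions[i] = conn.createSession(false, Session.AUTO_ACKNOWLEDGE); // conversations 2..6
        }
        // With SHARECNV(10), four more sessions would still fit on this channel instance;
        // the one after that would start a second instance, i.e. another netstat "row".
        Thread.sleep(60_000);   // hold everything open while inspecting DIS CHS / netstat
        for (Session s : sessions) {
            s.close();
        }
        conn.close();
    }
}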
Notes on your observations:
You observed:
3. Now I test a concurrent scenario with 5 producer threads, creating one connection in each thread: I see 5 rows in netstat and 5 rows in MQ Explorer.
4. In the same context as 3, create one connection and then one session in each thread: I see 5 rows in the netstat results and 10 rows in MQ Explorer. Doing connection.close() in each thread brings both counts down to 0. Doing session.close() in each thread leaves the netstat count unchanged but decreases the MQ Explorer count to 5.
Comments below are based on the assumption that the SVRCONN channel has SHARECNV(10) and that you are not using a CCDT with CLNTCONN channels with SHARECNV lower than 10.
Connection created: From what I have read (and your observations agree), each connection is a new channel instance no matter what the SHARECNV setting is.
Session created on each connection: Each session is an additional conversation on the channel instance that was created by the connection it is associated with. With SHARECNV(1) you would instead see a new channel instance for each session.
connection.close(): By closing the 5 connections you also close the sessions that were created on top of each connection, so seeing no more channels running and no more connections listed in Explorer is normal.
session.close(): This is also normal. Since you closed the 5 sessions, the 5 rows they account for in Explorer go away, but the JMS connections still keep the 5 channel instances running, so netstat is unchanged.
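A sketch of case 4, again assuming an already configured client-mode ConnectionFactory and SHARECNV(10): five threads, each creating its own Connection and one Session, gives 5 channel instances (5 netstat rows) and 10 conversations (10 rows in MQ Explorer).

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

public class Case4Sketch {

    static void demo(ConnectionFactory cf) {
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    Connection conn = cf.createConnection();   // new channel instance + 1 conversation
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE); // +1 conversation, same instance
                    Thread.sleep(60_000);  // hold open while inspecting netstat / MQ Explorer
                    session.close();       // MQ Explorer row count drops by 1; netstat unchanged
                    conn.close();          // channel instance ends; both counts drop
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}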
Additional details
Note also that other channel-level improvements were added in MQ v7.0, bidirectional heartbeats being one of them. If an MQ v6.0 or earlier client connects to an MQ v7.0 or later queue manager, the channel does not turn on the new features (multiplexing and bidirectional heartbeats). If an MQ v7.0 or later client connects to an MQ v7.0 or later queue manager, the v6.0 behavior can be forced by setting SHARECNV(0).
Setting SHARECNV(1) keeps the new channel features but turns off multiplexing.
If SHARECNV is set to either 0 or 1, the number of channel instances and netstat "rows" will increase for each JMS connection and session.
Non-default performance enhancing setting
If you are at IBM MQ v8.0 or later, IBM recommends using SHARECNV(1) for a roughly 15% performance increase. You can read about it here: IBM MQ 9.1.x Knowledge Center > Monitoring and performance > Tuning your IBM MQ network > Tuning client and server connection channels:
The default settings for client and server connection channels changed in Version 7.0 to use shared conversations. Performance enhancements for distributed servers were then introduced in Version 8.0. To benefit from the new features that were introduced alongside shared conversations, without the performance impact on the distributed server, set SHARECNV to 1 on your Version 8.0 or later server connection channels.
From Version 7.0, each channel is defined by default to run up to 10 client conversations per channel instance. Before Version 7.0, each conversation was allocated to a different channel instance. The enhancements added in Version 7.0 also include the following features:
Bi-directional heartbeats
Administrator stop-quiesce
Read-ahead
Asynchronous-consume by client applications
For some configurations, using shared conversations brings significant benefits. However, for distributed servers, processing messages on channels that use the default configuration of 10 shared conversations is on average 15% slower than on channels that do not use shared conversations. On an MQI channel instance that is sharing conversations, all of the conversations on a socket are received by the same thread. If the conversations sharing a socket are all busy, the conversational threads contend with one another to use the receiving thread. The contention causes delays, and in this situation using a smaller number of shared conversations is better.
You use the SHARECNV parameter to specify the maximum number of conversations to be shared over a particular TCP/IP client channel instance. For details of all possible values, and of the new features added in Version 7.0, see MQI client: Default behavior of client-connection and server-connection. If you do not need shared conversations, there are two settings that give best performance in Version 8.0 or later:
SHARECNV(1). Use this setting whenever possible. It eliminates contention to use the receiving thread, and your client applications can take advantage of the new features added in Version 7.0. For this setting, distributed server performance is significantly improved in Version 8.0 or later. The performance improvements apply to Version 8.0 or later client applications that issue non read ahead synchronous get wait calls; for example C client MQGET wait calls. When these client applications are connected, the distributed server uses less threads and less memory and the throughput is increased.
We want to use Spring WebSockets + STOMP + Amazon MQ as a full-featured message broker. We were benchmarking to find out how many client WebSocket connections a single Tomcat node can handle, but it appears that we hit the Amazon MQ connection limit first. As per the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to increase the limit, but I doubt it can be increased dramatically). So my questions are:
1) Am I correct in assuming that for every WebSocket connection from a client to the Spring/Tomcat server, a corresponding connection is opened from the server to the broker? Is this the correct behavior, or are we doing something wrong / missing something?
2) What can be done here? I don't think it is a good idea to create a broker node for every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right, and it is documented behavior.
Quote from javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you can write your own message broker relay with TCP connection pooling.
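For context, the behavior quoted above applies to a standard relay configuration along these lines (a sketch; the endpoint, destination prefixes, and broker host/port are hypothetical). With this setup, each client CONNECT results in its own TCP connection from the server to the broker, exactly as the javadoc describes.

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws").withSockJS();            // hypothetical client endpoint
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        registry.enableStompBrokerRelay("/topic", "/queue")  // destinations relayed to the broker
                .setRelayHost("broker.example.com")          // hypothetical broker endpoint
                .setRelayPort(61613);                        // typical STOMP port
    }
}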
What is the use of a server connection (SVRCONN) channel in WebSphere MQ, and why do we use it?
What is the difference between a client connection (CLNTCONN) and a server connection (SVRCONN)?
In some respects these are two opposite things, but they need to match to make a client connection to a queue manager. It's quite a generic topic, but fortunately there is lots of useful documentation about this via Google / the IBM Knowledge Center, e.g. https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q016480_.htm
As a queue manager, if you are going to let clients connect to you, you need to be able to provide some configuration details (heartbeat intervals, maximum message sizes, user exits); these are configured on a SVRCONN channel.
As an application, if you want to connect to a queue manager via client bindings (usually to reach another machine), you need some information about the configuration to use, and this is configured on a CLNTCONN channel.
The application 'provides' a CLNTCONN channel, and once the connection is made, an equivalent SVRCONN channel is looked up, the configuration values are negotiated, and the connection is established.
An application can 'provide' a CLNTCONN channel in at least three common ways:
- As part of an MQSERVER environment variable
- Via a client channel table (MQCHLLIB/MQCHLTAB environment variables)
- During an MQCONNX call it can provide the channel details
More details here:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q027440_.htm
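For illustration, the programmatic route looks roughly like this in JMS terms, where the connection factory settings play the role of the CLNTCONN definition (host, port, queue manager, and channel names are hypothetical):

import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ClientConnSketch {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // client bindings rather than local bindings
        cf.setHostName("mqhost.example.com");            // where the queue manager's listener runs
        cf.setPort(1414);
        cf.setQueueManager("QM1");
        cf.setChannel("APP.SVRCONN");                    // must match a SVRCONN defined on the queue manager

        Connection conn = cf.createConnection();         // channel configuration is negotiated here
        conn.start();
        // ... use the connection ...
        conn.close();
    }
}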
I have a tool connecting to MQ to list queues and channels for admin use.
When I try to list all active channels it shows the error "Out of MQI Connections for Queue Manager "QM2", Increase MQI connection pool for Queue Manager "QM2"".
I don't see any "max channels reached" errors in the queue manager logs, and we have MaxChannels set to 6k.
Any suggestions?