WebSphere HTTP outbound socket configuration

We are running a performance test on a WebSphere 8.5.5.1 application server that calls many external SOAP services. Running netstat, we notice that the server is creating a maximum of only 50 outbound connections.
We are trying to increase this value but cannot find the correct property. We have increased the Default thread pool, but that doesn't seem to apply.
The WebContainer pool size is also set higher than 50, and we can see that pool grow. Is there some hidden pool that defaults to 50?

You'll need to configure the com.ibm.websphere.webservices.http.maxConnection property, per the HTTP transport custom properties documentation:
http://www-01.ibm.com/support/knowledgecenter/api/content/SSEQTP_8.5.5/com.ibm.websphere.base.iseries.doc/ae/rwbs_httptransportprop.html
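This transport property is typically set as a JVM custom property on the application server. A minimal sketch, assuming the default of 50 is your bottleneck and 500 is an appropriate ceiling for your test (size the value to your own load):

In the admin console, under the server's Process definition > Java Virtual Machine > Custom properties, add:

Name:  com.ibm.websphere.webservices.http.maxConnection
Value: 500

or pass it as a generic JVM argument:

-Dcom.ibm.websphere.webservices.http.maxConnection=500

After saving and restarting the server, rerun netstat to confirm the outbound connection count can exceed 50.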

Related

Add new server to HAProxy using Data Plane API with rate limiting in Golang

I am adding a new backend server to HAProxy through my Golang code. I can see there is a parameter called max connections when adding a new server, which can be used to limit the number of connections. There is also a parameter called maxqueue, which queues connections once the max connection limit is reached. But I can't find the option to specify the queue timeout, and I could not find in the documentation what the default queue timeout is.
Furthermore, how can I add rate limiting based on the number of requests (sliding window) when adding a new server to the backend?
I can see there is an option to specify a stick table, however I could not find an example of its implementation.
I am referring to the documentation below.
https://www.haproxy.com/documentation/dataplaneapi/community/#post-/services/haproxy/configuration/servers
A server has no "queue timeout" of its own. You can set the queue timeout via the backend configuration:
https://www.haproxy.com/documentation/dataplaneapi/community/#post-/services/haproxy/configuration/backends
https://www.haproxy.com/documentation/dataplaneapi/community/#put-/services/haproxy/configuration/backends/-name-
The defaults can be retrieved via the defaults call.
https://www.haproxy.com/documentation/dataplaneapi/community/#get-/services/haproxy/configuration/defaults
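For orientation, the backend-level setting corresponds to the classic haproxy.cfg directive timeout queue; a request that waits in a full server's queue longer than this limit is rejected. A minimal sketch (the backend name my_backend and the values are hypothetical):

backend my_backend
    timeout queue 5s
    server srv1 10.0.0.1:8080 maxconn 50 maxqueue 100

When driving this through the Data Plane API instead, set the equivalent queue timeout field on the backend object via the backends endpoints linked above; verify the exact field name against that spec.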

How to limit number of connections to IBM MQ

I have a Spring Boot based messaging app sending/receiving JMS messages to/from an IBM MQ queue manager.
Basically, it uses MQConnectionFactory to set up the connection to IBM MQ and a JmsPoolConnectionFactory from messaginghub:pooledjms to enable JMS connection pooling, which was removed from MQConnectionFactory in IBM MQ 7.x.
The app uses two different approaches to work with JMS. A "correct" one runs a JMSListener to receive messages and then sends a response to each message using JmsTemplate.send(). And there is a second "troubling" approach, where the app sends requests using JmsTemplate.send() and waits for the response using JmsTemplate.readByCorrelId() until it is received or times out.
I say troubling because this makes JMS sessions last longer if the response is delayed, and it could easily exhaust the IBM MQ connection limit. Unfortunately, I cannot rewrite the app to the first approach at the moment to resolve the issue.
Now I want to restrict the number of connections in the pool. Of course, the delayed requests will fail, but the IBM MQ connection limit is more important at the moment, so this is acceptable. The problem is that even if I disable the JmsPoolConnectionFactory, it seems that MQConnectionFactory still opens multiple connections to the queue manager.
While profiling the app I see multiple threads RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12433875[...] created by JMSCCMasterThreadPool, and corresponding connections to the queue manager in MQ Explorer. I wonder why there are so many of them given that connection pooling was removed from MQConnectionFactory? I would expect it to open and reuse a single connection, but that is not what I see in my test.
Disabling the "troubling" JmsTemplate.readByCorrelId() and leaving only the "correct" way in the app removes these multiple connections (and the waiting threads, of course).
Replacing JmsPoolConnectionFactory with SingleConnectionFactory has no effect on the issue.
Is there any way to limit those connections? Is it possible to control the max threads in the JMSCCMasterThreadPool as a workaround?
Because it affects other applications, your MQ admins probably want you not to exhaust the queue manager's overall connection limit (the MaxChannels and MaxActiveChannels parameters in qm.ini). They can help you by defining an MQ channel used exclusively by your application. With that, they can limit the number of connections from your application via the MAXINST / MAXINSTC channel parameters. You will get an exception when this number is exhausted, which is appropriate as you say, and other applications won't be affected anymore.
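For illustration, a sketch of what the admin side could look like (the channel name APP1.SVRCONN is hypothetical, and the numbers are placeholders to agree with your MQ admins):

# qm.ini -- overall queue manager limits
CHANNELS:
    MaxChannels=200
    MaxActiveChannels=200

# MQSC -- dedicated channel, capped at 20 instances in total,
# at most 10 from any single client address
DEFINE CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) REPLACE
ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) MAXINST(20) MAXINSTC(10)

Your app then connects only through APP1.SVRCONN, so hitting MAXINST raises the exception on your side instead of starving other applications.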

How to limit the number of HTTP connections for a REST web service

We want to limit the number of connections for our REST web service.
We are using Spring Boot with Jetty as the server.
We have configured the settings below:
#rate limit connections
server.jetty.acceptors=1
server.jetty.selectors=1
#connection time out in milliseconds
server.connection-timeout=-1
Now, as you can see, there is no idle timeout applicable for connections,
which means a connection, once open, will remain active until it is explicitly closed.
So with these settings, my understanding is that if I open more than 1 connection, I should not get any response beyond the first, because the connection limit is only 1.
But this does not seem to be working; a response is sent for every request.
I am sending requests from 3 different clients. I have verified the IP addresses and ports; they are all different for the 3 clients, yet all 3 remain active once their connections are established.
Any experts to guide on the same?
Setting the acceptors and selectors to 1 will not limit the maximum number of connections: acceptors only control how many threads accept new sockets, and selectors how many threads poll them for I/O.
I suggest you look at using either the Jetty QoSFilter (which limits concurrent requests) or the Jetty ConnectionLimit module (which caps accepted connections).
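As a sketch of the second option, assuming Spring Boot 2.x with the embedded Jetty 9.4 starter, a ConnectionLimit can be attached to the server through a factory customizer (the limit of 50 is an arbitrary example):

import org.eclipse.jetty.server.ConnectionLimit;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JettyConnectionLimitConfig {

    // Caps concurrent TCP connections at 50; once the cap is reached,
    // Jetty pauses accepting new connections rather than rejecting requests.
    @Bean
    public WebServerFactoryCustomizer<JettyServletWebServerFactory> connectionLimit() {
        return factory -> factory.addServerCustomizers(
                server -> server.addBean(new ConnectionLimit(50, server)));
    }
}

With idle timeout disabled as in the question, a 51st client would then simply wait in the OS accept backlog until an existing connection closes.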

How do I increase the max pool size for JMS queue with Glassfish Server

I am dropping messages on an MDB in a loop and I can see in my logs that I keep running out of available connections.
Caused by: com.sun.messaging.jms.JMSException: MQRA:CFA:allocation failure:createConnection:Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
at com.sun.messaging.jms.ra.ConnectionFactoryAdapter._allocateConnection(ConnectionFactoryAdapter.java:209)
at com.sun.messaging.jms.ra.ConnectionFactoryAdapter.createConnection(ConnectionFactoryAdapter.java:162)
at com.sun.messaging.jms.ra.ConnectionFactoryAdapter.createConnection(ConnectionFactoryAdapter.java:144)
After dropping each message on the queue I am closing all the connections, but I still don't see how I am running out of available connections.
I am thinking of increasing the pool size instead, but haven't been able to find that setting.
Can anyone guide me on how to change that setting for the GlassFish server?
For an MDB you can set this via the MaxPoolSize of the activation spec. This property is the "Maximum size of server session pool internally created by the resource adapter for achieving concurrent message delivery. This should be equal to the maximum pool size of MDB objects".
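For example, via annotations on the MDB itself (the bean and queue names here are hypothetical, and the same property can instead be set in glassfish-ejb-jar.xml):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/MyQueue"),
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    // Raise the resource adapter's server session pool above the default
    // so more messages can be delivered concurrently.
    @ActivationConfigProperty(propertyName = "MaxPoolSize",
                              propertyValue = "64")
})
public class OrderMessageBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // process the message
    }
}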

Restricting EMS client connections

Hi, our EMS server is used by other clients for putting messages, but sometimes they don't close their connections, and the number of connections is reaching the server's maximum limit. Is there any way we can restrict the number of connections for a client, based either on the EMS username provided to the client or on the host name from which the client is connecting? Is there any configuration we can do for client-specific connection restriction?
No, there is no such provision in the EMS server or client libraries for restricting the number of consumer/producer clients based on their user names or other properties. You can have a look at the JAAS and JACI provisions supported by EMS, which can be used to write your own custom Java authentication modules that run in a JVM inside the EMS server. You can find more information about JAAS and JACI on Oracle's documentation site.
Have you looked into the server_timeout_client_connection setting?
From the doc:
server_timeout_client_connection = limit
In a server-to-client connection, if the server does not receive a heartbeat for a period exceeding this limit (in seconds), it closes the connection. We recommend setting this value to approximately 3 times the heartbeat interval, as specified in client_heartbeat_server. Zero is a special value, which disables heartbeat detection in the server (although clients still send heartbeats).
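A sketch of the corresponding tibemsd.conf entries, assuming a 10-second client heartbeat (tune both values together):

# tibemsd.conf
# clients send a heartbeat to the server every 10 seconds
client_heartbeat_server = 10
# server drops a client it hasn't heard from in 30 seconds (~3x heartbeat)
server_timeout_client_connection = 30

This reclaims connections from clients that died without closing them; it does not cap connections per user, which, as noted above, EMS does not support.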
