How can I limit the number of HTTP connections in OpenLiberty?

I want to limit the number of HTTP connections for web applications on OpenLiberty. Which parameters should I change to do this? I'm looking at the docs — should I change maxConcurrentStreams in httpOptions or maxThreads in the executor?

There isn't an existing property that specifically limits HTTP connections, so I'd recommend using maxOpenConnections, which is a tcpOptions configuration[1]. That property restricts the number of open connections for a TCP endpoint; the default value is 128000.
maxConcurrentStreams applies only to HTTP/2: it limits the number of streams allowed per HTTP/2 (TCP) connection. maxThreads won't directly accomplish what you want.
[1] https://openliberty.io/docs/21.0.0.2/reference/config/httpEndpoint.html#tcpOptions
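For example, a server.xml fragment capping open connections on an endpoint might look like this (the endpoint id and the limit of 500 are illustrative, not values from the question):

```xml
<httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080">
    <!-- maxOpenConnections caps open TCP connections for this endpoint; default is 128000 -->
    <tcpOptions maxOpenConnections="500"/>
</httpEndpoint>
```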

Related

Connection pooling: interaction of MaxIdleConnsPerHost and IdleConnTimeout in http.Transport

I am trying to write a heavy-duty proxy to a set of web APIs in Golang. In order to prevent port exhaustion, I have decided not to use the DefaultClient and instead create a customized instance of http.Client. There are many interesting settings in http.Transport that I can play around with.
I have come across the MaxIdleConnsPerHost and IdleConnTimeout fields and I have this question.
If I increase the value of MaxIdleConnsPerHost there will be more idle connections, but are they reusable idle connections? In other words, to build a decent connection pool, should I increase MaxIdleConnsPerHost together with IdleConnTimeout accordingly, or does it work exactly the opposite way?
Yes, idle connections are reusable, as these are keep-alive connections. But to make Go honour the reusability of keep-alive connections, you need to ensure two things in your application:
Read until the response is complete (i.e. ioutil.ReadAll(resp.Body))
Call Body.Close()
Here's a link for more explanation.

per http request over per tcp connection

My question is simple: how many TCP connections to the server do a given number of HTTP requests make in Go?
I'm coding a tool that can send many HTTP requests, but I find that all these requests go over one or two TCP connections; it seems the Go http Transport has a connection pool.
If you are using the DefaultTransport for your HTTP requests then the TCP connections are reused.
DefaultTransport is the default implementation of Transport and is used by DefaultClient. It establishes network connections as needed and caches them for reuse by subsequent calls.
You can use MaxConnsPerHost to limit the total number of connections:
// MaxConnsPerHost optionally limits the total number of
// connections per host, including connections in the dialing,
// active, and idle states. On limit violation, dials will block.
//
// Zero means no limit.
//
// For HTTP/2, this currently only controls the number of new
// connections being created at a time, instead of the total
// number. In practice, hosts using HTTP/2 only have about one
// idle connection, though.
MaxConnsPerHost int
Edit
I highly suggest you read the docs, as Go has one of the best-documented standard libraries. Anyway, you can configure http.Transport to disable keep-alives to force one TCP connection per request.
// DisableKeepAlives, if true, disables HTTP keep-alives and
// will only use the connection to the server for a single
// HTTP request.
//
// This is unrelated to the similarly named TCP keep-alives.
DisableKeepAlives bool

Is there any chance of conflict if I make 100 simultaneous http request asynchronously to same destination from same source?

(1) If I make a hundred HTTP requests asynchronously from a client application to a single destination (i.e. the same IP/port), is there any chance of a conflict on the client side?
What I understand is that whenever an application makes an HTTP request, the OS assigns a random port as the source, and the server's response is sent only to that source port. As the requests are asynchronous and numerous, can there be cases where the OS assigns the same source port to another of these 100 requests, so that when the server actually responds to the first request, the second request also receives that response?
(2) Even if a conflict is improbable for 100 requests, is there an upper limit (ports are limited, and the number of simultaneous requests may be as large or larger)?
(3) And is the scenario the same for all applications (whether using a WinForms client or curl)?
A system has at most 65535 (2^16 - 1) TCP ports per IP address, shared between server and client ports.
Ans 1: The ports won't overlap/conflict when you make 100 or more simultaneous requests; the OS assigns each outgoing connection a distinct ephemeral source port, so a response cannot be delivered to the wrong request. But check on the server side whether it will accept that many requests from a particular system/network.
Ans 2: The upper limit is 65535 (in practice lower, since the ephemeral port range is a subset of that).
Ans 3: Yes, this limit applies to all ports used by applications running on the system.

can Execution groups be used to achieve higher priority

I have multiple clients(referring to them as channels) accessing a service on a WebSphere message broker.
The service is likely to be a SOAP-based web service (it can possibly be RESTful too).
Prioritizing requests for MQ/JMS can be handled by WMB using the header info (priority).
The SOAP and HTTP nodes do not seem to have an equivalent property. I'm wondering how we can achieve priority for requests from a specific client channel.
Can I use multiple execution groups (EGs) to give higher priority to a specific channel? In other words, I am thinking of using EGs to give a bigger pipe to a specific channel, which should translate to its requests being processed faster compared to the other channels.
Thanks
If you have IIB v9 you can use the "workload management" feature described here:
http://pic.dhe.ibm.com/infocenter/wmbhelp/v9r0m0/topic/com.ibm.etools.mft.doc/bj58250_.htm
https://www.youtube.com/watch?v=K11mKCHMRxo
The problem with this is that it only lets you cap different classes of messages at maximum rates; it won't, for example, let low-priority work run at full speed when there is no high-priority work.
So a better approach might be to create multiple EGs, using the maxThreads property on the EG-level HTTP connector and the number of additional instances configured on each flow to give relative priority to the different classes of traffic.

Listening to multiple sockets: select vs. multi-threading

A server needs to listen to incoming data from several sockets (10-20). After some initializations, those sockets are created and do not change (i.e. no new sockets accepted, and none of them is expected to close during the lifetime of the server).
One option is to select() on all sockets, then deal with incoming data per socket (i.e. route to proper handling function).
Another option is to open one thread per socket and let each thread recv() and handle the input.
(The first option has the benefit of allowing a timeout, but that is not an issue in this case, since all the sockets are quite active.)
Assuming a Windows server with enough memory that 20 MB (for the 20 threads) is a non-issue, is either of these options expected to be faster than the other?
There's not much in it for your app. Typically, thread-per-socket is easier than asynchronous approaches because the overall structure is simpler and it's easier to maintain per-connection state.
