How many concurrent connections can GrizzlyHttpServer handle?

I'm looking after a Spring REST application that uses GrizzlyHttpServer as its HTTP server. What is the default maximum number of concurrent connections that this server can handle?
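For context: as far as I can tell, Grizzly does not expose a single "maximum connections" setting; concurrency is bounded mainly by the selector runner count, the worker thread pool, and OS file-descriptor limits. Here is a minimal sketch of where those knobs live, assuming the standalone org.glassfish.grizzly.http.server.HttpServer API (the port and pool sizes are illustrative, not defaults):

```java
import java.io.IOException;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;

public class GrizzlyTuning {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.createSimpleServer(null, 8080);
        NetworkListener listener = server.getListeners().iterator().next();
        TCPNIOTransport transport = listener.getTransport();

        // Selector runners multiplex the open connections; worker
        // threads bound how many requests are processed at once.
        transport.setSelectorRunnersCount(2);
        transport.setWorkerThreadPoolConfig(ThreadPoolConfig.defaultConfig()
                .setCorePoolSize(16)
                .setMaxPoolSize(64));

        server.start();
    }
}
```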

Related

How to limit number of HTTP Connections for a rest web service

We want to limit the number of connections for our rest web service.
We are using spring boot with jetty as server.
We have configured below settings :
#rate limit connections
server.jetty.acceptors=1
server.jetty.selectors=1
#connection time out in milliseconds
server.connection-timeout=-1
Now, as you can see, there is no idle timeout applicable to connections.
This means a connection, once opened, will remain active until it is explicitly closed.
So, with these settings, my understanding is that if I open more than one connection, I should not get any response on the extra connections, because the connection limit is only 1.
But this does not seem to be working. A response is sent to each request.
I am sending requests from 3 different clients. I have verified the IP addresses and ports; they are all different for the 3 clients. Yet all 3 remain active once the connections are established.
Any experts to guide on the same?
Setting the acceptors and selectors to 1 will not limit the maximum number of connections.
I suggest you look at using either the Jetty QoS filter or the Jetty ConnectionLimit module.
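For illustration, here is a minimal sketch of the ConnectionLimit approach in a Spring Boot application, assuming Jetty 9.4+ (which ships org.eclipse.jetty.server.ConnectionLimit) and Spring Boot's JettyServletWebServerFactory; the cap of 50 is arbitrary:

```java
import org.eclipse.jetty.server.ConnectionLimit;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JettyConnectionLimitConfig {

    @Bean
    public JettyServletWebServerFactory jettyFactory() {
        JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
        // Cap the server at 50 concurrent TCP connections; once the
        // limit is reached, Jetty pauses accepting new connections
        // until existing ones close.
        factory.addServerCustomizers(server ->
                server.addBean(new ConnectionLimit(50, server)));
        return factory;
    }
}
```

Unlike the acceptors/selectors settings, this limits the actual open connections rather than the threads that service them.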

Spring Webflux Webclient set Connection keepAlive time

Just starting to use Spring WebFlux WebClient. I wanted to know: what is the default keep-alive time for the HTTP connection, and is there a way to increase it? In our REST service we get a request roughly every five minutes, and that request takes a long time to process, anywhere from 500 ms to 10 seconds. However, in a load test, if I send frequent requests the processing time is less than 250 ms.
Spring WebFlux WebClient is an HTTP client API that wraps actual HTTP libraries, so settings like connection management and timeouts are configured at the library level directly, and behavior may change depending on the chosen library.
The default library with WebClient is Reactor Netty.
Many HTTP clients (and this is the case with Reactor Netty) maintain HTTP connections in a connection pool in order to reuse them. Clients usually acquire a new connection to a remote host, use it to send/receive information, and then put it back in the connection pool. This is very useful since acquiring a new connection can be costly, and it seems to be really costly in your case.
HTTP clients leave those unused connections in the pool, but what about keepAlive time?
Most clients leave those connections in the pool as long as possible and test them before acquiring them to see if they're still valid or listen to server events asynchronously to remove them from the pool (I believe Reactor Netty does that). So ultimately, the server is in control and decides when to close connections if they're inactive.
Now your problem description might suggest that connecting to that remote host is very costly, but it could be also the remote host taking a long time to respond to your requests (for example, it might be operating on an empty cache and needs to calculate a lot of things).
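As an illustration of tuning the pool at the library level, here is a sketch assuming a recent Reactor Netty (0.9+), where ConnectionProvider.builder exposes these settings; the pool name, size, and idle time are illustrative:

```java
import java.time.Duration;

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class WebClientPoolConfig {

    public static WebClient build() {
        // A pool that holds at most 100 connections and evicts any
        // connection that has sat idle for more than 5 minutes,
        // instead of waiting for the server to close it.
        ConnectionProvider provider = ConnectionProvider.builder("custom-pool")
                .maxConnections(100)
                .maxIdleTime(Duration.ofMinutes(5))
                .build();

        HttpClient httpClient = HttpClient.create(provider);
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}
```

With maxIdleTime set, the client retires stale connections proactively rather than relying solely on the server's idle-connection policy.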

How many SSE connections can a web server maintain?

I'm experimenting with server-sent events (SSE) as an alternative to websockets for real-time data pushing (data in my application is primarily one-directional).
How scalable would this be? I know that each SSE connection uses an HTTP request -- does this mean that a web server can handle as many SSE connections as HTTP requests (something like this answer)? I feel as though this might be the case, but I'm not sure how a SSE connection works and if it is substantially more complex/resource-hungry than a simple HTTP request.
I'm mostly wondering how this compares to the number of concurrent websockets a browser can keep open. This answer suggests that only ~1400-1800 sockets can be handled by a server at the same time.
Can someone provide some insight on this?
(To clarify, I am not asking about how many SSE connections can be kept open from the client; I am asking about how many can be reasonably kept open by a web server.)
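For a concrete sense of what one SSE connection looks like server-side, here is a minimal Spring MVC sketch: each subscriber is a single, ordinary HTTP request whose response the server simply never completes. The /events path is hypothetical:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class EventsController {

    @GetMapping("/events")
    public SseEmitter stream() {
        // One SSE subscriber == one long-lived HTTP request. The
        // emitter keeps the response open; events are written to it
        // over time by whatever component you hand it to.
        return new SseEmitter(Long.MAX_VALUE); // effectively no server-side timeout
    }
}
```

So the per-connection cost is essentially that of a parked HTTP request: a socket, some buffers, and whatever state your application attaches to the emitter.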
Take Tomcat 8 and above as an example of a web server: it uses the NIO connector for handling incoming requests, and that connector can service at most 10,000 concurrent connections by default (the maxConnections attribute in the docs). The docs do not state a hard maximum per se. There is also another parameter called acceptCount, the OS accept-queue length that acts as a fallback once connections exceed 10,000.
Socket connections are treated as files. Every incoming connection to Tomcat opens a socket, and on Linux, for example, it is subject to the file-descriptor limits. When too many connections are open or the maximum has been reached, you will commonly see the following error:
java.net.SocketException: Too many open files
You can change the number of open files by editing
/etc/security/limits.conf
It is not clear what the maximum allowed limit is. Some say the default for Tomcat is 1096, while the default on Linux is 30,000, which can be changed.
In the article I shared, the LinkedIn team was able to reach 250K connections on one host.
So that should give you a pretty good idea of the maximum number of SSE connections possible: it depends on your web server's max-connection configuration, OS capacity, and so on.
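To make those Tomcat knobs concrete, here is a minimal sketch assuming Spring Boot's embedded Tomcat; the values are illustrative, not recommendations:

```java
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatConnectionConfig {

    @Bean
    public TomcatServletWebServerFactory tomcatFactory() {
        TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
        factory.addConnectorCustomizers(connector -> {
            // Raise the NIO connector's concurrent-connection cap
            // (default 10,000)...
            connector.setProperty("maxConnections", "20000");
            // ...and the accept-queue length used once the cap is hit.
            connector.setProperty("acceptCount", "200");
        });
        return factory;
    }
}
```

Keep in mind that raising maxConnections achieves nothing if the process's file-descriptor limit is lower; the two must be tuned together.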

Websockets and scalability

I am a beginner with websockets.
I have a need in my application where the server needs to notify clients when something changes, and I am planning to use websockets.
Single server instance and single client ==> How many websockets will be created and how many connections to websockets?
Single server instance and 10 clients ==> How many websockets will be created and how many connections to websockets?
Single server instance and 1000 clients ==> How many websockets will be created and how many connections to websockets?
How do you scale with websockets when your application has a user base in the 1000's?
Thanks much for your feedback.
1) Single server instance and single client ==> How many websockets will be created and how many connections to websockets?
If your client creates one webSocket connection, then that's what there will be: one webSocket connection on the client and one on the server. It's the client that creates webSocket connections to the server, so it is the client that determines how many there will be. If it creates 3, then there will be 3. If it creates 1, then there will be 1. Usually, the client would just create 1.
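As a minimal illustration of "the client determines how many", here is a Java client that opens exactly one webSocket connection, assuming the jakarta.websocket client API (javax.websocket in older stacks); the ws://localhost:8080/updates URL is hypothetical:

```java
import java.net.URI;

import jakarta.websocket.ClientEndpoint;
import jakarta.websocket.ContainerProvider;
import jakarta.websocket.OnMessage;
import jakarta.websocket.Session;
import jakarta.websocket.WebSocketContainer;

@ClientEndpoint
public class SingleConnectionClient {

    @OnMessage
    public void onMessage(String message) {
        System.out.println("Server pushed: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // One connectToServer call == one webSocket connection,
        // counted once on the client and once on the server.
        Session session = container.connectToServer(
                SingleConnectionClient.class,
                URI.create("ws://localhost:8080/updates"));
        System.out.println("Connection open: " + session.isOpen());
        // A real client would keep the process alive and reuse this
        // single session for all server pushes.
    }
}
```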
2) Single server instance and 10 clients ==> How many websockets will be created and how many connections to websockets?
As described above, it depends upon what the client does. If each client creates 1 webSocket connection and there are 10 clients connected to the server, then the server will see a total of 10 webSocket connections.
3) Single server instance and 1000 clients ==> How many websockets will be created and how many connections to websockets?
Same as point #2.
How do you scale with websockets when your application has a user base in the 1000's?
A single server, configured appropriately, can handle hundreds of thousands of simultaneous webSocket connections that are mostly idle, since an idle webSocket uses pretty much no server CPU. For even larger-scale deployments, one can cluster the server (run multiple server processes) and use sticky load balancing to spread the load.
There are many other articles like the following on Google worth reading if you're pursuing large-scale webSocket or socket.io deployments:
The Road to 2 Million Websocket Connections in Phoenix
600k concurrent websocket connections on AWS using Node.js
10 million concurrent webSockets
Ultimately, the achievable scale per a properly configured server will likely have more to do with how much activity there is per connection and how much computation is needed to deliver that.

Maximum concurrent requests that a Self Hosted SignalR server can handle

I have been doing some load testing on a SignalR server. According to my test case, a self-hosted SignalR server can handle only 20,000 concurrent requests at a time.
When SignalR has 20,000 open connections, the process consumes about 1.5 GB of RAM (I think that is too much). And when the connections exceed 22,000, new clients get a connection timeout error. The server never runs out of memory; it just stops responding to new requests.
I'm aware of server farming, and that I can use it in SignalR via a backplane, but I'm concerned with vertical scaling here. I have achieved 25,000 connections using long polling (async ASP.NET handlers). I would expect SignalR to achieve more concurrent requests since it uses WebSockets.
Is there something I can do to get about 50,000 concurrent connections per SignalR node? This performance tuning is of no help because I'm using OWIN self-hosting. What can I do so that my server application takes less memory per connection?
