Recently I was debugging a problem that a client reported while stress-testing a Spring application running in Tomcat. They ran the tests, but most of the requests immediately resulted in an exception, producing a 500 response code.
At some point the client was no longer able to open new connections because the ephemeral port range was exhausted - all ports were in the TIME_WAIT state. The reason appears to be that when an exception occurs, the connection is closed by either Spring or Tomcat.
I created a simple Spring Boot 2.7.6 application with a single controller exposing two endpoints: /5xx, which throws ResponseStatusException(HttpStatus.INTERNAL_SERVER_ERROR), and /4xx, which throws ResponseStatusException(HttpStatus.BAD_REQUEST).
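For reference, a minimal sketch of the controller (the class and method names here are illustrative, not the actual code):

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

@RestController
public class ErrorDemoController {

    // Mapped by Spring to a 500 response
    @GetMapping("/5xx")
    public String serverError() {
        throw new ResponseStatusException(HttpStatus.INTERNAL_SERVER_ERROR);
    }

    // Mapped by Spring to a 400 response
    @GetMapping("/4xx")
    public String clientError() {
        throw new ResponseStatusException(HttpStatus.BAD_REQUEST);
    }
}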
After requesting these endpoints, the response contains a Connection: close header, and looking at the local ports with ss shows that the number of ports in TIME_WAIT increases.
Furthermore, requesting an endpoint that doesn't exist does not close the connection and simply returns 404.
When I switched from Tomcat to Undertow, the connection was not closed for any of the endpoints.
When I switched to Spring Boot 3 with Tomcat, the connection is likewise not closed for any of the three endpoints.
So, my questions are:
What is the correct behavior? I'm thinking that the connection shouldn't be closed.
Does anyone know whether this is a Spring or Tomcat feature, and is there a configuration option for it?
EDIT #1:
OS details:
Ubuntu 22.04.1 LTS
Linux 5.15.0-56-generic #62-Ubuntu SMP Tue Nov 22 19:54:14 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
net.ipv4.ip_local_port_range = 1024 60999
Tomcat:
Embedded Tomcat v9.0.69 with the default configuration provided by Spring Boot
The situation improves by increasing maxConnections to 30000 (the default is 10000)
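For completeness, one way to raise maxConnections on the embedded connector is a customizer along these lines (a sketch; setting server.tomcat.max-connections in application.properties should have the same effect):

import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatMaxConnectionsConfig {

    // Raises the connection limit of the embedded Tomcat connector (default is 10000)
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> maxConnectionsCustomizer() {
        return factory -> factory.addConnectorCustomizers(
                connector -> connector.setProperty("maxConnections", "30000"));
    }
}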
Related
The issue
I've stumbled upon the following issue:
Error message: org.springframework.web.reactive.function.client.WebClientRequestException: Connection prematurely closed BEFORE response; nested exception is reactor.netty.http.client.PrematureCloseException: Connection prematurely closed BEFORE response
General info about the issue
It's a Spring Boot app (2.4.5) running on Reactive (WebFlux) stack.
The app also uses the Playtika OSS reactive Feign client (starter 3.0.3) for synchronous REST API communication.
Underlying web client is Netty.
There are no special Feign or WebClient configs in the app.
All the other microservice parties are running on embedded Tomcat with the default Spring Boot autoconfiguration.
All apps are running in Kubernetes cluster.
The error is observed in the logs from time to time (not every day).
Guesses
After some investigation, my best guess would be that some long-lived connections are being dropped from the pool under certain conditions, which causes the error log.
This thought is based on Instana, which connects the error log to a span that spans across a lot of subcalls.
Also, no data loss or other inconsistencies have been noticed so far xD
Questions
Does Feign have a connection pool by default?
How can I tell whether the connections being closed are live or idle connections from the pool?
How can the connection pool be configured, or disabled, to avoid long-lived connections? (A sketch of the pool configuration follows this list.)
Is it possible that Kubernetes can somehow close these connections?
What else can close connections?
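Regarding the third question, here is a minimal sketch of what tightening the reactor-netty pool might look like, assuming a plain WebClient built on HttpClient and ConnectionProvider rather than the Feign wrapper (the pool name, sizes and timeouts are illustrative, not values from the app):

import java.time.Duration;

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class WebClientPoolExample {

    public static WebClient buildWebClient() {
        // Custom pool: evict idle connections early and cap the total lifetime of each connection
        ConnectionProvider provider = ConnectionProvider.builder("custom-pool") // illustrative pool name
                .maxConnections(50)
                .maxIdleTime(Duration.ofSeconds(20))           // drop connections idle longer than this
                .maxLifeTime(Duration.ofSeconds(60))           // drop connections older than this
                .pendingAcquireTimeout(Duration.ofSeconds(10)) // fail fast when the pool is exhausted
                .build();

        HttpClient httpClient = HttpClient.create(provider);

        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}

Whether the Feign starter picks up such a provider depends on its own autoconfiguration, so treat this only as an illustration of the reactor-netty side.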
Is there a timeout setting for requests stored in the Apache Tomcat request queue? If yes, what is the default value for the embedded Tomcat in Spring Boot?
When configuring a web server, it can also be useful to set the server connection timeout. This represents the maximum amount of time the server will wait for the client to make its request after connecting, before the connection is closed.
You may specify this property in your application.properties as follows.
server.connection-timeout=5s
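If you prefer to configure this in code, or your Spring Boot version no longer accepts server.connection-timeout (newer versions moved to server-specific properties such as server.tomcat.connection-timeout), a sketch of setting the same timeout on the embedded Tomcat connector could look like this:

import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConnectionTimeoutConfig {

    // How long Tomcat waits for the client to send its request after the connection is accepted
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> connectionTimeoutCustomizer() {
        return factory -> factory.addConnectorCustomizers(
                connector -> connector.setProperty("connectionTimeout", "5000")); // milliseconds
    }
}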
Since we switched to HTTP/2, we have been seeing a lot of ClientAbortExceptions: either Broken pipe or Connection reset by peer.
The setup is Client <-> HAProxy <-> Spring Boot 2 application with embedded Tomcat 8
Is this normal behavior? If not, what would be the best way to find out why the connection is broken?
We have a MongoDB connection behavior that we have been unable to trace: after some application idle time, every request made by the application starts failing with a MongoDB connection error. Only after a restart of the app server (Tomcat, on which the application is deployed) are the MongoDB connections reacquired, and everything works well again.
I'd like to know if anyone has come across such an issue and whether there is a likely configuration solution. Another approach I can think of is to handle the exception and retry acquiring the connection.
NOTE: Java 1.7 and Spring 3.1.x are used for the server code. MongoDB version: 2.6.9. Mongo driver: mongo-2.10.1.
Use these options for performance and to prevent this problem (a Java sketch applying them follows the list).
autoConnectRetry = true
connectTimeout = 3000
connectionsPerHost = 40
socketTimeout = 120000
threadsAllowedToBlockForConnectionMultiplier = 5
maxAutoConnectRetryTime = 5
maxWaitTime = 120000
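A sketch of applying these values with the mongo-2.10.x driver mentioned in the question could look like this (the host and port are placeholders):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class MongoClientFactoryExample {

    public static MongoClient buildClient() throws Exception {
        // Mirrors the option values listed above (2.x driver builder API;
        // autoConnectRetry/maxAutoConnectRetryTime were removed in later driver versions)
        MongoClientOptions options = MongoClientOptions.builder()
                .autoConnectRetry(true)
                .connectTimeout(3000)
                .connectionsPerHost(40)
                .socketTimeout(120000)
                .threadsAllowedToBlockForConnectionMultiplier(5)
                .maxAutoConnectRetryTime(5)
                .maxWaitTime(120000)
                .build();

        // "localhost:27017" is a placeholder for the real MongoDB address
        return new MongoClient(new ServerAddress("localhost", 27017), options);
    }
}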
I need to set up an HTTP connection pool in a Spring app on a Tomcat server.
We are debating whether to define the pool at the application or at the server level (applicationContext.xml vs server.xml).
My problem is: I've looked and I've looked, but I just can't find any info on doing either.
For now, I'm working with org.apache.http.impl.conn.PoolingClientConnectionManager inside my class, and it's working ok.
How would I be able to define a pool outside my Java code and work with it from there?
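For context, here is roughly what I mean, as a sketch (names and limits are illustrative, not my actual code) of wiring the pool as Spring beans instead of building it inside the class:

import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HttpPoolConfig {

    // The pooling connection manager is defined once and shared application-wide
    @Bean
    public PoolingClientConnectionManager connectionManager() {
        PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
        cm.setMaxTotal(200);           // illustrative limits
        cm.setDefaultMaxPerRoute(20);
        return cm;
    }

    // Classes can then inject HttpClient instead of constructing their own pool
    @Bean
    public HttpClient httpClient(PoolingClientConnectionManager connectionManager) {
        return new DefaultHttpClient(connectionManager);
    }
}

The equivalent bean definitions could also live in applicationContext.xml.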
Here is the Tomcat 7 configuration reference you are looking for:
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation
Here is also another SO post on the same subject: How to increase number of threads in tomcat thread pool?