Playtika's OSS Feign Client: org.springframework.web.reactive.function.client.WebClientRequestException: Connection prematurely closed BEFORE response

The issue
I've stumbled upon the following issue:
Error message: org.springframework.web.reactive.function.client.WebClientRequestException: Connection prematurely closed BEFORE response; nested exception is reactor.netty.http.client.PrematureCloseException: Connection prematurely closed BEFORE response
General info about the issue
It's a Spring Boot app (2.4.5) running on the Reactive (WebFlux) stack.
The app also uses the Playtika OSS reactive Feign client (starter 3.0.3) for synchronous REST API communication.
The underlying web client is Netty.
There are no special Feign or WebClient configs in the app.
All the other microservices involved are running on embedded Tomcat with the default Spring Boot autoconfiguration.
All apps are running in a Kubernetes cluster.
The error is logged from time to time (not every day).
Guesses
After some investigation, my best guess is that some long-lived connections are being dropped from the pool under certain conditions, and that this is what produces the error log.
This thought is based on Instana, which links the error log to a span that spans across a lot of subcalls.
Also, no data loss or other inconsistencies have been noticed so far xD
Questions
Does Feign have a connection pool by default?
How can I tell whether the connections being closed are live or idle connections from the pool?
How can the connection pool be configured, or disabled, to avoid long-running connections? (One possible configuration is sketched right after this list.)
Is it possible that Kubernetes can somehow close these connections?
What else can close connections?
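For reference, a common mitigation for PrematureCloseException is to make the client retire pooled connections before anything downstream (the server's keep-alive timeout, or a Kubernetes load balancer's idle timeout) closes them first. Below is a minimal sketch using reactor-netty's ConnectionProvider with a plain WebClient; the pool name and all timeout values are assumptions for illustration, and wiring this into Playtika's reactive Feign starter would additionally depend on its own configuration hooks:

import java.time.Duration;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class ShortLivedPoolExample {

    public static WebClient buildWebClient() {
        // Retire connections before any server/proxy keep-alive timeout can
        // close them underneath us (all values here are illustrative).
        ConnectionProvider provider = ConnectionProvider.builder("short-lived-pool")
                .maxConnections(50)                   // cap on pooled connections
                .maxIdleTime(Duration.ofSeconds(20))  // drop connections idle this long
                .maxLifeTime(Duration.ofSeconds(60))  // hard cap on connection lifetime
                .build();

        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(HttpClient.create(provider)))
                .build();
    }
}

The idea is simply that the client's idle/lifetime limits should be shorter than every keep-alive timeout downstream, so the pool never hands out a connection the other side has already decided to close.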

Related

Spring Boot micro-service not connecting to Rabbit MQ server after server is online again

We are facing a problem: sometimes our RabbitMQ server crashes for various reasons, and to reconnect the microservices to RabbitMQ we currently have to restart the Spring Boot services. Is there a way to skip the restarts, so that whenever RabbitMQ comes back up, the services' connections to RabbitMQ are recreated automatically and everything starts working as expected?
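For what it's worth, Spring AMQP's listener containers already retry the broker connection on their own, so a restart should not normally be needed. A minimal sketch tuning that recovery on a manually defined container; the queue name and retry interval are assumptions for illustration:

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitRecoveryConfig {

    @Bean
    public SimpleMessageListenerContainer ordersContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("orders"); // hypothetical queue name
        container.setMessageListener(message -> System.out.println(new String(message.getBody())));
        // After a broker outage, keep retrying the connection every 5s until
        // RabbitMQ is back; no application restart required.
        container.setRecoveryInterval(5_000);
        return container;
    }
}

If the services still need a restart after an outage, it is worth checking whether the failure is actually in application code layered on top of the connection rather than in the connection itself.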

Spring Boot Integration Testing: connection pool leaking

I have a Spring Boot application (1.5) that uses @Repository beans, @PersistenceContext, and connection pooling (C3P0 with mssql-jdbc 6.1.0.jre8) to connect to an Azure SQL Database. However, we are hitting connection errors when running our test suite. When doing netstat while running the integration tests, I'm seeing the ESTABLISHED connections grow without bound. The number of connections hits ~250 and then I start seeing connection pool exceptions, and everything eventually dies.
My question is: what's the proper way to handle this situation? Is there a way to turn off connection pooling when doing integration testing, or do I need to manually deactivate connection pooling at the end of a test?
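A frequent cause of this pattern (ESTABLISHED connections growing without bound across a test suite) is the Spring test framework caching several application contexts, each holding its own live pool until the JVM exits. A minimal sketch of one mitigation under that assumption: close the context, and with it the pool, after each test class:

import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.annotation.DirtiesContext;

// Closing the cached context after the class also closes its DataSource,
// so pools no longer accumulate across the suite (at the cost of slower
// tests, since contexts are rebuilt instead of reused).
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
class RepositoryIntegrationTest {
    // ... @Test methods using the repositories ...
}

Alternatively, shrinking the pool's maximum size via test-only properties keeps the context cache but bounds the damage.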

Spring boot/Amazon PostgreSQL RDS connection pool issue

I am troubleshooting an issue with a Spring Boot app connecting to a PostgreSQL database. The app runs normally, but under fairly moderate load it will begin to log errors like this:
java.sql.SQLException: Timeout after 30000ms of waiting for a connection.
This is running on an Amazon EC2 instance connecting to a PostgreSQL RDS. The app is configured like the following:
spring.datasource.url=jdbc:postgresql://[rds_path]:5432/[db name]
spring.datasource.username=[username]
spring.datasource.password=[password]
spring.datasource.max-active=100
In the AWS console, I see 60 active connections to the database, but that is across several Spring Boot apps (not all from this app). When I query the database for current activity using pg_stat_activity, I see all but one or two connections in an idle state. It would seem the Spring Boot app is not using all available connections, or is somehow leaking them? I'm trying to understand how pg_stat_activity can show so many idle connections while the app still gets connection pool timeouts.
Figured it out. Spring is using Hikari for database connection pooling (I didn't realize that until more closely inspecting the stack trace). Hikari's configuration parameters have different names; to set the pool size you use maximum-pool-size. Updated that and problem solved.
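For later readers: with Spring Boot's Hikari support the pool size key is spring.datasource.hikari.maximum-pool-size, whereas spring.datasource.max-active is a Tomcat JDBC pool property that Hikari ignores. A minimal programmatic equivalent; URL, credentials, and size are illustrative:

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariPoolExample {

    public static DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // hypothetical URL
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(100); // Hikari's name for the old max-active
        return new HikariDataSource(config);
    }
}

Also note that pooled connections showing as idle in pg_stat_activity is expected: the pool keeps them open between checkouts, so idle rows alone do not indicate a leak.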

A webapp that uses Spring AMQP: is that considered to be 1 client?

Hi there, I am wondering: if I create a webapp that uses Spring AMQP, is that single webapp 1 AMQP client? Or is every user request that results in an AMQP call a client, so potentially x number of clients?
I don't know AMQP well, but I suspect it uses the same terminology as JMS. In that sense your application is probably pooling connections to the AMQP broker for better performance. Each connection in the pool is treated as a separate client (competing consumer).
Thus each request does not really create a new connection (client), but your application isn't a single client either. In fact, when your application needs to access the AMQP broker, it picks any connection from the pool and puts it back once it's done. Another request can reuse the same connection (client) or use a different, idle one.
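To make this concrete for Spring AMQP (a sketch; the host and cache sizes are illustrative): the default CachingConnectionFactory opens a single shared connection and caches channels on it, so the broker sees the whole webapp as one client; switching the cache mode pools whole connections instead, each of which appears as a separate client:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;

public class AmqpClientCountExample {

    public static CachingConnectionFactory connectionPoolingFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory("rabbit-host"); // hypothetical host
        // Default is CacheMode.CHANNEL: one connection, many cached channels.
        factory.setCacheMode(CachingConnectionFactory.CacheMode.CONNECTION);
        factory.setConnectionCacheSize(5); // up to five broker connections (clients)
        return factory;
    }
}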

Spring JMS: Creating multiple connection to a queue

To process a large number of messages coming to a queue, I need a guarantee that at least one JMS connection exists at any time. I am using Spring, and Spring only allows multiple sessions on a single connection. If that one and only connection fails, the application comes to a standstill until Spring reconnects to the JMS bridge.
So how can I create more than one connection to a queue in Spring, and how can I do connection pooling here?
The answer to this depends on whether you are using Spring inside a J2EE container (JBoss etc.) or in a standalone application.
Standalone: you'll find pooling connections to be a problem. Spring's SingleConnectionFactory can be set up to renew the connection on an exception, guaranteeing that at some point a connection will come online and start processing the queue again, but you'll still have the problem of waiting for that single connection to renew. Plus, depending on what messaging implementation you're dealing with and how it does load balancing, you may find yourself stuck with a connection to a single node in a cluster.
If you are running in a container, you can rely on the container's connection factory, which will be much more robust. JBoss Messaging in the container, for instance, will fail over seamlessly to other nodes and handles pooling under the covers; but if you're working in the container it's usually easier to bail on JmsTemplate, which kind of sucks, and use whatever the container provides.
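For the standalone case, one option is a sketch like the following (the queue name and counts are illustrative): run several concurrent consumers through a DefaultMessageListenerContainer, which caches its own sessions/consumers and also retries the connection after failures:

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class JmsConsumersExample {

    public static DefaultMessageListenerContainer listenerContainer(ConnectionFactory vendorFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(vendorFactory);       // the broker's own factory
        container.setDestinationName("inbound.queue");       // hypothetical queue name
        container.setConcurrentConsumers(5);                 // several consumers instead of one
        container.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER); // cache sessions/consumers
        container.setMessageListener((MessageListener) message -> {
            // handle the message
        });
        // The container retries the broker connection after failures on its
        // own, which addresses the "standstill until reconnect" concern.
        return container;
    }
}

Note this multiplies consumers and sessions rather than pooling full connections; for the producer side (JmsTemplate), wrapping the vendor factory in Spring's CachingConnectionFactory gives session reuse.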
