I am currently running a microservices architecture. Under high load (around 1,000 requests per second), the services become drastically slow, and most of the components that use the database keep restarting with the error
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Unable to acquire JDBC Connection
Hikari Logs
DEBUG HikariPool:411 - HikariPool-3 - Pool stats (total=20, active=20, idle=0, waiting=234)
DEBUG HikariPool:411 - HikariPool-3 - Pool stats (total=20, active=20, idle=0, waiting=240)
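For what it's worth, the same counters in those log lines can also be read programmatically through HikariCP's HikariPoolMXBean; a minimal sketch (the cast assumes the injected DataSource really is a HikariDataSource, and the class name is just illustrative):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;

// Prints the same figures as the "Pool stats" debug lines above.
public final class PoolStatsLogger {
    public static void logPoolStats(DataSource dataSource) {
        // Assumes HikariCP is the active pool implementation.
        HikariPoolMXBean pool = ((HikariDataSource) dataSource).getHikariPoolMXBean();
        System.out.printf("total=%d, active=%d, idle=%d, waiting=%d%n",
                pool.getTotalConnections(),
                pool.getActiveConnections(),
                pool.getIdleConnections(),
                pool.getThreadsAwaitingConnection());
    }
}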
Symptoms
Database CPU maxes out
Services become slow
Service downtime due to restarts
The number of threads waiting on the Hikari pool keeps increasing
Technologies
Kubernetes
Spring Cloud
HikariCP for the Spring Boot JDBC connection pool
Hibernate
PostgreSQL 13
Pgpool to manage PostgreSQL database connection pooling
System Configurations
The database server has 1 master and 2 read replicas (SSD storage, 40 cores dedicated on each node). Server specification: PowerEdge R740 with 192 GB RAM, 2 x Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz (cores enabled: 18, thread count: 36).
3 Pgpool instances: these manage database connections for all instances of the microservices.
7 microservices. Each can scale to a maximum of 10 pods.
7 databases, with an average size of 5 GB. Each microservice has a separate database.
Async configuration for each microservice (manages thread spawning for the Spring Boot REST endpoints)
@Override
@Bean(name = "taskExecutor")
public AsyncTaskExecutor getAsyncExecutor() {
    // Core and max pool size are both 3, so each pod runs at most 3 async tasks
    // concurrently; further submissions wait in the 600-slot queue.
    ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
    threadPoolTaskExecutor.setThreadNamePrefix("Async-");
    threadPoolTaskExecutor.setCorePoolSize(3);
    threadPoolTaskExecutor.setMaxPoolSize(3);
    threadPoolTaskExecutor.setQueueCapacity(600);
    threadPoolTaskExecutor.afterPropertiesSet();
    return threadPoolTaskExecutor;
}
Hikari Configurations for each microservice
spring.datasource.hikari.maxLifetime : 18000
spring.datasource.hikari.maxPoolSize : 20
spring.datasource.hikari.idleTimeout : 600
spring.datasource.hikari.minimumIdle : 20
spring.datasource.hikari.connectionTimeout : 30000
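For reference, here is the same configuration expressed programmatically against HikariConfig; it is only a sketch (the JDBC URL is hypothetical), but it makes the units explicit, since HikariCP's time-based settings are in milliseconds:

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Same values as the properties above; HikariCP time settings are milliseconds.
public DataSource buildDataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:postgresql://pgpool:5432/service_db");  // hypothetical URL
    config.setMaximumPoolSize(20);
    config.setMinimumIdle(20);
    config.setMaxLifetime(18_000);          // 18 seconds
    config.setIdleTimeout(600);             // 0.6 seconds
    config.setConnectionTimeout(30_000);    // 30 seconds
    return new HikariDataSource(config);
}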
Pg-pool configurations
PGPOOL_NUM_INIT_CHILDREN: 128
PGPOOL_MAX_POOL: 20
PGPOOL_CHILD_LIFE_TIME: 300
PGPOOL_CLIENT_IDLE_LIMIT: 600
PGPOOL_CONNECTION_LIFE_TIME: 600
PGPOOL_CHILD_MAX_CONNECTIONS: 2
PGPOOL_SR_CHECK_PERIOD: 21600
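As a back-of-the-envelope check on the figures above (a sketch only; it assumes every pod fills its Hikari pool under load, and that each Pgpool child serves one client connection at a time):

// Connection budget sketch, using only the numbers listed in this question.
public class ConnectionBudget {
    public static void main(String[] args) {
        int services = 7, podsPerService = 10, hikariMaxPoolSize = 20;
        int potentialClientConnections = services * podsPerService * hikariMaxPoolSize;  // 1400

        int pgpoolInstances = 3, numInitChildren = 128;
        int pgpoolClientSlots = pgpoolInstances * numInitChildren;                       // 384

        System.out.println(potentialClientConnections + " potential client connections vs "
                + pgpoolClientSlots + " Pgpool client slots");
    }
}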
Query Plan
Planning Time: 0.223 ms
Execution Time: 13.816 ms
Attached are a screenshot from New Relic and the system architecture.
What I have tried
Increased the database CPUs from 30 cores to 40 cores (however, no matter what value I increase it to, CPU usage maxes out under high load)
I have tried updating the Hikari configuration to the default values, but the issue still persists.
spring.datasource.hikari.maxLifetime : 1800000
spring.datasource.hikari.maxPoolSize : 10
spring.datasource.hikari.idleTimeout : 60000
spring.datasource.hikari.minimumIdle : 10
spring.datasource.hikari.connectionTimeout : 30000
I have optimized the queries to only fetch required fields
I have added indexes on the database tables to make queries faster
My Expectations
The best configuration for Hikari and Pgpool to eliminate the long wait for a connection from the DB and the "Unable to acquire JDBC Connection" exception.
Suggestions on the best Spring Boot configuration in a distributed environment.
Suggestions for improving the scaling of the Spring Boot microservices.
Related
I'm running a Spring Boot 2.6.x application (bundled with Tomcat 9.56.x) with the following configuration:
server.tomcat.accept-count = 100
server.tomcat.threads.max = 1000
server.tomcat.threads.min-spare = 10
on a machine with 16 CPU cores and 32 GB of RAM.
I'm load testing my server by opening multiple (500) connections, each of which sends an HTTP request every second.
Expected behavior: Tomcat will attempt to use as many threads as possible in order to maximize throughput.
Actual behavior: Tomcat always sticks to 10 threads (the number configured by "min-spare") and never adds threads above that amount. I know this from observing its JMX endpoint (currentThreadCount is always 10), even though it is clearly not able to process all requests in time, since I see a growing number of pending requests in my client.
Can anyone explain this behavior? Based on what is the Tomcat NIO thread pool supposed to decide whether to add threads?
Turns out the issue was in my client.
For issuing requests I was using RestTemplate, which internally uses HttpClient. HttpClient manages its own connections and by default has ridiculously low limits configured - max 20 concurrent connections...
I solved the issue by configuring a PoolingHttpClientConnectionManager (which is supposed to deliver better throughput in a multi-threaded environment) and increasing the limits:
// Apache HttpClient 4.x imports
import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

// Raise the connection manager limits well above the defaults (20 total, 2 per route).
PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
connManager.setMaxTotal(10000);
connManager.setDefaultMaxPerRoute(10000);

HttpClientBuilder clientBuilder = HttpClientBuilder.create();
clientBuilder.setConnectionManager(connManager);
HttpClient httpClient = clientBuilder.build();
After doing that, the number of issued requests per second greatly increased, which made Tomcat add new threads, as expected.
I connected my DB (AWS RDS) to Spring Boot JPA, and the number of connections increased dramatically.
It is 12 now; I think it breaks down as Spring Boot 5 + browser 5 + Workbench 1, plus others.
How can I reduce the number of connections? How can I maintain these connections safely?
You should be looking at database connection pooling.
Database connection pooling is a technique that keeps database connections open so they can be reused, while also keeping the total number of connections within a limit that you specify.
The default connection pool in Spring Boot is HikariCP; all you have to do is configure it properly.
Sample connection pool configuration:
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.idle-timeout=10000
spring.datasource.hikari.max-lifetime=1000
spring.datasource.hikari.auto-commit=true
I have developed a microservice using Spring Boot and deployed it as a Docker container. When performance testing the service, I see that the maximum number of threads created for the service is 20 at any point in time, even though the number of calls made is much higher. I have even set the max threads to 4000 and max connections to 10000, along with all the DB configuration, and the server has 24 cores and 64 GB RAM, but still there is no improvement. Are there any limitations on the number of calls that can be made to a microservice developed using Spring Boot, or is the issue with the Docker container? Or is this normal?
I am using HikariCP for connection pooling in my reactive Spring Boot application running in a Kubernetes cluster. There will be lots of blocking calls and multiple database queries, so ideally more database connections would help, provided the CPU cores are available.
Giving all the CPU cores to one Kubernetes container would waste resources, since the spike in requests will not always be there. So I am trying to explore how to use the Kubernetes autoscaler so that new application containers can be spun up as the number of requests increases. Two concerns:
I tried the Hikari setting com.zaxxer.hikari.blockUntilFilled=true to keep the connections filled up during application startup (see the sketch after these two points). But when using the autoscaler with an increasing number of requests, this will cause delays in responses, as connection creation in the pool takes time. Is it better to rely on Hikari's dynamic connection creation based on spikes in demand, rather than creating all the connections at once during startup?
Also, since each Kubernetes container is a new instance of the application, how do we manage the number of database connections created?
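Regarding the first point, a sketch of the eager-fill variant (the JDBC URL is hypothetical; as far as I recall, com.zaxxer.hikari.blockUntilFilled is a HikariCP system property that only takes effect together with a positive initializationFailTimeout):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// The system property is normally passed as -Dcom.zaxxer.hikari.blockUntilFilled=true;
// it is set in code here only to keep the sketch self-contained.
public DataSource eagerlyFilledPool() {
    System.setProperty("com.zaxxer.hikari.blockUntilFilled", "true");

    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:postgresql://pgpool:5432/service_db");  // hypothetical URL
    config.setMinimumIdle(10);                    // startup blocks until this many connections exist
    config.setMaximumPoolSize(10);
    config.setInitializationFailTimeout(30_000);  // upper bound (ms) on how long startup may block
    return new HikariDataSource(config);
}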
I did a sample load test with JMeter and saw improved performance (and no timeouts, etc.) under a large number of requests when using a fixed number of active database connections. There were a large number of thread-interrupted exceptions when no fixed connection pool size was set and connections were created dynamically as the number of requests increased.
Any insights will help.
The application I'm working on uses Spring Boot with Spring JdbcTemplate to connect to Teradata.
We face issues with idle connections: we have about 6 different environments that at some point create 1,672 sessions.
In order to limit the total pool size and the minimum number of idle connections, I set:
hikari:
  maximum-pool-size: 3
  minimum-idle: 2
Is there anything else recommended in order to limit the number of idle connections?
Thanks in advance
Assuming you're using HikariCP for the connection pool, you can use idleTimeout. Please refer to the HikariCP configuration documentation for all available properties.
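For example, extending the YAML above with idle-timeout (a sketch; the value is an illustrative number of milliseconds, not a recommendation):

hikari:
  maximum-pool-size: 3
  minimum-idle: 2
  idle-timeout: 120000   # idle connections above minimum-idle are retired after ~2 minutes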