Unable to acquire JDBC Connection in SpringBoot app

I have a microservices-based application; each microservice is a Spring Boot 2.0.3.RELEASE app. After my 4th microservice launched, I got this error:
Unable to acquire JDBC Connection; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
..
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections
I would like to know how to reduce the maximumPoolSize, or whether there is a way to find out what the maximumPoolSize is, because I haven't seen anything related to it logged when the app starts.

You can set the maximum pool size of the JDBC connections in your application.properties file like:
spring.datasource.hikari.maximum-pool-size=5
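For context, HikariCP's default maximumPoolSize is 10, so four services each holding their own pool against one MySQL server adds up quickly. As for seeing the effective pool settings when the app starts (the other half of the question), HikariCP logs its full configuration at DEBUG level, so raising the log level for the HikariCP package in application.properties should print it at startup:
logging.level.com.zaxxer.hikari=DEBUG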

Related

HikariPool-1 - Connection is not available (webclient)

I have a client class for an external data provider.
In this class, I'm using the reactive WebClient and a CRUD repository.
The WebClient uses the repository to save responses from the provider (a business requirement); for instance, the onError and onStatus handlers use this repository. We performed load tests and it works well.
The problem is when the external API is not working and we're retrying a couple of times (exponential backoff). Then I get:
HikariPool-1 - Connection is not available, request timed out after 30005ms
org.springframework.dao.DataAccessResourceFailureException: Unable to acquire JDBC Connection; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
So it seems like the WebClient is holding a JDBC connection while retrying for 30 seconds, and we're running out of connections. Extending the connection pool size is not the way I want to fix this.
Is there any way to release the connection while the WebClient is just waiting for the next retry?
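For reference, a minimal sketch of this kind of setup and one way to keep connections free during retries (the class names, endpoint, and backoff values are hypothetical): the blocking repository call is deferred until a response actually arrives and runs on a scheduler meant for blocking work, so a JDBC connection is only borrowed for the duration of the save itself, never across the 30-second retry window.

import java.time.Duration;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.util.retry.Retry;

public class ProviderClient {

    // hypothetical stand-in for the CRUD repository
    interface ResponseRepository {
        String save(String body);
    }

    private final WebClient webClient = WebClient.create("https://provider.example.com"); // placeholder URL
    private final ResponseRepository repo;

    public ProviderClient(ResponseRepository repo) {
        this.repo = repo;
    }

    public Mono<String> fetch() {
        return webClient.get()
                .uri("/data")
                .retrieve()
                .bodyToMono(String.class)
                // exponential backoff happens here, with no JDBC connection held
                .retryWhen(Retry.backoff(3, Duration.ofSeconds(2)))
                // borrow a connection only for the save, on the blocking-work scheduler
                .flatMap(body -> Mono.fromCallable(() -> repo.save(body))
                        .subscribeOn(Schedulers.boundedElastic())
                        .thenReturn(body));
    }
}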

My application says "too many connections" but the DB connection limit is not reached

My application says:
Error querying database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections
But the DB connection count is nowhere near the limit.
I use Spring, MyBatis, and MariaDB. What is the problem?
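One quick way to see whether the server limit is really being hit is to compare max_connections with the number of threads currently connected while the error occurs; a minimal JDBC sketch, with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ConnectionCheck {
    public static void main(String[] args) throws SQLException {
        // placeholder URL and credentials
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/mydb", "user", "password");
             Statement stmt = conn.createStatement()) {
            print(stmt, "SHOW VARIABLES LIKE 'max_connections'");  // server-side limit
            print(stmt, "SHOW STATUS LIKE 'Threads_connected'");   // connections in use
        }
    }

    private static void print(Statement stmt, String sql) throws SQLException {
        try (ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}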

Connection Timeout for Spring Redis using Redis Cloud

I am trying to connect Redis in my Spring application.
I have created a Redis database in Redis Cloud at redis-10228.c15.us-east-1-2.ec2.cloud.redislabs.com:10228.
I have configured the following in my application.properties
spring.redis.host=redis-10228.c15.us-east-1-2.ec2.cloud.redislabs.com:10228
spring.redis.password=<password-from-redis-cloud>
I am still getting a Redis connection failure even after providing the password.
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to redis-10228.c15.us-east-1-2.ec2.cloud.redislabs.com:10228
Caused by: org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to redis-10228.c15.us-east-1-2.ec2.cloud.redislabs.com:10228
Is there any misconfiguration on my part?
You need to specify the redis port in a different property:
spring.redis.host=redis-10228.c15.us-east-1-2.ec2.cloud.redislabs.com
spring.redis.port=10228
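Note that this applies to Spring Boot 2.x; on Spring Boot 3.x the same settings moved under the spring.data prefix:
spring.data.redis.host=redis-10228.c15.us-east-1-2.ec2.cloud.redislabs.com
spring.data.redis.port=10228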

How to configure auto reconnection with hikari in SpringBoot application?

We are using SpringBoot 2.1.x, so Hikari is the default DataSource implementation. However, I am not sure how to configure the Hikari settings to auto reconnect to our Oracle database after database maintenance/restarts or a network connection issue.
We have the following Hikari settings, but they do not seem to help.
account.datasource.url: jdbc:oracle:thin:@myserver:1521:DEV
account.datasource.username: user
account.datasource.password: xxxx
account.datasource.driverClassName: oracle.jdbc.driver.OracleDriver
account.datasource.hikari.connection-timeout: 30000
account.datasource.hikari.maximum-pool-size: 3
account.datasource.hikari.idle-timeout: 60000
account.datasource.hikari.max-lifetime: 1800000
account.datasource.hikari.minimum-idle: 2
It failed to reconnect after the network connection to the database was restored.
Failed to obtain JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30033ms.
Are there any other account.datasource.hikari.xxxxx settings that would help it auto reconnect to the database?
From the HikariCP docs:
connectionTestQuery
If your driver supports JDBC4 we strongly recommend not setting this property. This is for "legacy" drivers that do not support the JDBC4 Connection.isValid() API. This is the query that will be executed just before a connection is given to you from the pool to validate that the connection to the database is still alive. Again, try running the pool without this property; HikariCP will log an error if your driver is not JDBC4 compliant to let you know. Default: none
So I'd suggest verifying that your JDBC driver is actually JDBC4 compliant. If it's not, set the above property.
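If the driver does turn out to be a legacy one, the property fits alongside the existing entries; SELECT 1 FROM DUAL is the usual validation query for Oracle:
account.datasource.hikari.connection-test-query: SELECT 1 FROM DUAL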

elasticsearch out of memory error

We are using Elasticsearch 0.90.0 and Java version 1.7.0_25.
We migrate data from an Oracle DB to Hadoop through an executable JAR kept on the DB server. After 15-20 minutes of successful running, we get the following exception:
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:95)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:51)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:99)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:69)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:240)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:90)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:179)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:119)
No exception is caught in the namenode/datanode or Elasticsearch logs. This error is caught on the DB server, but I think it is related to Elasticsearch.
My guess is that you're creating too many Netty clients, which in turn is eating up all your threads. Perhaps create the client once and wrap it in a service that you inject wherever it's needed. See Helter Skelter's comment on this answer: https://stackoverflow.com/a/5253186/266531
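A minimal sketch of that idea against the 0.90-era TransportClient API (the host, port, and holder class name are placeholders): one client means one Netty worker pool for the whole JVM, instead of a new set of threads per request.

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public final class EsClientHolder {

    // a single shared client, hence a single Netty worker pool
    private static final TransportClient CLIENT = new TransportClient()
            .addTransportAddress(new InetSocketTransportAddress("es-host", 9300)); // placeholder address

    private EsClientHolder() {
    }

    public static Client get() {
        return CLIENT;
    }

    // call once on application shutdown to release the Netty threads
    public static void shutdown() {
        CLIENT.close();
    }
}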
