Hikari pool Connections not reused - spring-boot

I am seeing the DEBUG logs below for my Java service. Connections are not being returned to the pool after use: active stays equal to total, and after a certain time, when the connection timeout fires, the number of threads waiting for connections decreases.
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - Before cleanup stats (total=1, active=1, idle=0, waiting=0)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - After cleanup stats (total=1, active=1, idle=0, waiting=0)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - Before cleanup stats (total=32, active=32, idle=0, waiting=1)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - After cleanup stats (total=32, active=32, idle=0, waiting=1)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - After adding stats (total=50, active=40, idle=10, waiting=0)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - Before cleanup stats (total=50, active=50, idle=0, waiting=31)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - After cleanup stats (total=50, active=50, idle=0, waiting=31)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - Timeout failure stats (total=50, active=50, idle=0, waiting=91)
com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - Timeout failure stats (total=50, active=50, idle=0, waiting=92)
I am not calling DataSource.getConnection() anywhere myself; connection handling is all implicit through Spring Boot.
Can you please help and suggest what to check?
Below is the config I am using:
hikari:
  connection-timeout: 30000
  maximum-pool-size: 10
  min-idle: 5
  leak-detection-threshold: 30000
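These pool stats (active always equal to total, idle never recovering until timeouts clear the waiters) are the classic signature of connections being checked out and never closed, rather than of a pool misconfiguration. A minimal sketch of the usual culprit and its fix, assuming some plain JDBC access exists somewhere in the call chain; the DAO and query below are illustrative, not from the original code:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class OrderDao {

    private final DataSource dataSource;

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countOrders() throws SQLException {
        // try-with-resources guarantees the connection goes back to the
        // pool even when the query throws; a missing close() is the
        // classic cause of "total == active" pool stats.
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

Since all access here is implicit through Spring Boot, also look for long-running or never-completing transactions; the leak-detection-threshold of 30000 already in the config should log a warning with the offending stack trace for any connection held longer than 30 seconds.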

Related

Jsmpeg in Chrome cannot read websocket stream served by Spring Webflux

Problem description
The application implements a Spring WebFlux WebSocket server on Netty that forwards ffmpeg stream output over the WebSocket protocol. It allows multiple viewers to watch the stream from their web browsers using the jsmpeg JavaScript library.
This solution works as expected in the Firefox web browser.
https://i.stack.imgur.com/X4Fm4.png
But in the Chrome web browser the stream fails to load: Chrome cannot get data from the WebSocket connection.
Below is a screenshot of the problem as seen in Chrome DevTools.
https://i.stack.imgur.com/xWvll.png
Question
What should be changed in this solution so that stream can be opened in Chrome?
How to reproduce the problem
Clone the following git repository: https://github.com/haraldsegliens/spring-webflux-websocket-jsmpeg-chrome-problem/tree/master
Install FFmpeg: https://ffmpeg.org/download.html
Update the ffmpeg-command variable in src/main/resources/application.yaml so it references the ffmpeg binary on your system.
Run the Spring application. By default it creates the following endpoint: ws://localhost:8080/stream
Open view_stream.html in the Chrome browser.
My research of the problem
I turned on TRACE logs and opened the stream in Chrome. I see the following error in the application logs:
2022-09-02 15:26:36.952 - DEBUG [reactor-http-nio-3] r.n.h.s.HttpServerOperations --- [08cf17fd, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] New http connection, requesting read
2022-09-02 15:26:36.952 - DEBUG [reactor-http-nio-3] r.n.t.TransportConfig --- [08cf17fd, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Initialized pipeline DefaultChannelPipeline{(reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2022-09-02 15:26:36.955 - DEBUG [reactor-http-nio-3] r.n.h.s.HttpServerOperations --- [08cf17fd, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Increasing pending responses, now 1
2022-09-02 15:26:36.955 - DEBUG [reactor-http-nio-3] r.n.h.s.HttpServer --- [08cf17fd-1, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter#6c90595a
2022-09-02 15:26:36.956 - TRACE [reactor-http-nio-3] o.s.w.s.a.HttpWebHandlerAdapter --- [08cf17fd-2] HTTP GET "/stream", headers={masked}
2022-09-02 15:26:36.962 - DEBUG [reactor-http-nio-3] o.s.w.r.h.SimpleUrlHandlerMapping --- [08cf17fd-2] Mapped to com.edi.pacs.video_stream.FfmpegStreamSocketHandler#7ead1d80
2022-09-02 15:26:36.963 - DEBUG [reactor-http-nio-3] r.n.ReactorNetty --- [08cf17fd-1, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Removed handler: reactor.left.httpTrafficHandler, pipeline: DefaultChannelPipeline{(reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2022-09-02 15:26:36.964 - DEBUG [reactor-http-nio-3] r.n.ReactorNetty --- [08cf17fd-1, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Non Removed handler: reactor.left.accessLogHandler, context: null, pipeline: DefaultChannelPipeline{(reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2022-09-02 15:26:36.964 - DEBUG [reactor-http-nio-3] r.n.ReactorNetty --- [08cf17fd-1, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Non Removed handler: reactor.left.httpMetricsHandler, context: null, pipeline: DefaultChannelPipeline{(reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2022-09-02 15:26:36.964 - DEBUG [reactor-http-nio-3] i.n.h.c.h.w.WebSocketServerHandshaker --- [id: 0x08cf17fd, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] WebSocket version V13 server handshake
2022-09-02 15:26:36.964 - DEBUG [reactor-http-nio-3] i.n.h.c.h.w.WebSocketServerHandshaker --- WebSocket version 13 server handshake key: BCbwUez1uBWHEcnoIKUeBw==, response: 2L7i3HHoJx/Q9QKFH1IWl7ZYzYg=
2022-09-02 15:26:36.964 - DEBUG [reactor-http-nio-3] i.n.h.c.h.w.WebSocketServerHandshaker --- Requested subprotocol(s) not supported: null
2022-09-02 15:26:36.965 - DEBUG [reactor-http-nio-3] o.s.w.r.s.a.ReactorNettyWebSocketSession --- [08cf17fd-2] Session id "b528e63" for http://localhost:8080/stream
2022-09-02 15:26:36.966 - TRACE [reactor-http-nio-3] o.s.w.r.s.a.ReactorNettyWebSocketSession --- [08cf17fd-2] Sending WebSocket BINARY message (8 bytes)
2022-09-02 15:26:36.966 - TRACE [reactor-http-nio-3] i.n.h.c.h.w.WebSocket08FrameEncoder --- Encoding WebSocket Frame opCode=2 length=8
2022-09-02 15:26:36.966 - TRACE [reactor-http-nio-3] o.s.w.s.a.HttpWebHandlerAdapter --- [08cf17fd-2] Completed 200 OK, headers={}
2022-09-02 15:26:36.967 - TRACE [reactor-http-nio-3] o.s.h.s.r.ReactorHttpHandlerAdapter --- [08cf17fd-1, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Handling completed
2022-09-02 15:26:36.967 - INFO [reactor-http-nio-3] r.F.S.2 --- onSubscribe(SinkManyBestEffort.DirectInner)
2022-09-02 15:26:36.967 - INFO [reactor-http-nio-3] r.F.S.2 --- request(128)
2022-09-02 15:26:36.967 - ERROR [reactor-http-nio-3] r.n.t.ServerTransport --- [08cf17fd-1, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] onUncaughtException(ws{uri=/stream, connection=SimpleConnection{channel=[id: 0x08cf17fd, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815]}})
java.io.IOException: An established connection was aborted by the software in your host machine
at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:46)
at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:233)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223)
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:389)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:258)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:357)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
2022-09-02 15:26:36.968 - TRACE [reactor-http-nio-3] r.n.c.ChannelOperations --- [08cf17fd-1, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:54815] Disposing ChannelOperation from a channel
java.lang.Exception: ChannelOperation dispose stack
at reactor.netty.channel.ChannelOperations.dispose(ChannelOperations.java:196)
at reactor.netty.transport.ServerTransport$ChildObserver.onUncaughtException(ServerTransport.java:467)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:232)
at reactor.netty.channel.FluxReceive.onInboundError(FluxReceive.java:453)
at reactor.netty.channel.ChannelOperations.onInboundError(ChannelOperations.java:488)
at reactor.netty.channel.ChannelOperationsHandler.exceptionCaught(ChannelOperationsHandler.java:126)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273)
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
2022-09-02 15:26:36.968 - INFO [reactor-http-nio-3] r.F.S.2 --- cancel()
My assumption is that Chrome doesn't "like" something about the response from the Spring application and then closes the connection.
Next I'll try using Jetty or Undertow as the server, but even if that solves my problem, it won't answer my question: why doesn't it work with Netty?
Fixed the problem. The issue was that some web browsers are "allergic" to an empty subprotocol header.
(Screenshot: Chrome network log showing the empty subprotocol header.)
There are two possible solutions for this problem.
First solution: set the subprotocol value. The fix is done in https://github.com/haraldsegliens/spring-webflux-websocket-jsmpeg-chrome-problem/tree/solution/use-protocol and Chrome now handles the stream correctly.
(Screenshot: Chrome now works when a subprotocol is used.)
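For reference, a minimal sketch of what the first solution looks like on the server side in Spring WebFlux; getSubProtocols() is the real hook Spring consults during the handshake, while the protocol name "jsmpeg" is an assumption and must match whatever the client passes to new WebSocket(url, protocol):

import java.util.List;

import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketSession;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class FfmpegStreamSocketHandler implements WebSocketHandler {

    // Advertise a subprotocol so the handshake response carries a
    // non-empty Sec-WebSocket-Protocol header.
    @Override
    public List<String> getSubProtocols() {
        return List.of("jsmpeg"); // illustrative name, not from the repo
    }

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        // The real handler forwards the ffmpeg byte stream here.
        return session.send(Flux.never());
    }
}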
Second solution: fix jsmpeg so that it skips the subprotocol header entirely instead of sending it with an empty value. The fix is done in https://github.com/haraldsegliens/spring-webflux-websocket-jsmpeg-chrome-problem/tree/solution/fix-in-jsmpeg
The following pull request fixes the problem in the jsmpeg repository: https://github.com/phoboslab/jsmpeg/pull/143
(Screenshot: Chrome now works with the jsmpeg fix.)

HikariCP shows unavailable connections while database has no sessions running - how to find the cause?

I have an application running with Spring Boot, using Spring's default Hikari connection pool.
Recently the server started having issues with unavailable connections:
DEBUG HikariPool (411) - HikariPool-1 - Pool stats (total=20, active=20, idle=0, waiting=3)
DEBUG HikariPool (411) - HikariPool-1 - Timeout failure stats (total=20, active=20, idle=0, waiting=2)
WARN SqlExceptionHelper(137) - SQL Error: 0, SQLState: null
ERROR SqlExceptionHelper(142) - HikariPool-1 - Connection is not available, request timed out after 30005ms.
Since it is a shared database, at first I wasn't sure whether it was a problem in my application. But after restarting the application the connections were fine again, so I now need to find the cause. The error occurs only on the production system, which is used around the clock with many actions per minute. I tried to figure out what was causing the rising number of active connections in Hikari by scanning the logs for specific statements executed before the rise, but there were always different or unclear actions.
So we then executed a script on the database to return all active sessions (gv$session). It returned not a single active session, even though the Hikari logs showed active connections at the same time. Can anybody tell me what this means and/or has a clue where to find the root cause of this issue?
I use Spring Boot v2.1.4.
These are my Hikari settings (defaults, with spring.datasource.hikari.maximum-pool-size=20):
DEBUG HikariConfig (1020) - HikariPool-1 - configuration:
DEBUG HikariConfig (1052) - allowPoolSuspension.............false
DEBUG HikariConfig (1052) - autoCommit......................true
DEBUG HikariConfig (1052) - catalog.........................none
DEBUG HikariConfig (1052) - connectionInitSql...............none
DEBUG HikariConfig (1052) - connectionTestQuery.............none
DEBUG HikariConfig (1052) - connectionTimeout...............30000
DEBUG HikariConfig (1052) - dataSource......................none
DEBUG HikariConfig (1052) - dataSourceClassName.............none
DEBUG HikariConfig (1052) - dataSourceJNDI..................none
DEBUG HikariConfig (1052) - dataSourceProperties............{password=<masked>}
DEBUG HikariConfig (1052) - driverClassName................."oracle.jdbc.OracleDriver"
DEBUG HikariConfig (1052) - healthCheckProperties...........{}
DEBUG HikariConfig (1052) - healthCheckRegistry.............none
DEBUG HikariConfig (1052) - idleTimeout.....................600000
DEBUG HikariConfig (1052) - initializationFailTimeout.......1
DEBUG HikariConfig (1052) - isolateInternalQueries..........false
DEBUG HikariConfig (1052) - jdbcUrl.........................jdbc:oracle:thin:<masked>
DEBUG HikariConfig (1052) - leakDetectionThreshold..........0
DEBUG HikariConfig (1052) - maxLifetime.....................1800000
DEBUG HikariConfig (1052) - maximumPoolSize.................20
DEBUG HikariConfig (1052) - metricRegistry..................none
DEBUG HikariConfig (1052) - metricsTrackerFactory...........none
DEBUG HikariConfig (1052) - minimumIdle.....................20
DEBUG HikariConfig (1052) - password........................<masked>
DEBUG HikariConfig (1052) - poolName........................"HikariPool-1"
DEBUG HikariConfig (1052) - readOnly........................false
DEBUG HikariConfig (1052) - registerMbeans..................false
DEBUG HikariConfig (1052) - scheduledExecutor...............none
DEBUG HikariConfig (1052) - schema..........................none
DEBUG HikariConfig (1052) - threadFactory...................internal
DEBUG HikariConfig (1052) - transactionIsolation............default
DEBUG HikariConfig (1052) - username........................none
DEBUG HikariConfig (1052) - validationTimeout...............5000
Enable leakDetectionThreshold=<max query time in ms> (documented in the HikariCP configuration reference).
It logs any connection that is not returned to the pool within the specified time.
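A minimal sketch of enabling it, assuming programmatic configuration; in Spring Boot the equivalent property is spring.datasource.hikari.leak-detection-threshold (milliseconds), and the JDBC URL below is a placeholder:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class LeakDetectionExample {

    public static HikariDataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/service"); // placeholder
        config.setMaximumPoolSize(20);
        // Log a WARN with the holder's stack trace for any connection
        // checked out for longer than 60 seconds.
        config.setLeakDetectionThreshold(60_000);
        return new HikariDataSource(config);
    }
}

Note that HikariCP disables leak detection for non-zero values below 2000 ms, so pick a threshold comfortably above your slowest legitimate query.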

Limit jdbc connection pool fixed amount

Hi, I use Micronaut Data together with various JDBC connection pools.
I first used Hikari and also tried the Tomcat one.
I assumed that setting the datasource to maximum-pool-size: 10 would result in at most 10 open connections.
But it seems that there is a lot of opening and closing going on. Combined with many requests arriving at the same time, it uses far more than 10 connections. The problem is that the Azure PostgreSQL instance only allows 100 connections in total.
Currently I have 7 apps accessing that database, which I would expect to result in at most 70 connections in total. But in reality it is much more.
I also tried the Tomcat JDBC pool; it behaves a little differently but also uses more than 10 connections. I also checked with a Java profiler and found that at times there are up to 100 open/close connection events per second.
Any suggestion how to handle this case, other than using a second database?
I was hoping that the pool would buffer the calls, especially since they come from a Kafka topic.
But it seems to behave differently.
--- edit: adding the Hikari log
Here is the log output of Hikari:
2020-12-11 11:59:40,983 [main] DEBUG com.zaxxer.hikari.HikariConfig - Driver class org.postgresql.Driver found in Thread context class loader jdk.internal.loader.ClassLoaders$AppClassLoader#2c13da15
2020-12-11 11:59:40,993 [main] DEBUG com.zaxxer.hikari.HikariConfig - HikariPool-1 - configuration:
2020-12-11 11:59:40,999 [main] DEBUG com.zaxxer.hikari.HikariConfig - allowPoolSuspension.............false
2020-12-11 11:59:41,000 [main] DEBUG com.zaxxer.hikari.HikariConfig - autoCommit......................true
2020-12-11 11:59:41,000 [main] DEBUG com.zaxxer.hikari.HikariConfig - catalog.........................none
2020-12-11 11:59:41,001 [main] DEBUG com.zaxxer.hikari.HikariConfig - connectionInitSql...............none
2020-12-11 11:59:41,001 [main] DEBUG com.zaxxer.hikari.HikariConfig - connectionTestQuery............."SELECT 1;"
2020-12-11 11:59:41,001 [main] DEBUG com.zaxxer.hikari.HikariConfig - connectionTimeout...............30000
2020-12-11 11:59:41,002 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSource......................none
2020-12-11 11:59:41,002 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSourceClassName.............none
2020-12-11 11:59:41,002 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSourceJNDI..................none
2020-12-11 11:59:41,003 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSourceProperties............{password=<masked>}
2020-12-11 11:59:41,004 [main] DEBUG com.zaxxer.hikari.HikariConfig - driverClassName................."org.postgresql.Driver"
2020-12-11 11:59:41,004 [main] DEBUG com.zaxxer.hikari.HikariConfig - exceptionOverrideClassName......none
2020-12-11 11:59:41,004 [main] DEBUG com.zaxxer.hikari.HikariConfig - healthCheckProperties...........{}
2020-12-11 11:59:41,005 [main] DEBUG com.zaxxer.hikari.HikariConfig - healthCheckRegistry.............none
2020-12-11 11:59:41,005 [main] DEBUG com.zaxxer.hikari.HikariConfig - idleTimeout.....................600000
2020-12-11 11:59:41,005 [main] DEBUG com.zaxxer.hikari.HikariConfig - initializationFailTimeout.......1
2020-12-11 11:59:41,006 [main] DEBUG com.zaxxer.hikari.HikariConfig - isolateInternalQueries..........false
2020-12-11 11:59:41,007 [main] DEBUG com.zaxxer.hikari.HikariConfig - jdbcUrl.........................jdbc:postgresql://URL:5432/postgres
2020-12-11 11:59:41,007 [main] DEBUG com.zaxxer.hikari.HikariConfig - leakDetectionThreshold..........0
2020-12-11 11:59:41,007 [main] DEBUG com.zaxxer.hikari.HikariConfig - maxLifetime.....................1800000
2020-12-11 11:59:41,008 [main] DEBUG com.zaxxer.hikari.HikariConfig - maximumPoolSize.................10
2020-12-11 11:59:41,008 [main] DEBUG com.zaxxer.hikari.HikariConfig - metricRegistry..................none
2020-12-11 11:59:41,008 [main] DEBUG com.zaxxer.hikari.HikariConfig - metricsTrackerFactory...........none
2020-12-11 11:59:41,009 [main] DEBUG com.zaxxer.hikari.HikariConfig - minimumIdle.....................10
2020-12-11 11:59:41,009 [main] DEBUG com.zaxxer.hikari.HikariConfig - password........................<masked>
2020-12-11 11:59:41,009 [main] DEBUG com.zaxxer.hikari.HikariConfig - poolName........................"HikariPool-1"
2020-12-11 11:59:41,010 [main] DEBUG com.zaxxer.hikari.HikariConfig - readOnly........................false
2020-12-11 11:59:41,010 [main] DEBUG com.zaxxer.hikari.HikariConfig - registerMbeans..................false
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - scheduledExecutor...............none
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - schema.........................."SCHEMA"
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - threadFactory...................internal
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - transactionIsolation............default
2020-12-11 11:59:41,012 [main] DEBUG com.zaxxer.hikari.HikariConfig - username........................"USERNAME"
2020-12-11 11:59:41,012 [main] DEBUG com.zaxxer.hikari.HikariConfig - validationTimeout...............5000
I found the mistake, or at least something that solves the issue.
While saving some data to the database, I also try to update the cache.
But because of how Caffeine's LoadingCache works, each save also resulted in a get on exactly the same data object instance.
My guess is that this caused trouble because of the transaction.
After replacing the cache.get with cache.replace, everything works fine.
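For illustration, a minimal sketch of the pattern described, using Caffeine's LoadingCache API; the entity and repository methods are made up. LoadingCache.get(key) invokes the loader on a miss, re-reading the row that was just written inside the same transaction, whereas replacing through the asMap() view never triggers the loader:

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

public class EntityCache {

    private final LoadingCache<Long, Entity> cache = Caffeine.newBuilder()
            .maximumSize(10_000)
            .build(this::loadFromDatabase); // loader runs on cache misses

    public void save(Entity entity) {
        persist(entity);
        // cache.get(entity.id()) could call loadFromDatabase() here and
        // open another database round trip mid-save; replace() updates
        // the entry without ever invoking the loader.
        cache.asMap().replace(entity.id(), entity);
    }

    private Entity loadFromDatabase(Long id) {
        throw new UnsupportedOperationException("repository lookup goes here");
    }

    private void persist(Entity entity) {
        // placeholder for the real save
    }

    public record Entity(Long id) {}
}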

Spring Boot application unexpected shutdown with no reason

Sometimes my Spring Boot application shuts down with no clear reason.
I can only see the following output in the application log:
2019-09-02 01:39:16.199 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) is shutting down
2019-09-02 01:39:16.216 INFO 23535 --- [ActiveMQ Connection Executor: vm://localhost#0] o.s.j.c.CachingConnectionFactory : Encountered a JMSException - resetting the underlying JMS Connection
javax.jms.JMSException: peer (vm://localhost#1) stopped.
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:54) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.ActiveMQConnection.onAsyncException(ActiveMQConnection.java:1960) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.ActiveMQConnection.onException(ActiveMQConnection.java:1979) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:114) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.ResponseCorrelator.onException(ResponseCorrelator.java:126) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:114) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.vm.VMTransport.stop(VMTransport.java:233) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.stop(TransportFilter.java:72) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.stop(TransportFilter.java:72) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.ResponseCorrelator.stop(ResponseCorrelator.java:132) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.broker.TransportConnection.doStop(TransportConnection.java:1194) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at org.apache.activemq.broker.TransportConnection$4.run(TransportConnection.java:1160) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: org.apache.activemq.transport.TransportDisposedIOException: peer (vm://localhost#1) stopped.
... 9 common frames omitted
2019-09-02 01:39:16.218 INFO 23535 --- [Thread-7] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService 'taskScheduler'
2019-09-02 01:39:16.218 INFO 23535 --- [ActiveMQ ShutdownHook] o.a.activemq.broker.TransportConnector : Connector vm://localhost stopped
2019-09-02 01:39:16.225 INFO 23535 --- [Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2019-09-02 01:39:16.230 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) uptime 1 hour 27 minutes
2019-09-02 01:39:16.230 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) is shutdown
I have no idea what the cause of this shutdown is. What steps should I take to determine the reason?
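One diagnostic step worth trying (my suggestion, not from the original post): an orderly shutdown like this one, where ActiveMQ's shutdown hook runs, is initiated either by a signal such as SIGTERM (from the OS, systemd, or an orchestrator) or by a System.exit() call. A shutdown hook that dumps all thread stacks can show which thread initiated it, because a thread calling System.exit() stays blocked inside Runtime.exit() while the hooks run:

import java.util.Map;

public final class ShutdownDiagnostics {

    public static void install() {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.err.println("JVM shutdown initiated; dumping all thread stacks:");
            for (Map.Entry<Thread, StackTraceElement[]> entry
                    : Thread.getAllStackTraces().entrySet()) {
                System.err.println(entry.getKey());
                for (StackTraceElement frame : entry.getValue()) {
                    System.err.println("\tat " + frame);
                }
            }
        }, "shutdown-diagnostics"));
    }
}

If the hook never fires at all, the process was killed forcibly (e.g. SIGKILL from the kernel OOM killer), which the application cannot log; check dmesg and the process exit code in that case.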

HikariCP connection broken/unavailable woes

I am unable to understand the reason behind intermittent HikariCP "Connection is not available" errors.
From the logs, it doesn't look like a connection leak issue. A bigger problem is that I am unable to reproduce the error predictably. Following is a sample log trace where the error starts, while this gist contains it through to the end.
2017-12-12T19:31:55.958Z DEBUG <> [HikariPool-1 housekeeper] com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2017-12-12T19:31:57.052Z WARN <> [main] c.zaxxer.hikari.pool.ProxyConnection - HikariPool-1 - Connection org.postgresql.jdbc.PgConnection#1de5f0ef marked as broken because of SQLSTATE(08P01), ErrorCode(0)
org.postgresql.util.PSQLException: Expected command status BEGIN, got EMPTY.
at org.postgresql.core.v3.QueryExecutorImpl$1.handleCommandStatus(QueryExecutorImpl.java:515)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2180)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:168)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:116)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:70)
at org.hibernate.id.SequenceGenerator.generateHolder(SequenceGenerator.java:116)
at org.hibernate.id.SequenceHiLoGenerator.generate(SequenceHiLoGenerator.java:62)
at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:101)
at org.hibernate.jpa.event.internal.core.JpaPersistEventListener.saveWithGeneratedId(JpaPersistEventListener.java:67)
... MORE DETAILS IN [GIST][1]
2017-12-12T19:31:57.067Z ERROR <> [main] c.o.r.s.t.RAUTwitterUserService - populateFromPayload: id=128, twitter_handle=non_local, exception=org.springframework.transaction.TransactionSystemException: Could not roll back JPA transaction; nested exception is javax.persistence.PersistenceException: unexpected error when rollbacking
2017-12-12T19:31:57.067Z INFO <> [main] c.o.r.s.t.TwitterCollectorService - RAU from RAU service: RAU(id=129, twitter=non_global)
2017-12-12T19:31:57.089Z DEBUG <> [HikariPool-1 connection adder] com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection#419dcf40
