Spring gateway with Zuul

I'm having a problem with the Hystrix/Zuul framework when I try to route a request with images. It looks like there is a problem with big images, but I can't always reproduce the error. At first it looked like a problem with the timeout, so I made a custom config for the image route, but the problem is still happening and I'm running out of ideas.
My application config:
default:
  execution.isolation.thread:
    timeoutInMilliseconds: 5000
  circuitBreaker.forceClosed: true
My custom config:
custom:
  execution.isolation.thread:
    timeoutInMilliseconds: 120000
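For reference, in Spring Cloud Netflix these Hystrix settings normally sit under hystrix.command in application.yml, and the per-route block only takes effect when its key matches the Hystrix command key of the route (for serviceId-based Zuul routes that is the service id). A minimal sketch of how the two blocks above are usually laid out, keeping the names from the question:

hystrix:
  command:
    default:
      execution.isolation.thread:
        timeoutInMilliseconds: 5000
      circuitBreaker.forceClosed: true
    custom:
      execution.isolation.thread:
        timeoutInMilliseconds: 120000

Note that the stack trace below goes through SimpleHostRoutingFilter, which Zuul uses for url-based routes; for those routes the upstream call is typically governed by zuul.host.socket-timeout-millis and zuul.host.connect-timeout-millis rather than by the Hystrix timeout, which may be why raising the Hystrix value alone had no effect.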
This is the stacktrace
"stack_trace":"com.netflix.zuul.exception.ZuulException: Filter threw Exception\n\tat com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:227)\n\tat com.netflix.zuul.FilterProcessor.runFilters(FilterProcessor.java:157)\n\tat com.netflix.zuul.FilterProcessor.route(FilterProcessor.java:118)\n\tat com.netflix.zuul.ZuulRunner.route(ZuulRunner.java:96)\n\tat com.netflix.zuul.http.ZuulServlet.route(ZuulServlet.java:116)\n\tat com.netflix.zuul.http.ZuulServlet.service(ZuulServlet.java:81)\n\tat org.springframework.web.servlet.mvc.ServletWrappingController.handleRequestInternal(ServletWrappingController.java:165)\n\tat org.springframework.cloud.netflix.zuul.web.ZuulController.handleRequest(ZuulController.java:44)\n\tat org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:52)\n\tat org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)\n\tat org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)\n\tat org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)\n\tat org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:908)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:665)\n\tat org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:750)\n\tat io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)\n\tat com.picpay.filter.RequestTimeLoggingFilter.doFilterInternal(RequestTimeLoggingFilter.java:37)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)\n\tat org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)\n\tat org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)\n\tat org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)\n\tat org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)\n\tat 
org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:117)\n\tat org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:106)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)\n\tat org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)\n\tat io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)\n\tat io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)\n\tat io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)\n\tat io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)\n\tat io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)\n\tat io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132)\n\tat io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)\n\tat io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)\n\tat io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)\n\tat io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)\n\tat io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)\n\tat io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)\n\tat io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)\n\tat io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)\n\tat io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)\n\tat io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)\n\tat io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)\n\tat io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)\n\tat io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)\n\tat io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)\n\tat io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)\n\tat io.undertow.server.Connectors.executeRootHandler(Connectors.java:360)\n\tat 
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: org.springframework.cloud.netflix.zuul.util.ZuulRuntimeException: com.netflix.zuul.exception.ZuulException: Forwarding error\n\tat org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.run(SimpleHostRoutingFilter.java:223)\n\tat com.netflix.zuul.ZuulFilter.runFilter(ZuulFilter.java:117)\n\tat com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:193)\n\t... 74 common frames omitted\nCaused by: com.netflix.zuul.exception.ZuulException: Forwarding error\n\tat org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.handleException(SimpleHostRoutingFilter.java:243)\n\t... 77 common frames omitted\nCaused by: org.apache.http.client.ClientProtocolException: null\n\tat org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187)\n\tat org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:118)\n\tat org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.forwardRequest(SimpleHostRoutingFilter.java:393)\n\tat org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.forward(SimpleHostRoutingFilter.java:312)\n\tat org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.run(SimpleHostRoutingFilter.java:218)\n\t... 76 common frames omitted\nCaused by: org.apache.http.client.NonRepeatableRequestException: Cannot retry request with a non-repeatable request entity\n\tat org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:108)\n\tat org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)\n\t... 80 common frames omitted\nCaused by: java.net.SocketException: Broken pipe (Write failed)\n\tat java.net.SocketOutputStream.socketWrite0(Native Method)\n\tat java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)\n\tat java.net.SocketOutputStream.write(SocketOutputStream.java:155)\n\tat org.apache.http.impl.io.SessionOutputBufferImpl.streamWrite(SessionOutputBufferImpl.java:124)\n\tat org.apache.http.impl.io.SessionOutputBufferImpl.flushBuffer(SessionOutputBufferImpl.java:136)\n\tat org.apache.http.impl.io.SessionOutputBufferImpl.write(SessionOutputBufferImpl.java:167)\n\tat org.apache.http.impl.io.ContentLengthOutputStream.write(ContentLengthOutputStream.java:113)\n\tat org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:144)\n\tat org.apache.http.impl.execchain.RequestEntityProxy.writeTo(RequestEntityProxy.java:121)\n\tat org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:156)\n\tat org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:160)\n\tat org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:238)\n\tat org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)\n\tat org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)\n\tat org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)\n\tat org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)\n\t... 81 common frames omitted\n"

Related

RedisConnectionFailureException: Redis connection failed; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to [redis://

I have two services connected to the same redis-cluster. The other service does not report any errors, but this one reports the error below. While the errors were occurring, I manually connected to the cluster with redis-cli and could set/get keys and view the cluster information; there was no problem with the node information, slots, etc., and no problem with node memory/CPU. Does anyone know what the problem might be? Thank you.
org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is org.springframework.data.redis.connection.PoolException: Could not get a resource from the pool; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Redis connection failed; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to [redis://********************#xxxx:6388?timeout=1s,]
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.translateException(LettuceConnectionFactory.java:1553)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.getConnection(LettuceConnectionFactory.java:1461)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$SharedConnection.getNativeConnection(LettuceConnectionFactory.java:1247)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$SharedConnection.getConnection(LettuceConnectionFactory.java:1230)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getClusterConnection(LettuceConnectionFactory.java:378)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getConnection(LettuceConnectionFactory.java:355)
at org.springframework.data.redis.core.RedisConnectionUtils.fetchConnection(RedisConnectionUtils.java:193)
at org.springframework.data.redis.core.RedisConnectionUtils.doGetConnection(RedisConnectionUtils.java:144)
at org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:105)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:209)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:189)
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:96)
at org.springframework.data.redis.core.DefaultValueOperations.set(DefaultValueOperations.java:236)
at com.jazz.neptune.biz.util.RedisUtil.set(RedisUtil.java:139)
at com.jazz.neptune.webscoket.goodway.WebSocketUtilManager.updateTableDetail(WebSocketUtilManager.java:471)
at com.jazz.neptune.webscoket.goodway.WebSocketUtilManager.handleMsg(WebSocketUtilManager.java:375)
at com.jazz.neptune.webscoket.goodway.WebSocketUtilManager.onMessage(WebSocketUtilManager.java:153)
at com.jazz.neptune.webscoket.goodway.websocket.dispatcher.MainThreadResponseDelivery.onMessage(MainThreadResponseDelivery.java:150)
at com.jazz.neptune.webscoket.goodway.websocket.dispatcher.DefaultResponseDispatcher.onMessage(DefaultResponseDispatcher.java:32)
at com.jazz.neptune.webscoket.goodway.websocket.response.TextResponse.onResponse(TextResponse.java:32)
at com.jazz.neptune.webscoket.goodway.websocket.dispatcher.EngineThread.run(EngineThread.java:41)
Caused by: org.springframework.data.redis.connection.PoolException: Could not get a resource from the pool; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Redis connection failed; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to [redis://********************#xxxxxx:6388?timeout=1s, ]
at org.springframework.data.redis.connection.lettuce.LettucePoolingConnectionProvider.getConnection(LettucePoolingConnectionProvider.java:109)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.getConnection(LettuceConnectionFactory.java:1459)
... 19 common frames omitted
Caused by: org.springframework.data.redis.RedisConnectionFailureException: Redis connection failed; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to [redis://********************#xxxxx:6388?timeout=1s,]
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:66)
at org.springframework.data.redis.connection.lettuce.LettuceFutureUtils.join(LettuceFutureUtils.java:74)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionProvider.getConnection(LettuceConnectionProvider.java:53)
at org.springframework.data.redis.connection.lettuce.LettucePoolingConnectionProvider.lambda$null$0(LettucePoolingConnectionProvider.java:97)
at io.lettuce.core.support.ConnectionPoolSupport$RedisPooledObjectFactory.create(ConnectionPoolSupport.java:211)
at io.lettuce.core.support.ConnectionPoolSupport$RedisPooledObjectFactory.create(ConnectionPoolSupport.java:201)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:58)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:889)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:424)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:349)
at io.lettuce.core.support.ConnectionPoolSupport$1.borrowObject(ConnectionPoolSupport.java:122)
at io.lettuce.core.support.ConnectionPoolSupport$1.borrowObject(ConnectionPoolSupport.java:117)
at org.springframework.data.redis.connection.lettuce.LettucePoolingConnectionProvider.getConnection(LettucePoolingConnectionProvider.java:103)
... 20 common frames omitted
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to [redis://********************#xxxxx:6388?timeout=1s, ]
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.cluster.RedisClusterClient.lambda$transformAsyncConnectionException$40(RedisClusterClient.java:1157)
at io.lettuce.core.DefaultConnectionFuture.lambda$thenCompose$1(DefaultConnectionFuture.java:253)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2251)
at io.lettuce.core.DefaultConnectionFuture.thenCompose(DefaultConnectionFuture.java:250)
at io.lettuce.core.cluster.RedisClusterClient.transformAsyncConnectionException(RedisClusterClient.java:1154)
at io.lettuce.core.cluster.RedisClusterClient.connectAsync(RedisClusterClient.java:417)
at org.springframework.data.redis.connection.lettuce.ClusterConnectionProvider.getConnectionAsync(ClusterConnectionProvider.java:108)
at org.springframework.data.redis.connection.lettuce.ClusterConnectionProvider.getConnectionAsync(ClusterConnectionProvider.java:40)
... 31 common frames omitted
Caused by: java.lang.IllegalStateException: Cannot connect, Event executor group is terminated.
at io.lettuce.core.AbstractRedisClient.initializeChannelAsync(AbstractRedisClient.java:360)
at io.lettuce.core.cluster.RedisClusterClient.connectStatefulAsync(RedisClusterClient.java:752)
at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:658)
at io.lettuce.core.cluster.RedisClusterClient.lambda$connectClusterAsync$7(RedisClusterClient.java:639)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:100)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:100)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:100)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:100)
at reactor.core.publisher.Operators.error(Operators.java:198)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:48)
at reactor.core.publisher.Mono.subscribe(Mono.java:4361)
at reactor.core.publisher.Mono.subscribeWith(Mono.java:4476)
at reactor.core.publisher.Mono.toFuture(Mono.java:4881)
at io.lettuce.core.cluster.RedisClusterClient.connectClusterAsync(RedisClusterClient.java:652)
... 34 common frames omitted
Suppressed: java.lang.IllegalStateException: Cannot connect, Event executor group is terminated.
at io.lettuce.core.AbstractRedisClient.initializeChannelAsync(AbstractRedisClient.java:360)
at io.lettuce.core.cluster.RedisClusterClient.connectStatefulAsync(RedisClusterClient.java:752)
at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:658)
at io.lettuce.core.cluster.RedisClusterClient.lambda$connectClusterAsync$7(RedisClusterClient.java:639)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
... 43 common frames omitted
Suppressed: java.lang.IllegalStateException: Cannot connect, Event executor group is terminated.
at io.lettuce.core.AbstractRedisClient.initializeChannelAsync(AbstractRedisClient.java:360)
at io.lettuce.core.cluster.RedisClusterClient.connectStatefulAsync(RedisClusterClient.java:752)
at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:658)
at io.lettuce.core.cluster.RedisClusterClient.lambda$connectClusterAsync$7(RedisClusterClient.java:639)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
... 42 common frames omitted
Suppressed: java.lang.IllegalStateException: Cannot connect, Event executor group is terminated.
at io.lettuce.core.AbstractRedisClient.initializeChannelAsync(AbstractRedisClient.java:360)
at io.lettuce.core.cluster.RedisClusterClient.connectStatefulAsync(RedisClusterClient.java:752)
at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:658)
at io.lettuce.core.cluster.RedisClusterClient.lambda$connectClusterAsync$7(RedisClusterClient.java:639)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
... 41 common frames omitted
Suppressed: java.lang.IllegalStateException: Cannot connect, Event executor group is terminated.
at io.lettuce.core.AbstractRedisClient.initializeChannelAsync(AbstractRedisClient.java:360)
at io.lettuce.core.cluster.RedisClusterClient.connectStatefulAsync(RedisClusterClient.java:752)
at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:658)
at io.lettuce.core.cluster.RedisClusterClient.lambda$connectClusterAsync$7(RedisClusterClient.java:639)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
... 40 common frames omitted
Suppressed: java.lang.IllegalStateException: Cannot connect, Event executor group is terminated.
at io.lettuce.core.AbstractRedisClient.initializeChannelAsync(AbstractRedisClient.java:360)
at io.lettuce.core.cluster.RedisClusterClient.connectStatefulAsync(RedisClusterClient.java:752)
at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:658)
at io.lettuce.core.cluster.RedisClusterClient.lambda$connectClusterAsync$6(RedisClusterClient.java:635)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:44)
... 38 common frames omitted
Restart the redis-cluster.
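For context, the innermost cause ("Cannot connect, Event executor group is terminated") is raised by Lettuce when its shared client resources have already been shut down while the pool is still trying to open new connections, so the client side is usually involved as well. A rough sketch of the kind of pooled cluster connection factory the stack trace implies, with placeholder node addresses and values (this is an illustration, not the poster's actual configuration):

import java.time.Duration;
import java.util.Arrays;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;

public class RedisClusterClientConfig {

    // Hypothetical factory method; node addresses and pool sizes are placeholders.
    public LettuceConnectionFactory redisConnectionFactory() {
        RedisClusterConfiguration cluster =
                new RedisClusterConfiguration(Arrays.asList("redis-node-1:6388", "redis-node-2:6388"));

        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(16);

        LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .poolConfig(poolConfig)
                .commandTimeout(Duration.ofSeconds(1)) // matches the timeout=1s in the error URI
                .build();

        // The factory owns the Lettuce ClientResources. If it is destroyed (e.g. during
        // context shutdown) while callers still request connections, new connection
        // attempts fail with "Event executor group is terminated".
        return new LettuceConnectionFactory(cluster, clientConfig);
    }
}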

When using clickhouse-jdbc, I occasionally encounter this error while querying data. What could the reason be?

ru.yandex.clickhouse.except.ClickHouseUnknownException: ClickHouse exception, code: 1002, host: xxx.xxx.xxx.xxx, port: 8123; xxx.xxx.xxx.xxx:8123 failed to respond
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.getException(ClickHouseExceptionSpecifier.java:91)
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:55)
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:24)
at ru.yandex.clickhouse.ClickHouseStatementImpl.getInputStream(ClickHouseStatementImpl.java:633)
at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:117)
at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:100)
at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:95)
at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:90)
Caused by: org.apache.http.NoHttpResponseException: xxx.xxx.xxx.xxx:8123 failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at ru.yandex.clickhouse.ClickHouseStatementImpl.getInputStream(ClickHouseStatementImpl.java:614)
... 6 more
It's a well-known bug in the JDBC driver: https://github.com/ClickHouse/clickhouse-jdbc/issues/290
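The issue linked above describes the NoHttpResponseException as intermittent: the server closes an idle keep-alive connection that the client then tries to reuse. Until that is fixed in the driver, a common application-side mitigation is to retry the query; a minimal sketch under that assumption (the retry count, matching on the error message, and the omitted statement cleanup are all simplifications, not part of the driver's API):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class RetryingClickHouseQuery {

    // Retries a query a few times when the transient "failed to respond" error surfaces.
    static ResultSet executeWithRetry(Connection connection, String sql, int maxAttempts) throws SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                Statement statement = connection.createStatement(); // closing is left out for brevity
                return statement.executeQuery(sql);
            } catch (SQLException e) {
                // Only retry the transient case; rethrow everything else immediately.
                if (e.getMessage() == null || !e.getMessage().contains("failed to respond")) {
                    throw e;
                }
                last = e;
            }
        }
        throw last;
    }
}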

Delayed execution for 'Explain' queries on Prestosql clusters

I have two types of PrestoSQL clusters, one on AWS instances and one on Kubernetes. PrestoSQL on K8s has a weird issue with 'EXPLAIN' queries: they take a long time (~2-3 minutes) compared to 2-3 seconds on the instance-based cluster.
The query stays in "WAITING_FOR_RESOURCES" for about 2 minutes and then executes very quickly.
There is also an exception in the server logs:
2020-12-23T05:25:01.930Z ERROR Query-20201223_052431_00004_pxqak-276 io.prestosql.cost.CachingStatsProvider Error occurred when computing stats for query 20201223_052431_00004_pxqak
io.prestosql.spi.PrestoException: HIVE_METASTORE_ERROR
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getMetastorePartitionColumnStatistics(ThriftHiveMetastore.java:461)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getPartitionColumnStatistics(ThriftHiveMetastore.java:438)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getPartitionStatistics(ThriftHiveMetastore.java:389)
at io.prestosql.plugin.hive.metastore.thrift.BridgingHiveMetastore.getPartitionStatistics(BridgingHiveMetastore.java:110)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.lambda$loadPartitionColumnStatistics$6(CachingHiveMetastore.java:360)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.loadPartitionColumnStatistics(CachingHiveMetastore.java:353)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.access$100(CachingHiveMetastore.java:89)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:179)
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:207)
at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:81)
at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:48)
at io.prestosql.cost.SimpleStatsRule.calculate(SimpleStatsRule.java:39)
at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:82)
at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:70)
at io.prestosql.cost.CachingStatsProvider.getGroupStats(CachingStatsProvider.java:103)
at io.prestosql.cost.CachingStatsProvider.getStats(CachingStatsProvider.java:72)
at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:81)
at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:48)
at io.prestosql.cost.SimpleStatsRule.calculate(SimpleStatsRule.java:39)
at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:82)
at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:70)
at io.prestosql.cost.CachingStatsProvider.getGroupStats(CachingStatsProvider.java:103)
at io.prestosql.cost.CachingStatsProvider.getStats(CachingStatsProvider.java:72)
at io.prestosql.cost.CostCalculatorWithEstimatedExchanges.calculateJoinExchangeCost(CostCalculatorWithEstimatedExchanges.java:233)
at io.prestosql.cost.CostCalculatorWithEstimatedExchanges.calculateJoinCostWithoutOutput(CostCalculatorWithEstimatedExchanges.java:208)
at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.getJoinNodeWithCost(DetermineJoinDistributionType.java:180)
at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.addJoinsWithDifferentDistributions(DetermineJoinDistributionType.java:116)
at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.getCostBasedJoin(DetermineJoinDistributionType.java:98)
at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.apply(DetermineJoinDistributionType.java:74)
at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.apply(DetermineJoinDistributionType.java:49)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.transform(IterativeOptimizer.java:165)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreNode(IterativeOptimizer.java:140)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:105)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreChildren(IterativeOptimizer.java:190)
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4058)
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4021)
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4972)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getAll(CachingHiveMetastore.java:255)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:330)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.lambda$loadPartitionColumnStatistics$6(CachingHiveMetastore.java:360)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.loadPartitionColumnStatistics(CachingHiveMetastore.java:353)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.access$100(CachingHiveMetastore.java:89)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:179)
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:207)
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4058)
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4021)
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4972)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getAll(CachingHiveMetastore.java:255)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:330)
at io.prestosql.plugin.hive.HiveMetastoreClosure.getPartitionStatistics(HiveMetastoreClosure.java:88)
at io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore.getPartitionStatistics(SemiTransactionalHiveMetastore.java:256)
at io.prestosql.plugin.hive.statistics.MetastoreHiveStatisticsProvider.getPartitionsStatistics(MetastoreHiveStatisticsProvider.java:126)
at io.prestosql.plugin.hive.statistics.MetastoreHiveStatisticsProvider.lambda$new$0(MetastoreHiveStatisticsProvider.java:104)
at io.prestosql.plugin.hive.statistics.MetastoreHiveStatisticsProvider.getTableStatistics(MetastoreHiveStatisticsProvider.java:146)
at io.prestosql.plugin.hive.HiveMetadata.getTableStatistics(HiveMetadata.java:695)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:107)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreChildren(IterativeOptimizer.java:190)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:107)
at io.prestosql.sql.planner.iterative.IterativeOptimizer.optimize(IterativeOptimizer.java:96)
at io.prestosql.sql.planner.LogicalPlanner.plan(LogicalPlanner.java:196)
at io.prestosql.sql.analyzer.QueryExplainer.getLogicalPlan(QueryExplainer.java:182)
at io.prestosql.sql.analyzer.QueryExplainer.getPlan(QueryExplainer.java:121)
at io.prestosql.sql.rewrite.ExplainRewrite$Visitor.getQueryPlan(ExplainRewrite.java:137)
at io.prestosql.sql.rewrite.ExplainRewrite$Visitor.visitExplain(ExplainRewrite.java:115)
at io.prestosql.sql.rewrite.ExplainRewrite$Visitor.visitExplain(ExplainRewrite.java:65)
at io.prestosql.sql.tree.Explain.accept(Explain.java:80)
at io.prestosql.sql.tree.AstVisitor.process(AstVisitor.java:27)
at io.prestosql.sql.rewrite.ExplainRewrite.rewrite(ExplainRewrite.java:62)
at io.prestosql.sql.rewrite.StatementRewrite.rewrite(StatementRewrite.java:57)
at io.prestosql.sql.analyzer.Analyzer.analyze(Analyzer.java:80)
at io.prestosql.sql.analyzer.Analyzer.analyze(Analyzer.java:75)
at io.prestosql.execution.SqlQueryExecution.analyze(SqlQueryExecution.java:221)
at io.prestosql.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:180)
at io.prestosql.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:97)
at io.prestosql.execution.SqlQueryExecution$SqlQueryExecutionFactory.createQueryExecution(SqlQueryExecution.java:732)
at io.prestosql.dispatcher.LocalDispatchQueryFactory.lambda$createDispatchQuery$0(LocalDispatchQueryFactory.java:119)
at io.prestosql.$gen.Presto_330____20201223_050837_2.call(Unknown Source)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: MetaException(message:null)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_statistics_req_result$get_partitions_statistics_req_resultStandardScheme.read(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_statistics_req_result$get_partitions_statistics_req_resultStandardScheme.read(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_statistics_req_result.read(ThriftHiveMetastore.java)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions_statistics_req(ThriftHiveMetastore.java:4013)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions_statistics_req(ThriftHiveMetastore.java:4000)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastoreClient.getPartitionColumnStatistics(ThriftHiveMetastoreClient.java:227)
at io.prestosql.plugin.hive.metastore.thrift.FailureAwareThriftMetastoreClient.lambda$getPartitionColumnStatistics$16(FailureAwareThriftMetastoreClient.java:191)
at io.prestosql.plugin.hive.metastore.thrift.FailureAwareThriftMetastoreClient.runWithHandle(FailureAwareThriftMetastoreClient.java:394)
at io.prestosql.plugin.hive.metastore.thrift.FailureAwareThriftMetastoreClient.getPartitionColumnStatistics(FailureAwareThriftMetastoreClient.java:191)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.lambda$getMetastorePartitionColumnStatistics$15(ThriftHiveMetastore.java:453)
at io.prestosql.plugin.hive.metastore.thrift.ThriftMetastoreApiStats.lambda$wrap$0(ThriftMetastoreApiStats.java:42)
at io.prestosql.plugin.hive.util.RetryDriver.run(RetryDriver.java:130)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getMetastorePartitionColumnStatistics(ThriftHiveMetastore.java:451)
... 156 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
Suppressed: MetaException(message:null)
... 170 more
I tried changing the values of hive.metastore.partition-batch-size.max and hive.metastore-cache-ttl
It seems that in your "slow" deployment the metastore call get_partitions_statistics_req fails for some reason and is getting retried. The retries likely consume all of the "waiting" time. Since Presto by default ignores stats-calculation failures like this, the query eventually succeeds.
The failure is on the Hive side, so you need to check the metastore logs to understand its cause, since it is not propagated on the Presto side.
On the Presto side you can still apply some configuration changes as a workaround (see the sketch after this list):
disable stats for the Hive connector with the hive.table-statistics-enabled configuration property
reduce the time spent retrying metastore calls with the hive.metastore.thrift.client.max-retry-time configuration property
make your queries fail loudly with the global config property optimizer.ignore-stats-calculator-failures=false (unlikely to be what you want)
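As a rough sketch of where those properties live in a standard Presto deployment (property names are taken from the points above; the file locations and values are only examples):

# etc/catalog/hive.properties  (Hive connector)
hive.table-statistics-enabled=false
hive.metastore.thrift.client.max-retry-time=30s

# etc/config.properties  (coordinator)
optimizer.ignore-stats-calculator-failures=false

The first two are alternatives: either stop asking the metastore for statistics entirely, or keep them but cap the retry budget. The last one makes the underlying metastore failure surface as a query error instead of a slow EXPLAIN.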

Can't connect to MQ from Spring Boot with SSL

I am trying to establish a connection to the MQ service.
For the SSL connection I use these JVM options:
-Djavax.net.ssl.trustStore=/opt/app/key.jks
-Djavax.net.ssl.trustStorePassword=111111
application.properties config
ibm.mq.connName=10.20.31.25(1414)
ibm.mq.channel=OIV.CHAN
ibm.mq.queueManager=OIV
ibm.mq.user=TEST
ibm.mq.password=passw0rd
ibm.mq.ssl-cipher-spec=TLS_RSA_WITH_AES_256_CBC_SHA
When the application starts, everything is ok
INFO IbmJmsConfiguration - Initializing SSL context:
protocol=TLSv1.2, keyStore=null, trustStore=/opt/app/key.jks
INFO IbmJmsConfiguration - SSL context initialized:
keyManagers item(s) = 0, trustManagers item(s) = 1
But when making a request to MQ, I get an error
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2393;AMQ9771: SSL handshake failed. [1=java.lang.IllegalArgumentException[Unsupported ciphersuite SSL_RSA_WITH_AES_256_CBC_SHA256]
Full error text:
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2393;AMQ9771: SSL handshake
failed. [1=java.lang.IllegalArgumentException[Unsupported ciphersuite
SSL_RSA_WITH_AES_256_CBC_SHA],3=10.90.51.15/10.90.50.15:1414
(10.96.51.15),4=SSLSocket.createSocket,5=default]
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.makeSocketSecure(RemoteTCPConnection.java:2360)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.bindAndConnectSocket(RemoteTCPConnection.java:816)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.protocolConnect(RemoteTCPConnection.java:1381)
at com.ibm.mq.jmqi.remote.impl.RemoteConnection.connect(RemoteConnection.java:976)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getNewConnection(RemoteConnectionSpecification.java:553)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getSessionFromNewConnection(RemoteConnectionSpecification.java:233)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getSession(RemoteConnectionSpecification.java:141)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionPool.getSession(RemoteConnectionPool.java:127)
at com.ibm.mq.jmqi.remote.api.RemoteFAP$Connector.jmqiConnect(RemoteFAP.java:13302)
... 74 common frames omitted
Caused by: java.lang.IllegalArgumentException: Unsupported ciphersuite SSL_RSA_WITH_AES_256_CBC_SHA
at sun.security.ssl.CipherSuite.valueOf(CipherSuite.java:228)
at sun.security.ssl.CipherSuiteList.<init>(CipherSuiteList.java:79)
at sun.security.ssl.SSLSocketImpl.setEnabledCipherSuites(SSLSocketImpl.java:2491)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.makeSocketSecure(RemoteTCPConnection.java:2351)
... 82 common frames omitted
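The bottom-most cause comes from the JRE itself rejecting the cipher suite name before MQ is even involved, often because non-IBM JREs publish the suite under a TLS_ name rather than the SSL_ name the MQ classes request. A quick, MQ-independent way to see which names the running JVM actually accepts is to print them with the standard JSSE API (a small diagnostic sketch, not a fix):

import javax.net.ssl.SSLContext;

public class ListSupportedCipherSuites {

    public static void main(String[] args) throws Exception {
        // Prints every cipher suite name the default SSLContext of this JVM supports.
        for (String suite : SSLContext.getDefault().getSupportedSSLParameters().getCipherSuites()) {
            System.out.println(suite);
        }
    }
}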

Spring Boot shows an error with Tomcat 8 connection pool: Too many open files

Can anybody help?
I have an app that I built with Spring Boot, deployed on an external Tomcat 8 and using a connection pool defined in context.xml.
But when I call it from multiple threads, it shows me the error below after a few minutes.
Here are my code and configuration. Thanks to all!
context.xml
<Resource name="jdbc/ocp-api" auth="Container"
type="javax.sql.DataSource"
maxTotal="150"
max-active= "100"
max-idle= "80"
min-idle="8"
maxWaitMillis="2000"
username="postgres"
password="postgres"
driverClassName="org.postgresql.Driver"
url="jdbc:postgresql://<myIpServer>:5432/db_ocp_ago2017"/>
application.properties
spring.datasource.jndi-name=jdbc/ocp-api
spring.jpa.show-sql=true
spring.application.name=ocp-api
server.contextPath=/ocp-api
spring.jackson.date-format=yyyy-MM-dd HH:mm:ss.SSSZ
spring.jackson.joda-date-time-format=yyyy-MM-dd' 'HH:mm:ss.SSSZ
logging.level.org.springframework.web=ERROR
logging.level.com.mkyong=DEBUG
# Logging pattern for the console
logging.pattern.console= "%d{yyyy-MM-dd HH:mm:ss} - %msg%n"
# Logging pattern for file
logging.pattern.file= "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
logging.file=/var/supportcomm/ocp/log/ocp-api.log
flyway.baselineOnMigrate=true
26-Jul-2018 16:40:29.049 WARNING
[ContainerBackgroundProcessor[StandardEngine[Catalina]]]
org.apache.catalina.deploy.NamingResourcesImpl.cleanUp Failed to
retrieve JNDI naming context for container
[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/ocp-api]]
so no cleanup was performed for that container
javax.naming.NameNotFoundException: Name [comp/env] is not bound in
this Context. Unable to find [comp].
at org.apache.naming.NamingContext.lookup(NamingContext.java:824)
at org.apache.naming.NamingContext.lookup(NamingContext.java:172)
at org.apache.catalina.deploy.NamingResourcesImpl.cleanUp(NamingResourcesImpl.java:993)
at org.apache.catalina.deploy.NamingResourcesImpl.stopInternal(NamingResourcesImpl.java:975)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:221)
at org.apache.catalina.core.StandardContext.stopInternal(StandardContext.java:5551)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:221)
at org.apache.catalina.util.LifecycleBase.destroy(LifecycleBase.java:259)
at org.apache.catalina.core.ContainerBase.removeChild(ContainerBase.java:832)
at org.apache.catalina.startup.HostConfig.undeploy(HostConfig.java:1395)
at org.apache.catalina.startup.HostConfig.checkResources(HostConfig.java:1303)
at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1581)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:284)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:95)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1140)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1376)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1380)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1348)
at java.lang.Thread.run(Thread.java:745)
"2018-07-26 16:51:27 - Cannot forward to error page for request
[/occupation] as the response has already been committed. As a result,
the response may have the wrong status code. If your application is
running on WebSphere Application Server you may be able to resolve
this problem by setting
com.ibm.ws.webcontainer.invokeFlushAfterService to false
"org.apache.catalina.connector.ClientAbortException:
java.io.IOException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:396)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:426)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:345)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:320)
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:110)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1054)
at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:607)
at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.writeInternal(AbstractJackson2HttpMessageConverter.java:286)
at org.springframework.http.converter.AbstractGenericHttpMessageConverter.write(AbstractGenericHttpMessageConverter.java:106)
at org.springframework.web.servlet.mvc.method.annotation.AbstractMessageConverterMethodProcessor.writeWithMessageConverters(AbstractMessageConverterMethodProcessor.java:231)
at org.springframework.web.servlet.mvc.method.annotation.HttpEntityMethodProcessor.handleReturnValue(HttpEntityMethodProcessor.java:203)
at org.springframework.web.method.support.HandlerMethodReturnValueHandlerComposite.handleReturnValue(HandlerMethodReturnValueHandlerComposite.java:81)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:113)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.tomcat.util.net.NioChannel.write(NioChannel.java:124)
at org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:101)
at org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:172)
at org.apache.coyote.http11.InternalNioOutputBuffer.writeToSocket(InternalNioOutputBuffer.java:139)
at org.apache.coyote.http11.InternalNioOutputBuffer.addToBB(InternalNioOutputBuffer.java:197)
at org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:41)
at org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:320)
at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:118)
at org.apache.coyote.http11.AbstractOutputBuffer.doWrite(AbstractOutputBuffer.java:256)
at org.apache.coyote.Response.doWrite(Response.java:491)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:391)
... 66 common frames omitted
26-Jul-2018 16:52:20.989 SEVERE [http-nio-8280-Acceptor-0]
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run Socket accept
failed java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:693)
at java.lang.Thread.run(Thread.java:745)
[pool-24-thread-14] WARN
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: 0,
SQLState: 08001"2018-07-26 16:52:32 - SQL Error: 0, SQLState: null
2018-07-26 16:52:32 - SQL Error: 0, SQLState: null 2018-07-26 16:52:32
- SQL Error: 0, SQLState: null 2018-07-26 16:52:32 - SQL Error: 0, SQLState: null 2018-07-26 16:52:32 - SQL Error: 0, SQLState: null
2018-07-26 16:52:32 - SQL Error: 0, SQLState: null 2018-07-26 16:52:33
- java.io.FileNotFoundException: /vol/app/apache-tomcat-8.0.38/webapps/ocp-api/WEB-INF/lib/jackson-core-2.8.10.jar
(Too many open files) 2018-07-26 16:52:34 -
java.io.FileNotFoundException:
/vol/app/apache-tomcat-8.0.38/webapps/ocp-api/WEB-INF/lib/jackson-core-2.8.10.jar
(Too many open files) 2018-07-26 16:52:34 -
java.io.FileNotFoundException:
/vol/app/apache-tomcat-8.0.38/webapps/ocp-api/WEB-INF/lib/jackson-core-2.8.10.jar
(Too many open files)
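One thing worth noting about the Resource definition in context.xml above: it mixes attribute styles. maxTotal and maxWaitMillis are Commons DBCP 2 names (Tomcat 8's default pool factory), while max-active, max-idle and min-idle are not recognized attribute names and are most likely ignored, so the pool may not be bounded the way it looks. A sketch of the same resource with consistent DBCP 2 attribute names, keeping the values from the question and dropping the conflicting max-active in favour of maxTotal (whether this alone resolves the "Too many open files" errors is untested):

<Resource name="jdbc/ocp-api" auth="Container"
          type="javax.sql.DataSource"
          maxTotal="150"
          maxIdle="80"
          minIdle="8"
          maxWaitMillis="2000"
          username="postgres"
          password="postgres"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://<myIpServer>:5432/db_ocp_ago2017"/>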
