I'm trying to understand how Spring Boot shuts down a distributed Hazelcast cache. When I connect a second instance and then shut it down, I get the following logs:
First Instance (Still Running)
2021-09-20 15:34:47.994 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Initialized new cluster connection between /127.0.0.1:8084 and /127.0.0.1:60552
2021-09-20 15:34:54.048 INFO 11492 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
]
2021-09-20 15:35:11.087 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Connection[id=1, /127.0.0.1:8084->/127.0.0.1:60552, qualifier=null, endpoint=[localhost]:8085, alive=false, connectionType=MEMBER] closed. Reason: Connection closed by the other side
2021-09-20 15:35:11.092 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:13.126 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:15.285 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:17.338 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:17.450 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:19.474 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:19.474 WARN 11492 --- [cached.thread-3] c.h.i.n.tcp.TcpIpConnectionErrorHandler : [localhost]:8084 [dev] [4.0.2] Removing connection to endpoint [localhost]:8085 Cause => java.net.SocketException {Connection refused: no further information to address localhost/127.0.0.1:8085}, Error-Count: 5
2021-09-20 15:35:19.475 INFO 11492 --- [cached.thread-3] c.h.i.cluster.impl.MembershipManager : [localhost]:8084 [dev] [4.0.2] Removing Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
2021-09-20 15:35:19.477 INFO 11492 --- [cached.thread-3] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:1, ver:3} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
]
2021-09-20 15:35:19.478 INFO 11492 --- [cached.thread-7] c.h.t.TransactionManagerService : [localhost]:8084 [dev] [4.0.2] Committing/rolling-back live transactions of [localhost]:8085, UUID: 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
It seems that when I shut down the second instance, it does not correctly report to the first one that it is closing down. The first instance only logs a warning after it cannot connect for a couple of seconds, and the member is therefore removed from the cluster.
Second Instance (The one that was shut down)
2021-09-20 15:42:03.516 INFO 4900 --- [.ShutdownThread] com.hazelcast.instance.impl.Node : [localhost]:8085 [dev] [4.0.2] Running shutdown hook... Current state: ACTIVE
2021-09-20 15:42:03.520 INFO 4900 --- [ionShutdownHook] o.s.b.w.e.tomcat.GracefulShutdown : Commencing graceful shutdown. Waiting for active requests to complete
2021-09-20 15:42:03.901 INFO 4900 --- [tomcat-shutdown] o.s.b.w.e.tomcat.GracefulShutdown : Graceful shutdown complete
It seems that it is trying to run a shutdown hook, but the last state it reports is still "ACTIVE"; it never goes to "SHUTTING_DOWN" or "SHUT_DOWN" as mentioned in this article.
Config
pom.xml
...
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.4</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
...
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-cache</artifactId>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-all</artifactId>
        <version>4.0.2</version>
    </dependency>
</dependencies>
...
Just to add some context: I have the following application.yml
---
server:
  shutdown: graceful
And the following hazelcast.yaml
---
hazelcast:
  shutdown:
    policy: GRACEFUL
    shutdown.max.wait: 8
  network:
    port:
      auto-increment: true
      port-count: 20
      port: 8084
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - localhost:8084
The question
So my theory is that Spring Boot shuts Hazelcast down by terminating it instead of allowing it to shut down gracefully.
How can I make Spring Boot and Hazelcast shut down properly, so that the other instances recognize that it is shutting down rather than it just being "gone"?
There are two things at play here. The first is the real issue: the instance is terminated instead of being shut down gracefully. The other is seeing this correctly in the logs.
Hazelcast by default registers a shutdown hook that terminates the instance on JVM exit.
You can disable the shutdown hook completely by setting this property:
-Dhazelcast.shutdownhook.enabled=false
Alternatively, you could change the policy to graceful shutdown
-Dhazelcast.shutdownhook.policy=GRACEFUL
but this would result in Spring Boot's graceful shutdown (finishing in-flight requests) and the Hazelcast instance's shutdown running concurrently, which leads to issues.
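So disabling the hook is usually the safer option. As a sketch (assuming the declarative properties section of hazelcast.yaml is equivalent to passing the -D flag), the property could also go into your existing hazelcast.yaml instead of the command line:

hazelcast:
  properties:
    hazelcast.shutdownhook.enabled: false

With the hook disabled, the auto-configured HazelcastInstance bean should be shut down when the Spring application context closes, i.e. after the web server has finished its graceful shutdown.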
To see the logs correctly set the logging type to slf4j:
-Dhazelcast.logging.type=slf4j
Then you will see all the INFO logs from Hazelcast correctly, and changing the log level via
-Dlogging.level.com.hazelcast=TRACE
works.
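If you prefer configuration files over JVM flags, the same settings can (as far as I know) also be expressed declaratively. In hazelcast.yaml:

hazelcast:
  properties:
    hazelcast.logging.type: slf4j

and in application.yml:

logging:
  level:
    com.hazelcast: TRACE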
Related
My microservice, which runs migrations with Liquibase against MongoDB, doesn't execute the migration when the server starts up, even though Spring Boot is supposed to do this automatically.
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.liquibase</groupId>
        <artifactId>liquibase-core</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-mongodb</artifactId>
    </dependency>
    <dependency>
        <groupId>org.liquibase.ext</groupId>
        <artifactId>liquibase-mongodb</artifactId>
        <version>4.1.1</version>
    </dependency>
</dependencies>
spring:
  application:
    name: photo-app-liquibase
  datasource:
    driver-class-name: liquibase.ext.mongodb.database.MongoClientDriver
    url: mongodb://localhost:27017/photo-app
  liquibase:
    change-log: classpath:db/changelog/db.changelog-master.xml
The migration and the master file are located in the src/main/resources/db/changelog folder:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">

    <!-- Tried without "db/changelog" and appending "./" before already -->
    <include file="db/changelog/20201029215628_create-users-table.xml"/>

</databaseChangeLog>
But when starting up the server, there is no sign of the migration being run, and nothing changes in the database.
2020-10-30 08:32:04.928 INFO 12065 --- [ main] c.g.p.PhotoAppLiquibaseApplication : Starting PhotoAppLiquibaseApplication on gabriel with PID 12065 (/home/gabriel/Workspace/spring/spring-microservices-ii/photo-app-liquibase/target/classes started by gabriel in /home/gabriel/Workspace/spring/spring-microservices-ii/photo-app-discovery-service)
2020-10-30 08:32:04.933 INFO 12065 --- [ main] c.g.p.PhotoAppLiquibaseApplication : No active profile set, falling back to default profiles: default
2020-10-30 08:32:05.559 INFO 12065 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
2020-10-30 08:32:05.582 INFO 12065 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 14ms. Found 0 MongoDB repository interfaces.
2020-10-30 08:32:05.864 INFO 12065 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
2020-10-30 08:32:05.920 INFO 12065 --- [localhost:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:22}] to localhost:27017
2020-10-30 08:32:05.928 INFO 12065 --- [localhost:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=4278944}
2020-10-30 08:32:06.134 INFO 12065 --- [ main] c.g.p.PhotoAppLiquibaseApplication : Started PhotoAppLiquibaseApplication in 1.729 seconds (JVM running for 2.57)
Process finished with exit code 0
Liquibase doesn't support MongoDB out of the box, but there is an extension that adds MongoDB support:
=> How to use Liquibase-MongoDb-Spring-boot
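A short note on why nothing happens: as far as I can tell, Spring Boot's Liquibase auto-configuration only runs when a JDBC DataSource (or spring.liquibase.url) is available, so in a Mongo-only project the changelog is silently skipped. One possible workaround, sketched here against the plain Liquibase 4.x API (the runner class, bean name and the exact openDatabase overload are my assumptions, not taken from the linked answer), is to trigger the update yourself at startup:

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.resource.ClassLoaderResourceAccessor;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoLiquibaseRunner {

    @Bean
    public CommandLineRunner runLiquibase() {
        return args -> {
            // The liquibase-mongodb extension registers its Database implementation,
            // so DatabaseFactory can open a mongodb:// URL directly.
            Database database = DatabaseFactory.getInstance().openDatabase(
                    "mongodb://localhost:27017/photo-app", null, null, null,
                    new ClassLoaderResourceAccessor());
            Liquibase liquibase = new Liquibase(
                    "db/changelog/db.changelog-master.xml",
                    new ClassLoaderResourceAccessor(), database);
            liquibase.update("");   // apply all pending change sets
            database.close();
        };
    }
}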
I have this Spring Boot 1.5.4 project that needed a clustered database cache with Hazelcast, so these are the changes I made:
pom.xml:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-eureka-one</artifactId>
    <version>1.1</version>
</dependency>
<dependency>
    <groupId>org.mybatis.caches</groupId>
    <artifactId>mybatis-hazelcast</artifactId>
    <version>1.1.1</version>
</dependency>
Bean:
@Bean
public Config hazelcastConfig(EurekaClient eurekaClient) {
    EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
    Config config = new Config();
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    return config;
}
mapper.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.sjngm.blah.dao.mapper.AttributeMapper">
    <resultMap type="attribute" id="attributeResult">
        ...
    </resultMap>
    <cache type="org.mybatis.caches.hazelcast.HazelcastCache" eviction="LRU" size="100000" flushInterval="600000" />
    ...
I don't have a hazelcast.xml or eureka-client.properties.
It starts fine, but logs this:
2019-11-13 09:51:48,003 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Returning cached instance of singleton bean 'org.springframework.transaction.config.internalTransactionAdvisor'
2019-11-13 09:51:48,005 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Finished creating instance of bean 'hazelcastConfig'
2019-11-13 09:51:48,005 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Autowiring by type from bean name 'hazelcastInstance' via factory method to bean named 'hazelcastConfig'
2019-11-13 09:51:48,066 INFO [com.hazelcast.instance.DefaultAddressPicker] [localhost-startStop-1] [LOCAL] [dev] [3.7.7] Prefer IPv4 stack is true.
2019-11-13 09:51:48,124 INFO [com.hazelcast.instance.DefaultAddressPicker] [localhost-startStop-1] [LOCAL] [dev] [3.7.7] Picked [10.20.20.86]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Hazelcast 3.7.7 (20170404 - e3c56ea) starting at [10.20.20.86]:5701
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Configured Hazelcast Serialization version : 1
2019-11-13 09:51:48,341 INFO [com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Backpressure is disabled
2019-11-13 09:51:49,006 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Starting 4 partition threads
2019-11-13 09:51:49,008 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Starting 3 generic threads (1 dedicated for priority tasks)
2019-11-13 09:51:49,013 INFO [com.hazelcast.core.LifecycleService] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] [10.20.20.86]:5701 is STARTING
2019-11-13 09:51:49,014 INFO [com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThreadingModel] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads
2019-11-13 09:51:49,031 WARN [com.hazelcast.instance.Node] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] No join method is enabled! Starting standalone.
2019-11-13 09:51:49,063 INFO [com.hazelcast.core.LifecycleService] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] [10.20.20.86]:5701 is STARTED
2019-11-13 09:51:49,269 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Eagerly caching bean 'hazelcastInstance' to allow for resolving potential circular references
...
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class [C'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.time.Duration'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.net.URL'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.time.ZonedDateTime'
2019-11-13 09:51:50,655 INFO [com.hazelcast.config.XmlConfigLocator] [main] Loading 'hazelcast-default.xml' from classpath.
2019-11-13 09:51:50,812 INFO [com.hazelcast.instance.DefaultAddressPicker] [main] [LOCAL] [dev] [3.7.7] Prefer IPv4 stack is true.
2019-11-13 09:51:50,867 INFO [com.hazelcast.instance.DefaultAddressPicker] [main] [LOCAL] [dev] [3.7.7] Picked [10.20.20.86]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Hazelcast 3.7.7 (20170404 - e3c56ea) starting at [10.20.20.86]:5702
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Configured Hazelcast Serialization version : 1
2019-11-13 09:51:50,873 INFO [com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator] [main] [10.20.20.86]:5702 [dev] [3.7.7] Backpressure is disabled
2019-11-13 09:51:51,010 INFO [com.hazelcast.instance.Node] [main] [10.20.20.86]:5702 [dev] [3.7.7] Creating MulticastJoiner
2019-11-13 09:51:51,019 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [main] [10.20.20.86]:5702 [dev] [3.7.7] Starting 4 partition threads
2019-11-13 09:51:51,020 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [main] [10.20.20.86]:5702 [dev] [3.7.7] Starting 3 generic threads (1 dedicated for priority tasks)
2019-11-13 09:51:51,020 INFO [com.hazelcast.core.LifecycleService] [main] [10.20.20.86]:5702 [dev] [3.7.7] [10.20.20.86]:5702 is STARTING
2019-11-13 09:51:51,021 INFO [com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThreadingModel] [main] [10.20.20.86]:5702 [dev] [3.7.7] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads
2019-11-13 09:51:53,952 INFO [com.hazelcast.internal.cluster.impl.MulticastJoiner] [main] [10.20.20.86]:5702 [dev] [3.7.7]
Members [1] {
Member [10.20.20.86]:5702 - d29f6be8-a775-4804-bce3-8e0d3aaaab4b this
}
2019-11-13 09:51:53,953 WARN [com.hazelcast.instance.Node] [main] [10.20.20.86]:5702 [dev] [3.7.7] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
2019-11-13 09:51:53,954 INFO [com.hazelcast.core.LifecycleService] [main] [10.20.20.86]:5702 [dev] [3.7.7] [10.20.20.86]:5702 is STARTED
2019-11-13 09:51:50,917 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Parsed mapper file: 'file [C:\workspaces\projects\com.sjngm.blah.db\target\classes\sqlmap\AttributeMapper.xml]'
It logs the two warnings and I don't know why. At first it tries to instantiate a standalone instance, and then it plays along, uses Eureka, and "complains" about the already occupied port 5701.
IMHO the first block shouldn't be there at all, which would also mean the second warning wouldn't be printed. It looks like Hazelcast initialises itself first, and only then does Spring Boot create the @Bean.
What am I missing here?
As you disabled multicast, you have no join mechanism configured for Hazelcast. That is why it prints
No join method is enabled! Starting standalone.
Here is the link describing how to enable it for Eureka.
For older versions like 3.7, you can configure Eureka by giving the fully qualified class name of the discovery strategy:
<network>
    <discovery-strategies>
        <discovery-strategy class="com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy" enabled="true">
            <properties>
                <property name="namespace">hazelcast</property>
            </properties>
        </discovery-strategy>
    </discovery-strategies>
</network>
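Since you build the Config in a Java bean rather than in hazelcast.xml, roughly the same thing can be done programmatically. This is only a sketch under the assumption that Hazelcast 3.7.x and hazelcast-eureka-one 1.1 are on the classpath; it would replace your existing hazelcastConfig bean:

import com.hazelcast.config.Config;
import com.hazelcast.config.DiscoveryStrategyConfig;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.eureka.one.EurekaOneDiscoveryStrategyFactory;
import com.netflix.discovery.EurekaClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastEurekaConfig {

    @Bean
    public Config hazelcastConfig(EurekaClient eurekaClient) {
        EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
        Config config = new Config();
        // The discovery SPI has to be enabled explicitly in Hazelcast 3.x
        config.setProperty("hazelcast.discovery.enabled", "true");
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig().setEnabled(false);
        DiscoveryStrategyConfig strategy = new DiscoveryStrategyConfig(
                "com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy");
        strategy.addProperty("namespace", "hazelcast");
        join.getDiscoveryConfig().addDiscoveryStrategyConfig(strategy);
        return config;
    }
}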
P.S.: I suggest you upgrade to the latest Hazelcast, as 3.7.7 is pretty old.
The latest Hazelcast versions are listed here: https://hazelcast.org/download/
Sometimes my Spring Boot application shuts down for no clear reason.
I can only see the following output in the application log:
2019-09-02 01:39:16.199 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) is shutting down
2019-09-02 01:39:16.216 INFO 23535 --- [ActiveMQ Connection Executor: vm://localhost#0] o.s.j.c.CachingConnectionFactory : Encountered a JMSException - resetting the underlying JMS Connection
javax.jms.JMSException: peer (vm://localhost#1) stopped.
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:54) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.ActiveMQConnection.onAsyncException(ActiveMQConnection.java:1960) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.ActiveMQConnection.onException(ActiveMQConnection.java:1979) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:114) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.ResponseCorrelator.onException(ResponseCorrelator.java:126) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:114) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.vm.VMTransport.stop(VMTransport.java:233) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.stop(TransportFilter.java:72) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.stop(TransportFilter.java:72) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.ResponseCorrelator.stop(ResponseCorrelator.java:132) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.broker.TransportConnection.doStop(TransportConnection.java:1194) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at org.apache.activemq.broker.TransportConnection$4.run(TransportConnection.java:1160) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: org.apache.activemq.transport.TransportDisposedIOException: peer (vm://localhost#1) stopped.
... 9 common frames omitted
2019-09-02 01:39:16.218 INFO 23535 --- [Thread-7] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService 'taskScheduler'
2019-09-02 01:39:16.218 INFO 23535 --- [ActiveMQ ShutdownHook] o.a.activemq.broker.TransportConnector : Connector vm://localhost stopped
2019-09-02 01:39:16.225 INFO 23535 --- [Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2019-09-02 01:39:16.230 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) uptime 1 hour 27 minutes
2019-09-02 01:39:16.230 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) is shutdown
I have no idea what the cause of this shutdown is. What steps should I take to determine the reason?
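One generic way to narrow this down, sketched below in plain Java (not tied to ActiveMQ), is to register an extra shutdown hook very early that dumps all thread stacks. A thread that called System.exit() will still be visible in the dump, blocked inside Runtime.exit(); if nothing application-specific shows up, the shutdown most likely came from an external signal such as SIGTERM:

public class ShutdownDiagnostics {

    public static void install() {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.err.println("JVM shutdown detected, dumping all threads:");
            Thread.getAllStackTraces().forEach((thread, stack) -> {
                System.err.println("Thread: " + thread.getName());
                for (StackTraceElement frame : stack) {
                    System.err.println("    at " + frame);
                }
            });
        }, "shutdown-diagnostics"));
    }
}

Calling ShutdownDiagnostics.install() at the start of main(), before SpringApplication.run(), makes sure the hook is registered before any shutdown begins.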
I'm trying to use Hibernate with Spring and PostgreSQL, but I get lots of errors like:
org.postgresql.Driver : Connection error
ERROR 3424 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
I have put the PostgreSQL driver (org.postgresql.Driver) in my libs, but nothing helps; the error stays the same.
Edit:
2018-03-19 15:54:53.660 INFO 7640 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2018-03-19 15:54:54.664 WARN 7640 --- [ main] unknown.jul.logger : ConnectException occurred while connecting to localhost:5432
at com.ttmik.back.MainKt.main(main.kt:19) ~[classes/:na]
2018-03-19 15:54:54.675 ERROR 7640 --- [ main] org.postgresql.Driver : Connection error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Check if your PostgreSQL can be reached by using telnet. If you get something like:
telnet localhost 5432
Connecting To localhost...
Could not open connection to the host, on port 5432: Connection failed
then you most likely need to change listen_addresses to * in the postgresql.conf file, as per this answer on Server Fault.
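For reference, the relevant lines in postgresql.conf look roughly like this (illustrative values; PostgreSQL needs a restart after changing them):

listen_addresses = '*'        # or 'localhost' if the app runs on the same machine
port = 5432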
While subscribing to messages using DefaultJmsListenerContainerFactory in Spring and Camel with the failover ActiveMQ transport, I am continuously getting the INFO messages below.
2016-08-25 15:00:07,235 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
2016-08-25 15:00:08,265 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
2016-08-25 15:00:08,265 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
2016-08-25 15:00:09,296 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
2016-08-25 15:00:09,328 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
2016-08-25 15:00:10,299 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
2016-08-25 15:00:10,346 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
2016-08-25 15:00:11,318 [ActiveMQ Task-1] INFO transport.failover.FailoverTransport Successfully connected to tcp://localhost:61616
Is it possible to disable this INFO message on the console, or is there some time interval at which this message is printed?
I have tried some ActiveMQ transport connection options, but they didn't help.
The first thing that comes to my mind is that you could play around with the failover parameters, as documented here: http://activemq.apache.org/failover-transport-reference.html
We found that the connection pool is disabled by default when using Spring Boot and ActiveMQ. We set the following property in our application.yml file to enable the pool:
spring.activemq.pool.enabled: true
Setting the log level to WARN just masks the problem, as it will still be discarding and recreating the connections behind the scenes.
From the ActiveMQ Forum:
The default idleTimeout of the PooledConnectionFactory is only 30
seconds. And physical connections are borrowed in a round-robin fashion.
So if it takes the application more than 30 seconds to cycle through the 5
connections, you'll start observing connection churn, which seems exactly
what's happening in your case.
Is it possible that 30 secs elapsed between subsequent uses of the
JmsTemplate in your scenario?
So the solution should be to update the connection pool's idleTimeout.
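With the Spring Boot ActiveMQ pool settings, that should be possible via configuration; the property names below assume the standard spring.activemq.pool.* keys and the value is only an example:

spring:
  activemq:
    pool:
      enabled: true
      idle-timeout: 300000   # milliseconds in older Boot versions; newer versions also accept a Duration such as 5m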