Hazelcast with spring-boot keeps restarting

I am deploying a spring-boot application that uses Hazelcast to a Kubernetes cluster for test purposes. In this deployment (using kind) there is only one instance.
Hazelcast is configured like this:
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        service-dns: my-app-hs
This is the same configuration used in the real deployment; the only difference is that the real deployment has at least 3 instances.
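For the DNS_LOOKUP mode, the name in service-dns has to resolve inside the cluster, i.e. my-app-hs is meant to be a headless Service selecting the application pods. Roughly like this (the app: my-app selector label is just an assumption, it has to match the pod labels of the deployment):
apiVersion: v1
kind: Service
metadata:
  name: my-app-hs
spec:
  clusterIP: None           # headless, so the DNS name resolves to the pod IPs
  selector:
    app: my-app             # assumed label, must match the deployment's pod template
  ports:
    - name: hazelcast
      port: 5701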
The issue I now see is that the spring-boot application goes down and a new instance starts up again.
Here are the full logs (debug level for Hazelcast):
{"thread":"main","logger":"org.example.app.MyApplicationKt","message":"Starting MyApplicationKt using Java 11.0.15 on my-app-74975f549-jg9d6 with PID 1 (/app/classes started by root in /)","context":"default","severity":"INFO","time":"2022-06-07T11:40:34.449"}
{"thread":"main","logger":"org.example.app.MyApplicationKt","message":"The following 1 profile is active: \"production\"","context":"default","severity":"INFO","time":"2022-06-07T11:40:34.525"}
{"thread":"main","logger":"org.springframework.cloud.context.scope.GenericScope","message":"BeanFactory id=40671f22-f42d-3e93-8223-2e3e4f329f31","context":"default","severity":"INFO","time":"2022-06-07T11:40:38.584"}
{"thread":"main","logger":"com.hazelcast.internal.config.AbstractConfigLocator","message":"Loading 'hazelcast.yaml' from the classpath.","context":"default","severity":"INFO","time":"2022-06-07T11:40:39.8"}
{"thread":"main","logger":"com.hazelcast.internal.util.JavaVersion","message":"Detected runtime version: Java 11","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.648"}
{"thread":"main","logger":"com.hazelcast.instance.impl.HazelcastInstanceFactory","message":"Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:\n --add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED","context":"default","severity":"WARN","time":"2022-06-07T11:40:40.651"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Prefer IPv4 stack is true, prefer IPv6 addresses is false","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.727"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Trying to bind inet socket address: 0.0.0.0/0.0.0.0:5701","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.758"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Bind successful to inet socket address: /0:0:0:0:0:0:0:0:5701","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.76"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Picked [10.244.0.44]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.762"}
{"thread":"main","logger":"com.hazelcast.system.logo","message":"[10.244.0.44]:5701 [dev] [5.1.1] \n\t+ + o o o o---o o----o o o---o o o----o o--o--o\n\t+ + + + | | / \\ / | | / / \\ | | \n\t+ + + + + o----o o o o o----o | o o o o----o | \n\t+ + + + | | / \\ / | | \\ / \\ | | \n\t+ + o o o o o---o o----o o----o o---o o o o----o o ","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.812"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Copyright (c) 2008-2022, Hazelcast, Inc. All Rights Reserved.","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.813"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Hazelcast Platform 5.1.1 (20220317 - 5b5fa10) starting at [10.244.0.44]:5701","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.813"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Cluster name: dev","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.814"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Configured Hazelcast Serialization version: 1","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.814"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Integrity Checker is disabled. Fail-fast on corrupted executables will not be performed.\nTo enable integrity checker do one of the following: \n - Change member config using Java API: config.setIntegrityCheckerEnabled(true);\n - Change XML/YAML configuration property: Set hazelcast.integrity-checker.enabled to true\n - Add system property: -Dhz.integritychecker.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)\n - Add environment variable: HZ_INTEGRITYCHECKER_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.815"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] The Jet engine is disabled.\nTo enable the Jet engine on the members, do one of the following:\n - Change member config using Java API: config.getJetConfig().setEnabled(true)\n - Change XML/YAML configuration property: Set hazelcast.jet.enabled to true\n - Add system property: -Dhz.jet.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)\n - Add environment variable: HZ_JET_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.822"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.metrics.JetMetricsDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.106"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.observer.JetObserverDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.119"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.metrics.JetMetricsDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.136"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.observer.JetObserverDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.136"}
{"thread":"main","logger":"com.hazelcast.internal.metrics.impl.MetricsConfigHelper","message":"[10.244.0.44]:5701 [dev] [5.1.1] Collecting debug metrics and sending to diagnostics is disabled","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.15"}
{"thread":"main","logger":"com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator","message":"[10.244.0.44]:5701 [dev] [5.1.1] Backpressure is disabled","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.198"}
{"thread":"main","logger":"com.hazelcast.spi.impl.operationservice.impl.InboundResponseHandlerSupplier","message":"[10.244.0.44]:5701 [dev] [5.1.1] Running with 2 response threads","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.232"}
{"thread":"main","logger":"com.hazelcast.internal.server.tcp.LocalAddressRegistry","message":"[10.244.0.44]:5701 [dev] [5.1.1] LinkedAddresses{primaryAddress=[10.244.0.44]:5701, allLinkedAddresses=[[fe80:0:0:0:7049:95ff:fe69:3b4d%eth0]:5701, [10.244.0.44]:5701, [fe80:0:0:0:7049:95ff:fe69:3b4d]:5701, [0:0:0:0:0:0:0:1]:5701, [0:0:0:0:0:0:0:1%lo]:5701, [127.0.0.1]:5701]} are registered for the local member with local uuid=dad735b3-e933-4af2-9756-50bcb47a3491","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.326"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Kubernetes Discovery properties: { service-dns: my-app-hs, service-dns-timeout: 5, service-name: null, service-port: 0, service-label: null, service-label-value: true, namespace: null, pod-label: null, pod-label-value: null, resolve-not-ready-addresses: true, expose-externally-mode: AUTO, use-node-name-as-external-address: false, service-per-pod-label: null, service-per-pod-label-value: null, kubernetes-api-retries: 3, kubernetes-master: https://kubernetes.default.svc}","context":"default","severity":"INFO","time":"2022-06-07T11:40:41.54"}
{"thread":"main","logger":"com.hazelcast.spi.utils.RetryUtils","message":"Couldn't connect to the service, [1] retrying in 1 seconds...","context":"default","severity":"WARN","time":"2022-06-07T11:40:42.088"}
{"thread":"main","logger":"com.hazelcast.spi.utils.RetryUtils","message":"Couldn't connect to the service, [2] retrying in 2 seconds...","context":"default","severity":"WARN","time":"2022-06-07T11:40:43.621"}
{"thread":"main","logger":"com.hazelcast.spi.utils.RetryUtils","message":"Couldn't connect to the service, [3] retrying in 3 seconds...","context":"default","severity":"WARN","time":"2022-06-07T11:40:45.895"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Kubernetes Discovery activated with mode: DNS_LOOKUP","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.284"}
{"thread":"main","logger":"com.hazelcast.system.security","message":"[10.244.0.44]:5701 [dev] [5.1.1] \n🔒 Security recommendations and their status:\n ⚠️ Use a custom cluster name\n ✅ Disable member multicast discovery/join method\n ⚠️ Use advanced networking, separate client and member sockets\n ⚠️ Bind Server sockets to a single network interface (disable hazelcast.socket.server.bind.any)\n ✅ Disable scripting in the Management Center\n ✅ Disable console in the Management Center\n ⚠️ Enable Security (Enterprise)\n ⚠️ Use TLS communication protection (Enterprise)\n ⚠️ Enable auditlog (Enterprise)\nCheck the hazelcast-security-hardened.xml/yaml example config file to find why and how to configure these security related settings.\n","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.286"}
{"thread":"main","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Using Discovery SPI","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.366"}
{"thread":"main","logger":"com.hazelcast.cp.CPSubsystem","message":"[10.244.0.44]:5701 [dev] [5.1.1] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.372"}
{"thread":"main","logger":"com.hazelcast.internal.metrics.impl.MetricsService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Configuring metrics collection, collection interval=5 seconds, retention=5 seconds, publishers=[Management Center Publisher, JMX Publisher]","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.713"}
{"thread":"main","logger":"com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl","message":"[10.244.0.44]:5701 [dev] [5.1.1] Starting 8 partition threads and 5 generic threads (1 dedicated for priority tasks)","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.734"}
{"thread":"main","logger":"com.hazelcast.sql.impl.SqlServiceImpl","message":"[10.244.0.44]:5701 [dev] [5.1.1] Optimizer class \"com.hazelcast.jet.sql.impl.CalciteSqlOptimizer\" not found, falling back to com.hazelcast.sql.impl.optimizer.DisabledSqlOptimizer","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.741"}
{"thread":"main","logger":"com.hazelcast.internal.diagnostics.Diagnostics","message":"[10.244.0.44]:5701 [dev] [5.1.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.747"}
{"thread":"main","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is STARTING","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.756"}
{"thread":"main","logger":"com.hazelcast.internal.partition.InternalPartitionService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Adding Member [10.244.0.44]:5701 - dad735b3-e933-4af2-9756-50bcb47a3491 this","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.757"}
{"thread":"main","logger":"com.hazelcast.internal.networking.nio.NioNetworking","message":"[10.244.0.44]:5701 [dev] [5.1.1] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.777"}
{"thread":"main","logger":"com.hazelcast.internal.networking.nio.NioNetworking","message":"[10.244.0.44]:5701 [dev] [5.1.1] write through enabled:true","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.778"}
{"thread":"main","logger":"com.hazelcast.internal.networking.nio.NioNetworking","message":"[10.244.0.44]:5701 [dev] [5.1.1] IO threads selector mode is SELECT","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.778"}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/app/libs/hazelcast-5.1.1.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
{"thread":"main","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to null","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.821"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.833"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.845"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.865"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.906"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.987"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:50.148"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:50.469"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:50.97"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:51.471"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:51.971"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:52.472"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:52.973"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:53.474"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:53.976"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:54.477"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.impl.DiscoveryJoiner","message":"[10.244.0.44]:5701 [dev] [5.1.1] This node will assume master role since none of the possible members accepted join request.","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.979"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to [10.244.0.44]:5701","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.979"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.impl.MembershipManager","message":"[10.244.0.44]:5701 [dev] [5.1.1] Local member list join version is set to 1","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.979"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.impl.DiscoveryJoiner","message":"[10.244.0.44]:5701 [dev] [5.1.1] PostJoin master: [10.244.0.44]:5701, isMaster: true","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.98"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] \n\nMembers {size:1, ver:1} [\n\tMember [10.244.0.44]:5701 - dad735b3-e933-4af2-9756-50bcb47a3491 this\n]\n","context":"default","severity":"INFO","time":"2022-06-07T11:40:54.98"}
{"thread":"main","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is STARTED","context":"default","severity":"INFO","time":"2022-06-07T11:40:54.996"}
{"thread":"main","logger":"org.example.app.proxy.GatewayConfiguration","message":"Adding routes for */gateway with backend http://collaboration-server/co-unblu and identity provider 'microsoft'","context":"default","severity":"INFO","time":"2022-06-07T11:40:55.218"}
{"thread":"main","logger":"org.example.app.proxy.GatewayConfiguration","message":"Adding public (unprotected) route '/gateway/rest/product/all'","context":"default","severity":"INFO","time":"2022-06-07T11:40:55.225"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [After]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.433"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Before]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Between]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Cookie]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Header]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Host]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Method]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Path]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Query]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [ReadBody]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [RemoteAddr]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.436"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Weight]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.436"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [CloudFoundryRouteService]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.436"}
{"thread":"main","logger":"org.springframework.boot.web.embedded.netty.NettyWebServer","message":"Netty started on port 80","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.513"}
{"thread":"main","logger":"org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver","message":"Exposing 6 endpoint(s) beneath base path '/actuator'","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.654"}
{"thread":"main","logger":"org.springframework.boot.web.embedded.netty.NettyWebServer","message":"Netty started on port 8081","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.728"}
{"thread":"main","logger":"org.example.app.MyApplicationKt","message":"Started MyApplicationKt in 25.429 seconds (JVM running for 26.302)","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.817"}
{"thread":"hz.unruffled_matsumoto.cached.thread-2","logger":"com.hazelcast.internal.cluster.impl.MembershipManager","message":"[10.244.0.44]:5701 [dev] [5.1.1] Sending member list to the non-master nodes: \n\nMembers {size:1, ver:1} [\n\tMember [10.244.0.44]:5701 - dad735b3-e933-4af2-9756-50bcb47a3491 this\n]\n","context":"default","severity":"DEBUG","time":"2022-06-07T11:41:49.728"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Running shutdown hook... Current state: ACTIVE","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.095"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTTING_DOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.095"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Terminating forcefully...","context":"default","severity":"WARN","time":"2022-06-07T11:41:58.099"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to null","context":"default","severity":"DEBUG","time":"2022-06-07T11:41:58.1"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Shutting down connection manager...","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.1"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Shutting down node engine...","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.102"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to null","context":"default","severity":"DEBUG","time":"2022-06-07T11:41:58.108"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.NodeExtension","message":"[10.244.0.44]:5701 [dev] [5.1.1] Destroying node NodeExtension.","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.11"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Hazelcast Shutdown is completed in 12 ms.","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTDOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"SpringApplicationShutdownHook","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTTING_DOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"SpringApplicationShutdownHook","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Node is already shutting down... Waiting for shutdown process to complete...","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"SpringApplicationShutdownHook","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTDOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
Stream closed EOF for env-easing-grove/my-app-74975f549-jg9d6 (my-app)
Why does the application keep going down? Is it because there is only one instance?
Update:
I verified the services with k9s (:svc). The services my-app and my-app-hs refer to the same pod.
But when starting with 2 replicas they do not find each other, so the DNS lookup really does fail in this kind cluster.

In my case the issue was caused by a misconfigured livenessProbe: the wrong port was configured for the liveness probe. Changing the port to the Spring Actuator management port fixed the problem.
livenessProbe {
    httpGet {
        path = "/actuator/health/liveness"
        this.port = IntOrString(managementPort)
    }
    initialDelaySeconds = 60
    failureThreshold = 6
}
General suggestion if you come across the same problem: check your liveness and readiness probes.
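For comparison, the corrected probes in plain Kubernetes YAML look roughly like this, assuming the actuator management server listens on port 8081 as in the logs above (the application itself listens on port 80):
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8081              # the actuator/management port, not the application port 80
  initialDelaySeconds: 60
  failureThreshold: 6
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8081
  initialDelaySeconds: 30   # illustrative value
  failureThreshold: 6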

Related

Slow startup of hazelcast

I have a spring-boot application and I would like to use Hazelcast as a cache provider with spring-boot caching.
I have the following configuration in a hazelcast.yaml file:
hazelcast:
  cluster-name: message-handler-cluster
  network:
    join:
      auto-detection:
        enabled: false
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - 127.0.0.1
  integrity-checker:
    enabled: false
When I use the hazelcast-spring 5.1.1 Maven dependency, Hazelcast starts painfully slowly. Here is the startup log:
2022-04-19 09:59:30 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [LOCAL] [message-handler-cluster] [5.1.1] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [127.0.0.1]
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1]
+ + o o o o---o o----o o o---o o o----o o--o--o
+ + + + | | / \ / | | / / \ | |
+ + + + + o----o o o o o----o | o o o o----o |
+ + + + | | / \ / | | \ / \ | |
+ + o o o o o---o o----o o----o o---o o o o----o o
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Copyright (c) 2008-2022, Hazelcast, Inc. All Rights Reserved.
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Hazelcast Platform 5.1.1 (20220317 - 5b5fa10) starting at [127.0.0.1]:5701
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Cluster name: message-handler-cluster
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Integrity Checker is disabled. Fail-fast on corrupted executables will not be performed.
To enable integrity checker do one of the following:
- Change member config using Java API: config.setIntegrityCheckerEnabled(true);
- Change XML/YAML configuration property: Set hazelcast.integrity-checker.enabled to true
- Add system property: -Dhz.integritychecker.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)
- Add environment variable: HZ_INTEGRITYCHECKER_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] The Jet engine is disabled.
To enable the Jet engine on the members, do one of the following:
- Change member config using Java API: config.getJetConfig().setEnabled(true)
- Change XML/YAML configuration property: Set hazelcast.jet.enabled to true
- Add system property: -Dhz.jet.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)
- Add environment variable: HZ_JET_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)
2022-04-19 10:00:22 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Enable DEBUG/FINE log level for log category com.hazelcast.system.security or use -Dhazelcast.security.recommendations system property to see security recommendations and the status of current config.
2022-04-19 10:00:23 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Using TCP/IP discovery
2022-04-19 10:00:23 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2022-04-19 10:00:23 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2022-04-19 10:00:23 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5701 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/C:/Users/ZC15PL/.m2/repository/com/hazelcast/hazelcast/5.1.1/hazelcast-5.1.1.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-04-19 10:00:25 INFO (hz.upbeat_engelbart.cached.thread-2) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5703 is added to the blacklist.
2022-04-19 10:00:25 INFO (hz.upbeat_engelbart.cached.thread-1) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5702 is added to the blacklist.
2022-04-19 10:00:26 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - 01db03fc-2dc4-45a6-813c-80b843d1e1b4 this
]
2022-04-19 10:00:26 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5701 is STARTED
Startup time is about 30-40 seconds.
When I use the hazelcast-spring 4.2.4 Maven dependency, Hazelcast starts very quickly. Here is the startup log:
2022-04-19 10:04:41 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2022-04-19 10:04:41 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [LOCAL] [message-handler-cluster] [4.2.4] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [127.0.0.1]
2022-04-19 10:04:41 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] Hazelcast 4.2.4 (20211220 - 25f0049) starting at [127.0.0.1]:5701
2022-04-19 10:04:42 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] Using TCP/IP discovery
2022-04-19 10:04:42 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2022-04-19 10:04:43 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2022-04-19 10:04:43 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5701 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/C:/Users/ZC15PL/.m2/repository/com/hazelcast/hazelcast/4.2.4/hazelcast-4.2.4.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-04-19 10:04:45 INFO (hz.vibrant_ganguly.cached.thread-3) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5703 is added to the blacklist.
2022-04-19 10:04:45 INFO (hz.vibrant_ganguly.cached.thread-2) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5702 is added to the blacklist.
2022-04-19 10:04:46 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - c810dae2-5697-4d3d-9c60-0a7ed89f66d7 this
]
2022-04-19 10:04:46 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5701 is STARTED
Startup time is 3-4 seconds.
How should I configure Hazelcast 5.1.1 to achieve the same very quick startup time, or what am I missing in the configuration?
Update:
More detailed (DEBUG) log with 5.1.1 (demo github project):
2022-04-19 16:44:42.425 DEBUG 14620 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [127.0.0.1]:5701 [demo-cluster] [5.1.1] Backpressure is disabled
2022-04-19 16:44:42.494 DEBUG 14620 --- [ main] h.s.i.o.i.InboundResponseHandlerSupplier : [127.0.0.1]:5701 [demo-cluster] [5.1.1] Running with 2 response threads
2022-04-19 16:45:35.617 DEBUG 14620 --- [ main] c.h.i.server.tcp.LocalAddressRegistry : [127.0.0.1]:5701 [demo-cluster] [5.1.1] LinkedAddresses{primaryAddress=[127.0.0.1]:5701, allLinkedAddresses=[[fe80:0:0:0:5582:4f40:ed57:9ab0%eth3]:5701, [fe80:0:0:0:e154:4b5e:dd8b:23aa%wlan4]:5701, [172.31.64.1]:5701, [fe80:0:0:0:f161:66b7:46bd:3d9c%eth26]:5701, [172.25.112.1]:5701, [fe80:0:0:0:e0b7:e6fc:65d7:29c9]:5701, [fe80:0:0:0:f161:66b7:46bd:3d9c]:5701, [fe80:0:0:0:e154:4b5e:dd8b:23aa]:5701, [fe80:0:0:0:150:ba24:fd1c:f8d6]:5701, [192.168.96.1]:5701, [fe80:0:0:0:f1d8:d5be:d257:80f4]:5701, [fe80:0:0:0:7d64:92ab:6a06:9db2%eth8]:5701, [fe80:0:0:0:b45f:145f:a925:afd%eth15]:5701, [fe80:0:0:0:81c6:d272:c19d:e398%eth51]:5701, [fe80:0:0:0:5918:ecac:57d2:bbc1%net0]:5701, [fe80:0:0:0:4cd7:74e4:1e3c:e686]:5701, [127.0.0.1]:5701, [fe80:0:0:0:4cd7:74e4:1e3c:e686%eth23]:5701, [fe80:0:0:0:e0b7:e6fc:65d7:29c9%eth2]:5701, [fe80:0:0:0:81c6:d272:c19d:e398]:5701, [fe80:0:0:0:1c97:db6c:48a5:dee5]:5701, [fe80:0:0:0:1c97:db6c:48a5:dee5%eth10]:5701, [192.168.160.1]:5701, [fe80:0:0:0:5582:4f40:ed57:9ab0]:5701, [fe80:0:0:0:f1d8:d5be:d257:80f4%eth31]:5701, [fe80:0:0:0:150:ba24:fd1c:f8d6%eth7]:5701, [fe80:0:0:0:20ca:afcd:2c09:c89a]:5701, [fe80:0:0:0:f145:d1f6:dc9b:a5a8]:5701, [fe80:0:0:0:b45f:145f:a925:afd]:5701, [fe80:0:0:0:e927:f676:20d4:509d%wlan3]:5701, [172.16.128.213]:5701, [fe80:0:0:0:20ca:afcd:2c09:c89a%eth21]:5701, [fe80:0:0:0:f145:d1f6:dc9b:a5a8%wlan0]:5701, [10.83.179.202]:5701, [fe80:0:0:0:e927:f676:20d4:509d]:5701, [192.168.224.1]:5701, [172.31.112.1]:5701, [0:0:0:0:0:0:0:1]:5701, [fe80:0:0:0:7d64:92ab:6a06:9db2]:5701, [fe80:0:0:0:5918:ecac:57d2:bbc1]:5701, [172.27.16.1]:5701]} are registered for the local member with local uuid=e291a410-c08f-498d-9cc1-445bf3999ace
2022-04-19 16:45:35.885 DEBUG 14620 --- [ main] com.hazelcast.system.security : [127.0.0.1]:5701 [demo-cluster] [5.1.1]
Cheers,
Zsolt
I'm the one who added this change in 5.1; I didn't expect it to take this long. With 5.1, we started to register all the addresses of the server sockets in a registry so that the Hazelcast member knows these addresses are its own. See: https://github.com/hazelcast/hazelcast/blob/bbcb69ae732a0717cbc5a28339c94dfd49a73493/hazelcast/src/main/java/com/hazelcast/internal/server/tcp/LocalAddressRegistry.java#L246-L261. As a result, when a server binds a socket to any local address, it iterates over all network interfaces and records their addresses in the registry as the member's own addresses, and this seems to take too long in your environment. Could you try the -Dhazelcast.socket.bind.any=false property to avoid binding to every interface for the member's server sockets? Otherwise it binds 0.0.0.0 and we iterate over all the interfaces.
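If you prefer to keep this in the Hazelcast configuration file instead of passing a JVM argument, the same flag can, as far as I know, also be set in the properties section of hazelcast.yaml, something like:
hazelcast:
  properties:
    hazelcast.socket.bind.any: false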

Hazelcast not shutting down gracefully in Spring Boot?

I'm trying to understand how Spring Boot shuts down a distributed Hazelcast cache. When I connect a second instance and then shut it down, I get the following logs:
First Instance (Still Running)
2021-09-20 15:34:47.994 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Initialized new cluster connection between /127.0.0.1:8084 and /127.0.0.1:60552
2021-09-20 15:34:54.048 INFO 11492 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
]
2021-09-20 15:35:11.087 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Connection[id=1, /127.0.0.1:8084->/127.0.0.1:60552, qualifier=null, endpoint=[localhost]:8085, alive=false, connectionType=MEMBER] closed. Reason: Connection closed by the other side
2021-09-20 15:35:11.092 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:13.126 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:15.285 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:17.338 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:17.450 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:19.474 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:19.474 WARN 11492 --- [cached.thread-3] c.h.i.n.tcp.TcpIpConnectionErrorHandler : [localhost]:8084 [dev] [4.0.2] Removing connection to endpoint [localhost]:8085 Cause => java.net.SocketException {Connection refused: no further information to address localhost/127.0.0.1:8085}, Error-Count: 5
2021-09-20 15:35:19.475 INFO 11492 --- [cached.thread-3] c.h.i.cluster.impl.MembershipManager : [localhost]:8084 [dev] [4.0.2] Removing Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
2021-09-20 15:35:19.477 INFO 11492 --- [cached.thread-3] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:1, ver:3} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
]
2021-09-20 15:35:19.478 INFO 11492 --- [cached.thread-7] c.h.t.TransactionManagerService : [localhost]:8084 [dev] [4.0.2] Committing/rolling-back live transactions of [localhost]:8085, UUID: 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
It seems that when I shut down the second instance, it does not correctly report to the first one that it is closing. We only get a warning after the first instance cannot connect to it for a couple of seconds, and it is therefore removed from the cluster.
Second Instance (the one that was shut down)
2021-09-20 15:42:03.516 INFO 4900 --- [.ShutdownThread] com.hazelcast.instance.impl.Node : [localhost]:8085 [dev] [4.0.2] Running shutdown hook... Current state: ACTIVE
2021-09-20 15:42:03.520 INFO 4900 --- [ionShutdownHook] o.s.b.w.e.tomcat.GracefulShutdown : Commencing graceful shutdown. Waiting for active requests to complete
2021-09-20 15:42:03.901 INFO 4900 --- [tomcat-shutdown] o.s.b.w.e.tomcat.GracefulShutdown : Graceful shutdown complete
It seems that it is trying to run a shutdown hook, but the last state it reports is still "ACTIVE"; it never goes to "SHUTTING_DOWN" or "SHUT_DOWN" as mentioned in this article.
Config
pom.xml
...
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.4</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
...
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-cache</artifactId>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-all</artifactId>
        <version>4.0.2</version>
    </dependency>
</dependencies>
...
Just to add some context, I have the following application.yml:
---
server:
  shutdown: graceful
And the following hazelcast.yaml:
---
hazelcast:
  shutdown:
    policy: GRACEFUL
    shutdown.max.wait: 8
  network:
    port:
      auto-increment: true
      port-count: 20
      port: 8084
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - localhost:8084
The question
So my theory is that Spring Boot shuts down Hazelcast by terminating it instead of allowing it to shut down gracefully.
How can I make Spring Boot and Hazelcast shut down properly, so that the other instances recognize that an instance is shutting down rather than just seeing it as "gone"?
There are two things at play here. The first is the real issue of terminating the instance instead of shutting it down gracefully. The other is seeing this correctly in the logs.
Hazelcast by default registers a shutdown hook that terminates the instance on JVM exit.
You can disable the shutdown hook completely by setting this property:
-Dhazelcast.shutdownhook.enabled=false
Alternatively, you could change the policy to graceful shutdown
-Dhazelcast.shutdownhook.policy=GRACEFUL
but this would result in Spring Boot's graceful shutdown (finishing serving the active requests) and the Hazelcast instance shutdown running concurrently, leading to issues.
To see the logs correctly, set the logging type to slf4j:
-Dhazelcast.logging.type=slf4j
Then you will see all the info logs from Hazelcast correctly, and changing the log level via
-Dlogging.level.com.hazelcast=TRACE
also works.
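If you would rather keep these settings next to the rest of your Hazelcast configuration instead of passing JVM arguments, they can also go into the properties section of the existing hazelcast.yaml, for example (a sketch, not the complete file):
hazelcast:
  properties:
    hazelcast.shutdownhook.enabled: false   # Spring Boot still shuts the instance down when the context closes
    hazelcast.logging.type: slf4j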

Hazelcast: Avoid Warning When Running a Cluster

I have this Spring-Boot 1.5.4 project that needed a clustered database cache with Hazelcast. So the changes I made are these:
pom.xml:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-eureka-one</artifactId>
    <version>1.1</version>
</dependency>
<dependency>
    <groupId>org.mybatis.caches</groupId>
    <artifactId>mybatis-hazelcast</artifactId>
    <version>1.1.1</version>
</dependency>
Bean:
@Bean
public Config hazelcastConfig(EurekaClient eurekaClient) {
    EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
    Config config = new Config();
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    return config;
}
mapper.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.sjngm.blah.dao.mapper.AttributeMapper">
    <resultMap type="attribute" id="attributeResult">
        ...
    </resultMap>
    <cache type="org.mybatis.caches.hazelcast.HazelcastCache" eviction="LRU" size="100000" flushInterval="600000" />
    ...
I don't have a hazelcast.xml or eureka-client.properties.
It starts fine, but logs this:
2019-11-13 09:51:48,003 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Returning cached instance of singleton bean 'org.springframework.transaction.config.internalTransactionAdvisor'
2019-11-13 09:51:48,005 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Finished creating instance of bean 'hazelcastConfig'
2019-11-13 09:51:48,005 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Autowiring by type from bean name 'hazelcastInstance' via factory method to bean named 'hazelcastConfig'
2019-11-13 09:51:48,066 INFO [com.hazelcast.instance.DefaultAddressPicker] [localhost-startStop-1] [LOCAL] [dev] [3.7.7] Prefer IPv4 stack is true.
2019-11-13 09:51:48,124 INFO [com.hazelcast.instance.DefaultAddressPicker] [localhost-startStop-1] [LOCAL] [dev] [3.7.7] Picked [10.20.20.86]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Hazelcast 3.7.7 (20170404 - e3c56ea) starting at [10.20.20.86]:5701
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Configured Hazelcast Serialization version : 1
2019-11-13 09:51:48,341 INFO [com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Backpressure is disabled
2019-11-13 09:51:49,006 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Starting 4 partition threads
2019-11-13 09:51:49,008 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Starting 3 generic threads (1 dedicated for priority tasks)
2019-11-13 09:51:49,013 INFO [com.hazelcast.core.LifecycleService] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] [10.20.20.86]:5701 is STARTING
2019-11-13 09:51:49,014 INFO [com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThreadingModel] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads
2019-11-13 09:51:49,031 WARN [com.hazelcast.instance.Node] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] No join method is enabled! Starting standalone.
2019-11-13 09:51:49,063 INFO [com.hazelcast.core.LifecycleService] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] [10.20.20.86]:5701 is STARTED
2019-11-13 09:51:49,269 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Eagerly caching bean 'hazelcastInstance' to allow for resolving potential circular references
...
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class [C'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.time.Duration'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.net.URL'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.time.ZonedDateTime'
2019-11-13 09:51:50,655 INFO [com.hazelcast.config.XmlConfigLocator] [main] Loading 'hazelcast-default.xml' from classpath.
2019-11-13 09:51:50,812 INFO [com.hazelcast.instance.DefaultAddressPicker] [main] [LOCAL] [dev] [3.7.7] Prefer IPv4 stack is true.
2019-11-13 09:51:50,867 INFO [com.hazelcast.instance.DefaultAddressPicker] [main] [LOCAL] [dev] [3.7.7] Picked [10.20.20.86]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Hazelcast 3.7.7 (20170404 - e3c56ea) starting at [10.20.20.86]:5702
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Configured Hazelcast Serialization version : 1
2019-11-13 09:51:50,873 INFO [com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator] [main] [10.20.20.86]:5702 [dev] [3.7.7] Backpressure is disabled
2019-11-13 09:51:51,010 INFO [com.hazelcast.instance.Node] [main] [10.20.20.86]:5702 [dev] [3.7.7] Creating MulticastJoiner
2019-11-13 09:51:51,019 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [main] [10.20.20.86]:5702 [dev] [3.7.7] Starting 4 partition threads
2019-11-13 09:51:51,020 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [main] [10.20.20.86]:5702 [dev] [3.7.7] Starting 3 generic threads (1 dedicated for priority tasks)
2019-11-13 09:51:51,020 INFO [com.hazelcast.core.LifecycleService] [main] [10.20.20.86]:5702 [dev] [3.7.7] [10.20.20.86]:5702 is STARTING
2019-11-13 09:51:51,021 INFO [com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThreadingModel] [main] [10.20.20.86]:5702 [dev] [3.7.7] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads
2019-11-13 09:51:53,952 INFO [com.hazelcast.internal.cluster.impl.MulticastJoiner] [main] [10.20.20.86]:5702 [dev] [3.7.7]
Members [1] {
Member [10.20.20.86]:5702 - d29f6be8-a775-4804-bce3-8e0d3aaaab4b this
}
2019-11-13 09:51:53,953 WARN [com.hazelcast.instance.Node] [main] [10.20.20.86]:5702 [dev] [3.7.7] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
2019-11-13 09:51:53,954 INFO [com.hazelcast.core.LifecycleService] [main] [10.20.20.86]:5702 [dev] [3.7.7] [10.20.20.86]:5702 is STARTED
2019-11-13 09:51:50,917 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Parsed mapper file: 'file [C:\workspaces\projects\com.sjngm.blah.db\target\classes\sqlmap\AttributeMapper.xml]'
It logs the two warnings and I don't know why. At first it tries to instantiate a standalone instance, and then it plays along, uses Eureka, and "complains" about the occupied port 5701.
IMHO the first block shouldn't be there at all, which would also mean the second warning would not be printed. It looks like Hazelcast initialises itself first and only then does Spring-Boot create the @Bean.
What am I missing here?
As you disabled multicast, you have no join method configured for Hazelcast. That is why it prints
No join method is enabled! Starting standalone.
Here is the link showing how to enable it for Eureka.
For older versions like 3.7, you can configure Eureka by giving the fully qualified class name:
<network>
    <discovery-strategies>
        <discovery-strategy class="com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy" enabled="true">
            <properties>
                <property name="namespace">hazelcast</property>
            </properties>
        </discovery-strategy>
    </discovery-strategies>
</network>
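Since you build the config in a Java @Bean and don't have a hazelcast.xml, here is a rough sketch of the equivalent programmatic configuration. Treat it as a sketch rather than a verified recipe: the hazelcast.discovery.enabled property and the namespace value are the ones mentioned in the discovery SPI / hazelcast-eureka-one documentation, so check them against the plugin version you actually use.
import com.hazelcast.config.Config;
import com.hazelcast.config.DiscoveryStrategyConfig;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.eureka.one.EurekaOneDiscoveryStrategyFactory;
import com.netflix.discovery.EurekaClient;
import org.springframework.context.annotation.Bean;

@Bean
public Config hazelcastConfig(EurekaClient eurekaClient) {
    EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
    Config config = new Config();
    // the Discovery SPI has to be switched on explicitly in Hazelcast 3.x
    config.setProperty("hazelcast.discovery.enabled", "true");
    JoinConfig join = config.getNetworkConfig().getJoin();
    join.getMulticastConfig().setEnabled(false);
    // register the Eureka strategy so a join method exists instead of starting standalone
    DiscoveryStrategyConfig eureka =
            new DiscoveryStrategyConfig("com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy");
    eureka.addProperty("namespace", "hazelcast");
    join.getDiscoveryConfig().addDiscoveryStrategyConfig(eureka);
    return config;
}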
P.S.: I suggest you upgrade to the latest Hazelcast, as 3.7.7 is pretty old.
The latest Hazelcast versions are listed here: https://hazelcast.org/download/
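If the duplicate startup really is Spring Boot auto-configuring one HazelcastInstance while your own bean creates a second (which would explain the occupied seed port 5701 and the bind to 5702), one option is to exclude the auto-configuration so that only your explicitly configured instance starts. A minimal sketch, assuming Spring Boot's HazelcastAutoConfiguration is on the classpath and the Eureka setup lives in hazelcast.xml; class and file names are illustrative:
import com.hazelcast.config.ClasspathXmlConfig;
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration;
import org.springframework.context.annotation.Bean;

// Keep a single, explicitly configured Hazelcast member by switching off
// Spring Boot's Hazelcast auto-configuration.
@SpringBootApplication(exclude = HazelcastAutoConfiguration.class)
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Bean(destroyMethod = "shutdown")
    public HazelcastInstance hazelcastInstance() {
        // Loads hazelcast.xml (e.g. with the Eureka discovery strategy above)
        // from the classpath and starts exactly one member.
        Config config = new ClasspathXmlConfig("hazelcast.xml");
        return Hazelcast.newHazelcastInstance(config);
    }
}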

Connection Refused when creating a two-node cluster in Apache NiFi

I'm facing a problem with a refused connection on the cluster node protocol port.
I'm using the following configs to create the two-node cluster:
For the first node (the manager):
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=10.129.140.22
nifi.web.http.port=3000
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=10000
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=localhost:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
For the second node (the slave):
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=9021
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=10001
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=10.129.140.22
nifi.cluster.load.balance.port=6343
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=10.129.140.22:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
The log files show the following:
For the slave:
2019-05-23 10:37:07,384 INFO [main] o.a.n.c.repository.FileSystemRepository Initializing FileSystemRepository with 'Always Sync' set to false
2019-05-23 10:37:07,541 INFO [main] o.apache.nifi.controller.FlowController Not enabling RAW Socket Site-to-Site functionality because nifi.remote.input.socket.port is not set
2019-05-23 10:37:07,546 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2019-05-23 10:37:07,591 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:07,658 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:07,693 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2019-05-23 10:37:07,697 INFO [main] o.apache.nifi.controller.FlowController The Election for Cluster Coordinator has already begun (Leader is localhost:10000). Will not register to be elected for this role until after connecting to the cluster and inheriting the cluster's flow.
2019-05-23 10:37:07,699 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:07,699 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2019-05-23 10:37:07,706 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:09,587 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1a6a4595{nifi-api,/nifi-api,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-api-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.9.2.war}
2019-05-23 10:37:09,850 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=77ms
2019-05-23 10:37:09,852 INFO [main] o.e.j.s.h.C._nifi_content_viewer No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,873 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#4b1b2255{nifi-content-viewer,/nifi-content-viewer,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-content-viewer-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-content-viewer-1.9.2.war}
2019-05-23 10:37:09,895 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:09,896 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2019-05-23 10:37:09,915 INFO [main] o.e.j.s.h.ContextHandler._nifi_docs No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,917 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#4965454c{nifi-docs,/nifi-docs,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-docs-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-docs-1.9.2.war}
2019-05-23 10:37:09,936 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=8ms
2019-05-23 10:37:09,955 INFO [main] o.e.j.server.handler.ContextHandler._ No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,957 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1e4a4ed5{nifi-error,/,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
2019-05-23 10:37:09,967 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#4518bffd{HTTP/1.1,[http/1.1]}{0.0.0.0:9021}
2019-05-23 10:37:09,967 INFO [main] org.eclipse.jetty.server.Server Started #28769ms
2019-05-23 10:37:09,978 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2019-05-23 10:37:09,982 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 10001
2019-05-23 10:37:10,026 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2019-05-23 10:37:10,071 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: localhost:9021
2019-05-23 10:37:10,073 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:10,074 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10000 due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:12,715 WARN [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Failed to determine which node is elected active Cluster Coordinator: ZooKeeper reports the address as localhost:10000, but there is no node with this address. Attempted to determine the node's information but failed to retrieve its information due to org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:12,720 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for localhost:9021 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:12,721 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for localhost:9021 -- Requesting that node connect to cluster
2019-05-23 10:37:12,721 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of localhost:9021 changed from NodeConnectionStatus[nodeId=localhost:9021, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=1] to NodeConnectionStatus[nodeId=localhost:9021, state=CONNECTING, updateId=3]
2019-05-23 10:37:15,075 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:15,076 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10000 due to: java.net.ConnectException: Connection refused (Connection refused)
For the manager:
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.io.tmpdir=/tmp
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.compiler=<NA>
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.name=Linux
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.arch=amd64
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.version=4.15.0-20-generic
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.name=root
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.home=/root
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.dir=/home/superman/nifi-1.9.2
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer tickTime set to 2000
2019-05-23 10:36:59,754 INFO [main] o.a.zookeeper.server.ZooKeeperServer minSessionTimeout set to -1
2019-05-23 10:36:59,754 INFO [main] o.a.zookeeper.server.ZooKeeperServer maxSessionTimeout set to -1
2019-05-23 10:36:59,855 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2019-05-23 10:36:59,903 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:36:59,950 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] o.a.zookeeper.server.ZooKeeperServer Client attempting to establish new session at /127.0.0.1:40388
2019-05-23 10:36:59,950 INFO [SyncThread:0] o.a.z.server.persistence.FileTxnLog Creating new log file: log.3c
2019-05-23 10:36:59,963 INFO [SyncThread:0] o.a.zookeeper.server.ZooKeeperServer Established session 0x16ae443f4130000 with negotiated timeout 4000 for client /127.0.0.1:40388
2019-05-23 10:36:59,975 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:36:59,998 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2019-05-23 10:37:00,003 INFO [main] o.apache.nifi.controller.FlowController The Election for Cluster Coordinator has already begun (Leader is localhost:10001). Will not register to be elected for this role until after connecting to the cluster and inheriting the cluster's flow.
2019-05-23 10:37:00,005 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:00,005 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2019-05-23 10:37:00,019 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] o.a.zookeeper.server.ZooKeeperServer Client attempting to establish new session at /127.0.0.1:40390
2019-05-23 10:37:00,020 INFO [SyncThread:0] o.a.zookeeper.server.ZooKeeperServer Established session 0x16ae443f4130001 with negotiated timeout 4000 for client /127.0.0.1:40390
2019-05-23 10:37:00,020 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:02,022 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1a05ff8e{nifi-api,/nifi-api,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-api-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.9.2.war}
2019-05-23 10:37:02,373 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=165ms
2019-05-23 10:37:02,375 INFO [main] o.e.j.s.h.C._nifi_content_viewer No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,401 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#251e2f4a{nifi-content-viewer,/nifi-content-viewer,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-content-viewer-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-content-viewer-1.9.2.war}
2019-05-23 10:37:02,419 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:02,420 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2019-05-23 10:37:02,421 INFO [main] o.e.j.s.h.ContextHandler._nifi_docs No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,441 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1abea1ed{nifi-docs,/nifi-docs,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-docs-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-docs-1.9.2.war}
2019-05-23 10:37:02,457 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:02,475 INFO [main] o.e.j.server.handler.ContextHandler._ No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,478 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#6f5288c5{nifi-error,/,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
2019-05-23 10:37:02,488 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#167ed1cf{HTTP/1.1,[http/1.1]}{10.129.140.22:3000}
2019-05-23 10:37:02,488 INFO [main] org.eclipse.jetty.server.Server Started #26145ms
2019-05-23 10:37:02,500 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2019-05-23 10:37:02,503 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 10000
2019-05-23 10:37:02,545 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2019-05-23 10:37:02,587 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:02,589 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10001; will use this address for sending heartbeat messages
2019-05-23 10:37:02,590 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10001 due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:04,001 INFO [SessionTracker] o.a.zookeeper.server.ZooKeeperServer Expiring session 0x16ae42f180d0003, timeout of 4000ms exceeded
2019-05-23 10:37:04,001 INFO [SessionTracker] o.a.zookeeper.server.ZooKeeperServer Expiring session 0x16ae42f180d0002, timeout of 4000ms exceeded
2019-05-23 10:37:05,026 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:05,028 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:05,028 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=0] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5]
2019-05-23 10:37:07,591 WARN [main] o.a.nifi.controller.StandardFlowService There is currently no Cluster Coordinator. This often happens upon restart of NiFi when running an embedded ZooKeeper. Will register this node to become the active Cluster Coordinator and will attempt to connect to cluster again
2019-05-23 10:37:07,594 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2019-05-23 10:37:07,612 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener#1d6dcdcb This node has been elected Leader for Role 'Cluster Coordinator'
2019-05-23 10:37:07,612 INFO [Leader Election Notification Thread-1] o.apache.nifi.controller.FlowController This node elected Active Cluster Coordinator
2019-05-23 10:37:07,668 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,668 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,669 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=1] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6]
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=2] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7]
2019-05-23 10:37:07,694 INFO [Process Cluster Protocol Request-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5]
2019-05-23 10:37:07,695 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,699 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,700 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=3] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8]
2019-05-23 10:37:07,701 INFO [Process Cluster Protocol Request-5] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7]
2019-05-23 10:37:07,702 INFO [Process Cluster Protocol Request-1] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 19834836-9bda-41b3-8fef-4a288d90c7bf (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 33 millis
2019-05-23 10:37:07,702 INFO [Process Cluster Protocol Request-5] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 85b0bb3f-c2a6-4dfd-abd6-e9df14710c4d (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 10 millis
2019-05-23 10:37:07,703 INFO [Process Cluster Protocol Request-3] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6]
2019-05-23 10:37:07,705 INFO [Process Cluster Protocol Request-3] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 80447901-4ad3-44e3-91ad-d9f075624eae (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 31 millis
2019-05-23 10:37:07,706 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,706 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,707 INFO [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 22cdceee-c01f-445f-a091-38812e878d10 (type=RECONNECTION_REQUEST, length=3095 bytes) from 10.129.140.22:3000 in 34 millis
2019-05-23 10:37:07,708 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,708 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,709 INFO [Process Cluster Protocol Request-4] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 8605cf39-2034-4ee2-92c4-0fbe54e97fb2 (type=RECONNECTION_REQUEST, length=3013 bytes) from 10.129.140.22:3000 in 27 millis
2019-05-23 10:37:07,712 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,712 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,725 INFO [Process Cluster Protocol Request-6] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 0ca55348-44eb-416b-91dd-3d80da4c5ebe (type=RECONNECTION_REQUEST, length=3013 bytes) from 10.129.140.22:3000 in 29 millis
2019-05-23 10:37:07,725 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,728 INFO [Process Cluster Protocol Request-7] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8]
2019-05-23 10:37:07,728 INFO [Process Cluster Protocol Request-7] o.a.n.c.p.impl.SocketProtocolListener Finished processing request c9b647d7-67ac-4d0a-833b-8a0a8cc0ba6d (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 3 millis
2019-05-23 10:37:07,728 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,725 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,732 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 4 heartbeats in 2 seconds, 708 millis
2019-05-23 10:37:07,732 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,733 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,734 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,735 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,736 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,736 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,748 INFO [Process Cluster Protocol Request-8] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 434daf63-1beb-4b82-9290-bb0da4e89b7f (type=RECONNECTION_REQUEST, length=2972 bytes) from 10.129.140.22:3000 in 16 millis
2019-05-23 10:37:07,749 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
Set these properties on both nodes:
nifi.web.http.host=<host>
nifi.cluster.node.address=<host>
Be careful about how visible this value is across network scopes:
nifi.zookeeper.connect.string=localhost:2181
e.g. here it is 'localhost', while on the other node you are using the real IP address.
The nodes share these addresses during replication, primary/coordinator node election, and flow election.
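For example, a sketch of the relevant lines, assuming the manager keeps 10.129.140.22 and the second node is reachable at the hypothetical address 10.129.140.23 (substitute the real one):
# manager (runs the embedded ZooKeeper)
nifi.web.http.host=10.129.140.22
nifi.cluster.node.address=10.129.140.22
nifi.zookeeper.connect.string=10.129.140.22:2181
# slave
nifi.web.http.host=10.129.140.23
nifi.cluster.node.address=10.129.140.23
nifi.zookeeper.connect.string=10.129.140.22:2181
With both nodes advertising a routable address instead of an empty host or localhost, the coordinator should no longer be registered in ZooKeeper as localhost:10000, so the slave can actually open the protocol socket to the manager.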

Unable to create a Hazelcast 3.0-RC1 cluster on Amazon EC2 machines

I am trying to run a Hazelcast cluster (version 3.0-RC1) on Amazon EC2 machines. I followed the example given in the documentation. The machines are not forming a cluster. Instead, I get the following message:
2013-07-16 17:58:41 com.hazelcast.system 11295 INFO [10.168.30.154]:5701 [hzmap]
Copyright (C) 2008-2013 Hazelcast.com
2013-07-16 17:58:41 com.hazelcast.instance.Node 11302 WARN [10.168.30.154]:5701 [hzmap] com.hazelcast.impl.cluster.TcpIpJoinerOverAWS
2013-07-16 17:58:41 com.hazelcast.core.LifecycleService 11304 INFO [10.168.30.154]:5701 [hzmap] Address[10.168.30.154]:5701 is STARTING
2013-07-16 17:58:41 com.hazelcast.instance.Node 11636 WARN [10.168.30.154]:5701 [hzmap] No join method is enabled! Starting standalone.
2013-07-16 17:58:41 com.hazelcast.core.LifecycleService 11739 INFO [10.168.30.154]:5701 [hzmap] Address[10.168.30.154]:5701 is STARTED
2013-07-16 17:58:42 com.hazelcast.partition.PartitionService 12812 INFO [10.168.30.154]:5701 [hzmap] Initializing cluster partition table first arrangement...
The bug has been resolved in RC-2.
It looks like there is a bug in Hazelcast 3.0-RC1: the Node#createJoiner() method is still using the old package name for the TcpIpJoinerOverAWS class.
Class clazz = Class.forName("com.hazelcast.impl.cluster.TcpIpJoinerOverAWS");
Instead, the current package name for this class is:
Class clazz = Class.forName("com.hazelcast.impl.TcpIpJoinerOverAWS");
This bug is solved in Hazelcast 3 final.
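For reference, on a fixed version the AWS join is enabled in hazelcast.xml roughly like this; access key, secret key, region and security group are placeholders:
<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <aws enabled="true">
            <access-key>my-access-key</access-key>
            <secret-key>my-secret-key</secret-key>
            <region>us-east-1</region>
            <security-group-name>hazelcast-group</security-group-name>
        </aws>
    </join>
</network>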
