Slow startup of hazelcast - spring-boot

I have a Spring Boot application, and I would like to use Hazelcast as the cache provider with Spring Boot caching.
I have the following configuration in a hazelcast.yaml file:
hazelcast:
  cluster-name: message-handler-cluster
  network:
    join:
      auto-detection:
        enabled: false
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - 127.0.0.1
  integrity-checker:
    enabled: false
When I use the hazelcast-spring 5.1.1 Maven dependency, Hazelcast starts painfully slowly. Here is the startup log:
2022-04-19 09:59:30 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [LOCAL] [message-handler-cluster] [5.1.1] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [127.0.0.1]
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1]
+ + o o o o---o o----o o o---o o o----o o--o--o
+ + + + | | / \ / | | / / \ | |
+ + + + + o----o o o o o----o | o o o o----o |
+ + + + | | / \ / | | \ / \ | |
+ + o o o o o---o o----o o----o o---o o o o----o o
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Copyright (c) 2008-2022, Hazelcast, Inc. All Rights Reserved.
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Hazelcast Platform 5.1.1 (20220317 - 5b5fa10) starting at [127.0.0.1]:5701
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Cluster name: message-handler-cluster
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Integrity Checker is disabled. Fail-fast on corrupted executables will not be performed.
To enable integrity checker do one of the following:
- Change member config using Java API: config.setIntegrityCheckerEnabled(true);
- Change XML/YAML configuration property: Set hazelcast.integrity-checker.enabled to true
- Add system property: -Dhz.integritychecker.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)
- Add environment variable: HZ_INTEGRITYCHECKER_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)
2022-04-19 09:59:30 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] The Jet engine is disabled.
To enable the Jet engine on the members, do one of the following:
- Change member config using Java API: config.getJetConfig().setEnabled(true)
- Change XML/YAML configuration property: Set hazelcast.jet.enabled to true
- Add system property: -Dhz.jet.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)
- Add environment variable: HZ_JET_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)
2022-04-19 10:00:22 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Enable DEBUG/FINE log level for log category com.hazelcast.system.security or use -Dhazelcast.security.recommendations system property to see security recommendations and the status of current config.
2022-04-19 10:00:23 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Using TCP/IP discovery
2022-04-19 10:00:23 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2022-04-19 10:00:23 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2022-04-19 10:00:23 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5701 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/C:/Users/ZC15PL/.m2/repository/com/hazelcast/hazelcast/5.1.1/hazelcast-5.1.1.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-04-19 10:00:25 INFO (hz.upbeat_engelbart.cached.thread-2) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5703 is added to the blacklist.
2022-04-19 10:00:25 INFO (hz.upbeat_engelbart.cached.thread-1) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5702 is added to the blacklist.
2022-04-19 10:00:26 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - 01db03fc-2dc4-45a6-813c-80b843d1e1b4 this
]
2022-04-19 10:00:26 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [5.1.1] [127.0.0.1]:5701 is STARTED
Startup time is about 30-40 seconds.
When I use the hazelcast-spring 4.2.4 Maven dependency, Hazelcast starts very quickly. Here is the startup log:
2022-04-19 10:04:41 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2022-04-19 10:04:41 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [LOCAL] [message-handler-cluster] [4.2.4] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [127.0.0.1]
2022-04-19 10:04:41 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] Hazelcast 4.2.4 (20211220 - 25f0049) starting at [127.0.0.1]:5701
2022-04-19 10:04:42 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] Using TCP/IP discovery
2022-04-19 10:04:42 WARN (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2022-04-19 10:04:43 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2022-04-19 10:04:43 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5701 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/C:/Users/ZC15PL/.m2/repository/com/hazelcast/hazelcast/4.2.4/hazelcast-4.2.4.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-04-19 10:04:45 INFO (hz.vibrant_ganguly.cached.thread-3) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5703 is added to the blacklist.
2022-04-19 10:04:45 INFO (hz.vibrant_ganguly.cached.thread-2) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5702 is added to the blacklist.
2022-04-19 10:04:46 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - c810dae2-5697-4d3d-9c60-0a7ed89f66d7 this
]
2022-04-19 10:04:46 INFO (main) [] [StandardLoggerFactory$StandardLogger] [] [] [127.0.0.1]:5701 [message-handler-cluster] [4.2.4] [127.0.0.1]:5701 is STARTED
Startup time is 3-4 seconds.
How should I configure Hazelcast 5.1.1 to achieve the same very quick startup time, or what am I missing in the configuration?
Update:
More detailed (DEBUG) log with 5.1.1 (demo GitHub project):
2022-04-19 16:44:42.425 DEBUG 14620 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [127.0.0.1]:5701 [demo-cluster] [5.1.1] Backpressure is disabled
2022-04-19 16:44:42.494 DEBUG 14620 --- [ main] h.s.i.o.i.InboundResponseHandlerSupplier : [127.0.0.1]:5701 [demo-cluster] [5.1.1] Running with 2 response threads
2022-04-19 16:45:35.617 DEBUG 14620 --- [ main] c.h.i.server.tcp.LocalAddressRegistry : [127.0.0.1]:5701 [demo-cluster] [5.1.1] LinkedAddresses{primaryAddress=[127.0.0.1]:5701, allLinkedAddresses=[[fe80:0:0:0:5582:4f40:ed57:9ab0%eth3]:5701, [fe80:0:0:0:e154:4b5e:dd8b:23aa%wlan4]:5701, [172.31.64.1]:5701, [fe80:0:0:0:f161:66b7:46bd:3d9c%eth26]:5701, [172.25.112.1]:5701, [fe80:0:0:0:e0b7:e6fc:65d7:29c9]:5701, [fe80:0:0:0:f161:66b7:46bd:3d9c]:5701, [fe80:0:0:0:e154:4b5e:dd8b:23aa]:5701, [fe80:0:0:0:150:ba24:fd1c:f8d6]:5701, [192.168.96.1]:5701, [fe80:0:0:0:f1d8:d5be:d257:80f4]:5701, [fe80:0:0:0:7d64:92ab:6a06:9db2%eth8]:5701, [fe80:0:0:0:b45f:145f:a925:afd%eth15]:5701, [fe80:0:0:0:81c6:d272:c19d:e398%eth51]:5701, [fe80:0:0:0:5918:ecac:57d2:bbc1%net0]:5701, [fe80:0:0:0:4cd7:74e4:1e3c:e686]:5701, [127.0.0.1]:5701, [fe80:0:0:0:4cd7:74e4:1e3c:e686%eth23]:5701, [fe80:0:0:0:e0b7:e6fc:65d7:29c9%eth2]:5701, [fe80:0:0:0:81c6:d272:c19d:e398]:5701, [fe80:0:0:0:1c97:db6c:48a5:dee5]:5701, [fe80:0:0:0:1c97:db6c:48a5:dee5%eth10]:5701, [192.168.160.1]:5701, [fe80:0:0:0:5582:4f40:ed57:9ab0]:5701, [fe80:0:0:0:f1d8:d5be:d257:80f4%eth31]:5701, [fe80:0:0:0:150:ba24:fd1c:f8d6%eth7]:5701, [fe80:0:0:0:20ca:afcd:2c09:c89a]:5701, [fe80:0:0:0:f145:d1f6:dc9b:a5a8]:5701, [fe80:0:0:0:b45f:145f:a925:afd]:5701, [fe80:0:0:0:e927:f676:20d4:509d%wlan3]:5701, [172.16.128.213]:5701, [fe80:0:0:0:20ca:afcd:2c09:c89a%eth21]:5701, [fe80:0:0:0:f145:d1f6:dc9b:a5a8%wlan0]:5701, [10.83.179.202]:5701, [fe80:0:0:0:e927:f676:20d4:509d]:5701, [192.168.224.1]:5701, [172.31.112.1]:5701, [0:0:0:0:0:0:0:1]:5701, [fe80:0:0:0:7d64:92ab:6a06:9db2]:5701, [fe80:0:0:0:5918:ecac:57d2:bbc1]:5701, [172.27.16.1]:5701]} are registered for the local member with local uuid=e291a410-c08f-498d-9cc1-445bf3999ace
2022-04-19 16:45:35.885 DEBUG 14620 --- [ main] com.hazelcast.system.security : [127.0.0.1]:5701 [demo-cluster] [5.1.1]
Cheers,
Zsolt

I'm the one who added this change in 5.1; I didn't expect it to take this long. In 5.1 we started to register all the addresses of the server sockets in a registry, so that the Hazelcast member knows these addresses are its own. See: https://github.com/hazelcast/hazelcast/blob/bbcb69ae732a0717cbc5a28339c94dfd49a73493/hazelcast/src/main/java/com/hazelcast/internal/server/tcp/LocalAddressRegistry.java#L246-L261. As a result, when a server socket is bound to any local address, Hazelcast iterates over all network interfaces and records their addresses in the registry as the member's own addresses, and this seems to take too long in your environment. Could you try the -Dhazelcast.socket.bind.any=false property to avoid binding the member's server sockets to any interface? Otherwise it binds 0.0.0.0 and we iterate over all the interfaces.
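For reference, here is a minimal sketch of setting that property programmatically instead of passing it on the command line, assuming a standard Spring Boot main class (the class name is just a placeholder):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MessageHandlerApplication {

    public static void main(String[] args) {
        // Equivalent to -Dhazelcast.socket.bind.any=false; must be set before the
        // embedded HazelcastInstance is created, so the member's server socket is not
        // bound to 0.0.0.0 and Hazelcast does not iterate over all network interfaces.
        System.setProperty("hazelcast.socket.bind.any", "false");
        SpringApplication.run(MessageHandlerApplication.class, args);
    }
}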

Related

hazelcast with spring-boot keeps restarting

I am deploying a Spring Boot application that uses Hazelcast to a Kubernetes cluster for test purposes. In this deployment (using kind) there is only one instance.
Hazelcast is configured like this:
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        service-dns: my-app-hs
This is the same configuration used in the real deployment; the only difference is that the real deployment has at least 3 instances.
The issue I now see is that the Spring Boot application goes down and a new instance starts up again.
Here the full logs (debug level for hazelcast):
{"thread":"main","logger":"org.example.app.MyApplicationKt","message":"Starting MyApplicationKt using Java 11.0.15 on my-app-74975f549-jg9d6 with PID 1 (/app/classes started by root in /)","context":"default","severity":"INFO","time":"2022-06-07T11:40:34.449"}
{"thread":"main","logger":"org.example.app.MyApplicationKt","message":"The following 1 profile is active: \"production\"","context":"default","severity":"INFO","time":"2022-06-07T11:40:34.525"}
{"thread":"main","logger":"org.springframework.cloud.context.scope.GenericScope","message":"BeanFactory id=40671f22-f42d-3e93-8223-2e3e4f329f31","context":"default","severity":"INFO","time":"2022-06-07T11:40:38.584"}
{"thread":"main","logger":"com.hazelcast.internal.config.AbstractConfigLocator","message":"Loading 'hazelcast.yaml' from the classpath.","context":"default","severity":"INFO","time":"2022-06-07T11:40:39.8"}
{"thread":"main","logger":"com.hazelcast.internal.util.JavaVersion","message":"Detected runtime version: Java 11","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.648"}
{"thread":"main","logger":"com.hazelcast.instance.impl.HazelcastInstanceFactory","message":"Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:\n --add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED","context":"default","severity":"WARN","time":"2022-06-07T11:40:40.651"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Prefer IPv4 stack is true, prefer IPv6 addresses is false","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.727"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Trying to bind inet socket address: 0.0.0.0/0.0.0.0:5701","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.758"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Bind successful to inet socket address: /0:0:0:0:0:0:0:0:5701","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.76"}
{"thread":"main","logger":"com.hazelcast.instance.AddressPicker","message":"[LOCAL] [dev] [5.1.1] Picked [10.244.0.44]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.762"}
{"thread":"main","logger":"com.hazelcast.system.logo","message":"[10.244.0.44]:5701 [dev] [5.1.1] \n\t+ + o o o o---o o----o o o---o o o----o o--o--o\n\t+ + + + | | / \\ / | | / / \\ | | \n\t+ + + + + o----o o o o o----o | o o o o----o | \n\t+ + + + | | / \\ / | | \\ / \\ | | \n\t+ + o o o o o---o o----o o----o o---o o o o----o o ","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.812"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Copyright (c) 2008-2022, Hazelcast, Inc. All Rights Reserved.","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.813"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Hazelcast Platform 5.1.1 (20220317 - 5b5fa10) starting at [10.244.0.44]:5701","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.813"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Cluster name: dev","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.814"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Configured Hazelcast Serialization version: 1","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:40.814"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] Integrity Checker is disabled. Fail-fast on corrupted executables will not be performed.\nTo enable integrity checker do one of the following: \n - Change member config using Java API: config.setIntegrityCheckerEnabled(true);\n - Change XML/YAML configuration property: Set hazelcast.integrity-checker.enabled to true\n - Add system property: -Dhz.integritychecker.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)\n - Add environment variable: HZ_INTEGRITYCHECKER_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.815"}
{"thread":"main","logger":"com.hazelcast.system","message":"[10.244.0.44]:5701 [dev] [5.1.1] The Jet engine is disabled.\nTo enable the Jet engine on the members, do one of the following:\n - Change member config using Java API: config.getJetConfig().setEnabled(true)\n - Change XML/YAML configuration property: Set hazelcast.jet.enabled to true\n - Add system property: -Dhz.jet.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)\n - Add environment variable: HZ_JET_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)","context":"default","severity":"INFO","time":"2022-06-07T11:40:40.822"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.metrics.JetMetricsDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.106"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.observer.JetObserverDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.119"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.metrics.JetMetricsDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.136"}
{"thread":"main","logger":"com.hazelcast.internal.util.ServiceLoader","message":"The class com.hazelcast.jet.impl.observer.JetObserverDataSerializerHook does not implement the expected interface com.hazelcast.nio.serialization.SerializerHook","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.136"}
{"thread":"main","logger":"com.hazelcast.internal.metrics.impl.MetricsConfigHelper","message":"[10.244.0.44]:5701 [dev] [5.1.1] Collecting debug metrics and sending to diagnostics is disabled","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.15"}
{"thread":"main","logger":"com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator","message":"[10.244.0.44]:5701 [dev] [5.1.1] Backpressure is disabled","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.198"}
{"thread":"main","logger":"com.hazelcast.spi.impl.operationservice.impl.InboundResponseHandlerSupplier","message":"[10.244.0.44]:5701 [dev] [5.1.1] Running with 2 response threads","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.232"}
{"thread":"main","logger":"com.hazelcast.internal.server.tcp.LocalAddressRegistry","message":"[10.244.0.44]:5701 [dev] [5.1.1] LinkedAddresses{primaryAddress=[10.244.0.44]:5701, allLinkedAddresses=[[fe80:0:0:0:7049:95ff:fe69:3b4d%eth0]:5701, [10.244.0.44]:5701, [fe80:0:0:0:7049:95ff:fe69:3b4d]:5701, [0:0:0:0:0:0:0:1]:5701, [0:0:0:0:0:0:0:1%lo]:5701, [127.0.0.1]:5701]} are registered for the local member with local uuid=dad735b3-e933-4af2-9756-50bcb47a3491","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:41.326"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Kubernetes Discovery properties: { service-dns: my-app-hs, service-dns-timeout: 5, service-name: null, service-port: 0, service-label: null, service-label-value: true, namespace: null, pod-label: null, pod-label-value: null, resolve-not-ready-addresses: true, expose-externally-mode: AUTO, use-node-name-as-external-address: false, service-per-pod-label: null, service-per-pod-label-value: null, kubernetes-api-retries: 3, kubernetes-master: https://kubernetes.default.svc}","context":"default","severity":"INFO","time":"2022-06-07T11:40:41.54"}
{"thread":"main","logger":"com.hazelcast.spi.utils.RetryUtils","message":"Couldn't connect to the service, [1] retrying in 1 seconds...","context":"default","severity":"WARN","time":"2022-06-07T11:40:42.088"}
{"thread":"main","logger":"com.hazelcast.spi.utils.RetryUtils","message":"Couldn't connect to the service, [2] retrying in 2 seconds...","context":"default","severity":"WARN","time":"2022-06-07T11:40:43.621"}
{"thread":"main","logger":"com.hazelcast.spi.utils.RetryUtils","message":"Couldn't connect to the service, [3] retrying in 3 seconds...","context":"default","severity":"WARN","time":"2022-06-07T11:40:45.895"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Kubernetes Discovery activated with mode: DNS_LOOKUP","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.284"}
{"thread":"main","logger":"com.hazelcast.system.security","message":"[10.244.0.44]:5701 [dev] [5.1.1] \n🔒 Security recommendations and their status:\n ⚠️ Use a custom cluster name\n ✅ Disable member multicast discovery/join method\n ⚠️ Use advanced networking, separate client and member sockets\n ⚠️ Bind Server sockets to a single network interface (disable hazelcast.socket.server.bind.any)\n ✅ Disable scripting in the Management Center\n ✅ Disable console in the Management Center\n ⚠️ Enable Security (Enterprise)\n ⚠️ Use TLS communication protection (Enterprise)\n ⚠️ Enable auditlog (Enterprise)\nCheck the hazelcast-security-hardened.xml/yaml example config file to find why and how to configure these security related settings.\n","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.286"}
{"thread":"main","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Using Discovery SPI","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.366"}
{"thread":"main","logger":"com.hazelcast.cp.CPSubsystem","message":"[10.244.0.44]:5701 [dev] [5.1.1] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.372"}
{"thread":"main","logger":"com.hazelcast.internal.metrics.impl.MetricsService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Configuring metrics collection, collection interval=5 seconds, retention=5 seconds, publishers=[Management Center Publisher, JMX Publisher]","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.713"}
{"thread":"main","logger":"com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl","message":"[10.244.0.44]:5701 [dev] [5.1.1] Starting 8 partition threads and 5 generic threads (1 dedicated for priority tasks)","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.734"}
{"thread":"main","logger":"com.hazelcast.sql.impl.SqlServiceImpl","message":"[10.244.0.44]:5701 [dev] [5.1.1] Optimizer class \"com.hazelcast.jet.sql.impl.CalciteSqlOptimizer\" not found, falling back to com.hazelcast.sql.impl.optimizer.DisabledSqlOptimizer","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.741"}
{"thread":"main","logger":"com.hazelcast.internal.diagnostics.Diagnostics","message":"[10.244.0.44]:5701 [dev] [5.1.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.747"}
{"thread":"main","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is STARTING","context":"default","severity":"INFO","time":"2022-06-07T11:40:49.756"}
{"thread":"main","logger":"com.hazelcast.internal.partition.InternalPartitionService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Adding Member [10.244.0.44]:5701 - dad735b3-e933-4af2-9756-50bcb47a3491 this","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.757"}
{"thread":"main","logger":"com.hazelcast.internal.networking.nio.NioNetworking","message":"[10.244.0.44]:5701 [dev] [5.1.1] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.777"}
{"thread":"main","logger":"com.hazelcast.internal.networking.nio.NioNetworking","message":"[10.244.0.44]:5701 [dev] [5.1.1] write through enabled:true","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.778"}
{"thread":"main","logger":"com.hazelcast.internal.networking.nio.NioNetworking","message":"[10.244.0.44]:5701 [dev] [5.1.1] IO threads selector mode is SELECT","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.778"}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/app/libs/hazelcast-5.1.1.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
{"thread":"main","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to null","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:49.821"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.833"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.845"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.865"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.906"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:49.987"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:50.148"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:50.469"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:50.97"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:51.471"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:51.971"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:52.472"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:52.973"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:53.474"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:53.976"}
{"thread":"main","logger":"com.hazelcast.spi.discovery.integration.DiscoveryService","message":"[10.244.0.44]:5701 [dev] [5.1.1] DNS lookup for serviceDns 'my-app-hs' failed: unknown host","context":"default","severity":"WARN","time":"2022-06-07T11:40:54.477"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.impl.DiscoveryJoiner","message":"[10.244.0.44]:5701 [dev] [5.1.1] This node will assume master role since none of the possible members accepted join request.","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.979"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to [10.244.0.44]:5701","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.979"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.impl.MembershipManager","message":"[10.244.0.44]:5701 [dev] [5.1.1] Local member list join version is set to 1","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.979"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.impl.DiscoveryJoiner","message":"[10.244.0.44]:5701 [dev] [5.1.1] PostJoin master: [10.244.0.44]:5701, isMaster: true","context":"default","severity":"DEBUG","time":"2022-06-07T11:40:54.98"}
{"thread":"main","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] \n\nMembers {size:1, ver:1} [\n\tMember [10.244.0.44]:5701 - dad735b3-e933-4af2-9756-50bcb47a3491 this\n]\n","context":"default","severity":"INFO","time":"2022-06-07T11:40:54.98"}
{"thread":"main","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is STARTED","context":"default","severity":"INFO","time":"2022-06-07T11:40:54.996"}
{"thread":"main","logger":"org.example.app.proxy.GatewayConfiguration","message":"Adding routes for */gateway with backend http://collaboration-server/co-unblu and identity provider 'microsoft'","context":"default","severity":"INFO","time":"2022-06-07T11:40:55.218"}
{"thread":"main","logger":"org.example.app.proxy.GatewayConfiguration","message":"Adding public (unprotected) route '/gateway/rest/product/all'","context":"default","severity":"INFO","time":"2022-06-07T11:40:55.225"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [After]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.433"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Before]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Between]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Cookie]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Header]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.434"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Host]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Method]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Path]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Query]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [ReadBody]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.435"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [RemoteAddr]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.436"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [Weight]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.436"}
{"thread":"main","logger":"org.springframework.cloud.gateway.route.RouteDefinitionRouteLocator","message":"Loaded RoutePredicateFactory [CloudFoundryRouteService]","context":"default","severity":"INFO","time":"2022-06-07T11:40:56.436"}
{"thread":"main","logger":"org.springframework.boot.web.embedded.netty.NettyWebServer","message":"Netty started on port 80","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.513"}
{"thread":"main","logger":"org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver","message":"Exposing 6 endpoint(s) beneath base path '/actuator'","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.654"}
{"thread":"main","logger":"org.springframework.boot.web.embedded.netty.NettyWebServer","message":"Netty started on port 8081","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.728"}
{"thread":"main","logger":"org.example.app.MyApplicationKt","message":"Started MyApplicationKt in 25.429 seconds (JVM running for 26.302)","context":"default","severity":"INFO","time":"2022-06-07T11:40:57.817"}
{"thread":"hz.unruffled_matsumoto.cached.thread-2","logger":"com.hazelcast.internal.cluster.impl.MembershipManager","message":"[10.244.0.44]:5701 [dev] [5.1.1] Sending member list to the non-master nodes: \n\nMembers {size:1, ver:1} [\n\tMember [10.244.0.44]:5701 - dad735b3-e933-4af2-9756-50bcb47a3491 this\n]\n","context":"default","severity":"DEBUG","time":"2022-06-07T11:41:49.728"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Running shutdown hook... Current state: ACTIVE","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.095"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTTING_DOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.095"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Terminating forcefully...","context":"default","severity":"WARN","time":"2022-06-07T11:41:58.099"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to null","context":"default","severity":"DEBUG","time":"2022-06-07T11:41:58.1"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Shutting down connection manager...","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.1"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Shutting down node engine...","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.102"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.internal.cluster.ClusterService","message":"[10.244.0.44]:5701 [dev] [5.1.1] Setting master address to null","context":"default","severity":"DEBUG","time":"2022-06-07T11:41:58.108"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.NodeExtension","message":"[10.244.0.44]:5701 [dev] [5.1.1] Destroying node NodeExtension.","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.11"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Hazelcast Shutdown is completed in 12 ms.","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"hz.ShutdownThread","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTDOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"SpringApplicationShutdownHook","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTTING_DOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"SpringApplicationShutdownHook","logger":"com.hazelcast.instance.impl.Node","message":"[10.244.0.44]:5701 [dev] [5.1.1] Node is already shutting down... Waiting for shutdown process to complete...","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
{"thread":"SpringApplicationShutdownHook","logger":"com.hazelcast.core.LifecycleService","message":"[10.244.0.44]:5701 [dev] [5.1.1] [10.244.0.44]:5701 is SHUTDOWN","context":"default","severity":"INFO","time":"2022-06-07T11:41:58.111"}
Stream closed EOF for env-easing-grove/my-app-74975f549-jg9d6 (my-app)
Why is the application going down all the time? Because there is only one instance?
Update:
I verified the services with k9s (:svc). The services my-app and my-app-hs refer to the same pod.
But when starting with 2 replicas they do not find each other, so the DNS lookup really does fail in this kind cluster.
In my case the problem was caused by a misconfigured livenessProbe: the wrong port was set for it. Changing the port to the Spring Actuator management port fixed the problem.
livenessProbe {
    httpGet {
        path = "/actuator/health/liveness"
        this.port = IntOrString(managamentPort)
    }
    initialDelaySeconds = 60
    failureThreshold = 6
}
General suggestion if you come across the same problem: check your liveness and readiness probes.

Hazelcast not shutting down gracefully in Spring Boot?

I'm trying to understand how Spring Boot shuts down the distributed Hazelcast cache. When I connect and then shut down a second instance, I get the following logs:
First Instance (Still Running)
2021-09-20 15:34:47.994 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Initialized new cluster connection between /127.0.0.1:8084 and /127.0.0.1:60552
2021-09-20 15:34:54.048 INFO 11492 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
]
2021-09-20 15:35:11.087 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Connection[id=1, /127.0.0.1:8084->/127.0.0.1:60552, qualifier=null, endpoint=[localhost]:8085, alive=false, connectionType=MEMBER] closed. Reason: Connection closed by the other side
2021-09-20 15:35:11.092 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:13.126 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:15.285 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:17.338 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:17.450 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:19.474 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:19.474 WARN 11492 --- [cached.thread-3] c.h.i.n.tcp.TcpIpConnectionErrorHandler : [localhost]:8084 [dev] [4.0.2] Removing connection to endpoint [localhost]:8085 Cause => java.net.SocketException {Connection refused: no further information to address localhost/127.0.0.1:8085}, Error-Count: 5
2021-09-20 15:35:19.475 INFO 11492 --- [cached.thread-3] c.h.i.cluster.impl.MembershipManager : [localhost]:8084 [dev] [4.0.2] Removing Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
2021-09-20 15:35:19.477 INFO 11492 --- [cached.thread-3] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:1, ver:3} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
]
2021-09-20 15:35:19.478 INFO 11492 --- [cached.thread-7] c.h.t.TransactionManagerService : [localhost]:8084 [dev] [4.0.2] Committing/rolling-back live transactions of [localhost]:8085, UUID: 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
It seems that when I shut down the second instance, it does not report to the first one that it is closing down. We get a warning after the first instance cannot connect to it for a couple of seconds, and it is therefore removed from the cluster.
Second Instance (the one that was shut down)
2021-09-20 15:42:03.516 INFO 4900 --- [.ShutdownThread] com.hazelcast.instance.impl.Node : [localhost]:8085 [dev] [4.0.2] Running shutdown hook... Current state: ACTIVE
2021-09-20 15:42:03.520 INFO 4900 --- [ionShutdownHook] o.s.b.w.e.tomcat.GracefulShutdown : Commencing graceful shutdown. Waiting for active requests to complete
2021-09-20 15:42:03.901 INFO 4900 --- [tomcat-shutdown] o.s.b.w.e.tomcat.GracefulShutdown : Graceful shutdown complete
It seems that it is trying to run a shutdown hook, but the last state it reports is still "ACTIVE"; it never goes to "SHUTTING_DOWN" or "SHUT_DOWN" as mentioned in this article.
Config
pom.xml
...
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.4</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
...
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-cache</artifactId>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-all</artifactId>
        <version>4.0.2</version>
    </dependency>
</dependencies>
...
Just to add some context, I have the following application.yml:
---
server:
  shutdown: graceful
And the following hazelcast.yaml
---
hazelcast:
  shutdown:
    policy: GRACEFUL
    shutdown.max.wait: 8
  network:
    port:
      auto-increment: true
      port-count: 20
      port: 8084
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - localhost:8084
The question
So my theory is that Spring Boot shuts down Hazelcast by terminating it instead of allowing it to shut down gracefully.
How can I make Spring Boot and Hazelcast shut down properly, so that the other instances recognize that it is shutting down rather than just seeing it "gone"?
There are two things at play here. The first is a real issue: the instance is terminated instead of being shut down gracefully. The other is seeing this correctly in the logs.
Hazelcast by default registers a shutdown hook that terminates the instance on JVM exit.
You can disable the shutdown hook completely by setting this property:
-Dhazelcast.shutdownhook.enabled=false
Alternatively, you could change the policy to a graceful shutdown:
-Dhazelcast.shutdownhook.policy=GRACEFUL
but this would result in Spring Boot's graceful shutdown (finishing in-flight requests) and the Hazelcast instance shutdown running concurrently, leading to issues.
To see the logs correctly, set the logging type to slf4j:
-Dhazelcast.logging.type=slf4j
Then you will see all the INFO logs from Hazelcast correctly, and changing the log level via
-Dlogging.level.com.hazelcast=TRACE
works as well.
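If you define the Hazelcast configuration yourself, the same properties can also be set programmatically. Below is a minimal sketch, assuming you expose a Config bean that loads the existing hazelcast.yaml (the class name and the use of ClasspathYamlConfig here are my assumptions, not part of the original setup):

import com.hazelcast.config.ClasspathYamlConfig;
import com.hazelcast.config.Config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastShutdownConfig {

    @Bean
    public Config hazelcastConfig() {
        // Load the existing hazelcast.yaml from the classpath, then override properties.
        Config config = new ClasspathYamlConfig("hazelcast.yaml");
        // Disable Hazelcast's own JVM shutdown hook; the Spring-managed
        // HazelcastInstance bean is then shut down when the application context closes.
        config.setProperty("hazelcast.shutdownhook.enabled", "false");
        // Route Hazelcast logging through SLF4J so the lifecycle messages
        // (SHUTTING_DOWN, SHUTDOWN) show up in the Spring Boot logs.
        config.setProperty("hazelcast.logging.type", "slf4j");
        return config;
    }
}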

Hazelcast: Avoid Warning When Running a Cluster

I have a Spring Boot 1.5.4 project that needed a clustered database cache with Hazelcast, so these are the changes I made:
pom.xml:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-eureka-one</artifactId>
    <version>1.1</version>
</dependency>
<dependency>
    <groupId>org.mybatis.caches</groupId>
    <artifactId>mybatis-hazelcast</artifactId>
    <version>1.1.1</version>
</dependency>
Bean:
@Bean
public Config hazelcastConfig(EurekaClient eurekaClient) {
    EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
    Config config = new Config();
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    return config;
}
mapper.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.sjngm.blah.dao.mapper.AttributeMapper">
    <resultMap type="attribute" id="attributeResult">
        ...
    </resultMap>
    <cache type="org.mybatis.caches.hazelcast.HazelcastCache" eviction="LRU" size="100000" flushInterval="600000" />
    ...
I don't have a hazelcast.xml or eureka-client.properties.
It starts fine, but logs this:
2019-11-13 09:51:48,003 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Returning cached instance of singleton bean 'org.springframework.transaction.config.internalTransactionAdvisor'
2019-11-13 09:51:48,005 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Finished creating instance of bean 'hazelcastConfig'
2019-11-13 09:51:48,005 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Autowiring by type from bean name 'hazelcastInstance' via factory method to bean named 'hazelcastConfig'
2019-11-13 09:51:48,066 INFO [com.hazelcast.instance.DefaultAddressPicker] [localhost-startStop-1] [LOCAL] [dev] [3.7.7] Prefer IPv4 stack is true.
2019-11-13 09:51:48,124 INFO [com.hazelcast.instance.DefaultAddressPicker] [localhost-startStop-1] [LOCAL] [dev] [3.7.7] Picked [10.20.20.86]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Hazelcast 3.7.7 (20170404 - e3c56ea) starting at [10.20.20.86]:5701
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
2019-11-13 09:51:48,142 INFO [com.hazelcast.system] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Configured Hazelcast Serialization version : 1
2019-11-13 09:51:48,341 INFO [com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Backpressure is disabled
2019-11-13 09:51:49,006 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Starting 4 partition threads
2019-11-13 09:51:49,008 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] Starting 3 generic threads (1 dedicated for priority tasks)
2019-11-13 09:51:49,013 INFO [com.hazelcast.core.LifecycleService] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] [10.20.20.86]:5701 is STARTING
2019-11-13 09:51:49,014 INFO [com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThreadingModel] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads
2019-11-13 09:51:49,031 WARN [com.hazelcast.instance.Node] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] No join method is enabled! Starting standalone.
2019-11-13 09:51:49,063 INFO [com.hazelcast.core.LifecycleService] [localhost-startStop-1] [10.20.20.86]:5701 [dev] [3.7.7] [10.20.20.86]:5701 is STARTED
2019-11-13 09:51:49,269 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] [localhost-startStop-1] Eagerly caching bean 'hazelcastInstance' to allow for resolving potential circular references
...
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class [C'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.time.Duration'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.net.URL'
2019-11-13 09:51:50,563 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Registered type handler: 'class java.time.ZonedDateTime'
2019-11-13 09:51:50,655 INFO [com.hazelcast.config.XmlConfigLocator] [main] Loading 'hazelcast-default.xml' from classpath.
2019-11-13 09:51:50,812 INFO [com.hazelcast.instance.DefaultAddressPicker] [main] [LOCAL] [dev] [3.7.7] Prefer IPv4 stack is true.
2019-11-13 09:51:50,867 INFO [com.hazelcast.instance.DefaultAddressPicker] [main] [LOCAL] [dev] [3.7.7] Picked [10.20.20.86]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Hazelcast 3.7.7 (20170404 - e3c56ea) starting at [10.20.20.86]:5702
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
2019-11-13 09:51:50,868 INFO [com.hazelcast.system] [main] [10.20.20.86]:5702 [dev] [3.7.7] Configured Hazelcast Serialization version : 1
2019-11-13 09:51:50,873 INFO [com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator] [main] [10.20.20.86]:5702 [dev] [3.7.7] Backpressure is disabled
2019-11-13 09:51:51,010 INFO [com.hazelcast.instance.Node] [main] [10.20.20.86]:5702 [dev] [3.7.7] Creating MulticastJoiner
2019-11-13 09:51:51,019 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [main] [10.20.20.86]:5702 [dev] [3.7.7] Starting 4 partition threads
2019-11-13 09:51:51,020 INFO [com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl] [main] [10.20.20.86]:5702 [dev] [3.7.7] Starting 3 generic threads (1 dedicated for priority tasks)
2019-11-13 09:51:51,020 INFO [com.hazelcast.core.LifecycleService] [main] [10.20.20.86]:5702 [dev] [3.7.7] [10.20.20.86]:5702 is STARTING
2019-11-13 09:51:51,021 INFO [com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThreadingModel] [main] [10.20.20.86]:5702 [dev] [3.7.7] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads
2019-11-13 09:51:53,952 INFO [com.hazelcast.internal.cluster.impl.MulticastJoiner] [main] [10.20.20.86]:5702 [dev] [3.7.7]
Members [1] {
Member [10.20.20.86]:5702 - d29f6be8-a775-4804-bce3-8e0d3aaaab4b this
}
2019-11-13 09:51:53,953 WARN [com.hazelcast.instance.Node] [main] [10.20.20.86]:5702 [dev] [3.7.7] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
2019-11-13 09:51:53,954 INFO [com.hazelcast.core.LifecycleService] [main] [10.20.20.86]:5702 [dev] [3.7.7] [10.20.20.86]:5702 is STARTED
2019-11-13 09:51:50,917 DEBUG [org.mybatis.spring.SqlSessionFactoryBean] [main] Parsed mapper file: 'file [C:\workspaces\projects\com.sjngm.blah.db\target\classes\sqlmap\AttributeMapper.xml]'
It logs the two warnings and I don't know why. At first it instantiates a standalone instance, and then it plays along, uses Eureka, and "complains" about the already-open port 5701.
IMHO the first block shouldn't be there at all, which would also mean the second warning is not printed. It looks like Hazelcast initialises itself first, and only then does Spring Boot create the @Bean.
What am I missing here?
As you disabled multicast, you have no joiner configured for Hazelcast. That is why it prints
No join method is enabled! Starting standalone.
Here is the link describing how to enable it for Eureka.
For older versions like 3.7, you can configure Eureka by giving the fully qualified class name:
<network>
    <discovery-strategies>
        <discovery-strategy class="com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy" enabled="true">
            <properties>
                <property name="namespace">hazelcast</property>
            </properties>
        </discovery-strategy>
    </discovery-strategies>
</network>
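Since the question configures Hazelcast through a @Bean rather than XML, roughly the same thing can be expressed with the Java Config API. Here is a sketch under that assumption, replacing the body of the existing hazelcastConfig bean (the "namespace" value is just the example from the XML above):

@Bean
public Config hazelcastConfig(EurekaClient eurekaClient) {
    EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
    Config config = new Config();
    // The Discovery SPI must be enabled explicitly, otherwise no joiner is created.
    config.setProperty("hazelcast.discovery.enabled", "true");
    JoinConfig join = config.getNetworkConfig().getJoin();
    join.getMulticastConfig().setEnabled(false);
    // Register the Eureka discovery strategy by its fully qualified class name,
    // mirroring the <discovery-strategy> element shown above.
    DiscoveryStrategyConfig eureka =
            new DiscoveryStrategyConfig("com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy");
    eureka.addProperty("namespace", "hazelcast");
    join.getDiscoveryConfig().addDiscoveryStrategyConfig(eureka);
    return config;
}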
P.S.: I suggest you upgrade to the latest Hazelcast, as 3.7.7 is pretty old. The latest Hazelcast versions are listed here: https://hazelcast.org/download/

Error in accessing SonarQube dashboard on localhost:9000

I was configuring SonarQube 6.1 on my PC, and initially I was able to view the SonarQube dashboard on localhost:9000. But after making the required changes in the configuration files (sonar and wrapper) and creating the database in SQL Server, I am NOT ABLE to see the Sonar dashboard on localhost:9000; it gives a network error (tcp_error).
SQL Server connection:
sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true.
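As a side check, the JDBC URL can be tested outside SonarQube with a small standalone program. This is only a sketch: it assumes the Microsoft JDBC driver jar (e.g. sqljdbc42.jar) is on the classpath, and integratedSecurity=true additionally needs the driver's native sqljdbc_auth.dll on java.library.path:
import java.sql.Connection;
import java.sql.DriverManager;

public class SonarJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Same JDBC URL as configured in sonar.properties.
        String url = "jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true";
        try (Connection connection = DriverManager.getConnection(url)) {
            System.out.println("Connected to: "
                    + connection.getMetaData().getDatabaseProductVersion());
        }
    }
}
If this fails, the problem is in the database connectivity rather than in SonarQube itself.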
Any hint will be appreciated.
I am attaching the logs here:
--> Wrapper Started as Console
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.02.14 19:22:49 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory C:\Sonarqube 6.1\temp
2018.02.14 19:22:49 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: C:\Program Files\Java\jdk1.8.0_152\jre\bin\java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Sonarqube 6.1\temp -javaagent:C:\Program Files\Java\jdk1.8.0_152\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/search/* org.sonar.search.SearchServer C:\Sonarqube 6.1\temp\sq-process2295132313122351201properties
2018.02.14 19:22:51 INFO es[][o.s.p.ProcessEntryPoint] Starting es
2018.02.14 19:22:51 INFO es[][o.s.s.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.02.14 19:22:51 INFO es[][o.elasticsearch.node] [sonarqube] version[2.3.3], pid[69296], build[218bdf1/2016-05-17T15:40:04Z]
2018.02.14 19:22:51 INFO es[][o.elasticsearch.node] [sonarqube] initializing ...
2018.02.14 19:22:51 INFO es[][o.e.plugins] [sonarqube] modules [], plugins [], sites []
2018.02.14 19:22:51 INFO es[][o.elasticsearch.env] [sonarqube] using [1] data paths, mounts [[C (C:)]], net usable_space [17.2gb], net total_space [97.6gb], spins? [unknown], types [NTFS]
2018.02.14 19:22:51 INFO es[][o.elasticsearch.env] [sonarqube] heap size [990.7mb], compressed ordinary object pointers [true]
2018.02.14 19:23:10 INFO es[][o.elasticsearch.node] [sonarqube] initialized
2018.02.14 19:23:10 INFO es[][o.elasticsearch.node] [sonarqube] starting ...
2018.02.14 19:23:10 INFO es[][o.e.transport] [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2018.02.14 19:23:10 INFO es[][o.e.discovery] [sonarqube] sonarqube/SiLqQ7h5TnqOmun2zWvO_w
2018.02.14 19:23:14 INFO es[][o.e.cluster.service] [sonarqube] new_master {sonarqube}{SiLqQ7h5TnqOmun2zWvO_w}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
2018.02.14 19:23:14 INFO es[][o.elasticsearch.node] [sonarqube] started
2018.02.14 19:23:15 INFO es[][o.e.gateway] [sonarqube] recovered [5] indices into cluster_state
2018.02.14 19:23:19 INFO es[][o.e.c.r.allocation] [sonarqube] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[tests][4]] ...]).
2018.02.14 19:23:19 INFO app[][o.s.p.m.Monitor] Process[es] is up
2018.02.14 19:23:19 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: C:\Program Files\Java\jdk1.8.0_152\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx1024m -Xms512m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Sonarqube 6.1\temp -javaagent:C:\Program Files\Java\jdk1.8.0_152\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;C:\Sonarqube 6.1\lib\jdbc\mssql\sqljdbc42.jar org.sonar.server.app.WebServer C:\Sonarqube 6.1\temp\sq-process6566518313829106923properties
2018.02.14 19:23:20 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2018.02.14 19:23:21 INFO web[][o.s.s.a.TomcatContexts] Webapp directory: C:\Sonarqube 6.1\web
2018.02.14 19:23:21 INFO web[][o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2018.02.14 19:23:22 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2018.02.14 19:23:24 INFO web[][o.e.plugins] [The Stepford Cuckoos] modules [], plugins [], sites []
2018.02.14 19:23:25 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2018.02.14 19:23:25 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.1 / dc148a71a1c184ccad588b66251980c994879dff
2018.02.14 19:23:25 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true
2018.02.14 19:23:28 INFO web[][o.s.s.p.ServerFileSystemImpl] SonarQube home: C:\Sonarqube 6.1
2018.02.14 19:23:28 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin C# / 5.3.1 / 829e9f5ce2582c2e45f2db2130d2fbaa509fbc64
2018.02.14 19:23:28 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin Git / 1.2 / a713dd64daf8719ba4e7f551f9a1966c62690c17
2018.02.14 19:23:28 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin Java / 4.0 / b653c6c8640ab3d6015d036a060f58e027a653af
2018.02.14 19:23:28 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin JavaScript / 2.14 / 8e37a262d72dd863345f9c6e87421e2d1853a2e6
2018.02.14 19:23:28 INFO web[][o.s.s.p.ServerPluginRepository] Deploy plugin SVN / 1.3 / aff503d48bc77b07c2b62abf93249d0a20bd355c
2018.02.14 19:23:30 INFO web[][o.s.s.p.w.RailsAppsDeployer] Deploying Ruby on Rails applications
2018.02.14 19:23:31 INFO web[][o.s.d.c.MssqlCharsetHandler] Verify that database collation is case-sensitive and accent-sensitive
2018.02.14 19:23:33 INFO web[][o.s.s.p.UpdateCenterClient] Update center: https://update.sonarsource.org/update-center.properties (no proxy)
2018.02.14 19:23:38 INFO web[][o.s.s.n.NotificationDaemon] Notification service started (delay 60 sec.)
2018.02.14 19:23:39 INFO web[][o.s.s.s.RegisterMetrics] Register metrics
2018.02.14 19:23:41 INFO web[][o.s.s.r.RegisterRules] Register rules
2018.02.14 19:23:45 INFO web[][o.s.s.q.RegisterQualityProfiles] Register quality profiles
2018.02.14 19:23:47 INFO web[][o.s.s.s.RegisterNewMeasureFilters] Register measure filters
2018.02.14 19:23:47 INFO web[][o.s.s.s.RegisterDashboards] Register dashboards
2018.02.14 19:23:47 INFO web[][o.s.s.s.RegisterPermissionTemplates] Register permission templates
2018.02.14 19:23:47 INFO web[][o.s.s.s.RenameDeprecatedPropertyKeys] Rename deprecated property keys
2018.02.14 19:23:47 INFO web[][o.s.s.e.IndexerStartupTask] Index issues
2018.02.14 19:23:47 INFO web[][o.s.s.e.IndexerStartupTask] Index tests
2018.02.14 19:23:47 INFO web[][o.s.s.e.IndexerStartupTask] Index users
2018.02.14 19:23:47 INFO web[][o.s.s.e.IndexerStartupTask] Index views
2018.02.14 19:23:48 INFO web[][jruby.rack] jruby 1.7.9 (ruby-1.8.7p370) 2013-12-06 87b108a on Java HotSpot(TM) 64-Bit Server VM 1.8.0_152-b16 [Windows 10-amd64]
2018.02.14 19:23:48 INFO web[][jruby.rack] using a shared (threadsafe!) runtime
2018.02.14 19:25:23 INFO web[][jruby.rack] keeping custom (config.logger) Rails logger instance
2018.02.14 19:25:24 INFO web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.ws.WebServiceFilter#4dbb34d2 [pattern=org.sonar.api.web.ServletFilter$UrlPattern#d28af38]
2018.02.14 19:25:24 INFO web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.authentication.InitFilter#65316401 [pattern=org.sonar.api.web.ServletFilter$UrlPattern#7138d1d1]
2018.02.14 19:25:24 INFO web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.authentication.OAuth2CallbackFilter#70eb9145 [pattern=org.sonar.api.web.ServletFilter$UrlPattern#1e14f235]
2018.02.14 19:25:24 INFO web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.authentication.ws.LoginAction#33c29508 [pattern=org.sonar.api.web.ServletFilter$UrlPattern#afe5c1f]
2018.02.14 19:25:24 INFO web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.authentication.ws.ValidateAction#6e03c549 [pattern=org.sonar.api.web.ServletFilter$UrlPattern#49ce7754]
2018.02.14 19:25:24 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-0.0.0.0-9000"]
2018.02.14 19:25:24 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2018.02.14 19:25:24 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2018.02.14 19:25:24 INFO app[][o.s.p.m.Monitor] Process[web] is up
2018.02.14 19:25:24 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[ce]: C:\Program Files\Java\jdk1.8.0_152\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Sonarqube 6.1\temp -javaagent:C:\Program Files\Java\jdk1.8.0_152\jre\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;./lib/ce/*;C:\Sonarqube 6.1\lib\jdbc\mssql\sqljdbc42.jar org.sonar.ce.app.CeServer C:\Sonarqube 6.1\temp\sq-process3506711799158227159properties
2018.02.14 19:25:27 INFO ce[][o.s.p.ProcessEntryPoint] Starting ce
2018.02.14 19:25:27 INFO ce[][o.s.ce.app.CeServer] Compute Engine starting up...
2018.02.14 19:25:29 INFO ce[][o.e.plugins] [Gorgon] modules [], plugins [], sites []
2018.02.14 19:25:31 INFO ce[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2018.02.14 19:25:32 INFO ce[][o.sonar.db.Database] Create JDBC data source for jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true
2018.02.14 19:25:34 INFO ce[][o.s.s.p.ServerFileSystemImpl] SonarQube home: C:\Sonarqube 6.1
2018.02.14 19:25:34 INFO ce[][o.s.c.c.CePluginRepository] Load plugins
2018.02.14 19:25:40 INFO ce[][o.s.s.c.q.PurgeCeActivities] Delete the Compute Engine tasks created before Fri Aug 18 19:25:40 IST 2017
2018.02.14 19:25:40 INFO ce[][o.s.ce.app.CeServer] Compute Engine is up
2018.02.14 19:25:40 INFO app[][o.s.p.m.Monitor] Process[ce] is up

How to set up JanusGraph using Docker for Cassandra and Elasticsearch?

I'm trying to set up JanusGraph for development on my local machine. My goal is to have a setup similar to the Cassandra remote server mode. As the storage backend I want to use Cassandra, and as the index backend I planned to use Elasticsearch.
For both, I'm using Docker containers (Cassandra, Elasticsearch).
My janusgraph-server.properties file looks like this:
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cassandra
storage.hostname=127.0.0.1
storage.cassandra.astyanax.cluster-name=cassandra_test_cluster
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
index.search.port=9300
index.search.elasticsearch.cluster-name=elasticsearch_test_cluster
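For reference, the same storage and index settings can be exercised directly from Java, which helps separate a JanusGraph configuration problem from a Gremlin Server one. This is a minimal sketch; it assumes janusgraph-core plus the Cassandra and Elasticsearch adapter modules are on the classpath:
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class JanusGraphConnectionCheck {
    public static void main(String[] args) {
        // Mirrors the keys from janusgraph-server.properties above.
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cassandra")
                .set("storage.hostname", "127.0.0.1")
                .set("index.search.backend", "elasticsearch")
                .set("index.search.hostname", "127.0.0.1")
                .set("index.search.port", 9300)
                .open();
        System.out.println("Graph opened: " + graph.isOpen());
        graph.close();
    }
}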
Starting the Gremlin Server leads to these failures:
0 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
162 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from conf/gremlin-server/gremlin-server.yaml
256 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
263 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
343 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
345 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
800 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterJanusGraphConnectionPool,ServiceType=connectionpool
807 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
884 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=KeyspaceJanusGraphConnectionPool,ServiceType=connectionpool
884 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
1070 [main] INFO org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=c0a8000424833-XXX-MacBook-Pro-local1
1078 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterJanusGraphConnectionPool,ServiceType=connectionpool
1079 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
1082 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=KeyspaceJanusGraphConnectionPool,ServiceType=connectionpool
1082 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
1099 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [search]
1179 [main] INFO org.elasticsearch.plugins - [General Orwell Taylor] loaded [], sites []
1655 [main] INFO org.janusgraph.diskstorage.es.ElasticSearchIndex - Configured remote host: 127.0.0.1 : 9300
1738 [elasticsearch[General Orwell Taylor][generic][T#2]] INFO org.elasticsearch.client.transport - [General Orwell Taylor] failed to get local cluster state for [#transport#-1][XXX-MacBook-Pro.local][inet[/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[/127.0.0.1:9300]][cluster:monitor/state] disconnected
1743 [main] WARN org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] configured at [conf/gremlin-server/janusgraph-server.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:82)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:70)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:104)
at org.apache.tinkerpop.gremlin.server.GraphManager.lambda$new$0(GraphManager.java:55)
at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
at org.apache.tinkerpop.gremlin.server.GraphManager.<init>(GraphManager.java:53)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:83)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:110)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:344)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:78)
... 8 more
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:477)
at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:464)
at org.janusgraph.diskstorage.Backend.<init>(Backend.java:149)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1850)
at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:134)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:107)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:87)
... 13 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
... 20 more
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
at org.elasticsearch.client.support.AbstractClusterAdminClient.health(AbstractClusterAdminClient.java:127)
at org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder.doExecute(ClusterHealthRequestBuilder.java:92)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:215)
... 25 more
1745 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
2190 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-groovy ScriptEngine
2836 [main] WARN org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor - Could not initialize gremlin-groovy ScriptEngine with scripts/empty-sample.groovy as script could not be evaluated - javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
Why does it report None of the configured nodes are available: []?
What can I do to make them available?
Have you verified whether Elasticsearch and Cassandra are actually reachable on those ports on localhost? If not, I would recommend checking that you're forwarding those ports when starting your containers (a quick connectivity check is sketched below).
I would also recommend checking the logs for Cassandra and Elasticsearch to see if there are any errors in them.
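One quick way to check reachability from the host is a plain socket probe against the ports used in the configuration. This is just a sketch; 9300 comes from the question's index.search.port, while the Cassandra ports listed are assumptions that depend on the storage adapter and the Docker image in use:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        // 9300: Elasticsearch transport port from the question's config.
        // 9160 / 9042: Cassandra Thrift and CQL native ports (which one applies
        // depends on the storage adapter and how the container publishes ports).
        int[] ports = {9300, 9160, 9042};
        for (int port : ports) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("127.0.0.1", port), 2000);
                System.out.println("Port " + port + " is reachable");
            } catch (IOException e) {
                System.out.println("Port " + port + " is NOT reachable: " + e.getMessage());
            }
        }
    }
}
If a port is not reachable, make sure the container publishes it, for example via Docker's port-mapping option, before looking further into the JanusGraph configuration.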
