Spring Boot application unexpectedly shuts down for no apparent reason

Sometimes my Spring Boot application shuts down for no clear reason.
I can only see the following output in the application log:
2019-09-02 01:39:16.199 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) is shutting down
2019-09-02 01:39:16.216 INFO 23535 --- [ActiveMQ Connection Executor: vm://localhost#0] o.s.j.c.CachingConnectionFactory : Encountered a JMSException - resetting the underlying JMS Connection
javax.jms.JMSException: peer (vm://localhost#1) stopped.
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:54) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.ActiveMQConnection.onAsyncException(ActiveMQConnection.java:1960) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.ActiveMQConnection.onException(ActiveMQConnection.java:1979) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:114) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.ResponseCorrelator.onException(ResponseCorrelator.java:126) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:114) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.vm.VMTransport.stop(VMTransport.java:233) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.stop(TransportFilter.java:72) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.TransportFilter.stop(TransportFilter.java:72) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.transport.ResponseCorrelator.stop(ResponseCorrelator.java:132) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.broker.TransportConnection.doStop(TransportConnection.java:1194) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at org.apache.activemq.broker.TransportConnection$4.run(TransportConnection.java:1160) ~[activemq-broker-5.15.9.jar!/:5.15.9]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: org.apache.activemq.transport.TransportDisposedIOException: peer (vm://localhost#1) stopped.
... 9 common frames omitted
2019-09-02 01:39:16.218 INFO 23535 --- [Thread-7] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService 'taskScheduler'
2019-09-02 01:39:16.218 INFO 23535 --- [ActiveMQ ShutdownHook] o.a.activemq.broker.TransportConnector : Connector vm://localhost stopped
2019-09-02 01:39:16.225 INFO 23535 --- [Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2019-09-02 01:39:16.230 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) uptime 1 hour 27 minutes
2019-09-02 01:39:16.230 INFO 23535 --- [ActiveMQ ShutdownHook] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.15.9 (localhost, ID:example-33285-1567372309839-0:1) is shutdown
I have no idea what the cause of this shutdown is. What steps can I take to determine the reason?
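One way to narrow this down is to log the shutdown trigger yourself. Note that the ActiveMQ shutdown hook in your log does run, which means the JVM received a normal termination request (for example a SIGTERM from the OS, a kill command, or a System.exit() call) rather than a hard kill, which would skip the hooks entirely. Below is a minimal, hypothetical sketch (the class name is my own) that dumps every thread's stack the moment the context starts closing, which usually reveals who initiated the shutdown:

import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.stereotype.Component;

@Component
public class ShutdownDiagnostics implements ApplicationListener<ContextClosedEvent> {

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        // Which thread initiated the context close?
        System.err.println("Context closing, initiated by thread: "
                + Thread.currentThread().getName());
        // Dump all thread stacks to see what else was running at that moment
        Thread.getAllStackTraces().forEach((thread, stack) -> {
            System.err.println("Thread: " + thread.getName());
            for (StackTraceElement element : stack) {
                System.err.println("\tat " + element);
            }
        });
    }
}

If nothing interesting shows up there, the termination signal likely came from outside the JVM (a service manager, container runtime, or session logout), so check the operating system logs as well.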

Related

Hazelcast not shutting down gracefully in Spring Boot?

I'm trying to understand how Spring Boot shuts down a distributed Hazelcast cache. When I connect a second instance and then shut it down, I get the following logs:
First Instance (Still Running)
2021-09-20 15:34:47.994 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Initialized new cluster connection between /127.0.0.1:8084 and /127.0.0.1:60552
2021-09-20 15:34:54.048 INFO 11492 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
]
2021-09-20 15:35:11.087 INFO 11492 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [localhost]:8084 [dev] [4.0.2] Connection[id=1, /127.0.0.1:8084->/127.0.0.1:60552, qualifier=null, endpoint=[localhost]:8085, alive=false, connectionType=MEMBER] closed. Reason: Connection closed by the other side
2021-09-20 15:35:11.092 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:13.126 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:15.285 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:17.338 INFO 11492 --- [ached.thread-13] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:17.450 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Connecting to localhost/127.0.0.1:8085, timeout: 10000, bind-any: true
2021-09-20 15:35:19.474 INFO 11492 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [localhost]:8084 [dev] [4.0.2] Could not connect to: localhost/127.0.0.1:8085. Reason: SocketException[Connection refused: no further information to address localhost/127.0.0.1:8085]
2021-09-20 15:35:19.474 WARN 11492 --- [cached.thread-3] c.h.i.n.tcp.TcpIpConnectionErrorHandler : [localhost]:8084 [dev] [4.0.2] Removing connection to endpoint [localhost]:8085 Cause => java.net.SocketException {Connection refused: no further information to address localhost/127.0.0.1:8085}, Error-Count: 5
2021-09-20 15:35:19.475 INFO 11492 --- [cached.thread-3] c.h.i.cluster.impl.MembershipManager : [localhost]:8084 [dev] [4.0.2] Removing Member [localhost]:8085 - 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
2021-09-20 15:35:19.477 INFO 11492 --- [cached.thread-3] c.h.internal.cluster.ClusterService : [localhost]:8084 [dev] [4.0.2]
Members {size:1, ver:3} [
Member [localhost]:8084 - 4c874ad9-04d1-4857-8279-f3a47be3070b this
]
2021-09-20 15:35:19.478 INFO 11492 --- [cached.thread-7] c.h.t.TransactionManagerService : [localhost]:8084 [dev] [4.0.2] Committing/rolling-back live transactions of [localhost]:8085, UUID: 2282b4e7-2b6d-4e5b-9ac8-dfac988ce39f
It seems that when I shut down the second instance, it does not correctly report to the first one that it is closing down. We only get a warning after the first instance fails to connect to it for a couple of seconds, and the member is therefore removed from the cluster.
Second Instance (the one that was shut down)
2021-09-20 15:42:03.516 INFO 4900 --- [.ShutdownThread] com.hazelcast.instance.impl.Node : [localhost]:8085 [dev] [4.0.2] Running shutdown hook... Current state: ACTIVE
2021-09-20 15:42:03.520 INFO 4900 --- [ionShutdownHook] o.s.b.w.e.tomcat.GracefulShutdown : Commencing graceful shutdown. Waiting for active requests to complete
2021-09-20 15:42:03.901 INFO 4900 --- [tomcat-shutdown] o.s.b.w.e.tomcat.GracefulShutdown : Graceful shutdown complete
It seems that it is trying to run a shutdown hook, but the last state it reports is still "ACTIVE"; it never goes to "SHUTTING_DOWN" or "SHUT_DOWN" as mentioned in this article.
Config
pom.xml
...
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.4</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
...
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-cache</artifactId>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-all</artifactId>
        <version>4.0.2</version>
    </dependency>
</dependencies>
...
Just to add some context, I have the following application.yml:
---
server:
  shutdown: graceful
And the following hazelcast.yaml
---
hazelcast:
  shutdown:
    policy: GRACEFUL
    shutdown.max.wait: 8
  network:
    port:
      auto-increment: true
      port-count: 20
      port: 8084
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - localhost:8084
The question
So my theory is that Spring Boot shuts down Hazelcast by terminating it instead of allowing it to shut down gracefully.
How can I make Spring Boot and Hazelcast shut down properly, so that the other instances recognize that it is shutting down rather than it just being "gone"?
There are two things at play here. The first is the real issue: the instance is terminated instead of being shut down gracefully. The other is seeing this correctly in the logs.
Hazelcast by default registers a shutdown hook that terminates the instance on JVM exit.
You can disable the shutdown hook completely by setting this property:
-Dhazelcast.shutdownhook.enabled=false
Alternatively, you could change the policy to graceful shutdown:
-Dhazelcast.shutdownhook.policy=GRACEFUL
but this would result in Spring Boot's graceful shutdown (finishing in-flight requests) and the Hazelcast instance shutdown running concurrently, which leads to issues.
To see the logs correctly, set the logging type to slf4j:
-Dhazelcast.logging.type=slf4j
Then you will see all the INFO logs from Hazelcast correctly, and changing the log level via
-Dlogging.level.com.hazelcast=TRACE
works as well.
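For completeness, the same property can be set programmatically. Here is a minimal sketch, under the assumption that Spring Boot's Hazelcast auto-configuration picks up a Config bean and destroys the resulting HazelcastInstance (calling its shutdown method) when the application context closes; the class name is my own, the file name comes from the question:

import com.hazelcast.config.ClasspathYamlConfig;
import com.hazelcast.config.Config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastShutdownConfig {

    @Bean
    public Config hazelcastConfig() {
        // Reuse the existing hazelcast.yaml for the network/join settings
        Config config = new ClasspathYamlConfig("hazelcast.yaml");
        // Equivalent to -Dhazelcast.shutdownhook.enabled=false: let Spring,
        // not a JVM shutdown hook, drive the instance shutdown on context close
        config.setProperty("hazelcast.shutdownhook.enabled", "false");
        return config;
    }
}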

ERROR delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted (Hadoop, Windows 10)

I use Windows 10, and the NodeManager is also not starting correctly. I see the following errors.
The ResourceManager is not connecting and fails due to:
2021-07-07 11:01:52,473 ERROR delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2021-07-07 11:01:52,493 INFO handler.ContextHandler: Stopped o.e.j.w.WebAppContext#756b58a7{/,null,UNAVAILABLE}{/cluster}
2021-07-07 11:01:52,504 INFO server.AbstractConnector: Stopped ServerConnector#633a2e99{HTTP/1.1,[http/1.1]}{0.0.0.0:8088}
2021-07-07 11:01:52,504 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#7b420819{/static,jar:file:/F:/hadoop_new/share/hadoop/yarn/hadoop-yarn-common-3.2.1.jar!/webapps/static,UNAVAILABLE}
2021-07-07 11:01:52,507 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#c9d0d6{/logs,file:///F:/hadoop_new/logs/,UNAVAILABLE}
2021-07-07 11:01:52,541 INFO ipc.Server: Stopping server on 8033
2021-07-07 11:01:52,543 INFO ipc.Server: Stopping IPC Server listener on 8033
2021-07-07 11:01:52,544 INFO resourcemanager.ResourceManager: Transitioning to standby state
2021-07-07 11:01:52,544 INFO ipc.Server: Stopping IPC Server Responder
2021-07-07 11:01:52,550 INFO resourcemanager.ResourceManager: Transitioned to standby state
2021-07-07 11:01:52,554 FATAL resourcemanager.ResourceManager: Error starting ResourceManager
org.apache.hadoop.service.ServiceStateException: 5: Access is denied.
and
2021-07-07 11:01:51,625 INFO recovery.RMStateStore: Storing RMDTMasterKey.
2021-07-07 11:01:52,158 INFO store.AbstractFSNodeStore: Created store directory :file:/tmp/hadoop-yarn-Abby/node-attribute
2021-07-07 11:01:52,186 INFO service.AbstractService: Service NodeAttributesManagerImpl failed in state STARTED
5: Access is denied.
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:595)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:246)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:232)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:331)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:320)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:305)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
at org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore.recoverFromStore(AbstractFSNodeStore.java:160)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.FileSystemNodeAttributeStore.recover(FileSystemNodeAttributeStore.java:95)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.initNodeAttributeStore(NodeAttributesManagerImpl.java:140)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.serviceStart(NodeAttributesManagerImpl.java:123)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:895)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1262)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1303)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1299)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1299)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1350)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1535)
2021-07-07 11:01:52,212 INFO service.AbstractService: Service RMActiveServices failed in state STARTED
org.apache.hadoop.service.ServiceStateException: 5: Access is denied.
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:895)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1262)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1303)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1299)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1299)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1350)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1535)
Caused by: 5: Access is denied.
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:595)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:246)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:232)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:331)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:320)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:305)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
at org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore.recoverFromStore(AbstractFSNodeStore.java:160)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.FileSystemNodeAttributeStore.recover(FileSystemNodeAttributeStore.java:95)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.initNodeAttributeStore(NodeAttributesManagerImpl.java:140)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.serviceStart(NodeAttributesManagerImpl.java:123)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
... 13 more
You are getting "Access is denied", so you may need to run as another user. Try starting the services with a user that has more privileges, such as Administrator on Windows.
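For example (a hypothetical sketch: the directory comes from the stack trace above, the user name from the log path, and the drive letter may need adjusting), either start the YARN daemons from an elevated Command Prompt, or grant your user full control over the node-attribute store directory:

:: run from an elevated (Administrator) Command Prompt
icacls "F:\tmp\hadoop-yarn-Abby\node-attribute" /grant Abby:(OI)(CI)F /T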

Apache Nifi - refused to connect to localhost error

When I tried to connect to the NiFi UI using http://localhost:8080/nifi, I got the error below:
org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.net.BindException: Address already in use: bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:331)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:299)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:398)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:935)
at org.apache.nifi.NiFi.<init>(NiFi.java:158)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:297)
2020-02-27 11:51:11,834 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2020-02-27 11:51:11,836 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector#355ee205{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
2020-02-27 11:51:11,837 INFO [Thread-1] org.eclipse.jetty.server.session node 0 Stopped scavenging
Can anyone suggest what the cause of this issue is?
NiFi version 1.9.2, installed on a Windows machine.
Here are the NiFi status logs:
12:33:16.886 [main] DEBUG org.apache.nifi.bootstrap.NotificationServiceManager - Found 0 service elements
12:33:16.896 [main] INFO org.apache.nifi.bootstrap.NotificationServiceManager - Successfully loaded the following 0 services: []
12:33:16.897 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STARTED
12:33:16.897 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STOPPED
12:33:16.898 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_DIED
12:33:16.899 [main] DEBUG org.apache.nifi.bootstrap.Command - Status File:
12:33:16.900 [main] DEBUG org.apache.nifi.bootstrap.Command - Properties: {pid=9724}
Failed to determine if Process 9724 is running; assuming that it is not
12:33:16.902 [main] INFO org.apache.nifi.bootstrap.Command - Apache NiFi is not running
The port used by NiFi is already in use by another process.
You can change the web server port in conf/nifi.properties.
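For example, you can first check which process is holding port 8080 and then, if you would rather move NiFi, pick a free port (the new port number below is arbitrary):

rem find the PID that currently owns port 8080 on Windows
netstat -ano | findstr :8080

# conf/nifi.properties -- move the NiFi web UI to a free port
nifi.web.http.port=9090

After changing the property, restart NiFi and browse to http://localhost:9090/nifi instead.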

rabbitmq docker spring o.s.a.r.l.SimpleMessageListenerContainer : Failed to check/redeclare auto-delete queue(s)

I have been trying to send and receive messages with RabbitMQ running in Docker with SSL enabled. I first got a Spring Boot application working without SSL. With everything running on localhost, I succeeded in sending and receiving messages with the dockerized RabbitMQ. But now, with the SSL auth plugin enabled in the dockerized RabbitMQ, I get a strange message claiming:
o.s.a.r.l.SimpleMessageListenerContainer : Failed to check/redeclare auto-delete queue(s).
When I look at the dockerized RabbitMQ instance through the web console, I see that there are no queues at all. If I stop and restart the container, I get the same result.
I have pointed RabbitMQ to this configuration, as suggested by Jeff Becker's blog.
The config file looks like this :
%% -*- mode: erlang -*-
[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile,"/home/callen/.ssh/private_key.pem"},
                   {certfile,"/home/callen/.ssh/private_key.pem"},
                   {keyfile,"/home/callen/.ssh/public_key.pem"},
                   {verify,verify_none},
                   {fail_if_no_peer_cert,false}]}
  ]}
].
Here is the stack trace from the Spring Boot application:
2018-01-11 11:12:57.395 INFO 17786 --- [ main] n.k.r.r.RabbitConsumerApplicationTests : Starting RabbitConsumerApplicationTests on localhost.localdomain with PID 17786 (started by callen in /home/callen/Projects/rabbit-consumer)
2018-01-11 11:12:57.396 INFO 17786 --- [ main] n.k.r.r.RabbitConsumerApplicationTests : No active profile set, falling back to default profiles: default
2018-01-11 11:12:57.482 INFO 17786 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#36b4fe2a: startup date [Thu Jan 11 11:12:57 EST 2018]; root of context hierarchy
2018-01-11 11:12:58.351 INFO 17786 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.amqp.rabbit.annotation.RabbitBootstrapConfiguration' of type [org.springframework.amqp.rabbit.annotation.RabbitBootstrapConfiguration$$EnhancerBySpringCGLIB$$3062a902] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-01-11 11:12:59.423 INFO 17786 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2018-01-11 11:12:59.529 ERROR 17786 --- [ container-1] o.s.a.r.l.SimpleMessageListenerContainer : Failed to check/redeclare auto-delete queue(s).
org.springframework.amqp.AmqpIOException: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:71) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:368) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:573) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:1430) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:1411) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:1387) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueProperties(RabbitAdmin.java:336) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.redeclareElementsIfNecessary(SimpleMessageListenerContainer.java:1171) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1422) [spring-rabbit-1.7.4.RELEASE.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002) ~[na:1.8.0_151]
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385) ~[na:1.8.0_151]
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:757) ~[na:1.8.0_151]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123) ~[na:1.8.0_151]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[na:1.8.0_151]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[na:1.8.0_151]
at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[na:1.8.0_151]
at com.rabbitmq.client.impl.SocketFrameHandler.sendHeader(SocketFrameHandler.java:147) ~[amqp-client-4.0.3.jar:4.0.3]
at com.rabbitmq.client.impl.SocketFrameHandler.sendHeader(SocketFrameHandler.java:153) ~[amqp-client-4.0.3.jar:4.0.3]
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:285) ~[amqp-client-4.0.3.jar:4.0.3]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:909) ~[amqp-client-4.0.3.jar:4.0.3]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:859) ~[amqp-client-4.0.3.jar:4.0.3]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:799) ~[amqp-client-4.0.3.jar:4.0.3]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:352) ~[spring-rabbit-1.7.4.RELEASE.jar:na]
... 8 common frames omitted
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505) ~[na:1.8.0_151]
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983) ~[na:1.8.0_151]
... 21 common frames omitted
What say you, good people?
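One thing worth double-checking, as a hedged sketch rather than a confirmed fix: the Spring Boot client must actually be told to use TLS and the SSL port, otherwise the handshake fails exactly like this. Assuming the broker listens on 5671 as configured above (the truststore path is a placeholder of mine):

# application.properties
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5671
spring.rabbitmq.ssl.enabled=true
# with verify_none on the broker no client certificate is needed,
# but the JVM must still trust the server certificate:
spring.rabbitmq.ssl.trust-store=file:/path/to/truststore.jks
spring.rabbitmq.ssl.trust-store-password=changeit

Also note that in the Erlang config above, certfile points at private_key.pem and keyfile at public_key.pem, which looks like the certificate and key files may be swapped.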

Sonarqube 5.0 install error

When installing SonarQube 5.0, I got the following error messages while starting SonarQube on Windows 7 with MySQL 5.6.22:
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2015.01.19 11:18:57 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[search]: C:\Tools\jdk1.7.0_71\jre\bin\java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Tools\sonarqube-5.0\temp -cp ./lib/common/*;./lib/search/* org.sonar.search.SearchServer C:\Users\li2\AppData\Local\Temp\sq-process2755720839215931323properties
2015.01.19 11:19:09 INFO sea[o.s.p.ProcessEntryPoint] Starting search
2015.01.19 11:19:09 INFO sea[o.s.s.SearchServer] Starting Elasticsearch[sonarqube] on port 9001
2015.01.19 11:19:09 WARN sea[o.s.s.SearchSettings] Elasticsearch HTTP connector is enabled on port 9010. MUST NOT BE USED INTO PRODUCTION
2015.01.19 11:19:09 INFO sea[o.elasticsearch.node] [sonar-1421662737113] version[1.1.2], pid[8464], build[e511f7b/2014-05-22T12:27:39Z]
2015.01.19 11:19:09 INFO sea[o.elasticsearch.node] [sonar-1421662737113] initializing ...
2015.01.19 11:19:09 INFO sea[o.e.plugins] [sonar-1421662737113] loaded [], sites []
2015.01.19 11:19:11 INFO sea[o.elasticsearch.node] [sonar-1421662737113] initialized
2015.01.19 11:19:11 INFO sea[o.elasticsearch.node] [sonar-1421662737113] starting ...
2015.01.19 11:19:27 INFO sea[o.e.transport] [sonar-1421662737113] bound_address {inet[/0.0.0.0:9001]}, publish_address {inet[/192.168.0.107:9001]}
2015.01.19 11:19:30 INFO sea[o.e.cluster.service] [sonar-1421662737113] new_master [sonar-1421662737113][RB8i_Ar8Rv-Do_15hhhWtQ][LI21][inet[/192.168.0.107:9001]]{rack_id=sonar-1421662737113}, reason: zen-disco-join (elected_as_master)
2015.01.19 11:19:51 WARN sea[o.e.cluster.service] [sonar-1421662737113] failed to connect to node [[sonar-1421662737113][RB8i_Ar8Rv-Do_15hhhWtQ][LI21][inet[/192.168.0.107:9001]]{rack_id=sonar-1421662737113}]
org.elasticsearch.transport.ConnectTransportException: [sonar-1421662737113][inet[/192.168.0.107:9001]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:719) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:648) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:616) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:405) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:134) [elasticsearch-1.1.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.net.ConnectException: Connection timed out: no further information: /192.168.0.107:9001
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_71]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_71]
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:150) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[elasticsearch-1.1.2.jar:na]
... 3 common frames omitted
2015.01.19 11:19:51 INFO sea[o.e.discovery] [sonar-1421662737113] sonarqube/RB8i_Ar8Rv-Do_15hhhWtQ
2015.01.19 11:19:51 INFO sea[o.elasticsearch.http] [sonar-1421662737113] bound_address {inet[/127.0.0.1:9010]}, publish_address {inet[/127.0.0.1:9010]}
2015.01.19 11:19:52 INFO sea[o.e.gateway] [sonar-1421662737113] recovered [4] indices into cluster_state
2015.01.19 11:19:52 INFO sea[o.elasticsearch.node] [sonar-1421662737113] started
2015.01.19 11:19:53 INFO app[o.s.p.m.Monitor] Process[search] is up
2015.01.19 11:19:53 INFO app[o.s.p.m.JavaProcessLauncher] Launch process[web]: C:\Tools\jdk1.7.0_71\jre\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Xmx768m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=C:\Tools\sonarqube-5.0\temp -cp ./lib/common/*;./lib/server/*;C:\Tools\sonarqube-5.0\lib\jdbc\mysql\mysql-connector-java-5.1.27.jar org.sonar.server.app.WebServer C:\Users\li2\AppData\Local\Temp\sq-process1889452272417488373properties
2015.01.19 11:20:06 INFO web[o.s.p.ProcessEntryPoint] Starting web
2015.01.19 11:20:06 INFO web[o.s.s.app.Connectors] HTTP connector is enabled on port 9000
2015.01.19 11:20:06 INFO web[o.s.s.app.Webapp] Webapp directory: C:\Tools\sonarqube-5.0\web
2015.01.19 11:20:07 INFO web[o.e.plugins] [sonar-1421662737113] loaded [], sites []
2015.01.19 11:20:19 INFO web[o.s.s.p.ServerImpl] SonarQube Server / 5.0 / dc62506bf3b331ec19c053e225e415d164ee60b0
2015.01.19 11:20:19 INFO web[o.s.c.p.Database] Create JDBC datasource for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
2015.01.19 11:20:20 INFO web[o.s.s.p.DefaultServerFileSystem] SonarQube home: C:\Tools\sonarqube-5.0
2015.01.19 11:20:20 INFO web[o.s.a.u.TimeProfiler] Install plugins...
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Findbugs / 3.1 / adc09c989cebc856d44239116a00ab0b602b0851
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Duplications / 5.0 / dc62506bf3b331ec19c053e225e415d164ee60b0
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Git / 5.0 / dc62506bf3b331ec19c053e225e415d164ee60b0
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Core / 5.0 / dc62506bf3b331ec19c053e225e415d164ee60b0
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Java / 2.8 / 20a3d682b1334eb1857e7bc8a40e11f04fed9528
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin SVN / 5.0 / dc62506bf3b331ec19c053e225e415d164ee60b0
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin English Pack / 5.0 / dc62506bf3b331ec19c053e225e415d164ee60b0
2015.01.19 11:20:20 INFO web[o.s.s.p.ServerPluginJarsInstaller] Deploy plugin Email notifications / 5.0 / dc62506bf3b331ec19c053e225e415d164ee60b0
2015.01.19 11:20:20 INFO web[o.s.a.u.TimeProfiler] Install plugins done: 234 ms
2015.01.19 11:20:21 INFO web[o.s.s.p.RailsAppsDeployer] Deploy Ruby on Rails applications
2015.01.19 11:20:21 INFO web[jruby.rack] jruby 1.7.9 (ruby-1.8.7p370) 2013-12-06 87b108a on Java HotSpot(TM) 64-Bit Server VM 1.7.0_71-b14 [Windows 7-amd64]
2015.01.19 11:20:21 INFO web[jruby.rack] using a shared (threadsafe!) runtime
2015.01.19 11:20:29 INFO web[DbMigration] == InitialSchema: migrating ==================================================
2015.01.19 11:20:29 INFO web[DbMigration] -- create_table(:projects, {})
2015.01.19 11:20:29 INFO web[DbMigration] -> 0.0310s
2015.01.19 11:20:29 INFO web[DbMigration] -> 0 rows
........
........
2015.01.19 11:20:50 INFO web[o.s.j.s.AbstractDatabaseConnector] Initializing Hibernate
2015.01.19 11:20:51 INFO web[o.s.s.p.UpdateCenterClient] Update center: http://update.sonarsource.org/update-center.properties (HTTP proxy: xxx)
2015.01.19 11:20:52 INFO web[o.s.s.n.NotificationService] Notification service started (delay 60 sec.)
2015.01.19 11:20:52 INFO web[o.s.s.s.IndexSynchronizer] Index rules for updates after Sun Jan 18 20:37:25 CET 2015
2015.01.19 11:20:52 INFO web[o.s.s.s.IndexSynchronizer] Index activeRules for updates after Sun Jan 18 20:37:27 CET 2015
2015.01.19 11:20:52 INFO web[o.s.s.s.IndexSynchronizer] Index sonarLogs for updates after null
2015.01.19 11:20:52 INFO web[o.s.s.s.IndexSynchronizer] Index issues
2015.01.19 11:20:52 INFO web[o.s.s.s.IndexSynchronizer] Index source files
2015.01.19 11:20:52 INFO web[o.s.a.u.TimeProfiler] Load metrics...
2015.01.19 11:20:52 INFO web[o.s.s.s.RegisterMetrics] Cleaning quality gate conditions
2015.01.19 11:20:52 INFO web[o.s.a.u.TimeProfiler] Load metrics done: 234 ms
2015.01.19 11:20:52 INFO web[o.s.s.s.RegisterDebtModel] Register technical debt model...
2015.01.19 11:20:52 INFO web[o.s.s.s.RegisterDebtModel] Register technical debt model done: 78 ms
2015.01.19 11:20:52 INFO web[o.s.a.u.TimeProfiler] Register rules...
2015.01.19 11:22:57 WARN sea[o.e.cluster.service] [sonar-1421662737113] failed to reconnect to node [sonar-1421662737113][RB8i_Ar8Rv-Do_15hhhWtQ][LI21][inet[/192.168.0.107:9001]]{rack_id=sonar-1421662737113}
org.elasticsearch.transport.ConnectTransportException: [sonar-1421662737113][inet[/192.168.0.107:9001]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:719) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:648) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:616) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129) ~[elasticsearch-1.1.2.jar:na]
at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:516) ~[elasticsearch-1.1.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.net.ConnectException: Connection timed out: no further information: /192.168.0.107:9001
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_71]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_71]
I have searched for similar questions, but with no success.
Does anyone have an idea how to fix this?
Thanks!
I had the same problem. It could be resolved if it were possible to specify not only sonar.search.port but also something like sonar.search.host. The reason, in my setup, is that the default IP which the search process uses is only accessible from outside hosts; it cannot be used from localhost.
I worked around it by adding the following line to sonar.properties as described here:
sonar.search.javaAdditionalOpts=-Des.network.host=127.0.0.1
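Putting the workaround in context, the relevant part of conf/sonar.properties would look like this (a sketch; 9001 is the search port from the log above):

# conf/sonar.properties
sonar.search.port=9001
# force the embedded Elasticsearch node to bind to loopback
sonar.search.javaAdditionalOpts=-Des.network.host=127.0.0.1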
