Conductor build error Execution failed for task ':conductor-annotations:spotlessJavaCheck'

I am trying to build Netflix Conductor to use it in my microservices application, but the build fails and I don't know what I am doing wrong. I use docker-compose build, as described in the documentation, but it does not work. Here is the error output. It has more details, but I cannot copy it all here because of the page limit; the rest of the output shows the same format violations:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':conductor-annotations:spotlessJavaCheck'.
> The following files had format violations:
src/main/java/com/netflix/conductor/annotations/protogen/ProtoEnum.java
@@ -1,26 +1,26 @@
-/*\r\n
- * Copyright 2022 Netflix, Inc.\r\n
- * <p>\r\n
- * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with\r\n
- * the License. You may obtain a copy of the License at\r\n
- * <p>\r\n
- * http://www.apache.org/licenses/LICENSE-2.0\r\n
- * <p>\r\n
- * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on\r\n
- * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\r\n
- * specific language governing permissions and limitations under the License.\r\n
- */\r\n
-package com.netflix.conductor.annotations.protogen;\r\n
-\r\n
-import java.lang.annotation.ElementType;\r\n
-import java.lang.annotation.Retention;\r\n
-import java.lang.annotation.RetentionPolicy;\r\n
-import java.lang.annotation.Target;\r\n
-\r\n
-/**\r\n
- * ProtoEnum annotates an enum type that will be exposed via the GRPC API as a native Protocol\r\n
- * Buffers enum.\r\n
- */\r\n
-@Retention(RetentionPolicy.RUNTIME)\r\n
-@Target(ElementType.TYPE)\r\n
-public @interface ProtoEnum {}\r\n
+/*\n
+ * Copyright 2022 Netflix, Inc.\n
+ * <p>\n
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with\n
+ * the License. You may obtain a copy of the License at\n
+ * <p>\n
+ * http://www.apache.org/licenses/LICENSE-2.0\n
+ * <p>\n
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on\n
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n
+ * specific language governing permissions and limitations under the License.\n
+ */\n
+package com.netflix.conductor.annotations.protogen;\n
+\n
+import java.lang.annotation.ElementType;\n
+import java.lang.annotation.Retention;\n
+import java.lang.annotation.RetentionPolicy;\n
+import java.lang.annotation.Target;\n
+\n
+/**\n
+ * ProtoEnum annotates an enum type that will be exposed via the GRPC API as a native Protocol\n
+ * Buffers enum.\n
... (4 more lines that didn't fit)
Violations also present in:
src/main/java/com/netflix/conductor/annotations/protogen/ProtoField.java
src/main/java/com/netflix/conductor/annotations/protogen/ProtoMessage.java
Run './gradlew :conductor-annotations:spotlessApply' to fix these violations.
* Try:
> Run with --info or --debug option to get more log output.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':conductor-annotations:spotlessJavaCheck'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:142)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:282)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:140)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:128)
at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:69)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:327)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:314)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:307)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:293)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:420)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:342)
at org.gradle.execution.plan.DefaultPlanExecutor.process(DefaultPlanExecutor.java:96)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph.executeWithServices(DefaultTaskExecutionGraph.java:140)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph.execute(DefaultTaskExecutionGraph.java:125)
at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:39)
at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:51)
at org.gradle.execution.BuildOperationFiringBuildWorkerExecutor$ExecuteTasks.call(BuildOperationFiringBuildWorkerExecutor.java:54)
at org.gradle.execution.BuildOperationFiringBuildWorkerExecutor$ExecuteTasks.call(BuildOperationFiringBuildWorkerExecutor.java:43)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
at org.gradle.execution.BuildOperationFiringBuildWorkerExecutor.execute(BuildOperationFiringBuildWorkerExecutor.java:40)
at org.gradle.internal.build.DefaultBuildLifecycleController.lambda$executeTasks$7(DefaultBuildLifecycleController.java:161)
at org.gradle.internal.model.StateTransitionController.doTransition(StateTransitionController.java:247)
at org.gradle.internal.model.StateTransitionController.lambda$tryTransition$7(StateTransitionController.java:174)
at org.gradle.internal.work.DefaultSynchronizer.withLock(DefaultSynchronizer.java:44)
at org.gradle.internal.model.StateTransitionController.tryTransition(StateTransitionController.java:174)
at org.gradle.internal.build.DefaultBuildLifecycleController.executeTasks(DefaultBuildLifecycleController.java:161)
at org.gradle.internal.build.DefaultBuildWorkGraphController$DefaultBuildWorkGraph.runWork(DefaultBuildWorkGraphController.java:156)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:249)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:109)
at org.gradle.composite.internal.DefaultBuildController.doRun(DefaultBuildController.java:164)
at org.gradle.composite.internal.DefaultBuildController.access$000(DefaultBuildController.java:45)
at org.gradle.composite.internal.DefaultBuildController$BuildOpRunnable.run(DefaultBuildController.java:183)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
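The diff in the violation report is the tell: every rejected line ends in \r\n (Windows CRLF) while Spotless expects plain \n (LF), so the files were most likely checked out with git's line-ending conversion enabled. A hedged sketch of the two usual fixes, assuming a checkout with core.autocrlf=true:
# Option 1: let Spotless rewrite the files, as the error output suggests
./gradlew :conductor-annotations:spotlessApply
# Option 2: turn off line-ending conversion and re-materialize the working tree
git config core.autocrlf false
git rm --cached -r .
git reset --hard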

Related

Hazelcast tries ports other than specified

I have two EC2 instances forming a Hazelcast cluster.
The Hazelcast I use comes from the vertx-hazelcast:3.9.1 package, which runs Hazelcast version 3.12.2.
I also use the hazelcast-aws:2.4 plugin.
My cluster.xml is:
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Copyright 2017 Red Hat, Inc.
~
~ Red Hat licenses this file to you under the Apache License, version 2.0
~ (the "License"); you may not use this file except in compliance with the
~ License. You may obtain a copy of the License at:
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
~ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
~ License for the specific language governing permissions and limitations
~ under the License.
-->
<hazelcast
    xmlns="http://www.hazelcast.com/schema/config"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.hazelcast.com/schema/config
                        http://www.hazelcast.com/schema/config/hazelcast-config-3.12.xsd">
    <network>
        <port port-count="1" auto-increment="false">5701</port>
        <public-address>x.x.x.x</public-address>
        <join>
            <multicast enabled="false"/>
            <aws enabled="true">
                <security-group-name>security-group-name</security-group-name>
            </aws>
        </join>
    </network>
</hazelcast>
Both instances have the same cluster.xml, but with different entries in <public-address></public-address>.
What happens on cluster startup, and what I'd like to avoid, is that Hazelcast tries connecting to instances in the same security group, using ports 5701-5708, even though I thought I had set up just one port.
It writes unnecessarily to the log, which looks like this:
2021-04-27 10:51:28,671 INFO com.hazelcast.nio.tcp.TcpIpConnector:65 - [x.x.x.x]:5701 [dev] [3.12.2] Connecting to /x.x.x.x:5703, timeout: 10000, bind-any: true
2021-04-27 10:51:28,682 INFO com.hazelcast.nio.tcp.TcpIpConnector:65 - [x.x.x.x]:5701 [dev] [3.12.2] Could not connect to: /x.x.x.x:5703. Reason: SocketException[Connection refused to address /x.x.x.x:5704]
2021-04-27 10:51:28,717 INFO com.hazelcast.internal.cluster.impl.DiscoveryJoiner:65 - [x.x.x.x]:5701 [dev] [3.12.2] [x.x.x.x]:5703 is added to the blacklist.
...
It writes the same output for all ports in that range.
I seem to have done as suggested here.
How do I stop it trying to use ports other than 5701?
In the "port" tag if you want to use a specific port, set auto-increment to false. If it is set to false, the port-count attribute must be ignored, so remove it:
<port auto-increment="false">5701</port>
Also add the following line just below the previous one:
<reuse-address>true</reuse-address>
When you shut down a cluster member, its server socket port stays in the TIME_WAIT state for the next couple of minutes, so if you restart the member right away you may not be able to bind to the same port. Setting reuse-address to true makes Hazelcast ignore the TIME_WAIT state and bind the member to the same port again; the default is false.
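If you configure Hazelcast programmatically instead of through cluster.xml (for example when embedding it under Vert.x), a minimal sketch of the equivalent 3.x settings would look like this; the class name is mine:
import com.hazelcast.config.Config;
import com.hazelcast.config.NetworkConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class FixedPortMember {
    public static void main(String[] args) {
        Config config = new Config();
        NetworkConfig network = config.getNetworkConfig();
        network.setPort(5701);               // the single port to bind
        network.setPortAutoIncrement(false); // same as auto-increment="false": never probe 5702-5708
        network.setReuseAddress(true);       // rebind to 5701 right after a restart despite TIME_WAIT
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
    }
}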

SSL connection issue when calling GRPC service from Alpine Linux Image using Zulu jdk 11

I have a Spring Boot application which calls a GRPC service using Netty. When I run my code on my local machine (macOS and Zulu JDK without JCE), I am able to connect to the GRPC service.
Note: we are using a GRPC client jar compiled with Oracle JDK 1.8
When I build a docker image (Alpine Linux with Zulu JDK without JCE), I get the error below:
javax.net.ssl|ERROR|37|C7-services-1|2019-09-26 21:18:10.622 GMT|TransportContext.java:312|Fatal (HANDSHAKE_FAILURE): Couldn't kickstart handshaking (
"throwable" : {
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
at java.base/sun.security.ssl.HandshakeContext.<init>(HandshakeContext.java:169)
at java.base/sun.security.ssl.ClientHandshakeContext.<init>(ClientHandshakeContext.java:98)
at java.base/sun.security.ssl.TransportContext.kickstart(TransportContext.java:216)
at java.base/sun.security.ssl.SSLEngineImpl.beginHandshake(SSLEngineImpl.java:103)
at io.netty.handler.ssl.JdkSslEngine.beginHandshake(JdkSslEngine.java:155)
at io.netty.handler.ssl.SslHandler.handshake(SslHandler.java:1967)
at io.netty.handler.ssl.SslHandler.startHandshakeProcessing(SslHandler.java:1886)
at io.netty.handler.ssl.SslHandler.channelActive(SslHandler.java:2021)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:225)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:211)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:204)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelActive(DefaultChannelPipeline.java:1396)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:225)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:211)
at io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:906)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:311)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:670)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:834)}
)
I see that on my local machine the below cipher suites are present:
* TLS_AES_128_GCM_SHA256
* TLS_AES_256_GCM_SHA384
* TLS_DHE_DSS_WITH_AES_128_CBC_SHA
* TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
* TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
* TLS_DHE_DSS_WITH_AES_256_CBC_SHA
* TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
* TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
* TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
* TLS_DHE_RSA_WITH_AES_256_CBC_SHA
* TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA
* TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256
* TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
* TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
* TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
* TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
* TLS_ECDH_RSA_WITH_AES_128_CBC_SHA
* TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256
* TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
* TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
* TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
* TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
* TLS_EMPTY_RENEGOTIATION_INFO_SCSV
* TLS_RSA_WITH_AES_128_CBC_SHA
* TLS_RSA_WITH_AES_128_CBC_SHA256
* TLS_RSA_WITH_AES_128_GCM_SHA256
* TLS_RSA_WITH_AES_256_CBC_SHA
* TLS_RSA_WITH_AES_256_CBC_SHA256
* TLS_RSA_WITH_AES_256_GCM_SHA384
and in the image I see far fewer:
* TLS_AES_128_GCM_SHA256
* TLS_AES_256_GCM_SHA384
* TLS_DHE_DSS_WITH_AES_128_CBC_SHA
* TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
* TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
* TLS_DHE_DSS_WITH_AES_256_CBC_SHA
* TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
* TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
* TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
* TLS_DHE_RSA_WITH_AES_256_CBC_SHA
* TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
* TLS_EMPTY_RENEGOTIATION_INFO_SCSV
* TLS_RSA_WITH_AES_128_CBC_SHA
* TLS_RSA_WITH_AES_128_CBC_SHA256
* TLS_RSA_WITH_AES_128_GCM_SHA256
* TLS_RSA_WITH_AES_256_CBC_SHA
* TLS_RSA_WITH_AES_256_CBC_SHA256
* TLS_RSA_WITH_AES_256_GCM_SHA384
Note: BTW, on my local machine it chooses TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256.
I downgraded my Java version to Oracle JDK 1.8 and built the image with JDK 1.8 as well. That resolved the SSL handshake issue, as I can now see all the ciphers available within the image. However, I ended up with an issue that is specific to Alpine Linux:
Caused by: java.lang.IllegalStateException: Could not find TLS ALPN provider; no working netty-tcnative, Conscrypt, or Jetty NPN/ALPN available
at io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:258) ~[grpc-netty-1.16.1.jar:1.16.1]
at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171) ~[grpc-netty-1.16.1.jar:1.16.1]
at io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120) ~[grpc-netty-1.16.1.jar:1.16.1]
at com.samsclub.list.core.client.C7ProductCatalogProvider.createChannel(C7ProductCatalogProvider.java:104) ~[sams-list-core-0.0.10-SNAPSHOT.jar:0.0.10-SNAPSHOT]
at com.samsclub.list.core.client.C7ProductCatalogProvider.init(C7ProductCatalogProvider.java:59) ~[sams-list-core-0.0.10-SNAPSHOT.jar:0.0.10-SNAPSHOT]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_212]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_212]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:363) ~[spring-beans-5.1.7.RELEASE.jar:5.1.7.RELEASE]
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMet
see this https://github.com/grpc/grpc-java/issues/3336
I really want to be on Zulu 11 and to call this GRPC service. Should we just get the GRPC client jar compiled with JDK 11?
My Docker Alpine image config
/app # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.0
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/app # java -version
openjdk version "11.0.4" 2019-07-16 LTS
OpenJDK Runtime Environment Zulu11.33+15-CA (build 11.0.4+11-LTS)
OpenJDK 64-Bit Server VM Zulu11.33+15-CA (build 11.0.4+11-LTS, mixed mode)
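(On the ALPN error itself: the usual fix on JDK 8, per grpc-java's SECURITY.md, is to add netty-tcnative to the classpath, e.g. the io.netty:netty-tcnative-boringssl-static artifact in the version matching your grpc-netty release. Be aware that its native binaries historically targeted glibc and may fail to load on Alpine's musl libc, which may be why the error only appears in this image. On Java 11 the JDK's own TLS stack supports ALPN, and your Zulu 11 run failed on cipher suites rather than ALPN, so recompiling the client jar should not be necessary.)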
What do you mean by saying Zulu JDK without JCE?
The official Zulu JDK ships with all required crypto libraries and providers. If you manually exclude some of the providers or libraries, you'll lose the corresponding functionality.
For example, the SunEC crypto provider implements the ECDSA and ECDH algorithms. If you exclude SunEC from the list of providers, or disable/remove the jdk.crypto.ec module, you'll lose all TLS_ECDH_ECDSA_* and TLS_ECDHE_ECDSA_* cipher suites. That could be the reason for your TLS handshake failure.
A clean Alpine docker image with the Zulu 11 JDK shows the following:
$ docker run alpn_zulu more /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.2
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
$ docker run alpn_zulu java -version
openjdk version "11.0.4" 2019-07-16 LTS
OpenJDK Runtime Environment Zulu11.33+15-CA (build 11.0.4+11-LTS)
OpenJDK 64-Bit Server VM Zulu11.33+15-CA (build 11.0.4+11-LTS, mixed mode)
$ docker run alpn_zulu jrunscript -e "java.util.Arrays.asList(javax.net.ssl.SSLServerSocketFactory.getDefault().getSupportedCipherSuites()).stream().forEach(println)"
Warning: Nashorn engine is planned to be removed from a future JDK release
TLS_AES_128_GCM_SHA256
TLS_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_DSS_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_DSS_WITH_AES_128_CBC_SHA
TLS_EMPTY_RENEGOTIATION_INFO_SCSV
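If you'd rather check from inside the application than via jrunscript, here is a minimal sketch (the class name is mine) that prints the suites the default SSLContext can offer:
import javax.net.ssl.SSLContext;

public class ListCipherSuites {
    public static void main(String[] args) throws Exception {
        // Each missing TLS_ECDH_*/TLS_ECDHE_* entry in this output points at a
        // disabled or removed SunEC provider (the jdk.crypto.ec module).
        SSLContext ctx = SSLContext.getDefault();
        for (String suite : ctx.getSupportedSSLParameters().getCipherSuites()) {
            System.out.println(suite);
        }
    }
}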

Hadoop Single Node Cluster setup error during namenode format

I have installed Apache Hadoop 2.6.0 on Windows 10. I have been trying to fix this issue but cannot understand the error or spot any mistake on my end.
I have set up all the paths correctly; Hadoop shows its version in the command prompt properly.
I have already created a temp directory inside the hadoop directory, i.e. c:\hadoop\temp.
When I am trying to format the Namenode, I am getting this error:
C:\hadoop\bin>hdfs namenode -format
18/07/18 20:44:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = TheBhaskarDas/192.168.44.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.5
STARTUP_MSG: classpath = C:\hadoop\etc\hadoop;C:\hadoop\share\hadoop\common\lib\activation-1.1.jar;C:\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;C:\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;C:\hadoop\share\hadoop\common\lib\api-asn1-api-1.0.0-M20.jar;C:\hadoop\share\hadoop\common\lib\api-util-1.0.0-M20.jar;C:\hadoop\share\hadoop\common\lib\asm-3.2.jar;C:\hadoop\share\hadoop\common\lib\avro-1.7.4.jar;C:\hadoop\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;C:\hadoop\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;C:\hadoop\share\hadoop\common\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\common\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\common\lib\commons-collections-3.2.2.jar;C:\hadoop\share\hadoop\common\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\common\lib\commons-configuration-1.6.jar;C:\hadoop\share\hadoop\common\lib\commons-digester-1.8.jar;C:\hadoop\share\hadoop\common\lib\commons-el-1.0.jar;C:\hadoop\share\hadoop\common\lib\commons-httpclient-3.1.jar;C:\hadoop\share\hadoop\common\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\common\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\common\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\common\lib\commons-math3-3.1.1.jar;C:\hadoop\share\hadoop\common\lib\commons-net-3.1.jar;C:\hadoop\share\hadoop\common\lib\curator-client-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\curator-framework-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\curator-recipes-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\gson-2.2.4.jar;C:\hadoop\share\hadoop\common\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\common\lib\hadoop-annotations-2.6.5.jar;C:\hadoop\share\hadoop\common\lib\hadoop-auth-2.6.5.jar;C:\hadoop\share\hadoop\common\lib\hamcrest-core-1.3.jar;C:\hadoop\share\hadoop\common\lib\htrace-core-3.0.4.jar;C:\hadoop\share\hadoop\common\lib\httpclient-4.2.5.jar;C:\hadoop\share\hadoop\common\lib\httpcore-4.2.5.jar;C:\hadoop\share\hadoop\common\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-jaxrs-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-xc-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jasper-compiler-5.5.23.jar;C:\hadoop\share\hadoop\common\lib\jasper-runtime-5.5.23.jar;C:\hadoop\share\hadoop\common\lib\java-xmlbuilder-0.4.jar;C:\hadoop\share\hadoop\common\lib\jaxb-api-2.2.2.jar;C:\hadoop\share\hadoop\common\lib\jaxb-impl-2.2.3-1.jar;C:\hadoop\share\hadoop\common\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\common\lib\jersey-json-1.9.jar;C:\hadoop\share\hadoop\common\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\common\lib\jets3t-0.9.0.jar;C:\hadoop\share\hadoop\common\lib\jettison-1.1.jar;C:\hadoop\share\hadoop\common\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\common\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\common\lib\jsch-0.1.42.jar;C:\hadoop\share\hadoop\common\lib\jsp-api-2.1.jar;C:\hadoop\share\hadoop\common\lib\jsr305-1.3.9.jar;C:\hadoop\share\hadoop\common\lib\junit-4.11.jar;C:\hadoop\share\hadoop\common\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\common\lib\mockito-all-1.8.5.jar;C:\hadoop\share\hadoop\common\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\common\lib\paranamer-2.3.jar;C:\hadoop\share\hadoop\common\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\common\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\common\lib\slf4j-api-1.7.5.jar;C:\hadoop\share\hadoop\common\lib\slf4j-log4j12-1.7.5.jar;C:\hadoop\share\hadoop\common\lib\snappy-java-1.0.4.1.jar;C:\hadoop
\share\hadoop\common\lib\stax-api-1.0-2.jar;C:\hadoop\share\hadoop\common\lib\xmlenc-0.52.jar;C:\hadoop\share\hadoop\common\lib\xz-1.0.jar;C:\hadoop\share\hadoop\common\lib\zookeeper-3.4.6.jar;C:\hadoop\share\hadoop\common\hadoop-common-2.6.5-tests.jar;C:\hadoop\share\hadoop\common\hadoop-common-2.6.5.jar;C:\hadoop\share\hadoop\common\hadoop-nfs-2.6.5.jar;C:\hadoop\share\hadoop\hdfs;C:\hadoop\share\hadoop\hdfs\lib\asm-3.2.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-daemon-1.0.13.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-el-1.0.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\hdfs\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\hdfs\lib\htrace-core-3.0.4.jar;C:\hadoop\share\hadoop\hdfs\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\hdfs\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\hdfs\lib\jasper-runtime-5.5.23.jar;C:\hadoop\share\hadoop\hdfs\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\hdfs\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\hdfs\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\hdfs\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\hdfs\lib\jsp-api-2.1.jar;C:\hadoop\share\hadoop\hdfs\lib\jsr305-1.3.9.jar;C:\hadoop\share\hadoop\hdfs\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\hdfs\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\hdfs\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\hdfs\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\hdfs\lib\xercesImpl-2.9.1.jar;C:\hadoop\share\hadoop\hdfs\lib\xml-apis-1.3.04.jar;C:\hadoop\share\hadoop\hdfs\lib\xmlenc-0.52.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.6.5-tests.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.6.5.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-nfs-2.6.5.jar;C:\hadoop\share\hadoop\yarn\lib\activation-1.1.jar;C:\hadoop\share\hadoop\yarn\lib\aopalliance-1.0.jar;C:\hadoop\share\hadoop\yarn\lib\asm-3.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\yarn\lib\commons-collections-3.2.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\yarn\lib\commons-httpclient-3.1.jar;C:\hadoop\share\hadoop\yarn\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\yarn\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\yarn\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\yarn\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\yarn\lib\guice-3.0.jar;C:\hadoop\share\hadoop\yarn\lib\guice-servlet-3.0.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-jaxrs-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-xc-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\javax.inject-1.jar;C:\hadoop\share\hadoop\yarn\lib\jaxb-api-2.2.2.jar;C:\hadoop\share\hadoop\yarn\lib\jaxb-impl-2.2.3-1.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-client-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-guice-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-json-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jettison-1.1.jar;C:\hadoop\share\hadoop\yarn\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\yarn\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\yarn\lib\jline-0.9.94.jar;C:\hadoop\share\hadoop\yarn\lib\jsr3
05-1.3.9.jar;C:\hadoop\share\hadoop\yarn\lib\leveldbjni-all-1.8.jar;C:\hadoop\share\hadoop\yarn\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\yarn\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\yarn\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\yarn\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\yarn\lib\stax-api-1.0-2.jar;C:\hadoop\share\hadoop\yarn\lib\xz-1.0.jar;C:\hadoop\share\hadoop\yarn\lib\zookeeper-3.4.6.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-api-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-distributedshell-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-client-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-common-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-registry-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-applicationhistoryservice-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-common-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-nodemanager-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-resourcemanager-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-tests-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-web-proxy-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\lib\aopalliance-1.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\asm-3.2.jar;C:\hadoop\share\hadoop\mapreduce\lib\avro-1.7.4.jar;C:\hadoop\share\hadoop\mapreduce\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\mapreduce\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\mapreduce\lib\guice-3.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\guice-servlet-3.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\hadoop-annotations-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\lib\hamcrest-core-1.3.jar;C:\hadoop\share\hadoop\mapreduce\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\mapreduce\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\mapreduce\lib\javax.inject-1.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-guice-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\junit-4.11.jar;C:\hadoop\share\hadoop\mapreduce\lib\leveldbjni-all-1.8.jar;C:\hadoop\share\hadoop\mapreduce\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\mapreduce\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\mapreduce\lib\paranamer-2.3.jar;C:\hadoop\share\hadoop\mapreduce\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\snappy-java-1.0.4.1.jar;C:\hadoop\share\hadoop\mapreduce\lib\xz-1.0.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-app-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-common-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-plugins-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.6.5-tests.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.6.5.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG: java = 1.8.0_181
************************************************************/
18/07/18 20:44:55 INFO namenode.NameNode: createNameNode [-format]
[Fatal Error] core-site.xml:19:6: The processing instruction target matching "[xX][mM][lL]" is not allowed.
18/07/18 20:44:55 FATAL conf.Configuration: error parsing conf core-site.xml
org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2432)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2420)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2491)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2361)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1099)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1071)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1409)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:485)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1375)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
18/07/18 20:44:55 FATAL namenode.NameNode: Failed to start namenode.
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2597)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2361)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1099)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1071)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1409)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:485)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1375)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2432)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2420)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2491)
... 11 more
18/07/18 20:44:55 INFO util.ExitUtil: Exiting with status 1
18/07/18 20:44:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at TheBhaskarDas/192.168.44.1
************************************************************/
C:\hadoop\bin>
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>C:\hadoop\temp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:50071</value>
    </property>
</configuration>
I have fixed it.
I removed all the characters/anything before <?xml and validated the XML files with https://www.w3schools.com/xml/xml_validator.asp
new core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>\hadoop\temp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:50071</value>
    </property>
</configuration>
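If you'd rather validate locally than through a web form, a minimal sketch using the JDK's built-in parser (the same kind of parse Hadoop's Configuration performs) throws the identical SAXParseException, with line and column, on a malformed file; the class name is mine:
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;

public class ValidateConfigXml {
    public static void main(String[] args) throws Exception {
        // Fails with "The processing instruction target matching "[xX][mM][lL]"
        // is not allowed" if a second <?xml ...?> declaration appears anywhere
        // after the first character of the file.
        DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(new File(args[0]));
        System.out.println(args[0] + " is well-formed XML");
    }
}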

Issue while installing hadoop-2.2.0 in linux 64 bit machine

Using this link, I tried installing Hadoop version 2.2.0 (single-node cluster) in Ubuntu 12.04 (64-bit machine):
http://bigdatahandler.com/hadoop-hdfs/installing-single-node-hadoop-2-2-0-on-ubuntu/
While formatting the HDFS file system via the namenode with the following command:
hadoop namenode -format
I get the following issue:
14/08/07 10:38:39 FATAL namenode.NameNode: Exception in namenode join
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/usr/local/hadoop/etc/hadoop/mapred-site.xml; lineNumber: 27; columnNumber: 1; Content is not allowed in trailing section.
What do I need to do in order to solve this issue?
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Probably some character in your XML that you forgot to erase. Please post your full XML, like @Abhishek said!
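In this case the exception's lineNumber: 27; columnNumber: 1 points at the first character after </configuration>, so deleting everything after that closing tag (stray characters, even invisible ones) should let hadoop namenode -format run; the ValidateConfigXml sketch above will report the same line and column if anything is left behind.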

How to set the logging level to DEBUG in Tomcat?

I would like to set the logging level to DEBUG in Tomcat, but the console nevertheless only shows INFO and WARN output.
Could anybody tell me what's wrong?
My C:\tomcat\logging.properties:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional DEBUGrmation regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
handlers = 1catalina.org.apache.juli.FileHandler, 2localhost.org.apache.juli.FileHandler, 3manager.org.apache.juli.FileHandler, 4host-manager.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
############################################################
# Handler specific properties.
# Describes specific configuration DEBUG for Handlers.
############################################################
1catalina.org.apache.juli.FileHandler.level = DEBUG
1catalina.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.FileHandler.prefix = catalina.
2localhost.org.apache.juli.FileHandler.level = DEBUG
2localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.FileHandler.prefix = localhost.
3manager.org.apache.juli.FileHandler.level = DEBUG
3manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.FileHandler.prefix = manager.
4host-manager.org.apache.juli.FileHandler.level = DEBUG
4host-manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
4host-manager.org.apache.juli.FileHandler.prefix = host-manager.
java.util.logging.ConsoleHandler.level = DEBUG
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = DEBUG
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = DEBUG
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = DEBUG
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler
# For example, set the com.xyz.foo logger to only log SEVERE
# messages:
#org.apache.catalina.startup.ContextConfig.level = DEBUG
#org.apache.catalina.startup.HostConfig.level = DEBUG
#org.apache.catalina.session.ManagerBase.level = DEBUG
#org.apache.catalina.core.AprLifecycleListener.level=DEBUG
Example of my log:
INFO: Deploying configuration descriptor manager.xml
08.11.2010 1:06:42 org.apache.catalina.startup.HostConfig deployWAR
INFO: Deploying web application archive spring-mvc-trial.war
08.11.2010 1:06:46 org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory docs
08.11.2010 1:06:46 org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory examples
08.11.2010 1:06:46 org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory ROOT
08.11.2010 1:06:46 org.apache.coyote.http11.Http11AprProtocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
08.11.2010 1:06:46 org.apache.coyote.ajp.AjpAprProtocol start
INFO: Starting Coyote AJP/1.3 on ajp-8009
08.11.2010 1:06:46 org.apache.catalina.startup.Catalina start
INFO: Server startup in 3777 ms
08.11.2010 1:09:36 org.apache.coyote.http11.Http11AprProtocol pause
INFO: Pausing Coyote HTTP/1.1 on http-8080
08.11.2010 1:09:36 org.apache.coyote.ajp.AjpAprProtocol pause
INFO: Pausing Coyote AJP/1.3 on ajp-8009
08.11.2010 1:09:37 org.apache.catalina.core.StandardService stop
INFO: Stopping service Catalina
08.11.2010 1:09:37 org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
SEVERE: The web application [/spring-mvc-trial] registered the JBDC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
08.11.2010 1:09:37 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/spring-mvc-trial] appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak.
08.11.2010 1:09:38 org.apache.coyote.http11.Http11AprProtocol destroy
INFO: Stopping Coyote HTTP/1.1 on http-8080
08.11.2010 1:09:38 org.apache.coyote.ajp.AjpAprProtocol destroy
INFO: Stopping Coyote AJP/1.3 on ajp-8009
Firstly, the level name to use is FINE, not DEBUG. Let's assume for a minute that DEBUG is actually valid, as it makes the following explanation make a bit more sense...
In the Handler specific properties section, you're setting the logging level for those handlers to DEBUG. This means the handlers will handle any log messages with the DEBUG level or higher. It doesn't necessarily mean any DEBUG messages are actually getting passed to the handlers.
In the Facility specific properties section, you're setting the logging level for a few explicitly-named loggers to DEBUG. For those loggers, anything at level DEBUG or above will get passed to the handlers.
The default logging level is INFO, and apart from the loggers mentioned in the Facility specific properties section, all loggers will have that level.
If you want to see all FINE messages, add this:
.level = FINE
However, this will generate a vast quantity of log messages. It's probably more useful to set the logging level for your code:
your.package.level = FINE
See the Tomcat 6/Tomcat 7 logging documentation for more information. The example logging.properties file shown there uses FINE instead of DEBUG:
...
1catalina.org.apache.juli.FileHandler.level = FINE
...
and also gives you examples of setting additional logging levels:
# For example, set the com.xyz.foo logger to only log SEVERE
# messages:
#org.apache.catalina.startup.ContextConfig.level = FINE
#org.apache.catalina.startup.HostConfig.level = FINE
#org.apache.catalina.session.ManagerBase.level = FINE
JULI logging levels for Tomcat
SEVERE - Serious failures
WARNING - Potential problems
INFO - Informational messages
CONFIG - Static configuration messages
FINE - Trace messages
FINER - Detailed trace messages
FINEST - Highly detailed trace messages
You can find more here:
https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/pasoe-admin/tomcat-logging.html
In addition to what has already been said (DEBUG -> FINE, FINER, FINEST in JULI), if you're running Tomcat from an IDE, say Eclipse, note that it stores the configuration on a different path than CATALINA_HOME, so you may need to add
-Djava.util.logging.config.file="C:\apache-tomcat-9.0.31\conf\logging.properties"
to explicitly set your logging properties.
More on this here: Where can I view Tomcat log files in Eclipse?
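A quick way to confirm a FINE-level configuration is actually being picked up is a small logging probe (the class name is mine); run it with -Djava.util.logging.config.file pointing at a minimal properties file (say, handlers = java.util.logging.ConsoleHandler, .level = FINE, and java.util.logging.ConsoleHandler.level = FINE) and check which lines reach the console:
import java.util.logging.Logger;

public class LogLevelProbe {
    private static final Logger LOG = Logger.getLogger(LogLevelProbe.class.getName());

    public static void main(String[] args) {
        LOG.severe("SEVERE: always visible");
        LOG.info("INFO: visible at the default level");
        // Printed only when both the logger's effective level and the
        // ConsoleHandler's level are FINE or lower.
        LOG.fine("FINE: the JULI equivalent of DEBUG");
    }
}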
