Hadoop installation issue on Ubuntu 18.04: hadoop namenode -format error

I'm trying to install Hadoop.
I am following this article for the installation instructions. One of the steps is to format the Hadoop file system using the command:
root@ben-Aspire-E5-575G:~# hadoop namenode -format
I got the following error:
2018-10-12 00:08:16,884 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-10-12 00:08:16,896 INFO namenode.NameNode: createNameNode [-format]
2018-10-12 00:08:17,024 ERROR conf.Configuration: error parsing conf hdfs-site.xml
com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:687)
at com.ctc.wstx.sr.BasicStreamReader.throwUnexpectedEOF(BasicStreamReader.java:5608)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2802)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3257)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3063)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2986)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2926)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2806)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1366)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:339)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:572)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:174)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:156)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2018-10-12 00:08:17,031 ERROR namenode.NameNode: Failed to start namenode.
java.lang.RuntimeException: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3003)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2926)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2806)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1366)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:339)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:572)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:174)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:156)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
Caused by: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:687)
at com.ctc.wstx.sr.BasicStreamReader.throwUnexpectedEOF(BasicStreamReader.java:5608)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2802)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3257)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3063)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2986)
... 11 more
2018-10-12 00:08:17,035 INFO util.ExitUtil: Exiting with status 1: java.lang.RuntimeException: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
2018-10-12 00:08:17,043 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ben-Aspire-E5-575G/127.0.1.1
************************************************************/
The hdfs-site.xml file is here:
<xml version="1.0" encoding="UTF-8">
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create t$
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>

You're missing both a closing </description> and a final </xml> outside of the closing configuration tag in the file you've provided. (The parser expects </xml> because your first line is <xml version="1.0" encoding="UTF-8"> rather than the declaration <?xml version="1.0" encoding="UTF-8"?>, so <xml> is read as an ordinary element.)
As the error says, there's a tag missing.
The description is optional, so feel free to delete that tag entirely.
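For reference, here is a minimal corrected hdfs-site.xml keeping the values from your file (the description text is shortened and the license comment is omitted):
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication. The actual number of replications can be specified when the file is created.</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>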

I think you have to fix the "description" tag; it has no proper closing tag.
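You can also sanity-check the file locally before re-running the format. For example, if xmllint is installed (on Ubuntu it comes with the libxml2-utils package), this prints nothing when the file is well-formed and the exact parse error otherwise:
xmllint --noout /usr/local/hadoop/etc/hadoop/hdfs-site.xml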

Related

Hadoop Single Node Cluster setup error during namenode format

I have installed Apache Hadoop 2.6.0 on Windows 10. I have been trying to fix this issue but cannot work out the error or any mistake on my end.
I have set up all the paths correctly, and hadoop version shows the version properly in the command prompt.
I have already created a temp directory inside the Hadoop directory, c:\hadoop\temp.
When I try to format the NameNode, I get this error:
C:\hadoop\bin>hdfs namenode -format
18/07/18 20:44:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = TheBhaskarDas/192.168.44.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.5
STARTUP_MSG: classpath = C:\hadoop\etc\hadoop;C:\hadoop\share\hadoop\common\lib\activation-1.1.jar;C:\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;C:\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;C:\hadoop\share\hadoop\common\lib\api-asn1-api-1.0.0-M20.jar;C:\hadoop\share\hadoop\common\lib\api-util-1.0.0-M20.jar;C:\hadoop\share\hadoop\common\lib\asm-3.2.jar;C:\hadoop\share\hadoop\common\lib\avro-1.7.4.jar;C:\hadoop\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;C:\hadoop\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;C:\hadoop\share\hadoop\common\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\common\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\common\lib\commons-collections-3.2.2.jar;C:\hadoop\share\hadoop\common\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\common\lib\commons-configuration-1.6.jar;C:\hadoop\share\hadoop\common\lib\commons-digester-1.8.jar;C:\hadoop\share\hadoop\common\lib\commons-el-1.0.jar;C:\hadoop\share\hadoop\common\lib\commons-httpclient-3.1.jar;C:\hadoop\share\hadoop\common\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\common\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\common\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\common\lib\commons-math3-3.1.1.jar;C:\hadoop\share\hadoop\common\lib\commons-net-3.1.jar;C:\hadoop\share\hadoop\common\lib\curator-client-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\curator-framework-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\curator-recipes-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\gson-2.2.4.jar;C:\hadoop\share\hadoop\common\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\common\lib\hadoop-annotations-2.6.5.jar;C:\hadoop\share\hadoop\common\lib\hadoop-auth-2.6.5.jar;C:\hadoop\share\hadoop\common\lib\hamcrest-core-1.3.jar;C:\hadoop\share\hadoop\common\lib\htrace-core-3.0.4.jar;C:\hadoop\share\hadoop\common\lib\httpclient-4.2.5.jar;C:\hadoop\share\hadoop\common\lib\httpcore-4.2.5.jar;C:\hadoop\share\hadoop\common\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-jaxrs-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-xc-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jasper-compiler-5.5.23.jar;C:\hadoop\share\hadoop\common\lib\jasper-runtime-5.5.23.jar;C:\hadoop\share\hadoop\common\lib\java-xmlbuilder-0.4.jar;C:\hadoop\share\hadoop\common\lib\jaxb-api-2.2.2.jar;C:\hadoop\share\hadoop\common\lib\jaxb-impl-2.2.3-1.jar;C:\hadoop\share\hadoop\common\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\common\lib\jersey-json-1.9.jar;C:\hadoop\share\hadoop\common\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\common\lib\jets3t-0.9.0.jar;C:\hadoop\share\hadoop\common\lib\jettison-1.1.jar;C:\hadoop\share\hadoop\common\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\common\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\common\lib\jsch-0.1.42.jar;C:\hadoop\share\hadoop\common\lib\jsp-api-2.1.jar;C:\hadoop\share\hadoop\common\lib\jsr305-1.3.9.jar;C:\hadoop\share\hadoop\common\lib\junit-4.11.jar;C:\hadoop\share\hadoop\common\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\common\lib\mockito-all-1.8.5.jar;C:\hadoop\share\hadoop\common\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\common\lib\paranamer-2.3.jar;C:\hadoop\share\hadoop\common\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\common\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\common\lib\slf4j-api-1.7.5.jar;C:\hadoop\share\hadoop\common\lib\slf4j-log4j12-1.7.5.jar;C:\hadoop\share\hadoop\common\lib\snappy-java-1.0.4.1.jar;C:\hadoop
\share\hadoop\common\lib\stax-api-1.0-2.jar;C:\hadoop\share\hadoop\common\lib\xmlenc-0.52.jar;C:\hadoop\share\hadoop\common\lib\xz-1.0.jar;C:\hadoop\share\hadoop\common\lib\zookeeper-3.4.6.jar;C:\hadoop\share\hadoop\common\hadoop-common-2.6.5-tests.jar;C:\hadoop\share\hadoop\common\hadoop-common-2.6.5.jar;C:\hadoop\share\hadoop\common\hadoop-nfs-2.6.5.jar;C:\hadoop\share\hadoop\hdfs;C:\hadoop\share\hadoop\hdfs\lib\asm-3.2.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-daemon-1.0.13.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-el-1.0.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\hdfs\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\hdfs\lib\htrace-core-3.0.4.jar;C:\hadoop\share\hadoop\hdfs\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\hdfs\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\hdfs\lib\jasper-runtime-5.5.23.jar;C:\hadoop\share\hadoop\hdfs\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\hdfs\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\hdfs\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\hdfs\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\hdfs\lib\jsp-api-2.1.jar;C:\hadoop\share\hadoop\hdfs\lib\jsr305-1.3.9.jar;C:\hadoop\share\hadoop\hdfs\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\hdfs\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\hdfs\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\hdfs\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\hdfs\lib\xercesImpl-2.9.1.jar;C:\hadoop\share\hadoop\hdfs\lib\xml-apis-1.3.04.jar;C:\hadoop\share\hadoop\hdfs\lib\xmlenc-0.52.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.6.5-tests.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.6.5.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-nfs-2.6.5.jar;C:\hadoop\share\hadoop\yarn\lib\activation-1.1.jar;C:\hadoop\share\hadoop\yarn\lib\aopalliance-1.0.jar;C:\hadoop\share\hadoop\yarn\lib\asm-3.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\yarn\lib\commons-collections-3.2.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\yarn\lib\commons-httpclient-3.1.jar;C:\hadoop\share\hadoop\yarn\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\yarn\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\yarn\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\yarn\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\yarn\lib\guice-3.0.jar;C:\hadoop\share\hadoop\yarn\lib\guice-servlet-3.0.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-jaxrs-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-xc-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\javax.inject-1.jar;C:\hadoop\share\hadoop\yarn\lib\jaxb-api-2.2.2.jar;C:\hadoop\share\hadoop\yarn\lib\jaxb-impl-2.2.3-1.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-client-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-guice-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-json-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jettison-1.1.jar;C:\hadoop\share\hadoop\yarn\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\yarn\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\yarn\lib\jline-0.9.94.jar;C:\hadoop\share\hadoop\yarn\lib\jsr3
05-1.3.9.jar;C:\hadoop\share\hadoop\yarn\lib\leveldbjni-all-1.8.jar;C:\hadoop\share\hadoop\yarn\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\yarn\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\yarn\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\yarn\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\yarn\lib\stax-api-1.0-2.jar;C:\hadoop\share\hadoop\yarn\lib\xz-1.0.jar;C:\hadoop\share\hadoop\yarn\lib\zookeeper-3.4.6.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-api-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-distributedshell-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-client-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-common-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-registry-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-applicationhistoryservice-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-common-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-nodemanager-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-resourcemanager-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-tests-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-web-proxy-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\lib\aopalliance-1.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\asm-3.2.jar;C:\hadoop\share\hadoop\mapreduce\lib\avro-1.7.4.jar;C:\hadoop\share\hadoop\mapreduce\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\mapreduce\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\mapreduce\lib\guice-3.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\guice-servlet-3.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\hadoop-annotations-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\lib\hamcrest-core-1.3.jar;C:\hadoop\share\hadoop\mapreduce\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\mapreduce\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\mapreduce\lib\javax.inject-1.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-guice-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\junit-4.11.jar;C:\hadoop\share\hadoop\mapreduce\lib\leveldbjni-all-1.8.jar;C:\hadoop\share\hadoop\mapreduce\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\mapreduce\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\mapreduce\lib\paranamer-2.3.jar;C:\hadoop\share\hadoop\mapreduce\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\snappy-java-1.0.4.1.jar;C:\hadoop\share\hadoop\mapreduce\lib\xz-1.0.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-app-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-common-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-plugins-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.6.5-tests.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.6.5.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG: java = 1.8.0_181
************************************************************/
18/07/18 20:44:55 INFO namenode.NameNode: createNameNode [-format]
[Fatal Error] core-site.xml:19:6: The processing instruction target matching "[xX][mM][lL]" is not allowed.
18/07/18 20:44:55 FATAL conf.Configuration: error parsing conf core-site.xml
org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2432)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2420)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2491)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2361)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1099)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1071)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1409)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:485)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1375)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
18/07/18 20:44:55 FATAL namenode.NameNode: Failed to start namenode.
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2597)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2361)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1099)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1071)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1409)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:485)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1375)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2432)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2420)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2491)
... 11 more
18/07/18 20:44:55 INFO util.ExitUtil: Exiting with status 1
18/07/18 20:44:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at TheBhaskarDas/192.168.44.1
************************************************************/
C:\hadoop\bin>
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>C:\hadoop\temp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50071</value>
</property>
</configuration>
I have fixed it.
I removed everything before <?xml and validated the XML files with https://www.w3schools.com/xml/xml_validator.asp
new core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>\hadoop\temp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50071</value>
</property>
</configuration>
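If this happens again, a quick way to spot a stray or duplicated XML prolog from the Windows command prompt is findstr (built into Windows; the path is the one from the log above):
findstr /n /c:"<?xml version" C:\hadoop\etc\hadoop\core-site.xml
Only line 1 should match; a second match means another declaration is buried in the file, which is exactly what the parser rejects.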

Hadoop Wordcount example failing due to AM container

I've been trying to run the Hadoop wordcount example for a while now, but I am facing some issues. I have Hadoop 2.7.1 and am running it on Windows. Below are the error details:
command:
yarn jar C:\hadoop-2.7.1\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.1.jar wordcount input output
Output:
INFO input.FileInputFormat: Total input paths to process : 1
INFO mapreduce.JobSubmitter: number of splits:1
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1490853163147_0009
INFO impl.YarnClientImpl: Submitted application application_1490853163147_0009
INFO mapreduce.Job: The url to track the job: http://**********/proxy/application_1490853163147_0009/
INFO mapreduce.Job: Running job: job_1490853163147_0009
INFO mapreduce.Job: Job job_1490853163147_0009 running in uber mode : false
INFO mapreduce.Job: map 0% reduce 0%
INFO mapreduce.Job: Job job_1490853163147_0009 failed with state FAILED due to: Application application_1490853163147_0009 failed 2 times due to AM Container for appattempt_1490853163147_0009_000002 exited with exitCode: 1639
For more detailed output, check application tracking page: http://********:****/cluster/app/application_1490853163147_0009 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1490853163147_0009_02_000001
Exit code: 1639
Exception message: Incorrect command line arguments.
Stack trace: ExitCodeException exitCode=1639: Incorrect command line arguments.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Shell output: Usage: task create [TASKNAME] [COMMAND_LINE] |
task isAlive [TASKNAME] |
task kill [TASKNAME]
task processList [TASKNAME]
Creates a new task jobobject with taskname
Checks if task jobobject is alive
Kills task jobobject
Prints to stdout a list of processes in the task
along with their resource usage. One process per line
and comma separated info per process
ProcessId,VirtualMemoryCommitted(bytes),
WorkingSetSize(bytes),CpuTime(Millisec,Kernel+User)
Container exited with a non-zero exit code 1639
Failing this attempt. Failing the application.
INFO mapreduce.Job: Counters: 0
Yarn-site.xml:
<configuration>
<property>
<name>yarn.application.classpath</name>
<value>
C:\hadoop-2.7.1\etc\hadoop,
C:\hadoop-2.7.1\share\hadoop\common\*,
C:\hadoop-2.7.1\share\hadoop\common\lib\*,
C:\hadoop-2.7.1\share\hadoop\hdfs\*,
C:\hadoop-2.7.1\share\hadoop\hdfs\lib\*,
C:\hadoop-2.7.1\share\hadoop\mapreduce\*,
C:\hadoop-2.7.1\share\hadoop\mapreduce\lib\*,
C:\hadoop-2.7.1\share\hadoop\yarn\*,
C:\hadoop-2.7.1\share\hadoop\yarn\lib\*
</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
<value>98.5</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2200</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>500</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<description>Where to aggregate logs to.</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/tmp/logs</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>259200</value>
</property>
<property>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>3600</value>
</property>
</configuration>
mapred.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Any idea on what is going wrong?
exitCode: 1639 suggests you are running Hadoop on Windows.
https://github.com/OctopusDeploy/Issues/issues/1346
I faced exactly the same problem. I was following a guide on how to install Hadoop 2.6.0 (http://www.ics.uci.edu/~shantas/Install_Hadoop-2.6.0_on_Windows10.pdf) while actually installing Hadoop 2.8.0.
As soon as I was done, I ran
hadoop jar D:\hadoop-2.8.0\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.8.0.jar wordcount /foo/bar/LICENSE.txt /out1
And got (from yarn nodemanager's logs):
17/06/19 13:15:30 INFO monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1497902417767_0004_01_000001
17/06/19 13:15:30 INFO nodemanager.DefaultContainerExecutor: launchContainer: [D:\hadoop-2.8.0\bin\winutils.exe, task, create, -m, -1, -c, -1, container_1497902417767_0004_01_000001, cmd /c D:/hadoop/temp/nm-localdir/usercache/******/appcache/application_1497902417767_0004/container_1497902417767_0004_01_000001/default_container_executor.cmd]
17/06/19 13:15:30 WARN nodemanager.DefaultContainerExecutor: Exit code from container container_1497902417767_0004_01_000001 is : 1639
17/06/19 13:15:30 WARN nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1497902417767_0004_01_000001 and exit code: 1639
ExitCodeException exitCode=1639: Incorrect command line arguments.
TaskExit: error (1639): Invalid command line argument. Consult the Windows Installer SDK for detailed command line help.
Another symptom was (from yarn nodemanager's logs):
17/06/19 13:25:49 WARN util.SysInfoWindows: Expected split length of sysInfo to be 11. Got 7
The solution was to get binaries compatible with Hadoop 2.8.0: https://github.com/steveloughran/winutils/tree/master/hadoop-2.8.0-RC3/bin
Once I got a correct winutils.exe, my problem went away.
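A sketch of the swap with the Hadoop daemons stopped, assuming the compatible bin folder from that repository was downloaded to D:\winutils (D:\hadoop-2.8.0 matches the command above; the D:\winutils location is just an example):
rem back up the shipped binaries, then copy in the compatible ones
move D:\hadoop-2.8.0\bin D:\hadoop-2.8.0\bin.bak
xcopy /E /I D:\winutils\hadoop-2.8.0-RC3\bin D:\hadoop-2.8.0\bin
After copying, restart the YARN daemons before re-running the job.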

Hadoop MapReduce Job stuck because auxService:mapreduce_shuffle does not exist

I've checked multiple posts with the same question, and the solution is always to add the following to yarn-site.xml:
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarm.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
I covered both properties in the config and restarted yarn. The problem still remains.
The error is:
17/02/15 15:43:34 INFO mapreduce.Job: Task Id : attempt_1487202110321_0001_m_000000_2, Status : FAILED
Container launch failed for container_1487202110321_0001_01_000007 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I was hoping for a typo but can't seem to find it. I tried directly copying what's on Stack Overflow into the XML file, but it still doesn't work.
What else can I try?
EDIT:
Since the error message spells it auxService, I modified yarn-site.xml accordingly, changing every aux-service to auxService, but it's still not working.
EDIT2:
In case anyone's interested, I call this command
hadoop jar hadoop-streaming-2.7.1.jar \
-input /user/myfolder/input1/* \
-output /user/myfolder/output1 \
-mapper <path>/<to>/<mapper>/mapper.py \
-reducer <path>/<to>/<reducer>/reducer.py
while I'm already in /usr/local/cellar/hadoop/2.7.1/libexec/share/hadoop/tools/lib/
EDIT 3:
I'm a dumbass. Proof-read the script, guys!
Update the property name in yarn-site.xml to yarn.nodemanager.aux-services:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
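After correcting the name, restart YARN so the NodeManager registers the aux service again (a sketch using the standard sbin scripts shipped with Hadoop 2.7.1):
sbin/stop-yarn.sh
sbin/start-yarn.sh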

Issue while installing hadoop-2.2.0 in linux 64 bit machine

Using this link, I tried installing Hadoop 2.2.0 (single-node cluster) on Ubuntu 12.04 (64-bit machine):
http://bigdatahandler.com/hadoop-hdfs/installing-single-node-hadoop-2-2-0-on-ubuntu/
While formatting the HDFS file system via the namenode using the following command:
hadoop namenode -format
I get the following issue:
14/08/07 10:38:39 FATAL namenode.NameNode: Exception in namenode join
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/usr/local/hadoop/etc/hadoop/mapred-site.xml; lineNumber: 27; columnNumber: 1; Content is not allowed in trailing section.
What do I need to do to solve this issue?
Mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
14/08/07 10:38:39 FATAL namenode.NameNode: Exception in namenode join java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/usr/local/hadoop/etc/hadoop/mapred-site.xml; lineNumber: 27; columnNumber: 1; Content is not allowed in trailing section
Probably some character in your XML that you forgot to erase. Please post your full XML, like @Abhishek said!
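One quick way to see what is sitting after the closing tag is to print the tail of the file with non-printing characters made visible (the path is the one from the error; cat -A is GNU cat):
cat -A /usr/local/hadoop/etc/hadoop/mapred-site.xml | tail -n 10
The parser points at line 27, column 1, so anything shown after </configuration>, even a single stray character, has to be deleted.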

Unable to create default datanode in hadoop Cluster

I am using Ubuntu 10.04. I installed Hadoop in my local directory as a standalone installation.
~-desktop:~$ hadoop/bin/hadoop version
Hadoop 1.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
Compiled by hortonfo on Mon May 6 06:59:37 UTC 2013
From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
This command was run using /home/circar/hadoop/hadoop-core-1.2.0.jar
conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/circar/hadoop/dataFiles</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
I've formatted the namenode twice:
~-desktop:~$ hadoop/bin/hadoop namenode -format
Then I start Hadoop with:
~-desktop:~$ hadoop/bin/start-all.sh
It shows the result as:
starting namenode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-namenode-circar-desktop.out
circar@localhost's password:
localhost: starting datanode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-datanode-circar-desktop.out
circar@localhost's password:
localhost: starting secondarynamenode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-secondarynamenode-circar-desktop.out
starting jobtracker, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-jobtracker-circar-desktop.out
circar@localhost's password:
localhost: starting tasktracker, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-tasktracker-circar-desktop.out
But /logs/hadoop-circar-datanode-circar-desktop.log shows this error:
2013-06-24 17:32:47,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = circar-desktop/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.6.0_26
************************************************************/
2013-06-24 17:32:47,315 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-06-24 17:32:47,324 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-06-24 17:32:47,325 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-06-24 17:32:47,325 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-06-24 17:32:47,447 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-06-24 17:32:47,450 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-06-24 17:32:49,265 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/circar/hadoop/dataFiles/dfs/data: namenode namespaceID = 186782509; datanode namespaceID = 1733977738
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:412)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:319)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1698)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1637)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1655)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1781)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1798)
2013-06-24 17:32:49,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at circar-desktop/127.0.1.1
************************************************************/
jps shows:
~-desktop:~$ jps
8084 Jps
7458 JobTracker
7369 SecondaryNameNode
7642 TaskTracker
6971 NameNode
When I try to stop it, it shows:
~-desktop:~$ hadoop/bin/stop-all.sh
stopping jobtracker
circar@localhost's password:
localhost: stopping tasktracker
stopping namenode
circar@localhost's password:
localhost: *no datanode to stop*
circar@localhost's password:
localhost: stopping secondarynamenode
What am I doing wrong? Can anyone help me?
Ravi is correct. But also make sure that the cluster_id in both ${dfs.data.dir}/current/VERSION and ${dfs.name.dir}/current/VERSION matches. If not, change the data node's cluster_id to the same value as the namenode's. After making the changes, follow the steps Ravi mentioned.
The NameNode generates a new namespaceID every time you format HDFS. DataNodes bind themselves to the NameNode through that namespaceID.
Follow the steps below to fix the problem (a command sketch follows the steps):
a) Stop the problematic DataNode.
b) Edit the value of namespaceID in ${dfs.data.dir}/current/VERSION to match the corresponding value of the current NameNode in ${dfs.name.dir}/current/VERSION.
c) Restart the DataNode.
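For the setup in this question, with hadoop.tmp.dir set to /home/circar/hadoop/dataFiles and the default dfs.name.dir/dfs.data.dir locations underneath it (an assumption, since those properties are not set explicitly), the check looks roughly like this:
~-desktop:~$ hadoop/bin/stop-all.sh
~-desktop:~$ grep namespaceID /home/circar/hadoop/dataFiles/dfs/name/current/VERSION
~-desktop:~$ grep namespaceID /home/circar/hadoop/dataFiles/dfs/data/current/VERSION
Edit the datanode's VERSION file so its namespaceID matches the namenode's value, then run hadoop/bin/start-all.sh again.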
