Issue while installing hadoop-2.2.0 on a Linux 64-bit machine - hadoop

Using this link, I tried installing Hadoop 2.2.0 (single-node cluster) on Ubuntu 12.04 (64-bit machine):
http://bigdatahandler.com/hadoop-hdfs/installing-single-node-hadoop-2-2-0-on-ubuntu/
While formatting the HDFS file system via the namenode using the following command
hadoop namenode -format
I get the following error:
14/08/07 10:38:39 FATAL namenode.NameNode: Exception in namenode join
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/usr/local/hadoop/etc/hadoop/mapred-site.xml; lineNumber: 27; columnNumber: 1; Content is not allowed in trailing section.
What do I need to do to solve this issue?
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

14/08/07 10:38:39 FATAL namenode.NameNode: Exception in namenode join java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/usr/local/hadoop/etc/hadoop/mapred-site.xml; lineNumber: 27; columnNumber: 1; Content is not allowed in trailing section
Probably there is some character in your XML that you forgot to erase: "Content is not allowed in trailing section" means the parser found content after the closing </configuration> tag. Please post your full XML, as #Abhishek said!
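A quick way to confirm this (a hedged suggestion; it assumes the xmllint tool from libxml2-utils is installed) is to validate the file and dump its last bytes so any stray or invisible characters after </configuration> become visible:
# report any well-formedness error, including trailing content
xmllint --noout /usr/local/hadoop/etc/hadoop/mapred-site.xml
# show the last 40 bytes of the file character by character
tail -c 40 /usr/local/hadoop/etc/hadoop/mapred-site.xml | od -c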

Related

Hadoop installation issue ubuntu 18.04: hadoop namenode -format error

I'm trying to do a Hadoop installation.
I am following this article for installation instructions. One of the steps is to format the Hadoop file system using the command:
root#ben-Aspire-E5-575G:~# hadoop namenode -format
I got the following error:
2018-10-12 00:08:16,884 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-10-12 00:08:16,896 INFO namenode.NameNode: createNameNode [-format]
2018-10-12 00:08:17,024 ERROR conf.Configuration: error parsing conf hdfs-site.xml
com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:687)
at com.ctc.wstx.sr.BasicStreamReader.throwUnexpectedEOF(BasicStreamReader.java:5608)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2802)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3257)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3063)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2986)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2926)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2806)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1366)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:339)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:572)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:174)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:156)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2018-10-12 00:08:17,031 ERROR namenode.NameNode: Failed to start namenode.
java.lang.RuntimeException: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3003)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2926)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2806)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1366)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:339)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:572)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:174)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:156)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
Caused by: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:687)
at com.ctc.wstx.sr.BasicStreamReader.throwUnexpectedEOF(BasicStreamReader.java:5608)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2802)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3257)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3063)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2986)
... 11 more
2018-10-12 00:08:17,035 INFO util.ExitUtil: Exiting with status 1: java.lang.RuntimeException: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF; was expecting a close tag for element <xml>
at [row,col,system-id]: [49,0,"file:/usr/local/hadoop/etc/hadoop/hdfs-site.xml"]
2018-10-12 00:08:17,043 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ben-Aspire-E5-575G/127.0.1.1
************************************************************/
The hdfs-site.xml file is here:
<xml version="1.0" encoding="UTF-8">
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create t$
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
You're missing a closing </description> tag, and the first line of the file is wrong: <xml version="1.0" encoding="UTF-8"> is parsed as an ordinary <xml> element that is never closed, which is exactly what the "was expecting a close tag for element <xml>" error is complaining about. The first line should be the XML declaration <?xml version="1.0" encoding="UTF-8"?>.
As the error says, there's a tag missing.
The description element is optional, so feel free to delete that tag entirely.
I think you have to fix the "description" tag. There is no proper closing tag.
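For reference, a corrected hdfs-site.xml might look like the sketch below (values copied from the question; the license comment is omitted and the description text shortened here):
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>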

Hadoop Single Node Cluster setup error during namenode format

I have installed Apache Hadoop 2.6.0 on Windows 10. I have been trying to fix this issue but fail to understand the error or any mistake on my end.
I have set up all the paths correctly; hadoop version shows the version in the command prompt properly.
I have already created a temp directory inside the Hadoop directory, c:\hadoop\temp.
When I try to format the Namenode, I get this error:
C:\hadoop\bin>hdfs namenode -format
18/07/18 20:44:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = TheBhaskarDas/192.168.44.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.5
STARTUP_MSG: classpath = C:\hadoop\etc\hadoop;C:\hadoop\share\hadoop\common\lib\activation-1.1.jar;C:\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;C:\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;C:\hadoop\share\hadoop\common\lib\api-asn1-api-1.0.0-M20.jar;C:\hadoop\share\hadoop\common\lib\api-util-1.0.0-M20.jar;C:\hadoop\share\hadoop\common\lib\asm-3.2.jar;C:\hadoop\share\hadoop\common\lib\avro-1.7.4.jar;C:\hadoop\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;C:\hadoop\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;C:\hadoop\share\hadoop\common\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\common\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\common\lib\commons-collections-3.2.2.jar;C:\hadoop\share\hadoop\common\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\common\lib\commons-configuration-1.6.jar;C:\hadoop\share\hadoop\common\lib\commons-digester-1.8.jar;C:\hadoop\share\hadoop\common\lib\commons-el-1.0.jar;C:\hadoop\share\hadoop\common\lib\commons-httpclient-3.1.jar;C:\hadoop\share\hadoop\common\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\common\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\common\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\common\lib\commons-math3-3.1.1.jar;C:\hadoop\share\hadoop\common\lib\commons-net-3.1.jar;C:\hadoop\share\hadoop\common\lib\curator-client-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\curator-framework-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\curator-recipes-2.6.0.jar;C:\hadoop\share\hadoop\common\lib\gson-2.2.4.jar;C:\hadoop\share\hadoop\common\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\common\lib\hadoop-annotations-2.6.5.jar;C:\hadoop\share\hadoop\common\lib\hadoop-auth-2.6.5.jar;C:\hadoop\share\hadoop\common\lib\hamcrest-core-1.3.jar;C:\hadoop\share\hadoop\common\lib\htrace-core-3.0.4.jar;C:\hadoop\share\hadoop\common\lib\httpclient-4.2.5.jar;C:\hadoop\share\hadoop\common\lib\httpcore-4.2.5.jar;C:\hadoop\share\hadoop\common\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-jaxrs-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jackson-xc-1.9.13.jar;C:\hadoop\share\hadoop\common\lib\jasper-compiler-5.5.23.jar;C:\hadoop\share\hadoop\common\lib\jasper-runtime-5.5.23.jar;C:\hadoop\share\hadoop\common\lib\java-xmlbuilder-0.4.jar;C:\hadoop\share\hadoop\common\lib\jaxb-api-2.2.2.jar;C:\hadoop\share\hadoop\common\lib\jaxb-impl-2.2.3-1.jar;C:\hadoop\share\hadoop\common\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\common\lib\jersey-json-1.9.jar;C:\hadoop\share\hadoop\common\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\common\lib\jets3t-0.9.0.jar;C:\hadoop\share\hadoop\common\lib\jettison-1.1.jar;C:\hadoop\share\hadoop\common\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\common\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\common\lib\jsch-0.1.42.jar;C:\hadoop\share\hadoop\common\lib\jsp-api-2.1.jar;C:\hadoop\share\hadoop\common\lib\jsr305-1.3.9.jar;C:\hadoop\share\hadoop\common\lib\junit-4.11.jar;C:\hadoop\share\hadoop\common\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\common\lib\mockito-all-1.8.5.jar;C:\hadoop\share\hadoop\common\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\common\lib\paranamer-2.3.jar;C:\hadoop\share\hadoop\common\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\common\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\common\lib\slf4j-api-1.7.5.jar;C:\hadoop\share\hadoop\common\lib\slf4j-log4j12-1.7.5.jar;C:\hadoop\share\hadoop\common\lib\snappy-java-1.0.4.1.jar;C:\hadoop
\share\hadoop\common\lib\stax-api-1.0-2.jar;C:\hadoop\share\hadoop\common\lib\xmlenc-0.52.jar;C:\hadoop\share\hadoop\common\lib\xz-1.0.jar;C:\hadoop\share\hadoop\common\lib\zookeeper-3.4.6.jar;C:\hadoop\share\hadoop\common\hadoop-common-2.6.5-tests.jar;C:\hadoop\share\hadoop\common\hadoop-common-2.6.5.jar;C:\hadoop\share\hadoop\common\hadoop-nfs-2.6.5.jar;C:\hadoop\share\hadoop\hdfs;C:\hadoop\share\hadoop\hdfs\lib\asm-3.2.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-daemon-1.0.13.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-el-1.0.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\hdfs\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\hdfs\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\hdfs\lib\htrace-core-3.0.4.jar;C:\hadoop\share\hadoop\hdfs\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\hdfs\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\hdfs\lib\jasper-runtime-5.5.23.jar;C:\hadoop\share\hadoop\hdfs\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\hdfs\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\hdfs\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\hdfs\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\hdfs\lib\jsp-api-2.1.jar;C:\hadoop\share\hadoop\hdfs\lib\jsr305-1.3.9.jar;C:\hadoop\share\hadoop\hdfs\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\hdfs\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\hdfs\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\hdfs\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\hdfs\lib\xercesImpl-2.9.1.jar;C:\hadoop\share\hadoop\hdfs\lib\xml-apis-1.3.04.jar;C:\hadoop\share\hadoop\hdfs\lib\xmlenc-0.52.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.6.5-tests.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.6.5.jar;C:\hadoop\share\hadoop\hdfs\hadoop-hdfs-nfs-2.6.5.jar;C:\hadoop\share\hadoop\yarn\lib\activation-1.1.jar;C:\hadoop\share\hadoop\yarn\lib\aopalliance-1.0.jar;C:\hadoop\share\hadoop\yarn\lib\asm-3.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-cli-1.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-codec-1.4.jar;C:\hadoop\share\hadoop\yarn\lib\commons-collections-3.2.2.jar;C:\hadoop\share\hadoop\yarn\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\yarn\lib\commons-httpclient-3.1.jar;C:\hadoop\share\hadoop\yarn\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\yarn\lib\commons-lang-2.6.jar;C:\hadoop\share\hadoop\yarn\lib\commons-logging-1.1.3.jar;C:\hadoop\share\hadoop\yarn\lib\guava-11.0.2.jar;C:\hadoop\share\hadoop\yarn\lib\guice-3.0.jar;C:\hadoop\share\hadoop\yarn\lib\guice-servlet-3.0.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-jaxrs-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\jackson-xc-1.9.13.jar;C:\hadoop\share\hadoop\yarn\lib\javax.inject-1.jar;C:\hadoop\share\hadoop\yarn\lib\jaxb-api-2.2.2.jar;C:\hadoop\share\hadoop\yarn\lib\jaxb-impl-2.2.3-1.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-client-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-guice-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-json-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\yarn\lib\jettison-1.1.jar;C:\hadoop\share\hadoop\yarn\lib\jetty-6.1.26.jar;C:\hadoop\share\hadoop\yarn\lib\jetty-util-6.1.26.jar;C:\hadoop\share\hadoop\yarn\lib\jline-0.9.94.jar;C:\hadoop\share\hadoop\yarn\lib\jsr3
05-1.3.9.jar;C:\hadoop\share\hadoop\yarn\lib\leveldbjni-all-1.8.jar;C:\hadoop\share\hadoop\yarn\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\yarn\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\yarn\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\yarn\lib\servlet-api-2.5.jar;C:\hadoop\share\hadoop\yarn\lib\stax-api-1.0-2.jar;C:\hadoop\share\hadoop\yarn\lib\xz-1.0.jar;C:\hadoop\share\hadoop\yarn\lib\zookeeper-3.4.6.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-api-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-distributedshell-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-client-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-common-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-registry-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-applicationhistoryservice-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-common-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-nodemanager-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-resourcemanager-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-tests-2.6.5.jar;C:\hadoop\share\hadoop\yarn\hadoop-yarn-server-web-proxy-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\lib\aopalliance-1.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\asm-3.2.jar;C:\hadoop\share\hadoop\mapreduce\lib\avro-1.7.4.jar;C:\hadoop\share\hadoop\mapreduce\lib\commons-compress-1.4.1.jar;C:\hadoop\share\hadoop\mapreduce\lib\commons-io-2.4.jar;C:\hadoop\share\hadoop\mapreduce\lib\guice-3.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\guice-servlet-3.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\hadoop-annotations-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\lib\hamcrest-core-1.3.jar;C:\hadoop\share\hadoop\mapreduce\lib\jackson-core-asl-1.9.13.jar;C:\hadoop\share\hadoop\mapreduce\lib\jackson-mapper-asl-1.9.13.jar;C:\hadoop\share\hadoop\mapreduce\lib\javax.inject-1.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-core-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-guice-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\jersey-server-1.9.jar;C:\hadoop\share\hadoop\mapreduce\lib\junit-4.11.jar;C:\hadoop\share\hadoop\mapreduce\lib\leveldbjni-all-1.8.jar;C:\hadoop\share\hadoop\mapreduce\lib\log4j-1.2.17.jar;C:\hadoop\share\hadoop\mapreduce\lib\netty-3.6.2.Final.jar;C:\hadoop\share\hadoop\mapreduce\lib\paranamer-2.3.jar;C:\hadoop\share\hadoop\mapreduce\lib\protobuf-java-2.5.0.jar;C:\hadoop\share\hadoop\mapreduce\lib\snappy-java-1.0.4.1.jar;C:\hadoop\share\hadoop\mapreduce\lib\xz-1.0.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-app-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-common-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-plugins-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.6.5-tests.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.6.5.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.6.5.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG: java = 1.8.0_181
************************************************************/
18/07/18 20:44:55 INFO namenode.NameNode: createNameNode [-format]
[Fatal Error] core-site.xml:19:6: The processing instruction target matching "[xX][mM][lL]" is not allowed.
18/07/18 20:44:55 FATAL conf.Configuration: error parsing conf core-site.xml
org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2432)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2420)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2491)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2361)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1099)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1071)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1409)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:485)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1375)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
18/07/18 20:44:55 FATAL namenode.NameNode: Failed to start namenode.
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2597)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2361)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1099)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1071)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1409)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:485)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1375)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: org.xml.sax.SAXParseException; systemId: file:/C:/hadoop/etc/hadoop/core-site.xml; lineNumber: 19; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2432)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2420)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2491)
... 11 more
18/07/18 20:44:55 INFO util.ExitUtil: Exiting with status 1
18/07/18 20:44:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at TheBhaskarDas/192.168.44.1
************************************************************/
C:\hadoop\bin>
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>C:\hadoop\temp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50071</value>
</property>
</configuration>
I have fixed it.
The error came from the duplicate <?xml ...?> declaration in the middle of the file: an XML declaration is only allowed once, at the very top. I removed the extra declaration and any stray characters before <?xml, then validated the XML files with https://www.w3schools.com/xml/xml_validator.asp
new core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>\hadoop\temp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50071</value>
</property>
</configuration>
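As a quick sanity check for this class of error (a suggestion, not part of the original fix), you can list every XML declaration in the file from the same Windows command prompt; a well-formed config should show exactly one match, on line 1:
findstr /n /c:"<?xml" C:\hadoop\etc\hadoop\core-site.xml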

Issue configuring Apache KNOX gateway with WEBHDFS

I have installed Apache KNOX using Apache Ambari and followed the link below
for configuring WEBHDFS with KNOX. KNOX is installed on address1.
https://knox.apache.org/books/knox-0-13-0/user-guide.html#WebHDFS
While invoking webhdfs using the curl command
curl -i -k -u guest:guest-password -X GET 'https://address1:8443/gateway/Andromd/webhdfs/v1/?op=LISTSTATUS'
it throws the following error
<title> 404 Not Found</title>
<p> Problem accessing /gateway//Andromd/webhdfs/v1/build-version.Reason:
<pre> Not Found </pre></p>
Andromd is the cluster name, and I added it under {GATEWAY_HOME}/conf/topologies/Andromd.xml.
The configurations are as follows.
<service>
<role>NAMENODE</role>
<url>hdfs://address2:8020</url>
</service>
<service>
<role>WEBHDFS</role>
<url>http://address2:50070/webhdfs</url>
</service
Content of /etc/hadoop/conf/hdfs-site.xml is as follows.
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>address2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>address2:50070</value>
</property>
<property>
<name>dfs.https.namenode.https-address</name>
<value>address2:50470</value>
</property>
Do let me know if there is anything missing from my side regarding the configuration.
Gateway.log content:
2017-10-13 18:24:24,586 ERROR digester3.Digester (Digester.java:parse(1652)) - An error occurred while parsing XML from '(already loaded from stream)', see nested exceptions
org.xml.sax.SAXParseException; lineNumber: 88; columnNumber: 18; The content of elements must consist of well-formed character data or markup.
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1239)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at org.apache.commons.digester3.Digester.parse(Digester.java:1642)
at org.apache.commons.digester3.Digester.parse(Digester.java:1701)
at org.apache.hadoop.gateway.services.topology.impl.DefaultTopologyService.loadTopologyAttempt(DefaultTopologyService.java:124)
at org.apache.hadoop.gateway.services.topology.impl.DefaultTopologyService.loadTopology(DefaultTopologyService.java:100)
at org.apache.hadoop.gateway.services.topology.impl.DefaultTopologyService.loadTopologies(DefaultTopologyService.java:233)
at org.apache.hadoop.gateway.services.topology.impl.DefaultTopologyService.reloadTopologies(DefaultTopologyService.java:318)
at org.apache.hadoop.gateway.GatewayServer.start(GatewayServer.java:312)
at org.apache.hadoop.gateway.GatewayServer.startGateway(GatewayServer.java:231)
at org.apache.hadoop.gateway.GatewayServer.main(GatewayServer.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.gateway.launcher.Invoker.invokeMainMethod(Invoker.java:70)
at org.apache.hadoop.gateway.launcher.Invoker.invoke(Invoker.java:39)
at org.apache.hadoop.gateway.launcher.Command.run(Command.java:101)
at org.apache.hadoop.gateway.launcher.Launcher.run(Launcher.java:69)
at org.apache.hadoop.gateway.launcher.Launcher.main(Launcher.java:46)
2017-10-13 18:24:24,587 ERROR hadoop.gateway (DefaultTopologyService.java:loadTopologies(250)) - Failed to load topology /usr/hdp/2.4.2.0-258/knox/bin/../conf/topologies/Andromeda.xml: org.xml.sax.SAXParseException; lineNumber: 88; columnNumber: 18; The content of elements must consist of well-formed character data or markup.
2017-10-13 18:24:24,588 INFO hadoop.gateway (GatewayServer.java:handleCreateDeployment(450)) - Loading topology admin from /usr/hdp/2.4.2.0-258/knox/bin/../data/deployments/admin.war.15f15c171c0
2017-10-13 18:24:24,793 INFO hadoop.gateway (GatewayServer.java:start(315)) - Monitoring topologies in directory: /usr/hdp/2.4.2.0-258/knox/bin/../conf/topologies
2017-10-13 18:24:24,795 INFO hadoop.gateway (GatewayServer.java:startGateway(232)) - Started gateway on port 8,443.
In addition to the above query: is it necessary to include the below tags in Andromd.xml, replacing sandbox.hortonworks.com with our cluster-specific information?
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>sandbox.hortonworks.com:8020</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>sandbox.hortonworks.com:50070</value>
</property>
<property>
<name>dfs.https.namenode.https-address</name>
<value>sandbox.hortonworks.com:50470</value>
</property>
2017-10-13 18:24:24,587 ERROR hadoop.gateway (DefaultTopologyService.java:loadTopologies(250)) - Failed to load topology /usr/hdp/2.4.2.0-258/knox/bin/../conf/topologies/Andromeda.xml: org.xml.sax.SAXParseException; lineNumber: 88; columnNumber: 18; The content of elements must consist of well-formed character data or markup.
Looking at this line, it looks like your topology file Andromeda.xml is not well-formed (missing closing tags or some such) at line 88.
This is the reason you are getting a 404: the topology is not being deployed. Check the logs after fixing the topology file and make sure there are no startup errors.
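One quick way to pinpoint the malformed spot (a hedged suggestion; it assumes xmllint is available on the Knox host) is to validate the topology file directly, since it prints the offending line and column:
xmllint --noout /usr/hdp/2.4.2.0-258/knox/bin/../conf/topologies/Andromeda.xml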

Hadoop MapReduce Job stuck because auxService:mapreduce_shuffle does not exist

I've checked multiple posts with the same question, and the solution is always to add the following to yarn-site.xml:
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarm.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
I covered both properties in the config and restarted yarn. The problem still remains.
The error is:
17/02/15 15:43:34 INFO mapreduce.Job: Task Id : attempt_1487202110321_0001_m_000000_2, Status : FAILED
Container launch failed for container_1487202110321_0001_01_000007 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I was hoping for a typo but can't seem to find one. I tried copying what's on Stack Overflow directly into the XML file; it still doesn't work.
What else can I try?
EDIT:
Since the error says auxService, I modified yarn-site.xml accordingly, changing all aux-services entries to auxService, but it's still not working.
EDIT2:
In case anyone's interested, I call this command
hadoop jar hadoop-streaming-2.7.1.jar \
-input /user/myfolder/input1/* \
-output /user/myfolder/output1 \
-mapper <path>/<to>/<mapper>/mapper.py \
-reducer <path>/<to>/<reducer>/reducer.py
while I'm already in /usr/local/cellar/hadoop/2.7.1/libexec/share/hadoop/tools/lib/
EDIT 3:
I'm a dumbass. Proofread the script, guys!
Update the property name in yarn-site.xml to yarn.nodemanager.aux-services (your file has it misspelled as yarm.nodemanager.aux-services):
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
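After correcting the property name, the NodeManager has to be restarted so it registers the mapreduce_shuffle aux service. A minimal sketch, assuming the Hadoop sbin scripts are on your PATH:
# restart YARN so yarn-site.xml is re-read
stop-yarn.sh
start-yarn.sh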

Hadoop Failed to set permissions of path: \tmp\

I'm running (or, obviously, trying to run) Hadoop 1.2.1 on my Windows machine inside Cygwin. Unfortunately, there is something terribly wrong with my Hadoop setup. I'm getting the following error when trying to execute a simple Pig script in local mode.
Backend error message during job submission
-------------------------------------------
java.io.IOException: Failed to set permissions of path: \tmp\hadoop-antonbelev\mapred\staging\antonbelev1696923409\.staging to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.pig.backend.hadoop20.PigJobControl.mainLoopAction(PigJobControl.java:157)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:134)
at java.lang.Thread.run(Thread.java:722)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Pig Stack Trace
---------------
ERROR 2244: Job failed, hadoop does not return any error message
org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:148)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:607)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
I assume that there is something wrong with the Hadoop installation or configuration files, but I'm new to Hadoop, so this is just an assumption. Can someone help me resolve this problem? Thank you! :)
PS: Also, why does the path \tmp\hadoop-antonbelev\mapred\staging\antonbelev1696923409\.staging in the error use Windows backslashes? I tried to find this folder, but it doesn't exist.
UPDATE:
Here are my config files:
core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>localhost:9100</value>
</property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9101</value>
</property>
</configuration>
hadoop-env.sh:
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use. Required.
export JAVA_HOME="C:/Program Files/Java/jdk1.7.0_07"
# Extra Java CLASSPATH elements. Optional.
# export HADOOP_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000
# Extra Java runtime options. Empty by default.
# export HADOOP_OPTS=-server
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS
# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop
# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
# the users that are going to run the hadoop daemons. Otherwise there is
# the potential for a symlink attack.
# export HADOOP_PID_DIR=/var/hadoop/pids
# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER
# The scheduling priority for daemon processes. See 'man nice'.
# export HADOOP_NICENESS=10
I'm not sure if any other config files are relevant.
Try changing the file permissions of the folder you are using as the Hadoop tmp folder, something like:
sudo chmod a+w /app/hadoop/tmp -R
Please add this entry to core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
The problem with your configuration is that the tmp folder Hadoop is reading is under root or /tmp.
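As a follow-up, a minimal sketch from the Cygwin shell for creating that temp directory and making it writable (the username antonbelev is taken from the staging path in the error; substitute your own):
# matches hadoop.tmp.dir = /tmp/hadoop-${user.name}
mkdir -p /tmp/hadoop-antonbelev
chmod -R a+w /tmp/hadoop-antonbelev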
