Running Mapreduce issues - hadoop

I'm trying to run a wordcount jar on a Hadoop 2.7.1 cluster (one master and four slaves), but the MapReduce job gets stuck at:
$ hadoop jar wc.jar WordCount /input /output_hocine
17/03/13 09:41:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/13 09:41:43 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/03/13 09:41:43 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/03/13 09:41:44 INFO input.FileInputFormat: Total input paths to process : 3
17/03/13 09:41:44 INFO mapreduce.JobSubmitter: number of splits:3
17/03/13 09:41:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489393376058_0003
17/03/13 09:41:44 INFO impl.YarnClientImpl: Submitted application application_1489393376058_0003
17/03/13 09:41:44 INFO mapreduce.Job: The url to track the job: http://ibnbadis21:8088/proxy/application_1489393376058_0003/
17/03/13 09:41:44 INFO mapreduce.Job: Running job: job_1489393376058_0003
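While the job is stuck here, its state can also be queried from the shell (a quick check, using the application id from the log above):
$ yarn application -status application_1489393376058_0003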
The output, as seen in the browser, is shown in this image:
Here is the content of the configuration files:
core-site.xml:
<configuration>
<!-- <property>
<name>fs.defaultFS</name>
<value>hdfs://ibnbadis21:9000</value>
</property>-->
<property>
<name>fs.default.name</name>
<value>hdfs://ibnbadis21:9000</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
yarn-site.xml:
<?xml version="1.0"?> <configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file. --> <!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>ibnbadis21:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>ibnbadis21:19888</value>
</property>
<property>
<name>yarn.app.mapreduce.am.staging-dir</name>
<value>/user/app</value>
</property>
</configuration>
hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:/usr/local/hadoop_data/hdfs/namesecondary</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_data/hdfs/datanode</value>
</property>
</configuration>
Can anyone tell me how I can solve this problem, please?

Connecting to ResourceManager at /0.0.0.0:8032
0.0.0.0 (the default) is not a valid hostname.
So, add this to yarn-site.xml:
<property>
<name>yarn.resourcemanager.hostname</name>
<value> YOUR VALUE HERE </value> <!-- Needs Fully Qualified Domain Name -->
</property>
There are many values that you probably didn't set.
Refer to Hadoop | Configuring the Hadoop Daemons.
By the way, fs.defaultFS (not the deprecated fs.default.name) is the correct property to use.
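To double-check which value the client actually resolves, hdfs getconf can print the effective setting (a quick sanity check; given the core-site.xml above it should print hdfs://ibnbadis21:9000):
$ hdfs getconf -confKey fs.defaultFS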

Finally, the problem was about access rights. The framework didn't have the right to access my yarn-site.xml file, which is why it used the default address (0.0.0.0). So when I executed the command with privilege (sudo):
sudo hadoop jar wc.jar WordCount /input /output
my MapReduce job executed successfully!
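A less drastic fix than submitting jobs with sudo is to make the configuration files readable by the submitting user; a sketch, assuming a /usr/local/hadoop install and a user named hadoop (adjust paths and user to your layout):
$ ls -l /usr/local/hadoop/etc/hadoop/yarn-site.xml
$ sudo chown hadoop:hadoop /usr/local/hadoop/etc/hadoop/yarn-site.xml
$ sudo chmod 644 /usr/local/hadoop/etc/hadoop/yarn-site.xml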

Related

Cannot set priority of namenode process xxxxx

I'm trying to install Hadoop on my Mac.
What I did was:
brew install hadoop
in hadoop-env.sh: set JAVA_HOME and HADOOP_OPTS
Then I tried start-dfs.sh, but the following error came up:
AL01299205:hadoop user$ /usr/local/Cellar/hadoop/3.2.1/sbin/start-dfs.sh
Starting namenodes on [AL01299205.local]
AL01299205.local: ERROR: Cannot set priority of namenode process 24897
Starting datanodes
Starting secondary namenodes [AL01299205.local]
AL01299205.local: ERROR: Cannot set priority of secondarynamenode process 25147
2020-02-19 18:06:08,843 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
How can I fix this error?
I additionally edited some files as follows:
hadoop-env.sh
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9010</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Then the errors were gone.
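For anyone hitting the same "Cannot set priority" message: the console error rarely shows the root cause, so it is usually worth reading the daemon log itself; for a Homebrew install the path would look roughly like this (log location assumed):
$ tail -n 50 /usr/local/Cellar/hadoop/3.2.1/libexec/logs/hadoop-*-namenode-*.log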

Unable to run mapreduce wordcount

I am trying to teach myself some Hadoop basics, and so I have built a simple Hadoop cluster. This works, and I can put, ls, and cat from the HDFS filesystem without any issues.
So I took the next step and tried to do a wordcount on a file I had put into Hadoop, but I get the following error:
$ hadoop jar /home/hadoop/share/hadoop/mapreduce/*examples*.jar wordcount data/sectors.txt results
2018-06-06 07:57:36,936 INFO client.RMProxy: Connecting to ResourceManager at ansdb1/10.49.17.12:8040
2018-06-06 07:57:37,404 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1528191458385_0014
2018-06-06 07:57:37,734 INFO input.FileInputFormat: Total input files to process : 1
2018-06-06 07:57:37,869 INFO mapreduce.JobSubmitter: number of splits:1
2018-06-06 07:57:37,923 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-06-06 07:57:38,046 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528191458385_0014
2018-06-06 07:57:38,048 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-06-06 07:57:38,284 INFO conf.Configuration: resource-types.xml not found
2018-06-06 07:57:38,284 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-06-06 07:57:38,382 INFO impl.YarnClientImpl: Submitted application application_1528191458385_0014
2018-06-06 07:57:38,445 INFO mapreduce.Job: The url to track the job: http://ansdb1:8088/proxy/application_1528191458385_0014/
2018-06-06 07:57:38,446 INFO mapreduce.Job: Running job: job_1528191458385_0014
2018-06-06 07:57:45,499 INFO mapreduce.Job: Job job_1528191458385_0014 running in uber mode : false
2018-06-06 07:57:45,501 INFO mapreduce.Job: map 0% reduce 0%
2018-06-06 07:57:45,521 INFO mapreduce.Job: Job job_1528191458385_0014 failed with state FAILED due to: Application application_1528191458385_0014 failed 2 times due to AM Container for appattempt_1528191458385_0014_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2018-06-06 07:57:43.301]Exception from container-launch.
Container id: container_1528191458385_0014_02_000001
Exit code: 1
[2018-06-06 07:57:43.304]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
[2018-06-06 07:57:43.304]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
For more detailed output, check the application tracking page: http://ansdb1:8088/cluster/app/application_1528191458385_0014 Then click on links to logs of each attempt.
. Failing the application.
2018-06-06 07:57:45,558 INFO mapreduce.Job: Counters: 0
I have searched lots of websites, and they seem to say that my environment isn't right. I have tried many of the suggested fixes, but nothing has worked.
Everything is running on both nodes:
$ jps
31858 ResourceManager
31544 SecondaryNameNode
6152 Jps
31275 DataNode
31132 NameNode
$ ssh ansdb2 jps
16615 NodeManager
21290 Jps
16478 DataNode
I can ls hadoop:
$ hadoop fs -ls /
Found 3 items
drwxrwxrwt - hadoop supergroup 0 2018-06-06 07:58 /tmp
drwxr-xr-x - hadoop supergroup 0 2018-06-05 11:46 /user
drwxr-xr-x - hadoop supergroup 0 2018-06-05 07:50 /usr
hadoop version:
$ hadoop version
Hadoop 3.1.0
Source code repository https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with protoc 2.5.0
From source with checksum 14182d20c972b3e2105580a1ad6990
This command was run using /home/hadoop/share/hadoop/common/hadoop-common-3.1.0.jar
hadoop classpath:
$ hadoop classpath
/home/hadoop/etc/hadoop:/home/hadoop/share/hadoop/common/lib/*:/home/hadoop/share/hadoop/common/*:/home/hadoop/share/hadoop/hdfs:/home/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/share/hadoop/hdfs/*:/home/hadoop/share/hadoop/mapreduce/*:/home/hadoop/share/hadoop/yarn:/home/hadoop/share/hadoop/yarn/lib/*:/home/hadoop/share/hadoop/yarn/*
My environment is set up:
# hadoop
## JAVA env variables
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.el7.x86_64
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
## HADOOP env variables
export HADOOP_HOME=/home/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
PATH=$PATH:$JAVA_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
My hadoop xml files
core-site.xml:
$ cat $HADOOP_HOME/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ansdb1:9000/</value>
</property>
</configuration>
hdfs-site.xml:
$ cat $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.data.dir</name>
<value>/data/hadoop/datanode</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/data/hadoop/namenode</value>
</property>
<property>
<name>dfs.checkpoint.dir</name>
<value>/data/hadoop/secondarynamenode</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
yarn-site.xml:
$ cat $HADOOP_HOME/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ansdb1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>ansdb1:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>ansdb1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>ansdb1:8040</value>
</property>
</configuration>
mapred-site.xml:
$ cat $HADOOP_HOME/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
I have checked which jar file contains MRAppMaster:
$ find /home/hadoop -name '*.jar' -exec grep -Hls MRAppMaster {} \;
/home/hadoop/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-app-3.1.0-sources.jar
/home/hadoop/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-app-3.1.0-test-sources.jar
/home/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.0.jar
Clearly I am missing something, so could somebody please point me in the right direction?
After much googling of the same question asked in different ways, I found this: https://mathsigit.github.io/blog_page/2017/11/16/hole-of-submitting-mr-of-hadoop300RC0/ (it's in Chinese).
So I set the following properties in mapred-site.xml:
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
And everything works.
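If the $HADOOP_HOME reference does not expand on your cluster, the error message's suggestion of a full path works too; with the layout shown above (HADOOP_HOME=/home/hadoop) that would be:
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoop</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoop</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/home/hadoop</value>
</property>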

Unable to start name node while configuring Hadoop for Lustre

I'm trying to integrate Hadoop with Intel Lustre. I have added hadoop-lustre-plugin-3.1.0 to the hadoop-2.7.3/lib/native folder. Lustre is mounted at /mnt/lustre. I'm getting the following error when I start Hadoop using start-all.sh:
[root@master hadoop]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/04/06 17:36:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on [ ]
...
core-site.xml :
<property>
<name>fs.defaultFS</name>
<value>lustre:///</value>
</property>
<property>
<name>fs.lustre.impl</name>
<value>org.apache.hadoop.fs.LustreFileSystem</value>
</property>
<property>
<name>fs.AbstractFileSystem.lustre.impl</name>
<value>org.apache.hadoop.fs.LustreFileSystemlustre</value>
</property>
<property>
<name>fs.lustrefs.mount</name>
<value>/mnt/lustre/hadoop</value>
<description>This is the directory on Lustre that acts as the root level for Hadoop services</description>
</property>
<property>
<name>lustre.stripe.count</name>
<value>1</value>
</property>
<property>
<name>lustre.stripe.size</name>
<value>4194304</value>
</property>
<property>
<name>fs.block.size</name>
<value>1073741824</value>
</property>
mapred-site.xml
<property>
<name>mapreduce.job.map.output.collector.class</name>
<value>org.apache.hadoop.mapred.SharedFsPlugins$MapOutputBuffer</value>
</property>
<property>
<name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
<value>org.apache.hadoop.mapred.SharedFsPlugins$Shuffle</value>
</property>
hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/mnt/lustre/hadoop/hadoop_tmp/namenode</value>
<description>true</description>
</property>
Is there any configuration that I have missed in the configuration files?
Since fs.defaultFS holds the Lustre-specific URI, the startup script is unable to determine the host on which the NameNode has to be started.
Add this property in hdfs-site.xml:
<property>
<name>dfs.namenode.rpc-address</name>
<value>namenode_host:port</value>
</property>
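For example, with the host naming used in this question (the shell prompt shows master, an assumption about your actual hostname) and the conventional NameNode RPC port, that property might look like:
<property>
<name>dfs.namenode.rpc-address</name>
<value>master:8020</value>
</property>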

Hadoop cannot start Yarn

I am new to Hadoop, and I am trying to start the YARN daemons using start-yarn.sh.
Below are my config files:
core-site.xml:
<?xml version="1.0"?>
<!-- core-site.xml -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml:
<?xml version="1.0"?>
<!-- yarn-site.xml -->
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>localhost</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
I could start DFS and the history server properly with:
start-dfs.sh --config $HADOOP_CONF_DIR (my config files)
mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
Both http://localhost:50070/ and http://localhost:19888 give me the correct pages. Then I run start-yarn.sh --config $HADOOP_CONF_DIR; here is the output in the console:
start-yarn.sh --config $HADOOP_CONF_DIR
starting yarn daemons
starting resourcemanager, logging to /usr/lib/hadoop-2.5.2/logs/yarn-yyang-resourcemanager-yyang-ubuntu.out
2017-03-26 17:37:31,051 INFO [main] resourcemanager.ResourceManager (StringUtils.java:startupShutdownMessage(619)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting ResourceManager
STARTUP_MSG: host = yyang-ubuntu/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.5.2
STARTUP_MSG: classpath = /usr/lib/hadoop-2.5.2/conf_local/hadoop:... (long classpath listing trimmed)
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on 2014-11-14T23:45Z
STARTUP_MSG: java = 1.8.0_121
************************************************************/
The output seems ok to me (maybe I did not see the error). The resource manager's web UI does not give me the correct page (the site cannot be reached). But jps gives me:
6081 Jps
5554 JobHistoryServer
4443 SecondaryNameNode
4237 NameNode
which does not include the resource manager.
I use the configuration from the book Hadoop: The Definitive Guide, 4th Edition.
Please help me fix the problem.
Refer to this for the installation issue:
https://stackoverflow.com/questions/22240488/couldnt-start-hadoop-datanode-normally/45671270#45671270
Meanwhile, put only this in your yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
and mapred-site.xml should be:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
and restart Hadoop:
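One way to do the restart, using the standard sbin scripts (assuming they are on your PATH):
$ stop-yarn.sh && stop-dfs.sh
$ start-dfs.sh && start-yarn.sh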

Hadoop not communicating with resourcemanager

Hi, currently I'm running Hadoop 2.4.1. I have created a simple Java program, DefaultMapperClass.java, using Eclipse and packaged it into ex1.jar.
When I try to invoke this program via the Hadoop shell using the command
hadoop jar /home/Maddy/ex1.jar DefaultMapperClass hdfs://localhost/users/root/input/Hadoop.txt hdfs://localhost/users/root/output
I get the below output in the Hadoop shell:
[root@localhost Maddy]# hadoop jar /home/Maddy/ex1.jar DefaultMapperClass hdfs://localhost/users/root/input/Hadoop.txt hdfs://localhost/users/root/output
14/09/05 19:26:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Job started: Fri Sep 05 19:26:35 CDT 2014
14/09/05 19:26:35 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
[root@localhost Maddy]#
It seems like the Hadoop shell is trying to connect to the resource manager but is unsuccessful, yet there is no error message.
mapred-site.xml file:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>localhost:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>localhost:8031</value>
</property>
</configuration>
What is missing here? Why is execution terminated after attempting to connect to the resource manager?
I would suggest removing the following configurations from yarn-site.xml, as they are unnecessary:
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>localhost:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>localhost:8031</value>
</property>
You can access the resource manager at localhost:8088.
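If the client still exits silently right after the "Connecting to ResourceManager" line, it is worth confirming the ResourceManager is actually up and that NodeManagers have registered before resubmitting; two quick checks:
$ jps | grep ResourceManager
$ yarn node -list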
