Hadoop cannot start Yarn

I am new to Hadoop and I am trying to start the YARN daemons using start-yarn.sh.
Below are my config files:
core-site.xml:
<?xml version="1.0"?>
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml:
<?xml version="1.0"?>
<!-- yarn-site.xml -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
I could start dfs and the history server properly with:
start-dfs.sh --config $HADOOP_CONF_DIR (my config files)
mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
Both http://localhost:50070/ and http://localhost:19888 give me the correct pages. When I run start-yarn.sh --config $HADOOP_CONF_DIR, here is the output in the console:
start-yarn.sh --config $HADOOP_CONF_DIR
starting yarn daemons
starting resourcemanager, logging to /usr/lib/hadoop-2.5.2/logs/yarn-yyang-resourcemanager-yyang-ubuntu.out
2017-03-26 17:37:31,051 INFO [main] resourcemanager.ResourceManager (StringUtils.java:startupShutdownMessage(619)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting ResourceManager
STARTUP_MSG: host = yyang-ubuntu/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.5.2
STARTUP_MSG: classpath = /usr/lib/hadoop-2.5.2/conf_local/hadoop:... (very long classpath truncated; it lists the jars under /usr/lib/hadoop-2.5.2/share/hadoop/{common,hdfs,yarn,mapreduce} and ends with /usr/lib/hadoop-2.5.2/conf_local/hadoop/rm-config/log4j.properties)
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on 2014-11-14T23:45Z
STARTUP_MSG: java = 1.8.0_121
************************************************************/
The output seems OK to me (maybe I did not see the error). The ResourceManager's web UI does not give me the correct page (the site cannot be reached), but jps gives me:
6081 Jps
5554 JobHistoryServer
4443 SecondaryNameNode
4237 NameNode
which does not include the ResourceManager.
I use the configuration from the book Hadoop: The Definitive Guide, 4th Edition.
Please help me fix the problem.

Refer to this answer for the installation issue:
https://stackoverflow.com/questions/22240488/couldnt-start-hadoop-datanode-normally/45671270#45671270
Meanwhile, put only this under your yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
and mapred-site.xml should be:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
and restart Hadoop:
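A minimal restart sequence, assuming the stock Hadoop 2.x scripts are on the PATH and $HADOOP_CONF_DIR points at the edited files:
stop-yarn.sh
stop-dfs.sh
start-dfs.sh --config $HADOOP_CONF_DIR
start-yarn.sh --config $HADOOP_CONF_DIR
If the ResourceManager still dies silently, check the .log file that sits next to the .out named in the startup message (here /usr/lib/hadoop-2.5.2/logs/yarn-yyang-resourcemanager-yyang-ubuntu.log); the .out file often shows only the startup banner while the actual exception goes to the .log.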
Related

Hadoop 3.2.1 Multinode Cluster Nodemanager is not running

I have Hadoop 3.2.1 installed on Ubuntu 16.04 LTS and my cluster has 18 datanodes and 1 master.
After running:
$ start-dfs.sh
$ start-yarn.sh
$ jps
On the master I get the following:
ResourceManager
NameNode
SecondaryNameNode
Jps
And on the datanodes:
DataNode
Jps
All the nodes seem to be live:
(screenshot: NameNode overview web page)
But when I open the cluster overview, none of my datanodes appears to be active:
(screenshot: cluster overview)
My configuration files:
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-3.2.1/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoop-3.2.1/data/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoop-3.2.1/data/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
The namenode and datanode directories exist on every host (master and datanodes).
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services </name>
    <value> mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
</configuration>
I have also configured hadoop-env.sh with the JAVA_HOME path, and all the other variables are in the .bashrc file (also on every host).
I have modified the /etc/hosts file to include all the hosts with their IPs and hostnames, and I have also modified the workers file to include all the IPs of the datanodes.
The first time I formatted the NameNode, the directories in hdfs-site.xml were wrong (I had the datanode dir twice), so HDFS made its own directories under /tmp/hdfs/ (if I remember correctly). But I fixed this by formatting the NameNode again with the correct directories.
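A quick way to check whether the NodeManagers ever registered with the ResourceManager (run on the master):
$ yarn node -list -all
If nothing is listed, the NodeManager log under the Hadoop logs directory on a datanode usually names the cause. One plausible suspect here (an observation, not a verified fix): the stray whitespace inside <name>yarn.nodemanager.aux-services </name> and <value> mapreduce_shuffle</value>, since YARN validates aux-service names strictly and a leading space in the service name can make the NodeManager abort on startup.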

Running MapReduce issues

I'm trying to run a wordcount jar on a Hadoop 2.7.1 cluster (one master and 4 slaves), but the MapReduce job gets stuck at:
$ hadoop jar wc.jar WordCount /input /output_hocine
17/03/13 09:41:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/13 09:41:43 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/03/13 09:41:43 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/03/13 09:41:44 INFO input.FileInputFormat: Total input paths to process : 3
17/03/13 09:41:44 INFO mapreduce.JobSubmitter: number of splits:3
17/03/13 09:41:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489393376058_0003
17/03/13 09:41:44 INFO impl.YarnClientImpl: Submitted application application_1489393376058_0003
17/03/13 09:41:44 INFO mapreduce.Job: The url to track the job: http://ibnbadis21:8088/proxy/application_1489393376058_0003/
17/03/13 09:41:44 INFO mapreduce.Job: Running job: job_1489393376058_0003
The output shown in the browser is in this image:
(screenshot)
Here is the content of the configuration files:
core-site.xml:
<configuration>
  <!-- <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ibnbadis21:9000</value>
  </property> -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ibnbadis21:9000</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
yarn-site.xml:
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- (Apache license header omitted) -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>ibnbadis21:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>ibnbadis21:19888</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user/app</value>
  </property>
</configuration>
hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/usr/local/hadoop_data/hdfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_data/hdfs/datanode</value>
  </property>
</configuration>
Can anyone tell me how I can solve this problem, please?
Connecting to ResourceManager at /0.0.0.0:8032
0.0.0.0 (the default) is not a valid hostname.
So, add this in yarn-site.xml:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>YOUR VALUE HERE</value> <!-- Needs Fully Qualified Domain Name -->
</property>
There are many values that you probably didn't set; refer to Hadoop | Configuring the Hadoop Daemons.
By the way, fs.defaultFS is the correct property to use (fs.default.name is deprecated).
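For this cluster, assuming ibnbadis21 is the master's resolvable hostname (it appears in the job's tracking URL above), the entry might look like:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>ibnbadis21</value>
</property>
After restarting YARN, the client log should then show Connecting to ResourceManager at ibnbadis21/<ip>:8032 instead of /0.0.0.0:8032.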
Finally, the problem was about access rights: the framework didn't have the rights to access my yarn-site.xml file, which is why it fell back to the default value 0.0.0.0/8030. So when I executed the command with privileges (sudo):
sudo hadoop jar wc.jar WordCount /input /output
my MapReduce job executed successfully!
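Rather than running jobs with sudo, it is usually cleaner to make the configuration readable by the submitting user, along these lines (a sketch; adjust the user, group, and path to your layout):
sudo chown -R hadoop:hadoop $HADOOP_CONF_DIR
sudo chmod 644 $HADOOP_CONF_DIR/*.xml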

Hadoop: namenode starting and then shutting down suddenly, not showing in jps

I am trying to create a cluster for using Hadoop. I am trying to start my namenode but it is not starting. After restarting the system it starts for a moment and then goes down again. I am running the command as the root user and have given the namenode root user rights. I am facing the same problem with the jobtracker and datanode.
To start the namenode I am using the command hadoop-daemon.sh start namenode.
What is the problem here?
[hadoop@localhost ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-localhost.localdomain.out
Warning: $HADOOP_HOME is deprecated.
[hadoop@localhost ~]$ jps
6500 Jps
[hadoop@localhost ~]$ jps
The core-site.xml file contains
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://lab1:8020</value>
  </property>
</configuration>
The hdfs-site.xml contains
<configuration>
  <property>
    <name>dfs.replication.dir</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>
</configuration>
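When the NameNode exits right after starting, the .log file next to the .out named in the startup line usually states the reason; an unformatted or unwritable dfs.name.dir is a common one. A sketch of the usual first checks (paths taken from the output above):
tail -n 50 /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-localhost.localdomain.log
hadoop namenode -format    # only safe on a fresh cluster; this erases HDFS metadata
Also, dfs.replication.dir in the hdfs-site.xml above is presumably meant to be dfs.replication.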

Not able to see Job History (http://localhost:19888) page in web browser in Hadoop

I am using Hadoop version 2.4.1 on Ubuntu 14.04 32-bit.
When I run a sample job using the hadoop jar user_jar.jar command, I am not able to see the output at http://localhost:19888 (page not found).
What could be the possible reason?
Thank you in advance.
jps output:
3931 Jps
3719 NodeManager
3420 SecondaryNameNode
3593 ResourceManager
3246 DataNode
3126 NameNode
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
Run mr-jobhistory-daemon:
$ $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONFIG_DIR start historyserver
Now
$ jps
2135 DataNode
2339 SecondaryNameNode
2627 NodeManager
3176 JobHistoryServer
1971 NameNode
3213 Jps
2485 ResourceManager
and
$ netstat -ntlp | grep 19888
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:19888 0.0.0.0:* LISTEN 3176/java
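If the page is still unreachable once the JobHistoryServer is up, the web UI address can also be pinned explicitly in mapred-site.xml (a sketch; 19888 is the default port):
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>localhost:19888</value>
</property>
Note from the netstat output that the server is bound to 127.0.0.1, so the page will only load from the same machine.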

CDH3u6 Single Node cluster DataNode start throws error

I get the following error -
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = atuls-macbook-air.local/192.168.0.22
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2-cdh3u6
STARTUP_MSG: build = git://ubuntu-slave01/var/lib/jenkins/workspace/CDH3u6-Full-RC/build/cdh3/hadoop20/0.20.2-cdh3u6/source -r efb405d2aa54039bdf39e0733cd0bb9423a1eb0a; compiled by 'jenkins' on Wed Mar 20 11:45:36 PDT 2013
************************************************************/
2014-10-31 09:06:49,252 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.ExceptionInInitializerError
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:231)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:309)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:635)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:544)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1757)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1750)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1618)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:255)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:226)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1680)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1635)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1653)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1779)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1796)
Caused by: java.lang.NumberFormatException: For input string: "558:feed::1"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:492)
at java.lang.Integer.parseInt(Integer.java:527)
at com.sun.jndi.dns.DnsClient.<init>(DnsClient.java:125)
at com.sun.jndi.dns.Resolver.<init>(Resolver.java:61)
at com.sun.jndi.dns.DnsContext.getResolver(DnsContext.java:570)
at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:430)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:231)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:139)
at com.sun.jndi.toolkit.url.GenericURLDirContext.getAttributes(GenericURLDirContext.java:103)
at sun.security.krb5.KrbServiceLocator.getKerberosService(KrbServiceLocator.java:87)
at sun.security.krb5.Config.checkRealm(Config.java:1295)
at sun.security.krb5.Config.getRealmFromDNS(Config.java:1268)
at sun.security.krb5.Config.getDefaultRealm(Config.java:1162)
at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:81)
... 14 more
2014-10-31 09:06:49,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at atuls-macbook-air.local/192.168.0.22
************************************************************/
Do I need a special user, or is something wrong in my settings? Is there a setting that I am missing? I have already changed the directory permissions.
Here is my hdfs-site.xml file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in.
       On a multi-node cluster, these configurations must be removed. -->
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
    <name>dfs.safemode.min.datanodes</name>
    <value>1</value>
  </property>
  <property>
    <!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
    <name>dfs.name.dir</name>
    <value>/Users/atul/hadoop_dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/Users/atul/hadoop_dfs/data</value>
  </property>
  <!-- Enable Hue Plugins -->
  <property>
    <name>dfs.namenode.plugins</name>
    <value>org.apache.hadoop.thriftfs.NamenodePlugin</value>
    <description>Comma-separated list of namenode plug-ins to be activated.</description>
  </property>
  <property>
    <name>dfs.datanode.plugins</name>
    <value>org.apache.hadoop.thriftfs.DatanodePlugin</value>
    <description>Comma-separated list of datanode plug-ins to be activated.</description>
  </property>
  <property>
    <name>dfs.thrift.address</name>
    <value>0.0.0.0:10090</value>
  </property>
</configuration>
I would appreciate any help in this regard. I need to run CDH3u6, hence this version.
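Judging from the stack trace, the NumberFormatException on "558:feed::1" is thrown while Java's JNDI DNS client parses the system's nameserver list during a Kerberos realm lookup, and it chokes on an IPv6 address. A commonly suggested workaround (an assumption for this setup, not a verified fix) is to force the JVM onto IPv4 via hadoop-env.sh:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
Alternatively, remove the IPv6 nameserver entry from the machine's DNS configuration.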
