My datanode is not starting in Hadoop 2.7.3 multi-node cluster

My datanode is not starting in Hadoop 2.7.3 on a multi-node cluster (1 master, 2 slaves).
Here are my configuration files:
core-site.xml (in master and slaves)
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://Hadoop:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
mapred-site.xml (in master and slaves)
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>Hadoop:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
hdfs-site.xml (in master)
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/var/lib/hadoop/hdfs/namenode</value>
</property>
</configuration>
hdfs-site.xml (in slaves)
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/var/lib/hadoop/hdfs/datanode</value>
</property>
</configuration>
yarn-site.xml (in master and slaves)
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>Hadoop:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>Hadoop:8035</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>Hadoop:8050</value>
</property>
jps in master node:
13856 SecondaryNameNode
14083 Jps
13620 NameNode
14010 ResourceManager
jps in slaves
6162 Jps
6044 NodeManager
log file in slave 1:
root@ubuntu:/usr/local/lib/hadoop-2.7.3/logs# gedit hadoop-root-datanode-ubuntu.log
2016-12-24 05:28:42,854 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = ubuntu/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /usr/local/lib/hadoop-2.7.3/etc/hadoop:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/commo
n/lib/commons-beanutils-1.7.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/lib/hadoop-2.7.3/sh
are/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshe
ll-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/usr/local/lib/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_111
************************************************************/
2016-12-24 05:28:42,881 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-12-24 05:28:44,573 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-12-24 05:28:44,737 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-12-24 05:28:44,737 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-12-24 05:28:44,743 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2016-12-24 05:28:44,745 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is ubuntu
2016-12-24 05:28:44,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2016-12-24 05:28:44,826 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2016-12-24 05:28:44,828 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2016-12-24 05:28:44,828 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2016-12-24 05:28:45,010 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-12-24 05:28:45,044 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-12-24 05:28:45,060 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2016-12-24 05:28:45,081 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-12-24 05:28:45,085 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2016-12-24 05:28:45,092 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-12-24 05:28:45,092 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-12-24 05:28:45,144 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 33633
2016-12-24 05:28:45,144 INFO org.mortbay.log: jetty-6.1.26
2016-12-24 05:28:45,533 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33633
2016-12-24 05:28:45,780 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
2016-12-24 05:28:46,441 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = root
2016-12-24 05:28:46,447 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2016-12-24 05:28:46,638 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-12-24 05:28:46,729 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-12-24 05:28:46,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-12-24 05:28:46,805 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-12-24 05:28:46,827 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2016-12-24 05:28:46,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to Hadoop/192.168.88.137:54310 starting to offer service
2016-12-24 05:28:46,868 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-12-24 05:28:46,870 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-12-24 05:28:47,768 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2016-12-24 05:28:47,780 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop/hdfs/datanode/in_use.lock acquired by nodename 6952@ubuntu
2016-12-24 05:28:47,788 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/var/lib/hadoop/hdfs/datanode/
java.io.IOException: Incompatible clusterIDs in /var/lib/hadoop/hdfs/datanode: namenode clusterID = CID-558e02e9-5f72-47a7-a165-b931abbab42c; datanode clusterID = CID-9ce648f5-4684-4895-8cda-260b845a29e8
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
2016-12-24 05:28:47,804 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to Hadoop/192.168.88.137:54310. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
2016-12-24 05:28:47,804 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to Hadoop/192.168.88.137:54310
2016-12-24 05:28:47,810 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2016-12-24 05:28:49,811 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2016-12-24 05:28:49,812 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2016-12-24 05:28:49,814 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ubuntu/127.0.1.1
************************************************************/
Now it works well. I only left these lines in my hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
Is this going to pose problems?
On the graphical interface of my cluster I see only a single datanode; you can see the datanode information in the screenshot (image omitted).
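Since dfs.datanode.data.dir and dfs.namenode.name.dir are no longer set explicitly, they fall back to the defaults under hadoop.tmp.dir (so under /app/hadoop/tmp here). A quick way to check which directories are actually in use, assuming the hdfs command is on the PATH, is something like:
hdfs getconf -confKey dfs.datanode.data.dir
hdfs getconf -confKey dfs.namenode.name.dir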
Thank you.

The property fs.default.name is deprecated. Try the following instead:
<property>
<name>fs.defaultFS</name>
<value>hdfs://NAME_NODE_HOST:8020</value>
</property>
After looking at the logs: the datanode's clusterID no longer matches the namenode's (see the "Incompatible clusterIDs" line), which typically happens after the namenode is re-formatted while the datanodes keep their old storage.
Please try iceberg's solution at:
Datanode not starts correctly
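A minimal sketch of that cleanup, assuming the datanode directory from your hdfs-site.xml (/var/lib/hadoop/hdfs/datanode) and that the datanodes hold no data you need to keep:
# on each slave: stop the datanode and wipe its storage so it re-registers with the namenode's current clusterID
$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
rm -rf /var/lib/hadoop/hdfs/datanode/*
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
# alternatively, to keep existing blocks, edit /var/lib/hadoop/hdfs/datanode/current/VERSION on each slave
# so its clusterID matches the one in the namenode's current/VERSION on the master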

Related

Run HDFS pseudo mode in a docker container

I'm trying to run HDFS in pseudo-distributed mode in a Docker container, configured per this page: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation. I didn't use the start-all.sh script, as the container isn't supposed to be able to do ssh, so I manually ran bin/hdfs --daemon start namenode|datanode to start them one by one. The problem is that the namenode starts successfully, but the datanode quits without any error message. The last piece of the datanode log is:
...
2018-04-09 21:04:03,830 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/apps/hadoop/hdfs/data
2018-04-09 21:04:04,188 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-04-09 21:04:04,296 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2018-04-09 21:04:04,296 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2018-04-09 21:04:04,665 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2018-04-09 21:04:04,667 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2018-04-09 21:04:04,671 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is hdfs
2018-04-09 21:04:04,671 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2018-04-09 21:04:04,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2018-04-09 21:04:04,733 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:9866
2018-04-09 21:04:04,735 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 10485760 bytes/s
2018-04-09 21:04:04,735 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 50
core-site.xml file:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost</value>
</property>
</configuration>
And hdfs-site.xml is
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/apps/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/apps/hadoop/hdfs/data</value>
</property>
</configuration>
Did I miss anything there?
I think it is a base image issue. I was using Alpine; once I changed to CentOS, the datanode works! Something must be missing from Alpine; I'd appreciate it if anyone knows what it is, as the CentOS-based image will eventually be much bigger than Alpine.
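In case it helps with narrowing this down on Alpine, a small sketch (assuming the Hadoop bin directory is on the container's PATH): running the datanode in the foreground instead of via --daemon prints the exit reason to the console instead of hiding it.
# run the datanode in the foreground so the startup error is printed to the terminal rather than a log file
hdfs datanode
# just a guess at what Alpine might be missing compared to CentOS: check for bash and the ps utility
which bash
which ps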

DataNode is not starting on hadoop multinode cluster

I removed the contents of the Hadoop tmp directory, dropped the current folder from the namenode directory, and formatted the namenode, but got an exception: org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException java.net.BindException: Port in use: localhost:0.
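In terms of commands, that cleanup looks roughly like the following sketch (the namenode/datanode paths are taken from the hdfs-site.xml below; the hadoop tmp directory location is not shown here):
# stop HDFS, clear the old metadata and block storage, then re-format and restart
stop-dfs.sh
rm -rf /home/hduser/hdfs/namenode/current
rm -rf /home/hduser/hdfs/datanode/*        # on each datanode
hdfs namenode -format
start-dfs.sh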
My configuration is as follows:
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000/</value>
<description>NameNode URI</description>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hduser/hdfs/namenode</value>
<description>NameNode directory for namespace and transaction logs storage.</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hduser/hdfs/datanode</value>
<description>DataNode directory</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
</configuration>
data node log
2017-01-20 16:27:21,927 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-20 16:27:23,346 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-01-20 16:27:23,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-01-20 16:27:23,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-01-20 16:27:23,448 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2017-01-20 16:27:23,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2017-01-20 16:27:23,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2017-01-20 16:27:23,491 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2017-01-20 16:27:23,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2017-01-20 16:27:23,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2017-01-20 16:27:23,650 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-01-20 16:27:23,663 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-01-20 16:27:23,673 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2017-01-20 16:27:23,677 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-01-20 16:27:23,689 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-01-20 16:27:23,690 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-01-20 16:27:23,690 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-01-20 16:27:23,716 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: localhost:0
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:104)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:760)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1112)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
... 10 more
2017-01-20 16:27:23,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Shutdown complete.
2017-01-20 16:27:23,728 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.net.BindException: Port in use: localhost:0
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:104)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:760)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1112)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
... 10 more
2017-01-20 16:27:23,730 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-01-20 16:27:23,735 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/10.0.1.1
************************************************************/
Any help would be highly appreciated.

hadoop namenode not starting/formatting on Ubuntu

I am trying to set up a Hadoop instance on Ubuntu. The namenode is not starting up. When I run the jps command I can see everything but the namenode. Here is my hdfs-site.xml file:
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/ac/hadoop/dfs</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/ac/hadoop/dfs</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
and here's my core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
The error that I got is:
ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
When I formatted the namenode I got this at the prompt:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hanu/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.8.0_31
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
15/02/03 15:03:41 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hanu/127.0.1.1
I've tried to change files as per various suggestions out there but nothing is working. I think the namenode is not formatting properly.
What's wrong in my setup and how can I get it corrected? Any help is appreciated. Thanks.
The reason you are seeing the error message is a typo in the command; that is why the NameNode class is showing the usage error. You may have issued the command option improperly (the STARTUP_MSG above shows args = [–format] with a long dash instead of a plain hyphen).
Make sure you type the command properly:
bin/hadoop namenode -format
and then try to start the NameNode. You could start the NameNode service in the foreground just to see if everything is working out properly; if you don't see any errors you can kill the process and start all the services using the start-all.sh script.
Here's how you could start the NameNode process in the foreground:
bin/hadoop namenode
Once started, these are the log messages to look for to validate a proper startup:
15/02/04 10:42:44 INFO http.HttpServer: Jetty bound to port 50070
15/02/04 10:42:44 INFO mortbay.log: jetty-6.1.26
15/02/04 10:42:45 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
15/02/04 10:42:45 INFO namenode.NameNode: Web-server up at: 0.0.0.0:50070
15/02/04 10:42:45 INFO ipc.Server: IPC Server Responder: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server listener on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 0 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 1 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 2 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 3 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 4 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 5 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 6 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 7 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 8 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 9 on 8020: starting
You can kill the service by sending Ctrl+C to the process.
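Putting it together, a minimal sketch of the whole sequence for this Hadoop 1.2.1 setup (run from the Hadoop install directory; the formatting step only needs to happen once):
# format the namenode (plain hyphen in -format), then run it in the foreground to watch for errors
bin/hadoop namenode -format
bin/hadoop namenode
# once the "IPC Server handler ... starting" lines appear, stop it with Ctrl+C and start all daemons
bin/start-all.sh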

Datanode daemon not starting on datanodes hadoop

I am unable to start the datanode daemon on my cluster (version 2.2). It starts fine on the master node but simply does not start on the data nodes. No log files are created on the data nodes (they are only created for the master-node daemon) and there is no error message. I have made sure the things below are right.
I am able to ssh to all data nodes from the master without a password. I have also set the HADOOP_SECURE_DN_USER user to "hadoop" on all nodes; this is the user I am planning to start the datanodes as.
I have added the data nodes to the slaves file, one per line.
HADOOP_HOME (/home/hadoop/hadoop-2.2.0) and HADOOP_CONF_DIR ($HADOOP_HOME/etc/hadoop) are set on ALL the nodes.
All required directories are present on the datanodes, the users are created, and ipv6 is disabled.
I added the necessary config file parameters; they are as below -
Below are the log files for reference. They don't contain any errors. Note the "Network topology has 0 racks and 0 datanodes" line below, suggesting it is not recognizing ALL the datanodes (maybe the safe mode one, not sure). Any help is much appreciated.
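A sketch of how the slave-side setup could be double-checked by hand (assuming the stock Hadoop 2.2 layout; the datanode host name is a placeholder):
# from the master: confirm the slaves file contents and passwordless ssh
cat $HADOOP_CONF_DIR/slaves
ssh hadoop@datanode1 "echo ok"
# on one datanode: start the daemon directly and look for a freshly created log file
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
ls -lt $HADOOP_HOME/logs/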
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/datanode</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.log.dirs</name>
<value>/home/yarn/logs</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>1024</value>
</property>
</configuration>
Namenode Log:
2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs
2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/192.168.56.1:9000
2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2013-12-06 23:55:08,530 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0) storage DS-1268869381-192.168.56.1-50010-1386350725676
2013-12-06 23:55:08,535 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.56.1:50010
2013-12-06 23:55:08,717 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from 192.168.56.1:50010 after starting up or becoming active. Its block contents are no longer considered stale
2013-12-06 23:55:08,718 INFO BlockStateChange: BLOCK* processReport: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0), blocks: 0, processing time: 2 msecs
Datanode Log (on master node):
2013-12-06 23:55:08,469 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1981795271-192.168.56.1-1386350567299
2013-12-06 23:55:08,470 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1981795271-192.168.56.1-1386350567299 on /home/hadoop/datanode/current: 8ms
2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1981795271-192.168.56.1-1386350567299: 9ms
2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current: 0ms
2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 0ms
2013-12-06 23:55:08,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 beginning handshake with NN
2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 successfully registered with NN
2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/192.168.56.1:9000 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 trying to claim ACTIVE state with txid=5
2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000
2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 2 msec to generate and 90 msecs for RPC and NN processing
2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@38568c24
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 889 MB
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2013-12-06 23:55:08,774 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1981795271-192.168.56.1-1386350567299
2013-12-06 23:55:08,778 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1981795271-192.168.56.1-1386350567299 to blockPoolScannerMap, new size=1

Failed to start Jobtracker and Tasktracker in CDH pseudo cluster

I noticed this problem when I tried to execute MapReduce from R and failed to talk to the JT and TT. This happened after I changed some config files, but unfortunately I forgot how to change them back (my bad)!!
1) JT log:
2013-08-05 15:14:09,335 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG: host = rhadoop/172.16.1.39
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.0.0-mr1-cdh4.3.0
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/local/java/jdk1.7.0_15/lib/tools.jar:/usr/lib/hadoop-0.20-mapreduce:/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-compiler-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-digester-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-io-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-lang-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-math-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/guava-11.0.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hadoop-fairscheduler-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-core-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-json-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-server-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jettison-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jline-0.9.94.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-api-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/junit-4.8.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-0.20-mapreduce/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-0.20-mapreduce/lib/servlet-api-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/stax-api-1.0.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-2.1/jsp-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-2.1/jsp-api-2.1.jar:/etc/hadoop/conf:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/
jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0-tests.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/lib/jac
kson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.3.0-tests.jar:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/commons-math-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-api-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-lang-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-digester-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-compiler-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/stax-api-1.0.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-server-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jline-0.9.94.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20-mapreduce/lib/junit-4.8.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-core-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/guava-11.0.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/servlet-api-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-io-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jettison-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-json-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hadoop-fairscheduler-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-ant-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-
0.20-mapreduce/.//hadoop-tools.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-tools-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-core-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-test.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-core.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-test-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-examples-2.0.0-mr1-cdh4.3.0.jar:/usr/lib/hadoop-0.20-mapreduce/.//hadoop-examples.jar
STARTUP_MSG: build = file:///data/1/jenkins/workspace/generic-package-ubuntu64-10-04/CDH4.3.0-Packaging-Hadoop-2013-05-27_19-02-30/hadoop-2.0.0+1357-1.cdh4.3.0.p0.21~lucid/src/hadoop-mapreduce1-project -r Unknown; compiled by 'jenkins' on Mon May 27 19:57:14 PDT 2013
STARTUP_MSG: java = 1.7.0_15
************************************************************/
2013-08-05 15:14:09,342 INFO org.apache.hadoop.mapred.JobTracker: registered UNIX signal handlers for [TERM, HUP, INT]
2013-08-05 15:14:14,823 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2013-08-05 15:14:14,836 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2013-08-05 15:14:14,837 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-08-05 15:14:14,838 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2013-08-05 15:14:14,850 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2013-08-05 15:14:15,081 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as mapred
2013-08-05 15:14:15,230 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8021
2013-08-05 15:14:15,361 WARN org.apache.hadoop.ipc.RPC: Interface interface org.apache.hadoop.mapred.TaskTrackerManager ignored because it does not extend VersionedProtocol
2013-08-05 15:14:22,145 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-08-05 15:14:22,306 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-08-05 15:14:22,310 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context job
2013-08-05 15:14:22,314 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2013-08-05 15:14:22,314 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2013-08-05 15:14:22,689 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2013-08-05 15:14:22,689 INFO org.mortbay.log: jetty-6.1.26.cloudera.2
2013-08-05 15:14:23,908 INFO org.mortbay.log: Started SelectChannelConnector#0.0.0.0:50030
2013-08-05 15:14:24,065 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2013-08-05 15:14:24,075 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2013-08-05 15:14:24,117 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 8021
2013-08-05 15:14:24,117 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2013-08-05 15:14:25,374 FATAL org.apache.hadoop.mapred.JobTracker: java.lang.IllegalArgumentException: Wrong FS: file:/var/lib/hadoop-hdfs/cache/mapred/mapred/system, expected: hdfs://rhadoop:8020
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:445)
at org.apache.hadoop.mapred.JobTracker.getSystemDir(JobTracker.java:4174)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1941)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1747)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:305)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:297)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4538)
2013-08-05 15:14:25,383 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at rhadoop/172.16.1.39
************************************************************/
2) TT log (it exceeds the body limit, so I'll only show the errors):
2013-08-05 14:55:07,215 WARN org.apache.hadoop.mapred.TaskTracker: TaskTracker local dir file:///var/lib/hadoop-hdfs/cache/mapred/mapred/local error Dir is not readable: file:///var/lib/hadoop-hdfs/cache/mapred/mapred/local, removing from local dirs
2013-08-05 14:55:07,217 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because org.apache.hadoop.util.DiskChecker$DiskErrorException: No mapred local directories are writable
at org.apache.hadoop.mapred.TaskTracker$LocalStorage.checkDirs(TaskTracker.java:279)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1710)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:4041)
2013-08-05 14:55:07,225 INFO org.apache.hadoop.mapred.TaskTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at rhadoop/172.16.1.39
************************************************************/
3) And here is my /etc/hosts file:
127.0.0.1 localhost
172.16.1.39 rhadoop
4) And my config in core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://rhadoop:8020</value>
</property>
5) And my mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>rhadoop:8021</value>
</property>
<!-- Enable Hue plugins -->
<property>
<name>mapred.jobtracker.plugins</name>
<value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
<description>Comma-separated list of jobtracker plug-ins to be activated.
</description>
</property>
<property>
<name>jobtracker.thrift.address</name>
<value>0.0.0.0:9290</value>
</property>
</configuration>
One thing to mention: I searched online for the Wrong FS error on the JT, but most of the results look like:
Wrong FS: hdfs:xxxxxxxxxxxxx, expected: xxxxxxxxx
But mine is Wrong FS: file:xxxxxxxxxxx.
Can anyone help me with the configuration here?
I found the problem. In the file /etc/hadoop/conf/hdfs-site.xml, there is one property:
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
</property>
While making some changes, I carelessly changed the value to:
file:///var/lib/hadoop-hdfs/cache/${user.name}
which caused my problem!
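For anyone hitting the same thing: hadoop.tmp.dir takes a plain local filesystem path, not a file:// URI. A minimal sketch of the corrected property (this is simply my original value restored; adjust the path to your own layout):
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
</property>
Reverting to the plain path and restarting the JT and TT should clear the Wrong FS: file:... error.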
Looks like you do not have proper permissions on the directory file:///var/lib/hadoop-hdfs/cache/mapred/mapred/local. Have you changed anything in your mapred-site.xml file, especially the property mapred.local.dir? Showing us the JT and TT log files would be helpful.
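If you do set mapred.local.dir explicitly, it should be a plain local path (no file:// scheme) that the user running the TaskTracker can write to, along these lines (the path below is just the one from your TT log, shown as an example):
<property>
<name>mapred.local.dir</name>
<value>/var/lib/hadoop-hdfs/cache/mapred/mapred/local</value>
</property>
The directory itself also needs to exist and be writable by the mapred user, otherwise the TaskTracker will drop it and fail with "No mapred local directories are writable" as in your log.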
