How to resolve Hadoop installation error: hdfs namenode -format

I installed Hadoop on CentOS 7. When I execute the command hdfs namenode -format,
I get output with errors. I have tried several suggestions I found on the internet, but the problem is not solved.
21/06/16 14:15:01 INFO namenode.NameNode: Caching file names occuring more than 10 times
21/06/16 14:15:01 INFO util.GSet: Computing capacity for map cachedBlocks
21/06/16 14:15:01 INFO util.GSet: VM type = 64-bit
21/06/16 14:15:01 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
21/06/16 14:15:01 INFO util.GSet: capacity = 2^18 = 262144 entries
21/06/16 14:15:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/06/16 14:15:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
21/06/16 14:15:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
21/06/16 14:15:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/06/16 14:15:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/06/16 14:15:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/06/16 14:15:01 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/06/16 14:15:01 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/06/16 14:15:01 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/06/16 14:15:01 INFO util.GSet: VM type = 64-bit
21/06/16 14:15:01 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
21/06/16 14:15:01 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /storage/name ? (Y or N) y
21/06/16 14:15:06 WARN net.DNS: Unable to determine local hostname -falling back to "localhost"
java.net.UnknownHostException: LSHDP
localhost: LSHDP
localhost: Temporary failure in name resolution
at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
at org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:264)
at org.apache.hadoop.net.DNS.<clinit>(DNS.java:57)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:157)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.UnknownHostException: LSHDP
localhost: Temporary failure in name resolution
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
... 8 more
21/06/16 14:15:06 WARN net.DNS: Unable to determine address of the host-falling back to "localhost" address
java.net.UnknownHostException: LSHDP
localhost: LSHDP
localhost: Temporary failure in name resolution
at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
at org.apache.hadoop.net.DNS.resolveLocalHostIPAddress(DNS.java:287)
at org.apache.hadoop.net.DNS.<clinit>(DNS.java:58)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:157)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.UnknownHostException: LSHDP
localhost: Temporary failure in name resolution
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
... 8 more
21/06/16 14:15:06 INFO namenode.FSImage: Allocated new BlockPoolId: BP-352354458-127.0.0.1-1623852906859
21/06/16 14:15:06 INFO common.Storage: Storage directory /storage/name has been successfully formatted.
21/06/16 14:15:07 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/06/16 14:15:07 INFO util.ExitUtil: Exiting with status 0
21/06/16 14:15:07 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: LSHDP
localhost: LSHDP
localhost: Temporary failure in name resolution
************************************************************/

You need to fix your DNS server (or the OS hosts file) so that a host named LSHDP is known.
For example, ping LSHDP should currently return a similar resolution error.
Alternatively, you can edit your Hadoop config files to use IP addresses rather than hostnames.
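On CentOS 7 the quickest fix is usually a hosts-file entry. A minimal sketch, assuming the machine's address is 192.168.1.10 (a placeholder; take the real address from ip addr):
# Show the hostname the JVM will try to resolve (should print LSHDP here)
hostname
# Map that hostname to the machine's address in /etc/hosts
echo "192.168.1.10  LSHDP" | sudo tee -a /etc/hosts
# Verify that resolution now works, then re-run hdfs namenode -format
ping -c 1 LSHDP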

Related

Namenode Shutdown Error - Exiting with Status 0 (Hadoop Installation)

I am trying to get Hadoop 2.8.1 working. I am running the command to format the NameNode. However, the NameNode shuts down when I run it from the Hadoop directory.
***********s-MacBook-Pro-2:~ ***********$ cd Downloads/hadoop-2.8.1
***********s-MacBook-Pro-2:hadoop-2.8.1 ***********$ bin/hdfs namenode -format
17/09/12 12:08:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = ***********
STARTUP_MSG: host = ***********s-macbook-pro-2.local/172.16.42.63
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.8.1
STARTUP_MSG: classpath = /Users/***********/Downloads/hadoop-2.8.1/etc/hadoop:/Users/***********/Downloads/hadoop-2.8.1/share/hadoop/common/lib/activation-1.1.jar:... [long classpath listing truncated]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 20fe5304904fc2f5a18053c389e43cd26f7a70fe; compiled by 'vinodkv' on 2017-06-02T06:14Z
STARTUP_MSG: java = 1.8.0_144
************************************************************/
17/09/12 12:08:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/09/12 12:08:26 INFO namenode.NameNode: createNameNode [-format]
17/09/12 12:08:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-eb6e6984-6e78-4eda-8d6c-7aa3186c738e
17/09/12 12:08:27 INFO namenode.FSEditLog: Edit logging is async:false
17/09/12 12:08:27 INFO namenode.FSNamesystem: KeyProvider: null
17/09/12 12:08:27 INFO namenode.FSNamesystem: fsLock is fair: true
17/09/12 12:08:27 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/09/12 12:08:27 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/09/12 12:08:27 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/09/12 12:08:27 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/09/12 12:08:27 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Sep 12 12:08:27
17/09/12 12:08:27 INFO util.GSet: Computing capacity for map BlocksMap
17/09/12 12:08:27 INFO util.GSet: VM type = 64-bit
17/09/12 12:08:27 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/09/12 12:08:27 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/09/12 12:08:27 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/09/12 12:08:27 INFO blockmanagement.BlockManager: defaultReplication = 1
17/09/12 12:08:27 INFO blockmanagement.BlockManager: maxReplication = 512
17/09/12 12:08:27 INFO blockmanagement.BlockManager: minReplication = 1
17/09/12 12:08:27 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/09/12 12:08:27 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/09/12 12:08:27 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/09/12 12:08:27 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/09/12 12:08:27 INFO namenode.FSNamesystem: fsOwner = *********** (auth:SIMPLE)
17/09/12 12:08:27 INFO namenode.FSNamesystem: supergroup = supergroup
17/09/12 12:08:27 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/09/12 12:08:27 INFO namenode.FSNamesystem: HA Enabled: false
17/09/12 12:08:27 INFO namenode.FSNamesystem: Append Enabled: true
17/09/12 12:08:27 INFO util.GSet: Computing capacity for map INodeMap
17/09/12 12:08:27 INFO util.GSet: VM type = 64-bit
17/09/12 12:08:27 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/09/12 12:08:27 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/09/12 12:08:27 INFO namenode.FSDirectory: ACLs enabled? false
17/09/12 12:08:27 INFO namenode.FSDirectory: XAttrs enabled? true
17/09/12 12:08:27 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/09/12 12:08:27 INFO util.GSet: Computing capacity for map cachedBlocks
17/09/12 12:08:27 INFO util.GSet: VM type = 64-bit
17/09/12 12:08:27 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/09/12 12:08:27 INFO util.GSet: capacity = 2^18 = 262144 entries
17/09/12 12:08:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/09/12 12:08:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/09/12 12:08:27 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/09/12 12:08:27 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/09/12 12:08:27 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/09/12 12:08:27 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/09/12 12:08:27 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/09/12 12:08:27 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/09/12 12:08:27 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/09/12 12:08:27 INFO util.GSet: VM type = 64-bit
17/09/12 12:08:27 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/09/12 12:08:27 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /tmp/hadoop-***********/dfs/name ? (Y or N) Y
17/09/12 12:08:30 INFO namenode.FSImage: Allocated new BlockPoolId: BP-336205315-172.16.42.63-1505243310944
17/09/12 12:08:30 INFO common.Storage: Storage directory /tmp/hadoop-***********/dfs/name has been successfully formatted.
17/09/12 12:08:30 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-***********/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/09/12 12:08:31 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-***********/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 330 bytes saved in 0 seconds.
17/09/12 12:08:31 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/09/12 12:08:31 INFO util.ExitUtil: Exiting with status 0
17/09/12 12:08:31 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ***********s-macbook-pro-2.local/172.16.42.63
************************************************************/
The hdfs namenode -format command only formats the NameNode; it does not bring the NameNode service up. To run HDFS you need to execute start-dfs.sh. Alternatively, for testing, you can run the NameNode in the foreground after formatting, using the command below:
hdfs namenode
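The usual sequence looks roughly like this (assuming $HADOOP_HOME/bin and $HADOOP_HOME/sbin are on your PATH):
# one-time format of the NameNode metadata directory
hdfs namenode -format
# start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
start-dfs.sh
# confirm the daemons are running
jps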

Hadoop Installation Issue on Windows

I have been trying to install Hadoop on Windows 7 for quite some time now. I am following this blog for instructions, but unfortunately I have not been able to run the NameNode.
There seems to be an issue with the hdfs-site.xml file, but I don't see anything wrong with it. Please have a look at it.
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>F:\hadoop-2.7.2\data\namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>F:\hadoop-2.7.2\data\datanode</value>
</property>
</configuration>
And here is the error log I get when running the hdfs namenode -format command in the command prompt:
C:\Users\ABC>hdfs namenode -format
Hadoop common not found.
16/08/05 12:44:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ABC-PC/172.20.0.51
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.2
STARTUP_MSG: classpath = F:\hadoop-2.7.2\etc\hadoop;F:\hadoop-2.7.2\share\hadoop\common\lib\commons-compress-1.4.1.jar;... [long classpath listing truncated]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG: java = 1.7.0_79
************************************************************/
16/08/05 12:44:53 INFO namenode.NameNode: createNameNode [-format]
16/08/05 12:44:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/05 12:44:54 ERROR common.Util: Syntax error in URI F:\hadoop-2.7.2\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: F:\hadoop-2.7.2\data\namenode
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.checkChars(URI.java:3002)
at java.net.URI$Parser.parse(URI.java:3039)
at java.net.URI.<init>(URI.java:595)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1355)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:966)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/08/05 12:44:54 WARN common.Util: Path F:\hadoop-2.7.2\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
16/08/05 12:44:54 ERROR common.Util: Syntax error in URI F:\hadoop-2.7.2\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: F:\hadoop-2.7.2\data\namenode
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.checkChars(URI.java:3002)
at java.net.URI$Parser.parse(URI.java:3039)
at java.net.URI.<init>(URI.java:595)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:48)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1400)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1445)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1414)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:971)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/08/05 12:44:54 WARN common.Util: Path F:\hadoop-2.7.2\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-e302dfa9-9520-4074-9247-d9f09cd0f882
16/08/05 12:44:54 INFO namenode.FSNamesystem: No KeyProvider found.
16/08/05 12:44:54 INFO namenode.FSNamesystem: fsLock is fair:true
16/08/05 12:44:54 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/08/05 12:44:54 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/08/05 12:44:54 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/08/05 12:44:54 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Aug 05 12:44:54
16/08/05 12:44:54 INFO util.GSet: Computing capacity for map BlocksMap
16/08/05 12:44:54 INFO util.GSet: VM type = 32-bit
16/08/05 12:44:54 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
16/08/05 12:44:54 INFO util.GSet: capacity = 2^22 = 4194304 entries
16/08/05 12:44:54 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/08/05 12:44:54 INFO blockmanagement.BlockManager: defaultReplication = 1
16/08/05 12:44:54 INFO blockmanagement.BlockManager: maxReplication = 512
16/08/05 12:44:54 INFO blockmanagement.BlockManager: minReplication = 1
16/08/05 12:44:54 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
16/08/05 12:44:54 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/08/05 12:44:54 INFO blockmanagement.BlockManager: encryptDataTransfer = false
16/08/05 12:44:54 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
16/08/05 12:44:54 INFO namenode.FSNamesystem: fsOwner = ABC (auth:SIMPLE)
16/08/05 12:44:54 INFO namenode.FSNamesystem: supergroup = supergroup
16/08/05 12:44:54 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/08/05 12:44:54 INFO namenode.FSNamesystem: HA Enabled: false
16/08/05 12:44:54 INFO namenode.FSNamesystem: Append Enabled: true
16/08/05 12:44:54 INFO util.GSet: Computing capacity for map INodeMap
16/08/05 12:44:54 INFO util.GSet: VM type = 32-bit
16/08/05 12:44:54 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
16/08/05 12:44:54 INFO util.GSet: capacity = 2^21 = 2097152 entries
16/08/05 12:44:54 INFO namenode.FSDirectory: ACLs enabled? false
16/08/05 12:44:54 INFO namenode.FSDirectory: XAttrs enabled? true
16/08/05 12:44:54 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/08/05 12:44:54 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/08/05 12:44:54 INFO util.GSet: Computing capacity for map cachedBlocks
16/08/05 12:44:54 INFO util.GSet: VM type = 32-bit
16/08/05 12:44:54 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
16/08/05 12:44:54 INFO util.GSet: capacity = 2^19 = 524288 entries
16/08/05 12:44:54 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/08/05 12:44:54 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/08/05 12:44:54 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/08/05 12:44:54 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/08/05 12:44:54 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/08/05 12:44:54 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/08/05 12:44:54 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/08/05 12:44:54 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/08/05 12:44:54 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/08/05 12:44:54 INFO util.GSet: VM type = 32-bit
16/08/05 12:44:54 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
16/08/05 12:44:54 INFO util.GSet: capacity = 2^16 = 65536 entries
Re-format filesystem in Storage Directory F:\hadoop-2.7.2\data\namenode ? (Y or N) y
16/08/05 12:55:16 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1246143925-172.20.0.51-1470383716578
16/08/05 12:55:16 INFO common.Storage: Storage directory F:\hadoop-2.7.2\data\namenode has been successfully formatted.
16/08/05 12:55:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/08/05 12:55:16 INFO util.ExitUtil: Exiting with status 0
16/08/05 12:55:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ABC-PC/192.168.0.51
************************************************************/
Can anybody please point out what I am doing wrong here?
UPDATE
Thanks to Binary Nerd for setting things right. But now I am facing another problem: even though the system architecture is 32-bit, the NameNode does not start, and the following error appears (excerpt):
CreateProcess error=216, This version of %1 is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher
Looks like the main error you're getting is:
ERROR common.Util: Syntax error in URI F:\hadoop-2.7.2\data\namenode.
You've specified it as:
<property>
<name>dfs.namenode.name.dir</name>
<value>F:\hadoop-2.7.2\data\namenode</value>
</property>
Perhaps the first thing to try is using the same format as the blog (forward slashes):
F:/hadoop-2.7.2/data/namenode
If that doesn't help, you can try making it a valid URI:
file:///f:/hadoop-2.7.2/data/namenode
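In other words, the property would look like this (a sketch using the file:/// form; adjust the drive letter and path to your own layout):
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///f:/hadoop-2.7.2/data/namenode</value>
</property>
The same change applies to dfs.datanode.data.dir.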
Kinda late, but for future reference:
My problem was that I copy-pasted the hdfs-site.xml from the tutorial, and a special character (probably a newline) was added in this line
<property><name>dfs.namenode.name.dir</name><value>/hadoop-
2.6.0/data/name</value><final>true</final></property>
so just go and delete it:
<property>
<name>dfs.namenode.name.dir</name><value>/hadoop-2.6.0/data/name</value><final>true</final>
</property>
This sort of installation problem is the worst... it just discourages you so much.
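A quick way to spot stray non-printing characters in a copied config file (assuming GNU coreutils) is:
# -A makes line ends ($), tabs (^I) and other control bytes visible,
# so a pasted newline or odd byte inside a <value> stands out
cat -A hdfs-site.xml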
In the hdfs-site.xml file, change F:\hadoop-2.7.2\data\namenode to file:/F:/hadoop-2.7.2/data/namenode.
After this, the error will be resolved.

hadoop namenode and datanode not started

Last Edit
I fixed it by mixing many different answers together.
First I changed the permissions of:
/usr/local/hadoop_store/hdfs/namenode
/usr/local/hadoop_store/hdfs/datanode
to 777.
Then I ran stop-all.sh and restarted hadoop.
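For reference, the fix was roughly these commands (assuming the Hadoop sbin scripts are on the PATH; chmod 777 is heavy-handed, and a chown -R to the user running Hadoop would be tidier, but this is what worked here):
sudo chmod -R 777 /usr/local/hadoop_store/hdfs/namenode
sudo chmod -R 777 /usr/local/hadoop_store/hdfs/datanode
stop-all.sh
start-all.sh
# NameNode and DataNode should now appear in the jps output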
Should this question be closed?
I know this has been asked before, but the questioners seem to be working with much older versions. Also, none of the answers helped me.
I installed hadoop 2.7.0 on Ubuntu 15.10 and followed this tutorial exactly:
https://www.digitalocean.com/community/tutorials/how-to-install-hadoop-on-ubuntu-13-10
I tried about 20 others; this was the first one that was understandable.
Now, when I run jps, I get:
14812 SecondaryNameNode
15101 NodeManager
14969 ResourceManager
15519 Jps
Which means the NameNode and the DataNode have not started.
Does anyone know how to fix this?
Edit:
I think this might be important: When I formatted my namenode using
hdfs namenode -format
I got one hell of an output:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = me-Aspire-E5-574G/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.2
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:... [long classpath listing truncated]
:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.2.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG: java = 1.7.0_101
************************************************************/
16/06/16 10:18:13 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/06/16 10:18:13 INFO namenode.NameNode: createNameNode [-format]
16/06/16 10:18:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-19779c07-66da-44f2-b05c-6664e2a2abfc
16/06/16 10:18:14 INFO namenode.FSNamesystem: No KeyProvider found.
16/06/16 10:18:14 INFO namenode.FSNamesystem: fsLock is fair:true
16/06/16 10:18:15 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/06/16 10:18:15 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/06/16 10:18:15 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/06/16 10:18:15 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Jun 16 10:18:15
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map BlocksMap
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^21 = 2097152 entries
16/06/16 10:18:15 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/06/16 10:18:15 INFO blockmanagement.BlockManager: defaultReplication = 1
16/06/16 10:18:15 INFO blockmanagement.BlockManager: maxReplication = 512
16/06/16 10:18:15 INFO blockmanagement.BlockManager: minReplication = 1
16/06/16 10:18:15 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
16/06/16 10:18:15 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/06/16 10:18:15 INFO blockmanagement.BlockManager: encryptDataTransfer = false
16/06/16 10:18:15 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
16/06/16 10:18:15 INFO namenode.FSNamesystem: fsOwner = me (auth:SIMPLE)
16/06/16 10:18:15 INFO namenode.FSNamesystem: supergroup = supergroup
16/06/16 10:18:15 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/06/16 10:18:15 INFO namenode.FSNamesystem: HA Enabled: false
16/06/16 10:18:15 INFO namenode.FSNamesystem: Append Enabled: true
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map INodeMap
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^20 = 1048576 entries
16/06/16 10:18:15 INFO namenode.FSDirectory: ACLs enabled? false
16/06/16 10:18:15 INFO namenode.FSDirectory: XAttrs enabled? true
16/06/16 10:18:15 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/06/16 10:18:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map cachedBlocks
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^18 = 262144 entries
16/06/16 10:18:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/06/16 10:18:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/06/16 10:18:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/06/16 10:18:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/06/16 10:18:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/06/16 10:18:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/06/16 10:18:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/06/16 10:18:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^15 = 32768 entries
16/06/16 10:18:15 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1368358985-127.0.1.1-1466065095377
16/06/16 10:18:15 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/hadoop_store/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/06/16 10:18:15 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /usr/local/hadoop_store/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/06/16 10:18:15 INFO util.ExitUtil: Exiting with status 1
16/06/16 10:18:15 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at me-Aspire-E5-574G/127.0.1.1
************************************************************/
I did what a user in the comments advised. When I run:
hdfs namenode -format
I get the long output above. However, when I run:
sudo hdfs namenode -format
I get:
sudo: hdfs: command not found
Does that even make sense?
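It does make sense: sudo resets PATH to the secure_path defined in /etc/sudoers, so a Hadoop install that lives outside the standard system directories is not found when a command runs under sudo. Two ways around it, sketched under the assumption that Hadoop lives in /usr/local/hadoop:
# preserve the caller's PATH for this one command
sudo env "PATH=$PATH" hdfs namenode -format
# or invoke the binary by its absolute path
sudo /usr/local/hadoop/bin/hdfs namenode -format
That said, formatting as root is usually the wrong fix anyway: the metadata files then end up owned by root, and the NameNode running as your hadoop user can no longer write to them.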
1. Clear the tmp folder that you set in $HADOOP_HOME/etc/hadoop/core-site.xml.
2. Format the namenode and datanode:
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/bin/hadoop datanode -format
$HADOOP_HOME/bin/hdfs datanode -format
(the hadoop forms are deprecated aliases of the hdfs forms, so running one of each pair is enough)
3. Then start Hadoop with start-dfs.sh.
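The "Cannot create directory /usr/local/hadoop_store/hdfs/namenode/current" error in the log above is typically a permissions problem: the user running the format (fsOwner "me" in this log) cannot write to the storage directory. A minimal sketch of the fix, assuming the paths from the log; substitute your own user and your own hadoop.tmp.dir value:
# create the storage directory and hand it to the hadoop user
sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
sudo chown -R me:me /usr/local/hadoop_store/hdfs
# clear the tmp directory configured as hadoop.tmp.dir in core-site.xml
# (the default is /tmp/hadoop-<username>; check your own config)
rm -rf /tmp/hadoop-me/*
# reformat and start
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/sbin/start-dfs.sh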

start-dfs.sh not working - localhost: Bad port 'localhost' (Hadoop 2.7.1)

I have installed Hadoop 2.7.1 on Ubuntu 14.10.
When I try the command hadoop version, it works fine.
The hadoop namenode -format command also works fine.
The command start-dfs.sh, however, does not work. I get:
Starting namenodes on [localhost]
localhost: Bad port 'localhost'
localhost: Bad port 'localhost'
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Bad Port '0.0.0.0'
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:/usr/local/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/usr/local/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
hosts file
127.0.0.1 localhost
127.0.1.1 hp-HP-Notebook
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
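Note the second line of this hosts file: the machine's own hostname maps to 127.0.1.1, which is why the logs below end with "Shutting down NameNode at hp-HP-Notebook/127.0.1.1". That is harmless on a single node, but if other machines ever need to reach this NameNode, the hostname should map to the real LAN address instead, along these lines (the address is an example, use your own):
127.0.0.1 localhost
192.168.1.10 hp-HP-Notebook   # real LAN IP rather than 127.0.1.1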
hadoop namenode -format
hp@hp-HP-Notebook:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
16/01/19 22:15:18 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hp-HP-Notebook/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath = [jar files omitted]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_79
************************************************************/
16/01/19 22:15:18 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/01/19 22:15:18 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-beba2328-b534-4370-9f89-d5b3fc3c9986
16/01/19 22:15:21 INFO namenode.FSNamesystem: No KeyProvider found.
16/01/19 22:15:21 INFO namenode.FSNamesystem: fsLock is fair:true
16/01/19 22:15:21 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/01/19 22:15:21 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/01/19 22:15:21 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/01/19 22:15:21 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Jan 19 22:15:21
16/01/19 22:15:21 INFO util.GSet: Computing capacity for map BlocksMap
16/01/19 22:15:21 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:21 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/01/19 22:15:21 INFO util.GSet: capacity = 2^21 = 2097152 entries
16/01/19 22:15:21 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/01/19 22:15:21 INFO blockmanagement.BlockManager: defaultReplication = 1
16/01/19 22:15:21 INFO blockmanagement.BlockManager: maxReplication = 512
16/01/19 22:15:21 INFO blockmanagement.BlockManager: minReplication = 1
16/01/19 22:15:21 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
16/01/19 22:15:21 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
16/01/19 22:15:21 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/01/19 22:15:21 INFO blockmanagement.BlockManager: encryptDataTransfer = false
16/01/19 22:15:21 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
16/01/19 22:15:21 INFO namenode.FSNamesystem: fsOwner = hp (auth:SIMPLE)
16/01/19 22:15:21 INFO namenode.FSNamesystem: supergroup = supergroup
16/01/19 22:15:21 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/01/19 22:15:21 INFO namenode.FSNamesystem: HA Enabled: false
16/01/19 22:15:21 INFO namenode.FSNamesystem: Append Enabled: true
16/01/19 22:15:22 INFO util.GSet: Computing capacity for map INodeMap
16/01/19 22:15:22 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:22 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/01/19 22:15:22 INFO util.GSet: capacity = 2^20 = 1048576 entries
16/01/19 22:15:22 INFO namenode.FSDirectory: ACLs enabled? false
16/01/19 22:15:22 INFO namenode.FSDirectory: XAttrs enabled? true
16/01/19 22:15:22 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/01/19 22:15:22 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/01/19 22:15:22 INFO util.GSet: Computing capacity for map cachedBlocks
16/01/19 22:15:22 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:22 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/01/19 22:15:22 INFO util.GSet: capacity = 2^18 = 262144 entries
16/01/19 22:15:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/01/19 22:15:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/01/19 22:15:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/01/19 22:15:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/01/19 22:15:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/01/19 22:15:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/01/19 22:15:22 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/01/19 22:15:22 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/01/19 22:15:22 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/01/19 22:15:22 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:22 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/01/19 22:15:22 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/local/hadoopdata/hdfs/namenode ? (Y or N) y
16/01/19 22:15:28 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1331619148-127.0.1.1-1453221928666
16/01/19 22:15:28 INFO common.Storage: Storage directory /usr/local/hadoopdata/hdfs/namenode has been successfully formatted.
16/01/19 22:15:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/01/19 22:15:29 INFO util.ExitUtil: Exiting with status 0
16/01/19 22:15:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hp-HP-Notebook/127.0.1.1
************************************************************/
Try this:
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
mapred-site.xml
<property>
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
</property>
hadoop-env.sh
export JAVA_HOME=<path to your JDK>
export HADOOP_HOME=<path to your Hadoop install>
yarn-site.xml
<property>
 <name>yarn.nodemanager.aux-services</name>
 <value>mapreduce_shuffle</value>
</property>
<property>
 <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
 <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
hadoop namenode -format
./start-all.sh
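One extra check worth making here: the "Bad port" text is printed by ssh, not by Hadoop. start-dfs.sh starts each daemon over ssh, passing $HADOOP_SSH_OPTS from hadoop-env.sh on the command line, and ssh reports Bad port '<arg>' when the word following a -p option is not a number. So if the error persists after the configuration changes above, look for a stray or incomplete -p in HADOOP_SSH_OPTS. You can reproduce the message to confirm the diagnosis:
# 'localhost' is consumed as the port argument, so ssh fails the same way
ssh -p localhost localhost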

Neither namenode nor datanode is starting on the master of a multi-node cluster

I was able to successfully start a single-node cluster on each of 2 computers on my home network, but I am having trouble starting them as a multi-node cluster. When I run the command start-dfs.sh I get the output:
hduser@eric-T5082:/usr/local/hadoop/sbin$ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-eric-T5082.out
slave: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-Study-Linux.out
master: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-eric-T5082.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-eric-T5082.out
When I run jps, I get the following output:
hduser@eric-T5082:/usr/local/hadoop/sbin$ jps
The program 'jps' can be found in the following packages:
* openjdk-7-jdk
* openjdk-6-jdk
Try: sudo apt-get install <selected package>
Yet jps returns the correct result on the slave node:
hduser@Study-Linux:/usr/local/hadoop/etc/hadoop$ jps
6401 Jps
6300 DataNode
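The jps failure on the master is a missing tool rather than a Hadoop problem: jps ships with the JDK, and this message means the master has no JDK on its PATH. Either install the package Ubuntu suggests, or run the copy inside your JDK directly (the path is an assumption, adjust to your install):
sudo apt-get install openjdk-7-jdk
# or, without installing anything:
$JAVA_HOME/bin/jps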
I suspect this may be due to (a) a port problem, i.e. the port is already occupied; or (b) a problem with temporary files interfering with the hdfs namenode -format command. I have tried to address (a) by trying different ports for the namenode, and (b) by erasing the temporary files before re-running the format.
Regarding (a), here is the result of netstat -l:
hduser@eric-T5082:/usr/local/hadoop/sbin$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 eric-T5082:domain *:* LISTEN
tcp 0 0 *:50070 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 localhost:ipp *:* LISTEN
tcp 0 0 *:50010 *:* LISTEN
tcp 0 0 *:50075 *:* LISTEN
tcp 0 0 *:50020 *:* LISTEN
tcp 0 0 localhost:52999 *:* LISTEN
tcp 0 0 master:9000 *:* LISTEN
tcp 0 0 *:50090 *:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
udp 0 0 *:36200 *:*
udp 0 0 *:19057 *:*
udp 0 0 *:ipp *:*
udp 0 0 eric-T5082:domain *:*
udp 0 0 *:bootpc *:*
udp 0 0 *:mdns *:*
udp6 0 0 [::]:mdns [::]:*
udp6 0 0 [::]:46391 [::]:*
udp6 0 0 [::]:51513 [::]:*
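Notice what this output actually shows: the NameNode web UI (50070) and RPC endpoint (master:9000), the DataNode ports (50010, 50075, 50020), and the SecondaryNameNode port (50090) are all in LISTEN state. In other words, the daemons on the master appear to be running after all, and only the jps check was failing. A quick way to double-check without jps:
# list running Hadoop daemon processes by class name
ps aux | grep -i '[n]amenode'
ps aux | grep -i '[d]atanode'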
Here is core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
And here is mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
And finally, hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
</configuration>
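These properties assume that /home/hduser/mydata/hdfs/namenode exists and is writable by hduser on the master, and that the datanode directory exists on every machine that runs a DataNode. A minimal sketch, run on each node as appropriate (the group name is an assumption):
mkdir -p /home/hduser/mydata/hdfs/namenode /home/hduser/mydata/hdfs/datanode
chown -R hduser:hadoop /home/hduser/mydata/hdfs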
HDFS appears to be working correctly:
hduser@eric-T5082:/usr/local/hadoop/bin$ hdfs namenode -format
15/12/21 17:09:04 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = eric-T5082/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath = [jar files omitted]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_91
************************************************************/
15/12/21 17:09:04 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/12/21 17:09:04 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-a8ee5a69-5938-434f-86de-57198465fb70
15/12/21 17:09:08 INFO namenode.FSNamesystem: No KeyProvider found.
15/12/21 17:09:08 INFO namenode.FSNamesystem: fsLock is fair:true
15/12/21 17:09:08 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/12/21 17:09:08 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/12/21 17:09:08 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/12/21 17:09:08 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Dec 21 17:09:08
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map BlocksMap
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/12/21 17:09:08 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: defaultReplication = 2
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxReplication = 512
15/12/21 17:09:08 INFO blockmanagement.BlockManager: minReplication = 1
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/12/21 17:09:08 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/12/21 17:09:08 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/12/21 17:09:08 INFO namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
15/12/21 17:09:08 INFO namenode.FSNamesystem: supergroup = supergroup
15/12/21 17:09:08 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/12/21 17:09:08 INFO namenode.FSNamesystem: HA Enabled: false
15/12/21 17:09:08 INFO namenode.FSNamesystem: Append Enabled: true
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map INodeMap
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/12/21 17:09:08 INFO namenode.FSDirectory: ACLs enabled? false
15/12/21 17:09:08 INFO namenode.FSDirectory: XAttrs enabled? true
15/12/21 17:09:08 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
15/12/21 17:09:08 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map cachedBlocks
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^18 = 262144 entries
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/12/21 17:09:08 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/12/21 17:09:08 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^15 = 32768 entries
15/12/21 17:09:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-923014467-127.0.1.1-1450746548917
15/12/21 17:09:09 INFO common.Storage: Storage directory /home/hduser/mydata/hdfs/namenode has been successfully formatted.
15/12/21 17:09:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/12/21 17:09:09 INFO util.ExitUtil: Exiting with status 0
15/12/21 17:09:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at eric-T5082/127.0.1.1
************************************************************/
Finally, here is the namenode log file:
2015-12-21 17:50:09,702 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = eric-T5082/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath = [jar files omitted]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_91
************************************************************/
2015-12-21 17:50:09,722 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-12-21 17:50:09,752 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-12-21 17:50:10,933 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-12-21 17:50:11,338 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-12-21 17:50:11,338 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-12-21 17:50:11,352 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master:9000
2015-12-21 17:50:11,353 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master:9000 to access this namenode/service.
2015-12-21 17:50:18,046 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-12-21 17:50:18,595 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-12-21 17:50:18,685 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-12-21 17:50:18,739 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-12-21 17:50:18,795 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-21 17:50:18,837 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-12-21 17:50:18,838 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-12-21 17:50:18,838 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-12-21 17:50:19,192 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-12-21 17:50:19,216 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-12-21 17:50:19,698 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-12-21 17:50:19,699 INFO org.mortbay.log: jetty-6.1.26
2015-12-21 17:50:21,961 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-12-21 17:50:27,119 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-21 17:50:27,119 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-21 17:50:27,277 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-12-21 17:50:27,277 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-12-21 17:50:27,385 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-12-21 17:50:27,385 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-12-21 17:50:27,388 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-12-21 17:50:27,391 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Dec 21 17:50:27
2015-12-21 17:50:27,395 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-12-21 17:50:27,396 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,399 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-12-21 17:50:27,399 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-12-21 17:50:27,425 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-12-21 17:50:27,425 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 2
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-12-21 17:50:27,446 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-12-21 17:50:27,585 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-12-21 17:50:27,585 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,586 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2015-12-21 17:50:27,586 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-12-21 17:50:27,597 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-12-21 17:50:27,624 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-12-21 17:50:27,624 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,625 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2015-12-21 17:50:27,625 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-12-21 17:50:27,860 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-12-21 17:50:27,860 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-12-21 17:50:27,890 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-12-21 17:50:27,890 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,891 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2015-12-21 17:50:27,891 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2015-12-21 17:50:27,992 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hduser/mydata/hdfs/namenode/in_use.lock acquired by nodename 20222@eric-T5082
2015-12-21 17:50:28,411 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hduser/mydata/hdfs/namenode/current
2015-12-21 17:50:28,891 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hduser/mydata/hdfs/namenode/current/edits_inprogress_0000000000000000003 -> /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003
2015-12-21 17:50:29,189 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-12-21 17:50:29,311 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-12-21 17:50:29,311 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 2 from /home/hduser/mydata/hdfs/namenode/current/fsimage_0000000000000000002
2015-12-21 17:50:29,312 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@1610d6ac expecting start txid #3
2015-12-21 17:50:29,312 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003
2015-12-21 17:50:29,319 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003' to transaction ID 3
2015-12-21 17:50:29,333 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003 of size 1048576 edits # 1 loaded in 0 seconds
2015-12-21 17:50:29,360 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-12-21 17:50:29,362 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 4
2015-12-21 17:50:29,714 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-12-21 17:50:29,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1808 msecs
2015-12-21 17:50:32,500 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to master:9000
2015-12-21 17:50:32,561 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-12-21 17:50:32,632 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-12-21 17:50:32,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-12-21 17:50:32,940 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2015-12-21 17:50:32,941 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2015-12-21 17:50:32,941 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2015-12-21 17:50:32,948 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 5 secs
2015-12-21 17:50:32,949 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2015-12-21 17:50:32,949 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2015-12-21 17:50:32,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2015-12-21 17:50:33,021 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2015-12-21 17:50:33,021 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 61 msec
2015-12-21 17:50:33,239 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: master/192.168.1.120:9000
2015-12-21 17:50:33,239 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2015-12-21 17:50:33,230 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-12-21 17:50:33,234 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2015-12-21 17:50:33,281 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2015-12-21 17:50:35,393 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.1.109:50010, datanodeUuid=e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0) storage e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3
2015-12-21 17:50:35,394 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:35,401 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.109:50010
2015-12-21 17:50:35,818 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:35,818 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-b4ddb959-74db-409c-b65f-b940d01b5ec3 for DN 192.168.1.109:50010
2015-12-21 17:50:36,101 INFO BlockStateChange: BLOCK* processReport: from storage DS-b4ddb959-74db-409c-b65f-b940d01b5ec3 node DatanodeRegistration(192.168.1.109:50010, datanodeUuid=e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0), blocks: 0, hasStaleStorage: false, processing time: 9 msecs
2015-12-21 17:50:38,406 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.1.120:50010, datanodeUuid=ab241604-21db-4c11-91c7-5271d42f9ffa, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0) storage ab241604-21db-4c11-91c7-5271d42f9ffa
2015-12-21 17:50:38,406 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:38,407 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.120:50010
2015-12-21 17:50:38,560 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:38,560 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-cd7c7489-dcac-4028-ac7a-a883ad1319da for DN 192.168.1.120:50010
2015-12-21 17:50:38,666 INFO BlockStateChange: BLOCK* processReport: from storage DS-cd7c7489-dcac-4028-ac7a-a883ad1319da node DatanodeRegistration(192.168.1.120:50010, datanodeUuid=ab241604-21db-4c11-91c7-5271d42f9ffa, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
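Read as a whole, this log says the cluster is actually healthy: the NameNode RPC came up at master/192.168.1.120:9000, and both DataNodes (192.168.1.109 and 192.168.1.120) registered and delivered their block reports. To confirm this from the command line instead of relying on jps:
hdfs dfsadmin -report   # should list two live datanodes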

Resources