A few details about my installation:
Ubuntu 14.04 LTS 64 bit
Oracle Java JDK 1.8.0_40
Hadoop 2.6.0
I have been following the instructions from http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php to install Hadoop. Everything is working fine up to the point where I have to format the namenode.
When I run $ hadoop namenode -format I get the following error:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/04/12 19:01:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = iulian-ThinkPad-T530/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.0
STARTUP_MSG: classpath = /usr/local/hadoop-2.6.0/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/had
oop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.6.0-tests.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/li
b/javax.inject-1.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-common-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-registry-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-common-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-api-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0.jar:/usr/local/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-client-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop
/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG: java = 1.8.0_40
************************************************************/
15/04/12 19:01:02 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/04/12 19:01:02 INFO namenode.NameNode: createNameNode [-format]
15/04/12 19:01:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-72fe7fe8-d67b-481e-8288-9f835727d80a
15/04/12 19:01:03 FATAL namenode.NameNode: Failed to start namenode.
java.lang.NullPointerException
at java.io.File.<init>(File.java:277)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.setStorageDirectories(NNStorage.java:305)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.<init>(NNStorage.java:166)
at org.apache.hadoop.hdfs.server.namenode.FSImage.<init>(FSImage.java:127)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/04/12 19:01:03 INFO util.ExitUtil: Exiting with status 1
15/04/12 19:01:03 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at iulian-ThinkPad-T530/127.0.1.1
************************************************************/
The following have been added to ~/.bashrc:
#HADOOP VARIABLES
export JAVA_HOME=/usr/local/jdk1.8.0_40
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
This happens if the dfs.namenode.name.dir property is missing from hdfs-site.xml.
Please ensure that it is present and that the directory provided as its 'value' is valid.
Example:
<property>
<name>dfs.namenode.name.dir</name>
<value>/data/namenode</value>
</property>
The format command clearly indicates an error in the NameNode storage directory. Check whether the namenode directory exists and has the proper permissions assigned. Then check whether hdfs-site.xml has the correct value set for the dfs.namenode.name.dir property.
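For example, a quick check along those lines (a sketch, assuming the /data/namenode path from the example above and that HDFS runs as the current user):
# Does the directory exist, and who owns it?
ls -ld /data/namenode
# If it is missing, create it and hand it to the Hadoop user
# (adjust the path and user to match your setup)
sudo mkdir -p /data/namenode
sudo chown -R $USER:$USER /data/namenode
chmod 755 /data/namenode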
Related
Trying to run single-node Hadoop on macOS Sierra.
Here is my configuration:
JAVA Version https://wiki.apache.org/hadoop/HadoopJavaVersions
my-MacBook-Pro:hadoop me$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
HDFS-SITE
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
CORE-SITE
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
MAPRED-SITE
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9010</value>
</property>
</configuration>
HADOOP-ENV
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home
$ bin/hadoop # displays the usage documentation
Here are some extra settings:
I DID NOT MAKE THIS ACTIVE, WHICH SOME INSTRUCTIONS RECOMMENDED
#export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
BY DEFAULT TWO OTHER CONFIGS ALREADY ACTIVE IN HADOOP-ENV
# Some parts of the shell code may do special things dependent upon
# the operating system. We have to set this here. See the next
# section as to why....
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
# Under certain conditions, Java on OS X will throw SCDynamicStore errors
# in the system logs.
# See HADOOP-8719 for more information. If one needs Kerberos
# support on OS X, one will want to change/remove this extra bit.
case ${HADOOP_OS_TYPE} in
Darwin*)
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
;;
esac
Here is where I formatted namenode:
NAMENODE FORMATTED
My-MacBook-Pro:3.0.0 me$ bin/hdfs namenode -format
WARNING: /usr/local/Cellar/hadoop/3.0.0/libexec/logs does not exist. Creating.
Starting namenode:
2018-04-09 17:45:15,037 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = My-MacBook-Pro.local/10.5.1.16
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.0.0
STARTUP_MSG: classpath = /usr/ …
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r c25427ceca461ee979d30edd7a4b0f50718e6533; compiled by 'andrew' on 2017-12-08T19:16Z
STARTUP_MSG: java = 1.8.0_25
************************************************************/
2018-04-09 17:45:15,060 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-09 17:45:15,076 INFO namenode.NameNode: createNameNode [-format]
2018-04-09 17:45:15,518 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-f925aa84-648a-478b-9134-77d1766a86cb
…
Formatting:
2018-04-09 17:45:17,121 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:416)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:601)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:167)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1169)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
2018-04-09 17:45:17,128 ERROR namenode.NameNode: Failed to start namenode.
Exception:
java.io.IOException: Cannot create directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current
2018-04-09 17:45:17,140 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at My-MacBook-Pro.local/10.5.1.16
************************************************************/
My-MacBook-Pro:3.0.0 me$ sbin/start-dfs.sh
Starting namenodes on [localhost]
Syntax errors:
localhost: /Users/me/.bashrc: line 2: syntax error near unexpected token `('
localhost: /Users/me/.bashrc: line 2: `==> Searching for a previously deleted formula (in the last month)...'
Starting datanodes
localhost: /Users/me/.bashrc: line 2: syntax error near unexpected token `('
localhost: /Users/me/.bashrc: line 2: `==> Searching for a previously deleted formula (in the last month)...'
Starting secondary namenodes [My-MacBook-Pro.local]
My-MacBook-Pro.local: Warning: Permanently added the ECDSA host key for IP address '10.5.13.173' to the list of known hosts.
My-MacBook-Pro.local: /Users/me/.bashrc: line 2: syntax error near unexpected token `('
My-MacBook-Pro.local: /Users/me/.bashrc: line 2: `==> Searching for a previously deleted formula (in the last month)...'
Warning:
2018-04-10 08:28:44,359 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
JPS
My-MacBook-Pro:3.0.0 me$ jps
23201
46872 SecondaryNameNode
46730 DataNode
47039 Jps
Primarily I am wondering where to begin troubleshooting. That last warning looks problematic, and I do not get anything at http://localhost:9000/, http://localhost:9010/, or http://localhost:9870/.
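Given the "Cannot create directory" error during the format, a first check might be the permissions on that path (a sketch, assuming the Homebrew paths from the log above):
# Does the directory tree exist, and is it writable by the current user?
ls -ld /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name
# If not, create it, take ownership, and retry the format
sudo mkdir -p /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name
sudo chown -R $(whoami) /usr/local/Cellar/hadoop/hdfs
bin/hdfs namenode -format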
I'm a Hadoop beginner. I tried to install Hadoop 3.0 in my VM, and after configuring Hadoop I ran:
hdfs namenode ‐format
and got this output:
2017-12-26 00:20:56,255 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [‐format]
STARTUP_MSG: version = 3.0.0
STARTUP_MSG: classpath = /opt/hadoop-3.0.0/etc/hadoop:/opt/hadoop-3.0.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-3.0.0/share/hadoop/common/lib/kerby-util-1.0.1.jar: ............. hadoop-yarn-applications-unmanaged-am-launcher-3.0.0.jar:/opt/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-registry-3.0.0.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r c25427ceca461ee979d30edd7a4b0f50718e6533; compiled by 'andrew' on 2017-12-08T19:16Z
STARTUP_MSG: java = 1.8.0_151
************************************************************/
2017-12-26 00:20:56,265 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-12-26 00:20:56,269 INFO namenode.NameNode: createNameNode [‐format]
Usage: hdfs namenode [-backup] |
[-checkpoint] |
[-format [-clusterid cid ] [-force] [-nonInteractive] ] |
[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-rollback] |
[-rollingUpgrade <rollback|started> ] |
[-importCheckpoint] |
[-initializeSharedEdits] |
[-bootstrapStandby [-force] [-nonInteractive] [-skipSharedEditsCheck] ] |
[-recover [ -force] ] |
[-metadataVersion ]
2017-12-26 00:20:56,365 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
I configured hdfs-site.xml as follows:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/dan/hadoop_data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/dan/hadoop_data/datanode</value>
</property>
</configuration>
When I start the namenode service, it fails and the log says:
2017-12-26 00:03:41,331 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: NameNode is not formatted.
2017-12-26 00:03:41,337 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Can anyone tell me how to fix this issue?
Thanks in advance!
Solution 1:
Sometimes this happens. First stop all services, then delete the current directory under your NameNode storage directory (Hadoop keeps its metadata and edit-log files there). After removing it, format the NameNode and start all services again; the steps follow below.
Stop all service :
$HADOOP_HOME/sbin/stop-all.sh
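Then remove the current directory (a sketch, assuming the /home/dan/hadoop_data/namenode path from the question's hdfs-site.xml):
rm -rf /home/dan/hadoop_data/namenode/current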
Once everything is stopped and the current directory is removed, format the NameNode with the following command.
Format name node:
$HADOOP_HOME/bin/hadoop namenode -format
Now start all services again with the following command.
Start all service:
$HADOOP_HOME/sbin/start-all.sh
Solution 2:
Sometimes the NameNode goes into safe mode. You can leave safe mode with the following command.
$HADOOP_HOME/bin/hdfs dfsadmin -safemode leave
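To see which state the NameNode is in before forcing it out, you can first run:
$HADOOP_HOME/bin/hdfs dfsadmin -safemode get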
I was using Hadoop 1.2.1 in pseudo-distributed mode on Ubuntu and everything was working fine, but then I had to restart my system. Now when I run the jps command after start-all.sh, I can see only the TaskTracker and JobTracker running. Could anyone tell me the possible reason for this problem and guide me in resolving it?
2017-03-13 18:41:16,733 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = keerthana-VirtualBox/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.8.0_121
************************************************************/
2017-03-13 18:41:19,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-03-13 18:41:19,628 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2017-03-13 18:41:19,653 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-03-13 18:41:19,653 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-03-13 18:41:21,947 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-03-13 18:41:22,117 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-03-13 18:41:23,564 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /home/keerthana/hadoop/dfs/data, expected: rwxr-xr-x, while actual: rwxrwxr-x
2017-03-13 18:41:23,564 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
2017-03-13 18:41:23,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-03-13 18:41:23,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at keerthana-VirtualBox/127.0.1.1
************************************************************/
As the logs suggest, there is a problem with the dfs.data.dir location "/home/keerthana/hadoop/dfs/data".
Check whether the folder exists and whether it has the proper permissions. If the folder exists, give it 755 access:
chmod -R 755 /home/keerthana/hadoop/dfs/data
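You can verify the result afterwards; the listing should now show the expected rwxr-xr-x:
ls -ld /home/keerthana/hadoop/dfs/data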
I am new to Apache Hadoop and I am following a video course on Udemy.
The course is based on Hadoop 1.2.1; is that too old a version? Is it better to start my study with a course based on a more recent version, or is it OK?
So I have installed Hadoop 1.2.1 on an Ubuntu 12.04 system and configured it in pseudo-distributed mode.
Following the tutorial, I used the following settings in the configuration files:
conf/core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
conf/hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
conf/mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
Then in the Linux shell I do:
ssh localhost
So I am connected through SSH to my local system.
Then I go into the Hadoop bin directory, /home/andrea/hadoop/hadoop-1.2.1/bin/, and here I run this command, which is supposed to format the name node (what exactly does that mean?):
bin/hadoop namenode –format
And this is the output I obtained:
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./hadoop namenode –format
16/01/17 12:55:25 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 12:55:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Then I try to start all the nodes by running this command:
./start-all.sh
and now I obtain:
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Now I try to open the following URLs in the browser:
http://localhost:50070/
which I can't open (page not found),
and:
http://localhost:50030/
this one opens correctly and redirects to this JSP page:
http://localhost:50030/jobtracker.jsp
So, in the shell I run the jps command, which lists all the running Java processes for the user:
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
6247 Jps
5720 DataNode
5872 SecondaryNameNode
6116 TaskTracker
5965 JobTracker
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
As you can see, it seems that the NameNode has not started.
The tutorial that I am following says:
If the NameNode or DataNode is not listed, it might be that the namenode's or datanode's root directory, which is set by the property 'dfs.name.dir', is getting messed up. By default it points to the /tmp directory, which the operating system changes from time to time. Thus, when HDFS comes up after some changes by the OS, it gets confused and the namenode doesn't start.
To solve this problem, the tutorial provides this solution (which didn't work for me).
First, stop all nodes with the stop-all.sh script.
Then explicitly set 'dfs.name.dir' and 'dfs.data.dir'.
So I created a dfs directory in the Hadoop path, and inside it two directories at the same level, data and name (the idea is that these two folders are used by the datanode and namenode daemons).
So I have something like this:
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ tree
.
├── data
└── name
Then I use this configuration for hdfs-site.xml, where I explicitly set the two directories above:
<configuration>
<property>
<name>dfs.data.dir</name>
<value>/home/andrea/hadoop/hadoop-1.2.1/dfs/data/</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
So, after this change, I run the command to format the NameNode again:
hadoop namenode –format
And I obtain this output:
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ hadoop namenode –format
16/01/17 13:14:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 13:14:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$
So I start all the nodes again with start-all.sh, and this is the output:
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ start-all.sh
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Then I run the jps command to see whether all the nodes started correctly, but this is what I get:
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
8041 SecondaryNameNode
8310 TaskTracker
8406 Jps
8139 JobTracker
andrea#andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
The situation has worsened: now I have two nodes that are not started, the NameNode and the DataNode.
What am I missing? How can I solve this issue and start all my nodes?
Would you try turning off iptables once, exporting the Java path, and reformatting?
If you have configured the name directory in hdfs-site.xml like this:
<property>
<name>dfs.name.dir</name>
<value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
</property>
then while formatting the name node you should see a
> successfully formatted /home/andrea/hadoop/hadoop-1.2.1/dfs/name/
message if the name node format is successful. In your logs I cannot see that success message, so there may be permission issues; check the NameNode log, as sketched below.
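One way to dig further (a sketch; the .log file name is assumed to parallel the .out files shown in the start-all.sh output above):
tail -n 50 /home/andrea/hadoop/hadoop-1.2.1/logs/hadoop-andrea-namenode-andrea-virtual-machine.log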
If it still didn't start, try using another command:
hadoop-daemon.sh start namenode
Hope it works...
I edited mapred-site.xml, core-site.xml, hadoop-env.sh, hdfs-site.xml, masters, and slaves.
I have 1 DataNode and 2 NameNodes. Both of them started successfully and I am able to see them in the browser.
I ran start-mapred.sh; it started the JobTracker and TaskTracker on the NameNode but was unable to start the TaskTracker on the DataNode.
I started the TaskTracker manually, and the following is the output:
->hadoop tasktracker
Warning: $HADOOP_HOME is deprecated.
13/10/17 03:21:55 INFO mapred.TaskTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting TaskTracker
STARTUP_MSG: host = tintin/10.193.184.157
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
13/10/17 03:21:55 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/10/17 03:21:55 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
13/10/17 03:21:55 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
13/10/17 03:21:55 INFO impl.MetricsSystemImpl: TaskTracker metrics system started
13/10/17 03:21:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/10/17 03:21:55 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
13/10/17 03:21:55 WARN impl.MetricsSystemImpl: Source name ugi already exists!
13/10/17 03:21:55 ERROR mapred.TaskTracker: Can not start task tracker because java.lang.IllegalArgumentException: Does not contain a valid host:port authority:
10.193.184.132:54311
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:149)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
at org.apache.hadoop.mapred.JobTracker.getAddress(JobTracker.java:2312)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1532)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3906)
13/10/17 03:21:55 INFO mapred.TaskTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at tintin/10.193.184.157
************************************************************/
Though it's quite late to answer this: does your mapred-site.xml contain the mapred.job.tracker property?
My job tracker runs on 8021, so the configuration is as follows:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>http://localhost:8021</value>
</property>
</configuration>
I faced such an issue due to this missing property.
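A quick way to check that the property is present (a sketch, assuming the Hadoop 1.x conf directory layout):
grep -A1 "mapred.job.tracker" $HADOOP_HOME/conf/mapred-site.xml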