Run Hadoop Locally on Mac with Java 1.8 - macos

Trying to run single-node Hadoop on macOS Sierra.
Here is my configuration:
JAVA Version https://wiki.apache.org/hadoop/HadoopJavaVersions
my-MacBook-Pro:hadoop me$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
HDFS-SITE
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
CORE-SITE
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
MAPRED-SITE
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
  </property>
</configuration>
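Two notes on the settings above, worth keeping in mind: in Hadoop 2.x/3.x, fs.default.name still works but is a deprecated alias for fs.defaultFS, and mapred.job.tracker is a leftover from the MapReduce 1 JobTracker, which no longer exists in Hadoop 3 (YARN replaced it), so nothing will be listening on port 9010. A quick way to see how a key actually resolves (a sketch, run from the Hadoop install directory):

# fs.default.name is read as an alias of fs.defaultFS; this prints the
# effective value, which should be hdfs://localhost:9000 here.
bin/hdfs getconf -confKey fs.defaultFS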
HADOOP-ENV
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home
$ bin/hadoop # displays the usage documentation
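Before hardcoding JAVA_HOME, it may be worth confirming the JDK path with the standard macOS helper (a sketch, not from the original setup):

# /usr/libexec/java_home prints the home directory of the requested JDK.
/usr/libexec/java_home -v 1.8
# e.g. /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home

# hadoop-env.sh can also set it dynamically instead of hardcoding the path:
export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)"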
Here are some extra settings:
I did not enable this next one, though some instructions recommended it:
#export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
Two other settings are already active by default in hadoop-env.sh:
# Some parts of the shell code may do special things dependent upon
# the operating system. We have to set this here. See the next
# section as to why....
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
# Under certain conditions, Java on OS X will throw SCDynamicStore errors
# in the system logs.
# See HADOOP-8719 for more information. If one needs Kerberos
# support on OS X, one will want to change/remove this extra bit.
case ${HADOOP_OS_TYPE} in
Darwin*)
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
;;
esac
Here is where I formatted the namenode:
NAMENODE FORMATTED
My-MacBook-Pro:3.0.0 me$ bin/hdfs namenode -format
WARNING: /usr/local/Cellar/hadoop/3.0.0/libexec/logs does not exist. Creating.
Starting namenode:
2018-04-09 17:45:15,037 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = My-MacBook-Pro.local/10.5.1.16
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.0.0
STARTUP_MSG: classpath = /usr/ …
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r c25427ceca461ee979d30edd7a4b0f50718e6533; compiled by 'andrew' on 2017-12-08T19:16Z
STARTUP_MSG: java = 1.8.0_25
************************************************************/
2018-04-09 17:45:15,060 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-09 17:45:15,076 INFO namenode.NameNode: createNameNode [-format]
2018-04-09 17:45:15,518 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-f925aa84-648a-478b-9134-77d1766a86cb
…
Formatting:
2018-04-09 17:45:17,121 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:416)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:601)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:167)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1169)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
2018-04-09 17:45:17,128 ERROR namenode.NameNode: Failed to start namenode.
Exception:
java.io.IOException: Cannot create directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current
2018-04-09 17:45:17,140 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at My-MacBook-Pro.local/10.5.1.16
************************************************************/
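The "Cannot create directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current" failure above is an ordinary filesystem permissions problem: the user running the format cannot create directories under /usr/local/Cellar/hadoop/hdfs. A minimal check-and-fix sketch, using the path from the log (adjust it to wherever your dfs.namenode.name.dir or hadoop.tmp.dir actually points):

# Does the storage directory exist, and who owns it?
ls -ld /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name 2>/dev/null || echo "missing"

# Create the tree, take ownership, and retry the format.
sudo mkdir -p /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name
sudo chown -R "$(whoami)" /usr/local/Cellar/hadoop/hdfs
bin/hdfs namenode -format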
My-MacBook-Pro:3.0.0 me$ sbin/start-dfs.sh
Starting namenodes on [localhost]
Syntax errors:
localhost: /Users/me/.bashrc: line 2: syntax error near unexpected token `('
localhost: /Users/me/.bashrc: line 2: `==> Searching for a previously deleted formula (in the last month)...'
Starting datanodes
localhost: /Users/me/.bashrc: line 2: syntax error near unexpected token `('
localhost: /Users/me/.bashrc: line 2: `==> Searching for a previously deleted formula (in the last month)...'
Starting secondary namenodes [My-MacBook-Pro.local]
My-MacBook-Pro.local: Warning: Permanently added the ECDSA host key for IP address '10.5.13.173' to the list of known hosts.
My-MacBook-Pro.local: /Users/me/.bashrc: line 2: syntax error near unexpected token `('
My-MacBook-Pro.local: /Users/me/.bashrc: line 2: `==> Searching for a previously deleted formula (in the last month)...'
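Those .bashrc errors are unrelated to Hadoop itself: line 2 of ~/.bashrc literally contains stray Homebrew console output ("==> Searching for a previously deleted formula ..."), and the ssh-based start scripts source the file for every daemon they launch, so the message repeats. A sketch of the confirm-and-remove steps:

# Show line 2 of ~/.bashrc; it should be shell code, not brew output.
sed -n '2p' ~/.bashrc

# Once confirmed, delete that line in place (BSD sed on macOS needs -i '').
sed -i '' '2d' ~/.bashrc

# Verify the file now parses cleanly.
bash -n ~/.bashrc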
Warning:
2018-04-10 08:28:44,359 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
JPS
My-MacBook-Pro:3.0.0 me$ jps
23201
46872 SecondaryNameNode
46730 DataNode
47039 Jps
Primarily I am wondering where to begin troubleshooting. That last warning looks problematic, and I do not get anything at http://localhost:9000/, http://localhost:9010/, or http://localhost:9870/
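For orientation on those ports: 9000 is the HDFS RPC endpoint (it speaks a binary protocol, so a browser gets nothing back), 9010 is the obsolete JobTracker setting from mapred-site.xml, and in Hadoop 3.x the NameNode web UI lives on 9870. Since jps shows no NameNode at all, the failed format above is the first thing to fix; two quick checks (a sketch, with the log directory taken from the Homebrew output earlier):

# Is anything actually listening on the web UI or RPC ports?
lsof -nP -iTCP:9870 -sTCP:LISTEN
lsof -nP -iTCP:9000 -sTCP:LISTEN

# The NameNode log states exactly why it is not running.
tail -n 50 /usr/local/Cellar/hadoop/3.0.0/libexec/logs/*namenode*.log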

Related

The NameNode process isn't present when executing jps

I'm new to the Hadoop ecosystem.
I installed Hadoop 3.3.0 in pseudo-distributed mode.
The applications UI at http://localhost:8088/ works, but I can't view the NameNode UI at http://localhost:9870/ (This site can't be reached).
$ jps
24553 Jps
20537 NodeManager
20429 ResourceManager
and
$ hadoop version
Hadoop 3.3.0
Source code repository https://github.com/apache/hadoop.git -r aa96f1871bfd858f9bac59cf2a81ec470da649af
Compiled by brahma on 2020-07-06T18:21Z
Compiled with protoc 3.7.1
From source with checksum 5dc29b802d6ccd77b262ef9d04d19c4
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.3.0.jar
I tried to restart the processes, but in vain:
$ stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as mhannani in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [HP]
Stopping datanodes
Stopping secondary namenodes [HP]
2021-01-06 16:42:07,540 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping nodemanagers
Stopping resourcemanager
Then I formatted the NameNode:
$ hdfs namenode -format
2021-01-06 16:44:14,683 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = HP/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.3.0
and then started everything again:
$ start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as mhannani in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [HP]
Starting datanodes
Starting secondary namenodes [HP]
HP: ERROR: Cannot set priority of secondarynamenode process 29847
2021-01-06 16:45:38,266 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
Please, how can I fix that, in order to access my HDFS file system from the browser at http://localhost:9870/ (or 50075), as on earlier versions of Hadoop?
Any help or advice would be appreciated. Thanks, folks.
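Before the configuration fix below, it helps to know that the "Cannot set priority of secondarynamenode process" message comes from Hadoop's daemon launcher and usually just means the process died immediately after starting; the real reason is written to the daemon's log. A sketch of where to look, assuming the /usr/local/hadoop install shown in the hadoop version output:

# The launcher fails because the daemon already exited; read its log.
tail -n 50 /usr/local/hadoop/logs/hadoop-*-secondarynamenode-*.log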
The issue was that the NameNode and DataNode paths on my local file system were not set correctly, in $HADOOP_HOME/etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file://AbsolutePATH/TO/WHERE/THE/namenode/Should/be/stored</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>file://The/same/for/dataNode</value>
  </property>
</configuration>
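One caveat on the property names used above: dfs.name.dir and dfs.data.dir are the old Hadoop 1.x names, kept as deprecated aliases; in 3.x the canonical keys are dfs.namenode.name.dir and dfs.datanode.data.dir, and the values must be absolute file: URIs the daemon user can write to. A quick way to confirm what actually resolved (a sketch):

# Both commands should print the absolute paths you configured.
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir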

Why can't I start the NameNode in this Hadoop 1.2.1 installation?

I am new to Apache Hadoop and I am following a video course on Udemy.
The course is based on Hadoop 1.2.1; is that too old a version? Is it better to start my study with another course based on a more recent version, or is it OK?
So I have installed Hadoop 1.2.1 on an Ubuntu 12.04 system and configured it in pseudo-distributed mode.
Following the tutorial, I set it up using the following settings in the following configuration files:
conf/core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
conf/hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
conf/mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
Then in the Linux shell I do:
ssh localhost
so I am connected through SSH to my local system.
Then I go into the Hadoop bin directory, /home/andrea/hadoop/hadoop-1.2.1/bin/, and there I run the command that formats the name node (what exactly does that mean?):
bin/hadoop namenode –format
And this is the output I obtained:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./hadoop namenode –format
16/01/17 12:55:25 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 12:55:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Then I try to start all the nodes with this command:
./start-all.sh
and now I obtain:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Now I try to open the following URLs in the browser:
http://localhost:50070/
and I can't open it (page not found),
while:
http://localhost:50030/
opens correctly and redirects to this JSP page:
http://localhost:50030/jobtracker.jsp
So, in the shell I run the jps command, which lists all the running Java processes for the user:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
6247 Jps
5720 DataNode
5872 SecondaryNameNode
6116 TaskTracker
5965 JobTracker
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
As you can see, it seems that the NameNode has not started.
The tutorial that I am following says:
If NameNode or DataNode is not listed, it might be that the
namenode's or datanode's root directory, which is set by the property
'dfs.name.dir', has gotten messed up. By default it points to the /tmp
directory, which the operating system changes from time to time. Thus,
when HDFS comes back up after some changes by the OS, it gets confused
and the namenode doesn't start.
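For concreteness, in Hadoop 1.x dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name, and hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, which is exactly the kind of location that gets wiped between reboots. A sketch of checking whether that default directory is still intact:

# With no explicit dfs.name.dir, the NameNode metadata would live here.
ls -ld /tmp/hadoop-$(whoami)/dfs/name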
To solve this problem, the tutorial provides this solution (which didn't work for me).
First stop all nodes with the stop-all.sh script.
Then explicitly set 'dfs.name.dir' and 'dfs.data.dir'.
So I created a dfs directory in the Hadoop path, and inside it two directories at the same level, data and name (the idea is to make two folders to be used by the datanode daemon and the namenode daemon).
So I have something like this:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ tree
.
├── data
└── name
Then I use this configuration for hdfs-site.xml, where I explicitly set those two directories:
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/andrea/hadoop/hadoop-1.2.1/dfs/data/</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
So, after this change, I run the command to format the NameNode again:
hadoop namenode –format
And I obtain this output:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ hadoop namenode –format
16/01/17 13:14:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 13:14:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$
So I start all the nodes again with start-all.sh, and this is the output I obtained:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ start-all.sh
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Then I run the jps command to see if all the nodes started correctly, but this is what I obtain:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
8041 SecondaryNameNode
8310 TaskTracker
8406 Jps
8139 JobTracker
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
The situation has worsened, because now there are two daemons that are not started: the NameNode and the DataNode.
What am I missing? How can I solve this issue and start all my nodes?
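One thing worth ruling out before touching the configuration: in both format attempts the startup banner logs args = [–format] and then prints the Usage line, which is what the NameNode does when it sees an unknown option. The dash in '–format' is an en-dash (a common artifact of copying commands from slides or PDFs), not the ASCII hyphen the parser expects, so the format never actually ran. A sketch of the corrected run and what success looks like:

# Re-type the flag by hand with a plain ASCII hyphen.
./hadoop namenode -format
# A successful format logs a line like:
#   INFO common.Storage: Storage directory /home/andrea/hadoop/hadoop-1.2.1/dfs/name has been successfully formatted.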
Would you try turning off iptables once and reformatting, along with exporting the Java path?
If you have configured hdfs-site.xml with
<property>
  <name>dfs.name.dir</name>
  <value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
</property>
then while formatting the name node you should see a
> successfully formatted /home/andrea/hadoop/hadoop-1.2.1/dfs/name/
message if the name node format is successful. From your logs I am not able to see that success message, so check for permission issues.
If it still didn't start, try starting it directly with another command:
hadoop-daemon.sh start namenode
Hope it works...

Hadoop unable to format namenode - java.lang.NullPointerException

A few details about my installation:
Ubuntu 14.04 LTS 64 bit
Oracle Java JDK 1.8.0_40
Hadoop 2.6.0
I have been following the instructions from http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php to install Hadoop. Everything is working fine up to the point where I have to format the namenode.
When I run $ hadoop namenode -format I get the following error:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/04/12 19:01:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = iulian-ThinkPad-T530/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.0
STARTUP_MSG: classpath = /usr/local/hadoop-2.6.0/etc/hadoop:/usr/ …
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG: java = 1.8.0_40
************************************************************/
15/04/12 19:01:02 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/04/12 19:01:02 INFO namenode.NameNode: createNameNode [-format]
15/04/12 19:01:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-72fe7fe8-d67b-481e-8288-9f835727d80a
15/04/12 19:01:03 FATAL namenode.NameNode: Failed to start namenode.
java.lang.NullPointerException
at java.io.File.<init>(File.java:277)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.setStorageDirectories(NNStorage.java:305)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.<init>(NNStorage.java:166)
at org.apache.hadoop.hdfs.server.namenode.FSImage.<init>(FSImage.java:127)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/04/12 19:01:03 INFO util.ExitUtil: Exiting with status 1
15/04/12 19:01:03 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at iulian-ThinkPad-T530/127.0.1.1
************************************************************/
The following have been added to ~/.bashrc:
#HADOOP VARIABLES
export JAVA_HOME=/usr/local/jdk1.8.0_40
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
This happens if you are missing the dfs.namenode.name.dir property in hdfs-site.xml.
Please ensure that it is present, and that the directory provided as its 'value' is valid.
Example:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/namenode</value>
</property>
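And a sketch of preparing that directory before formatting, so permission problems are ruled out along with the missing property (the path matches the example above):

# Create the storage directory, give it to the Hadoop user, then format.
sudo mkdir -p /data/namenode
sudo chown -R "$USER" /data/namenode
hdfs namenode -format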
The format command clearly says that there is an error in the namenode storage directory. Check whether the namenode directory exists and has proper permissions assigned. Next, check whether hdfs-site.xml has a proper value set for the dfs.namenode.name.dir property.

Hadoop 2.2 - datanode doesn't start up

I had Hadoop 2.4 this morning (see my previous 2 questions). Now I have removed it and installed 2.2, as I had issues with 2.4 and also because I think 2.2 is the latest stable release. I followed the tutorial here:
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
I am pretty sure I did everything right but I am facing similar issues again.
When I run jps it is obvious that the data node is not starting up.
What am I doing wrong again?
hduser@test02:~$ start-dfs.sh
14/06/06 18:12:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-test02.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-test02.out
localhost: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
localhost: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-test02.out
0.0.0.0: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
0.0.0.0: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/06 18:13:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@test02:~$ jps
2201 Jps
hduser@test02:~$ jps
2213 Jps
hduser@test02:~$ start-yarn
start-yarn: command not found
hduser@test02:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-test02.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-test02.out
hduser@test02:~$ jps
2498 NodeManager
2264 ResourceManager
2766 Jps
hduser@test02:~$ jps
2784 Jps
2498 NodeManager
2264 ResourceManager
hduser@test02:~$ jps
2498 NodeManager
2264 ResourceManager
2796 Jps
hduser@test02:~$
My problem was that I took these instructions from the tutorial too literally:
Paste the following between <configuration>:
fs.default.name
hdfs://localhost:9000
I suspected this was wrong while doing it, but I still did it.
It seemed incorrect, since the core-site.xml file is in XML format.
So actually, it needs to look like this:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
Changing it to this fixed my problem.
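Since a single stray character in these files produces confusing startup errors rather than a clean parse failure, it may be worth validating the XML after every edit; a sketch using xmllint (available on most distributions) and the /usr/local/hadoop install path visible in the logs above:

# Exits silently on well-formed XML, prints the parse error otherwise.
xmllint --noout /usr/local/hadoop/etc/hadoop/core-site.xml && echo OK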
I had similar issues with the DataNode not starting up. What I did was reformat the namenode and then restart the cluster. After that, running jps confirmed that the data node had started up.
This can be caused by placing the HDFS directory in your home directory (on a Linux box), since the OS affects those folders on startup and shutdown (I'm not exactly sure how, but to prevent this problem in the future, move the HDFS directory out of your home directory).
Please let me know if this works.

Error running hadoop start-all.sh

I want to run Hadoop on my Arch Linux machine, but I get this error. How can I fix it?
[]# . /usr/lib/hadoop-2.2.0/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/lib/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2013-12-10 23:21:42,602 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable]
Error: Please specify one of --hosts or --hostnames options and not both.
cat: /etc/hadoop/slaves: No such file or directory
Starting secondary namenodes [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/lib/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2013-12-10 23:21:44,192 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0.0.0.0]
Error: Please specify one of --hosts or --hostnames options and not both.
starting yarn daemons
starting resourcemanager, logging to /usr/lib/hadoop-2.2.0/logs/yarn-vahid-resourcemanager-kharazi.out
2013-12-10 23:21:47,901 INFO [main] resourcemanager.ResourceManager (StringUtils.java:startupShutdownMessage(601)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting ResourceManager
STARTUP_MSG: host = kharazi/192.168.1.3
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /etc/hadoop:/usr/ …
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_45
************************************************************/
cat: /etc/hadoop/slaves: No such file or directory
cat: /etc/hadoop/slaves: No such file or directory
You need to fill in the /etc/hadoop/slaves file with the hostnames of all your slave nodes, one hostname per line. Example:
host1
host2
host3
Make sure you can ssh into these nodes without a password.
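For a single-node setup the file only needs localhost, and the passwordless ssh can be set up with a standard keypair; a sketch (the slaves path matches the error message above, though many installs keep it under the Hadoop etc/hadoop directory instead, and writing to /etc may need root):

# One slave per line; localhost is enough for pseudo-distributed mode.
echo localhost > /etc/hadoop/slaves

# Passwordless ssh to every host listed in slaves.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost true   # should complete without a password prompt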
