Hadoop / Yarn (v0.23.3) Pseudo-Distributed Mode setup :: No job node

I just set up Hadoop/Yarn 2.x (specifically, v0.23.3) in Pseudo-Distributed mode.
I followed the instructions of a few blogs & websites which, more or less, provide the
same prescription for setting it up. I also followed the 3rd edition of O'Reilly's
Hadoop book (which, ironically, was the least helpful).
THE PROBLEM:
After running "start-dfs.sh" and then "start-yarn.sh", all of the daemons
do start (as indicated by jps(1)), but the Resource Manager web portal
(here: http://localhost:8088/cluster/nodes) reports 0 (zero) job nodes in the
cluster. So while the example/test Hadoop job I submit does get
scheduled, it pends forever because, I assume, the configuration doesn't see a
node to run it on.
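(As a sanity check, assuming this 0.23 build ships the "yarn node" subcommand, the node list the portal shows can also be queried from the shell:)
hadoop$ yarn node -list
# An empty list here would match what the :8088 portal shows (no registered NodeManagers).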
Below are the steps I performed, including the resultant configuration files.
Hopefully the community can help me out... (And thank you in advance.)
THE CONFIGURATION:
The following environment variables are set in both my and hadoop's UNIX account profiles: ~/.profile:
export HADOOP_HOME=/home/myself/APPS.d/APACHE_HADOOP.d/latest
# Note: /home/myself/APPS.d/APACHE_HADOOP.d/latest -> hadoop-0.23.3
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_INSTALL=${HADOOP_HOME}
export HADOOP_CLASSPATH=${HADOOP_HOME}/lib
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf
export JAVA_HOME=/usr/lib/jvm/jre
hadoop$ java -version
java version "1.7.0_06-icedtea<br>
OpenJDK Runtime Environment (fedora-2.3.1.fc17.2-x86_64)<br>
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)<br>
# Although the above shows OpenJDK, the same problem happens with Sun's JRE/JDK.
The NAMENODE & DATANODE directories, also specified in etc/hadoop/conf/hdfs-site.xml:
/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/DATANODE.d/
/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/NAMENODE.d/
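(For reference, a minimal sketch of how those two directories could be created, assuming the hadoop user owns the tree:)
hadoop$ mkdir -p /home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/NAMENODE.d
hadoop$ mkdir -p /home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/DATANODE.d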
Next, the various XML configuration files (again, YARN/MRv2/v0.23.3 here):
hadoop$ pwd; ls -l
/home/myself/APPS.d/APACHE_HADOOP.d/latest/etc/hadoop/conf
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 core-site.xml -> ../core-site.xml
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 hdfs-site.xml -> ../hdfs-site.xml
lrwxrwxrwx 1 hadoop hadoop 18 Sep 20 13:14 httpfs-site.xml -> ../httpfs-site.xml
lrwxrwxrwx 1 hadoop hadoop 18 Sep 20 13:14 mapred-site.xml -> ../mapred-site.xml
-rw-rw-r-- 1 hadoop hadoop 10 Sep 20 15:36 slaves
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 yarn-site.xml -> ../yarn-site.xml
core-site.xml
<?xml version="1.0"?>
<!-- core-site.xml -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost/</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>
<!-- Same problem whether this (legacy) stanza is included or not. -->
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
hdfs-site.xml
<!-- hdfs-site.xml -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/NAMENODE.d</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/DATANODE.d</value>
</property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<!-- yarn-site.xml -->
<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:8032</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/TEMP.d</value>
</property>
</configuration>
etc/hadoop/conf/slaves
localhost
# Community/friends, is this entry correct/needed for my pseudo-dist mode?
Miscellaneous wrap-up notes:
(1) As you may have gleaned from above, all files/directories are owned
by the 'hadoop' UNIX user; there is a corresponding hadoop:hadoop UNIX
user and group.
(2) The following command was run after the NAMENODE & DATANODE directories
(listed above) were created, with their paths entered into
hdfs-site.xml:
hadoop$ hadoop namenode -format
(3) Next, I ran "start-dfs.sh", then "start-yarn.sh".
Here is jps(1) output:
hadoop@e6510$ jps
21979 DataNode
22253 ResourceManager
22384 NodeManager
22156 SecondaryNameNode
21829 NameNode
22742 Jps
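Since the NodeManager process is clearly up but never shows in the RM's node list, one reasonable check (a sketch, assuming logs land in the default ${HADOOP_HOME}/logs location and follow the usual yarn-<user>-nodemanager-<host>.log naming) is to look at its log for registration errors:
hadoop$ tail -n 100 ${HADOOP_HOME}/logs/yarn-hadoop-nodemanager-*.log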
Thank you!

After much toil on this problem without success (and trust me, I tried it all), I installed
Hadoop using a different approach. Whereas above I downloaded a gzip/tar ball
of the Hadoop distribution (again v0.23.3) from one of the download mirrors, this
time I used the Cloudera CDH distribution of RPM packages, which I installed via
their YUM repos. In the hope that this will help someone, here are the detailed steps.
Step-1:
For Hadoop 0.20.x (MapReduce version 1):
# rpm -Uvh http://archive.cloudera.com/redhat/6/x86_64/cdh/cdh3-repository-1.0-1.noarch.rpm
# rpm --import http://archive.cloudera.com/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
# yum install hadoop-0.20-conf-pseudo
-or-
For Hadoop 0.23.x (MapReduce version 2):
# rpm -Uvh http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.noarch.rpm
# rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
# yum install hadoop-conf-pseudo
In both cases above, installing that "pseudo" package (whose name stands for "pseudo-distributed
Hadoop" mode) will, on its own, conveniently trigger the installation of all the other necessary packages you'll need (via dependency resolution).
Step-2:
Install Sun/Oracle's Java JRE (if you haven't already done so). You can
install it via the RPM that they provide, or via the gzip/tar ball portable
version. It doesn't matter which, as long as you set and export the "JAVA_HOME"
environment variable appropriately and ensure ${JAVA_HOME}/bin/java is in your path.
# echo $JAVA_HOME; which java
/home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07
/home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07/bin/java
Note: I actually create a symlink called "latest" and point/re-point it to the Java
version-specific directory whenever I update Java. I was explicit above for
the reader's understanding.
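(For illustration only, the symlink trick is simply something like this, with my own directory names assumed:)
myself$ cd /home/myself/APPS.d/JAVA-JRE.d
myself$ ln -sfn jdk1.7.0_07 latest
myself$ export JAVA_HOME=/home/myself/APPS.d/JAVA-JRE.d/latest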
Step-3: Format hdfs as the "hdfs" Unix user (created during "yum install" above).
# sudo su hdfs -c "hadoop namenode -format"
Step-4:
Manually start the hadoop daemons.
for file in /etc/init.d/hadoop*
do
    ${file} start
done
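To confirm they came up, the same init scripts can be asked for status (a sketch; the exact service names depend on which CDH packages were pulled in):
for file in /etc/init.d/hadoop*
do
    ${file} status
done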
Step-5:
Check to see if things are working. The following is for MapReduce v1
(It's not that much different for MapReduce v2 at this superficial level).
root# jps
23104 DataNode
23469 TaskTracker
23361 SecondaryNameNode
23187 JobTracker
23267 NameNode
24754 Jps
# Do the next commands as yourself (not as "root").
myself$ hadoop fs -mkdir /foo
myself$ hadoop fs -rmr /foo
myself$ hadoop jar /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u5-examples.jar pi 2 100000
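For the MRv2/YARN flavor of the packages, the bundled examples jar lives under a different path; something along these lines (the exact path and jar name are assumptions, so check your install):
myself$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 100000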
I hope this helped!

Noel,
The other day I followed the steps in this tutorial http://www.thecloudavenue.com/search?q=0.23 and managed to set up a small cluster of 3 CentOS 6.3 machines.

Related

Java Hadoop installation: Error: Could not find or load main class

I want to install Java and Hadoop on Windows 10. So far I have Java jdk1.8.0_211; when I try
c:\>java -version
it returns:
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
and when I try
c:\>javac -version
it returns:
javac 1.8.0_211
During hadoop-2.8.0 installation I try:
C:\hadoop\hadoop-2.8.0>hdfs namenode -format
which returns:
Error: Could not find or load main class
The same appears when I try:
c:\>hadoop --version
Again it returns:
Error: Could not find or load main class
When I try:
C:\hadoop\hadoop-2.8.0\sbin>start-all.cmd
it returns:
This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
starting yarn daemons
How can I fix this main class problem practically?
Thank you for your time and effort.
What I already did:
I have no idea what the main class is. I googled and read almost everywhere. I found a lot of theoretical material about main classes, classpaths etc., which gives an idea but is essentially hard for me to understand. As I am only a Hadoop user, not a data engineer or Java programmer, I would appreciate a practical solution.
I am only trying to follow installation guides. As there are many guides that do not do exactly what I want (maybe commercial interests?!), I have set up my own installation plan:
(Have I taken the right steps?)
Hadoop installation plan
Download
Download Java
jdk-8u211-windows-x64.exe (jdk 64x version!)
from Oracle
https://www.oracle.com/technetwork/pt/java/javase/downloads/jdk8-downloads-2133151.html?printOnly=1
Download
Hadoop 2.8.0 (binary version!)
hadoop-2.8.0.tar.gz
from Apache Hadoop
https://hadoop.apache.org/releases.html
Install
double click
jdk-8u211-windows-x64.exe
Extract
With WinRAR as admin
hadoop-2.8.0.tar.gz
To Locations
C:\Java\ (for jdk1.8.0_211)
>>> C:\Java\jdk1.8.0_211
C:\Java\jdk1.8.0_211\ (for jre1.8.0_211)
>>> C:\Java\jdk1.8.0_211\jre1.8.0_211
C:\hadoop\ (for hadoop-2.8.0)
>>> C:\hadoop\hadoop-2.8.0
Add folders
C:\hadoop\hadoop-2.8.0\data\datanode
C:\hadoop\hadoop-2.8.0\data\namenode
User variables for admin
1 New and add
JAVA_HOME= C:\Java\jdk1.8.0_211
HADOOP_HOME=”C:\hadoop\hadoop-2.8.0\
2 Edit path, add
C:\Java\jdk1.8.0_211\bin
C:\hadoop\hadoop-2.8.0\bin
C:\hadoop\hadoop-2.8.0\sbin
C:\hadoop\hadoop-2.8.0\share\hadoop\common\*
C:\hadoop\hadoop-2.8.0\share\hadoop\hdfs
C:\hadoop\hadoop-2.8.0\share\hadoop\hdfs\lib\*
C:\hadoop\hadoop-2.8.0\share\hadoop\hdfs\*
C:\hadoop\hadoop-2.8.0\share\hadoop\yarn\lib\*
C:\hadoop\hadoop-2.8.0\share\hadoop\yarn\*
C:\hadoop\hadoop-2.8.0\share\hadoop\mapreduce\lib\*
C:\hadoop\hadoop-2.8.0\share\hadoop\mapreduce\*
C:\hadoop\hadoop-2.8.0\share\hadoop\common\lib\*
System variables
1 New and add
JAVA_HOME=C:\Java\jdk1.8.0_211
HADOOP_HOME=C:\hadoop\hadoop-2.8.0
2 Edit path,
New,
Add
%JAVA_HOME%
%JAVA_HOME%\bin
%HADOOP_HOME%
%HADOOP_HOME%\bin
%HADOOP_HOME%\sbin
%HADOOP_HOME%\etc\hadoop,
%HADOOP_HOME%\share\hadoop\common\* ,
%HADOOP_HOME%\share\hadoop\common\lib\* ,
%HADOOP_HOME%\share\hadoop\hdfs\* ,
%HADOOP_HOME%\share\hadoop\hdfs\lib\* ,
%HADOOP_HOME%\share\hadoop\mapreduce\* ,
%HADOOP_HOME%\share\hadoop\mapreduce\lib\* ,
%HADOOP_HOME%\share\hadoop\yarn\* ,
%HADOOP_HOME%\share\hadoop\yarn\lib\ *
Make mapred-site.xml file
Copy
C:\hadoop\hadoop-2.8.0\etc\hadoop\mapred-site.xml.template
rename to
C:\hadoop\hadoop-2.8.0\etc\hadoop\mapred-site.xml
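From a Command Prompt this could be done with something like (paths as above):
copy C:\hadoop\hadoop-2.8.0\etc\hadoop\mapred-site.xml.template C:\hadoop\hadoop-2.8.0\etc\hadoop\mapred-site.xml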
Configure files by entering code between
<configuration> and </configuration>
in all of those 4 files:
C:\hadoop\hadoop-2.8.0\etc\hadoop\core-site.xml
C:\hadoop\hadoop-2.8.0\etc\hadoop\hdfs-site.xml
C:\hadoop\hadoop-2.8.0\etc\hadoop\mapred-site.xml
C:\hadoop\hadoop-2.8.0\etc\hadoop\yarn-site.xml
Code to enter:
1 Code for core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
Alternatively:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
2 Code for hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>C:\hadoop\hadoop-2.8.0\data\namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>C:\hadoop\hadoop-2.8.0\data\datanode</value>
</property>
3 Code for mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
4 Code for yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
Config hadoop-env.cmd:
C:\hadoop\hadoop-2.8.0\etc\hadoop\hadoop-env.cmd
change
set JAVA_HOME=%JAVA_HOME%
to:
#rem set JAVA_HOME=%JAVA_HOME%
set JAVA_HOME= C:\Java\jdk1.8.0_211
hdfs namenode -format in Cmd line:
cd C:\hadoop\hadoop-2.8.0
hdfs namenode -format
returns:
Error: Could not find or load main class
testing in Cmd line:
java -version
javac -version
echo %JAVA_HOME%
echo %HADOOP_HOME%
are all ok, but
hadoop --version
returns:
Error: Could not find or load main class
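One practical check (assuming the hadoop.cmd wrapper itself launches) is to ask the scripts what classpath they compute; if this fails with the same message, the problem is likely in the environment variables rather than in the XML files:
C:\hadoop\hadoop-2.8.0>hadoop classpath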
starting hadoop in cmd line:
cd C:\hadoop\hadoop-2.8.0\sbin
start-all.cmd
it returns:
This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
starting yarn daemons
start-dfs.cmd
it returns:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
start-yarn.cmd
it returns:
starting yarn daemons
Browser
http://localhost:8088 appears
http://localhost:50070 does not appear (not OK)
stop hadoop in cmd line:
cd C:\hadoop\hadoop-2.8.0\sbin
stop-all.cmd
stop-dfs.cmd
stop-yarn.cmd

HBase on docker NotServingRegionException because of hostname aliases

I am building a fully distributed HBase cluster with unmanaged ZooKeeper.
I pretty much used this example and installed HBase on top of it: https://github.com/kiwenlau/hadoop-cluster-docker
Hadoop and HDFS work fine, but I get this exception with HBase:
2016-09-05 06:27:12,268 INFO [hadoop-master:16000.activeMasterManager] zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=hadoop-slave2,16020,1473052276351, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on hadoop-slave2.hadoopnet,16020,1473056813966
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2910)
This is blocking because any command I enter in the hbase shell returns the following error:
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
The containers are run using --net=hadoopnet,
which is a network created as such:
docker network create --driver=bridge hadoopnet
The hbase webui is showing this:
Region Servers
ServerName Start time Version Requests Per Second Num. Regions
hadoop-slave1,16020,1473056814064 Mon Sep 05 06:26:54 UTC 2016 1.2.2 0 0
hadoop-slave1.hadoopnet,16020,1473056814064 Mon Sep 05 06:26:54 UTC 2016 Unknown 0 0
hadoop-slave2,16020,1473056813966 Mon Sep 05 06:26:53 UTC 2016 1.2.2 0 0
hadoop-slave2.hadoopnet,16020,1473056813966 Mon Sep 05 06:26:53 UTC 2016 Unknown 0 0
Total:4 2 nodes with inconsistent version 0 0
I should have only 2 regionservers, but two strange entries, hadoop-slave1.hadoopnet and hadoop-slave2.hadoopnet, are added to the list.
When I look at zk using:
/usr/local/hbase/bin/hbase zkcli -server zk:2181 ls /hbase/rs
I only see my 2 regionservers: hadoop-slave1,16020,1473056814064 and hadoop-slave2,16020,1473056813966
Looking at the zookeeper.MetaTableLocator: Failed verification error I see that hadoop-slave2,16020,1473052276351 and hadoop-slave2.hadoopnet,16020,1473056813966 get mixed up.
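One way to see how each container resolves itself and its peers (a diagnostic sketch, assuming the usual tools are present inside the images) is to compare the plain hostname with what the Docker network's DNS returns:
# run inside one of the region-server containers
hostname
hostname -f
getent hosts hadoop-slave2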
Here is my config on all servers:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-master:9000/hbase</value>
<description>The directory shared by region servers. Should be fully-qualified to include the filesystem to use. E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR</description>
</property>
<property>
<name>hbase.master</name>
<value>hdfs://hadoop-master:60000</value>
<description>The host and port that the HBase master runs at.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)</description>
</property>
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
<description>The UI interface of HBase master runs.</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>zk</value>
<description>string m_e_m_b_e_r_s is replaced by list of hosts separated by comma. Its generated by configure-slaves.sh on master node</description>
</property>
<property>
<name>hbase.zookeeper.property.maxClientCnxns</name>
<value>300</value>
</property>
<property>
<name>hbase.zookeeper.property.datadir</name>
<value>/tmp/zookeeper</value>
<description>location of storage of zookeeper data</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
I have the same problem; my environment is as below:
hadoop 2.7.2
hbase 1.2.2
zookeeper 3.4.8
I noticed that hbase-1.2.2 bundles Hadoop jars at version 2.5.1 and a ZooKeeper jar at 3.4.6. I upgraded them to the versions I'm actually using (for both Hadoop and ZooKeeper) and the error went away. I still see [hostname].[docker-network] listed as a region server, but apart from that everything is fine.
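For reference, a quick way to see which Hadoop/ZooKeeper jars an HBase tarball actually ships (the install path is an assumption):
ls /usr/local/hbase/lib | grep -E 'hadoop|zookeeper'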

Hadoop - java.net.ConnectException: Connection refused

I want to connect to HDFS (on localhost) and I get an error:
Call From despubuntu-ThinkPad-E420/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I followed all the steps in other posts, but I couldn't solve my problem. I use Hadoop 2.7 and these are my configurations:
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/despubuntu/hadoop/name/data</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
I type /usr/local/hadoop/bin/hdfs namenode -format and
/usr/local/hadoop/sbin/start-all.sh
But when I type "jps" the result is:
10650 Jps
4162 Main
5255 NailgunRunner
20831 Launcher
I need help...
Make sure that DFS, which is set to port 9000 in core-site.xml, is actually started. You can check with the jps command. You can start it with sbin/start-dfs.sh.
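For example (a sketch, run from your Hadoop install directory):
$ jps                  # NameNode and DataNode should appear here
$ sbin/start-dfs.sh    # start HDFS if they are missing
$ jps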
I guess that you didn't set up your Hadoop cluster correctly. Please follow these steps:
Step 1: Begin by setting up .bashrc:
vi $HOME/.bashrc
Put the following lines at the end of the file (change the Hadoop home to match yours):
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
Step 2: Edit hadoop-env.sh as follows:
# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
Step 3: Now create a directory and set the required ownership and permissions:
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
Step 4: Edit core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
Step 5: Edit mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
</property>
Step 6: Edit hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Finally, format your HDFS (you need to do this the first time you set up a Hadoop cluster):
$ /usr/local/hadoop/bin/hadoop namenode -format
Hope this will help you.
I got the same issue. You should see the NameNode, DataNode, ResourceManager and NodeManager daemons running when you type jps. So just run start-all.sh; then all the daemons start running and you can access HDFS.
First, check whether the Java processes are running by typing the jps command on the command line. When you run jps, the following processes must be present:
DataNode
Jps
NameNode
SecondaryNameNode
If these processes are not running, first start the name node using the following command:
start-dfs.sh
This worked for me and removed the error you stated.
I was getting a similar error. Upon checking I found that my namenode service was in the stopped state.
Check the status of the namenode: sudo status hadoop-hdfs-namenode
If it's not in the started/running state,
start the namenode service: sudo start hadoop-hdfs-namenode
Do keep in mind that it takes time before the name node service becomes fully functional after a restart. It reads all of the HDFS edits into memory. You can check the progress of this in /var/log/hadoop-hdfs/ using the command tail -f /var/log/hadoop-hdfs/{Latest log file}

hadoop no data node started

I am following this tutorial.
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation
I got to this point and started the nodes.
Start NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
But then when I run the next steps, it looks like no data node is running (as I get errors saying so).
Why is the data node down? And how can I fix this?
Here is the log from my data node.
hduser@test02:/usr/local/hadoop$ jps
3792 SecondaryNameNode
3929 Jps
3258 NameNode
hduser@test02:/usr/local/hadoop$ cat /usr/local/hadoop/logs/hadoop-hduser-datanode-test02.out
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 3781
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
hduser@test02:/usr/local/hadoop$
EDIT:
Seems I had this port number wrong.
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
Now when I made it right (i.e. equal to 9000) I have no name node starting up.
hduser@test02:/usr/local/hadoop$ jps
10423 DataNode
10938 Jps
10703 SecondaryNameNode
and I cannot browse:
http://my-server-name:50070/
any more.
Hope this gives you some hint about what is happening.
I am a total beginner with Hadoop and kind of lost now.
[core-site.xml]
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
[hdfs-site.xml]
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
In mapred-site.xml I have nothing.
1. First stop all the entities like namenode, datanode etc. (you will have some script or command to do that).
2. Format the tmp directory.
3. Go to /var/cache/hadoop-hdfs/hdfs/dfs/ and delete all the contents in the directory manually.
4. Now format your namenode again.
5. Start all the entities, then use the jps command to confirm that the datanode has been started.
6. Now run whichever application you like or have (see the command sketch below).
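A rough sketch of those steps, assuming the HDFS data really does live under /var/cache/hadoop-hdfs/hdfs/dfs as above (adjust the stop/start scripts to whatever your install provides):
stop-dfs.sh && stop-yarn.sh                 # or stop-all.sh, depending on your version
rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*    # wipe the old HDFS state
hdfs namenode -format
start-dfs.sh && start-yarn.sh
jps                                         # DataNode should now be listed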
Hope this helps.
Add this configuration
conf/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
conf/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
conf/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
stop hadoop
bin/stop-all.sh
change permission and remove temp directory data
chmod 755 /var/lib/hadoop/tmp
rm -Rf /var/lib/hadoop/tmp/*
format name node
bin/hadoop namenode -format
After 1 day of struggle, I just removed version 2.4 and installed Hadoop 2.2 (as I realized 2.2 is the latest stable version). Then I got it all working by following this nice tutorial.
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
Something is not right with that 2.4 document I was reading.
Not to mention that it's not suitable for beginners, and it's usually beginners who stumble upon it.
Maybe your slave's data and your master's data are not synced. Delete the data & name folders in ./hadoop/hdfs and recreate them, re-format the namenode, then start dfs (a rough sketch follows below).
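A minimal sketch of that sequence, assuming the dfs directories really do live under ./hadoop/hdfs as described (the paths are an assumption, so match them to your hdfs-site.xml):
sbin/stop-dfs.sh
rm -rf ./hadoop/hdfs/data ./hadoop/hdfs/name
mkdir -p ./hadoop/hdfs/data ./hadoop/hdfs/name
bin/hdfs namenode -format
sbin/start-dfs.sh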

Error running mapreduce sample in hadoop 0.23.6

I deployed Hadoop 0.23.6 on Ubuntu 12.04 LTS. I am able to copy files across and do file manipulation. I am using YARN for MapReduce.
I am getting the following error when I try to run any MapReduce application using the hadoop-mapreduce-examples-0.23.6.jar:
Command used:
bin/hadoop jar hadoop-mapreduce-examples-0.23.6.jar randomwriter -Dmapreduce.randomwriter.mapsperhost=1 -Dmapreduce.job.user.name=$USER -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars hadoop-mapreduce-client-app-0.23.6.jar output
Hadoop version: 0.23.6
Container launch failed for container_1364342550899_0001_01_000002 : java.lang.IllegalStateException: Invalid shuffle port number -1 returned for attempt_1364342550899_0001_m_000000_0
Verify your yarn-site.xml configuration. You need to have the properties below configured.
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
For more details, have a look at the JIRA:
https://issues.apache.org/jira/browse/MAPREDUCE-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
