Hadoop: Format aborted in /mnt/hdfs/1/namenode

I've created a few EBS filesystems on EC2 to use with Hadoop. I've set JAVA_HOME in the Hadoop environment. But when I go to format the first volume, it aborts with the following message:
[root@hadoop-node01 conf]# sudo -u hdfs hadoop namenode -format
13/02/06 15:33:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop-node01.mydomain.com/10.xx.xx.201
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2-cdh3u5
STARTUP_MSG: build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u5 -r 30233064aaf5f2492bc687d61d72956876102109; compiled by 'root' on Fri Oct 5 18:45:46 PDT 2012
************************************************************/
Re-format filesystem in /mnt/hdfs/1/namenode ? (Y or N) y
Format aborted in /mnt/hdfs/1/namenode
13/02/06 15:33:27 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-node01.mydomain.com/10.xx.xx.201
************************************************************/
This is my namenode configuration:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.name.dir</name> <value>/mnt/hdfs/1/namenode,/mnt/hdfs/2/namenode,/mnt/hdfs/3/namenode,/mnt/hdfs/4/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/mnt/hdfs/1/datanode,/mnt/hdfs/2/datanode,/mnt/hdfs/3/datanode,/mnt/hdfs/4/datanode</value>
</property>
</configuration>
Does anyone have any idea why this error is happening or how to get around the problem?

Unfortunately in 1.x the format command's prompt is case-sensitive. Answer with a capital Y instead and it won't abort.
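For example, a sketch of the same session with the corrected answer (the prompt text is reproduced from the log above):
sudo -u hdfs hadoop namenode -format
# Re-format filesystem in /mnt/hdfs/1/namenode ? (Y or N) Y   <- uppercase Y
If you need to script the format non-interactively, piping the answer in generally works too:
echo Y | sudo -u hdfs hadoop namenode -format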

Related

hadoop hdfs namenode format not working

I'm a Hadoop newbie. I tried to install Hadoop 3.0 in my VM; after configuring Hadoop, I ran:
hdfs namenode ‐format
and got this output:
2017-12-26 00:20:56,255 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [‐format]
STARTUP_MSG: version = 3.0.0
STARTUP_MSG: classpath = /opt/hadoop-3.0.0/etc/hadoop:/opt/hadoop-3.0.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-3.0.0/share/hadoop/common/lib/kerby-util-1.0.1.jar: ............. hadoop-yarn-applications-unmanaged-am-launcher-3.0.0.jar:/opt/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-registry-3.0.0.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r c25427ceca461ee979d30edd7a4b0f50718e6533; compiled by 'andrew' on 2017-12-08T19:16Z
STARTUP_MSG: java = 1.8.0_151
************************************************************/
2017-12-26 00:20:56,265 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-12-26 00:20:56,269 INFO namenode.NameNode: createNameNode [‐format]
Usage: hdfs namenode [-backup] |
[-checkpoint] |
[-format [-clusterid cid ] [-force] [-nonInteractive] ] |
[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-rollback] |
[-rollingUpgrade <rollback|started> ] |
[-importCheckpoint] |
[-initializeSharedEdits] |
[-bootstrapStandby [-force] [-nonInteractive] [-skipSharedEditsCheck] ] |
[-recover [ -force] ] |
[-metadataVersion ]
2017-12-26 00:20:56,365 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
I configured hdfs-site.xml as follows:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/dan/hadoop_data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/dan/hadoop_data/datanode</value>
</property>
</configuration>
When I start the NameNode service, it fails and the log says:
2017-12-26 00:03:41,331 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: NameNode is not formatted.
2017-12-26 00:03:41,337 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Can anyone tell me how to fix this issue?
Thanks in advance!
Solution 1:
This sometimes happens. First, stop all services, then go to your NameNode storage directory and delete the current directory (Hadoop stores its log files there as well). After removing the current directory, format the NameNode and start all services again.
Stop all services:
$HADOOP_HOME/sbin/stop-all.sh
Once all services are stopped, format the NameNode with the following command.
Format the NameNode:
$HADOOP_HOME/bin/hadoop namenode -format
Now start all services again with the following command.
Start all services:
$HADOOP_HOME/sbin/start-all.sh
Solution 2:
Sometimes the NameNode goes into safe mode. You need to leave safe mode with the following command:
$HADOOP_HOME/bin/hdfs dfsadmin -safemode leave
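One more thing worth double-checking from the startup log above: args = [‐format] shows a typographic hyphen rather than the plain ASCII '-', which is why the NameNode printed its usage text instead of formatting. This often happens when a command is copied from a PDF or web page. Retype the flag by hand:
hdfs namenode -format
# note the plain ASCII hyphen-minus before "format"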

Why can't I start the NameNode in this Hadoop 1.2.1 installation?

I am new to Apache Hadoop and I am following a video course on Udemy.
The course is based on Hadoop 1.2.1. Is that too old a version? Is it better to start my study with a course based on a more recent version, or is it OK?
So I have installed Hadoop 1.2.1 on an Ubuntu 12.04 system and configured it in pseudo-distributed mode.
According to the tutorial, I used the following settings in the following configuration files:
conf/core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
conf/hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
conf/mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
Then in the Linux shell I run:
ssh localhost
So I am connected through SSH to my local system.
Then I go into the Hadoop bin directory, /home/andrea/hadoop/hadoop-1.2.1/bin/, and here I run the command that is supposed to format the name node (what does that mean exactly?):
bin/hadoop namenode –format
And this is the output I obtained:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./hadoop namenode –format
16/01/17 12:55:25 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 12:55:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Then I try to start all the nodes by running this command:
./start-all.sh
and now I obtain:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Now I try to open the following URLs in the browser:
http://localhost:50070/
and I can't open it (page not found)
and:
http://localhost:50030/
this is correctly opened and redirect to this jsp page:
http://localhost:50030/jobtracker.jsp
So, in the shell I run the jps command, which lists all the running Java processes for the user:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
6247 Jps
5720 DataNode
5872 SecondaryNameNode
6116 TaskTracker
5965 JobTracker
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
As you can see, it seems that the NameNode has not started.
The tutorial that I am following says:
If NameNode or DataNode is not listed, it might be that the
namenode's or datanode's root directory, which is set by the property
'dfs.name.dir', is getting messed up. By default it points to the /tmp
directory, which the operating system changes from time to time. Thus, when
HDFS comes up after some changes by the OS, it gets confused and the namenode
doesn't start.
To solve this problem, it provides this solution (which didn't work for me):
First stop all nodes with the stop-all.sh script.
Then explicitly set 'dfs.name.dir' and 'dfs.data.dir'.
So I created a dfs directory in the Hadoop path, and inside it two directories (at the same level): data and name (the idea is to have two folders used by the datanode daemon and the namenode daemon).
So I have something like this:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ tree
.
├── data
└── name
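For reference, the same layout can be created in one step (assuming the same paths):
mkdir -p /home/andrea/hadoop/hadoop-1.2.1/dfs/{name,data}   # directories for the namenode and datanode daemons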
Then I use this configuration for hdfs-site.xml, where I explicitly set the two directories:
<configuration>
<property>
<name>dfs.data.dir</name>
<value>/home/andrea/hadoop/hadoop-1.2.1/dfs/data/</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
So, after this change, I run the format command again:
hadoop namenode –format
And I obtain this output:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ hadoop namenode –format
16/01/17 13:14:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 13:14:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$
So I start all the nodes again with start-all.sh, and this is the output:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ start-all.sh
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
Then I run the jps command to see whether all the nodes started correctly, but this is what I get:
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
8041 SecondaryNameNode
8310 TaskTracker
8406 Jps
8139 JobTracker
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$
The situation has worsened: now I have two daemons that are not started, the NameNode and the DataNode.
What am I missing? How can I try to solve this issue and start all my nodes?
Try turning off iptables once, exporting the Java path, and reformatting.
If you have configured this in hdfs-site.xml:
<property>
<name>dfs.name.dir</name>
<value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
</property>
then while formatting the name node you should see a
> successfully formatted /home/andrea/hadoop/hadoop-1.2.1/dfs/name/
message if the format succeeded. From your logs I cannot see that message, so check whether there are permission issues.
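A quick way to check, assuming the paths from your hdfs-site.xml:
ls -ld /home/andrea/hadoop/hadoop-1.2.1/dfs/name /home/andrea/hadoop/hadoop-1.2.1/dfs/data
# both directories should be owned and writable by the user running the format (andrea here)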
If it still doesn't start, try another command:
hadoop-daemon.sh start namenode
Hope it works...
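If the NameNode exits again, the reason is usually in its log file rather than on the console; for example, look at the .log file next to the .out path printed by start-all.sh:
tail -n 50 /home/andrea/hadoop/hadoop-1.2.1/logs/hadoop-andrea-namenode-andrea-virtual-machine.log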

hadoop fs -mkdir failed on connection exception

I have been trying to set up and run Hadoop in pseudo-distributed mode, but when I type
bin/hadoop fs -mkdir input
I get
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Here are the details:
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/grid/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://h1:9000</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>h1:9001</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>20</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>4</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>h1:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>h1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>h1:19888</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.http.address</name>
<value>h1:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>h1:9001</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>h1:50090</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/grid/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.13 h1
192.168.1.14 h2
192.168.1.15 h3
After hadoop namenode -format and start-all.sh, jps shows:
1702 ResourceManager
1374 DataNode
1802 NodeManager
2331 Jps
1276 NameNode
1558 SecondaryNameNode
the problem occurs:
[grid@h1 hadoop-2.6.0]$ bin/hadoop fs -mkdir input
15/05/13 16:37:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Where is the problem?
hadoop-grid-datanode-h1.log
2015-05-12 11:26:20,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = h1/192.168.1.13
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0
hadoop-grid-namenode-h1.log
2015-05-08 16:06:32,561 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = h1/192.168.1.13
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0
Why does port 9000 not work?
[grid@h1 ~]$ netstat -tnl |grep 9000
[grid@h1 ~]$ netstat -tnl |grep 9001
tcp 0 0 192.168.1.13:9001 0.0.0.0:* LISTEN
Please start dfs and yarn.
[hadoop@hadooplab sbin]$ ./start-dfs.sh
[hadoop@hadooplab sbin]$ ./start-yarn.sh
Now try using "bin/hadoop fs -mkdir input"
The issue usually arises when you install Hadoop in a VM and then shut it down. When you shut down the VM, dfs and yarn also stop, so you need to start dfs and yarn each time you restart the VM.
First, try the command
bin/hadoop dfs -mkdir input
If you have followed the Michael Noll post properly then you should not have any issue. I suspect that passwordless SSH is not working in your configuration; recheck it.
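If passwordless SSH is indeed the problem, a standard way to set it up (assuming no existing key) is:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa         # create a key with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize it for this host
chmod 600 ~/.ssh/authorized_keys
ssh h1                                           # should now log in without a password prompt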
The following procedure resolved the issue for me:
Stop all the services.
Delete namenode and datanode directories as specified in hdfs-site.xml.
Create new namenode and datanode directories and modify hdfs-site.xml accordingly.
In core-site.xml, make the following changes or add the following properties:
<property>
<name>fs.defaultFS</name>
<value>hdfs://172.20.12.168/</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://172.20.12.168:8020</value>
</property>
Make the following change in the hadoop-2.6.4/etc/hadoop/hadoop-env.sh file:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
Restart dfs, yarn and mr as follows:
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
This command worked for me:
hadoop namenode -format
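One more observation on the configuration posted above: hdfs-site.xml sets dfs.namenode.rpc-address to h1:9001, which overrides the h1:9000 authority in fs.defaultFS, and that matches the netstat output (9001 listening, 9000 not). Aligning the two should let clients connect; for example:
<property>
<name>dfs.namenode.rpc-address</name>
<value>h1:9000</value>
</property>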

What does the command "hadoop namenode -format" do?

I am trying to learn Hadoop by following a tutorial, and I am trying to set up pseudo-distributed mode on my machine.
My core-site.xml is:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation.
</description>
</property>
</configuration>
My hdfs-site.xml file is:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>The actual number of replications can be specified when the
file is created.
</description>
</property>
</configuration>
My mapred-site.xml file is:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
<description>The host and port that the MapReduce job tracker runs
at.
</description>
</property>
</configuration>
When I run the command it completes successfully, but what is it actually doing?
hadoop-1.2.1$ bin/hadoop namenode -format
14/11/26 12:37:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = myhost/127.0.0.8
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.6.0_45
************************************************************/
14/11/26 12:37:17 INFO util.GSet: Computing capacity for map BlocksMap
14/11/26 12:37:17 INFO util.GSet: VM type = 64-bit
14/11/26 12:37:17 INFO util.GSet: 2.0% max memory = 932118528
14/11/26 12:37:17 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/11/26 12:37:17 INFO util.GSet: recommended=2097152, actual=2097152
14/11/26 12:37:17 INFO namenode.FSNamesystem: fsOwner=myuser
14/11/26 12:37:17 INFO namenode.FSNamesystem: supergroup=supergroup
14/11/26 12:37:17 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/11/26 12:37:17 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/11/26 12:37:17 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/11/26 12:37:17 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/11/26 12:37:17 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/11/26 12:37:17 INFO common.Storage: Image file /tmp/hadoop-myuser/dfs/name/current/fsimage of size 115 bytes saved in 0 seconds.
14/11/26 12:37:18 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-myuser/dfs/name/current/edits
14/11/26 12:37:18 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-myuser/dfs/name/current/edits
14/11/26 12:37:18 INFO common.Storage: Storage directory /tmp/hadoop-myuser/dfs/name has been successfully formatted.
14/11/26 12:37:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at chaitanya-OptiPlex-3010/127.0.0.8
************************************************************/
Can someone please let me know what it is doing internally?
I have gone through these posts, but there is no clear explanation:
What exactly is hadoop namenode formatting?
hadoop namenode is not formatting
How can I check this practically on my machine, so I can see the differences before and after running the command? I am new to Hadoop, so this may be a trivial question.
The hadoop namenode -format command deletes all files in your HDFS.
The tmp directory contains two folders, datanode and namenode, in the local filesystem. If you format the namenode, these two folders become empty.
Note: if you want to format your namenode, first stop all Hadoop services, then delete the tmp folder (which contains the namenode and datanode folders) in your local file system, and then start the Hadoop services again; that way the format is sure to take effect.
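As a sketch, that sequence looks like the following, assuming the default /tmp/hadoop-<user> layout shown in the logs above and the Hadoop bin/ directory on the PATH:
stop-all.sh                      # stop all Hadoop daemons first
rm -rf /tmp/hadoop-myuser/dfs    # wipe the local namenode/datanode storage - this destroys all HDFS data
hadoop namenode -format          # recreate the namenode metadata
start-all.sh                     # bring the daemons back up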
Reason for hadoop namenode -format:
The Hadoop NameNode is the centralized place of an HDFS file system: it keeps the directory tree of all files in the file system and tracks where across the cluster the file data is kept. In short, it keeps the metadata related to the datanodes. When we format the namenode, it formats this metadata. By doing that, all the information about the data on the datanodes is lost, and they can be reused for new data.
By default the namenode location is "/tmp/hadoop-myuser/dfs/name".
When you format the namenode, this location is cleared.
To change the namenode location, add the following properties to hdfs-site.xml:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/search/data/dfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/search/data/dfs/datanode</value>
</property>
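Note that after pointing dfs.namenode.name.dir at a new, empty location, you have to format it before the NameNode will start (which wipes the HDFS metadata, so only do this on a fresh or expendable cluster):
hadoop namenode -format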
I hope this will help you.. :-)
Hadoop namenode -format:
The Hadoop namenode directory contains the fsimage and edits files, which
hold the basic information about the Hadoop file system, such as where
data is available and which user created which files.
If you format the namenode, the above information is deleted
from the namenode directory, which is specified in hdfs-site.xml as dfs.namenode.name.dir.
But you still have the data on Hadoop, just not the namenode metadata.
Actually, formatting a Namenode does not format the Datanode.
It just formats the contents of your namenode (which holds the details of the datanodes). Your namenode will no longer know where your data is. namenode -format also assigns a new namespace ID to the namenode.
You then have to change the namespaceID in your datanode to make your datanode work again. It is in dfs/data/current/VERSION.
There is a JIRA open suggesting that the Datanode should be formatted as well when the Namenode is formatted: HDFS-107.
Namenode contains metadata about the Hadoop filesystem.
This command (hadoop-1.2.1$ bin/hadoop namenode -format) will format the whole Hadoop distributed file system (HDFS), so if you run it on an existing filesystem you will lose all your data.
Steps:
Start all the services using start-all.sh.
Check whether the services are running using jps.
Note: if you use Hadoop 2.3.0, the following services need to be running:
Namenode
Datanode
Resourcemanager
Nodemanager
Move some file from local to HDFS using hdfs dfs -put <localfile> /.
Now check the location "/tmp/hadoop-myuser/dfs/name"; you may find the file split into blocks containing 64 MB each.
Then start formatting using hadoop namenode -format.
Now the file is no longer physically available at that location.

Unable to create default datanode in Hadoop cluster

I am using Ubuntu 10.04. I installed Hadoop in my local directory as a standalone installation.
~-desktop:~$ hadoop/bin/hadoop version
Hadoop 1.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
Compiled by hortonfo on Mon May 6 06:59:37 UTC 2013
From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
This command was run using /home/circar/hadoop/hadoop-core-1.2.0.jar
conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/circar/hadoop/dataFiles</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
I've formatted the namenode twice:
~-desktop:~$ hadoop/bin/hadoop namenode -format
Then I start Hadoop with
~-desktop:~$ hadoop/bin/start-all.sh
which shows:
starting namenode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-namenode-circar-desktop.out
circar@localhost's password:
localhost: starting datanode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-datanode-circar-desktop.out
circar@localhost's password:
localhost: starting secondarynamenode, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-secondarynamenode-circar-desktop.out
starting jobtracker, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-jobtracker-circar-desktop.out
circar@localhost's password:
localhost: starting tasktracker, logging to /home/circar/hadoop/libexec/../logs/hadoop-circar-tasktracker-circar-desktop.out
But in
/logs/hadoop-circar-datanode-circar-desktop.log
it shows this error:
2013-06-24 17:32:47,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = circar-desktop/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.6.0_26
************************************************************/
2013-06-24 17:32:47,315 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-06-24 17:32:47,324 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-06-24 17:32:47,325 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-06-24 17:32:47,325 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-06-24 17:32:47,447 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-06-24 17:32:47,450 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-06-24 17:32:49,265 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/circar/hadoop/dataFiles/dfs/data: namenode namespaceID = 186782509; datanode namespaceID = 1733977738
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:412)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:319)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1698)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1637)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1655)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1781)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1798)
2013-06-24 17:32:49,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at circar-desktop/127.0.1.1
************************************************************/
JPS Showing:
~-desktop:~$ jps
8084 Jps
7458 JobTracker
7369 SecondaryNameNode
7642 TaskTracker
6971 NameNode
When I try to stop it, it shows:
~-desktop:~$ hadoop/bin/stop-all.sh
stopping jobtracker
circar@localhost's password:
localhost: stopping tasktracker
stopping namenode
circar@localhost's password:
localhost: *no datanode to stop*
circar@localhost's password:
localhost: stopping secondarynamenode
What am I doing wrong? Can anyone help me?
Ravi is correct. But also make sure that the clusterID in ${dfs.data.dir}/current/VERSION matches the one in ${dfs.name.dir}/current/VERSION; if not, change the datanode's clusterID to be the same as the namenode's. After making the changes, follow the steps Ravi mentioned.
The Namenode generates a new namespaceID every time you format HDFS. Datanodes bind themselves to the namenode through this namespaceID.
Follow the steps below to fix the problem:
a) Stop the problematic DataNode.
b) Edit the value of namespaceID in ${dfs.data.dir}/current/VERSION to match the corresponding value of the current NameNode in ${dfs.name.dir}/current/VERSION.
c) Restart the DataNode.
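A minimal sketch of those steps, using the hadoop.tmp.dir from this question (/home/circar/hadoop/dataFiles):
hadoop/bin/stop-all.sh    # or stop just the datanode
grep namespaceID /home/circar/hadoop/dataFiles/dfs/name/current/VERSION
grep namespaceID /home/circar/hadoop/dataFiles/dfs/data/current/VERSION
# edit the datanode's VERSION so its namespaceID matches the namenode's (186782509 in the log above), then:
hadoop/bin/start-all.sh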
