Hadoop basics - error while creating a directory - hadoop

I have started learning Hadoop recently and I am getting the below error while creating a new folder:
vm4learning@vm4learning:~/Installations/hadoop-1.2.1/bin$ ./hadoop fs -mkdir helloworld
Warning: $HADOOP_HOME is deprecated.
15/06/14 19:46:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Please help. Below are the namenode logs:
2015-06-14 22:01:08,158 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/vm4learning/Installations/hadoop-1.2.1/data/dfs/name does not exist
2015-06-14 22:01:08,161 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/vm4learning/Installations/hadoop-1.2.1/data/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2015-06-14 22:01:08,182 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/vm4learning/Installations/hadoop-1.2.1/data/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2015-06-14 22:01:08,185 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vm4learning/192.168.1.102
************************************************************/

Before creating a directory, make sure your Hadoop installation is healthy: run the jps command and check whether any process is missing.
In your case, the namenode isn't up.
The logs show that some folders haven't been created. Do this:
mkdir -p $HADOOP_HOME/dfs/name
mkdir -p $HADOOP_HOME/dfs/name/data
Then specify the following in hdfs-site.xml:
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop/dfs/name/data</value>
<final>true</final>
</property>
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/dfs/name</value>
<final>true</final>
</property>
Restart Hadoop, and remember to format the namenode before doing anything else.
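For reference, the whole sequence from this answer might look like the sketch below (assuming $HADOOP_HOME points at the hadoop-1.2.1 install; formatting erases any existing HDFS metadata):
# Create the storage directories referenced in hdfs-site.xml
mkdir -p $HADOOP_HOME/dfs/name
mkdir -p $HADOOP_HOME/dfs/name/data
# Format the namenode (wipes existing HDFS metadata), then start the daemons
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/bin/start-all.sh
jps   # a NameNode process should now be listed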

Another reason the same error may appear is that you have not yet formatted the namenode, like this:
hdfs namenode -format
I had missed step 7 of this tutorial (https://kontext.tech/column/hadoop/377/latest-hadoop-321-installation-on-windows-10-step-by-step-guide) and faced the same error.
hdfs is the utility in HADOOP_HOME/bin.
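As a rough sketch for a 2.x+ install (on 1.x the equivalent is bin/hadoop namenode -format; formatting wipes HDFS metadata):
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/sbin/start-dfs.sh
jps   # NameNode should appear in the output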

Related

Hadoop 3.3.0 on WSL 2 - Failed to start name node, but data node works fine

I'm running Hadoop 3.3.0 on WSL 2 in Windows 11. I followed this guide to set up my configuration: Install Hadoop 3.3.0 on WSL 2.
When I start namenode and datanode with:
start-dfs.sh
It shows no error, but in jps the namenode is not running:
8805 DataNode
9034 SecondaryNameNode
9212 Jps
Looking in the namenode log file, it shows that the namenode failed to start:
2023-01-02 20:50:51,714 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2023-01-02 20:50:51,715 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2023-01-02 20:50:51,715 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2023-01-02 20:50:51,719 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Could not parse line: Filesystem 1024-blocks Used Available Capacity Mounted on
at org.apache.hadoop.fs.DF.parseOutput(DF.java:195)
at org.apache.hadoop.fs.DF.getFilesystem(DF.java:76)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:70)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:166)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:135)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1266)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:862)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:783)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:987)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1756)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1821)
2023-01-02 20:50:51,722 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: Could not parse line: Filesystem 1024-blocks Used Available Capacity Mounted on
2023-01-02 20:50:51,725 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Here is my configuration file core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
And for hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
I've been stuck on this problem for several days; thank you for all the help!

Secure Hadoop - Datanode cannot connect with namenode

I am using hadoop-2.6.0 and have created an HA-enabled cluster with Kerberos security on the Windows platform. Everything works fine if permissions are set to false. But when I enable the below property,
hdfs-site.xml
<property>
<name>dfs.permissions</name>
<value>true</value>
</property>
the datanode cannot connect to the namenode, and I get the following exception:
Exception
2015-05-21 10:44:42,461 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: kumar/192.168.3.4:9000
2015-05-21 10:44:46,079 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: dinesh/192.168.3.3:9000
2015-05-21 10:44:47,471 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: kumar/192.168.3.4:9000
2015-05-21 10:44:51,085 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: dinesh/192.168.3.3:9000
2015-05-21 10:44:52,477 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: kumar/192.168.3.4:9000
I cannot find the exact root cause of this problem. I need help to solve it.
I just changed the default supergroup name to a newly created group that contains all Hadoop users. Now every user in that group acts as a superuser, so it works fine.
<property>
<name>dfs.permissions.superusergroup</name>
<value>Hadoopgroup</value>
</property>
Refer to the superuser documentation.
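On a Linux cluster the group setup might look like the sketch below; the commands and the user name hduser are illustrative only (the question is on Windows, where group management differs):
sudo groupadd Hadoopgroup            # the group named in dfs.permissions.superusergroup
sudo usermod -aG Hadoopgroup hduser  # repeat for each hadoop user; hduser is hypothetical
Restart HDFS afterwards so the namenode picks up the new setting.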

Getting Time out Error while executing hadoop jar ./hadoop-examples-1.0.3.jar pi 2 5

When I try hadoop jar ./hadoop-examples-1.0.3.jar pi 2 5, it shows the following error.
hduser@ubuntu:/usr/local/hadoop-1.0.3$ hadoop jar ./hadoop-examples-1.0.3.jar pi 2 5
Warning: $HADOOP_HOME is deprecated.
Number of Maps = 2
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Starting Job
14/12/07 09:47:33 INFO ipc.Client: Retrying connect to server: 172.16.76.1/172.16.76.1:9002. Already tried 0 time(s).
14/12/07 09:47:54 INFO ipc.Client: Retrying connect to server: 172.16.76.1/172.16.76.1:9002. Already tried 1 time(s).
^C^Chduser@ubuntu:/usr/local/hadoop-1.0.3$
hduser@ubuntu:~/data/dfs$ jps
7176 DataNode
7484 JobTracker
6960 NameNode
7704 TaskTracker
7400 SecondaryNameNode
7766 Jps
hduser@ubuntu:~/data/dfs$ cd /usr/local/hadoop-1.0.3/
As you can see, all the processes are running. But when I go to the log I find the below error:
ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hduser/data/dfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
I checked my .xml configuration files, which are as follows:
mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>172.16.76.1:9002</value>
</property>
core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9001</value>
</property>
hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/home/hduser/data/dfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hduser/data/dfs/datanode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
I tried formatting the namenode and starting again, but nothing happened.
I changed my directory owner permission to 755 and tried again, but that still did not resolve the issue.
I exhausted all the suggestions given on this blog, but they did not help.
I have thought of every possible solution and tried many times, but I am still not able to fix it. The same issue comes up again and again.
Here is the detailed error message:
2014-12-07 06:25:05,376 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/hduser/data/dfs/namenode does not exist.
2014-12-07 06:25:05,377 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hduser/data/dfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-12-07 06:25:05,399 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hduser/data/dfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-12-07 06:25:05,407 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Can you try deleting the namenode storage directory /home/hduser/data/dfs/namenode and then formatting the namenode?
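As a sketch, that suggestion might look like this (using the dfs.name.dir from the question's hdfs-site.xml; it discards any existing HDFS metadata):
stop-all.sh
rm -rf /home/hduser/data/dfs/namenode   # remove the inconsistent storage directory
hadoop namenode -format                 # recreates it with a fresh fsimage
start-all.sh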

hadoop/hdfs/name is in an inconsistent state: storage directory(hadoop/hdfs/data/) does not exist or is not accessible

I have tried all the different solutions provided on Stack Overflow on this topic, but to no avail.
I am asking again with the specific log and details.
Any help is appreciated.
I have one master node and 5 slave nodes in my Hadoop cluster. The ubuntu user and ubuntu group own the ~/hadoop folder.
Both the ~/hadoop/hdfs/data and ~/hadoop/hdfs/name folders exist,
and permissions for both folders are set to 755.
I successfully formatted the namenode before starting the script start-all.sh.
THE SCRIPT FAILS TO LAUNCH THE "NAMENODE"
These are running at the master node
ubuntu@master:~/hadoop/bin$ jps
7067 TaskTracker
6914 JobTracker
7237 Jps
6834 SecondaryNameNode
6682 DataNode
ubuntu@slave5:~/hadoop/bin$ jps
31438 TaskTracker
31581 Jps
31307 DataNode
Below is the log from the namenode log file.
..........
..........
.........
2014-12-03 12:25:45,460 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2014-12-03 12:25:45,461 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1013645312
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2014-12-03 12:25:45,622 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2014-12-03 12:25:45,623 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2014-12-03 12:25:45,716 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2014-12-03 12:25:45,777 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2014-12-03 12:25:45,777 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-12-03 12:25:45,785 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name does not exist
2014-12-03 12:25:45,787 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2014-12-03 12:25:45,801 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
Removed the "file:" prefix from the hdfs-site.xml file:
[WRONG HDFS-SITE.XML]
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
[CORRECT HDFS-SITE.XML]
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hduser/mydata/hdfs/datanode</value>
</property>
Thanks to Erik for the help.
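One hedged way to strip that prefix in place (back up the file first; the sed pattern is illustrative):
cp conf/hdfs-site.xml conf/hdfs-site.xml.bak
sed -i 's|<value>file:|<value>|' conf/hdfs-site.xml   # file:/home/... becomes /home/...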
Follow the steps below (a consolidated sketch of these commands appears after this answer):
1. Stop all services.
2. Format your namenode.
3. Delete your datanode directory.
4. Start all services.
Run these commands in a terminal:
$ cd ~
$ mkdir -p mydata/hdfs/namenode
$ mkdir -p mydata/hdfs/datanode
Give both directories 755 permissions, then add this property in conf/hdfs-site.xml:
<property>
 <name>dfs.namenode.name.dir</name>
 <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
 <name>dfs.datanode.data.dir</name>
 <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
If that does not work, then restart the services:
stop-all.sh
start-all.sh
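Assembled in order, those steps might look like this sketch (hduser paths from the snippet above; formatting discards existing HDFS data):
stop-all.sh
cd ~
mkdir -p mydata/hdfs/namenode mydata/hdfs/datanode
chmod 755 mydata/hdfs/namenode mydata/hdfs/datanode
rm -rf mydata/hdfs/datanode/*   # step 3: clear the datanode directory
hadoop namenode -format         # step 2
start-all.sh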
1) You should own the namenode directory; give it chmod 750 as appropriate.
2) Stop all services.
3) Use hadoop namenode -format to format the namenode.
4) Add this to hdfs-site.xml:
<property>
<name>dfs.data.dir</name>
<value>path/to/hadooptmpfolder/dfs/name/data</value>
<final>true</final>
</property>
<property>
<name>dfs.name.dir</name>
<value>path/to/hadooptmpfolder/dfs/name</value>
<final>true</final>
</property>
5) To run hadoop namenode -format, add export PATH=$PATH:/usr/local/hadoop/bin/ to ~/.bashrc (wherever Hadoop is unzipped, add its bin directory to the PATH).
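For step 5, appending to the PATH and re-running the format might look like this (install path taken from the answer above):
echo 'export PATH=$PATH:/usr/local/hadoop/bin/' >> ~/.bashrc
source ~/.bashrc          # reload so this shell sees the new PATH
hadoop namenode -format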
I had a similar problem; I formatted the namenode and then started it:
hadoop namenode -format
hadoop-daemon.sh start namenode
You can follow the steps given below to remove this error:
1. Stop all Hadoop daemons.
2. Delete all files from the directories /tmp/hadoop-{user}/dfs/name/current and /tmp/hadoop-{user}/dfs/data/current,
where {user} is the user you are logged in as.
3. Format the namenode.
4. Start all services.
You will now see a new VERSION file created in the directory /tmp/hadoop-{user}/dfs/name/current.
One thing to notice here is that the value of Cluster ID in /tmp/hadoop-{user}/dfs/name/current/VERSION must be the same as in /tmp/hadoop-{user}/dfs/data/current/VERSION.
-Hitesh
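A quick way to check that the two VERSION files agree, as a sketch (paths per the steps above; $USER stands in for your login):
grep clusterID /tmp/hadoop-$USER/dfs/name/current/VERSION /tmp/hadoop-$USER/dfs/data/current/VERSION
Both lines printed should show the same value.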

no namenode error in pseudo-mode

I'm new to Hadoop and am in the learning phase.
As per the Hadoop Definitive Guide, I set up my Hadoop in pseudo-distributed mode and everything was working fine. I was even able to execute all the examples from chapter 3 yesterday. Today, when I rebooted my machine and tried to run start-dfs.sh and then tried localhost:50070, it showed an error, and when I try to stop dfs (stop-dfs.sh) it says there is no namenode to stop. I have been googling the issue but found no result. Also, when I format my namenode again, everything starts working fine and I'm able to connect to localhost:50070 and even replicate files and directories in HDFS, but as soon as I restart my Linux machine and try to connect to HDFS, the same problem comes up.
Below is the error log:
************************************************************/
2011-06-22 15:45:55,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ubuntu/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
2011-06-22 15:45:56,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2011-06-22 15:45:56,455 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2011-06-22 15:45:57,007 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2011-06-22 15:45:57,031 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2011-06-22 15:45:57,059 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2011-06-22 15:45:57,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=anshu
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-06-22 15:45:57,868 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2011-06-22 15:45:57,869 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2011-06-22 15:45:58,769 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2011-06-22 15:45:58,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2011-06-22 15:45:58,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anshu/dfs/name does not exist.
2011-06-22 15:45:58,827 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
2011-06-22 15:45:58,828 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
2011-06-22 15:45:58,829 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
Any help is appreciated
Thank you
Here is the kicker:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
I'd been having similar issues. I used stop-all.sh to shut down Hadoop. I guess it was foolish of me to think this would properly save the data in my HDFS.
But as far as I can tell from what appears to be the appropriate code chunk in the hadoop-daemon.sh script, this is not the case - it just kills the processes:
(stop)
  if [ -f $pid ]; then
    if kill -0 `cat $pid` > /dev/null 2>&1; then
      echo stopping $command
      kill `cat $pid`
    else
      echo no $command to stop
    fi
  else
    echo no $command to stop
  fi
Did you look to see if the directory it's complaining about exists? I checked, and mine did not, although there was an (empty!) data folder in there where I imagine data might once have lived.
So my guess was that what we need to do is configure Hadoop such that our namenode and datanode are NOT stored in a tmp directory. There is some possibility that the OS is doing maintenance and getting rid of these files. Either that, or Hadoop figures you don't care about them anymore, because you wouldn't have left them in a tmp directory if you did, and you wouldn't be restarting your machine in the middle of a map-reduce job. I don't really think this should happen (I mean, that's not how I would design things) but it seemed like a good guess.
So, based on this site http://wiki.datameer.com/display/DAS11/Hadoop+configuration+file+templates
I edited my conf/hdfs-site.xml file to point to the following paths (obviously, make your own directories as you see fit):
<property>
<name>dfs.name.dir</name>
<value>/hadoopstorage/name/</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/hadoopstorage/data/</value>
</property>
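The directories themselves have to exist before the namenode is formatted; a hedged sketch (ownership and locations are your call):
sudo mkdir -p /hadoopstorage/name /hadoopstorage/data
sudo chown -R $USER /hadoopstorage   # the user running hadoop needs write access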
I did this, formatted the new namenode (sadly, data loss seems inevitable in this situation), stopped and started Hadoop with the shell scripts, restarted the machine, and my files were still there...
YMMV... hope this works for you! I'm on OS X but I don't think you should have dissimilar results.
J
If you don't care about losing data, just execute the command:
./hadoop namenode -format
I had a similar issue and this helped:
chown -R hdfs:hadoop /path/to/namenode/data/dir
Setting these properties in the conf/hdfs-site.xml file worked for me!
Thanks jsh
<property>
<name>dfs.name.dir</name>
<value>/hadoopstorage/name/</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/hadoopstorage/data/</value>
</property>
Don't forget to set proper permissions on those directories.
jsh's answer is correct.
Just a couple of changes I had to make for Hadoop 2.6:
<property>
<name>dfs.namenode.name.dir</name>
<value>/hadoopstorage/name/</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hadoopstorage/data/</value>
</property>
If you have not resolved the problem, try this:
put the dfs.name.dir directory in the hadoop user group and give the group write permission.
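For example, a sketch assuming the group is called hadoop and the directory matches the answers above:
sudo chgrp -R hadoop /hadoopstorage/name/   # hand the directory to the hadoop group
sudo chmod -R g+w /hadoopstorage/name/      # grant the group write permission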
Check your core-site.xml in the Hadoop config directory.
Go to the config directory and open the files:
vi core-site.xml hdfs-site.xml
Make sure your port numbers and paths are correct.
I have a similar, but slightly different, problem.
Running start-all.sh seems to go fine, but jps shows that there is no namenode, and I cannot see the listing when I run hdfs dfs -ls /.
My first attempt was to run hadoop namenode -format; then the namenode appears but the datanode disappears.
After googling for a solution, I ran rm -rf /usr/local/hadoop_store/hdfs/datanode/* and restarted Hadoop; jps then shows:
12912 ResourceManager
13391 FsShell
13420 Jps
13038 NodeManager
12733 SecondaryNameNode
12432 NameNode
12556 DataNode
Now I can use hadoop commands as usual.
HTH!
