Namenode not starting (java.lang.IllegalArgumentException: Socket address is null) - hadoop

I had asked this question a few days back but did not know where the log files were located. My config settings in core-site.xml are as follows:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>blah blah....</description>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>
The log file for the NameNode is below:
2013-01-29 02:12:30,078 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = dheerajvc-ThinkPad-T420/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
2013-01-29 02:12:30,184 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-01-29 02:12:30,192 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-01-29 02:12:30,193 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-01-29 02:12:30,193 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-01-29 02:12:30,326 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-01-29 02:12:30,329 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-01-29 02:12:30,333 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-01-29 02:12:30,333 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-01-29 02:12:30,377 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-01-29 02:12:30,377 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-01-29 02:12:30,393 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-01-29 02:12:30,408 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-01-29 02:12:30,416 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-01-29 02:12:30,422 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-01-29 02:12:30,525 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop/dfs/name/current/edits
2013-01-29 02:12:30,526 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop/dfs/name/current/edits
2013-01-29 02:12:30,878 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-01-29 02:12:30,879 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 513 msecs
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct = 0.949999988079071
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 7 msec
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2013-01-29 02:12:30,888 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-01-29 02:12:30,888 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-01-29 02:12:30,892 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-01-29 02:12:30,896 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-01-29 02:12:30,908 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-01-29 02:12:30,909 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50070 registered.
2013-01-29 02:12:30,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50070 registered.
2013-01-29 02:12:30,912 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:50070
2013-01-29 02:12:30,913 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Socket address is null
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:142)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:340)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:306)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:529)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1412)
2013-01-29 02:12:30,913 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dheerajvc-ThinkPad-T420/127.0.1.1
************************************************************/
netstat --numeric-ports | grep "5431" yielded nothing, so I think the ports are free (the exact checks I ran are sketched after the format output below). What socket is the namenode expecting?
EDIT: Only the jobtracker and tasktracker are starting. Is it necessary for the namenode to start before the datanode and secondarynamenode?
I can format the namenode, but why does it use the /tmp directory? I thought it was supposed to use the location mentioned in core-site.xml.
$ hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
13/01/31 03:34:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = dheerajvc-ThinkPad-T420/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.1.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
13/01/31 03:34:22 INFO util.GSet: VM type = 32-bit
13/01/31 03:34:22 INFO util.GSet: 2% max memory = 17.77875 MB
13/01/31 03:34:22 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/01/31 03:34:22 INFO util.GSet: recommended=4194304, actual=4194304
13/01/31 03:34:22 INFO namenode.FSNamesystem: fsOwner=dheerajvc
13/01/31 03:34:22 INFO namenode.FSNamesystem: supergroup=supergroup
13/01/31 03:34:22 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/01/31 03:34:22 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/01/31 03:34:22 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/01/31 03:34:22 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/01/31 03:34:22 INFO common.Storage: Image file of size 115 saved in 0 seconds.
13/01/31 03:34:22 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop/dfs/name/current/edits
13/01/31 03:34:22 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop/dfs/name/current/edits
13/01/31 03:34:23 INFO common.Storage: Storage directory /tmp/hadoop/dfs/name has been successfully formatted.
13/01/31 03:34:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dheerajvc-ThinkPad-T420/127.0.1.1
************************************************************/
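For completeness, here is roughly how I checked that nothing is listening on the configured ports (a sketch; the port numbers come from my configs above):
netstat -tln | grep -E '54310|54311|50070'   # any TCP listeners on the configured ports?
lsof -i :54310                               # which process, if any, holds the fs.default.name port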

You should specify the directory locations in the hdfs-site.xml file instead of core-site.xml.
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/D:/analytics/hadoop/data/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/D:/analytics/hadoop/data/dfs/datanode</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>127.0.0.1:50070</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
    <description>to enable webhdfs</description>
    <final>true</final>
  </property>
</configuration>
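After pointing dfs.namenode.name.dir at a real directory, the filesystem image has to be rebuilt there before the namenode will start. A minimal sketch for a Hadoop 1.x single-node setup (assuming the stock scripts are on your PATH):
stop-all.sh                 # stop any half-started daemons
hadoop namenode -format     # write a fresh image into the new dfs.namenode.name.dir
start-dfs.sh                # start NameNode, DataNode and SecondaryNameNode
jps                         # confirm a NameNode process is actually running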

Related

Hadoop hdfs namenode start command fails. Not formatted either?

Much like similar questions describe, when I run the command sudo service hadoop-hdfs-namenode start, the command fails with the message below.
2015-02-01 16:51:22,032 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-02-01 16:51:22,379 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 16:51:23,096 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-02-01 16:51:23,214 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-02-01 16:51:23,223 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-02-01 16:51:23,232 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-02-01 16:51:23,233 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-02-01 16:51:23,253 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-02-01 16:51:23,259 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-02-01 16:51:23,555 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
2015-02-01 16:51:23,563 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name does not exist
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-02-01 16:51:23,566 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:302)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:207)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-02-01 16:51:23,571 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-01 16:51:23,573 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
************************************************************/
The error itself is pretty self-explanatory: the directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is missing, which is correct. The cache directory was empty, so I created cache/hdfs/dfs/name under it. I also changed the owner and group to hdfs:hadoop, matching the directories above them.
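Roughly the commands I used (a sketch; the path is taken from the error message above):
sudo mkdir -p /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
sudo chown -R hdfs:hadoop /var/lib/hadoop-hdfs/cache/hdfs/dfs/name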
I ran the format command sudo -u hdfs hdfs namenode –format again, and it ends the same way as it did prior to creating this directory.
STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014
STARTUP_MSG: java = 1.7.0_75
************************************************************/
15/02/01 17:09:04 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force ] ]
15/02/01 17:09:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
Now I run the namenode start command again and receive the following error:
STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014
STARTUP_MSG: java = 1.7.0_75
************************************************************/
2015-02-01 17:09:26,774 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-02-01 17:09:27,097 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-02-01 17:09:27,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-01 17:09:27,216 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 17:09:27,779 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-02-01 17:09:27,883 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-02-01 17:09:27,890 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-02-01 17:09:27,910 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-02-01 17:09:27,924 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-02-01 17:09:28,178 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
2015-02-01 17:09:28,193 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/in_use.lock acquired by nodename 28482@hadoop
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-02-01 17:09:28,197 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:217)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-02-01 17:09:28,202 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-01 17:09:28,205 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
************************************************************/
My system is running in VirtualBox with a CentOS 6.6 guest, Oracle JDK 1.7, and attempting to run Cloudera CDH4. Any input on what to do next to resolve this issue would be appreciated.
If you copied and pasted the format command from slides or something, can you try actually typing it out and see if it works?
I don't know if you can see the difference between
–format and -format.
The dash looks different to me: the first is an en dash, not a plain ASCII hyphen.
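One way to check (a sketch): dump the bytes of the pasted command. A plain ASCII hyphen is the single byte 2d, while an en dash (U+2013) is the three-byte UTF-8 sequence e2 80 93:
$ echo '–format' | od -An -tx1
 e2 80 93 66 6f 72 6d 61 74 0a
$ echo '-format' | od -An -tx1
 2d 66 6f 72 6d 61 74 0a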
Had the same issue formatting the namenode. Retyped the command (not copy-paste):
hdfs namenode -format
and it worked. Thanks https://stackoverflow.com/users/4533812/james

Hadoop namenode not working on OSX (ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.SocketException: Permission denied)

I'm running Hadoop 1.2.1 on OSX (single-node cluster mode), and everything seems to be working except the namenode: When I run start-all.sh, the namenode fails to run. This can be seen when running stop-all.sh:
$ bin/stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
no namenode to stop
localhost: stopping datanode
localhost: stopping secondarynamenode
I've been troubleshooting this for quite a while and can't seem to figure out the issue; I have no idea what is causing this permission error. I've reformatted the namenode and run chmod -R 777 on the /hadoopstorage directory (which, as you can see from the conf files below, is where the namenode files are located), so Hadoop should have the ability to modify it.
Here is the namenode log file:
2013-11-26 16:51:25,951 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = <my machine>
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.6.0_65
************************************************************/
2013-11-26 16:51:26,187 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-11-26 16:51:26,203 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-11-26 16:51:26,204 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-11-26 16:51:26,204 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-11-26 16:51:26,569 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-11-26 16:51:26,585 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-11-26 16:51:26,623 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-11-26 16:51:26,625 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-11-26 16:51:26,711 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-11-26 16:51:26,711 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-11-26 16:51:26,712 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1039859712
2013-11-26 16:51:26,712 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-11-26 16:51:26,712 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-11-26 16:51:26,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=williammurphy
2013-11-26 16:51:26,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-11-26 16:51:26,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-11-26 16:51:26,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-11-26 16:51:26,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-11-26 16:51:26,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-11-26 16:51:27,070 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-11-26 16:51:27,070 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-11-26 16:51:27,092 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /hadoopstorage/name/current/fsimage
2013-11-26 16:51:27,092 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-11-26 16:51:27,099 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-11-26 16:51:27,099 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /hadoopstorage/name/current/fsimage of size 119 bytes loaded in 0 seconds.
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /hadoopstorage/name/current/edits
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /hadoopstorage/name/current/edits, reached end of edit log Number of transactions found: 0. Bytes read: 4
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/hadoopstorage/name/current/edits) ...
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/hadoopstorage/name/current/edits):
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Padding position = -1 (-1 means padding not found)
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit log length = 4
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Read length = 4
2013-11-26 16:51:27,101 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Corruption length = 0
2013-11-26 16:51:27,101 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2013-11-26 16:51:27,104 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2013-11-26 16:51:27,104 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /hadoopstorage/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-11-26 16:51:27,106 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /hadoopstorage/name/current/fsimage of size 119 bytes saved in 0 seconds.
2013-11-26 16:51:27,187 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,189 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,244 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-11-26 16:51:27,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 520 msecs
2013-11-26 16:51:27,246 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct = 0.9990000128746033
2013-11-26 16:51:27,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-11-26 16:51:27,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension = 30000
2013-11-26 16:51:27,251 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 17 msec
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2013-11-26 16:51:27,269 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-11-26 16:51:27,269 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-11-26 16:51:27,283 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-11-26 16:51:27,283 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-11-26 16:51:27,284 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-11-26 16:51:27,284 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-11-26 16:51:27,284 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-11-26 16:51:27,291 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-11-26 16:51:27,311 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-11-26 16:51:27,312 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-11-26 16:51:27,312 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
at java.lang.Thread.run(Thread.java:695)
2013-11-26 16:51:27,312 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,314 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,321 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:265)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:341)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1539)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:569)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:530)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:324)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:23</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoopstorage/name/</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:22</value>
  </property>
</configuration>
If anyone has encountered a similar error or can shed some light on the situation, it would be really appreciated. Thanks in advance!
It looks like you're trying to start the Hadoop services on privileged ports (< 1024). In particular, you're trying to start the NameNode on port 23 (the well-known telnet port) and the Job Tracker on port 22, the well-known port for SSH. You should avoid binding other applications to well-known ports.
You could check if this is the case by running start-all.sh as root. If that fixes it, you could continue to run as root (generally a bad idea), or reconfigure to use higher-numbered ports (a quick binding check is sketched after the configs):
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
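Once the daemons are restarted on the higher ports, you can sanity-check that they bound where you expect (a sketch; lsof ships with OSX):
lsof -iTCP -sTCP:LISTEN -nP | grep -E ':(9000|9001)'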

Hadoop component is not starting

I'm new to Hadoop. I followed Michael Noll's single-node install guide ( http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/ ).
Running the command /usr/local/hadoop/bin/start-all.sh should normally start a Namenode, Datanode, Jobtracker and Tasktracker on the machine.
Only the TaskTracker gets started. Here is the trace:
hduser@srv591 ~ $ /usr/local/hadoop/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-srv591.out
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-srv591.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-srv591.out
localhost: Exception in thread "main" org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/namesecondary is in an inconsistent state: checkpoint directory does not exist or is not accessible.
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:729)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:208)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:150)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:676)
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-srv591.out
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-srv591.out
hduser@srv591 ~ $ /usr/local/java/bin/jps
19469 TaskTracker
19544 Jps
Tariq's solution works, but the jobtracker and namenode still do not start. Here is the content of the logs:
hduser@srv591 ~ $ cat /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-srv591.log
2013-09-21 00:30:13,765 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
************************************************************/
2013-09-21 00:30:13,904 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-09-21 00:30:13,913 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-09-21 00:30:13,914 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-09-21 00:30:13,914 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-09-21 00:30:14,140 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-09-21 00:30:14,144 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-09-21 00:30:14,148 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-09-21 00:30:14,149 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932184064
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-09-21 00:30:14,185 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-09-21 00:30:14,185 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-09-21 00:30:14,335 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-09-21 00:30:14,370 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-09-21 00:30:14,370 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-09-21 00:30:14,373 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /app/hadoop/tmp/dfs/name
2013-09-21 00:30:14,374 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:30:14,404 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:30:14,405 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at srv591.sd-france.net/46.21.207.111
************************************************************/
2013-09-21 00:31:08,869 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
************************************************************/
2013-09-21 00:31:09,012 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-09-21 00:31:09,021 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-09-21 00:31:09,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-09-21 00:31:09,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-09-21 00:31:09,240 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-09-21 00:31:09,244 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-09-21 00:31:09,248 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-09-21 00:31:09,249 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932184064
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-09-21 00:31:09,286 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-09-21 00:31:09,286 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-09-21 00:31:09,457 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-09-21 00:31:09,496 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-09-21 00:31:09,496 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-09-21 00:31:09,501 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
2013-09-21 00:31:09,501 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:452)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:299)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:31:09,508 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:452)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:299)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:31:09,509 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at srv591.sd-france.net/46.21.207.111
************************************************************/
Here is the log for the datanode:
2013-09-21 01:01:24,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
************************************************************/
2013-09-21 01:01:24,855 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-09-21 01:01:24,870 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-09-21 01:01:24,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-09-21 01:01:24,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-09-21 01:01:25,204 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-09-21 01:01:25,224 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-09-21 01:01:25,499 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1590050521; datanode namespaceID = 1863017904
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
2013-09-21 01:01:25,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
Please make sure you have created the directory /app/hadoop/tmp/dfs/namesecondary and that it has the proper permissions. Also, inspecting the logs (NN, DN, SNN, JT) would be helpful. If you are still facing the issue, show us the logs along with the configuration files.
In response to your comment:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
It looks like you haven't created the directories you are using in your configuration files; this exception clearly shows that. Please make sure you have created, with proper permissions, every directory you use as the value of a configuration property.
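A sketch of creating the missing directories with sane ownership (assuming the hduser:hadoop user/group and the /app/hadoop/tmp path from Michael Noll's tutorial):
sudo mkdir -p /app/hadoop/tmp/dfs/name /app/hadoop/tmp/dfs/namesecondary
sudo chown -R hduser:hadoop /app/hadoop/tmp
sudo chmod -R 750 /app/hadoop/tmp
Then reformat (hadoop namenode -format) and run start-all.sh again.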
By default, Hadoop puts the datanode and namenode data directories under /tmp on the local disk. The correct way is to manually create the data directories for the datanode and namenode on the local filesystem and put the paths in hdfs-site.xml. Add the following properties to /etc/hadoop/hdfs-site.xml, then reformat the namenode and restart; that should fix it. A sketch of the commands follows the properties.
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/1/dfs/nn,file:///data/2/dfs/nn</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data/1/dfs/dn,file:///data/2/dfs/dn</value>
</property>
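A sketch of creating those directories and reformatting (the hdfs:hadoop owner is an assumption based on a CDH-style install; adjust to your setup):
sudo mkdir -p /data/1/dfs/nn /data/2/dfs/nn /data/1/dfs/dn /data/2/dfs/dn
sudo chown -R hdfs:hadoop /data/1/dfs /data/2/dfs
sudo -u hdfs hdfs namenode -format    # note the plain ASCII hyphen in -format
sudo service hadoop-hdfs-namenode start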

Installing Hadoop on Windows 7 fails to perform hadoop namenode –format

I'm installing Hadoop for the first time, on Windows 7, following this blog:
http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html
Installed Cygwin with the openssh package, set up as a service
Downloaded and unpacked hadoop-1.0.4
Configured JAVA_HOME, then checked:
$ bin/hadoop version
Hadoop 1.0.4
Subversion svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290
Compiled by hortonfo on Wed Oct 3 05:13:58 UTC 2012
From source with checksum fe2baea87c4c81a2c505767f3f9b71f4
hdfs-site.xml content:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:47110</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:47111</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
mapred-site.xml content:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
When I try to format the namenode I get:
$ ./bin/hadoop namenode –format
13/02/12 10:16:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = GUY-PC/192.168.1.5
STARTUP_MSG: args = [▒format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
Usage: java NameNode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]
13/02/12 10:16:34 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at GUY-PC/192.168.1.5
************************************************************/
When I try to start it I get:
$ bin/start-dfs.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-guy-namenode-GUY-PC.out
localhost: Connection closed by 127.0.0.1
localhost: Connection closed by 127.0.0.1
The log hadoop-guy-namenode-GUY-PC.log contains:
2013-02-12 10:21:27,101 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
STARTUP_MSG: /************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = GUY-PC/192.168.1.5
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-02-12 10:21:27,187 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-12 10:21:27,194 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-12 10:21:27,195 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-12 10:21:27,195 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-12 10:21:27,244 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-12 10:21:27,248 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-12 10:21:27,249 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-02-12 10:21:27,285 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=guy
2013-02-12 10:21:27,285 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-12 10:21:27,285 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-12 10:21:27,288 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-12 10:21:27,288 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-12 10:21:27,356 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-12 10:21:27,370 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-12 10:21:27,372 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory C:\tmp\hadoop-guy\dfs\name does not exist.
2013-02-12 10:21:27,373 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory C:\tmp\hadoop-guy\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-02-12 10:21:27,373 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory C:\tmp\hadoop-guy\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-02-12 10:21:27,374 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
SHUTDOWN_MSG: /************************************************************
SHUTDOWN_MSG: Shutting down NameNode at GUY-PC/192.168.1.5
************************************************************/
(A second start attempt at 10:25:50 logs the identical startup sequence and the same InconsistentFSStateException for C:\tmp\hadoop-guy\dfs\name, ending in another shutdown.)
When I search for this exception on the net, the answer I find is to run hadoop namenode –format, which is exactly what I did in section 6.
Please help.
I solved the problem:
I had copied the command hadoop namenode –format from an internet page, and the dash in –format was not a plain ASCII hyphen.
When I typed the '-' character on my keyboard instead, the format worked.
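A quick way to spot this (a sketch using the standard od tool, which Cygwin also ships) is to dump the bytes of the copied command; a plain hyphen prints as -, while a copied en-dash (U+2013) shows up as the UTF-8 byte sequence 342 200 223:
$ echo "hadoop namenode –format" | od -c
(the dash before "format" appears as 342 200 223 instead of -)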

Have to format HDFS repeatedly whenever I stop and start hadoop services

I set up Hadoop on a single-node cluster. Everything works fine when I start all the Hadoop services using start-all.sh. However, whenever I stop all the services and restart them, I get the following exception and have to reformat the file system again. Right now I am in development, so I just copy the files back after each reformat, but I cannot afford that kind of behaviour in production. I have checked the logs and here is the exception. Also, my /etc/hosts file is unchanged and I have disabled IPv6.
2012-11-03 18:49:45,542 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = nikhil-VirtualBox/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2012-11-03 18:49:45,738 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-11-03 18:49:45,750 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-11-03 18:49:45,751 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-11-03 18:49:45,751 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2012-11-03 18:49:45,899 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-11-03 18:49:45,902 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2012-11-03 18:49:45,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2012-11-03 18:49:45,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2012-11-03 18:49:45,932 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2012-11-03 18:49:45,934 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2012-11-03 18:49:45,934 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2012-11-03 18:49:45,934 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2012-11-03 18:49:46,020 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2012-11-03 18:49:46,020 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-11-03 18:49:46,021 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-11-03 18:49:46,024 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2012-11-03 18:49:46,024 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2012-11-03 18:49:46,169 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2012-11-03 18:49:46,191 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2012-11-03 18:49:46,194 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-hduser/dfs/name does not exist.
2012-11-03 18:49:46,196 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-hduser/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2012-11-03 18:49:46,197 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-hduser/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
Simply don't put your HDFS storage inside the /tmp directory: the OS cleans /tmp on every boot, which is exactly why the namenode's storage directory vanishes and you have to reformat after every restart.
Personally I have my HDFS in /srv/hdfs/, but that is a matter of taste, I guess.
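One way to pin the storage location (a sketch reusing the standard hadoop.tmp.dir property in core-site.xml; the /srv/hdfs path is just the example above, and hduser:hadoop is an assumed owner) is:
<property>
<name>hadoop.tmp.dir</name>
<value>/srv/hdfs</value>
</property>
Then create the directory, hand it to the Hadoop user, and format once:
sudo mkdir -p /srv/hdfs
sudo chown -R hduser:hadoop /srv/hdfs
hadoop namenode -format
After that, stopping and starting the services no longer wipes the namenode's storage directory, because nothing under /srv is cleaned at boot.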

Resources