Hadoop components are not starting

I'm new to Hadoop. I followed Michael Noll's single-node tutorial ( http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/ ).
When I run the command /usr/local/hadoop/bin/start-all.sh, it should normally start a NameNode, DataNode, JobTracker, and TaskTracker on the machine.
Only the TaskTracker starts. Here is the trace:
hduser@srv591 ~ $ /usr/local/hadoop/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-srv591.out
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-srv591.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-srv591.out
localhost: Exception in thread "main" org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/namesecondary is in an inconsistent state: checkpoint directory does not exist or is not accessible.
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:729)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:208)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:150)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:676)
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-srv591.out
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-srv591.out
hduser@srv591 ~ $ /usr/local/java/bin/jps
19469 TaskTracker
19544 Jps
Tariq's solution works, but the JobTracker and NameNode still fail to start. Here is the content of the logs:
hduser@srv591 ~ $ cat /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-srv591.log
2013-09-21 00:30:13,765 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
************************************************************/
2013-09-21 00:30:13,904 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-09-21 00:30:13,913 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-09-21 00:30:13,914 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-09-21 00:30:13,914 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-09-21 00:30:14,140 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-09-21 00:30:14,144 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-09-21 00:30:14,148 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-09-21 00:30:14,149 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932184064
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-09-21 00:30:14,185 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-09-21 00:30:14,185 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-09-21 00:30:14,335 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-09-21 00:30:14,370 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-09-21 00:30:14,370 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-09-21 00:30:14,373 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /app/hadoop/tmp/dfs/name
2013-09-21 00:30:14,374 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:30:14,404 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:30:14,405 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at srv591.sd-france.net/46.21.207.111
************************************************************/
2013-09-21 00:31:08,869 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
************************************************************/
2013-09-21 00:31:09,012 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-09-21 00:31:09,021 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-09-21 00:31:09,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-09-21 00:31:09,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-09-21 00:31:09,240 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-09-21 00:31:09,244 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-09-21 00:31:09,248 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-09-21 00:31:09,249 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932184064
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-09-21 00:31:09,286 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-09-21 00:31:09,286 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-09-21 00:31:09,457 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-09-21 00:31:09,496 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-09-21 00:31:09,496 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-09-21 00:31:09,501 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
2013-09-21 00:31:09,501 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:452)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:299)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:31:09,508 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:452)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:299)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-09-21 00:31:09,509 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at srv591.sd-france.net/46.21.207.111
************************************************************/
Here is the log for the DataNode:
2013-09-21 01:01:24,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
************************************************************/
2013-09-21 01:01:24,855 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-09-21 01:01:24,870 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-09-21 01:01:24,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-09-21 01:01:24,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-09-21 01:01:25,204 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-09-21 01:01:25,224 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-09-21 01:01:25,499 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1590050521; datanode namespaceID = 1863017904
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
2013-09-21 01:01:25,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

Please make sure you have created the directory /app/hadoop/tmp/dfs/namesecondary and that it has the proper permissions. Also, inspecting the logs (NN, DN, SNN, JT) would be helpful. If you are still facing the issue, show us the logs along with the configuration files.
In response to your comment :
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
It looks like you haven't created the directories you reference in your configuration files; this exception shows that clearly. Please make sure that every directory you use as the value of a configuration property exists and has the proper permissions.
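As a concrete sketch for a setup following Michael Noll's tutorial (assuming the hduser user and hadoop group from that guide; adjust to your own accounts):
sudo mkdir -p /app/hadoop/tmp
sudo chown -R hduser:hadoop /app/hadoop/tmp
sudo chmod 750 /app/hadoop/tmp
# formatting creates dfs/name under hadoop.tmp.dir; the SecondaryNameNode
# creates dfs/namesecondary itself once the parent is writable
/usr/local/hadoop/bin/hadoop namenode -format
Note that formatting wipes any existing HDFS metadata, so only do this on a fresh or disposable cluster.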

By default, Hadoop places the DataNode and NameNode data directories under /tmp on the local disk. The correct approach is to create the data directories manually on local disk and put their paths in hdfs-site.xml. Add the following properties to /etc/hadoop/hdfs-site.xml, reformat the NameNode, and restart; that should fix it.
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/1/dfs/nn,file:///data/2/dfs/nn</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data/1/dfs/dn,file:///data/2/dfs/dn</value>
</property>
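(On Hadoop 1.x, which this question uses, the equivalent property names are dfs.name.dir and dfs.data.dir; dfs.namenode.name.dir and dfs.datanode.data.dir are the newer spellings.) Whatever paths you pick must exist and be writable by the Hadoop user before the reformat. A rough sketch, assuming the example paths above and an hduser:hadoop account:
sudo mkdir -p /data/1/dfs/nn /data/2/dfs/nn /data/1/dfs/dn /data/2/dfs/dn
sudo chown -R hduser:hadoop /data/1/dfs /data/2/dfs
/usr/local/hadoop/bin/stop-all.sh
# WARNING: reformatting erases all HDFS metadata
/usr/local/hadoop/bin/hadoop namenode -format
/usr/local/hadoop/bin/start-all.sh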

Related

hadoop Datanode shutting down

I have configured a Hadoop cluster with:
namenode 192.168.56.101
secondarynode 192.168.56.102
datanode1 192.168.56.103
After running start-dfs.sh and start-mapred.sh, all daemons are up except datanode1, and I don't know why!
Here is the log from datanode1:
2015-03-17 09:35:58,224 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = datanode1/192.168.56.103
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.7.0_75
************************************************************/
2015-03-17 09:35:58,572 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-03-17 09:35:58,596 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2015-03-17 09:35:58,600 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-03-17 09:35:58,600 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2015-03-17 09:35:58,901 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2015-03-17 09:35:58,921 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2015-03-17 09:36:04,816 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /usr/local/hadoop/tmp/dfs/data: namenode namespaceID = 941733068; datanode namespaceID = 1890117295
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:412)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:319)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1698)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1637)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1655)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1781)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1798)
2015-03-17 09:36:04,820 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at datanode1/192.168.56.103
************************************************************/
The namespaceID of datanode1 does not match the current NameNode's. Maybe you copied the /usr/local/hadoop/tmp/dfs/data directory from another cluster. If the data on datanode1 is irrelevant, you can delete /usr/local/hadoop/tmp/dfs/* on datanode1.
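A sketch of that cleanup, using the dfs.data.dir location shown in the log (this throws away the blocks stored on datanode1):
# on datanode1
/usr/local/hadoop/bin/hadoop-daemon.sh stop datanode
rm -rf /usr/local/hadoop/tmp/dfs/data/*
# on restart the DataNode re-registers and adopts the NameNode's namespaceID
/usr/local/hadoop/bin/hadoop-daemon.sh start datanode
If the data must be kept, the usual alternative is to edit the namespaceID line in /usr/local/hadoop/tmp/dfs/data/current/VERSION so that it matches the NameNode's value.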

Unable to start TaskTracker. Says: Can not start task tracker because java.lang.IllegalArgumentException: Does not contain a valid host:port authority

Edited mapred-site.xml, core-site.xml, hadoop-env.sh, hdfs-site.xml, masters, and slaves.
I have 1 DataNode and 2 NameNodes. Both of them started successfully, and I am able to see them in the browser.
start-mapred.sh started the JobTracker and TaskTracker on the NameNode, but was unable to start the TaskTracker on the DataNode.
I started the TaskTracker manually, and the following is the output.
$ hadoop tasktracker
Warning: $HADOOP_HOME is deprecated.
13/10/17 03:21:55 INFO mapred.TaskTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting TaskTracker
STARTUP_MSG: host = tintin/10.193.184.157
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
13/10/17 03:21:55 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/10/17 03:21:55 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
13/10/17 03:21:55 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
13/10/17 03:21:55 INFO impl.MetricsSystemImpl: TaskTracker metrics system started
13/10/17 03:21:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/10/17 03:21:55 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
13/10/17 03:21:55 WARN impl.MetricsSystemImpl: Source name ugi already exists!
13/10/17 03:21:55 ERROR mapred.TaskTracker: Can not start task tracker because java.lang.IllegalArgumentException: Does not contain a valid host:port authority:
10.193.184.132:54311
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:149)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
at org.apache.hadoop.mapred.JobTracker.getAddress(JobTracker.java:2312)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1532)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3906)
13/10/17 03:21:55 INFO mapred.TaskTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at tintin/10.193.184.157
************************************************************/
Though I'm quite late to answer this: does your mapred-site.xml contain the mapred.job.tracker property?
My job tracker runs on 8021, so the configuration is as follows:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>http://localhost:8021</value>
  </property>
</configuration>
I faced such an issue myself due to this missing property.
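Another thing worth checking, since the trace prints the value on its own line: stray whitespace or a line break inside the <value> element can also trigger this exception in Hadoop 1.x. A sketch of a clean mapred-site.xml on the worker node, using the JobTracker address from the trace (adjust the path and address to your layout):
cat > /usr/local/hadoop/conf/mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>10.193.184.132:54311</value>
  </property>
</configuration>
EOF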

Installing Hadoop on Windows 7 fails to perform hadoop namenode –format

I'm installing Hadoop for the first time, on Windows 7, using this blog:
http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html
Installed Cygwin with the openssh package running as a service.
Downloaded and unpacked hadoop-1.0.4.
Configured JAVA_HOME, then checked:
$ bin/hadoop version
Hadoop 1.0.4
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290
Compiled by hortonfo on Wed Oct 3 05:13:58 UTC 2012
From source with checksum fe2baea87c4c81a2c505767f3f9b71f4
hdfs-site.xml content:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:47110</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:47111</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
mapred-site.xml content:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
When I try to format the filesystem I get:
$ ./bin/hadoop namenode –format
13/02/12 10:16:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = GUY-PC/192.168.1.5
STARTUP_MSG: args = [▒format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
Usage: java NameNode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]
13/02/12 10:16:34 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at GUY-PC/192.168.1.5
************************************************************/
When I try to start I get:
$ bin/start-dfs.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-guy-namenode-GUY-PC.out
localhost: Connection closed by 127.0.0.1
localhost: Connection closed by 127.0.0.1
log hadoop-guy-namenode-GUY-PC.log content:
2013-02-12 10:21:27,101 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = GUY-PC/192.168.1.5
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-02-12 10:21:27,187 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-12 10:21:27,194 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-12 10:21:27,195 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-12 10:21:27,195 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-12 10:21:27,244 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-12 10:21:27,248 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-12 10:21:27,249 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2013-02-12 10:21:27,259 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-02-12 10:21:27,285 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=guy
2013-02-12 10:21:27,285 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-12 10:21:27,285 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-12 10:21:27,288 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-12 10:21:27,288 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-12 10:21:27,356 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-12 10:21:27,370 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-12 10:21:27,372 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory C:\tmp\hadoop-guy\dfs\name does not exist.
2013-02-12 10:21:27,373 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory C:\tmp\hadoop-guy\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-02-12 10:21:27,373 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory C:\tmp\hadoop-guy\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-02-12 10:21:27,374 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at GUY-PC/192.168.1.5
************************************************************/
2013-02-12 10:25:50,186 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = GUY-PC/192.168.1.5
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-02-12 10:25:50,270 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-12 10:25:50,276 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-12 10:25:50,277 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-12 10:25:50,277 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-12 10:25:50,326 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-12 10:25:50,330 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-12 10:25:50,330 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-12 10:25:50,340 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2013-02-12 10:25:50,341 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-02-12 10:25:50,341 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2013-02-12 10:25:50,341 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-02-12 10:25:50,367 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=guy
2013-02-12 10:25:50,367 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-12 10:25:50,367 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-12 10:25:50,369 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-12 10:25:50,370 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-12 10:25:50,436 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-12 10:25:50,450 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-12 10:25:50,452 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory C:\tmp\hadoop-guy\dfs\name does not exist.
2013-02-12 10:25:50,453 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory C:\tmp\hadoop-guy\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-02-12 10:25:50,454 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory C:\tmp\hadoop-guy\dfs\name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-02-12 10:25:50,454 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at GUY-PC/192.168.1.5
************************************************************/
When I search for this exception on the net, the answer I find is to run hadoop namenode –format, as I did in section 6.
Please help.
I solved the problem:
I had copied the command hadoop namenode –format from an internet page.
When I simply typed the '-' character with my keyboard, it worked.
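A quick way to catch this if it bites again is to dump the bytes of the pasted command: an ASCII hyphen is the single byte -, while an en dash shows up in od -c output as the three-byte UTF-8 sequence 342 200 223. For example:
echo 'hadoop namenode –format' | od -c | head -2
# 342 200 223 appears where a plain - should be; retype the flag by hand:
hadoop namenode -format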

Namenode not starting (java.lang.IllegalArgumentException: Socket address is null)

I had asked this question a few days back but did not know where the log files were located.
I have the config settings in core-site.xml as follows:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>blah blah....</description>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>
The logfile for Namenode is below
2013-01-29 02:12:30,078 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = dheerajvc-ThinkPad-T420/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
2013-01-29 02:12:30,184 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-01-29 02:12:30,192 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-01-29 02:12:30,193 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-01-29 02:12:30,193 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-01-29 02:12:30,326 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-01-29 02:12:30,329 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-01-29 02:12:30,333 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-01-29 02:12:30,333 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2013-01-29 02:12:30,351 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-01-29 02:12:30,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-01-29 02:12:30,377 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-01-29 02:12:30,377 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-01-29 02:12:30,393 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-01-29 02:12:30,408 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-01-29 02:12:30,416 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-01-29 02:12:30,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-01-29 02:12:30,422 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-01-29 02:12:30,525 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop/dfs/name/current/edits
2013-01-29 02:12:30,526 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop/dfs/name/current/edits
2013-01-29 02:12:30,878 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-01-29 02:12:30,879 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 513 msecs
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct = 0.949999988079071
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-01-29 02:12:30,880 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 7 msec
2013-01-29 02:12:30,887 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2013-01-29 02:12:30,888 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-01-29 02:12:30,888 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-01-29 02:12:30,892 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-01-29 02:12:30,892 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-01-29 02:12:30,896 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-01-29 02:12:30,908 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-01-29 02:12:30,909 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50070 registered.
2013-01-29 02:12:30,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50070 registered.
2013-01-29 02:12:30,912 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:50070
2013-01-29 02:12:30,913 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Socket address is null
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:142)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:340)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:306)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:529)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1412)
2013-01-29 02:12:30,913 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dheerajvc-ThinkPad-T420/127.0.1.1
************************************************************/
netstat --numeric-ports | grep "5431" yielded nothing, so I think the ports are free. What socket is the NameNode expecting?
EDIT: Only the JobTracker and TaskTracker are starting. Is it necessary for the NameNode to start before the DataNode and SecondaryNameNode?
I can format the NameNode, but why does it use the /tmp directory? I thought it was supposed to use the location mentioned in core-site.xml.
$ hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
13/01/31 03:34:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = dheerajvc-ThinkPad-T420/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.1.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
13/01/31 03:34:22 INFO util.GSet: VM type = 32-bit
13/01/31 03:34:22 INFO util.GSet: 2% max memory = 17.77875 MB
13/01/31 03:34:22 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/01/31 03:34:22 INFO util.GSet: recommended=4194304, actual=4194304
13/01/31 03:34:22 INFO namenode.FSNamesystem: fsOwner=dheerajvc
13/01/31 03:34:22 INFO namenode.FSNamesystem: supergroup=supergroup
13/01/31 03:34:22 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/01/31 03:34:22 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/01/31 03:34:22 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/01/31 03:34:22 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/01/31 03:34:22 INFO common.Storage: Image file of size 115 saved in 0 seconds.
13/01/31 03:34:22 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop/dfs/name/current/edits
13/01/31 03:34:22 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop/dfs/name/current/edits
13/01/31 03:34:23 INFO common.Storage: Storage directory /tmp/hadoop/dfs/name has been successfully formatted.
13/01/31 03:34:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dheerajvc-ThinkPad-T420/127.0.1.1
************************************************************/
You should specify the directory locations in the hdfs-site.xml file instead of core-site.xml.
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/D:/analytics/hadoop/data/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/D:/analytics/hadoop/data/dfs/datanode</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>127.0.0.1:50070</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
    <description>to enable webhdfs</description>
    <final>true</final>
  </property>
</configuration>
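(Note that this question runs Hadoop 1.1.1, where the older property names dfs.name.dir and dfs.data.dir apply; the /D:/... values above are Windows-style example paths, so substitute your own.) After pointing HDFS at explicit directories, create them and reformat once; a sketch for a Linux layout with made-up paths:
mkdir -p /usr/local/hadoop/data/dfs/namenode /usr/local/hadoop/data/dfs/datanode
/usr/local/hadoop/bin/stop-all.sh
# reformatting erases existing HDFS metadata
/usr/local/hadoop/bin/hadoop namenode -format
/usr/local/hadoop/bin/start-dfs.sh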

Have to format HDFS repeatedly whenever I stop and start hadoop services

I set up Hadoop on a single-node cluster. Everything works fine when I start all the Hadoop services using start-all.sh. However, whenever I stop all the services and restart them, I get the following exception and have to reformat the filesystem again. Right now I am in development, so I copy the files back whenever I reformat, but I cannot afford that kind of behaviour in production. I have checked the logs, and here is the exception. Also, my /etc/hosts file is unchanged and I have disabled IPv6.
2012-11-03 18:49:45,542 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = nikhil-VirtualBox/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches /branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2012-11-03 18:49:45,738 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-11-03 18:49:45,750 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-11-03 18:49:45,751 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-11-03 18:49:45,751 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2012-11-03 18:49:45,899 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-11-03 18:49:45,902 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2012-11-03 18:49:45,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2012-11-03 18:49:45,910 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2012-11-03 18:49:45,932 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2012-11-03 18:49:45,934 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2012-11-03 18:49:45,934 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2012-11-03 18:49:45,934 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2012-11-03 18:49:46,020 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2012-11-03 18:49:46,020 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-11-03 18:49:46,021 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-11-03 18:49:46,024 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2012-11-03 18:49:46,024 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2012-11-03 18:49:46,169 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2012-11-03 18:49:46,191 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2012-11-03 18:49:46,194 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-hduser/dfs/name does not exist.
2012-11-03 18:49:46,196 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-hduser/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2012-11-03 18:49:46,197 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-hduser/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
Simply don't put your HDFS inside the /tmp directory; it will be cleaned up by your OS every time you boot.
Personally, I have installed my HDFS in /srv/hdfs/, but that is a matter of taste, I guess.
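A minimal sketch of that move, assuming the hduser:hadoop account common in single-node guides and /srv/hdfs as the target:
sudo mkdir -p /srv/hdfs
sudo chown -R hduser:hadoop /srv/hdfs
# set hadoop.tmp.dir to /srv/hdfs in core-site.xml, then reformat once:
/usr/local/hadoop/bin/hadoop namenode -format
/usr/local/hadoop/bin/start-all.sh
After this one-time format, the metadata survives reboots because it no longer lives under /tmp.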
