Hadoop can't start with ./sbin/start-dfs.sh

I started the shell script ./sbin/start-dfs.sh, but jps shows no DataNode running:
jps
3098 Jps
2492 NameNode
2700 SecondaryNameNode
The hadoop-datanode log shows:
2017-02-15 15:55:12,787 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/usr/local/Cellar/hadoop/2.7.3/libexec/%3E/data/hadoop/hdfs/datanode/
java.io.IOException: Incompatible clusterIDs in /usr/local/Cellar/hadoop/2.7.3/libexec/>/data/hadoop/hdfs/datanode: namenode clusterID = CID-4c9d5df1-10c6-45cb-9fe0-e1631e4d13e2; datanode clusterID = CID-6dc3d755-f713-4bec-a62a-c47e96dcbc0d
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
2017-02-15 15:55:12,792 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
2017-02-15 15:55:12,793 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2017-02-15 15:55:12,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-02-15 15:55:14,800 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-02-15 15:55:14,802 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-02-15 15:55:14,803 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

It looks like you have formatted the namenode on a working cluster. Delete the data directories and start the datanode process again on all nodes:
rm -rf <dfs.datanode.data.dir>
./sbin/hadoop-daemon.sh start datanode
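A minimal sketch of the full sequence for a single-node setup like this one; the data directory path below is an assumption, so substitute your own dfs.datanode.data.dir value (hdfs getconf prints it):
# stop the crashing datanode if it is still being retried
./sbin/hadoop-daemon.sh stop datanode
# look up the configured data directory (the path used below is an assumption)
./bin/hdfs getconf -confKey dfs.datanode.data.dir
# wipe the stale datanode storage; this deletes the local block replicas
rm -rf /data/hadoop/hdfs/datanode/*
# start the datanode again; it re-registers and adopts the namenode's clusterID
./sbin/hadoop-daemon.sh start datanode
jps    # DataNode should now be listed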

Related

Data node shuts automatically with error "WARN datanode.DataNode: Exiting Datanode"

I am receiving the error below for the data node; the resource manager also shuts down automatically.
2021-05-05 01:13:32,029 WARN common.Storage: Failed to add storage directory
[DISK]file:/C:/hadoop/data/datanode
java.io.IOException: Incompatible clusterIDs in C:\hadoop\data\datanode: namenode clusterID = CID-c6716736-36ba-454d-840a-ef8d77ac52c3; datanode clusterID = CID-b5548c69-4ac5-46ab-a413-11436c51ffad
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:736)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:551)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1705)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1665)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
at java.lang.Thread.run(Thread.java:748)
2021-05-05 01:13:32,029 ERROR datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid c9a3733b-f995-4ed3-8441-b3dd2fd03b7c) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories have failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:552)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1705)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1665)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
at java.lang.Thread.run(Thread.java:748)
2021-05-05 01:13:32,029 WARN datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid c9a3733b-f995-4ed3-8441-b3dd2fd03b7c) service to localhost/127.0.0.1:9000
2021-05-05 01:13:32,129 INFO datanode.DataNode: Removed Block pool <registering> (Datanode Uuid c9a3733b-f995-4ed3-8441-b3dd2fd03b7c)
2021-05-05 01:13:34,135 WARN datanode.DataNode: Exiting Datanode
2021-05-05 01:13:34,150 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************

Hadoop namenode not starting

When I run start-all.cmd, the datanode, resourcemanager, and nodemanager start properly, but the namenode does not!
19/11/04 22:09:14 WARN namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1012)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:691)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:634)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:695)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:898)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:877)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1603)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1671)
19/11/04 22:09:14 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup#0.0.0.0:50070
19/11/04 22:09:14 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
19/11/04 22:09:14 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
19/11/04 22:09:14 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
19/11/04 22:09:14 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1012)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:691)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:634)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:695)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:898)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:877)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1603)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1671)
19/11/04 22:09:14 INFO util.ExitUtil: Exiting with status 1
19/11/04 22:09:14 INFO namenode.NameNode: SHUTDOWN_MSG:
Open cmd as administrator.
Type and run stop-all.cmd
Then run hadoop namenode -format
Finally run start-all.cmd
Hope it works for you.
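One caveat, sketched below: reformatting gives the namenode a new clusterID, so the old datanode storage usually has to be cleared as well, or the datanode will fail with the Incompatible clusterIDs error shown above. The C:\hadoop\data\datanode path is an assumption taken from the log in the previous question; use your own dfs.datanode.data.dir.
stop-all.cmd
hadoop namenode -format
rem clear stale datanode storage (this deletes local block replicas)
rd /s /q C:\hadoop\data\datanode
start-all.cmd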

Hadoop Exception: All specified directories are failed to load

When I started the Hadoop cluster, the following exception was thrown. I have no idea how to solve it. Can anyone help me? Thanks.
2017-07-10 09:40:58,960 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /tools/hadoop/hadoop_storage/hdfs/datanode: namenode clusterID = CID-47191263-b5b7-4a4d-b8b5-a78b782e66bb; datanode clusterID = CID-79a53373-9652-4c08-9735-b5972e0450ca
2017-07-10 09:40:58,960 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:54310. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1358)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1323)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
2017-07-10 09:40:58,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:54310
2017-07-10 09:40:58,962 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-07-10 09:41:00,962 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-07-10 09:41:00,964 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-07-10 09:41:00,966 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
Perhaps you formatted your cluster one more time, so it generated a different clusterID on the master node and the data node.
Your namenode and datanode cluster IDs do not match, and you need to make them the same.
On the name node, change the cluster ID in the file located at:
$ nano HADOOP_FILE_SYSTEM/namenode/current/VERSION
On the data node, the cluster ID is stored in the file:
$ nano HADOOP_FILE_SYSTEM/datanode/current/VERSION
Whichever way you change the ID, make sure the ID is the same on all of the cluster's nodes.
#VanThaoNguyen is correct
In my case:
/installation directory/hdata/dfs/name/current
/installation directory/hdata/dfs/data/current
clusterID=xxxx-xxxx-xxxx-xxxx
should be the same for the name node and the data node.
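A minimal sketch of that edit done from the shell, assuming the HADOOP_FILE_SYSTEM layout used in the answer above (GNU sed; adjust the paths to your own dfs.namenode.name.dir and dfs.datanode.data.dir):
# pull the clusterID out of the namenode's VERSION file
CID=$(grep '^clusterID=' HADOOP_FILE_SYSTEM/namenode/current/VERSION | cut -d= -f2)
# write the same clusterID into the datanode's VERSION file
sed -i "s/^clusterID=.*/clusterID=${CID}/" HADOOP_FILE_SYSTEM/datanode/current/VERSION
# restart the datanode so it re-registers with the matching ID
./sbin/hadoop-daemon.sh start datanode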

Datanode is Not Starting in Hadoop after Kerberos Auth

I have given permission to /app/hadoop/tmp/dfs/data.
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /app/hadoop/tmp/dfs/data :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:727)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2383)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2365)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2257)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2304)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2481)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2505)
2017-03-14 20:10:51,169 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/app/hadoop/tmp/dfs/data/"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2392)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2365)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2257)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2304)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2481)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2505)
2017-03-14 20:10:51,172 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-03-14 20:10:51,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
Does the owner of that directory on the local file system match the service user of the Kerberos principal the datanode is now running as? So if the principal is hdfs/, then the directory (and everything under it) should be owned by hdfs.
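A quick check-and-fix sketch, under the assumption that the datanode's principal maps to the hdfs user in the hadoop group:
# check who currently owns the data directory
ls -ld /app/hadoop/tmp/dfs/data
# hand the directory (and everything under it) to the datanode's service user
sudo chown -R hdfs:hadoop /app/hadoop/tmp/dfs/data
# HDFS expects restrictive permissions on data directories (dfs.datanode.data.dir.perm defaults to 700)
sudo chmod 700 /app/hadoop/tmp/dfs/data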

Hadoop 2 node cluster UI showing 1 live Node

I am trying to configure a Hadoop 2.7 two-node cluster. When I start Hadoop
using start-dfs.sh and start-yarn.sh, all services on the master and the slave start perfectly.
Here is the jps output on my master:
23913 Jps
22140 SecondaryNameNode
22316 ResourceManager
22457 NodeManager
21916 DataNode
21777 NameNode
Here is the jps output on my slave:
17223 Jps
14225 DataNode
14363 NodeManager
But the Hadoop cluster UI shows only 1 live data node.
Here is the dfsadmin report from ./bin/hdfs dfsadmin -report:
Live datanodes (1):
Name: 192.168.1.104:50010 (nn1.cluster.com)
Hostname: nn1.cluster.com
Decommission Status : Normal
Configured Capacity: 401224601600 (373.67 GB)
DFS Used: 237568 (232 KB)
Non DFS Used: 48905121792 (45.55 GB)
DFS Remaining: 352319242240 (328.12 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.81%
I am able to ssh into all the machines.
Here is a sample of the name node logs (IP = 192.168.1.104):
2016-07-12 01:17:34,293 INFO BlockStateChange: BLOCK* processReport: from storage DS-d9ed40cf-bd5d-4033-a6ca-14fb4a8c3587 node DatanodeRegistration(192.168.1.104:50010, datanodeUuid=b702b518-5daa-4fa1-8e69-e4d620a72470, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-e86d0353-9f33-495b-88fa-16035abd3672;nsid=616310490;c=0), blocks: 24, hasStaleStorage: false, processing time: 0 msecs
2016-07-12 01:17:35,501 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.1.104:50010, datanodeUuid=37038a9f-23ac-42e2-abea-bdf356aaefbe, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-e86d0353-9f33-495b-88fa-16035abd3672;nsid=616310490;c=0) storage 37038a9f-23ac-42e2-abea-bdf356aaefbe
2016-07-12 01:17:35,501 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: BLOCK* registerDatanode: 192.168.1.104:50010
2016-07-12 01:17:35,501 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/192.168.1.104:50010
2016-07-12 01:17:35,501 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2016-07-12 01:17:35,502 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.104:50010
2016-07-12 01:17:35,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2016-07-12 01:17:35,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-495b6b0e-f1fc-407c-bb9f-6c314c2fdaec for DN 192.168.1.104:50010
Here is a sample of the datanode logs (IP = 192.168.1.104):
2016-07-12 02:02:12,044 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action : DNA_REGISTER from nn1.cluster.com/192.168.1.104:8020 with active state
2016-07-12 02:02:12,045 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid b702b518-5daa-4fa1-8e69-e4d620a72470) service to nn1.cluster.com/192.168.1.104:8020 beginning handshake with NN
2016-07-12 02:02:12,047 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid b702b518-5daa-4fa1-8e69-e4d620a72470) service to nn1.cluster.com/192.168.1.104:8020 successfully registered with NN
2016-07-12 02:02:12,050 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x236119eb3082, containing 1 storage report(s), of which we sent 1. The reports had 24 total blocks and used 1 RPC(s). This took 0 msec to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-07-12 02:02:12,050 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1235752202-192.168.1.104-1468159707934
2016-07-12 02:02:15,049 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action : DNA_REGISTER from nn1.cluster.com/192.168.1.104:8020 with active state
2016-07-12 02:02:15,052 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid b702b518-5daa-4fa1-8e69-e4d620a72470) service to nn1.cluster.com/192.168.1.104:8020 beginning handshake with NN
2016-07-12 02:02:15,056 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid b702b518-5daa-4fa1-8e69-e4d620a72470) service to nn1.cluster.com/192.168.1.104:8020 successfully registered with NN
2016-07-12 02:02:15,061 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x2361cd4be40d, containing 1 storage report(s), of which we sent 1. The reports had 24 total blocks and used 1 RPC(s). This took 0 msec to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-07-12 02:02:15,061 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1235752202-192.168.1.104-1468159707934
Here is a sample of the 2nd datanode's logs (IP = 192.168.35.128):
2016-07-12 11:45:07,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action : DNA_REGISTER from nn1.cluster.com/192.168.1.104:8020 with active state
2016-07-12 11:45:07,349 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid 37038a9f-23ac-42e2-abea-bdf356aaefbe) service to nn1.cluster.com/192.168.1.104:8020 beginning handshake with NN
2016-07-12 11:45:07,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid 37038a9f-23ac-42e2-abea-bdf356aaefbe) service to nn1.cluster.com/192.168.1.104:8020 successfully registered with NN
2016-07-12 11:45:07,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xb0de42ec7c, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 4 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-07-12 11:45:07,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1235752202-192.168.1.104-1468159707934
2016-07-12 11:45:10,360 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action : DNA_REGISTER from nn1.cluster.com/192.168.1.104:8020 with active state
2016-07-12 11:45:10,363 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid 37038a9f-23ac-42e2-abea-bdf356aaefbe) service to nn1.cluster.com/192.168.1.104:8020 beginning handshake with NN
2016-07-12 11:45:10,370 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid 37038a9f-23ac-42e2-abea-bdf356aaefbe) service to nn1.cluster.com/192.168.1.104:8020 successfully registered with NN
2016-07-12 11:45:10,377 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xb191ea9cb9, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 3 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-07-12 11:45:10,377 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1235752202-192.168.1.104-1468159707934
2016-07-12 11:45:13,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action : DNA_REGISTER from nn1.cluster.com/192.168.1.104:8020 with active state
2016-07-12 11:45:13,380 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid 37038a9f-23ac-42e2-abea-bdf356aaefbe) service to nn1.cluster.com/192.168.1.104:8020 beginning handshake with NN
2016-07-12 11:45:13,385 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1235752202-192.168.1.104-1468159707934 (Datanode Uuid 37038a9f-23ac-42e2-abea-bdf356aaefbe) service to nn1.cluster.com/192.168.1.104:8020 successfully registered with NN
2016-07-12 11:45:13,395 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xb245b893c4, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 5 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-07-12 11:45:13,396 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1235752202-192.168.1.104-1468159707934
Why is this happening? Thank you so much for your help!
Got the solution. If the slave nodes/data nodes are alive individually but do not show up in the hdfs dfsadmin -report output, then there is a problem with the communication: the data nodes cannot reach the master. Technically speaking, it is a firewall issue; the firewall on the master node is blocking the communication.
We have to stop the firewall on the master, or allow the specific IP to access the master.
To stop the firewall on CentOS, run the commands below:
service iptables save
service iptables stop
chkconfig iptables off
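If you would rather not disable the firewall entirely, a narrower rule is possible; a sketch, assuming the slave's address is 192.168.35.128 as in the logs above:
# allow the slave host through the master's firewall instead of turning the firewall off
iptables -I INPUT -s 192.168.35.128 -j ACCEPT
# persist the rule across restarts of the iptables service
service iptables save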
Got the solution. The issue was the namenode's IP address. I had used the IP address from the wlan0 interface, which keeps changing. Since I have VMware Workstation installed, I switched to the IP address
from the vmnet interface, which is static, and after the change the UI shows 2 live nodes instead of 1.
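To verify the change took effect, a small sketch; the 192.168.1.104 address and port 8020 come from the logs above, so substitute whatever your fs.defaultFS points at:
# list the addresses on each interface; wlan0 may change, the vmnet address stays static
ip addr show
# confirm the namenode RPC endpoint is listening on the static address
netstat -tlnp | grep 8020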
