hadoop3 can't find .nm-local-dir.usercache.hadoop.appcache. when doing pi test - hadoop

I'm trying to set up a Hadoop 3 cluster on a local computer network, at small scale to start with: one master node and two worker nodes.
I think I managed to get something that should work, following this tutorial: configure hadoop 3.1.0 in multinodes cluster.
I downloaded Hadoop version 3.1.1.
Here is the dfsadmin report:
hadoop#######:~/hadoop3/hadoop-3.1.1$ hdfs dfsadmin -report
Configured Capacity: 1845878235136 (1.68 TB)
Present Capacity: 355431677952 (331.02 GB)
DFS Remaining: 355427651584 (331.02 GB)
DFS Used: 4026368 (3.84 MB)
DFS Used%: 0.00%
Replicated Blocks:
Under replicated blocks: 6
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
Erasure Coded Block Groups:
Low redundancy block groups: 0
Block groups with corrupt internal blocks: 0
Missing block groups: 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (2):
Name: ######:9866 (######)
Hostname: ######
Decommission Status : Normal
Configured Capacity: 147511238656 (137.38 GB)
DFS Used: 2150400 (2.05 MB)
Non DFS Used: 46601465856 (43.40 GB)
DFS Remaining: 93390856192 (86.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 63.31%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Sep 06 18:44:21 CEST 2018
Last Block Report: Thu Sep 06 18:08:09 CEST 2018
Num of Blocks: 17
Name: ######:9866 (######)
Hostname: ######
Decommission Status : Normal
Configured Capacity: 1698366996480 (1.54 TB)
DFS Used: 1875968 (1.79 MB)
Non DFS Used: 1350032670720 (1.23 TB)
DFS Remaining: 262036795392 (244.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 15.43%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Sep 06 18:44:22 CEST 2018
Last Block Report: Thu Sep 06 18:08:10 CEST 2018
Num of Blocks: 12
So before continuing and tuning resource management, I tried to run a simple test, and it failed.
Here is the pi example test:
hadoop######:~/hadoop3/hadoop-3.1.1$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 2 10
Number of Maps = 2
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Starting Job
2018-09-06 18:51:29,277 INFO client.RMProxy: Connecting to ResourceManager at nameMasterhost/IP:8032
2018-09-06 18:51:29,589 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1536250099280_0005
2018-09-06 18:51:29,771 INFO input.FileInputFormat: Total input files to process : 2
2018-09-06 18:51:30,338 INFO mapreduce.JobSubmitter: number of splits:2
2018-09-06 18:51:30,397 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-09-06 18:51:30,967 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1536250099280_0005
2018-09-06 18:51:30,970 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-09-06 18:51:31,175 INFO conf.Configuration: resource-types.xml not found
2018-09-06 18:51:31,175 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-09-06 18:51:31,248 INFO impl.YarnClientImpl: Submitted application application_1536250099280_0005
2018-09-06 18:51:31,295 INFO mapreduce.Job: The url to track the job: http://nameMAster:8088/proxy/application_1536250099280_0005/
2018-09-06 18:51:31,296 INFO mapreduce.Job: Running job: job_1536250099280_0005
2018-09-06 18:51:44,388 INFO mapreduce.Job: Job job_1536250099280_0005 running in uber mode : false
2018-09-06 18:51:44,390 INFO mapreduce.Job: map 0% reduce 0%
2018-09-06 18:51:44,409 INFO mapreduce.Job: Job job_1536250099280_0005 failed with state FAILED due to: Application application_1536250099280_0005 failed 2 times due to AM Container for appattempt_1536250099280_0005_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2018-09-06 18:51:38.416]Exception from container-launch.
Container id: container_1536250099280_0005_02_000001
Exit code: 1
Exception message: /bin/mv: target '/nm-local-dir/nmPrivate/application_1536250099280_0005/container_1536250099280_0005_02_000001/container_1536250099280_0005_02_000001.pid' is not a directory
[2018-09-06 18:51:38.421]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class .nm-local-dir.usercache.hadoop.appcache.application_1536250099280_0005.container_1536250099280_0005_02_000001.tmp
[2018-09-06 18:51:38.422]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class .nm-local-dir.usercache.hadoop.appcache.application_1536250099280_0005.container_1536250099280_0005_02_000001.tmp
For more detailed output, check the application tracking page: http://nameMaster:8088/cluster/app/application_1536250099280_0005 Then click on links to logs of each attempt.
. Failing the application.
2018-09-06 18:51:44,438 INFO mapreduce.Job: Counters: 0
Job job_1536250099280_0005 failed!
I'll add any information asked for, but I don't understand the problem and I don't want to flood the question with all the configuration files if they are not relevant.
In the HDFS file system there is no "/nm-local-dir/".
I don't understand where that path comes from.
Any help is warmly welcome.

HDFS is storage, YARN is compute. If you want to use your cluster for anything other than pure storage you'll need YARN, which means you'll need Node Managers (NMs).
Node Managers are the servers that actually execute tasks, so you need nm-local-dir defined in order to run jobs like pi. The nm-local-dir needs to be defined in yarn-site.xml and is a local directory (not HDFS!) on every host that runs a Node Manager.
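For illustration, a minimal sketch of the relevant yarn-site.xml entry (the property name is the standard YARN one; the path is only an example and must exist and be writable on every Node Manager host):
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/hadoop/yarn/nm-local-dir</value>
</property>
Restart the Node Managers after changing it so the new directory is picked up. By default this directory lives under hadoop.tmp.dir (${hadoop.tmp.dir}/nm-local-dir), which is likely why a bare path like /nm-local-dir/... shows up in your error when that setting resolves to nothing useful.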

Related

Hadoop 3.2 : No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster)

I have a local Hadoop 3.2 installation: 1 master + 1 worker, both running on my laptop. This is an experimental setup to run quick tests before submitting to a real cluster.
Everything is in good health:
$ jps
22326 NodeManager
21641 DataNode
25530 Jps
22042 ResourceManager
21803 SecondaryNameNode
21517 NameNode
$ hdfs fsck /
Connecting to namenode via http://master:9870/fsck?ugi=david&path=%2F
FSCK started by david (auth:SIMPLE) from /127.0.0.1 for path / at Wed Sep 04 13:54:59 CEST 2019
Status: HEALTHY
Number of data-nodes: 1
Number of racks: 1
Total dirs: 1
Total symlinks: 0
Replicated Blocks:
Total size: 0 B
Total files: 0
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 1
Average block replication: 0.0
Missing blocks: 0
Corrupt blocks: 0
Missing replicas: 0
Erasure Coded Block Groups:
Total size: 0 B
Total files: 0
Total block groups (validated): 0
Minimally erasure-coded block groups: 0
Over-erasure-coded block groups: 0
Under-erasure-coded block groups: 0
Unsatisfactory placement block groups: 0
Average block group size: 0.0
Missing block groups: 0
Corrupt block groups: 0
Missing internal blocks: 0
FSCK ended at Wed Sep 04 13:54:59 CEST 2019 in 0 milliseconds
The filesystem under path '/' is HEALTHY
When I run the provided Pi example, I get the following error:
$ yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar pi 16 1000
Number of Maps = 16
Samples per Map = 1000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
2019-09-04 13:55:47,665 INFO client.RMProxy: Connecting to ResourceManager at master/0.0.0.0:8032
2019-09-04 13:55:47,887 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001
2019-09-04 13:55:48,020 INFO input.FileInputFormat: Total input files to process : 16
2019-09-04 13:55:48,450 INFO mapreduce.JobSubmitter: number of splits:16
2019-09-04 13:55:48,508 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2019-09-04 13:55:49,000 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1567598091808_0001
2019-09-04 13:55:49,003 INFO mapreduce.JobSubmitter: Executing with tokens: []
2019-09-04 13:55:49,164 INFO conf.Configuration: resource-types.xml not found
2019-09-04 13:55:49,164 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2019-09-04 13:55:49,375 INFO impl.YarnClientImpl: Submitted application application_1567598091808_0001
2019-09-04 13:55:49,411 INFO mapreduce.Job: The url to track the job: http://cyclimse:8088/proxy/application_1567598091808_0001/
2019-09-04 13:55:49,412 INFO mapreduce.Job: Running job: job_1567598091808_0001
2019-09-04 13:55:55,477 INFO mapreduce.Job: Job job_1567598091808_0001 running in uber mode : false
2019-09-04 13:55:55,480 INFO mapreduce.Job: map 0% reduce 0%
2019-09-04 13:55:55,509 INFO mapreduce.Job: Job job_1567598091808_0001 failed with state FAILED due to: Application application_1567598091808_0001 failed 2 times due to AM Container for appattempt_1567598091808_0001_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2019-09-04 13:55:54.458]Exception from container-launch.
Container id: container_1567598091808_0001_02_000001
Exit code: 1
[2019-09-04 13:55:54.464]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2019-09-04 13:55:54.465]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
For more detailed output, check the application tracking page: http://cyclimse:8088/cluster/app/application_1567598091808_0001 Then click on links to logs of each attempt.
. Failing the application.
2019-09-04 13:55:55,546 INFO mapreduce.Job: Counters: 0
Job job_1567598091808_0001 failed!
It seems there is something wrong with the configuration of Log4j: No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster). However, it's using the default configuration ($HADOOP_CONF_DIR/log4j.properties).
After the execution, HDFS state looks like this:
$ hdfs fsck /
Connecting to namenode via http://master:9870/fsck?ugi=david&path=%2F
FSCK started by david (auth:SIMPLE) from /127.0.0.1 for path / at Wed Sep 04 14:01:43 CEST 2019
/tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001/job.jar: Under replicated BP-24234081-0.0.0.0-1567598050928:blk_1073741841_1017. Target Replicas is 10 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
/tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001/job.split: Under replicated BP-24234081-0.0.0.0-1567598050928:blk_1073741842_1018. Target Replicas is 10 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: HEALTHY
Number of data-nodes: 1
Number of racks: 1
Total dirs: 11
Total symlinks: 0
Replicated Blocks:
Total size: 510411 B
Total files: 20
Total blocks (validated): 20 (avg. block size 25520 B)
Minimally replicated blocks: 20 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 2 (10.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 1
Average block replication: 1.0
Missing blocks: 0
Corrupt blocks: 0
Missing replicas: 18 (47.36842 %)
Erasure Coded Block Groups:
Total size: 0 B
Total files: 0
Total block groups (validated): 0
Minimally erasure-coded block groups: 0
Over-erasure-coded block groups: 0
Under-erasure-coded block groups: 0
Unsatisfactory placement block groups: 0
Average block group size: 0.0
Missing block groups: 0
Corrupt block groups: 0
Missing internal blocks: 0
FSCK ended at Wed Sep 04 14:01:43 CEST 2019 in 5 milliseconds
The filesystem under path '/' is HEALTHY
As I didn't find any solution on the Internet about it, here I am :).

Unable to write to HDFS: WARN hdfs.DataStreamer - Unexpected EOF

I'm following a tutorial, and while running in a single-cluster test environment I suddenly cannot run any MR jobs or write data to HDFS. It worked well before, and now I keep getting the error below (rebooting didn't help).
I can read and delete files from HDFS, but not write.
$ hdfs dfs -put war-and-peace.txt /user/hands-on/
19/03/25 18:28:29 WARN hdfs.DataStreamer: Exception for BP-1098838250-127.0.0.1-1516469292616:blk_1073742374_1550
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:399)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1020)
put: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50010,DS-b90326de-a499-4a43-a66a-cc3da83ea966,DISK]] are bad. Aborting...
"hdfs dfsadmin -report" shows me everything is fine, enough disk space. I barely ran any jobs, just some test MRs and little test data.
$ hdfs dfsadmin -report
Configured Capacity: 52710469632 (49.09 GB)
Present Capacity: 43335585007 (40.36 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used: 1559791 (1.49 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 127.0.0.1:50010 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 52710469632 (49.09 GB)
DFS Used: 1559791 (1.49 MB)
Non DFS Used: 6690530065 (6.23 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used%: 0.00%
DFS Remaining%: 82.21%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Mon Mar 25 18:30:45 EDT 2019
Also, the NameNode Web UI (port 50070) shows me everything is fine, and the logs do not report any errors either. What could it be, and how can I properly troubleshoot it?
CentOS Linux 6.9 minimal
Apache Hadoop 2.8.1

Hadoop add new datanode fail when build cluster

I'm building a Hadoop cluster of two nodes, step by step, following the official documentation.
But the added datanode does not join the cluster in the Web UI: http://{host address}:50070/dfshealth.html#tab-datanode
With the following command:
[az-user#AZ-TEST1-SPARK-SLAVE ~]$ yarn node --list
17/11/27 09:16:04 INFO client.RMProxy: Connecting to ResourceManager at /10.0.4.12:8032
Total Nodes:2
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
AZ-TEST1-SPARK-MASTER:37164 RUNNING AZ-TEST1-SPARK-MASTER:8042 0
AZ-TEST1-SPARK-SLAVE:42608 RUNNING AZ-TEST1-SPARK-SLAVE:8042 0
It shows there are two nodes, but another command shows just one live datanode:
[az-user#AZ-TEST1-SPARK-SLAVE ~]$ hdfs dfsadmin -report
Configured Capacity: 1081063493632 (1006.82 GB)
Present Capacity: 1026027008000 (955.56 GB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used: 40960 (40 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 10.0.4.12:50010 (10.0.4.12)
Hostname: AZ-TEST1-SPARK-MASTER
Decommission Status : Normal
Configured Capacity: 1081063493632 (1006.82 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 97816576 (93.29 MB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.91%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Nov 27 09:22:36 UTC 2017
The command shows the same result on the master node.
Thanks for any advice.
Other messages:
The problem is similar to number-of-nodes-in-hadoop-cluster, but that solution did not work in my case.
I'm using bare IPs and did not configure the hosts file as usual.
Fixed
Use a host name on every node and in their configuration files.
In cluster mode, you must use host names rather than bare IPs.
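A minimal sketch of what that looks like (the slave's IP is a placeholder; the hostnames are the ones from the output above): every node gets the same /etc/hosts entries, and the worker list references hostnames instead of bare IPs.
# /etc/hosts on every node
10.0.4.12      AZ-TEST1-SPARK-MASTER
<slave-ip>     AZ-TEST1-SPARK-SLAVE
# etc/hadoop/slaves on the master (Hadoop 2.x)
AZ-TEST1-SPARK-MASTER
AZ-TEST1-SPARK-SLAVE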

hadoop error not able to place enough replicas

I am using Hadoop 1.2.1. It was active for about 2 years. Now the following errors have started to appear in the logs, and HBase 0.94.14 cannot connect to it.
NameNode Error:
2016-03-09 11:57:23,965 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1 to reach 2
Not able to place enough replicas
2016-03-09 11:57:23,965 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1 to reach 2
Not able to place enough replicas
2016-03-09 11:57:23,965 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1 to reach 2
Not able to place enough replicas
2016-03-09 11:57:26,966 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1 to reach 2
Not able to place enough replicas
And in the HBase master log file the error is like the following:
2016-03-09 11:16:31,192 INFO org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor: node1,12000,1457504177336.timeoutMonitor exiting
2016-03-09 11:16:31,193 INFO org.apache.hadoop.hbase.master.SplitLogManager$TimeoutMonitor: node1,12000,1457504177336.splitLogManagerTimeoutMonitor exiting
2016-03-09 11:16:31,192 INFO org.apache.hadoop.hbase.master.AssignmentManager$TimerUpdater: node1,12000,1457504177336.timerUpdater exiting
2016-03-09 11:16:31,218 INFO org.apache.zookeeper.ZooKeeper: Session: 0x2535a0114bb0001 closed
2016-03-09 11:16:31,218 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2016-03-09 11:16:31,218 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
2016-03-09 11:16:31,218 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:160)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2120)
Wed Mar 9 11:22:26 PKT 2016 Stopping hbase (via master)
Where is the problem? I have found a post which suggests that you should delete all data and format the namenode, but I cannot do that as I cannot back up the data.
This is the cluster summary:
Configured Capacity: 3293363527680 (3 TB)
Present Capacity: 2630143946752 (2.39 TB)
DFS Remaining: 1867333337088 (1.7 TB)
DFS Used: 762810609664 (710.42 GB)
DFS Used%: 29%
Under replicated blocks: 35
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)
Name: 10.11.21.44:50010
Decommission Status : Normal
Configured Capacity: 441499058176 (411.18 GB)
DFS Used: 246261780480 (229.35 GB)
Non DFS Used: 194947321856 (181.56 GB)
DFS Remaining: 289955840(276.52 MB)
DFS Used%: 55.78%
DFS Remaining%: 0.07%
Last contact: Thu Mar 10 15:20:15 PKT 2016
Name: 10.11.21.42:50010
Decommission Status : Normal
Configured Capacity: 2410365411328 (2.19 TB)
DFS Used: 304959569920 (284.02 GB)
Non DFS Used: 238646935552 (222.26 GB)
DFS Remaining: 1866758905856(1.7 TB)
DFS Used%: 12.65%
DFS Remaining%: 77.45%
Last contact: Thu Mar 10 15:20:15 PKT 2016
Name: 10.11.21.43:50010
Decommission Status : Normal
Configured Capacity: 441499058176 (411.18 GB)
DFS Used: 211589259264 (197.06 GB)
Non DFS Used: 229625323520 (213.86 GB)
DFS Remaining: 284475392(271.3 MB)
DFS Used%: 47.93%
DFS Remaining%: 0.06%
Last contact: Thu Mar 10 15:20:16 PKT 2016

Hadoop datanodes cannot find namenode in standalone setup

There are no errors in any log but I believe my datanode cannot find my namenode.
This is the error that leads me to this conclusion (according to what I've found online):
[INFO ]: org.apache.hadoop.ipc.Client - Retrying connect to server: /hadoop.server:9000. Already tried 4 time(s).
jps output:
7554 Jps
7157 NameNode
7419 SecondaryNameNode
7251 DataNode
Please can someone offer some advice?
Result of dfsadmin
Configured Capacity: 13613391872 (12.68 GB)
Present Capacity: 9255071744 (8.62 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used: 114688 (112 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 192.172.1.49:50010 (Hadoop)
Hostname: Hadoop
Decommission Status : Normal
Configured Capacity: 13613391872 (12.68 GB)
DFS Used: 114688 (112 KB)
Non DFS Used: 4358320128 (4.06 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used%: 0.00%
DFS Remaining%: 67.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Fri Aug 08 17:25:57 SAST 2014
Give a hostname to your machines and make entries for them in the /etc/hosts file, like this:
#hostname hdserver.example.com
#vim /etc/hosts
192.168.0.25 hdserver.example.com
192.168.0.30 hdclient.example.com
and save it. (Use the correct IP addresses.)
On the client, also set the hostname hdclient.example.com and make the above entries in /etc/hosts. This will help name resolution locate the machines by their hostnames.
Delete all contents from the tmp folder: rm -Rf path/of/tmp/directory
Format the namenode: bin/hadoop namenode -format
Start all processes again: bin/start-all.sh
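As a sketch of how the steps fit together (the hostname and port follow the example above; very old releases use fs.default.name instead of fs.defaultFS), core-site.xml on both the namenode and the datanode should point at the hostname that /etc/hosts resolves, not at a bare IP:
<!-- core-site.xml on the namenode and every datanode -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hdserver.example.com:9000</value>
</property>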
