Recently I formatted the NameNode (hadoop namenode -format), but when I started HDFS I could not upload any data to it, so I deleted the DataNode directory to make sure the DataNodes have the same namespace.
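(For reference: after a reformat the NameNode gets a new clusterID, and DataNodes reject storage directories whose VERSION file carries the old one. The two IDs can be compared directly; the paths below are illustrative and correspond to dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml:)
$ grep clusterID /dfs/nn/current/VERSION    # NameNode side
$ grep clusterID /dfs/dn/current/VERSION    # DataNode side; the two must match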
But when I run hdfs dfsadmin -report, something strange shows up:
Live datanodes (3):
Name: 192.168.0.30:50010 (hadoop1)
Hostname: hadoop1
Rack: /default
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 8192 (8 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Thu Mar 22 14:10:41 CST 2018
All the DataNodes report DFS Used 100% and DFS Remaining 0%, yet there is still free space on the disk:
[root@hadoop1 nn]# df -h /dfs/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 70G 55G 16G 78% /
When I open the NameNode web page, the capacity also shows as zero.
Any ideas?
Cheers
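(A note for anyone seeing the same thing: a Configured Capacity of 0 generally means the DataNode could not use its storage directory at all, so it is worth confirming which directory it is configured with and that it exists and is writable. Just a sketch; the second path is illustrative:)
$ hdfs getconf -confKey dfs.datanode.data.dir
$ ls -ld /dfs/dn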
I'm following a tutorial, and while running a single-node test environment I suddenly cannot run any MR jobs or write data to HDFS. It worked fine before, and now I keep getting the error below (rebooting didn't help).
I can read and delete files from HDFS, but not write to it.
$ hdfs dfs -put war-and-peace.txt /user/hands-on/
19/03/25 18:28:29 WARN hdfs.DataStreamer: Exception for BP-1098838250-127.0.0.1-1516469292616:blk_1073742374_1550
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:399)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1020)
put: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50010,DS-b90326de-a499-4a43-a66a-cc3da83ea966,DISK]] are bad. Aborting...
"hdfs dfsadmin -report" shows me everything is fine, enough disk space. I barely ran any jobs, just some test MRs and little test data.
$ hdfs dfsadmin -report
Configured Capacity: 52710469632 (49.09 GB)
Present Capacity: 43335585007 (40.36 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used: 1559791 (1.49 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 127.0.0.1:50010 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 52710469632 (49.09 GB)
DFS Used: 1559791 (1.49 MB)
Non DFS Used: 6690530065 (6.23 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used%: 0.00%
DFS Remaining%: 82.21%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Mon Mar 25 18:30:45 EDT 2019
The NameNode WebUI (port 50070) also shows that everything is fine, and the logs do not report any errors either. What could it be, and how can I properly troubleshoot it?
CentOS Linux 6.9 minimal
Apache Hadoop 2.8.1
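One place worth looking that neither the report nor the WebUI covers is the DataNode's own log around the time of a failed write, along with the OS file-descriptor limit, which sometimes causes pipeline write failures. This is a guess at where to start, not a confirmed diagnosis, and the log path assumes the default layout:
$ tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
$ ulimit -n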
I'm building a Hadoop cluster of two nodes, step by step, following the official documentation.
But the added DataNode does not join the cluster in the Web UI at http://{host address}:50070/dfshealth.html#tab-datanode.
With this command:
[az-user@AZ-TEST1-SPARK-SLAVE ~]$ yarn node --list
17/11/27 09:16:04 INFO client.RMProxy: Connecting to ResourceManager at /10.0.4.12:8032
Total Nodes:2
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
AZ-TEST1-SPARK-MASTER:37164 RUNNING AZ-TEST1-SPARK-MASTER:8042 0
AZ-TEST1-SPARK-SLAVE:42608 RUNNING AZ-TEST1-SPARK-SLAVE:8042 0
It shows there are two nodes, but another command shows just one live DataNode:
[az-user@AZ-TEST1-SPARK-SLAVE ~]$ hdfs dfsadmin -report
Configured Capacity: 1081063493632 (1006.82 GB)
Present Capacity: 1026027008000 (955.56 GB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used: 40960 (40 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 10.0.4.12:50010 (10.0.4.12)
Hostname: AZ-TEST1-SPARK-MASTER
Decommission Status : Normal
Configured Capacity: 1081063493632 (1006.82 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 97816576 (93.29 MB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.91%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Nov 27 09:22:36 UTC 2017
The command shows the same result on the master node.
Thanks for any advice.
Other details:
The problem is similar to number-of-nodes-in-hadoop-cluster, but that fix does not work in my case.
I'm using bare IPs and have not configured hostnames in a hosts file as one usually would.
Fixed
Use hostnames on every node and in their configuration files.
In cluster mode, you must use hostnames rather than bare IPs, as sketched below.
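A minimal sketch of what that means in practice, using the node names from the report above (the slave IP and the NameNode port are illustrative): put both names in /etc/hosts on every node,
10.0.4.12    AZ-TEST1-SPARK-MASTER
10.0.4.13    AZ-TEST1-SPARK-SLAVE
and reference the hostname rather than the IP in the configuration, e.g. fs.defaultFS in core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://AZ-TEST1-SPARK-MASTER:9000</value>
</property>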
Command:
[hdfs@sandbox oozie]$ hadoop dfsadmin -report | head -n 100
Output:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 44716605440 (41.65 GB)
Present Capacity: 31614091245 (29.44 GB)
DFS Remaining: 30519073792 (28.42 GB)
DFS Used: 1095017453 (1.02 GB)
DFS Used%: 3.46%
Under replicated blocks: 657
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (1):
Name: 10.0.2.15:50010 (sandbox.hortonworks.com)
Hostname: sandbox.hortonworks.com
Decommission Status : Normal
Configured Capacity: 44716605440 (41.65 GB)
DFS Used: 1095017453 (1.02 GB)
Non DFS Used: 13102514195 (12.20 GB)
DFS Remaining: 30519073792 (28.42 GB)
DFS Used%: 2.45%
DFS Remaining%: 68.25%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 4
Last contact: Thu Aug 11 23:12:04 UTC 2016
What exactly are Cache Used% and Non DFS Used?
The hdfs dfsadmin -report command:
Reports basic filesystem information and statistics. Optional flags may be used to filter the list of displayed DataNodes.
(from the official Hadoop documentation)
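As the deprecation notice in the output above suggests, the equivalent modern invocation is:
$ hdfs dfsadmin -report | head -n 100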
About Cache Used%:
It is a percentage of the Configured Cache Capacity. Since you have not configured any space for the cache, it is shown as 100% (0 B out of 0 B).
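If you want the cache counters to be meaningful, cache capacity comes from dfs.datanode.max.locked.memory in hdfs-site.xml (a sketch with an illustrative value in bytes; the OS memlock ulimit must also permit it):
<property>
  <name>dfs.datanode.max.locked.memory</name>
  <value>268435456</value>
</property>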
About Non DFS Used:
It is calculated with the following formula:
Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
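Plugging in the numbers from the report above:
Non DFS Used = 44716605440 - 1095017453 - 30519073792
             = 13102514195 (12.20 GB)
which matches the Non DFS Used line reported for sandbox.hortonworks.com.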
I am building a prototype using CDH 5.1. I am using a 3 node cluster.
While uploading data from the database to HDFS, the ResourceManager server aborted unexpectedly.
Now I see a file of size 3 GB in the HDFS.
Issue:
The total number of rows in the DB for the query is 42600000. The server stopped after 7641242 rows had been transferred. I am using Talend to do the ETL. I know that this time I will not be able to do much other than start the process all over again.
Is there a way to mitigate this issue in the future?
Update after running the dfsadmin command:
sudo -u hdfs hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 338351861760 (315.11 GB)
Present Capacity: 314764926976 (293.15 GB)
DFS Remaining: 303901577216 (283.03 GB)
DFS Used: 10863349760 (10.12 GB)
DFS Used%: 3.45%
Under replicated blocks: 1
Blocks with corrupt replicas: 1
Missing blocks: 0
-------------------------------------------------
Live datanodes (3):
Name: 10.215.204.196:50010 (txwlcloud3)
Hostname: txwlcloud3
Rack: /default
Decommission Status : Normal
Configured Capacity: 112783953920 (105.04 GB)
DFS Used: 3623538688 (3.37 GB)
Non DFS Used: 2971234304 (2.77 GB)
DFS Remaining: 106189180928 (98.90 GB)
DFS Used%: 3.21%
DFS Remaining%: 94.15%
Configured Cache Capacity: 522190848 (498 MB)
Cache Used: 0 (0 B)
Cache Remaining: 522190848 (498 MB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Last contact: Tue Sep 30 10:55:11 CDT 2014
Name: 10.215.204.203:50010 (txwlcloud2)
Hostname: txwlcloud2
Rack: /default
Decommission Status : Normal
Configured Capacity: 112783953920 (105.04 GB)
DFS Used: 3645382656 (3.40 GB)
Non DFS Used: 2970497024 (2.77 GB)
DFS Remaining: 106168074240 (98.88 GB)
DFS Used%: 3.23%
DFS Remaining%: 94.13%
Configured Cache Capacity: 815792128 (778 MB)
Cache Used: 0 (0 B)
Cache Remaining: 815792128 (778 MB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Last contact: Tue Sep 30 10:55:10 CDT 2014
Name: 10.215.204.213:50010 (txwlcloud1)
Hostname: txwlcloud1
Rack: /default
Decommission Status : Normal
Configured Capacity: 112783953920 (105.04 GB)
DFS Used: 3594428416 (3.35 GB)
Non DFS Used: 17645203456 (16.43 GB)
DFS Remaining: 91544322048 (85.26 GB)
DFS Used%: 3.19%
DFS Remaining%: 81.17%
Configured Cache Capacity: 3145728 (3 MB)
Cache Used: 0 (0 B)
Cache Remaining: 3145728 (3 MB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Last contact: Tue Sep 30 10:55:10 CDT 2014
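Given the "Blocks with corrupt replicas: 1" in the summary above, hdfs fsck can identify the affected file (standard tool, run here as the hdfs user just like dfsadmin):
$ sudo -u hdfs hdfs fsck / -list-corruptfileblocks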
There are no errors in any log, but I believe my DataNode cannot find my NameNode.
This is the error that leads me to that conclusion (based on what I've found online):
[INFO ]: org.apache.hadoop.ipc.Client - Retrying connect to server: /hadoop.server:9000. Already tried 4 time(s).
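(A quick way to confirm whether that address and port are reachable at all, using the hostname from the log line; nc ships with most distributions:)
$ nc -zv hadoop.server 9000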
jps output:
7554 Jps
7157 NameNode
7419 SecondaryNameNode
7251 DataNode
Can someone please offer some advice?
Result of dfsadmin:
Configured Capacity: 13613391872 (12.68 GB)
Present Capacity: 9255071744 (8.62 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used: 114688 (112 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 192.172.1.49:50010 (Hadoop)
Hostname: Hadoop
Decommission Status : Normal
Configured Capacity: 13613391872 (12.68 GB)
DFS Used: 114688 (112 KB)
Non DFS Used: 4358320128 (4.06 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used%: 0.00%
DFS Remaining%: 67.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Fri Aug 08 17:25:57 SAST 2014
Give each machine a hostname and add entries for them in the /etc/hosts file, like this:
#hostname hdserver.example.com
#vim /etc/hosts
192.168.0.25 hdserver.example.com
192.168.0.30 hdclient.example.com
and save it (use the correct IP addresses).
On the client, likewise set the hostname to hdclient.example.com and make the same entries in /etc/hosts. This lets name resolution locate the machines by hostname.
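A quick sanity check that the names now resolve, run from either machine:
# ping -c 1 hdserver.example.com
# ping -c 1 hdclient.example.com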
Then delete all contents from the tmp folder (this wipes HDFS data, so only do it on a fresh or disposable cluster): rm -Rf path/of/tmp/directory
Format the NameNode: bin/hadoop namenode -format
Start all processes again: bin/start-all.sh