I'm following a tutorial, and while running in a single-node test environment I suddenly cannot run any MR jobs or write data to HDFS. It worked fine before, but now I keep getting the error below (rebooting didn't help).
I can read and delete files from HDFS, but not write.
$ hdfs dfs -put war-and-peace.txt /user/hands-on/
19/03/25 18:28:29 WARN hdfs.DataStreamer: Exception for BP-1098838250-127.0.0.1-1516469292616:blk_1073742374_1550
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:399)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1020)
put: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50010,DS-b90326de-a499-4a43-a66a-cc3da83ea966,DISK]] are bad. Aborting...
"hdfs dfsadmin -report" shows me everything is fine, enough disk space. I barely ran any jobs, just some test MRs and little test data.
$ hdfs dfsadmin -report
Configured Capacity: 52710469632 (49.09 GB)
Present Capacity: 43335585007 (40.36 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used: 1559791 (1.49 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 127.0.0.1:50010 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 52710469632 (49.09 GB)
DFS Used: 1559791 (1.49 MB)
Non DFS Used: 6690530065 (6.23 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used%: 0.00%
DFS Remaining%: 82.21%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Mon Mar 25 18:30:45 EDT 2019
The NameNode Web UI (port 50070) also shows that everything is fine, and the logs do not report any errors either. What could it be, and how can I properly troubleshoot it?
CentOS Linux 6.9 minimal
Apache Hadoop 2.8.1
Related
I'm building a Hadoop cluster with two nodes, step by step, following the official documentation.
But the added DataNode does not join the cluster in the Web UI: http://{host address}:50070/dfshealth.html#tab-datanode
With this command:
[az-user@AZ-TEST1-SPARK-SLAVE ~]$ yarn node --list
17/11/27 09:16:04 INFO client.RMProxy: Connecting to ResourceManager at /10.0.4.12:8032
Total Nodes:2
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
AZ-TEST1-SPARK-MASTER:37164 RUNNING AZ-TEST1-SPARK-MASTER:8042 0
AZ-TEST1-SPARK-SLAVE:42608 RUNNING AZ-TEST1-SPARK-SLAVE:8042 0
It shows there are two nodes, but another command shows just one live datanode:
[az-user@AZ-TEST1-SPARK-SLAVE ~]$ hdfs dfsadmin -report
Configured Capacity: 1081063493632 (1006.82 GB)
Present Capacity: 1026027008000 (955.56 GB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used: 40960 (40 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 10.0.4.12:50010 (10.0.4.12)
Hostname: AZ-TEST1-SPARK-MASTER
Decommission Status : Normal
Configured Capacity: 1081063493632 (1006.82 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 97816576 (93.29 MB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.91%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Nov 27 09:22:36 UTC 2017
The command shows the same result on the master node.
Thanks for any advice.
Other messages
The problem is similar to number-of-nodes-in-hadoop-cluster, but that solution did not work in my case.
I'm using bare IPs rather than configuring hostnames in a hosts file as usual.
Fixed
Use hostnames on every node and in their configuration files.
In cluster mode, you must use hostnames rather than bare IPs.
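A minimal sketch of what that fix looks like (the slave IP 10.0.4.13 and the NameNode port 9000 are illustrative placeholders; substitute your own values): map every node's hostname in /etc/hosts on all machines, then reference hostnames instead of bare IPs in the Hadoop config.
# /etc/hosts (on every node)
10.0.4.12 AZ-TEST1-SPARK-MASTER
10.0.4.13 AZ-TEST1-SPARK-SLAVE
# core-site.xml: point fs.defaultFS at the hostname, not the bare IP
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://AZ-TEST1-SPARK-MASTER:9000</value>
</property>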
I have seen lots of answers on SO and on Quora, along with many websites. Some problems were solved by configuring the firewall for the slaves' IPs, and some said it was a UI glitch. I am confused. I have two datanodes: one is a pure datanode and the other is NameNode + DataNode. The problem is that when I open <master-ip>:50075 it shows only one datanode (the one on the machine that also runs the namenode). But hdfs dfsadmin -report shows I have two datanodes, and after starting Hadoop on my master, if I run jps on the pure-datanode (slave) machine, I can see the datanode running.
The firewall on both machines is off; sudo ufw status verbose gives Status: inactive. The same scenario occurs with Spark: the Spark UI shows the worker on the node that runs the master, not the pure worker node, even though a worker is running on the pure-worker machine. Again, is this a UI glitch or am I missing something?
hdfs dfsadmin -report
Configured Capacity: 991216451584 (923.14 GB)
Present Capacity: 343650484224 (320.05 GB)
DFS Remaining: 343650418688 (320.05 GB)
DFS Used: 65536 (64 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (2):
Name: 10.10.10.105:50010 (ekbana)
Hostname: ekbana
Decommission Status : Normal
Configured Capacity: 24690192384 (22.99 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 7112691712 (6.62 GB)
DFS Remaining: 16299675648 (15.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 66.02%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 25 04:27:36 EDT 2017
Name: 110.44.111.147:50010 (saque-slave-ekbana)
Hostname: ekbana
Decommission Status : Normal
Configured Capacity: 966526259200 (900.15 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 590055215104 (549.53 GB)
DFS Remaining: 327350743040 (304.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 33.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 25 04:27:36 EDT 2017
/etc/hadoop/masters file on master node
ekbana
/etc/hadoop/slaves file on master node
ekbana
saque-slave-ekbana
/etc/hadoop/masters file on slave node
saque-master
Note: saque-master on the slave machine and ekbana on the master machine are mapped to the same IP.
Also, the UI looks similar to the one in this question.
It's because both machines report the same hostname (ekbana).
So the UI will show only one entry for the same hostname.
If you want to confirm this, start only the datanode that is not on the master; you will then see an entry for it in the UI.
If you start the other datanode too, it will mask the second entry for the same hostname.
You can change the hostname and try again.
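If you do change it, a minimal sketch (assuming a systemd-based distro; the new name saque-slave is illustrative):
# on the slave machine, give it a unique hostname
sudo hostnamectl set-hostname saque-slave
# map the new name to the slave's own IP in /etc/hosts, then restart the datanode
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode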
I also faced a similar issue, where I couldn't see the datanode information on the dfshealth.html page. I had two hosts, named master and slave.
etc/hadoop/masters (on master machine)
master
etc/hadoop/slaves
master
slave
etc/hadoop/masters (slave machine)
master
etc/hadoop/slaves
slave
With that, I was able to see the datanodes in the UI.
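Note that after editing the masters/slaves files, HDFS has to be restarted for the new worker list to take effect; assuming the standard sbin scripts are on your PATH:
stop-dfs.sh
start-dfs.sh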
Command:
[hdfs@sandbox oozie]$ hadoop dfsadmin -report|head -n 100
Output:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 44716605440 (41.65 GB)
Present Capacity: 31614091245 (29.44 GB)
DFS Remaining: 30519073792 (28.42 GB)
DFS Used: 1095017453 (1.02 GB)
DFS Used%: 3.46%
Under replicated blocks: 657
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (1):
Name: 10.0.2.15:50010 (sandbox.hortonworks.com)
Hostname: sandbox.hortonworks.com
Decommission Status : Normal
Configured Capacity: 44716605440 (41.65 GB)
DFS Used: 1095017453 (1.02 GB)
Non DFS Used: 13102514195 (12.20 GB)
DFS Remaining: 30519073792 (28.42 GB)
DFS Used%: 2.45%
DFS Remaining%: 68.25%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 4
Last contact: Thu Aug 11 23:12:04 UTC 2016
What exactly do Cache Used% and Non DFS Used mean?
The hdfs dfsadmin -report command:
"Reports basic filesystem information and statistics. Optional flags may be used to filter the list of displayed DataNodes."
(from the official Hadoop documentation)
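As an example of those optional flags, recent 2.x releases let you filter the report to live or dead datanodes (check hdfs dfsadmin -help on your version to confirm they are supported):
hdfs dfsadmin -report -live
hdfs dfsadmin -report -dead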
About Cache Used%:
It depends on "Configured Cache Capacity" and is a percentage of that configured value. Since you have not configured any space for the cache (it is only used if you enable HDFS centralized cache management, e.g. via dfs.datanode.max.locked.memory), it is shown as 100% (0 B out of 0 B).
About Non DFS Used:
It is calculated using the following formula:
Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
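To check this against the report above: 44716605440 - 1095017453 - 30519073792 = 13102514195 bytes, which is exactly the Non DFS Used value (12.20 GB) reported for the datanode.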
I tried the Hadoop command hadoop fs -mkdir /in from the folder C:\hwork, but it did not work properly. Please help me solve this.
You should be able to visit the namenode UI before doing any filesystem operations to verify your HDFS cluster is actually working. The error message you have given indicates that it is not.
The namenode UI is typically available on http://localhost:50070/dfshealth.jsp.
You can also verify that HDFS is working properly by running the following command:
hdfs dfsadmin -report
If "Safe Mode is ON", then things are not running properly. You should also have a non-zero configured capacity and at least one datanode which is available.
A healthy pseudo-distributed HDFS environment should give a report that looks something like this:
Configured Capacity: 41746268160 (38.88 GB)
Present Capacity: 34658451456 (32.28 GB)
DFS Remaining: 34655678464 (32.28 GB)
DFS Used: 2772992 (2.64 MB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 192.168.10.129:50010 (datanode)
Hostname: datanode
Decommission Status : Normal
Configured Capacity: 41746268160 (38.88 GB)
DFS Used: 2772992 (2.64 MB)
Non DFS Used: 7087816704 (6.60 GB)
DFS Remaining: 34655678464 (32.28 GB)
DFS Used%: 0.01%
DFS Remaining%: 83.02%
Last contact: Thu May 07 16:51:50 UTC 2015
There are no errors in any log, but I believe my datanode cannot find my namenode.
This is the error that leads me to this conclusion (according to what I've found online):
[INFO ]: org.apache.hadoop.ipc.Client - Retrying connect to server: /hadoop.server:9000. Already tried 4 time(s).
jps output:
7554 Jps
7157 NameNode
7419 SecondaryNameNode
7251 DataNode
Can someone please offer some advice?
Result of hdfs dfsadmin -report:
Configured Capacity: 13613391872 (12.68 GB)
Present Capacity: 9255071744 (8.62 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used: 114688 (112 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 192.172.1.49:50010 (Hadoop)
Hostname: Hadoop
Decommission Status : Normal
Configured Capacity: 13613391872 (12.68 GB)
DFS Used: 114688 (112 KB)
Non DFS Used: 4358320128 (4.06 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used%: 0.00%
DFS Remaining%: 67.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Fri Aug 08 17:25:57 SAST 2014
Give each machine a hostname and add entries for them in the /etc/hosts file, like this:
#hostname hdserver.example.com
#vim /etc/hosts
192.168.0.25 hdserver.example.com
192.168.0.30 hdclient.example.com
and save it (use the correct IP addresses for your machines).
On the client, likewise set the hostname to hdclient.example.com and add the same entries to /etc/hosts. This will let name resolution locate the machines by hostname.
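Before restarting the daemons, you can verify that the datanode machine resolves the new name and can reach the namenode's RPC port (9000 here, per the error above); assuming nc is available:
ping -c 1 hdserver.example.com
nc -zv hdserver.example.com 9000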
Then delete all contents of the tmp folder: rm -Rf path/of/tmp/directory
Format the namenode: bin/hadoop namenode -format
Start all processes again: bin/start-all.sh
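Note that formatting regenerates the NameNode's clusterID and wipes the HDFS metadata, which is why the tmp/data directories must be cleared first: a datanode that kept the old clusterID would refuse to register with the freshly formatted namenode. On a disposable test cluster, the full sequence is roughly (stop the daemons first; the tmp path is a placeholder as above):
bin/stop-all.sh
rm -Rf path/of/tmp/directory
bin/hadoop namenode -format
bin/start-all.sh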