Hadoop: the IP of the slave is not correct

1. Hosts configuration:
127.0.0.1 localhost
192.168.1.3 master
172.16.226.129 slave1
2. slaves file:
slave1
3. jps output:
zqj@master:/usr/local/nodetmp$ jps
5377 Jps
4950 SecondaryNameNode
4728 NameNode
5119 ResourceManager
zqj@slave1:/usr/local/hadooptmp$ jps
2514 NodeManager
2409 DataNode
2639 Jps
4. hadoop dfsadmin -report:
zqj@master:/usr/local/nodetmp$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 22588977152 (21.04 GB)
Present Capacity: 16719790080 (15.57 GB)
DFS Remaining: 16719765504 (15.57 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (1):
Name: 192.168.1.3:50010 (master)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 22588977152 (21.04 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5869187072 (5.47 GB)
DFS Remaining: 16719765504 (15.57 GB)
DFS Used%: 0.00%
DFS Remaining%: 74.02%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 30 17:29:01 CST 2017
Datanode as shown in the web UI at localhost:50070
I want to know why the IP is not correct when the namenode is on the real machine and the datanode is in the virtual machine. Thanks!
When I use the virtual machine as the namenode, everything works well and the IP is correct. Is there anything that needs to be configured in VMware, such as the gateway or IP?

Put your slaves file on the master node (NOT on the slave nodes), and make sure the hosts configuration is on the master node as well. This will fix the issue; a sketch of the expected layout is below.
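For illustration only, here is a minimal sketch of what the master's files might look like, assuming a default Hadoop 2.x layout under $HADOOP_HOME/etc/hadoop; the hostnames and IPs are copied from the question.
# /etc/hosts on the master (and kept identical on slave1)
127.0.0.1       localhost
192.168.1.3     master
172.16.226.129  slave1
# $HADOOP_HOME/etc/hadoop/slaves on the master -- one worker hostname per line
slave1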

Related

Hadoop add new datanode fail when build cluster

I'm building a Hadoop cluster with two nodes, step by step, following the official documentation.
But the newly added datanode does not join the cluster in the Web UI: http://{host address}:50070/dfshealth.html#tab-datanode
With the command:
[az-user@AZ-TEST1-SPARK-SLAVE ~]$ yarn node --list
17/11/27 09:16:04 INFO client.RMProxy: Connecting to ResourceManager at /10.0.4.12:8032
Total Nodes:2
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
AZ-TEST1-SPARK-MASTER:37164 RUNNING AZ-TEST1-SPARK-MASTER:8042 0
AZ-TEST1-SPARK-SLAVE:42608 RUNNING AZ-TEST1-SPARK-SLAVE:8042 0
It shows there are two nodes, but another command shows only one live datanode:
[az-user@AZ-TEST1-SPARK-SLAVE ~]$ hdfs dfsadmin -report
Configured Capacity: 1081063493632 (1006.82 GB)
Present Capacity: 1026027008000 (955.56 GB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used: 40960 (40 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 10.0.4.12:50010 (10.0.4.12)
Hostname: AZ-TEST1-SPARK-MASTER
Decommission Status : Normal
Configured Capacity: 1081063493632 (1006.82 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 97816576 (93.29 MB)
DFS Remaining: 1026026967040 (955.56 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.91%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Nov 27 09:22:36 UTC 2017
The command shows the same result on the master node.
Thanks for any advice.
Other notes:
The problem is similar to number-of-nodes-in-hadoop-cluster, but that fix did not work in my case.
I was using bare IPs and had not configured hostnames in the hosts file as usual.
Fixed:
Use hostnames on every node and in their configuration files.
In cluster mode, you must use hostnames rather than bare IPs; a sketch of such a configuration follows.
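As a hedged illustration only: the hostnames AZ-TEST1-SPARK-MASTER / AZ-TEST1-SPARK-SLAVE and the master IP 10.0.4.12 come from the output above, while the slave IP is a made-up placeholder. The idea is that the configuration files reference hostnames rather than bare IPs, roughly like this:
# /etc/hosts on both nodes
10.0.4.12   AZ-TEST1-SPARK-MASTER
10.0.4.13   AZ-TEST1-SPARK-SLAVE    # hypothetical slave IP
<!-- core-site.xml on both nodes: point HDFS at the master's hostname, not its IP -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://AZ-TEST1-SPARK-MASTER:9000</value>
</property>
<!-- yarn-site.xml on both nodes: same idea for the ResourceManager -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>AZ-TEST1-SPARK-MASTER</value>
</property>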

Hadoop: two datanodes but UI shows one and Spark: two workers UI shows one

I have seen lots of answers on SO and on Quora, along with many websites. Some problems were solved by configuring the firewall for the slaves' IPs; some said it was a UI glitch. I am confused. I have two datanodes: one is a pure datanode and the other is namenode + datanode. The problem is that when I open <master-ip>:50075 it shows only one datanode (the one on the machine that also has the namenode), but hdfs dfsadmin -report shows I have two datanodes, and after starting Hadoop on my master, if I run jps on my pure-datanode (slave) machine, I can see the datanode running.
The firewall on both machines is off; sudo ufw status verbose gives Status: inactive. The same scenario occurs with Spark: the Spark UI shows the worker on the master node rather than the pure worker node, even though a worker is running on the pure-worker machine. Again, is this a UI glitch or am I missing something?
hdfs dfsadmin -report
Configured Capacity: 991216451584 (923.14 GB)
Present Capacity: 343650484224 (320.05 GB)
DFS Remaining: 343650418688 (320.05 GB)
DFS Used: 65536 (64 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (2):
Name: 10.10.10.105:50010 (ekbana)
Hostname: ekbana
Decommission Status : Normal
Configured Capacity: 24690192384 (22.99 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 7112691712 (6.62 GB)
DFS Remaining: 16299675648 (15.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 66.02%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 25 04:27:36 EDT 2017
Name: 110.44.111.147:50010 (saque-slave-ekbana)
Hostname: ekbana
Decommission Status : Normal
Configured Capacity: 966526259200 (900.15 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 590055215104 (549.53 GB)
DFS Remaining: 327350743040 (304.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 33.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 25 04:27:36 EDT 2017
/etc/hadoop/masters file on master node
ekbana
/etc/hadoop/slaves file on master node
ekbana
saque-slave-ekbana
/etc/hadoop/masters file on slave node
saque-master
Note: saque-master on the slave machine and ekbana on the master machine are mapped to the same IP.
Also, the UI looks similar to this question's UI.
It's because of the same hostname (ekbana).
So in the UI it will show only one entry for that hostname.
If you want to confirm this, start only the datanode that is not on the master; you will see an entry for it in the UI.
If you then start the other datanode as well, it will mask the second entry for the same hostname.
You can change the hostname and try, as sketched below.
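For illustration, a rough sketch of giving the slave machine its own hostname and updating /etc/hosts to match. The hostname saque-slave-ekbana and the IPs are taken from the report above; the hostnamectl command assumes a systemd-based distribution:
# on the slave machine (as root), assuming a systemd-based distro
hostnamectl set-hostname saque-slave-ekbana
# /etc/hosts on both machines -- each node gets its own hostname
10.10.10.105     ekbana
110.44.111.147   saque-slave-ekbana
# then restart HDFS so the datanode re-registers under the new hostname
stop-dfs.sh && start-dfs.sh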
I also faced a similar issue, where I couldn't see datanode information on the dfshealth.html page. I had two hosts, named master and slave.
etc/hadoop/masters (on the master machine)
master
etc/hadoop/slaves (on the master machine)
master
slave
etc/hadoop/masters (on the slave machine)
master
etc/hadoop/slaves (on the slave machine)
slave
With this setup I was able to see the datanodes in the UI.

Hadoop - 3 Data nodes are alive up and running but the report/url is not showing the live data nodes

I have one Name Node (master node) and 3 Data Nodes (slave nodes). I have configured a data node on the Name Node itself, which is working fine and showing up in the report. All the daemons are up and running individually, but the 3 Data Nodes (slave nodes) are not listed in hadoop dfsadmin -report.
When jps is run, everything looks good:
Name Node
[hadoop@master ~]$ jps
4338 Jps
2114 NameNode
2420 SecondaryNameNode
2696 NodeManager
2584 ResourceManager
2220 DataNode
Slave Node
[hadoop@slave1 ~]$ jps
2114 NodeManager
2229 Jps
2015 DataNode
Slave Node
[hadoop@slave2 ~]$ jps
2114 NodeManager
2229 Jps
2015 DataNode
Slave Node
[hadoop@slave3 ~]$ jps
2114 NodeManager
2229 Jps
2015 DataNode
[hadoop@master ~]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
16/07/14 21:27:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 7092494336 (6.61 GB)
Present Capacity: 1852854272 (1.73 GB)
DFS Remaining: 1852821504 (1.73 GB)
DFS Used: 32768 (32 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Live datanodes (1):
Name: 192.168.1.160:50010 (nn1)    # note: this is the data node configured on the name node itself
Hostname: nn1
Decommission Status : Normal
Configured Capacity: 7092494336 (6.61 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 5239640064 (4.88 GB)
DFS Remaining: 1852821504 (1.73 GB)
DFS Used%: 0.00%
DFS Remaining%: 26.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Jul 14 21:27:46 IST 2016
This issue is resolved. The problem is simply that the Data Nodes (slave nodes) are not able to communicate with the Master Node, because the firewall on the master node is not accepting incoming connections from the data nodes. There are two ways to handle the situation:
Allow incoming connections from the IPs of the slave nodes on the master node (a sketch follows the commands below).
Disable the firewall.
I went with the 2nd option.
Type the following commands on the master node to disable the firewall:
service iptables save
service iptables stop
chkconfig iptables off
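For completeness, a hedged sketch of the 1st option on the same iptables-based setup. The slave IPs below are hypothetical placeholders, and port 9000 is only the common default namenode RPC port; adjust both to your cluster:
# on the master node, allow each slave node's IP (example IPs only)
iptables -A INPUT -s 192.168.1.161 -j ACCEPT
iptables -A INPUT -s 192.168.1.162 -j ACCEPT
iptables -A INPUT -s 192.168.1.163 -j ACCEPT
# or restrict to the namenode RPC port only
# iptables -A INPUT -s 192.168.1.161 -p tcp --dport 9000 -j ACCEPT
service iptables save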

Hadoop: Conflict in JPS and hdfs admin report for checking the number of available data nodes

I am working on a five-node Hadoop multi-node cluster. After setting up the cluster, I used the jps command to check whether all of the nodes were properly connected or not. Following are the results after running the jps command on the master node and on each of the other four slave nodes respectively.
master node
8825 SecondaryNameNode
8647 DataNode
9105 NodeManager
9418 Jps
8493 NameNode
8971 ResourceManager
slave nodes
1816 NodeManager
1711 DataNode
2154 Jps
But when I checked with the command hdfs dfsadmin -report, I got the following result:
Configured Capacity: 242317230080 (225.68 GB)
Present Capacity: 224333357056 (208.93 GB)
DFS Remaining: 224333332480 (208.93 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 127.0.0.1:50010 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 242317230080 (225.68 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 17983873024 (16.75 GB)
DFS Remaining: 224333332480 (208.93 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.58%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
I am unable to understand why the number of available data nodes is shown as 1 in the above report. Also, my program is running very slowly, so I guess only one of the datanodes is active. Kindly explain the cause behind this anomaly.

Hadoop datanodes cannot find namenode in standalone setup

There are no errors in any log but I believe my datanode cannot find my namenode.
This is the error that leads me to this conclusion (according to what I've found online):
[INFO ]: org.apache.hadoop.ipc.Client - Retrying connect to server: /hadoop.server:9000. Already tried 4 time(s).
jps output:
7554 Jps
7157 NameNode
7419 SecondaryNameNode
7251 DataNode
Please can someone offer some advice?
Result of dfsadmin
Configured Capacity: 13613391872 (12.68 GB)
Present Capacity: 9255071744 (8.62 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used: 114688 (112 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 192.172.1.49:50010 (Hadoop)
Hostname: Hadoop
Decommission Status : Normal
Configured Capacity: 13613391872 (12.68 GB)
DFS Used: 114688 (112 KB)
Non DFS Used: 4358320128 (4.06 GB)
DFS Remaining: 9254957056 (8.62 GB)
DFS Used%: 0.00%
DFS Remaining%: 67.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Fri Aug 08 17:25:57 SAST 2014
Give a hostname to your machines and make entries for them in the /etc/hosts file, like this:
#hostname hdserver.example.com
#vim /etc/hosts
192.168.0.25 hdserver.example.com
192.168.0.30 hdclient.example.com
and save it (use the correct IP addresses).
On the client, also set the hostname hdclient.example.com and make the above entries in /etc/hosts. This will help name resolution locate the machines by their hostnames (a sketch of the matching core-site.xml follows below).
Delete all contents from the tmp folder: rm -Rf path/of/tmp/directory
Format the namenode: bin/hadoop namenode -format
Start all processes again: bin/start-all.sh
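As a hedged sketch only: the datanode looks for the namenode at whatever address is configured in core-site.xml, so that address should use the hostname you just mapped in /etc/hosts. The hostname hdserver.example.com and port 9000 are taken from the answer and the error message above; fs.default.name is the older property name (fs.defaultFS in newer releases):
<!-- core-site.xml on the namenode and on every datanode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://hdserver.example.com:9000</value>
</property>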
