New datanode not transferring data from existing hadoop cluster

I have followed the TutorialsPoint guide and completed every step for adding a new node to an existing Hadoop cluster, but I am having difficulty figuring out why data isn't being transferred. I have checked the hosts files and the authorized-key files that are supposed to be on a functioning slave node. The new datanode already has the connection information for the master node but isn't receiving any data from it. When I try to ping the master node from the new datanode, I get a "no host master found" response, yet when I run dfsadmin -report on the new node it shows the master node and 4 live slave nodes (it doesn't show the new node that I am trying to set up).
I checked all the hosts-related files on the new node and the master node and re-saved the new node's IP. I expected the master node to start transferring data once the datanode was started, but the node's used space didn't increase and I still wasn't able to ping the master from the new node.

It sounds like you are using IP addresses instead of domain names, which is likely the cause of the issue.
Update the hosts file on all of your nodes to use domain names, make sure the new node's domain name matches, and then update your Hadoop configuration to use the domain name as well.
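For illustration, a minimal sketch of what that could look like (the hostnames, IPs and port below are placeholders, not values from the question):

    # /etc/hosts on the master, every existing slave, and the new datanode
    192.168.1.10   master
    192.168.1.11   slave1
    192.168.1.12   slave2
    192.168.1.15   newnode        # the node being added

    # the config should then reference the hostname rather than the raw IP,
    # e.g. in core-site.xml (fs.defaultFS on Hadoop 2.x, fs.default.name on 1.x):
    #   <value>hdfs://master:9000</value>

    # quick sanity check from the new node
    ping -c 3 master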

Related

If the master node fails, how can I recover all the data on the master node and restart the Hadoop cluster?

I have a three-server Hadoop cluster (master, slave1, slave2). My question is: if the master server of the Ambari system fails, how can we recover? Do we need to add a new server and install Ambari again, or is there another way to recover our data from the failed server? If we add a new server, how can we assign it as the master? Could you suggest how to resolve this when the master server is down?
Thanks in advance.
There is no data retrieval if the NameNode dies and you have no backup. You need a backup NameNode (aka Secondary NameNode), which takes a backup of the metadata at a fixed interval. This interval is generally long, so you can still lose some data.
With Hadoop 2.0 you can keep a more frequent backup with the help of a passive NameNode, which becomes active if the main NameNode dies, so the data is still accessible.
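As a rough illustration (the property names are the Hadoop 2.x ones and the values are placeholders), both mechanisms are driven from hdfs-site.xml:

    <!-- Secondary NameNode: how often the metadata checkpoint is taken -->
    <property>
      <name>dfs.namenode.checkpoint.period</name>
      <value>3600</value>   <!-- seconds; edits made after the last checkpoint can be lost -->
    </property>

    <!-- Hadoop 2.x HA instead runs an active/standby pair behind a logical nameservice -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>          <!-- placeholder nameservice id -->
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>            <!-- active and standby NameNodes -->
    </property>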

hadoop 50070 Datanode tab won't show both data nodes

I know it is a duplicate question, but since the other questions did not have answers, I am reposting it. I recently installed a Hadoop cluster using 2 VMs on my laptop. I can open port 50070, and under the Datanodes tab I can see only one data node, but I have 2 data nodes: one on the master node and the other on the slave node. What could be the reason?
Sorry, I know it has been a while, but I'd still like to share my answer: the root cause is in hadoop/etc/hadoop/hdfs-site.xml. That file has a property named dfs.datanode.data.dir, and if you set the same value for all the datanodes, Hadoop assumes the cluster has only one datanode. So the proper way is to give every datanode a unique data directory:
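For example, something along these lines (the directory paths are placeholders; the point of the answer is that each datanode's dfs.datanode.data.dir points at its own directory):

    <!-- hdfs-site.xml on datanode 1 -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///usr/local/hadoop/hdfs/datanode1</value>
    </property>

    <!-- hdfs-site.xml on datanode 2 -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///usr/local/hadoop/hdfs/datanode2</value>
    </property>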
Regards, YUN HANXUAN

How to delete datanode from hadoop clusters without losing data

I want to delete a datanode from my Hadoop cluster, but I don't want to lose my data. Is there any technique so that the data on the node I am going to delete gets replicated to the remaining datanodes?
What is the replication factor of your Hadoop cluster?
If it is the default, which is generally 3, you can delete the datanode directly, since the data automatically gets re-replicated; this process is controlled by the NameNode.
If you changed the replication factor of the cluster to 1, then the data on the node will be lost when you delete it; it cannot be replicated further.
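If you are not sure what the replication factor is, it can be checked from the command line before you remove anything (standard HDFS tooling; the paths are just examples):

    # effective value from the configuration
    hdfs getconf -confKey dfs.replication

    # per-file replication as actually stored (second column of the listing)
    hdfs dfs -ls /user/example

    # raise the replication of existing data before removing a node, if needed
    hdfs dfs -setrep -w 3 /user/example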
Check that all the current data nodes are healthy; for this you can go to the Hadoop master admin console under the Datanodes tab, whose address is normally something like http://server-hadoop-master:50070
Add the server you want to delete to the file /opt/hadoop/etc/hadoop/dfs.exclude, using its full domain name, on the Hadoop master and all the current datanodes (your config directory may be different, please double-check this); a condensed sketch of these commands appears after the note below.
Refresh the cluster node configuration by running the command hdfs dfsadmin -refreshNodes on the Hadoop name node master.
Check the Hadoop master admin home page for the state of the server being removed under the "Decommissioning" section; this may take from a couple of minutes to several hours or even days, depending on the volume of data you have.
Once the server is shown as fully decommissioned, you may delete it.
NOTE: if you have other services like YARN running on the same server, the process is relatively similar, but uses the file /opt/hadoop/etc/hadoop/yarn.exclude and then yarn rmadmin -refreshNodes run from the YARN master node.
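The condensed sketch of the HDFS part of this procedure, assuming the /opt/hadoop layout used above and a placeholder hostname for the node being removed:

    # on the NameNode (and, per the steps above, on the current datanodes too)
    echo "node-to-remove.example.com" >> /opt/hadoop/etc/hadoop/dfs.exclude

    # hdfs-site.xml must point at that exclude file for it to take effect:
    #   <property>
    #     <name>dfs.hosts.exclude</name>
    #     <value>/opt/hadoop/etc/hadoop/dfs.exclude</value>
    #   </property>

    # tell the NameNode to re-read its include/exclude lists
    hdfs dfsadmin -refreshNodes

    # watch the node go from "Decommission In Progress" to "Decommissioned"
    hdfs dfsadmin -report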

How to run HDFS cluster without DNS

I'm building a local HDFS dev environment (actually hadoop + mesos + zk + kafka) to ease development of Spark jobs and facilitate local integrated testing.
All other components are working fine but I'm having issues with HDFS. When the Data Node tries to connect to the name node, I get a DisallowedDataNodeException:
org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode
Most questions related to the same issue boil down to name resolution of the data node at the name node, either statically through the /etc/hosts files or using DNS. Static resolution is not an option with Docker, as I don't know the data nodes when the name node container is created. I would like to avoid creating and maintaining an additional DNS service. Ideally, I would like to wire everything together using Docker's --link feature.
Is there a way to configure HDFS in such a way that it only uses IP addresses to work?
I found this property and set to false, but it didn't do the trick:
dfs.namenode.datanode.registration.ip-hostname-check (default: true)
Is there a way to have a multi-node local HDFS cluster working only using IP addresses and without using DNS?
I would look at reconfiguring your Docker image to use a different hosts file [1]. In particular:
In the Dockerfile(s), do the switch-a-roo [1]
Bring up the master node
Bring up the data nodes, linked
Before starting the datanode, copy over the /etc/hosts to the new location, /tmp/hosts
Append the master node's name and the master node's IP to the new hosts file
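For instance, a small entrypoint sketch for a datanode container following the last two steps (the script name, the MASTER_IP variable and the /tmp/hosts location are assumptions based on the workaround in [1], not anything Hadoop or Docker ships with):

    #!/bin/sh
    # datanode-entrypoint.sh -- copy the generated hosts file and add the master entry
    cp /etc/hosts /tmp/hosts                  # /tmp/hosts is the relocated hosts file from [1]
    echo "$MASTER_IP  master" >> /tmp/hosts   # MASTER_IP injected at run time, e.g. via docker run -e
    exec hdfs datanode                        # then start the datanode in the foreground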
Hope this works for you!
[1] https://github.com/dotcloud/docker/issues/2267#issuecomment-40364340

formatting namenode, node is lost

I'm trying to set up pseudo-distributed mode for Hadoop. All the first steps are OK, but when I format the namenode and browse the filesystem (port 50070), it shows "there are no datanodes in the cluster".
http://hadoop.apache.org/docs/stable/single_node_setup.html
Should I do any other configuration? Change a directory path?
Thanks
From your comments it looks like you've formatted your name node twice, but not deleted the underlying data for the data node.
I suggest you clean up as follows:
Ensure that all hadoop services are stopped (bin/stop-all.sh)
Remove all data in the directories named in the conf/hdfs-site.xml file:
dfs.name.dir
dfs.data.dir
Reformat the name node again
Start HDFS only (bin/start-dfs.sh)
Check the logs for both the name node and data node to ensure that everything started without error
If you're still having problems, post your name node and data node logs back into your original question
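Roughly, for a pseudo-distributed 1.x-style setup, that sequence looks like this (the /tmp paths are only the defaults; substitute whatever dfs.name.dir and dfs.data.dir point at in your conf/hdfs-site.xml):

    bin/stop-all.sh                        # make sure nothing is running
    rm -rf /tmp/hadoop-$USER/dfs/name/*    # dfs.name.dir (default location, adjust to yours)
    rm -rf /tmp/hadoop-$USER/dfs/data/*    # dfs.data.dir (default location, adjust to yours)
    bin/hadoop namenode -format            # reformat the name node
    bin/start-dfs.sh                       # start HDFS only
    tail -n 50 logs/hadoop-*-namenode-*.log logs/hadoop-*-datanode-*.log   # check for startup errors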
