The NameNode is the single point of failure for HDFS. Is this correct?
Then what about the JobTracker? If the JobTracker fails, is HDFS still available?
HDFS is completely independent of the JobTracker. As long as the NameNode is up, HDFS is nominally usable, with overall degradation dependent on the number of DataNodes that are down.
As Ambar mentioned, HDFS (the file system itself) does not depend on the JobTracker. The current released version of Hadoop does not support NameNode high availability out of the box, but you can work around it (e.g. deploy the NameNode with a traditional active/passive clustering solution backed by shared storage).
The next release (2.0/0.23) does fix the NameNode availability issue.
You can read more about it in Aaron Myers' blog post "High Availability for the Hadoop Distributed File System (HDFS)".
If the JobTracker is not available, you cannot execute MapReduce jobs.
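A quick way to see the separation in practice (a hedged sketch; paths and jar names are illustrative, assuming a Hadoop 1.x cluster):

    # HDFS operations only need the NameNode and DataNodes, so they
    # keep working even while the JobTracker is down:
    hadoop fs -put local.txt /user/me/local.txt
    hadoop fs -cat /user/me/local.txt

    # Submitting a MapReduce job does require the JobTracker,
    # so this fails while it is down:
    hadoop jar hadoop-examples.jar wordcount /user/me/in /user/me/out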
I've set up a Hadoop cluster with 4 nodes, one of which serves as the NameNode for HDFS as well as the YARN master. This node is also the most powerful.
Now, I've distributed two text files, one on node01 (the namenode) and one on node03 (a datanode). When running the basic WordCount MapReduce job, I can see in the logs that only node01 was doing any calculations.
My question is why Hadoop didn't decide to do MapReduce on node03 and transfer the result, instead of transferring the entire book to node01. I also checked: replication is disabled and the book is only available on node03.
So, how does Hadoop decide between transferring the data and setting up the jobs? And in this decision, does it check which machine has more compute power (e.g., did it decide to transfer to node01 because node01 is a 4-core, 4 GiB RAM machine versus 2 cores and 1 GiB on node03)?
I couldn't find anything on this topic, so any guidance would be appreciated.
Thank you!
Some more clarifications:
node01 is running a NameNode as well as a DataNode, and a ResourceManager as well as a NodeManager. Thus, it serves as a "main node" as well as a "compute node".
I made sure to put one file on node01 and one file on node03 by running:
hdfs dfs -put sample1.txt samples on node01 and hdfs dfs -put sample2.txt samples on node03. As replication is disabled (replication factor 1), this leads to the data being stored only on the node where it was written locally, i.e. node01 and node03 respectively.
I verified this using the HDFS web interface. For sample1.txt, it says the blocks are only available on node01; for sample2.txt, it says the blocks are only available on node03.
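If you want to confirm the block placement from the command line as well, fsck can report it (a sketch; the path assumes the files landed under /user/me/samples):

    # -files -blocks -locations prints, for every block of the file,
    # the DataNodes that hold a replica of it:
    hdfs fsck /user/me/samples/sample2.txt -files -blocks -locations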
Regarding @cricket_007:
My concern is that sample2.txt is only available on node03. The YARN web interface tells me that for the application attempt, only one container was allocated, on node01. If the map task for sample2.txt had run on node03, there would have been a container there as well.
Thus, node01 needs to have fetched the sample2.txt file from node03.
Yes, I know Hadoop doesn't run well on 1 GiB of RAM, but I am working with a Raspberry Pi cluster just to fiddle around and learn a little. This is not for production usage.
The YARN ApplicationMaster picks a node to run the calculation based on the block-location information available from the NameNode, and when possible it places tasks where the data is stored. That is why DataNodes and NodeManagers should run on the same machines.
If your file isn't larger than the HDFS block size, there is no reason to fetch the data from other nodes.
Note: Hadoop services don't run that well on only 1G of RAM, and you need to adjust the YARN settings differently for different sized nodes.
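To check whether a file actually spans more than one block (relevant to the block-size point above), you can compare its size against its block size from the command line (a sketch; the path is illustrative):

    # %b = file size in bytes, %o = block size, %r = replication factor
    hdfs dfs -stat "size=%b blocksize=%o replication=%r" /user/me/samples/sample2.txt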
For anyone else wondering:
At least for me, the HistoryServer UI (which needs to be started manually) correctly shows that node03 and node01 were both running map tasks. Thus, my earlier statement was incorrect. I still wonder why the application attempt UI speaks of only one container, but I guess that doesn't matter.
Thank you guys!
While reading ZooKeeper's documentation, it seems to me that HDFS relies on pretty much the same mechanisms of distribution/replication (broadly speaking) as ZooKeeper. I hear some echo from one to the other, but I still can't distinguish things clearly and strictly.
I understand ZooKeeper is a cluster management/sync tool, while HDFS is a distributed file management system, but could ZK be needed on an HDFS cluster, for example?
Yes, the common factor is distributed processing and high availability on a Hadoop cluster, which relies on a ZooKeeper quorum.
For example, consider the Hadoop NameNode failover process.
Hadoop high availability is designed around an active NameNode and a standby NameNode for the failover process. At any point in time, you should not have two masters (two active NameNodes) at the same time.
ZooKeeper resolves the cluster address to the active NameNode.
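To see which NameNode the ZooKeeper-based failover controller has currently made active, you can query the HA state (a sketch; nn1 and nn2 are placeholder NameNode IDs from your hdfs-site.xml):

    # Prints "active" or "standby" for each NameNode ID
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2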
I am new to Hadoop and need to learn the details of backup and recovery. I have revised Oracle backup and recovery; will it help in Hadoop? Where should I start?
There are a few options for backup and recovery. As s.singh points out, data replication is not DR.
HDFS supports snapshotting. This can be used to prevent user errors, recover files, etc. That being said, this isn't DR in the event of a total failure of the Hadoop cluster. (http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html)
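As a minimal sketch of the snapshot workflow (the path and snapshot name are illustrative):

    # A directory must first be made snapshottable by an admin:
    hdfs dfsadmin -allowSnapshot /user/data
    # Take a named snapshot of the directory:
    hdfs dfs -createSnapshot /user/data backup-before-upgrade
    # Recover an accidentally deleted file from the read-only .snapshot dir:
    hdfs dfs -cp /user/data/.snapshot/backup-before-upgrade/important.txt /user/data/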
Your best bet is keeping off-site backups. These can go to another Hadoop cluster, S3, etc., and can be performed using distcp. (http://hadoop.apache.org/docs/stable1/distcp2.html), (https://wiki.apache.org/hadoop/AmazonS3)
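An off-site copy with distcp might look like this (a sketch; hostnames, ports, and bucket names are placeholders):

    # Copy a directory tree to a second Hadoop cluster:
    hadoop distcp hdfs://nn1:8020/user/data hdfs://backup-nn:8020/user/data
    # Or copy it to S3 (credentials must be configured separately):
    hadoop distcp hdfs://nn1:8020/user/data s3n://my-backup-bucket/user/data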
Here is a Slideshare by Cloudera discussing DR (http://www.slideshare.net/cloudera/hadoop-backup-and-disaster-recovery)
Hadoop is designed to work on big clusters with thousands of nodes, so the chance of data loss is low. You can increase the replication factor to replicate the data onto many nodes across the cluster.
Refer to Data Replication.
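For instance, raising the replication factor of existing data is a one-liner (a sketch; the path and factor are illustrative):

    # -w waits until each block actually reaches 3 replicas
    hdfs dfs -setrep -w 3 /user/data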
For NameNode log backup, you can use either the Secondary NameNode or Hadoop High Availability.
Secondary NameNode
The Secondary NameNode takes a backup of the NameNode's logs. If the NameNode fails, you can recover the logs (which hold the data block information) from the Secondary NameNode.
High Availability
High Availability is a newer feature that lets you run more than one NameNode in the cluster. One NameNode is active and the other is standby. The log is saved on both NameNodes. If one NameNode fails, the other becomes active and handles operations.
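With HA configured, an administrator can also trigger a failover manually, which is handy for testing (a sketch; nn1 and nn2 are placeholder NameNode IDs):

    # Make nn2 the active NameNode and nn1 the standby:
    hdfs haadmin -failover nn1 nn2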
But we also need to consider backup and disaster recovery in most cases. Refer to @brandon.bell's answer.
You can use the HDFS sync application on DataTorrent for DR use cases to back up high volumes of data from one HDFS cluster to another.
https://www.datatorrent.com/apphub/hdfs-sync/
It uses Apache Apex as a processing engine.
Start with the official documentation website: HdfsUserGuide.
Have a look at the SE posts below:
Hadoop 2.0 data write operation acknowledgement
Hadoop: HDFS File Writes & Reads
Hadoop 2.0 Name Node, Secondary Node and Checkpoint node for High Availability
How does Hadoop Namenode failover process works?
From the documentation page regarding Recovery_Mode:
Typically, you will configure multiple metadata storage locations. Then, if one storage location is corrupt, you can read the metadata from one of the other storage locations.
However, what can you do if the only storage locations available are corrupt? In this case, there is a special NameNode startup mode called Recovery mode that may allow you to recover most of your data.
You can start the NameNode in recovery mode like so: namenode -recover
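Spelled out (a sketch; on 1.x the launcher is the hadoop script, on 2.x it is hdfs namenode -recover):

    # Recovery mode prompts interactively when it finds corrupt metadata;
    # -force (assumed available per the recovery-mode docs) always takes
    # the first/default choice instead of prompting:
    hadoop namenode -recover
    hadoop namenode -recover -force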
We know that servers used for big data processing should be tolerant of hardware failure. I mean, if we have 3 servers (A, B, C) and server B suddenly goes down, A and C should be able to take over its role. But in Hadoop, which uses a NameNode and DataNodes, when the NameNode is down we can't process the data anymore, and that sounds like a lack of tolerance for hardware failure.
Is there any reason for this design of Hadoop's architecture?
The issue you have mentioned is known as a single point of failure, and it exists in older Hadoop versions.
Try a newer version of Hadoop, such as 2.x.x. From version 2.0.0 onwards, Hadoop avoids this single point of failure by running two NameNodes, one active and one standby. When the active NameNode fails due to hardware or power issues, the standby NameNode takes over as active.
Check this link: Hadoop High Availability for further details.
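To quickly verify that HA is in effect from a client's point of view (a sketch; requires the HA settings to be present in hdfs-site.xml):

    # Lists the NameNodes in the cluster as seen by the configuration:
    hdfs getconf -namenodes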
I have a question about NameNode high availability. The NameNode is so important because it stores all the metadata; if it is down, the whole Hadoop cluster will be down as well. So is there any good way to approach NameNode high availability, for example a backup NameNode that can take over when the primary NameNode fails?
(now I use Hadoop 1.1.2)
For ASF Hadoop 1.1.2, there are no solid NameNode HA options. These were released for 2.0 and are included in popular distributions like Cloudera's CDH4.
The options for NameNode HA include running a primary NameNode and a hot standby NameNode. They share an edits log, either on an NFS mount or through quorum journal mode in HDFS itself. The former gives you the benefit of having an external source for storing your HDFS metadata, while the latter gives you the benefit of having no dependencies external to Hadoop.
Personally, I like the NFS option, as you can easily snapshot/backup the data resident on the file server. The disadvantage of this approach is potentially inconsistent performance in terms of latency.
For more detail, check out the following articles:
http://www.slideshare.net/hortonworks/nn-ha-hadoop-worldfinal-10173419
http://blog.cloudera.com/blog/2012/03/high-availability-for-the-hadoop-distributed-file-system-hdfs/
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html