Hadoop backup and recovery tool and guidance

I am new to Hadoop and need to learn the details of backup and recovery. I have studied Oracle backup and recovery; will that help with Hadoop? Where should I start?

There are a few options for backup and recovery. As s.singh points out, data replication is not DR.
HDFS supports snapshotting. This can be used to recover from user errors, restore deleted files, etc. That said, it isn't DR in the event of a total failure of the Hadoop cluster. (http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html)
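For a rough idea of the snapshot workflow (a minimal sketch; /data, the snapshot name, and the file name are placeholders):
# allow snapshots on the directory (run as an HDFS admin)
hdfs dfsadmin -allowSnapshot /data
# take a named snapshot before a risky operation
hdfs dfs -createSnapshot /data before-cleanup
# recover an accidentally deleted file from the read-only .snapshot directory
hdfs dfs -cp /data/.snapshot/before-cleanup/important.csv /data/important.csv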
Your best bet is keeping off-site backups. This can be to another Hadoop cluster, S3, etc., and can be performed using distcp. (http://hadoop.apache.org/docs/stable1/distcp2.html), (https://wiki.apache.org/hadoop/AmazonS3)
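As a hedged sketch of what those distcp runs might look like (the hostnames and bucket name are made up, and the S3 URI scheme depends on your Hadoop version, e.g. s3n:// on older releases vs s3a:// on newer ones):
# copy /data to a second cluster
hadoop distcp hdfs://prod-nn.example.com:8020/data hdfs://dr-nn.example.com:8020/data
# or push it to S3 (AWS credentials must be configured first)
hadoop distcp /data s3a://my-backup-bucket/data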
Here is a Slideshare by Cloudera discussing DR (http://www.slideshare.net/cloudera/hadoop-backup-and-disaster-recovery)

Hadoop is designed to run on large clusters with thousands of nodes, so the chance of data loss is low. You can increase the replication factor to replicate the data onto more nodes across the cluster.
Refer to Data Replication.
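For example (a minimal sketch; /data is a placeholder path, and you can also set dfs.replication in hdfs-site.xml for files written in the future):
# raise the replication factor to 5 for everything under /data and wait for completion
hdfs dfs -setrep -w 5 /data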
For NameNode metadata backup, you can use either the Secondary NameNode or Hadoop High Availability.
Secondary Namenode
The Secondary NameNode keeps a checkpoint of the NameNode's metadata (fsimage plus edit logs). If the NameNode fails, you can recover that metadata (which holds the data block information) from the Secondary NameNode.
High Availability
High Availability is a newer feature that lets you run more than one NameNode in the cluster. One NameNode is active and the other is in standby. The edit log is available to both NameNodes, so if the active NameNode fails, the other becomes active and handles operations.
In most cases you still need to plan for backup and disaster recovery as well; refer to brandon.bell's answer above.

You can use the HDFS Sync application on DataTorrent for DR use cases to back up high volumes of data from one HDFS cluster to another.
https://www.datatorrent.com/apphub/hdfs-sync/
It uses Apache Apex as a processing engine.

Start with the official documentation: HdfsUserGuide
Have a look at the SE posts below:
Hadoop 2.0 data write operation acknowledgement
Hadoop: HDFS File Writes & Reads
Hadoop 2.0 Name Node, Secondary Node and Checkpoint node for High Availability
How does Hadoop Namenode failover process works?
Documentation page regarding Recovery_Mode:
Typically, you will configure multiple metadata storage locations. Then, if one storage location is corrupt, you can read the metadata from one of the other storage locations.
However, what can you do if the only storage locations available are corrupt? In this case, there is a special NameNode startup mode called Recovery mode that may allow you to recover most of your data.
You can start the NameNode in recovery mode like so: namenode -recover
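A small sketch of that invocation (run it on the NameNode machine after stopping the NameNode daemon; per the HdfsUserGuide, -force skips the interactive prompts and always takes the first suggested choice):
# start the NameNode in recovery mode and answer the prompts
hdfs namenode -recover
# or accept the first suggested choice for every prompt
hdfs namenode -recover -force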

Related

What is difference between Hadoop Namenode HA and HDFS federation

I am a bit confused between Hadoop NameNode HA using QJM and HDFS Federation. Both use multiple NameNodes and both provide high availability. I am not able to decide which architecture to use for NameNode High Availability, since both look exactly the same except for the QJM part.
Please pardon me if this is not the type of question to be discussed here.
The main difference between HDFS High Availability and HDFS Federation is that the namenodes in Federation aren't related to each other.
In HDFS Federation the namenodes are independent: each namenode manages its own namespace and its own pool of metadata, while all of them share the cluster's DataNodes for storage. This gives you isolation rather than failover, i.e. if one namenode in a federation fails, it doesn't affect the namespaces managed by the other namenodes.
So, Federation = Multiple namenodes and no correlation.
In the case of HDFS HA, on the other hand, there are two namenodes - a Primary (Active) NN and a Standby NN.
The Primary NN does all the work, while the Standby NN just sits there and periodically updates its metadata from the Primary NameNode, which is what makes the two related.
When the Primary NN eventually fails, the Standby NameNode takes over with whatever most recent metadata it has.
As for the HA architecture, you need at least two separate machines configured as NameNodes, of which only one should run in the Active state.
More details here: HDFS High Availability
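To see the HA relationship in practice, a hedged sketch using the haadmin tool (nn1 and nn2 stand for whatever NameNode IDs you configured under dfs.ha.namenodes.<nameservice>):
# check which state each NameNode is currently in
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# manually fail over from nn1 to nn2 (manual-failover setups)
hdfs haadmin -failover nn1 nn2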

differences between HDFS and ZooKeeper?

While reading ZooKeeper's documentation, it seems to me that HDFS relies on pretty much the same mechanisms of distribution/replication (broadly speaking) as ZooKeeper. I hear some echo from one to the other, but I still can't distinguish things clearly and strictly.
I understand ZooKeeper is a Cluster Management / Sync tool, while HDFS is a Distributed File Management System, but could ZK be needed on an HDFS cluster for example?
Yes, ZooKeeper is needed for distributed coordination and high availability on a Hadoop cluster, by way of a ZooKeeper quorum.
For example, the Hadoop NameNode failover process.
Hadoop high availability is designed around an Active NameNode and a Standby NameNode for the failover process. At any point in time you should not have two masters (two active NameNodes).
ZooKeeper resolves the cluster address to the currently active NameNode.
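A hedged sketch of how ZooKeeper gets wired into that failover (this assumes ha.zookeeper.quorum already points at your ZooKeeper ensemble and automatic failover is enabled in hdfs-site.xml):
# one-time: create the znode that ZKFC uses for leader election (run from one NameNode)
hdfs zkfc -formatZK
# then start a ZKFC daemon next to each NameNode; it holds a lock in ZooKeeper
# and promotes the standby when the active NameNode's lock disappears
hadoop-daemon.sh start zkfc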

secondary name node functionality

Can someone explain what exactly the words in bold, taken from the textbook, mean? What does "the state of the secondary namenode lags that of the primary" mean?
Secondary name node keeps a copy of the merged namespace image, which can be used in the event of the namenode failing. **However, the state
of the secondary namenode lags that of the primary, so in the event of total failure of the primary, data loss is almost certain.** The usual course of action in this case is to copy the namenode's metadata files that are on NFS to the secondary and run it as the new primary.
Thanks in advance
Hadoop 1.x:
When you start a Hadoop cluster it creates a file system image (fsimage) that holds the metadata of your entire Hadoop cluster. When a new change comes into the cluster it goes into the edits log. The Secondary NameNode periodically reads the edits log and merges that information into the fsimage. If the NameNode fails, the Hadoop administrator can restart the cluster from the fsimage and edits log (during startup the NameNode replays the edits against the fsimage, so there won't be data loss).
Because the fsimage and edits log together hold the up-to-date file system metadata, in case of a total failure of the primary the Hadoop administrator can recover the cluster's namespace with the help of the edits log and fsimage.
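As a minimal sketch of how an administrator could force an up-to-date fsimage to back up on Hadoop 1.x (saveNamespace requires safe mode, so writes are blocked briefly):
# block writes while the namespace is saved
hadoop dfsadmin -safemode enter
# merge the in-memory namespace into a fresh fsimage and reset the edits log
hadoop dfsadmin -saveNamespace
hadoop dfsadmin -safemode leave
# then copy the contents of dfs.name.dir somewhere safe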
Hadoop 2.x:
In Hadoop 1.x the NameNode was a single point of failure: a NameNode failure meant downtime for your entire Hadoop cluster, and planned maintenance events such as software or hardware upgrades on the NameNode machine would also result in periods of cluster downtime. To overcome this, the Hadoop community added the High Availability feature. When setting up a Hadoop cluster you can choose which type of cluster you want.
The HDFS NameNode High Availability feature enables you to run redundant NameNodes in the same cluster in an Active/Passive configuration with a hot standby. Both NameNodes require the same type of hardware configuration.
In an HA configuration one NameNode is active and the other is in the standby state. The ZKFailoverController (ZKFC) is a ZooKeeper client that monitors and manages the state of the NameNode. When the active NameNode goes down, it makes the standby the active NameNode, and the former active NameNode becomes the standby once you restart it. You can read more about it here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.8.0/bk_system-admin-guide/content/ch_hadoop-ha-5.html
In an HA Hadoop cluster the active NameNode writes its edit log to the JournalNodes (Quorum-based Storage). JournalNodes are separate daemons in the HA cluster used for reading and writing the edit log.
The Standby NameNode stays synchronized with the active NameNode; the two communicate through the JournalNodes. When any namespace modification is performed by the active node, it durably logs a record of the modification to a majority of these JNs. The Standby NameNode constantly watches the edit logs on the JournalNodes and updates its own namespace accordingly. In the event of failover, the standby NameNode ensures that its namespace is completely up to date with the edit logs before it changes to the active state. Once active, it takes over writing the edit log to the JournalNodes.
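For reference, a hedged sketch of the commands typically used when first bringing up such an Active/Standby pair with JournalNodes (the exact order and flags can vary by distribution, so treat this as illustrative):
# on the current/active NameNode: push its existing edit log into the JournalNodes
hdfs namenode -initializeSharedEdits
# on the standby NameNode: copy the latest fsimage from the active and start syncing
hdfs namenode -bootstrapStandby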
Hadoop doesn't keep any file data on the NameNode; all data resides on the DataNodes, so a NameNode failure doesn't lose the block data itself (only the namespace metadata is at risk).

Hadoop namenode High Availability

I have a question about NameNode High Availability. The NameNode is so important because it stores all the metadata; if it is down, the whole Hadoop cluster goes down as well. So is there a good way to approach NameNode High Availability, for example a backup name node that can take over when the primary name node fails?
(now I use Hadoop 1.1.2)
For ASF Hadoop 1.1.2, there are no solid NameNode HA options. These were released for 2.0 and are included in popular distributions like Cloudera's CDH4.
The options for NameNode HA include running a primary NameNode and a hot standby NameNode. They share an edits log, either on an NFS mount or through quorum journal mode in HDFS itself. The former gives you the benefit of having an external location for your HDFS metadata, while the latter gives you the benefit of having no dependencies external to Hadoop.
Personally, I like the NFS option, as you can easily snapshot/back up the data resident on the file server. The disadvantage to this approach is potentially inconsistent performance in terms of latency.
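If you are not sure which of the two a (2.x) cluster is using, one hedged way to check is the shared-edits setting: an NFS setup points at a mounted path, while quorum journal mode uses a qjournal:// URI (the example values below are made up):
hdfs getconf -confKey dfs.namenode.shared.edits.dir
# NFS example value:  /mnt/filer/hadoop-ha/edits
# QJM example value:  qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster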
For more detail, check out the following articles:
http://www.slideshare.net/hortonworks/nn-ha-hadoop-worldfinal-10173419
http://blog.cloudera.com/blog/2012/03/high-availability-for-the-hadoop-distributed-file-system-hdfs/
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html

When will HDFS be unavailable?

Name node is the single point of failure for HDFS. Is this correct?
Then what about Jobtracker? If Jobtracker fails, is HDFS available?
HDFS is completely independent of the Jobtracker. As long as at least the NN is up, HDFS is nominally usable, with overall degradation dependent on the number of Datanodes that are down.
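To gauge that degradation, one option (a hedged sketch; on 2.x the equivalent is "hdfs dfsadmin -report") is the dfsadmin report, which lists live and dead DataNodes along with remaining capacity:
hadoop dfsadmin -report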
As Ambar mentioned HDFS as in the file system does not depend on the JobTracker. The current released version of Hadoop does not support Namenode high availability out of the box but you can work around it (e.g. deploy the namenode using a traditional clustering solution of active/passive with shared storage).
The next release (2.0/0.23) does fix the namenode availability issue.
You can read more about it in a blog post by Aaron Myers "High Availability for the Hadoop Distributed File System (HDFS)"
If the JobTracker is not available, you cannot execute map/reduce jobs.
