Can HDFS replicate to other clusters? - hadoop

HDFS replicates data with a replication factor of 3 within the same cluster. That is fine, but is there a way to set up HDFS so it can also replicate to different clusters/servers? Say one replica stays in the same cluster and another one lives somewhere far away in a different HDFS cluster.
If HDFS does not support this, are there any tools around Hadoop that allow us to do so? How do you guys replicate to other servers?

Currently there is no mechanism for what you're asking for. Cross-cluster replication has been implemented for HBase, but not for HDFS. There is a plan to support cross-datacenter replication in HDFS, but it is not implemented yet.
You can use the distcp mechanism to copy your data to another cluster on a regular interval. This will place 3 replicas on each cluster (which is typically what you want for cross-DC/cluster replication anyway). Note however that since this has to be done periodically, it is not a replacement for real-time replication. If you lose a cluster between copies, whatever data was written to the "primary" cluster since the last copy will be lost until that cluster has been restored.
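If you go the distcp route, the copy is just a job you can schedule from cron or any workflow engine. A minimal sketch that shells out to the hadoop CLI from Scala (the namenode addresses and paths are hypothetical placeholders):

```scala
// Periodic cross-cluster copy via distcp. "nn-dc1"/"nn-dc2" and the paths
// are made-up placeholders; adjust to your clusters.
import scala.sys.process._

object PeriodicDistCp {
  def main(args: Array[String]): Unit = {
    val src  = "hdfs://nn-dc1:8020/data/events"
    val dest = "hdfs://nn-dc2:8020/backup/events"

    // -update copies only files that changed since the last run, so repeated
    // invocations behave like an incremental sync rather than a full copy.
    val exitCode = Seq("hadoop", "distcp", "-update", src, dest).!

    if (exitCode != 0)
      sys.error(s"distcp from $src to $dest failed with exit code $exitCode")
  }
}
```

Anything written between two such runs is still exposed, which is exactly the gap described above.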

Related

Cassandra vs HDFS to store analytics data

We have an Apache Spark cluster that analyses data stored in HDFS (.parquet files).
The solution performs well, but it is not as disaster-safe as we would like: the HDFS architecture has a single point of failure (the namenode), and even with two namenodes you only get two points of failure, which is not enough for us.
To improve our cluster fault tolerance we would like to move to another data store solution like Cassandra.
Questions are:
With Cassandra as the datastore, is Spark able to leverage data locality as it does with HDFS?
How can this change affect performance?
Thanks
Matteo
There's an article about data locality with Spark and Cassandra, so yes, it is possible:
https://www.slideshare.net/SparkSummit/cassandra-and-spark-optimizing-russell-spitzer-1
I haven't done any performance checks of Spark on HDFS vs Cassandra, and I believe the results will vary between workflows, but since Netflix and Microsoft use Cassandra with Spark, I believe performance is acceptable in most cases; it is probably a trade-off between data ingestion speed, the presence or absence of ETL, and the speed of the analytical process.
About the Hadoop single point of failure: if you run Cassandra with a replication factor of 3 and quorum consistency level, two nodes going down will likewise make data unavailable :) so keep that in mind.
And maybe consider the MapR Hadoop distribution; they have tried to solve the namenode problem.
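For reference, here is a minimal sketch of what usage of the DataStax spark-cassandra-connector looks like (the keyspace, table and contact point are hypothetical). When the Spark executors run on the same hosts as the Cassandra nodes, the connector exposes each token range's replica hosts as preferred locations, which is how the data locality in the linked slides is achieved:

```scala
// Reading a Cassandra table from Spark with the spark-cassandra-connector.
// "analytics"/"events" and "cassandra-seed-1" are made-up placeholders.
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object CassandraLocalityExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("cassandra-locality")
      // Contact point for the Cassandra cluster; the connector discovers the
      // rest of the ring from here and uses replica placement for locality.
      .set("spark.cassandra.connection.host", "cassandra-seed-1")

    val sc = new SparkContext(conf)

    // Each Spark partition maps to a Cassandra token range and prefers the
    // nodes that own that range.
    val rows = sc.cassandraTable("analytics", "events")
    println(s"row count: ${rows.count()}")

    sc.stop()
  }
}
```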

Replicating data between multiple Hadoop clusters residing in different data centers

I was wondering what would be the best way to replicate the data present in a Hadoop cluster H1 in data center DC1 to another Hadoop cluster H2 in data center DC2 (warm backup preferably). I know that Hadoop does data replication and the number of copies of the data created is decided by the replication factor set in hdfs-site.xml. I have a few questions related to this
Would it make sense to have the data nodes of one cluster be spread across both data centers so that the data nodes for H1 would be present in both DC1 and DC2. If this makes sense and is viable, then does it mean we do not need H2?
Would it make sense to have the namenodes and datanodes distributed across both data centers rather than having only the datanodes distributed across both data centers?
I have also heard of people using distcp and the many tools built on top of distcp. But distcp does lazy backups, and I would prefer warm backups over cold ones.
Some people suggest using Kafka for this but I am not sure how to go about using it.
Any help would be appreciated. Thanks.
It depends on what you are trying to protect against. If you want to protect against site failure, distcp seems to be the only option for cross datacenter replication. However, as you pointed out, distcp has limitations. You can use snapshots to protect against user mistakes or application corruptions because replication or multiple replicas will not protect against that. Other commercial tools are available for automating the backup process as well if you don't want to write code and maintain it.
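If you go the snapshot route, snapshots can be taken from the HDFS client API as well as from the CLI. A minimal sketch assuming the Hadoop 2.x client libraries (the namenode URI, directory and snapshot name are hypothetical):

```scala
// Enable snapshots on a directory and take a point-in-time snapshot.
// "nn-dc1" and "/data/events" are made-up placeholders.
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hdfs.client.HdfsAdmin

object SnapshotExample {
  def main(args: Array[String]): Unit = {
    val conf  = new Configuration()
    val nnUri = new URI("hdfs://nn-dc1:8020")
    val dir   = new Path("/data/events")

    // Mark the directory as snapshottable (requires admin privileges;
    // the CLI equivalent is `hdfs dfsadmin -allowSnapshot /data/events`).
    new HdfsAdmin(nnUri, conf).allowSnapshot(dir)

    // Create the snapshot; it can later be used to roll back user mistakes
    // or as a stable source for a distcp copy to the remote cluster.
    val fs = FileSystem.get(nnUri, conf)
    val snapshotPath = fs.createSnapshot(dir, "nightly-backup")
    println(s"snapshot created at $snapshotPath")
  }
}
```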

Is HDFS necessary for Spark workloads?

HDFS is not strictly necessary, but recommendations to use it appear in some places.
To help evaluate the effort spent in getting HDFS running:
What are the benefits of using HDFS for Spark workloads?
Spark is a distributed processing engine and HDFS is a distributed storage system.
If HDFS is not an option, then Spark has to use some other alternative, such as Apache Cassandra or Amazon S3.
Have a look at this comparison:
S3 – Non-urgent batch jobs. S3 fits very specific use cases where data locality isn't critical.
Cassandra – Perfect for streaming data analysis and an overkill for batch jobs.
HDFS – Great fit for batch jobs without compromising on data locality.
When to use HDFS as storage engine for Spark distributed processing?
If you already have a big Hadoop cluster in place and are looking for real-time analytics of your data, Spark can use the existing Hadoop cluster. This will reduce development time.
Spark is an in-memory computing engine. Since the data cannot always fit into memory, it has to be spilled to disk for some operations. Spark will benefit from HDFS in this case. The large-scale sorting record achieved by Spark used HDFS storage for the sort data.
HDFS is a scalable, reliable and fault-tolerant distributed file system (since the Hadoop 2.x releases). With the data locality principle, processing speed is improved.
Best for batch-processing jobs. A sketch of such a batch job is shown below.
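A minimal sketch (the namenode address, path and column name are hypothetical; an S3 path such as s3a://bucket/data could be substituted, at the cost of data locality):

```scala
// Batch job reading Parquet from HDFS. When executors run on the datanodes,
// Spark schedules tasks on the nodes that hold the relevant blocks.
// "nn-dc1", the path and "event_type" are made-up placeholders.
import org.apache.spark.sql.SparkSession

object HdfsBatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hdfs-batch-job")
      .getOrCreate()

    val events = spark.read.parquet("hdfs://nn-dc1:8020/data/events.parquet")

    // A simple aggregation that benefits from reading local HDFS blocks.
    events.groupBy("event_type").count().show()

    spark.stop()
  }
}
```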
The shortest answer is: "No, you don't need it." You can analyse data even without HDFS, but of course you then need to replicate the data on all your nodes.
The long answer is quite counterintuitive and I'm still trying to understand it with the help of the Stack Overflow community:
Spark local vs HDFS performance
HDFS (or any distributed filesystem) makes distributing your data much simpler. Using a local filesystem, you would have to partition/copy the data by hand to the individual nodes and be aware of the data distribution when running your jobs. In addition, HDFS also handles node failures.
As for the integration between Spark and HDFS: Spark knows about the data distribution, so it will try to schedule tasks on the same nodes where the required data resides.
Second: which problems exactly did you face with the instructions?
BTW: if you are just looking for an easy setup on AWS, DC/OS allows you to install HDFS with a single command...
So you could go with the Cloudera or Hortonworks distro and load up an entire stack very easily. CDH is used with YARN, though I find it much more difficult to configure Mesos in CDH. Hortonworks is much easier to customize.
HDFS is great because its datanodes give you data locality (process the data where it lives), since shuffling/data transfer is very expensive. HDFS also naturally splits files into blocks, which allows Spark to partition on those blocks (128 MB blocks by default; you can change this).
You could use S3 and Redshift.
See here:
https://github.com/databricks/spark-redshift
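A minimal sketch of reading a Redshift table through that connector (the JDBC URL, credentials, table name and S3 temp directory are hypothetical placeholders):

```scala
// Reading a Redshift table with the databricks/spark-redshift connector.
// The connector unloads the table to the S3 tempdir and Spark reads it from
// there, so no HDFS is needed for the source data.
import org.apache.spark.sql.SparkSession

object RedshiftReadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("redshift-read")
      .getOrCreate()

    val df = spark.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://redshifthost:5439/analytics?user=user&password=pass")
      .option("dbtable", "events")
      .option("tempdir", "s3n://my-bucket/tmp/spark-redshift")
      .load()

    df.show()
    spark.stop()
  }
}
```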

Elasticsearch on Hadoop - Should ES nodes be Colocated with Hadoop DataNodes?

From the Elasticsearch for Hadoop documentation:
Whenever possible, elasticsearch-hadoop shares the Elasticsearch cluster information with Hadoop to facilitate data co-location. In practice, this means whenever data is read from Elasticsearch, the source nodes' IPs are passed on to Hadoop to optimize task execution. If co-location is desired/possible, hosting the Elasticsearch and Hadoop clusters within the same rack will provide significant network savings.
Does this mean to say that ideally an Elasticsearch node should be colocated with every DataNode on the Hadoop cluster, or am I misreading this?
You may find this joint presentation by Elasticsearch and Hortonworks useful in answering this question:
http://www.slideshare.net/hortonworks/hortonworks-elastic-searchfinal
You'll note that on slides 33 and 34 they show multiple architectures - one where the ES nodes are co-located on the Hadoop nodes and another where you have separate clusters. The first option clearly gives you the best co-location of data which is very important for managing Hadoop performance. The second approach allows you to tune each separately and scale them independently.
I don't know that you can say one approach is better than the other as there are clearly tradeoffs. Running on the same node clearly minimizes data access latency at the expense of a loss of isolation and ability to tune each cluster separately.
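For what it's worth, the co-location behaviour described in the quoted documentation does not require extra wiring beyond pointing the connector at the cluster. A minimal sketch using the elasticsearch-spark flavour of elasticsearch-hadoop (the node address and index name are hypothetical):

```scala
// Reading an Elasticsearch index from Spark via elasticsearch-hadoop.
// "es-node-1" and "logs/events" are made-up placeholders.
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

object EsLocalityExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("es-locality")
      // The connector discovers the cluster from this node and passes the
      // shard hosts to the scheduler as preferred locations, so co-located
      // tasks can read their shards locally.
      .set("es.nodes", "es-node-1:9200")

    val sc = new SparkContext(conf)

    val docs = sc.esRDD("logs/events")
    println(s"doc count: ${docs.count()}")

    sc.stop()
  }
}
```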

HBase: How does replication work?

I'm currently evaluating HBase as a datastore, but one question was left unanswered: HBase stores many copies of the same object on many nodes (aka replication). As HBase features so-called strong consistency (in contrast to eventual consistency), it guarantees that every replica returns the same value when read.
As I understand the HBase concept, when reading values, the HBase master is first queried for a RegionServer (there must be more than one) providing the data. Then I can issue read and write requests without intervention of the master. How can replication work then?
How does HBase provide consistency?
How do write operations internally work?
Do write operations block until all replicas are written (i.e. synchronous replication)? If yes, who manages this transfer?
How does HDFS come into the game?
I have already read the BigTable-Paper and searched the docs, but I found no further information on the architecture of HBase.
Thanks!
HBase does not do any replication in the way that you are thinking. It is built on top of HDFS, which provides replication for the data blocks that make up the HBase tables. However, only one RegionServer ever serves or writes data for any given row.
Usually RegionServers are co-located with datanodes. All data writes in HDFS go to the local node first if possible, then another node on the same rack, and then a node on a different rack (given a replication factor of 3 in HDFS). So a RegionServer will eventually end up with all of its data served from the local server.
As for blocking: the only block is until the WAL (write-ahead log) is flushed to disk. This guarantees that no data is lost, as the log can always be replayed. Note that older versions of HBase did not have this worked out because HDFS did not support a durable append operation until recently. We are in a strange state for the moment, as there is no official Apache release of Hadoop that supports both append and HBase. In the meantime, you can either apply the append patch yourself or use the Cloudera distribution (recommended).
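To make the blocking behaviour concrete, here is a minimal client-side sketch using the modern HBase client API (table, column family, qualifier and row key are hypothetical): the put call returns only after the RegionServer has synced the edit to its write-ahead log, which itself lives on HDFS and is replicated by HDFS.

```scala
// A single HBase write with explicit WAL durability.
// "events", "cf", "value" and "row-1" are made-up placeholders.
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Durability, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseWalWrite {
  def main(args: Array[String]): Unit = {
    val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
    val table = connection.getTable(TableName.valueOf("events"))

    val put = new Put(Bytes.toBytes("row-1"))
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"), Bytes.toBytes("42"))
    // Block until the edit is synced to the write-ahead log on HDFS.
    put.setDurability(Durability.SYNC_WAL)

    table.put(put)

    table.close()
    connection.close()
  }
}
```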
HBase does have a related replication feature that will allow you to replicate data from one cluster to another.
