I have started learning HBase and I don't understand how it scales linearly.
The problem is that before you install HBase you have to have an HDFS cluster. An HDFS cluster has a master node, and there can be only one active master in the whole cluster, so it is a bottleneck. Of course we can run one more master node (it is possible to run only one more), but it will be in standby state.
As I understand it, HBase uses the HDFS cluster to store data. So to me it seems logical that there is no point in running more than one HMaster, because all requests will go to the active HDFS master, whose performance can suffer if we have too many requests.
Also, I don't properly understand whether we need to install HBase on the same nodes as HDFS or separately. What are the benefits of running HBase separately from HDFS?
To me it seems logical to install the HBase cluster on the same nodes as HDFS, as in the following example:
HDFS active master - HMaster
HDFS standby master - backup HMaster
HDFS DataNode - HRegionServer
To me this is the most logical structure, because if we separate the HDFS master from the HMaster, the probability of losing the HBase cluster will be twice as high.
I would be very happy if someone could share information about all of this, because I really don't understand how HBase can scale linearly and how it works with HDFS.
First, you can install HBase over any supported file system; it is not mandatory to run it over HDFS. But using it with HDFS gives it advantages like fault tolerance, data replication, checksums, etc. That's why it is recommended to run HBase over HDFS.
Moreover, although the NameNode is a bottleneck in HDFS, it does not affect HBase's efficiency, because not every operation depends internally on the NameNode. For instance, RegionServers serve data for reads and writes: when accessing data, clients communicate with HBase RegionServers directly, while region assignment and DDL operations (creating and deleting tables) are handled by the HBase Master process. This means that reading and writing data is independent of creating and deleting tables.
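To illustrate the split, here is a minimal sketch using the HBase 2.x Java client; the table name "demo", the column family "cf", and the row contents are placeholder assumptions. The Admin calls go through the HMaster, while the Table calls go directly to RegionServers:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class HBasePathsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            TableName name = TableName.valueOf("demo"); // hypothetical table

            // DDL (create table) is coordinated by the HMaster.
            try (Admin admin = connection.getAdmin()) {
                admin.createTable(TableDescriptorBuilder.newBuilder(name)
                        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
                        .build());
            }

            // Reads and writes go directly to the RegionServers,
            // without involving the HMaster or the HDFS NameNode per request.
            try (Table table = connection.getTable(name)) {
                table.put(new Put(Bytes.toBytes("row1"))
                        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                System.out.println(Bytes.toString(
                        result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"))));
            }
        }
    }
}
```

Note that if the Master were down, the put and get above would still work; only the createTable call would fail.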
You can refer to https://www.mapr.com/blog/in-depth-look-hbase-architecture for more details about the HBase architecture.
Also see this webinar on HBase by Lars George: https://m.youtube.com/watch?v=_HLoH_PgrLk
Hope this will clear your doubts.
Related
I am new to big data and Hadoop. I would like to know: are the NameNode, DataNode, secondary NameNode, JobTracker, and TaskTracker different systems? If I want to process 1000 PB of data, how is the data divided, who does that task, and where should I input the 1000 PB of data?
Yes, the NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker are all different processes (JVMs, you could call them). You can start them all on one physical machine (pseudo/local mode) or on different physical machines (distributed mode). These are all part of Hadoop 1.
Hadoop 2 introduced containers with YARN, in which the JobTracker and TaskTracker are removed in favor of the more efficient ResourceManager, ApplicationMaster, NodeManager, etc. You can find more info on the hadoop-yarn-site.
Data is stored in HDFS (Hadoop Distributed File System) in blocks, 64 MB by default. When data is loaded into HDFS, Hadoop distributes it across the cluster with the defined block size. When a job is run, the code is distributed to the nodes in the cluster so that processing occurs where the data resides, except in the shuffle and sort phases.
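For illustration, here is a minimal sketch of loading data into HDFS with an explicit block size via the Hadoop Java FileSystem API; the NameNode URI and the path are placeholder assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder NameNode URI
        try (FileSystem fs = FileSystem.get(conf)) {
            // Create a file with replication 3 and a 64 MB block size;
            // HDFS splits the stream into blocks as it is written.
            long blockSize = 64L * 1024 * 1024;
            try (FSDataOutputStream out = fs.create(
                    new Path("/demo/data.bin"), true, 4096, (short) 3, blockSize)) {
                byte[] chunk = new byte[1024 * 1024];
                for (int i = 0; i < 128; i++) {
                    out.write(chunk); // 128 MB total -> two 64 MB blocks
                }
            }
        }
    }
}
```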
I hope you now have a general idea of how Hadoop and HDFS work. The following are some links for you to start with:
Map Reduce programming
cluster setup
hadoop commands
What is your experience with RAID 1 on an HDP cluster?
I have two options in mind:
Set up RAID 1 for master and ZooKeeper nodes, and don't use RAID at all on slave nodes like Kafka brokers, HBase RegionServers, and YARN NodeManagers.
Even if I lose one slave node, I will still have two other replicas.
In my opinion, RAID will only slow down my cluster.
Regardless, set up everything using RAID 1.
What do you think about it? What is your experience with HDP and RAID?
What do you think about using RAID 0 for slave nodes?
I'd recommend no RAID at all on Hadoop hosts. There is one caveat: if you are running services like Oozie and the Hive metastore that use a relational DB behind the scenes, RAID may well make sense on the DB host.
On a master node, assuming you have the NameNode, ZooKeeper, etc., the redundancy is generally built into the service. For NameNodes, all the data is stored on both NameNodes. For ZooKeeper, if you lose one node, the other two nodes still have all the information.
ZooKeeper likes fast disks - ideally dedicate a full disk to ZooKeeper. If you have NameNode HA, give the NameNode edits directory and each JournalNode a dedicated disk too.
For the slave nodes, the DataNode will write across all disks, effectively striping the data anyway. Each 'write' is at most the HDFS block size, so if you were writing a large file, you could get 128 MB on disk 1, then the next 128 MB on disk 2, and so on.
HDFS replicates with a factor of 3 within the same cluster. That is fine, but is there a way to set up HDFS so it can also replicate to different clusters/servers? Say, one replica in the same cluster and the other one far away in another HDFS cluster.
If HDFS does not support this, are there any tools around Hadoop that allow us to do so? How do you replicate to other servers?
Currently there are no mechanisms for what you're asking for. Cross-cluster replication has been implemented for HBase, but not for HDFS. There is a plan to support cross-datacenter replication in HDFS, but it's not implemented yet.
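If the data you care about lives in HBase, a cross-cluster replication peer can be added with the HBase 2.x Java Admin API. A hedged sketch, where the peer id "dr_peer" and the remote cluster key are placeholder values:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class AddReplicationPeer {
    public static void main(String[] args) throws Exception {
        try (Connection connection =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = connection.getAdmin()) {
            // Cluster key of the remote cluster: its ZooKeeper quorum,
            // client port, and znode parent (all placeholder values here).
            ReplicationPeerConfig peer = ReplicationPeerConfig.newBuilder()
                    .setClusterKey("zk1.dr,zk2.dr,zk3.dr:2181:/hbase")
                    .build();
            admin.addReplicationPeer("dr_peer", peer); // "dr_peer" is a placeholder id
        }
    }
}
```

Note that the column families you want replicated must also have their replication scope set to global (REPLICATION_SCOPE = 1).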
You can use the distcp mechanism to copy your data to another cluster at a regular interval. This will place 3 replicas on each cluster (which is typically what you want for cross-DC/cluster replication anyway). Note, however, that since this has to be done periodically, it is not exactly a replacement for real-time replication. If you lose a cluster between copies, whatever data was written to the "primary" cluster since the last copy will be lost until the cluster has been restored.
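As a sketch of the periodic approach, here is one way to schedule distcp runs from Java by shelling out to the hadoop CLI; the hourly interval, cluster URIs, and paths are placeholder assumptions:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicDistCp {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run distcp once an hour; -update copies only files that changed.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                Process p = new ProcessBuilder(
                        "hadoop", "distcp", "-update",
                        "hdfs://primary-nn:8020/data",
                        "hdfs://backup-nn:8020/data")
                    .inheritIO()
                    .start();
                int exit = p.waitFor();
                if (exit != 0) {
                    System.err.println("distcp failed with exit code " + exit);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 1, TimeUnit.HOURS);
    }
}
```

In practice you would more likely drive this from cron or a workflow scheduler like Oozie, but the idea is the same.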
I am new to Qubole and wanted to know whether data remains in HDFS after the Hadoop cluster is shut down.
Any help is appreciated.
Thank you.
No, data on HDFS is gone. We don't back up/restore HDFS. The model of computation on EC2/S3 is that long-lived data always lives on S3, and HDFS is used only for intermediate and control data. We also sometimes use HDFS (and local disk) as a cache.
That depends on what is down in the cluster. Hadoop has several daemons: NameNode, DataNode, ResourceManager, ApplicationMaster, etc.
If the NameNode (the master node) is down, the data remains as-is in the cluster, BUT you will not be able to access it at all, because the NameNode holds the metadata for the DataNodes.
If a DataNode (a slave node) is down, you will not be able to access the data from that node, but by default data is stored in 3 locations in the cluster for fault tolerance, so you can still access it from the other two nodes.
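To make the replication factor concrete, here is a minimal sketch using the Hadoop Java FileSystem API that reads and changes a file's replication factor; the NameNode URI and the path are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder URI
        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/demo/data.bin"); // placeholder path
            // Read the current replication factor (3 by default).
            FileStatus status = fs.getFileStatus(file);
            System.out.println("replication=" + status.getReplication());
            // Ask HDFS to keep an extra copy of this file.
            fs.setReplication(file, (short) 4);
        }
    }
}
```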
I am fairly new to Hadoop and I have the following questions about the Hadoop framework. Can anybody please guide me on this?
Are the DataNode and the TaskTracker located on physically separate machines in a production environment?
When does Hadoop split a file into blocks? Does this happen when you copy a file from the local filesystem into HDFS?
Short answer
Most of the time, but not necessarily.
Yes.
Long Answer
1)
An installation of Hadoop on a cluster will have 2 main types of nodes:
Master Nodes
Data Nodes
Master Nodes typically run at least:
CLDB
Zookeeper
JobTracker
Data Nodes typically run at least:
TaskTracker
The DataNode service can run on a different node than the TaskTracker service. However, the Hadoop docs for the DataNode service recommend running the DataNode and TaskTracker on the same nodes so that MapReduce operations are performed close to the data.
For the MapR distribution of Hadoop, the two server roles typically run:
MapR Control Node
ZooKeeper *
CLDB *
JobTracker *
HBaseMaster
NFS Gateway
Webserver
MapR Data Node
TaskTracker *
RegionServer (sometimes)
Zookeeper (sometimes)
2)
While most filesystems store data in blocks, HDFS distributes and replicates the blocks across DataNodes. When you first store data in HDFS, it breaks the data into blocks and stores them across different nodes according to the specified replication factor. However, if you add new DataNodes to the cluster, it will not automatically rebalance existing blocks onto them unless blocks are under-replicated.
(Thanks to #javadba for clarifying this!)
Given TrinitronX has already answered #1 - though the short answer should be NO - the DataNode/TaskTracker MAY be on different physical machines, but it is uncommon. You are best off starting with "slave" machines being DataNode plus TaskTracker.
So this is an answer to the second part of the question
2) When does Hadoop splits a file into blocks? Does this happen when you copy a file from local filesystem into HDFS?
Yes. The file is broken into blocks upon loading into HDFS.
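Here is a minimal sketch of observing this with the Hadoop Java API, copying a local file into HDFS and printing where its blocks landed; the NameNode URI and paths are placeholder assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder URI
        try (FileSystem fs = FileSystem.get(conf)) {
            Path dst = new Path("/demo/big-file.bin");
            // The file is split into blocks during this copy into HDFS.
            fs.copyFromLocalFile(new Path("/tmp/big-file.bin"), dst);
            // Print each block's offset, length, and the DataNodes holding it.
            FileStatus status = fs.getFileStatus(dst);
            for (BlockLocation block :
                     fs.getFileBlockLocations(status, 0, status.getLen())) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
        }
    }
}
```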
The DataNode and JobTracker can run on different machines.
Hadoop always stores files as blocks during all operations.
Refer to:
1. Hadoop Job tracker and task tracker
2. Hadoop block size and Replication