If you store something in HBase, can it be accessed directly from HDFS?

I was told HBase is a DB that sits on top of HDFS.
But let's say you are using Hadoop after you put some information into HBase.
Can you still access the information with MapReduce?

You can read data from HBase tables using MapReduce programs, Hive queries, or Pig scripts.
With Hive, once you create a Hive table mapped onto an HBase table, you can run SELECT queries against it; those queries are executed as MapReduce jobs.
You can just as easily integrate HBase tables with other Hadoop ecosystem tools such as Pig.
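As a rough illustration of the MapReduce route, here is a conceptual Python sketch, not Hadoop itself: in a real job, HBase's TableInputFormat feeds each table row to the mapper, while here a plain list stands in for the table, and all names and data below are invented.

```python
from collections import defaultdict

# A plain list stands in for an HBase table: (row_key, cells) pairs.
rows = [
    ("row1", {"cf:country": "IN"}),
    ("row2", {"cf:country": "US"}),
    ("row3", {"cf:country": "IN"}),
]

def mapper(row_key, cells):
    yield cells["cf:country"], 1            # emit (key, 1) for each row

def reducer(key, values):
    return key, sum(values)                 # aggregate the values per key

# The framework shuffles mapper output by key, then calls the reducer;
# we simulate that shuffle with a dict of lists.
groups = defaultdict(list)
for row_key, cells in rows:
    for k, v in mapper(row_key, cells):
        groups[k].append(v)
counts = dict(reducer(k, vs) for k, vs in groups.items())
```

The same map/shuffle/reduce shape is what a Hive SELECT over an HBase-backed table compiles down to.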

Yes, HBase is a column-oriented database that sits on top of HDFS.
HBase is a database that stores its data in a distributed filesystem. The filesystem of choice is typically HDFS, owing to the tight integration between HBase and HDFS. That said, it doesn't mean HBase can't work on another filesystem; it just hasn't been proven in production, at scale, with anything except HDFS.
HBase provides you with the following:
Low latency access to small amounts of data from within a large data set. You can access single rows quickly from a billion row table.
Flexible data model to work with and data is indexed by the row key.
Fast scans across tables.
Scale in terms of writes as well as total volume of data.
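The first point above is worth a sketch. HBase keeps rows sorted by row key, so a point lookup is a binary search over sorted keys rather than a scan of the whole dataset. Below is a minimal, illustrative Python sketch of that difference; all names and data are invented, and this is not the HBase client API.

```python
import bisect

# Build a sorted (row_key, value) table, as rows are kept inside an HFile.
rows = sorted((f"user{i:06d}", f"payload-{i}") for i in range(100_000))
keys = [k for k, _ in rows]   # sorted row keys, like an HFile's key index

def get(row_key):
    """Binary search over sorted keys: O(log n), like an HBase point get."""
    i = bisect.bisect_left(keys, row_key)
    if i < len(keys) and keys[i] == row_key:
        return rows[i][1]
    return None

def full_scan(row_key):
    """Linear scan, like grepping a flat file on HDFS: O(n)."""
    for k, v in rows:
        if k == row_key:
            return v
    return None
```

Both functions return the same answer; the difference is that `get` touches ~17 keys out of 100,000, which is why a single row can be fetched quickly even from a billion-row table.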

Related

HDFS vs HIVE partitioning

This may be a simple thing, but I'm struggling to find the answer. When data is loaded into HDFS, it's distributed and loaded onto multiple nodes; the data is partitioned and distributed.
For Hive there is a separate option to PARTITION the data. I'm pretty sure that even if you don't use the PARTITION option, the data will be split and distributed to different nodes on the cluster when loading a Hive table. What additional benefit does this option give?
Summarizing the comments, for Hadoop v1-v2.x:
Logical partitioning, e.g. by a date or a string field as described in the comments above, is only possible in Hive, HCatalog, or another SQL or parallel engine working on top of Hadoop, using a file format that supports partitioning (Parquet, ORC, and CSV are fine, but e.g. XML is hard or nearly impossible to partition)
Logical partitioning (as in Hive or HCatalog) can serve as a substitute for the indexes you don't have
'Partitioning of HDFS storage' on local or distributed nodes is possible by defining the partitions during setup of HDFS, see https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_cluster-planning/content/ch_partitioning_chapter.html
HDFS is able to "balance" or 'distribute' blocks over nodes
Natively, blocks can't be split and distributed to folders by HDFS according to their content; a block can only be moved as a whole to another node
Blocks (not files!) are replicated in the HDFS cluster according to the HDFS replication factor, which you can check with:
$ hdfs fsck /
(Thanks David and Kris for your discussion above, which explains most of this; please take this post as a summary.)
HDFS partition: mainly deals with the storage of files across the nodes. For fault tolerance, file blocks are replicated across the cluster (per the replication factor).
Hive partition: an optimization technique in Hive.
Within a Hive database, tables are partitioned for better performance on queries.
Partitioning tells Hive how the data is stored and how to read it.
Hive partitioning is controlled at the level of a column of the table data.
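To make the benefit concrete: a Hive table PARTITIONED BY a column is laid out on HDFS as one directory per partition value, and a query with a predicate on that column only reads the matching directories. Here is a minimal, illustrative Python sketch of that pruning behavior; local directories stand in for HDFS, and all paths are invented.

```python
import tempfile
import pathlib

# Lay out a toy "partitioned table": one dt=<value> directory per day,
# mirroring how Hive stores a table PARTITIONED BY (dt STRING).
warehouse = pathlib.Path(tempfile.mkdtemp()) / "logs"
for dt in ("2020-01-01", "2020-01-02", "2020-01-03"):
    part = warehouse / f"dt={dt}"
    part.mkdir(parents=True)
    (part / "part-00000").write_text(f"events for {dt}\n")

def files_to_read(predicate_dt=None):
    """Partition pruning: with a dt predicate, skip non-matching dirs."""
    parts = sorted(warehouse.iterdir())
    if predicate_dt is not None:
        parts = [p for p in parts if p.name == f"dt={predicate_dt}"]
    return [f for p in parts for f in sorted(p.iterdir())]
```

Without the predicate every partition's files are scanned; with it, only one directory is opened. That skipped I/O is the benefit the PARTITION option buys you on top of HDFS's physical block distribution.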

Spark with HBASE vs Spark with HDFS

I know that HBASE is a columnar database that stores structured data of tables into HDFS by column instead of by row. I know that Spark can read/write from HDFS and that there is some HBASE-connector for Spark that can now also read-write HBASE tables.
Questions:
1) What are the added capabilities brought by layering Spark on top of HBase instead of using HBase alone? Does it depend only on programmer capabilities, or is there a performance reason to do it? Are there things Spark can do that HBase alone can't?
2) Stemming from the previous question: when should you add HBase between HDFS and Spark instead of using HDFS directly?
1) What are the added capabilities brought by layering Spark on top of HBASE instead of using HBASE solely? It depends only on programmer capabilities or is there any performance reason to do that? Are there things Spark can do and HBASE solely can't do?
At Splice Machine, we use Spark for our analytics on top of HBase. HBase does not have an execution engine, and Spark provides a competent execution engine on top of HBase (intermediate results, relational algebra, etc.). HBase is an MVCC storage structure and Spark is an execution engine; they are natural complements to one another.
2) Stemming from previous question, when you should add HBASE between HDFS and SPARK instead of using directly HDFS?
Small reads, concurrent write/read patterns, incremental updates (most etl)
Good luck...
I'd say that using distributed computing engines like Apache Hadoop or Apache Spark basically implies a full scan of the data source; that's the whole point of processing the data all at once.
HBase is good at cherry-picking particular records, while HDFS is certainly much more performant for full scans.
When you write to HBase from Hadoop or Spark, don't write to the database the usual way: it's hugely slow. Instead, write the data out as HFiles directly and then bulk-import them into HBase.
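That bulk-load pattern can be sketched conceptually: records are sorted by row key and split at the table's existing region boundaries, so each region can adopt one sorted file wholesale instead of absorbing millions of individual puts. The Python below is illustrative only, not the real API (the real flow uses HFileOutputFormat2 plus HBase's bulk-load tooling), and the region split keys are invented.

```python
from bisect import bisect_right

# Invented region start keys: region 0 holds keys < "g", region 1 holds
# ["g", "n"), region 2 holds ["n", "t"), region 3 holds ["t", ...).
region_splits = ["g", "n", "t"]

def bulk_load_files(records):
    """Sort records by row key and group them into one run per region,
    mimicking how HFiles are prepared before a bulk import."""
    runs = [[] for _ in range(len(region_splits) + 1)]
    for key, value in sorted(records):       # HFiles must be key-sorted
        runs[bisect_right(region_splits, key)].append((key, value))
    return runs

runs = bulk_load_files([("zebra", 1), ("apple", 2), ("mango", 3)])
```

Each inner list is already sorted and falls entirely within one region's key range, which is exactly what lets the import step move whole files into place instead of replaying writes.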
The reason people invented SQL databases is that HDDs were very slow at the time; it took very clever people decades to design the various kinds of indexes that make clever use of the bottleneck resource (the disk). Now people are inventing NoSQL: we like associative arrays and we need them distributed (that's essentially what NoSQL is), because they're simple and convenient. But in today's world of cheap SSDs, a filesystem is good enough in most cases. The one requirement is that it has to be distributed to keep up with distributed computation.
Answering original questions:
These are two different tools for completely different problems.
I think if you use Apache Spark for data analysis, you have to avoid HBase (Cassandra or any other database). They can be useful for keeping aggregated data to build reports, or for picking specific records about users or items, but that happens after the processing.
HBase is a NoSQL database that works well for fetching your data quickly. Though it is a database, it uses a large number of HFiles (similar to HDFS files) to store your data and provide low-latency access.
So use HBase when your requirement is that the data be accessible to other big data applications with fast, record-level reads.
Spark, on the other hand, is an in-memory distributed computing engine with connectors to HDFS, HBase, Hive, PostgreSQL, JSON files, Parquet files, etc.
There is no considerable performance difference between reading from an HDFS file and reading from HBase up to a few GBs of data; beyond that, the HBase route becomes faster.

Why hbase even though hdfs is present

Why does Hadoop use HBase even though HDFS is available for storage?
We can also store table data as blocks in HDFS.
Is the data stored in HBase? If so, what role does HDFS serve?
HDFS is a distributed file system that is well suited for storing large files. It’s designed to support batch processing of data but doesn’t provide fast individual record lookups.
HBase is built on top of HDFS; the data actually gets stored on HDFS, and HBase is designed to provide access to single rows of data in large tables.
Overall, the differences between HDFS and HBase are
HDFS –
Is suited for high-latency, batch-processing operations
Data is primarily accessed through MapReduce
Is designed for batch processing and hence doesn’t have a concept of random reads/writes
HBase –
Is built for Low Latency operations
Provides access to single rows from billions of records
Data is accessed through shell commands, Client APIs in Java, REST, Avro or Thrift
Hadoop can use HDFS as well as HBase. You need to see the difference between a filesystem (HDFS) and a database (HBase), which offers many features compared to a plain filesystem (e.g. random access to the data).
You will need HDFS running in both cases, because HBase is built on top of the HDFS filesystem.

Hive over HBase vs Hive over HDFS

My data does not need to be loaded in real time, so I don't have to use HBase, but I was wondering if there are any performance benefits to using HBase in MR jobs; shouldn't the joins be faster due to the indexed data?
Anybody have any benchmarks?
Generally speaking, Hive/HDFS will be significantly faster than HBase. HBase sits on top of HDFS, so it adds another layer. HBase would be faster if you are looking up individual records, but you wouldn't use an MR job for that.
Performance of HBase vs. Hive: based on reported results for HBase, Hive, and Hive on HBase, the performance of either approach appears comparable.
Respectfully :) I want to tell you that if your data is not real-time and you are also planning MapReduce jobs, then just go with Hive over HDFS. Weblogs, for instance, can be processed by a Hadoop MapReduce program and stored in HDFS, and Hive supports fast reading of the data at the HDFS location, basic SQL, joins, and batch data loads into the Hive database.
Hive also provides bulk processing (and near-real-time where possible), an SQL-like interface, built-in optimized MapReduce, and partitioning of large data, which is a natural fit for HDFS and lets you drop the HBase layer; adding HBase here would just give you redundant features :)

How does Hive compare to HBase?

I'm interested in finding out how the recently-released (http://mirror.facebook.com/facebook/hive/hadoop-0.17/) Hive compares to HBase in terms of performance. The SQL-like interface used by Hive is very much preferable to the HBase API we have implemented.
It's hard to find much about Hive, but I found this snippet on the Hive site that leans heavily in favor of HBase:
Hive is based on Hadoop which is a batch processing system. Accordingly, this system does not and cannot promise low latencies on queries. The paradigm here is strictly of submitting jobs and being notified when the jobs are completed as opposed to real time queries. As a result it should not be compared with systems like Oracle where analysis is done on a significantly smaller amount of data but the analysis proceeds much more iteratively with the response times between iterations being less than a few minutes. For Hive queries response times for even the smallest jobs can be of the order of 5-10 minutes and for larger jobs this may even run into hours.
Since HBase and Hypertable are all about performance (being modeled on Google's BigTable), they sound like they would certainly be much faster than Hive, at the cost of functionality and a higher learning curve (e.g., they don't have joins or SQL-like syntax).
From one perspective, Hive consists of five main components: a SQL-like grammar and parser, a query planner, a query execution engine, a metadata repository, and a columnar storage layout. Its primary focus is data warehouse-style analytical workloads, so low latency retrieval of values by key is not necessary.
HBase has its own metadata repository and columnar storage layout. It is possible to author HiveQL queries over HBase tables, allowing HBase to take advantage of Hive's grammar and parser, query planner, and query execution engine. See http://wiki.apache.org/hadoop/Hive/HBaseIntegration for more details.
Hive is an analytics tool. Just like Pig, it was designed for ad hoc batch processing of potentially enormous amounts of data by leveraging MapReduce. Think terabytes. Imagine trying to do that in a relational database...
HBase is a column-based key-value store modeled on BigTable. You can't do queries per se, though you can run MapReduce jobs over HBase. Its primary use case is fetching rows by key, or scanning ranges of rows. A major feature is data locality when scanning across ranges of row keys for a 'family' of columns.
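Why range scans are cheap is worth a sketch: because rows are kept sorted by row key, a scan over a key range only touches the contiguous slice between the start and stop keys rather than the whole table. The Python below is illustrative only, not the HBase client API, and the row keys are invented; note the common trick of prefixing keys with an entity id so that one entity's rows are adjacent.

```python
import bisect

# A toy "table", kept sorted by row key as HBase does.
table = sorted([
    ("user1|2020-01-01", "login"),
    ("user1|2020-01-05", "purchase"),
    ("user2|2020-01-02", "login"),
    ("user2|2020-01-03", "logout"),
])
keys = [k for k, _ in table]

def scan(start_row, stop_row):
    """Rows with start_row <= key < stop_row: a contiguous slice,
    like an HBase Scan with start and stop rows."""
    lo = bisect.bisect_left(keys, start_row)
    hi = bisect.bisect_left(keys, stop_row)
    return table[lo:hi]
```

Scanning `"user1|"` to `"user1~"` (`~` sorts after `|` in ASCII, a common stop-key trick) returns exactly user1's rows without looking at anyone else's.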
To my humble knowledge, Hive is more comparable to Pig: Hive is SQL-like and Pig is script-based.
Hive seems more complicated, with query optimization and an execution engine, and it requires the end user to specify schema parameters (partitions, etc.).
Both are intended to process text files or SequenceFiles.
HBase is for key-value data storage and retrieval: you can scan or filter over those key-value pairs (rows), but you cannot run queries over them.
Hive and HBase are used for different purposes.
Hive:
Pros:
Apache Hive is a data warehouse infrastructure built on top of Hadoop.
It allows querying data stored on HDFS for analysis via HQL, an SQL-like language, which is converted into a series of MapReduce jobs
It only runs batch processes on Hadoop.
It's JDBC compliant, and it integrates with existing SQL-based tools
Hive supports partitions
It supports analytical querying of data collected over a period of time
Cons:
It does not currently support update statements
It should be provided with a predefined schema to map files and directories into columns
HBase:
Pros:
A scalable, distributed database that supports structured data storage for large tables
It provides random, real time read/write access to your Big Data. HBase operations run in real-time on its database rather than MapReduce jobs
It supports partitioning of tables, and tables are further split into column families
Scales horizontally with huge amounts of data by using Hadoop
Provides key-based access to data when storing or retrieving; it supports adding and updating rows
Supports versioning of data
Cons:
HBase queries are written in a custom language that needs to be learned
HBase isn’t fully ACID compliant
It can't be used with complicated access patterns (such as joins)
It is also not a complete substitute for HDFS when doing large batch MapReduce
Summary:
Hive can be used for analytical queries while HBase for real-time querying. Data can even be read and written from Hive to HBase and back again.
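Two of the HBase points in the list above, key-based add/update and versioning of data, can be sketched together: HBase keeps multiple timestamped versions of each cell and returns the newest by default, so an update is simply a newer version. The Python below is an illustrative sketch with invented names, not the HBase API.

```python
import itertools

_clock = itertools.count(1)   # stand-in for cell timestamps

class VersionedStore:
    """Toy model of HBase's versioned cells: each (row, column) keeps
    up to max_versions timestamped values, newest read by default."""

    def __init__(self, max_versions=3):
        self.cells = {}               # (row, column) -> [(ts, value), ...]
        self.max_versions = max_versions

    def put(self, row, column, value):
        versions = self.cells.setdefault((row, column), [])
        versions.append((next(_clock), value))
        del versions[:-self.max_versions]   # retain only the newest N

    def get(self, row, column):
        versions = self.cells.get((row, column))
        return versions[-1][1] if versions else None  # newest version wins

t = VersionedStore()
t.put("row1", "cf:name", "alice")
t.put("row1", "cf:name", "bob")   # an update is just a newer version
```

This is also why HBase handles incremental updates gracefully while plain HDFS files, being append-only, do not.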
As of the most recent Hive releases, a lot has changed that warrants a small update: Hive and HBase are now integrated. This means Hive can be used as a query layer over an HBase datastore. For those looking for alternative HBase interfaces, Pig also offers a really nice way of loading and storing HBase data. Additionally, it looks like Cloudera Impala may offer substantially faster Hive-style queries on top of HBase; they claim up to 45x faster queries than traditional Hive setups.
To compare Hive with Hbase, I'd like to recall the definition below:
A database designed to handle transactions isn't designed to handle analytics. It isn't structured to do analytics well. A data warehouse, on the other hand, is structured to make analytics fast and easy.
Hive is a data warehouse infrastructure built on top of Hadoop which is suitable for long running ETL jobs.
HBase is a database designed to handle real-time transactions.
