Sensor data with SAP HANA and Hadoop/HDFS

I would like to save sensor data in a suitable database.
I have 100,000 writes every minute, with each write about 100 bytes in size.
I also want to do analytics on the data.
I thought about Hadoop, because it has many different frameworks to analyze the data (e.g. Apache Spark).
Now my problem:
HBase, a NoSQL database, would be a suitable solution, because it has a column-family data model for accessing large columns. But it runs on top of HDFS.
HDFS uses 64 MB data blocks. What does that mean for me if I have 100-byte data?
I would also like to run machine learning on top of Hadoop. Would it be possible to use HBase and SAP HANA together? (SAP HANA integrates with Hadoop.)

Let me try to address your points step by step:
I would like to save sensor data in a suitable database.
I would suggest something like OpenTSDB running on HBase here, since you also want to run a Hadoop cluster anyhow.
I have 100,000 writes every minute, with each write about 100 bytes in size.
As you correctly point out, small messages/files are an issue for HDFS. Not so for HBase though (the block size is abstracted away by HBase, no need to adjust it for the underlying HDFS).
A solution like OpenTSDB on HBase, or pure HBase, will work just fine for this load.
That said, since you apparently want to access your data via HBase and also via SAP HANA (which will probably require aggregating measurements from many 100-byte records into larger files, because there the HDFS block size does come into play), I would suggest handling incoming data via Kafka first and then writing from Kafka into raw HDFS (in a format compatible with HANA) and into HBase via separate Kafka consumers.
Would it be possible to use HBase and SAP HANA together?
See the explanation above; Kafka (or a similar distributed queue) is what you want for ingesting a stream of small messages into multiple stores, in my opinion.
HDFS uses 64 MB data blocks. What does that mean for me if I have 100-byte data?
It doesn't matter to HBase.
It doesn't matter to Kafka either (at least not at your throughput and message sizes :)).
Raw HDFS storage will require you to aggregate those 100-byte messages into larger files yourself (Avro may be helpful here; see the sketch below).
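To make the Kafka fan-out concrete, here is a minimal sketch (Python, not production code) of the two independent consumers: one batches many small readings into larger Avro files destined for HDFS/HANA, the other writes every reading into HBase. The topic, table, host names, JSON message layout and the client libraries (kafka-python, fastavro, happybase) are all illustrative assumptions, not a prescribed stack.

# Sketch only: two Kafka consumer groups fan the same stream out to HDFS and HBase.
# Topic, table and host names and the JSON message layout are assumptions for illustration.
import json
import time

from kafka import KafkaConsumer            # pip install kafka-python
from fastavro import writer, parse_schema  # pip install fastavro
import happybase                            # pip install happybase (HBase Thrift gateway)

SCHEMA = parse_schema({
    "name": "SensorReading", "type": "record",
    "fields": [
        {"name": "sensor_id", "type": "string"},
        {"name": "ts", "type": "long"},
        {"name": "value", "type": "double"},
    ],
})

def consume_to_avro(batch_size=50000):
    # Aggregate many ~100-byte messages into one Avro file per batch, so the
    # files that eventually land in HDFS are far larger than a single reading.
    consumer = KafkaConsumer("sensor-readings",
                             bootstrap_servers="kafka:9092",
                             group_id="hdfs-writer",
                             value_deserializer=lambda b: json.loads(b))
    batch = []
    for msg in consumer:
        batch.append(msg.value)
        if len(batch) >= batch_size:
            path = "/staging/readings-%d.avro" % int(time.time())
            with open(path, "wb") as out:   # then move the file into HDFS (e.g. hdfs dfs -put)
                writer(out, SCHEMA, batch)
            batch = []

def consume_to_hbase():
    # Write every reading as its own HBase row, keyed by sensor id + timestamp.
    consumer = KafkaConsumer("sensor-readings",
                             bootstrap_servers="kafka:9092",
                             group_id="hbase-writer",
                             value_deserializer=lambda b: json.loads(b))
    table = happybase.Connection("hbase-thrift-host").table("sensor")
    for msg in consumer:
        r = msg.value
        rowkey = ("%s-%d" % (r["sensor_id"], r["ts"])).encode()
        table.put(rowkey, {b"d:value": str(r["value"]).encode()})

Because the two consumers use different group_ids, each one receives the full stream independently, which is exactly the "ingest once, serve to multiple stores" pattern described above.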
I would also like to run machine learning on top of Hadoop.
Not an issue. HDFS is a distributed system, so you can scale it out for more performance and add a machine-learning solution based on Spark, or whatever else you want to run on top of Hadoop, at any time. Worst case, you will have to add another machine to your cluster, but there is no hard limit on the number of things you can run simultaneously on your data once it is stored in HDFS and your cluster is powerful enough.
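A minimal sketch of what that could look like with Spark MLlib, assuming the aggregated readings have already landed in HDFS; the path, the column names (temperature, humidity, power_usage) and the choice of a linear regression are illustrative assumptions only.

# Sketch: train a simple model with Spark MLlib on sensor data already stored in HDFS.
# Path and columns are assumptions; any pyspark.ml estimator would slot in the same way.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("sensor-ml").getOrCreate()

df = spark.read.parquet("hdfs:///data/sensor/aggregated")   # e.g. the batched readings

assembler = VectorAssembler(inputCols=["temperature", "humidity"], outputCol="features")
train = assembler.transform(df).select("features", "power_usage")

model = LinearRegression(labelCol="power_usage").fit(train)
print(model.coefficients, model.intercept)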

Related

Big data lambda architecture with Cassandra and Hadoop

I am working on a Big Data solution for sensor data and predictive analytics.
I am new to Big Data, and have read about the lambda architecture.
I thought about using the Cassandra database together with Hadoop.
Cassandra is a highly available, partition-tolerant database, and Hadoop HDFS is a file system for large analytics jobs.
If I receive the data from an Internet of Things device, should the data be saved first in Hadoop and then in Cassandra?
The lambda architecture has Hadoop in the batch layer, receiving the data and sending it to a NoSQL database in the serving layer.
Why should the data go into Hadoop first, and what kind of data is stored in Cassandra if Hadoop contains the raw data?
The stream layer is out of scope for the moment.
I just want to understand the usage of Cassandra and Hadoop together.
The data in Hadoop is for large analytics jobs, and Cassandra should hold the results of my Hadoop jobs.
Does that mean I can store my raw data in both? Can I store my raw data in Cassandra and in Hadoop if my application needs more than just the large analytics jobs?
Example
INSERT INTO temperature (weatherstation_id, event_time, temperature)
VALUES ('1234ABCD', '2013-04-03 07:02:00', '73F');
If this is my insert and I have thousands of them in a single minute,
do I use Hadoop when I want to run some large jobs?
But I also need every single data row for my application, without analytics. Does Cassandra store it too?
The trade-off is between latency and throughput. Hadoop provides high throughput, but its latency is quite high, so Hadoop is used for batch processing in the lambda architecture. There may, however, be a requirement to pass pre-computed (or summarized) data on to another layer, such as a visualization layer. That pre-computed data is typically stored in Cassandra or HBase to get low latency.
As you receive the data from an IoT device, you need to save it as quickly as you can. That's exactly what Cassandra is great at.
Then you need to process this data, and since the data volume is large, realistically you do not want on-the-fly processing but batch processing (nightly, for example) instead.
And that is where Hadoop comes in.
So you have to extract the data from Cassandra, put it into Hadoop's file system (HDFS), and then do some processing (via Hive or Spark).
You could also think about a direct Cassandra-to-Spark streaming job, but I'd suggest copying the data out of Cassandra first, as this allows you to use the copy as a sandbox (to debug jobs, test new algorithms, etc.) without any impact on Cassandra cluster performance.
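A rough sketch of that "copy out of Cassandra, then batch-process on HDFS" step, assuming the spark-cassandra-connector package is on the Spark classpath and using the keyspace/table from the INSERT example above; the host names and HDFS paths are placeholders.

# Sketch: nightly batch job that copies raw rows from Cassandra into HDFS and
# then computes a summary against the HDFS copy instead of the live cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("cassandra-to-hdfs")
         .config("spark.cassandra.connection.host", "cassandra-host")
         .getOrCreate())

# 1) Copy the raw rows out of Cassandra (Parquet on HDFS is compact and splittable).
raw = (spark.read.format("org.apache.spark.sql.cassandra")
       .options(keyspace="sensors", table="temperature")
       .load())
raw.write.mode("overwrite").parquet("hdfs:///data/raw/temperature")

# 2) Run the heavy batch analytics against the HDFS copy (the "sandbox"),
#    producing a small result set for the serving layer.
daily_counts = (spark.read.parquet("hdfs:///data/raw/temperature")
                .groupBy("weatherstation_id", F.to_date("event_time").alias("day"))
                .count())
daily_counts.write.mode("overwrite").parquet("hdfs:///data/serving/daily_counts")

The small summarized result is what you would then load back into Cassandra (or another serving store) for low-latency reads.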
You can read about Cassandra and big data here.
Disclaimer: I am the author of this post.

Spark with HBase vs Spark with HDFS

I know that HBase is a columnar database that stores structured table data in HDFS by column instead of by row. I know that Spark can read from and write to HDFS, and that there is an HBase connector for Spark that can now also read and write HBase tables.
Questions:
1) What are the added capabilities brought by layering Spark on top of HBase instead of using HBase alone? Does it depend only on programmer capabilities, or is there a performance reason to do it? Are there things Spark can do that HBase alone can't?
2) Following on from the previous question, when should you add HBase between HDFS and Spark instead of using HDFS directly?
1) What are the added capabilities brought by layering Spark on top of HBase instead of using HBase alone? Does it depend only on programmer capabilities, or is there a performance reason to do it? Are there things Spark can do that HBase alone can't?
At Splice Machine, we use Spark for our analytics on top of HBase. HBase does not have an execution engine, and Spark provides a competent execution engine on top of HBase (intermediate results, relational algebra, etc.). HBase is an MVCC storage structure and Spark is an execution engine. They are natural complements to one another.
2) Following on from the previous question, when should you add HBase between HDFS and Spark instead of using HDFS directly?
Small reads, concurrent write/read patterns, incremental updates (most ETL).
Good luck...
I'd say that using distributed computing engines like Apache Hadoop or Apache Spark implies basically a full scan of any data source. That's the whole point of processing the data all at once.
HBase is good at cherry-picking particular records, while HDFS is certainly much more performant for full scans.
When you write to HBase from Hadoop or Spark, you won't write to the database the usual way - that is hugely slow! Instead, you want to write the data to HFiles directly and then bulk-import them into HBase.
The reason people invented SQL databases is that HDDs were very, very slow at the time. It took the cleverest people tens of years to invent different kinds of indexes that make clever use of the bottleneck resource (the disk). Now people try to invent NoSQL - we like associative arrays and we need them to be distributed (which is essentially what NoSQL is) - they're very simple and very convenient. But in today's world, with SSDs being cheap, no one really needs databases - a file system is good enough in most cases. The one thing, though, is that it has to be distributed to keep up with distributed computation.
Answering the original questions:
These are two different tools for completely different problems.
I think that if you use Apache Spark for data analysis, you should avoid HBase (and Cassandra or any other database). They can be useful for keeping aggregated data to build reports, or for picking specific records about users or items, but that happens after the processing.
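To illustrate the "cherry-picking vs. full scan" distinction with a hedged, made-up example (the table name, row key layout and file path are placeholders; happybase talks to HBase via its Thrift gateway):

# Sketch: the access pattern each system is good at.
import happybase                      # point lookups against HBase via Thrift
from pyspark.sql import SparkSession  # full scans over files in HDFS

# Cherry-pick one record by key: HBase answers this in milliseconds.
table = happybase.Connection("hbase-thrift-host").table("events")
row = table.row(b"user42#2016-01-01")          # single indexed lookup
print(row.get(b"d:clicks"))

# Full scan plus aggregation over everything: Spark over HDFS files shines here.
spark = SparkSession.builder.appName("full-scan").getOrCreate()
events = spark.read.parquet("hdfs:///data/events")
events.groupBy("country").count().show()       # touches every record once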
HBase is a NoSQL database that works well for fetching your data quickly. Although it is a database, it uses a large number of HFiles (similar to HDFS files) to store your data and provide low-latency access.
So use HBase when your requirement is that the data needs to be accessed by other big data applications.
Spark, on the other hand, is an in-memory distributed computing engine with connectivity to HDFS, HBase, Hive, PostgreSQL, JSON files, Parquet files, etc.
There is no considerable performance difference between reading from an HDFS file and reading from HBase up to a few GBs. Beyond that, the HBase connectivity becomes faster...

Can Druid replace Hadoop?

Druid is used for both real-time and batch processing. But can it totally replace Hadoop?
If not, why not? That is, what is the advantage of Hadoop over Druid?
I have read that Druid is used along with Hadoop. So can the use of Hadoop be avoided?
We are talking about two slightly related but very different technologies here.
Druid is a real-time analytics system and is a perfect fit for time-series and time-based event aggregation.
Hadoop is HDFS (a distributed file system) plus MapReduce (a paradigm for executing distributed processes), which together have created an ecosystem for distributed processing and act as an underlying/influencing technology for many other open-source projects.
You can set up Druid to use Hadoop, that is, to fire MapReduce jobs to index batch data and to read its indexed data from HDFS (of course it will still cache that data on the local disk).
If you want to ignore Hadoop, you can do your indexing and loading from a local machine as well, of course with the penalty of being limited to one machine.
Can you avoid using Hadoop with Druid? Yes, you can stream data in real time into a Druid cluster rather than batch-loading it with Hadoop. One way to do this is to stream data into Kafka, which will handle incoming events and pass them on to Storm, which can then process and load them into Druid real-time nodes.
Typically this setup is used with Hadoop in parallel, because streamed real-time data comes with its own baggage and often needs to be fixed up and backfilled. That whole architecture has been dubbed "Lambda" by some.
Druid is used for both real-time and batch processing. But can it totally replace Hadoop? If not, why not?
It depends on your use cases. Have a look at the official Druid documentation.
Druid is a good choice for the following use cases:
Insert rates are very high, but updates are less common.
Most queries are aggregation and reporting queries, with low latency of 100 ms to a few seconds.
The data has a time component.
You load data from Kafka, HDFS, flat files, or object storage like Amazon S3.
Druid is not a good choice for the following use cases:
You need low-latency updates of existing records using a primary key. Druid supports streaming inserts, but not streaming updates.
You are building an offline reporting system where query latency is not very important.
You need big joins.
So if you are looking for an offline reporting system where query latency is not important, Hadoop may score better in that scenario.
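To make the low-latency aggregation use case above concrete, here is roughly what a Druid native timeseries query looks like when posted to a broker; the datasource, field names and broker address are invented for illustration.

# Sketch: a Druid native timeseries query sent to the broker's /druid/v2 endpoint.
# Datasource, field and host names are assumptions, not a real deployment.
import requests

query = {
    "queryType": "timeseries",
    "dataSource": "sensor_readings",
    "granularity": "hour",
    "intervals": ["2016-01-01/2016-01-02"],
    "aggregations": [
        {"type": "count", "name": "rows"},
        {"type": "doubleSum", "fieldName": "value", "name": "value_sum"},
    ],
}

resp = requests.post("http://druid-broker:8082/druid/v2", json=query)
for bucket in resp.json():           # one result object per time bucket
    print(bucket["timestamp"], bucket["result"])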

Where is data stored in Hadoop?

Though I have understood the architecture of Hadoop a bit, I have a gap in my understanding of where exactly the data is located.
My question is: suppose I have a large amount of data from some random books. Is the book data already stored across multiple nodes using HDFS, and do we then perform MapReduce on each node and get the result back on our system?
Or
Do we store the data somewhere in a large database and, whenever we want to perform the MapReduce operation, take the chunks and store them on multiple nodes to perform the operation?
Either is possible; it really depends on your use case and needs. However, Hadoop MapReduce generally runs against data stored in HDFS. The system is designed around data locality, which requires the data to be in HDFS: the map tasks run on the same piece of hardware where the data is stored, in order to improve performance.
That said, if for some reason your data must be stored outside of HDFS and then processed using MapReduce, it can be done, but it is a bit more work and is not as efficient as processing data locally in HDFS.
So let's take two use cases. Start with log files. Log files as they are are not particularly accessible; they just need to be stuck somewhere and stored for later analysis. HDFS is perfect for this. If you really need a log back out you can get it, but generally people will be looking for the output of the analytics. So store your logs in HDFS and process them normally.
However, data in the format ideal for HDFS and Hadoop MapReduce (many records in a single large flat file) is not what I would consider highly accessible. Hadoop MapReduce expects input files that are many megabytes in size with many records per file. The more you diverge from this, the more your performance will decline. Sometimes your data needs to be online at all times, and HDFS is not ideal for this. Take your book example: if these books are used in an application that needs the content accessible in an online fashion, i.e. editing and annotating, you may choose to store them in a database. Then, when you need to run batch analytics, you use a custom InputFormat to retrieve the records from the database and process them in MapReduce.
I am currently doing this with a web crawler that stores the web pages individually in Amazon S3. Web pages are too small to serve as a single efficient input to MapReduce, so I have a custom InputFormat that feeds each mapper several files. The output of this MapReduce job is eventually written back to S3, and because I am using Amazon EMR, the Hadoop cluster goes away.
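The custom InputFormat described here is Java, but the same "group many small files per task" idea can be sketched in PySpark for readability; the S3 bucket, path and partition count are placeholders, and this is an analogue of the approach, not the author's actual code.

# Sketch: read many small objects (e.g. crawled pages) and pack them into a
# modest number of partitions so each task processes a batch of files, not one.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("small-files").getOrCreate()
sc = spark.sparkContext

# wholeTextFiles yields (path, content) pairs and already groups several small
# files into each partition; minPartitions is a hint for how many partitions to create.
pages = sc.wholeTextFiles("s3a://my-crawl-bucket/pages/*", minPartitions=64)

word_counts = (pages.flatMap(lambda kv: kv[1].split())
                    .map(lambda w: (w, 1))
                    .reduceByKey(lambda a, b: a + b))
word_counts.saveAsTextFile("s3a://my-crawl-bucket/output/word_counts")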

Is Hadoop Suited to Serve 100 byte Records Out of 50GB Dataset?

We have a question about whether Hadoop is suitable for simple tasks that require no application running, but do require very fast reads and writes of small amounts of data.
The requirement is to be able to write messages roughly 100-200 bytes long, with a couple of indexes, at a rate of 30 per second, and at the same time to be able to read (search by those two indexes) at a rate of roughly 10 per second. The read queries must be very fast - 100-200 milliseconds max per query - and return a few matching records.
The total data volume is expected to reach 50-100 GB and is to be maintained at that level by removing older records (something like a daily task to delete records that are older than 14 days).
As you can see, the total data volume is not really that big, but we are concerned that Hadoop's search speed may be slower than what we need anyway.
Is Hadoop a solution for this?
Thanks
Nik
Hadoop, alone, is very bad at serving out many small segments of data. However, HBase is an indexed, table-oriented, database-like system meant to run on top of Hadoop. It is excellent at serving out small indexed records. I would research that as a solution.
Another problem to keep an eye on is that importing data into HDFS or HBase is not trivial. It can slow your cluster down quite a bit, so if Hadoop is your choice, you also have to solve how to get those 75 GB into HDFS so Hadoop can touch them.
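A hedged sketch of how the "couple of indexes" requirement could map onto HBase, as suggested in the previous answer: the main table is keyed by the first index, and a small second table acts as a manual secondary index on the other field. Table names, key layout and the happybase client are all assumptions for illustration.

# Sketch: HBase has no built-in secondary indexes, so a second table maps index2 -> main row key.
import happybase

conn = happybase.Connection("hbase-thrift-host")
messages = conn.table("messages")        # row key = index1 + '#' + timestamp
by_index2 = conn.table("messages_idx2")  # row key = index2 + '#' + timestamp

def write(index1, index2, ts, payload):
    main_key = ("%s#%d" % (index1, ts)).encode()
    messages.put(main_key, {b"d:payload": payload, b"d:index2": index2.encode()})
    by_index2.put(("%s#%d" % (index2, ts)).encode(), {b"d:ref": main_key})

def read_by_index1(index1):
    # Prefix scan on the main table: fast, returns the few matching rows.
    return list(messages.scan(row_prefix=index1.encode()))

def read_by_index2(index2):
    # Look up the references, then fetch the referenced rows from the main table.
    refs = [v[b"d:ref"] for _, v in by_index2.scan(row_prefix=index2.encode())]
    return messages.rows(refs)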
As Sam noted, HBase is the Hadoop-stack solution that can handle your requirements. However, I wouldn't go with Hadoop if these are your only requirements for the data.
You can go with other NoSQL solutions like MongoDB or CouchDB, or even MySQL or Postgres.
