Using HBase / Hadoop / Storm

I receive an input file with 200 million records. The records are just keys.
For each record in this file (which I'll call SAMPLE_FILE), I need to retrieve all records from a database (which I'll call EVENT_DATABASE) that match the key. The EVENT_DATABASE can have billions of records.
For example:
SAMPLE_FILE
1234
2345
3456
EVENT_DATABASE
2345 - content C - 1
1234 - content A - 3
1234 - content B - 5
4567 - content D - 7
1234 - content K - 7
1234 - content J - 2
So the system will iterate through each record from SAMPLE_FILE and get all events that have the same key. For example, taking 1234 and querying the EVENT_DATABASE will retrieve:
1234 - content A - 3
1234 - content B - 5
1234 - content K - 7
1234 - content J - 2
Then I will execute some calculations using the result set, for example count, sum and mean:
F1 = 4 (count)
F2 = 17 (sum(3+5+7+2))
I plan to approach the problem by storing the EVENT_DATABASE in HBase. Then I will run a MapReduce job, and in the map phase I will query HBase, get the events and execute the calculations.
The process can run in batch; it does not need to be real time.
Does anyone suggest another architecture? Do I really need a MapReduce job? Can I use another approach?

I personally solved this kind of problem using MapReduce, HDFS and HBase for batch analysis. Your approach seems good for implementing your use case; I am guessing you are going to store the calculations back into HBase.
Storm could also be used to implement the same use case, but Storm really shines with streaming data and near-real-time processing rather than data at rest.

You don't really need to query HBase for every single key. In my opinion, this would be a better approach:
Create an external table in Hive over your input file.
Create an external table in Hive for your HBase table using the Hive HBase integration (https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration).
Do a join on both tables and retrieve the results (sketched below).
Your approach would have been fine if you were only querying for a subset of your input file, but since you are querying HBase for all records (200M), a join is more efficient.
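A minimal HiveQL sketch of those three steps (the table names, column names, HDFS path and HBase column mapping are assumptions for illustration):
-- External table over the SAMPLE_FILE keys
CREATE EXTERNAL TABLE sample_keys (event_key STRING)
LOCATION '/data/sample_file';
-- External table mapped onto the HBase EVENT_DATABASE table
CREATE EXTERNAL TABLE events (event_key STRING, content STRING, val INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:content,cf:val")
TBLPROPERTIES ("hbase.table.name" = "event_database");
-- One join instead of 200M point lookups; the count/sum calculations run in the same pass
SELECT e.event_key, COUNT(*) AS f1, SUM(e.val) AS f2
FROM sample_keys s
JOIN events e ON s.event_key = e.event_key
GROUP BY e.event_key;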

Related

Process huge data simultaneously

I have one table with a lot of data:
id | title | server 1 | server 2 | server 3
--------------------------------------------
1 | item1 | 110.0.0.1| 110.0.0.2| 110.0.0.3
2 | item2 | 110.0.0.4| 110.0.0.2| 110.0.0.5
..
n | itemn | 110.0.0.1| 110.0.0.2| 110.0.0.3
I want to process all this data using Spring Boot and save the result in a database. What is the easiest, simplest and best way to do that?
It seems that Apache MapReduce could do this job, but it is big and complicated to set up.
The actual use case:
one spring boot instance
select * from item;
process item by item.
The expected use case:
n spring boot instance
select * from item limit n
process item by item
consolidation of result and save in database
Have a look at Spring Batch. It allows chunking (processing the data in multiple chunks in multiple threads) and should fit your use case very well.
https://docs.spring.io/spring-batch/trunk/reference/html/spring-batch-intro.html
What I would suggest is to have one data processing pipeline set up using Spring Batch rather than n Spring Boot instances.
The Spring Batch job would have steps like these:
Extract data using Hive (select * from item) - make sure to write the output as files to an external location.
The extracted data is the input to a MapReduce framework, where each item is processed and desired output is written.
The output of the mapreduce is consolidated in this batch step.
Another process (again distributed, if possible) to save into database.

Bulk Loading Key-value pair data into HBASE

I am evaluating HBASE for dealing with a very wide dataset with a variable number of columns per row. In its raw form, my data has a variable list of parameter names and values for each row. In its transformed form, it is available in key-value pairs.
I want to load this data into HBASE. It is very easy to translate my key-value pair processed data into individual "put" statements to get the data in. However I need to bulkload as I have 1000s of columns and millions of rows, leading to billions of individual key-value pairs, requiring billions of "put" statements. Also, the list of columns (a,b,c,d,...) is not fully known ahead of time. I investigated the following options so far:
importtsv: Cannot be used because that requires the data to be pivoted from rows to columns ahead of time, with a fixed set of known columns to import.
HIVE to generate HFile: This option too requires column names to be specified ahead of time, mapping each column in the Hive table to a column in HBase.
My only option seems to be to parse a chunk of the data once, pivot it into a set of known columns, and bulk load that. This seems wasteful, as HBase is going to break it back down into key-value pairs anyway. There really should be a simpler, more efficient way of bulk loading the key-value pairs.
Raw data format:
rowkey1, {a:a1, b:b1}
rowkey2, {a:a2, c:c2}
rowkey3, {a:a3, b:b3, c:c3, d:d3}
processed Data format:
rowkey1, a, a1
rowkey1, b, b1
rowkey2, a, a2
rowkey2, c, c2
rowkey3, a, a3
rowkey3, b, b3
rowkey3, c, c3
rowkey3, d, d3
You almost assuredly want to use a custom M/R job + incremental loading (aka bulk loading).
The general process will be:
Submit an M/R job that has been configured using HFileOutputFormat.configureIncrementalLoad
Map over the raw data and write PUTs for HBase
Load the output of the job into the table using the following:
sudo -u hdfs hdfs dfs -chown -R hbase:hbase /path/to/job/output
sudo -u hbase hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /path/to/job/output table-name-here
There are ways to do the load from Java but it means impersonating HBase. The tricky part here is making sure that the files are owned by HBase and that the user running the incremental load is also HBase. This Cloudera Blog Post talks a bit more about those details.
In general I recommend taking a peek at this GH Repo which seems to cover the basics of the process.

Big data signal analysis: better way to store and query signal data

I am about to do some signal analysis with Hadoop/Spark and I need help on how to structure the whole process.
Signals are currently stored in a database; we will read them with Sqoop and transform them into files on HDFS, with a schema similar to:
<Measure ID> <Source ID> <Measure timestamp> <Signal values>
where the signal values are just strings of comma-separated floating point numbers.
000123 S001 2015/04/22T10:00:00.000Z 0.0,1.0,200.0,30.0 ... 100.0
000124 S001 2015/04/22T10:05:23.245Z 0.0,4.0,250.0,35.0 ... 10.0
...
000126 S003 2015/04/22T16:00:00.034Z 0.0,0.0,200.0,00.0 ... 600.0
We would like to write interactive/batch queries to:
apply aggregation functions over signal values
SELECT *
FROM SIGNALS
WHERE MAX(VALUES) > 1000.0
To select signals that had a peak over 1000.0.
apply aggregation over aggregation
SELECT SOURCEID, MAX(VALUES)
FROM SIGNALS
GROUP BY SOURCEID
HAVING MAX(MAX(VALUES)) > 1500.0
To select sources having at least a single signal that exceeded 1500.0.
apply user defined functions over samples
SELECT *
FROM SIGNALS
WHERE MAX(LOW_BAND_FILTER("5.0 KHz", VALUES)) > 100.0
to select signals that, after being filtered at 5.0 KHz, have at least one value over 100.0.
We need some help in order to:
find the correct file format to write the signal data to HDFS. I thought of Apache Parquet. How would you structure the data?
understand the proper approach to data analysis: is it better to create different datasets (e.g. processing data with Spark and persisting results on HDFS) or to try to do everything at query time from the original dataset?
is Hive a good tool to run queries such as the ones I wrote? We are running on Cloudera Enterprise Hadoop, so we can also use Impala.
in case we produce different derived datasets from the original one, how can we keep track of data lineage, i.e. know how the data was generated from the original version?
Thank you very much!
1) Parquet, as a columnar format, is good for OLAP. Spark support for Parquet is mature enough for production use. I suggest parsing the string representing the signal values into the following data structure (simplified):
case class Data(id: Long, signals: Array[Double])
val df = sqlContext.createDataFrame(Seq(Data(1L, Array(1.0, 1.0, 2.0)), Data(2L, Array(3.0, 5.0)), Data(2L, Array(1.5, 7.0, 8.0))))
Keeping an array of doubles allows us to define and use UDFs like this:
import scala.collection.mutable
def maxV(arr: mutable.WrappedArray[Double]) = arr.max
sqlContext.udf.register("maxVal", maxV _)
df.registerTempTable("table")
sqlContext.sql("select * from table where maxVal(signals) > 2.1").show()
+---+---------------+
| id| signals|
+---+---------------+
| 2| [3.0, 5.0]|
| 2|[1.5, 7.0, 8.0]|
+---+---------------+
sqlContext.sql("select id, max(maxVal(signals)) as maxSignal from table group by id having maxSignal > 1.5").show()
+---+---------+
| id|maxSignal|
+---+---------+
| 1| 2.0|
| 2| 8.0|
+---+---------+
Or, if you want some type safety, use the Scala DSL:
import org.apache.spark.sql.functions._
import sqlContext.implicits._ // for the $"col" column syntax
val maxVal = udf(maxV _)
df.select("*").where(maxVal($"signals") > 2.1).show()
df.select($"id", maxVal($"signals") as "maxSignal").groupBy($"id").agg(max($"maxSignal")).where(max($"maxSignal") > 2.1).show()
+---+--------------+
| id|max(maxSignal)|
+---+--------------+
| 2| 8.0|
+---+--------------+
2) It depends: if the size of your data allows you to do all processing at query time with reasonable latency, go for it. You can start with this approach and build optimized structures for slow/popular queries later.
3) Hive is slow; it has been overtaken by Impala and Spark SQL. The choice is not always easy, so we use a rule of thumb: Impala is good for queries without joins if all your data is stored in HDFS/Hive; Spark has higher latency, but its joins are reliable, it supports more data sources, and it has rich non-SQL processing capabilities (like MLlib and GraphX).
4) Keep it simple: store your raw data (master dataset) de-duplicated and partitioned (we use time-based partitions). If new data arrives in a partition and you already have downstream datasets generated, restart your pipeline for that partition.
Hope this helps
First, I believe Vitaliy's approach is very good in every aspect. (and I'm all for Spark)
I'd like to propose another approach, though. The reasons are:
We want to do Interactive queries (+ we have CDH)
Data is already structured
The need is to 'analyze' rather than 'process' the data. Spark could be overkill because (a) the data is structured, so we can put SQL queries together faster, and (b) we don't want to write a program every time we want to run a query.
Here are the steps I'd like to go with:
Ingestion using sqoop to HDFS: [optionally] use --as-parquetfile
Create an external Impala table or an internal table as you wish. If you have not transferred the file as a Parquet file, you can do that during this step. Partition, preferably by Source ID, since our groupings are going to happen on that column.
So, basically, once we've got the data transferred, all we need to do is create an Impala table, preferably in Parquet format and partitioned by the column we're going to group on (sketched below). Remember to run COMPUTE STATS after loading to help Impala run queries faster.
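A minimal Impala sketch of that layout (the table name, column names and HDFS location are assumptions; the comma-separated samples are kept as a plain string here, exactly as Sqoop delivers them):
CREATE EXTERNAL TABLE signals (
  measure_id STRING,
  measure_ts TIMESTAMP,
  signal_values STRING  -- comma-separated samples, as in the raw export
)
PARTITIONED BY (source_id STRING)
STORED AS PARQUET
LOCATION '/data/signals';
-- after each load, gather statistics so the planner has row counts and column stats
COMPUTE STATS signals;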
Moving data:
- if we need to generate feed out of the results, create a separate file
- if another system is going to update the existing data, then move the data to a different location while creating and loading the table
- if it's only about queries and analysis and getting reports (i.e, external tables suffice), we don't need to move the data unnecessarily
- we can create an external Hive table on top of the same data. If we need to run long-running batch queries, we can use Hive; it's a no-no for interactive queries, though. If we create any derived tables from those queries and want to use them through Impala, remember to run 'invalidate metadata' before running Impala queries on the Hive-generated tables
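For example, assuming a Hive-generated derived table named signals_derived:
INVALIDATE METADATA signals_derived;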
Lineage - I have not gone deeper into it, here's a link on Impala lineage using Cloudera Navigator

Need suggestions implementing recursive logic in Hive UDF

We have a Hive table that has around 500 million rows. Each row represents a "version" of the data, and I've been tasked with creating a table that contains just the final version of each row. Unfortunately, each version of the data only contains a link to its previous version. The actual computation for deriving the final version of a row is not trivial, but I believe the example below illustrates the issue.
For example:
id | xreference
----------------
1 | null -- original version of 1
2 | 1 -- update to id 1
3 | 2 -- update to id 2-1
4 | null -- original version of 4
5 | 4 -- update to version 4
6 | null -- original version of 6
When deriving the final version of the row from the above table I would expect the rows with ids 3, 5 and 6 to be produced.
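For this simplified example, where every update points at its immediate predecessor, the final versions are exactly the rows whose id never appears as an xreference, which plain HiveQL can express with a self left outer join. A hedged sketch, assuming the table is named version_table:
SELECT v.id, v.xreference
FROM version_table v
LEFT OUTER JOIN version_table nxt ON nxt.xreference = v.id
WHERE nxt.id IS NULL;  -- rows never superseded: 3, 5 and 6
The real derivation is more involved, so treat this only as an illustration of the toy case.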
I implemented a solution for this in Cascading which, although correct, has an n^2 runtime and takes half a day to complete.
I also implemented a solution using Giraph, which worked great on small data sets, but I kept running out of memory for larger sets. With that implementation I basically created a vertex for every id and an edge between each id/xreference pair.
We have now been looking into consolidating/simplifying our ETL process, and I've been asked to provide an implementation that can run as a Hive UDF. I know that Oracle provides functions for just this sort of thing, but I haven't found much in the way of Hive functions.
I'm looking for suggestions for implementing this type of recursion specifically in Hive, but I'd be glad to hear any other suggestions as well.
Hadoop 1.3
Hive 0.11
Cascading 2.2
12 node development cluster
20 node production cluster

how to implement lookup logic in hadoop pig

Hi, I would like to know how to implement lookup logic in Hadoop Pig. I have a set of records, say for a weblog user, and I need to go back and fetch some fields from his first visit (not the current one).
This is doable in Java, but is there a way to implement it in Hadoop Pig?
Example:
Suppose that while traversing the records of one particular user, identified by col1 and col2, we want to output the first value of lookup_col for that user, in this case '1'.
col1 col2 lookup_col
---- ---- -----
326 8979 1
326 8979 4
326 8979 3
326 8979 0
You can implement this as a pig UDF.
Alternatively, you can use simple SQL-like logic: aggregate the visits by user (I'm not sure how you define a user or how you plan to look up visits by user, but that's another matter), get the first one, and then left-join users with agg_visits.
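A hedged SQL sketch of that aggregate-then-join idea (the visits table and the visit_ts column used to define "first" are assumptions; the same shape in Pig would be a GROUP followed by a JOIN):
SELECT v.col1, v.col2, v.lookup_col AS first_lookup
FROM (
  SELECT col1, col2, MIN(visit_ts) AS first_ts  -- earliest visit per user
  FROM visits
  GROUP BY col1, col2
) agg_visits
JOIN visits v
  ON v.col1 = agg_visits.col1
 AND v.col2 = agg_visits.col2
 AND v.visit_ts = agg_visits.first_ts;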
A 'replicated join' in Pig is essentially a lookup in a set that is distributed amongst the nodes and loaded into memory. However, you can get more than a single result because it's a JOIN operation, not a lookup - so if you aggregate the data beforehand, you can make sure that you only have a single record per key.
