Partitioning and Bucketing in Hive - hadoop

My hive table will have call record data.
Three columns of the table are: field1 - CALL_DATE, field2 - FROM_PHONE_NUM, field3 - TO_PHONE.
I would query something like:
1) I want to get all call records between particular dates.
2) I want to get all call records for a FROM_PHONE_NUM phone number between certain dates.
3) I want to get all call records for a TO_PHONE phone number between certain dates.
My table size is approximately 6 TB.
How do I need to apply partitioning or bucketing for better performance of all of these queries?

Your requirement is always to get data between certain dates and do filtering on it, so partition the table based on date.
See the Hive documentation on how to create dynamic partitions.
You can have the partition key date formatted as yyyymmdd
(e.g. 20170406 for today, 6th April 2017).
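A minimal sketch of such a table and a pruned query (the storage format and sample values are my assumptions, not from the question):

CREATE TABLE call_records (
  from_phone_num STRING,
  to_phone STRING
  -- ... other call record columns
)
PARTITIONED BY (call_date INT)  -- yyyymmdd, e.g. 20170406
STORED AS ORC;                  -- assumed storage format

-- All three query patterns then prune by the date partition, e.g.:
SELECT * FROM call_records
WHERE call_date BETWEEN 20170401 AND 20170406
  AND from_phone_num = '5551234567';  -- hypothetical number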

Related

Hadoop partitioning. How do you efficiently design a Hive/Impala table?

How do you efficiently design a Hive/Impala table considering the following facts?
The table receives tool data of about 100 million rows every day. The date on which it receives the data is stored in a column in the table, along with its tool id.
Each tool receives about 500 runs per day, which are identified by the column run id. Each run id contains data of approximately 1 MB in size.
The default block size is 64 MB.
The table can be searched by date, tool id and run id, in this order.
If you are doing analytics on this data then a solid choice with Impala is using the Parquet format. What has worked well for our users is to partition by year, month and day, based on a date value on the record.
So for example: CREATE TABLE foo (tool_id INT, eff_dt TIMESTAMP) PARTITIONED BY (year INT, month INT, day INT) STORED AS PARQUET;
When loading the data into this table we use something like this to create dynamic partitions:
INSERT INTO foo partition (year, month, day)
SELECT tool_id, eff_dt, year(eff_dt), month(eff_dt), day(eff_dt)
FROM source_table;
Then you train your users to add YEAR, MONTH and DAY to their WHERE clause if they want the best performance, so that the query hits the partitions. Then have them add eff_dt to the SELECT statement so they have a date value in the format they like to see in their final results.
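For example, a pruned query against the foo table above might look like this:

SELECT tool_id, eff_dt
FROM foo
WHERE year = 2016 AND month = 4 AND day = 2;  -- hits only one partition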
In CDH, Parquet stores data by default in 256 MB chunks (which is configurable). Here is how to configure it: http://www.cloudera.com/documentation/enterprise/latest/topics/impala_parquet_file_size.html
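For instance, the option can be set per Impala session before loading (a sketch; check the linked docs for the exact semantics in your version):

SET PARQUET_FILE_SIZE=256m;  -- applies to subsequent INSERT ... SELECT statements in this session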

Query a table in different ways or orderings in Cassandra

I've recently started to play around with Cassandra. My understanding is that in a Cassandra table you define 2 keys, which can be either single column or composites:
The Partitioning Key: determines how to distribute data across nodes
The Clustering Key: determines the order in which records with the same partitioning key (i.e. within the same node) are written. This is also the order in which the records will be read.
Data from a table will always be sorted in the same order, which is the order of the clustering key column(s). So a table must be designed for a specific query.
But what if I need to perform 2 different queries on the data from a table? What is the best way to solve this when using Cassandra?
Example Scenario
Let's say I have a simple table containing posts that users have written :
CREATE TABLE posts (
username varchar,
creation timestamp,
content varchar,
PRIMARY KEY ((username), creation)
);
This table was "designed" to perform the following query, which works very well for me:
SELECT * FROM posts WHERE username='luke' [ORDER BY creation DESC];
Queries
But what if I need to get all posts regardless of the username, in order of time:
Query (1): SELECT * FROM posts ORDER BY creation;
Or get the posts in alphabetical order of the content:
Query (2): SELECT * FROM posts WHERE username='luke' ORDER BY content;
I know that it's not possible given the table I created, but what are the alternatives and best practices to solve this?
Solution Ideas
Here are a few ideas spawned from my imagination (just to show that at least I tried):
Querying with the IN clause to select posts from many users. This could help in Query (1). When using the IN clause, you can fetch globally sorted results if you disable paging. But using the IN clause quickly leads to bad performance when the number of usernames grows.
Maintaining full copies of the table for each query, each copy using its own PRIMARY KEY adapted to the query it is trying to serve.
Having a main table with a UUID as partitioning key. Then creating smaller copies of the table for each query, which only contain the (key) columns useful for their own sort order, and the UUID for each row of the main table. The smaller tables would serve only as "sorting indexes" to query a list of UUID as result, which can then be fetched using the main table.
I'm new to NoSQL; I just want to know what the correct/durable/efficient way of doing this is.
The SELECT * FROM posts ORDER BY creation; will result in a full cluster scan because you do not provide any partition key. And the ORDER BY clause in this query won't work anyway.
Your requirement "I need to get all posts regardless of the username, in order of time" is very hard to achieve in a distributed system; it requires you to:
fetch all user posts and move them to a single node (coordinator)
order them by date
take top N latest posts
Point 1 requires a full table scan: as long as you don't fetch all records, the ordering cannot be achieved, unless you use a Cassandra clustering column to order at insertion time. But in that case, all posts would be stored in the same partition, and this partition would grow forever ...
Query SELECT * FROM posts WHERE username='luke' ORDER BY content; is possible using a denormalized table or with the new materialized view feature (http://www.doanduyhai.com/blog/?p=1930)
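A sketch of such a materialized view (Cassandra 3.x+; the view name is mine):

CREATE MATERIALIZED VIEW posts_by_content_mv AS
  SELECT username, content, creation
  FROM posts
  WHERE username IS NOT NULL AND content IS NOT NULL AND creation IS NOT NULL
  PRIMARY KEY ((username), content, creation);

-- the query then becomes:
SELECT * FROM posts_by_content_mv WHERE username = 'luke';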
Question 1:
Depending on the range of times you're interested in, I bet you could model this with time buckets.
You can do this by making the primary key a year, year-month, or year-month-day (or finer time intervals), depending on your use case.
The basic idea is that you bucket changes by whatever suits your use case. For example:
If you often need to search these posts over months in the past, then you may want to use the year as the PK.
If you usually need to search the posts over several days in the past, then you may want to use a year-month as the PK.
If you usually need to search the posts for yesterday or the last couple of days, then you may want to use a year-month-day as your PK.
I'll give a fleshed out example with yyyy-mm-dd as the PK:
The table will now be:
CREATE TABLE posts_by_creation (
creation_year int,
creation_month int,
creation_day int,
creation timeuuid,
username text, -- using text instead of varchar, they're essentially the same
content text,
PRIMARY KEY ((creation_year,creation_month,creation_day), creation)
);
I changed creation to be a timeuuid to guarantee a unique row for each post creation event. If we used just a timestamp you could theoretically overwrite an existing post creation record in here.
Now we can insert rows with the partition key (PK) columns creation_year, creation_month, creation_day set based on the creation time:
INSERT INTO posts_by_creation (creation_year, creation_month, creation_day, creation, username, content) VALUES (2016, 4, 2, now(), 'fromanator', 'content update1');
INSERT INTO posts_by_creation (creation_year, creation_month, creation_day, creation, username, content) VALUES (2016, 4, 2, now(), 'fromanator', 'content update2');
now() is a CQL function that generates a timeUUID; you would probably want to generate it in the application instead, parse out the yyyy-mm-dd for the PK, and then insert the timeUUID into the clustering column.
As a usage example for this table, let's say you wanted to see all of today's changes; your CQL would look like:
SELECT * FROM posts_by_creation WHERE creation_year = 2016 AND creation_month = 4 AND creation_day = 2;
Or if you wanted to find all of the changes today after 5pm central:
SELECT * FROM posts_by_creation WHERE creation_year = 2016 AND creation_month = 4 AND creation_day = 2 AND creation >= minTimeuuid('2016-04-02 17:00-0600');
minTimeuuid() is another CQL function; it creates the smallest possible timeUUID for the given time, which guarantees that you get all of the changes from that time onward.
Depending on the time spans you may need to query a few different partition keys, but it shouldn't be that hard to implement. Also you would want to change your creation column to a timeuuid for your other table.
Question 2:
You'll have to create another table or use materialized views to support this new query pattern, just like you thought.
Lastly, if you're not on Cassandra 3.x+ or don't want to use materialized views, you can use atomic batches to ensure data consistency across your several denormalized tables (that's what they were designed for). So in your case it would be a BATCH statement with 3 inserts of the same data into 3 different tables that support your query patterns.
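A minimal sketch of such a batch, assuming posts_by_creation from above plus a hypothetical posts_by_content table keyed for the content ordering:

BEGIN BATCH
  INSERT INTO posts (username, creation, content)
    VALUES ('luke', '2016-04-02 17:00:00', 'hello world');
  INSERT INTO posts_by_creation (creation_year, creation_month, creation_day, creation, username, content)
    VALUES (2016, 4, 2, now(), 'luke', 'hello world');
  -- hypothetical third table with PRIMARY KEY ((username), content, creation):
  INSERT INTO posts_by_content (username, content, creation)
    VALUES ('luke', 'hello world', '2016-04-02 17:00:00');
APPLY BATCH;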
The solution is to create additional tables to support your queries.
For SELECT * FROM posts ORDER BY creation; you may need a special column for grouping, perhaps by month and year, e.g. PRIMARY KEY ((year, month), timestamp). This way Cassandra will have better read performance, because it doesn't need to scan the whole cluster to get all the data, and it also saves data transfer between nodes.
The same goes for SELECT * FROM posts WHERE username='luke' ORDER BY content;: you must create another table for this query too. All columns may be the same as in your first table, but with a different primary key, because you cannot order by a column that is not a clustering column.
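Sketches of what those two tables could look like (the table names are mine):

CREATE TABLE posts_by_month (
  year int,
  month int,
  creation timestamp,
  username varchar,
  content varchar,
  PRIMARY KEY ((year, month), creation, username)
);

CREATE TABLE posts_by_content (
  username varchar,
  content varchar,
  creation timestamp,
  PRIMARY KEY ((username), content, creation)
);

-- content-ordered reads then become:
SELECT * FROM posts_by_content WHERE username = 'luke';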

How Hive Partition works

I want to know how Hive partitioning works. I know the concept, but I am trying to understand how it actually works and stores the data in the exact partition.
Let's say I have a table and I have created a dynamic partition on year, and ingested data from 2013. How does Hive create the partition and store the data in the exact partition?
If the table is not partitioned, all the data is stored in one directory without order. If the table is partitioned (e.g. by year), the data is stored separately in different directories; each directory corresponds to one year.
For a non-partitioned table, when you want to fetch the data for year=2010, Hive has to scan the whole table to find the 2010 records. If the table is partitioned, Hive just goes to the year=2010 directory. Much faster and more I/O efficient.
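For example, assuming the default warehouse location, the year-partitioned table is laid out roughly like this on HDFS:

/user/hive/warehouse/mytable/year=2010/000000_0
/user/hive/warehouse/mytable/year=2013/000000_0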
Hive organizes tables into partitions. It is a way of dividing a table into related parts based on the values of partitioned columns such as date.
Partitions - apart from being storage units - also allow the user to efficiently identify the rows that satisfy certain criteria.
Using partition, it is easy to query a portion of the data.
Tables or partitions can be sub-divided into buckets, to provide extra structure to the data that may be used for more efficient querying. Bucketing works based on the value of a hash function of some column of the table.
Suppose you need to retrieve the details of all employees who joined in 2012. Without partitioning, a query searches the whole table for the required information. However, if you partition the employee data by year and store it in separate files, the query processing time is reduced.
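A minimal sketch of that employee example, with bucketing added on top (the column names and bucket count are assumptions):

CREATE TABLE employees (
  emp_id INT,
  name STRING,
  join_date DATE
)
PARTITIONED BY (join_year INT)        -- one directory per year
CLUSTERED BY (emp_id) INTO 32 BUCKETS -- hash(emp_id) % 32 picks the bucket file
STORED AS ORC;

-- Only the join_year=2012 directory is scanned:
SELECT * FROM employees WHERE join_year = 2012;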

Partitioning method that can help to avoid having to specify the same information or column in Hive Partitioned Query?

I have daily transactions with up to 5-10 GB of data per day. In my view it makes more sense to partition by month.
Here is an example:
My Table has the following columns:
TRANSACTION_DATE TIMESTAMP -- transaction date
TRANSACTION_AMOUNT INTEGER -- transaction amount
DWH_PARTITION STRING -- technical field that goes into the PARTITIONED BY section
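So the table DDL presumably looks roughly like this (a sketch, reconstructed from the columns above):

CREATE TABLE test (
  transaction_date TIMESTAMP,
  transaction_amount INT
)
PARTITIONED BY (dwh_partition STRING);  -- e.g. '2015-01'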
Now I want to query for the amount of transactions between January 15th 2015 and November 15th 2015.
My query would be
select sum(TRANSACTION_AMOUNT) from TEST where TRANSACTION_DATE >= CAST('2015-01-15' as timestamp) AND TRANSACTION_DATE < CAST('2015-11-15' as timestamp)
This query returns correct data, but it does a full table scan, while I would like it to use just the partitions 2015-01, 2015-02, ..., 2015-11.
To do so I need to manually specify which partitions to use, so the query would be as follows:
select sum(TRANSACTION_AMOUNT) from TEST where TRANSACTION_DATE >= CAST('2015-01-15' as timestamp) AND TRANSACTION_DATE < CAST('2015-11-15' as timestamp) and DWH_PARTITION in ('2015-01',.........'2015-11');
Because we cannot partition by timestamp, a business analyst would have to know the exact partitioning pattern (whether a given table is partitioned by month, day, etc.).
Please also note that the date information needs to be specified twice: once for the transaction date and again for the partitions.
Do you know of a partitioning method that avoids specifying the same information twice and releases the user from having to know the partitioning patterns of all the tables they need to query?
This could only be achieved with range partitioning, which is currently not supported. A UDF might help, but I'm not 100% sure.
We have solved that problem by providing a simple web interface where the user can choose the table and filter columns; under the covers the application is intelligent enough to generate the query leveraging partition pruning.

How do I store a Cassandra table solely in descending date order?

I have a table that stores millions of url, date and name entries. Each row is unique in terms of either:
url + date
or
date + name.
I require this table to be stored in descending date order so that when I query it I can simply "SELECT * FROM mytable LIMIT 1000" to get the most recent 1000 records, with no sorting involved. Does anyone know how to set things up to do this? To the best of my current understanding I am trying the following, but it does not store the rows in date order:
CREATE TABLE mytable (
url text,
date timestamp,
name text,
PRIMARY KEY ((url, name), date)
)
WITH CLUSTERING ORDER BY (date DESC);
To store the data globally according to an order, you'd need to change the partitioner to the byte-ordered one. This is no longer a good idea... it's maintained for backward compatibility, but there are issues:
http://www.datastax.com/documentation/cassandra/2.1/cassandra/architecture/architecturePartitionerBOP_c.html
You could also apply bucketing and query over your buckets, each bucket being a partition; within each partition the data would be stored in order. Not exactly what you want, but worth trying.
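A sketch of that approach with a day bucket (the bucket granularity and names are my assumptions):

CREATE TABLE mytable_by_day (
  day text,        -- bucket, e.g. '2017-04-06'
  date timestamp,
  url text,
  name text,
  PRIMARY KEY ((day), date, url, name)
) WITH CLUSTERING ORDER BY (date DESC, url ASC, name ASC);

-- Most recent records within one bucket, already in descending date order:
SELECT * FROM mytable_by_day WHERE day = '2017-04-06' LIMIT 1000;

To get the most recent 1000 overall, you would query today's bucket first and step back to older buckets until you have enough rows.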
