Does order of partitioning columns matter in Hive? - hadoop

Let's say I have a partitioned table with multiple columns as partition keys, e.g.
partitioned by (department string, year int, month int, day int)
So does this specific order really matter? All the online resources refer to the advantage of scanning only specific sub-directories for a search. But ultimately everything is a file in big data; directories seem to be more of a logical grouping. And when one specifies a filter on a partitioned column, Hive just needs to know which files are involved and where they are located. I'm not sure how a directory is going to be useful -- it's not as if directories are loaded into memory; files are loaded into memory, and the directory path is more like a label for a given file. If that's the case, then no matter which order we specify for partitioning, it shouldn't matter. This is especially evident in HDInsight, where the underlying file system (blobs) has no concept of a directory.

Although you're right about directories being logical constructs, once you consider the amount of metadata your HiveServer2 has to fetch and sift through in order to execute an average query, the order does matter. If a query contains ...WHERE department='IT'..., and the partitions are laid out as you show, then given 100 departments in total, the partition pruning mechanism can eliminate 99 subdirectories from the tree right away. But if the order of the partition columns is reversed, the same query will need to retrieve metadata for (30 days x 12 months x N years) partitions from the Hive metastore, just to figure out whether the partition /department=IT actually exists in any of them. So the order of partition columns can be decided by analyzing the predominant query patterns.
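As an illustration (these table names are hypothetical, not from the question), compare how the same filter interacts with the two layouts:

-- layout A: department is the leading partition column
--   .../logs_by_dept/department=IT/year=2015/month=6/day=8/...
CREATE TABLE logs_by_dept (msg STRING)
PARTITIONED BY (department STRING, year INT, month INT, day INT);

-- layout B: the date columns lead and department comes last
--   .../logs_by_date/year=2015/month=6/day=8/department=IT/...
CREATE TABLE logs_by_date (msg STRING)
PARTITIONED BY (year INT, month INT, day INT, department STRING);

-- With layout A this predicate prunes to a single department subtree;
-- with layout B the metastore must be consulted for every year/month/day
-- combination just to locate the department=IT leaves.
SELECT count(*) FROM logs_by_dept WHERE department = 'IT';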
Another common factor to consider is devops/maintenance, especially if data is loaded into the table incrementally. If you need to back off and recover from an unsuccessful load, will you need to drop a partition (day=08) under each department subtree individually, or can all departments' data be cleared at once by dropping partition (day=08)?
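For instance, a recovery step for the department-first layout might look like this (day=8 here is just an illustrative value); whether one statement is enough depends on the layout and on your Hive version's support for partial partition specs:

-- drop one department's data for the failed load day
ALTER TABLE logs_by_dept DROP IF EXISTS
  PARTITION (department='IT', year=2015, month=6, day=8);

-- recent Hive versions also accept a partial partition spec, which drops
-- every matching partition across departments in one statement
ALTER TABLE logs_by_dept DROP IF EXISTS PARTITION (day=8);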

Related

How to bucket a Hive table with ORC for a complex query?

Maybe this question is too generic but I think it is worth a try.
I am working with a table that has 270 fields. It is partitioned by date (like dt=20180101). However, when we hit this table with queries we essentially do a whole-table scan, because the fields we use in the where clause are not dt. I was wondering what the right approach is to enable bucketing for this table. I could pick one of the where-clause fields and enable bucketing on it. For example:
PARTITIONED BY (
dt INT
)
CLUSTERED BY (
class
)
INTO 16 BUCKETS
Another approach is to use more than 1 field for bucketing:
PARTITIONED BY (
dt INT
)
CLUSTERED BY (
class, other_field, other_field_2
)
INTO 128 BUCKETS
Is it worth it to bucket by multiple fields? I guess it will only speed up queries where the exact same fields are present in the select.
Another question: is it worth it to at least sort by multiple fields, so that when the file is read it is a sequential read? Like this:
PARTITIONED BY (
dt INT
)
CLUSTERED BY (
class
)
SORTED BY (
other_field, other_field_2
)
INTO 16 BUCKETS
First, if you don't usually query on date and your queries span many dates, then you might want to change your partitioning strategy.
It doesn't have to be that you always query for only one or a few dates, but if your queries are usually not related to 'date' filtering at all, then you should change it!
Second, bucketing basically splits your data based on the hash of your bucketing columns. So it helps you split your data into roughly equally sized files in the file system, and helps the MapReduce jobs running over it process the data efficiently. But bucketing into a large number of buckets can also have negative effects, since all of that metadata is stored in the Hive metastore: the metadata is read first when you execute a query, and based on the result of that metadata lookup, the actual data (or the relevant part of it) is read from the file system.
So in practice there is no specific rule for bucketing, as to how many buckets there should be or which columns you should bucket on.
So you should look into your queries and plan accordingly!
Third, sorting does help at query time, as it's easy for the engine to push down filtering and sorting criteria. But when you enable sorting on a table, ingestion of data becomes a little slower than when sorting isn't enabled. In a query-heavy system, though, it is bound to bring you good benefits.
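Putting the three pieces together, a hedged sketch of what such a table could look like (the bucket count and column names are placeholders taken from the question, not a recommendation):

CREATE TABLE events (
  class         STRING,
  other_field   STRING,
  other_field_2 STRING
  -- ... remaining columns of the 270-column table ...
)
PARTITIONED BY (dt INT)
CLUSTERED BY (class)
SORTED BY (other_field, other_field_2)
INTO 16 BUCKETS
STORED AS ORC;
-- bucketing on class spreads rows by hash(class) % 16, and sorting
-- within each bucket helps ORC predicate pushdown at read time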
So all in all, these are three optimization techniques and there are no hard rules for their application. It purely depends on your use case!
Hope this helps!!

Determining Bucketing Configuration on Hive Table

I was curious if someone could provide a little more clarification on how to configure the bucketing property on a Hive table. I see that it helps with joins, and I believe I read that it's good to put it on a column that you will use to join. That could be wrong. I am also curious about how to determine the number of buckets to choose.
If anyone could give a brief explanation and some documentation on how to determine all of these things that would be great.
Thanks in advance for the assistance.
Craig
If you want to implement bucketing in your table, first you should set the property
set hive.enforce.bucketing=true;
which enforces the bucketing.
Cardinality: the number of possible values for a column.
If you are implementing bucketing using the Cluster By clause, your bucketing column should have high cardinality; then you will get better performance.
If you are implementing partitioning using the Partitioned By clause, your partitioned column should have low cardinality; then you will get better performance.
Depending on the use case you can choose the number of buckets. It's good to choose the number of buckets so that each bucket file ends up around (or above) your HDFS block size, and the number should be a power of 2.
Bucketing always creates files, not directories.
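A minimal sketch of that flow, assuming a hypothetical staging table raw_orders and a high-cardinality user_id column:

set hive.enforce.bucketing=true;

CREATE TABLE orders_bucketed (
  order_id BIGINT,
  user_id  BIGINT,
  amount   DOUBLE
)
CLUSTERED BY (user_id) INTO 32 BUCKETS
STORED AS ORC;

-- with enforcement on, Hive runs 32 reducers and writes one file per bucket
INSERT OVERWRITE TABLE orders_bucketed
SELECT order_id, user_id, amount FROM raw_orders;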
The following are a few suggestions to consider while designing buckets.
Buckets are generally created on the most critical columns, either a single column or a set of columns, so these should be the primary columns used in your join conditions. The idea behind bucketing is to hash this set of columns and store the data in a way that is quickly accessible from HDFS, so retrieval is fast. It's advised not to bucket on all the join columns, only the critical ones that we think will improve performance.
The number of buckets should be a power of 2. The number of buckets determines the number of reducers to be run, and that determines the final number of files in which the data is stored. So the number of buckets has to be chosen keeping in mind the size of the data being handled, thereby avoiding a large number of small files (or a few very big files) in HDFS, and thus improving Hive query retrieval speed and optimization.

Partitioning or bucketing hive table based on only month/year to optimize queries

I'm building a table that contains about 400k rows of a messaging app's data.
The current table's columns look something like this:
message_id (int)| sender_userid (int)| other_col (string)| other_col2 (int)| create_dt (timestamp)
A lot of queries I would be running in the future will rely on a where clause involving the create_dt column. Since I expect this table to grow, I would like to try and optimize it right now. I'm aware that partitioning is one way, but when I partition it based on create_dt the result is too many partitions since I have every single date spanning back to Nov 2013.
Is there a way to instead partition by a range of dates? How about partition for every 3 months? or even every month? If this is possible - Could I possibly have too many partitions in the future making it inefficient? What are some other possible partition methods?
I've also read about bucketing, but as far as I'm aware that's only useful if you would be doing joins on a column that the bucket is based on. I would most likely be doing joins only on column sender_userid (int).
Thanks!
I think this might be a case of premature optimization. I'm not sure what your definition of "too many partitions" is, but we have a similar use case. Our tables are partitioned by date and customer column. We have data that spans back to Mar 2013. This created approximately 160k+ partitions. We also use a filter on date and we haven't seen any performance problems with this schema.
On a side note, Hive is getting better at scaling up to 100s of thousands of partitions and tables.
On another side note, I'm curious as to why you're using Hive in the first place for this. 400k rows is a tiny amount of data and is not really suited for Hive.
Check out Hive's built-in UDFs. With the right combination of them you can achieve what you want. Here's an example for partitioning on every month (it produces a "YEAR-MONTH" string that you can use as the partition column value):
select concat(cast(year(to_date(create_dt)) as string),'-',cast(month(to_date(create_dt)) as string))
But when partitioning on dates it is usually useful to have multiple levels of the date dimension, so in this case you should have two partition columns, the first for year and the second for month:
select year(to_date(create_dt)),month(to_date(create_dt))
Keep in mind that timestamps and dates are strings, while functions like month() or year() return integers for date fields. You can use simple mathematical operations to figure out the right partition.
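A hedged sketch of how this could be wired up with dynamic partitions (the target table follows the question's columns; the staging table messages_raw is hypothetical):

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE messages (
  message_id    INT,
  sender_userid INT,
  other_col     STRING,
  other_col2    INT,
  create_dt     TIMESTAMP
)
PARTITIONED BY (year INT, month INT);

-- the partition values are computed from create_dt and must come last
INSERT OVERWRITE TABLE messages PARTITION (year, month)
SELECT message_id, sender_userid, other_col, other_col2, create_dt,
       year(to_date(create_dt)),
       month(to_date(create_dt))
FROM messages_raw;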

Skewed tables in Hive

I am learning Hive and came across skewed tables. Help me understand them.
What are skewed tables in Hive?
How do we create skewed tables?
How does it affect performance?
What are skewed tables in Hive?
A skewed table is a special type of table where the values that appear very often (heavy skew) are split out into separate files and the rest of the values go to some other file.
How do we create skewed tables?
create table <T> (schema) skewed by (keys) on ('value1', 'value2') [STORED as DIRECTORIES];
Example :
create table T (c1 string, c2 string) skewed by (c1) on ('x1')
How does it affect performance?
By specifying the skewed values, Hive will split those out into separate files automatically and take this fact into account during queries, so that it can skip (or include) whole files if possible, thus enhancing performance.
EDIT :
x1 is actually the value on which column c1 is skewed. You can have multiple such values for multiple columns. For example,
create table T (c1 string, c2 string) skewed by (c1) on ('x1', 'x2', 'x3')
The advantage of such a setup is that values which appear more frequently than others get split out into separate files (or separate directories, if we are using the STORED AS DIRECTORIES clause), and this information is used by the execution engine during query execution to make processing more efficient.
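As a rough sketch of what this looks like on disk with STORED AS DIRECTORIES (the paths are illustrative, not exact Hive naming):

CREATE TABLE T (c1 STRING, c2 STRING)
SKEWED BY (c1) ON ('x1', 'x2')
STORED AS DIRECTORIES;

-- rows with the skewed values land in their own subdirectories, e.g.
--   .../t/c1=x1/  and  .../t/c1=x2/
-- while all remaining values share a default directory, so a query like
-- the one below only needs to read the matching subdirectory:
SELECT count(*) FROM T WHERE c1 = 'x1';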
In skewed tables, a partition will be created for the column value that has many records, and the rest of the data will be moved to another partition. Hence the number of partitions, number of mappers, and number of intermediate files will be reduced.
For example: out of 100 patients, 90 patients have high BP and the other 10 patients have fever, cold, cancer, etc. So one partition will be created for the 90 patients and one partition will be created for the other 10 patients.
I hope this will answer your question.

Index needed for max(col)?

I'm currently doing some data loading for a kind of warehouse solution. I get a data export from production each night, which must then be loaded. There are no other updates on the warehouse tables. To load only new items for a certain table I'm currently doing the following steps:
get the current max value y for a specific column (id for journal tables and time for event tables)
load the data via a query like where x > y
To avoid performance issues (I load around 1 million rows per day) I removed most indices from the tables (they are only needed in production, not in the warehouse). But that way retrieving the max value takes some time... so my question is:
What is the best way to get the current max value for a column without an index on that column? I just read about using the stats, but I don't know how to handle columns of type 'timestamp with time zone'. Disabling the index before the load and recreating it afterwards takes much too long...
The minimum and maximum values that are computed as part of column-level statistics are estimates. The optimizer only needs them to be reasonably close, not completely accurate. I certainly wouldn't trust them as part of a load process.
Loading a million rows per day isn't terribly much. Do you have an extremely small load window? I'm a bit hard-pressed to believe that you can't afford the cost of indexing the row(s) you need to do a min/ max index scan.
If you want to avoid indexes, however, you probably want to store the last max value in a separate table that you maintain as part of the load process. After you load rows 1-1000 in table A, you'd update the row in this summary table for table A to indicate that the last row you've processed is row 1000. The next time in, you would read the value from the summary table and start at 1001.
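A minimal sketch of that bookkeeping approach (Oracle-flavored; all table and column names are hypothetical):

-- one row per source table, tracking the high-water mark of the last load
CREATE TABLE load_control (
  table_name  VARCHAR2(128) PRIMARY KEY,
  last_loaded NUMBER
);

-- during each load: read the previous high-water mark ...
SELECT last_loaded FROM load_control WHERE table_name = 'TABLE_A';

-- ... load only the newer rows ...
INSERT INTO warehouse_a
SELECT * FROM staging_a WHERE id > :last_loaded;

-- ... and record the new high-water mark as part of the same load
UPDATE load_control
   SET last_loaded = (SELECT MAX(id) FROM staging_a)
 WHERE table_name = 'TABLE_A';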
If there is no index on the column, the only way for the DBMS to find the maximum value in the column is a complete table scan, which takes a long time for large tables.
I suppose a DBMS could try to keep track of the minimum and maximum values in the column (storing the values in the system catalog) as it does inserts, updates and deletes - but deletes are why no DBMS I know of tries to keep statistics up to date with per-row operations. If you delete the maximum value, finding the new maximum requires a table scan if the column is not indexed (and if it is indexed, the index makes it trivial to find the maximum value, so the information does not have to be stored in the system catalog). This is why they're called 'statistics'; they're an approximation to the values that apply. But when you request 'SELECT MAX(somecol) FROM sometable', you aren't asking for statistical maximum; you're asking for the actual current maximum.
Have the process that creates the extract file also extract a single-row file with the min/max you want. I assume that piece is scripted on some cron or scheduler, so it shouldn't be too much to ask to add min/max calcs to that script ;)
If not, just do a full scan. A million rows isn't much really, especially in a data warehouse environment.
This code was written for Oracle; rownum is Oracle-specific, but most databases offer an equivalent way to grab just the top row:
This gets the key of the max(high_val) in the table according to the range.
select high_val, my_key
from (select high_val, my_key
from mytable
where something = 'avalue'
order by high_val desc)
where rownum <= 1
What this says is: sort mytable by high_val descending for rows where something = 'avalue', and grab only the top row, which gives you the max(high_val) in the selected range and the corresponding my_key from that table.
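For databases where rownum isn't available, a hedged equivalent using the standard FETCH FIRST clause (same hypothetical table and columns as above):

-- same idea: order descending and keep only the top row
select high_val, my_key
from mytable
where something = 'avalue'
order by high_val desc
fetch first 1 row only;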
