This is how I am dynamically partitioning the data, but only a single file is created per partition, and I want each file to be about 256 MB in size.
Since my data is heavily skewed, some of the partition files are currently 2-3 GB in size. I have tried many settings, but none of them worked.
I am using MR mode.
Can someone please let me know what could be done here to create multiple files per partition, where each file is about 256 MB in size?
SET hive.execution.engine=mr;
SET hive.exec.scratchdir=/tmp/hive/ ;
SET hive.exec.stagingdir=/tmp/hive/;
set hive.msck.path.validation=ignore;
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
set hive.exec.reducers.bytes.per.reducer=250000000;
set mapreduce.reduce.shuffle.memory.limit.percent=0.15;
insert overwrite table madlytics.events_eors20_hive_dynamic_part partition(h='01', event_type)
select *, h, event_type
from madlytics.events_eors_2017_09_20
where h = '01'
distribute by event_id
sort by event_type;
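For reference, one commonly used workaround (a sketch, not from the original post; the bucket count is an assumption and the select list simply mirrors the query above) is to spread each skewed partition across several reducers by also distributing on a synthetic random bucket, so that every reducer writes its own file for the partition:

set hive.exec.reducers.bytes.per.reducer=268435456;  -- aim for roughly 256 MB per reducer
insert overwrite table madlytics.events_eors20_hive_dynamic_part partition(h='01', event_type)
select *, h, event_type
from madlytics.events_eors_2017_09_20
where h = '01'
-- assumption: ~10 buckets splits a 2-3 GB partition into roughly 256 MB files; tune per partition size
distribute by event_type, floor(rand() * 10)
sort by event_type;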
Related
I am working on a big dataset of around 3 TB on Hortonworks 2.6.5; the layout of the dataset is pretty straightforward.
The hierarchy of the data is as follows -
-Country
-Warehouse
-Product
-Product Type
-Product Serial Id
We have transaction data in the above hierarchy for 30 countries; each country has more than 200 warehouses, and a single country, USA, contributes around 75% of the entire data set.
Problem:
1) We have transaction data with a transaction date column (trans_dt) for each warehouse, and I need to sort by trans_dt in ascending order within each warehouse using Hive (version 1.1.2) on MapReduce. I have created a partition at the Country level and then applied DISTRIBUTE BY Warehouse SORT BY trans_dt ASC (a sketch of the query shape appears after this list). The sort takes around 8 hours to finish, and the last 6 hours are spent at the reducer's 99% stage, where I see a lot of shuffling.
2) We do a lot of GROUP BY on the combination Country, Warehouse, Product, Product Type, Product Serial Id; any suggestion to optimize this operation would be very helpful.
3) How can we handle the skewed dataset for the USA?
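For context, a minimal sketch of the sort job described in 1); the table and column names are assumptions, since the post does not give them:

INSERT OVERWRITE TABLE trans_sorted PARTITION (country='USA')
SELECT warehouse, product, product_type, product_serial_id, trans_dt
FROM transactions
WHERE country = 'USA'
DISTRIBUTE BY warehouse   -- all rows of one warehouse go to a single reducer
SORT BY trans_dt ASC;     -- sorted by transaction date within that reducer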
We are using the Hive properties below.
SET hive.exec.compress.intermediate=true;
SET hive.intermediate.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET hive.intermediate.compression.type=BLOCK;
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapreduce.output.fileoutputformat.compress.type=BLOCK;
SET hive.auto.convert.join=true;
SET hive.auto.convert.join.noconditionaltask=true;
SET hive.auto.convert.join.noconditionaltask.size=10000000;
SET hive.groupby.skewindata=true;
SET hive.optimize.skewjoin.compiletime=true;
SET hive.optimize.skewjoin=true;
SET hive.optimize.bucketmapjoin=true;
SET hive.exec.parallel=true;
SET hive.cbo.enable=true;
SET hive.stats.autogather=true;
SET hive.compute.query.using.stats=true;
SET hive.stats.fetch.column.stats=true;
SET hive.stats.fetch.partition.stats=true;
SET hive.vectorized.execution.enabled=true;
SET hive.vectorized.execution.reduce.enabled=true;
SET hive.optimize.index.filter=true;
SET hive.optimize.ppd=true;
SET hive.mapjoin.smalltable.filesize=25000000;
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions.pernode=1000;
SET mapreduce.reduce.memory.mb=10240;
SET mapreduce.reduce.java.opts=-Xmx9216m;
SET mapreduce.map.memory.mb=10240;
SET mapreduce.map.java.opts=-Xmx9216m;
SET mapreduce.task.io.sort.mb=1536;
SET hive.optimize.groupby=true;
SET hive.groupby.orderby.position.alias=true;
SET hive.multigroupby.singlereducer=true;
SET hive.merge.mapfiles=true;
SET hive.merge.smallfiles.avgsize=128000000;
SET hive.merge.size.per.task=268435456;
SET hive.map.aggr=true;
SET hive.optimize.distinct.rewrite=true;
SET mapreduce.map.speculative=false;
set hive.fetch.task.conversion = more;
set hive.fetch.task.aggr=true;
set hive.fetch.task.conversion.threshold=1024000000;
For US and non-US, use the same query but process them independently:
Select * from Table where Country = 'US'
UNION ALL
Select * from Table where Country <> 'US'
OR
You can process them using a script that fires the query for one country at a time, reducing the volume of data that needs to be processed in a single run:
INSERT INTO TABLE <AggregateTable>
SELECT * FROM <SourceTable>
WHERE Country in ('${hiveconf:ProcessCountry}')
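A possible driver for that approach (the script name and country list below are illustrative assumptions) passes each country in through --hiveconf:

# assuming the INSERT above is saved in process_country.hql
for c in US DE FR; do
  hive --hiveconf ProcessCountry=$c -f process_country.hql
done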
I am having a Hive query like the one below:
select a.x as column from table1 a where a.y in (<long comma-separated list of parameters>)
union all
select b.x as column from table2 b where b.y in (<long comma-separated list of parameters>)
I have set hive.exec.parallel to true, which helps me achieve parallelism between the two queries in the UNION ALL.
But my IN clause has many comma-separated values, and each value is handled one after another, so this part effectively executes sequentially.
Is there any Hive parameter which, if enabled, can help me fetch data in parallel for the parameters in the IN clause?
Currently, my workaround is to fire the SELECT query with = multiple times instead of one IN clause.
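For clarity, a sketch of that workaround (the column and parameter names are placeholders):

select a.x as col from table1 a where a.y = 'param1'
union all
select a.x as col from table1 a where a.y = 'param2';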
There is no need to read the same data many times in separate queries to achieve better parallelism. Instead, tune proper mapper and reducer parallelism for a single query.
First of all, enable PPD and vectorization, and use CBO and Tez:
SET hive.optimize.ppd=true;
SET hive.optimize.ppd.storage=true;
SET hive.vectorized.execution.enabled=true;
SET hive.vectorized.execution.reduce.enabled = true;
SET hive.cbo.enable=true;
set hive.stats.autogather=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.partition.stats=true;
set hive.execution.engine=tez;
SET hive.stats.fetch.column.stats=true;
SET hive.tez.auto.reducer.parallelism=true;
Example settings for Mappers on Tez:
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
set tez.grouping.max-size=32000000;
set tez.grouping.min-size=32000;
Example settings for Mappers if you decide to run on MR instead of Tez:
set mapreduce.input.fileinputformat.split.minsize=32000;
set mapreduce.input.fileinputformat.split.maxsize=32000000;
--example settings for reducers:
set hive.exec.reducers.bytes.per.reducer=32000000; --decrease this to increase the number of reducers, increase to reduce parallelism
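As a rough illustration (the numbers are assumed, not from the original answer): with hive.exec.reducers.bytes.per.reducer=32000000 (~32 MB), a reduce stage consuming about 3.2 GB is planned with roughly 100 reducers, subject to the hive.exec.reducers.max cap.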
Play with these settings. The success criterion is getting more mappers/reducers and having your map and reduce stages run faster.
Read this article for better understanding of how to tune Tez: https://community.hortonworks.com/articles/14309/demystify-tez-tuning-step-by-step.html
I am currently trying to associate each lat/long ping from a device with its ZIP code.
I have de-normalized the lat/long device ping data and created a cross-product/Cartesian join table in which each row has the ST_Point(long,lat), the geometry shape of the ZIP, and the ZIP code associated with that geometry. For testing purposes I have around 45 million rows in the table, and in production it will grow to about 1 billion per day.
Even though the data is flattened and there are no join conditions, the query takes about 2 hours to complete. Is there any faster way to compute spatial queries, or how can I optimize the following query?
Below are some of the optimization steps I have already performed. With these optimizations, all the other operations finish in at most 5 minutes except for this one step. I am using an AWS cluster with 2 master nodes and 5 data nodes.
set hive.vectorized.execution.enabled = true;
set hive.execution.engine=tez;
set hive.enforce.sorting=true;
set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
analyze table tele_us_zipmatch compute statistics for columns;
CREATE TABLE zipcheck (
  `long4` double,
  `lat4` double,
  state_name string,
  country_code string,
  country_name string,
  region string,
  zip int,
  countyname string)
PARTITIONED BY (state_id string)
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "SNAPPY",
  'orc.create.index'='true',
  'orc.bloom.filter.columns'='');
INSERT OVERWRITE TABLE zipcheck PARTITION(state_id)
select long4, lat4, state_name, country_code, country_name, region, zip, countyname, state_id from tele_us_zipmatch
where ST_Contains(wkt_shape,zip_point)=TRUE;
ST_Contains is the function from esri (ref: https://github.com/Esri/spatial-framework-for-hadoop/wiki/UDF-Documentation#relationship-tests ).
Any help is greatly appreciated.
Thanks.
If the ZIP-code dataset can fit into memory, try a custom MapReduce application that builds a just-in-time in-memory quadtree index over the ZIP-code data, by adapting the sample in GIS-Tools-for-Hadoop.
I have written an INSERT OVERWRITE PARTITION statement in Hive to merge all the files in a partition into a bigger file.
SQL:
SET hive.exec.compress.output=true;
set hive.merge.smallfiles.avgsize=2560000000;
set hive.merge.mapredfiles=true;
set hive.merge.mapfiles =true;
SET mapreduce.max.split.size=256000000;
SET mapreduce.min.split.size=256000000;
SET mapreduce.output.fileoutputformat.compress.type =BLOCK;
SET hive.hadoop.supports.splittable.combineinputformat=true;
SET mapreduce.output.fileoutputformat.compress.codec=${v_compression_codec};
INSERT OVERWRITE TABLE ${source_database}.${table_name} PARTITION (${line})
SELECT ${prepare_sel_columns}
FROM ${source_database}.${table_name}
WHERE ${partition_where_clause};
With the above settings I get the compressed output, but generating the output file takes too long.
Even though it runs map-only jobs, it takes a lot of time.
I am looking for any further settings on the Hive side to tune the INSERT to run faster.
Metrics:
15 GB of files ==> takes 10 min.
SET hive.exec.compress.output=true;
SET mapreduce.input.fileinputformat.split.minsize=512000000;
SET mapreduce.input.fileinputformat.split.maxsize=5120000000;
SET mapreduce.output.fileoutputformat.compress.type =BLOCK;
SET hive.hadoop.supports.splittable.combineinputformat=true;
SET mapreduce.output.fileoutputformat.compress.codec=${v_compression_codec};
The above settings helped a lot; the duration came down from 10 min to 1 min.
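As a rough back-of-the-envelope check (assumed numbers, not measured): with mapreduce.input.fileinputformat.split.minsize around 512 MB and splittable combine input enabled, the 15 GB input collapses into at most a few dozen combined splits instead of one split per small file, which is consistent with the drop from 10 minutes to 1 minute.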
EDIT: added more file size details, and some other session information.
I have a seemingly straightforward Hive JOIN query that surprisingly requires several hours to run.
SELECT a.value1, a.value2, b.value
FROM a
JOIN b ON a.key = b.key
WHERE a.keyPart BETWEEN b.startKeyPart AND b.endKeyPart;
I'm trying to determine if the execution time is normal for my dataset and AWS hardware selection, or if I am simply trying to JOIN too much data.
Table A: ~2.2 million rows, 12MB compressed, 81MB raw, 4 files.
Table B: ~245 thousand rows, 6.7MB compressed, 14MB raw, one file.
AWS: emr-4.3.0, running on about 5 m3.2xlarge EC2 instances.
Records from A always match one or more records in B, so logically at most about 500 billion rows (~2.2 million × ~245 thousand) are generated before they are pruned by the WHERE clause.
4 mappers are allocated for the job, which completes in 6 hours. Is this normal for this type of query and configuration? If not, what should I do to improve it?
I've partitioned B on the JOIN key, which yields 5 partitions, but haven't noticed a significant improvement.
Also, the logs show that the Hive optimizer starts a local map join task, presumably to cache or stream the smaller table:
2016-02-07 02:14:13 Starting to launch local task to process map join; maximum memory = 932184064
2016-02-07 02:14:16 Dump the side-table for tag: 1 with group count: 5 into file: file:/mnt/var/lib/hive/tmp/local-hadoop/hive_2016-02-07_02-14-08_435_7052168836302267808-1/-local-10003/HashTable-Stage-4/MapJoin-mapfile01--.hashtable
2016-02-07 02:14:17 Uploaded 1 File to: file:/mnt/var/lib/hive/tmp/local-hadoop/hive_2016-02-07_02-14-08_435_7052168836302267808-1/-local-10003/HashTable-Stage-4/MapJoin-mapfile01--.hashtable (12059634 bytes)
2016-02-07 02:14:17 End of local task; Time Taken: 3.71 sec.
What is causing this job to run slowly? The data set doesn't appear too large, and the small table is well under the 25 MB "small table" limit above which the MAPJOIN optimization is disabled.
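(That limit corresponds to the hive.mapjoin.smalltable.filesize setting, whose default is 25000000 bytes; the hash table dumped in the log above is about 12 MB, so the side table does qualify for a map join.)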
A dump of the EXPLAIN output is copied on PasteBin for reference.
My session enables compression for output and intermediate storage. Could this be the culprit?
SET hive.exec.compress.output=true;
SET hive.exec.compress.intermediate=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
SET io.seqfile.compression.type=BLOCK;
My solution to this problem is to express the JOIN predicate entirely within the JOIN ON clause, as this is the most efficient way to execute a JOIN in Hive. As for why the original query was slow, I believe the mappers simply need time to scan the intermediate data set row by row, 100+ billion times.
Because Hive only supports equality expressions in the JOIN ON clause and rejects function calls that use both table aliases as parameters, there is no way to rewrite the original query's BETWEEN clause as an algebraic expression there. For example, the following expression is illegal:
-- Only handles exclusive BETWEEN
JOIN b ON a.key = b.key
AND sign(a.keyPart - b.startKeyPart) = 1.0 -- keyPart > startKeyPart
AND sign(a.keyPart - b.endKeyPart) = -1.0 -- keyPart < endKeyPart
I ultimately modified my source data to include every value between startKeyPart and endKeyPart in a Hive ARRAY<BIGINT> data type.
CREATE TABLE LookupTable (
  key BIGINT,
  startKeyPart BIGINT,
  endKeyPart BIGINT,
  keyParts ARRAY<BIGINT>
);
Alternatively, I could have generated this value inline within my queries using a custom Java method; the LongStream.rangeClosed() method is only available in Java 8, which is not part of Hive 1.0.0 in AWS emr-4.3.0.
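(For reference, and not what was done in this answer: a Hive-only alternative would be to expand the range at query time with the split/space/posexplode idiom, assuming the ranges are small enough to explode; the column names reuse those above, with value being the same column used in the rewritten query below.)

SELECT key, startKeyPart + pos AS keyPart, value
FROM LookupTable
LATERAL VIEW posexplode(split(space(cast(endKeyPart - startKeyPart AS int)), ' ')) r AS pos, x;
-- space(n) yields n blanks, split on ' ' produces n+1 empty strings, and posexplode numbers them
-- 0..n, so startKeyPart + pos covers the inclusive range [startKeyPart, endKeyPart].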
Now that I have the entire key space in an array, I can transform the array to a table using LATERAL VIEW and explode(), rewriting the JOIN as follows.
WITH b AS
(
SELECT key, keyPart, value
FROM LookupTable
LATERAL VIEW explode(keyParts) keyPartsTable AS keyPart
)
SELECT a.value1, a.value2, b.value
FROM a
JOIN b ON a.key = b.key AND a.keyPart = b.keyPart;
The end result is that the above query takes approximately 3 minutes to complete, when compared with the original 6 hours on the same hardware configuration.