I have to perform an incremental load from an external table into an internal table in Hive; the source data file is appended with new records on a daily basis. The new records can be filtered out based on the timestamp (column load_ts in the table) at which they were loaded. I am trying to achieve this by selecting the records from the source table whose load_ts is greater than the current max(load_ts) in the target table, as given below:
INSERT INTO TABLE target_temp PARTITION (DATA_DT)
SELECT ms.* FROM temp_db.source_temp ms
JOIN (select max(load_ts) max_load_ts from target_temp) mt
ON 1=1
WHERE
ms.load_ts > mt.max_load_ts;
But the above query does not give the desired result: it takes a very long time to execute (which should not be the case with the MapReduce paradigm).
I have also tried other approaches, such as passing the max(load_ts) as a variable instead of joining, but still see no improvement in performance. It would be very helpful if anyone could share their insights as to what is possibly incorrect in this approach, along with any alternate solutions.
First of all, the map/reduce model does not guarantee that your queries will take less time. The main idea is that performance will scale linearly with the number of nodes, but you still have to think about how you're doing things, more so than in normal SQL.
The first thing to check is whether the source table is partitioned by time. If not, it should be, as otherwise you'd be reading the whole table every single time.
Second, you're also calculating the max every time, on the whole destination table. You could make this a lot faster by calculating the max only on the last partition, so change this
JOIN (select max(load_ts) max_load_ts from target_temp) mt
to this (you didn't write the partition column, so I am going to assume it's called 'dt'):
JOIN (select max(load_ts) max_load_ts from target_temp WHERE dt=PREVIOUS_DATA_DT) mt
since we know the max load_ts is going to be in the last partition.
Otherwise, it's hard to help without knowing the structure of the source table, and, like somebody else commented, the sizes of the two tables.
A JOIN is slower than a variable in the WHERE clause, but the main problem with performance here is that your query performs a full scan of both the target table and the source table. I would recommend:
Query only the latest partition for max(load_ts).
Enable statistics gathering and usage
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
set hive.stats.autogather=true;
Compute statistics on both tables for columns.
Statistics will make queries like selecting MAX(partition) or max(load_ts) execute faster.
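For example, column statistics can be gathered like this (here for all columns; narrow the scope as needed):
ANALYZE TABLE temp_db.source_temp COMPUTE STATISTICS FOR COLUMNS;
ANALYZE TABLE target_temp PARTITION(DATA_DT) COMPUTE STATISTICS FOR COLUMNS;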
Try to put the source partition files into the target partition folder instead of using INSERT, if applicable (the partitioning and storage format of the target and source tables must allow this). It works fine, for example, with the TEXTFILE storage format when a source table partition contains only rows with load_ts greater than the max(load_ts) of the target partition. You can combine both methods: copy files for those source partitions that contain only rows to be inserted without filtering (as in the sketch below), and use INSERT for partitions containing mixed data that needs filtering.
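A minimal sketch of the file-copy approach (the paths and partition value are assumptions; note that LOAD DATA INPATH moves the files, so copy them to a staging location first if the source must remain intact):
-- Moves the files under the given path into the partition directory of target_temp
-- without running a MapReduce job; works when both tables share the same storage format.
LOAD DATA INPATH '/staging/source_temp/data_dt=2017-01-01'
INTO TABLE target_temp PARTITION (DATA_DT='2017-01-01');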
Hive may be merging your files during the INSERT. This merge phase takes additional time and adds an additional stage to the job. Check the hive.merge.mapredfiles option and try switching it off.
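For example:
set hive.merge.mapredfiles=false;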
And of course use pre-calculated variable instead of join.
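As a sketch of the variable approach, assuming the maximum load_ts was pre-computed outside the query (for instance by a wrapper script) and passed in via --hivevar:
-- e.g. hive --hivevar max_load_ts='2017-01-01 00:00:00' -f incremental_load.hql
INSERT INTO TABLE target_temp PARTITION (DATA_DT)
SELECT ms.*
FROM temp_db.source_temp ms
WHERE ms.load_ts > '${hivevar:max_load_ts}';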
Use cost-based optimization (CBO) by enabling the properties below:
set hive.cbo.enable=true;
set hive.stats.autogather=true;
set hive.stats.fetch.column.stats=true;
set hive.compute.query.using.stats=true;
set hive.vectorized.execution.enabled=true;
set hive.exec.parallel=true;
Also analyze the tables:
ANALYZE TABLE temp_db.source_temp COMPUTE STATISTICS FOR COLUMNS [comma_separated_column_list];
ANALYZE TABLE target_temp PARTITION(DATA_DT) COMPUTE STATISTICS;
I have to add the ability to store Nullable data in some columns of existing tables, but these tables use the ReplicatedMergeTree engine. So my question is: does a command like ALTER TABLE ... ON CLUSTER ... MODIFY COLUMN ... exist?
And how long will it take?
Thanks!
This is the syntax of the MODIFY query on a cluster; it will also update the replicas (but it can take a while).
ALTER TABLE example_table ON CLUSTER example_cluster MODIFY COLUMN IF EXISTS `example_column` Nullable(UInt8)
As for the time required for the operation, it depends on a lot of parameters (the amount of data, the hardware, ...); we can't estimate it without more information.
Usually, when we have such a modification and we want to ensure that it won't take too much time and that it works, we run some benchmarks to estimate the time it would take.
This is a general question about the Oracle MERGE INTO statement with a particular scenario, on Oracle RDBMS 12c.
Daily data will be loaded to StagingTableA - about 10m rows.
This will be MERGEd INTO TableA.
TableA will vary between 0 and 10m rows (matching StagingTableA).
There may be times when TableA will be pruned/emptied and left with 0 rows.
Clearly, when TableA is empty, a straight INSERT will do the job, but the procedure has been written to use a MERGE INTO method to handle all scenarios.
The MERGE .. MATCH is on an indexed column.
My question is an uncertainty about how the MERGE handles the MATCH in circumstances where TableA will start empty and then grow hugely during the MERGE execution. The MATCH on the indexed column will use an FTS, as the stats will show the table has 0 rows.
At some point during the MERGE transaction, this will become inefficient.
Is the MERGE statement clever enough to detect this and change the execution plan, and start using the index instead of the FTS?
If this was done the old way with a CURSOR, UPDATE and INSERT, then we could potentially introduce an ANALYZE at an appropriate point (say after 50,000 rows processed) on TableA to switch to an optimal plan.
I haven't been able to find any documentation dealing with this specific question.
Hopefully you've got a UNIQUE index on that table which is based on the incoming data. If I were you, rather than using a simple MERGE I'd:
Mark all indexes on the table as UNUSABLE, except for the unique index.
INSERT all records
Catch the DUP_VAL_ON_INDEX exception at the time of the INSERT and issue the appropriate UPDATE.
DELETE processed rows from the input table.
Commit every N records (1000? 10000? 100000? Your choice...), calling DBMS_STATS.GATHER_TABLE_STATS for the table you've inserted into after each COMMIT.
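Putting those steps together, a rough PL/SQL sketch (the table and column names are assumptions; deletion of processed staging rows is omitted for brevity):
DECLARE
  c_commit_every CONSTANT PLS_INTEGER := 10000;  -- pick your own interval
  l_count        PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT id, payload FROM StagingTableA) LOOP
    BEGIN
      INSERT INTO TableA (id, payload) VALUES (r.id, r.payload);
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        UPDATE TableA SET payload = r.payload WHERE id = r.id;
    END;
    l_count := l_count + 1;
    IF MOD(l_count, c_commit_every) = 0 THEN
      COMMIT;
      -- refresh optimizer statistics as the table grows
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLEA');
    END IF;
  END LOOP;
  COMMIT;
END;
/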
Best of luck.
I have a Hive table into which a few thousand records are inserted every hour. But when I execute select * from <table>, it takes a very long time to run. What is the reason behind this?
Hive is not fast to begin with... Not sure what you're expecting, but it will not be on the order of milliseconds.
If you want performance improvements, use Tez or Spark rather than MapReduce execution, use Hive 2 with LLAP, and land the data in ORC or Parquet format.
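For example (assuming Tez is available on the cluster; my_table is a placeholder name):
set hive.execution.engine=tez;

-- Rewrite the data into a columnar format
CREATE TABLE my_table_orc STORED AS ORC AS
SELECT * FROM my_table;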
If you aren't able to do the above, at least place the data into hourly partitions. Then actually query against a partition rather than scanning all the rows/columns, because Hive does partition pruning.
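For instance, assuming a hypothetical hourly partition column dt_hour, a pruned query reads only one directory:
SELECT *
FROM my_table
WHERE dt_hour = '2018-06-01-13';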
Also, HDFS doesn't like files smaller than the HDFS block size (128 MB). Anything smaller means wasted time in map tasks.
I agree with #cricket_007 about using the Tez/Spark execution engine. There are also some customizations you can do on your end to improve performance in Hive:
Use vectorization, which processes batches of 1024 rows at once
set hive.vectorized.execution.enabled = true;
set hive.vectorized.execution.reduce.enabled = true;
Use of CBO
set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
It is best practice to partition your data to speed up queries. Partitioning makes Hive run the query on a subset of the data instead of the entire dataset. Partitions may be created as follows:
The folder structure should look something like this:
path/to/directory/partition=partition_name
Then, on the table itself (assuming it's an external table), your create table statement should be something like:
CREATE EXTERNAL TABLE table_name (
...
fields
...
)
PARTITIONED BY (partition STRING)
LOCATION '/path/to/directory';
You can then query the table and treat the partition as another column.
If you look at the Hive design and architecture, you will see that a typical query has some overhead. A query is translated into code for distributed execution, sent over to the cluster backend, executed there, and then the results are stored and collected for display. This adds latency to every query, even if the input data and the final result set are small.
I want to know how Hive partitioning works. I know the concept, but I am trying to understand how it actually works and stores the data in the exact partition.
Let's say I have a table with a dynamic partition on year, and I have ingested data from 2013. How does Hive create the partition and store that data in the exact partition?
If the table is not partitioned, all the data is stored in one directory without order. If the table is partitioned (e.g. by year), the data is stored separately in different directories, each directory corresponding to one year.
For a non-partitioned table, when you want to fetch the data for year=2010, Hive has to scan the whole table to find the 2010 records. If the table is partitioned, Hive just goes to the year=2010 directory. Much faster and more IO efficient.
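As a small sketch of how the 2013 data ends up in its own directory (table and column names are assumptions), a dynamic-partition insert looks like this:
-- required for dynamic partitioning
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- Hive reads the year value of each row and writes the row under the
-- matching directory, e.g. .../year=2013/
INSERT INTO TABLE sales_partitioned PARTITION (year)
SELECT id, amount, year
FROM sales_staging;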
Hive organizes tables into partitions. It is a way of dividing a table into related parts based on the values of partitioned columns such as date.
Partitions - apart from being storage units - also allow the user to efficiently identify the rows that satisfy a certain criteria.
Using partition, it is easy to query a portion of the data.
Tables or partitions are sub-divided into buckets, to provide extra structure to the data that may be used for more efficient querying. Bucketing works based on the value of hash function of some column of a table.
Suppose you need to retrieve the details of all employees who joined in 2012. Without partitioning, a query searches the whole table for the required information. However, if you partition the employee data by year and store each year separately, the query processing time is reduced.
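A small illustration of both partitioning and bucketing (table and column names are assumptions):
-- Employee data partitioned by joining year and bucketed by employee id
CREATE TABLE employee (
  emp_id INT,
  name   STRING,
  dept   STRING
)
PARTITIONED BY (join_year INT)
CLUSTERED BY (emp_id) INTO 8 BUCKETS
STORED AS ORC;

-- Only the join_year=2012 partition directory is scanned
SELECT * FROM employee WHERE join_year = 2012;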
I am converting GTTs to Oracle types, as explained in an excellent answer by APC. However, some GTTs are being updated based on a select query from another table. For example:
UPDATE my_gtt_1 c
SET (street, city, STATE, zip) = (SELECT src.unit_address,
src.unit_city,
src.unit_state,
src.unit_zip_code
FROM (SELECT mbr.ROWID row_id,
unit_address,
RTRIM(a.unit_city) unit_city,
RTRIM(a.unit_state) unit_state,
RTRIM(a.unit_zip_code) unit_zip_code
FROM table_1 b,
table_2 a,
my_gtt_1 mbr
WHERE type = 'ABC'
AND id = b.ssn_head
AND a.h_id = b.h_id
AND row_id >= v_start_row
AND row_id <= v_end_row) src
WHERE c.ROWID = src.row_id)
WHERE state IS NULL
OR state = ' ';
If my_gtt_1 were not a global temporary table but an Oracle collection type, would it be possible to do updates this complex? Or in these cases are we better off using the global temporary table?
You cannot perform set-based UPDATE operations on object types. You will have to do it row by row, as in:
FOR i IN l_tab.FIRST..l_tab.LAST LOOP
SELECT src.unit_address,
src.unit_city,
src.unit_state,
src.unit_zip_code
INTO l_tab(i).street,
l_tab(i).city,
l_tab(i).STATE,
l_tab(i).zip
FROM (your_query) src;
END LOOP;
You should therefore try to do all computations at creation time (where you can BULK COLLECT). Obviously, if your process needs many steps, you might find that a global temporary table outperforms an in-memory structure.
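For illustration, a sketch of filling the collection with the already-computed values at load time (the record type and the simplified query are assumptions based on the UPDATE above):
DECLARE
  TYPE t_addr_rec IS RECORD (
    street VARCHAR2(100),
    city   VARCHAR2(50),
    state  VARCHAR2(30),
    zip    VARCHAR2(10)
  );
  TYPE t_addr_tab IS TABLE OF t_addr_rec INDEX BY PLS_INTEGER;
  l_tab t_addr_tab;
BEGIN
  -- one set-based query populates the whole collection,
  -- so no row-by-row update of the structure is needed afterwards
  SELECT a.unit_address,
         RTRIM(a.unit_city),
         RTRIM(a.unit_state),
         RTRIM(a.unit_zip_code)
  BULK COLLECT INTO l_tab
  FROM table_1 b
  JOIN table_2 a ON a.h_id = b.h_id
  WHERE b.type = 'ABC';
END;
/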
From the last questions you have asked, it seems you are trying to replace all global temporary tables with object tables. I would suggest caution, because in general they are not interchangeable:
Object tables are in-memory structures: you don't want to load a million+ row table into memory. They are mainly used as a buffer: you load a few rows (100, for example) into the structure, do what you need to do with those rows, then load the next batch. You cannot easily treat this structure as a regular table: for example, you can only search it efficiently by the standard indexing key (you cannot search by rowid in your example unless you define the structure to be indexed by rowid).
Temporary tables on the other hand are very similar to ordinary tables. You can load millions of rows in them, perform joins, complex set operations. You can index the temporary table for further optimization.
In my opinion, the change you are trying to make will take a massive overhaul of your logic, and it may not perform better. In general, you would not replace GTTs with object tables. You may be able to remove the GTT with a significant gain in performance by using SET operations directly (performing the massive UPDATE/DELETE/INSERT on your data directly, without a staging table).
I would suggest performing benchmarks before choosing a solution (this is probably what you are doing right now :)
I think this part of APC's answer to your previous question is relevant here:
Global temporary tables are also good if we have a lot of intermediate processing which is just too complicated to be solved with a single SQL query. Especially if that processing must be applied to subsets of the retrieved rows.
You cannot update the in-memory data with an UPDATE statement like you can a GTT; you would need to write procedural code to locate and change the array elements in question.