I am working on moving 70M rows from a source table to a target table, and a complete dump restored on the other end is not an option. I have decided to create a small SQL file that selects 1M rows at a time and inserts them into the new table (after some clean-up). The problem is that I need to iterate through the 70M rows in 1M chunks, and I have just realised that every iteration is getting slower and slower.
Is there a way to create a partial index to speed up queries with OFFSET 0 LIMIT 1000000, OFFSET 1000000 LIMIT 1000000, etc.?
Example:
Fast:
SELECT id FROM huge_table ORDER BY id OFFSET 0 LIMIT 1000000
Slower:
SELECT id FROM huge_table ORDER BY id OFFSET 1000000 LIMIT 1000000
Very slow:
SELECT id FROM huge_table ORDER BY id OFFSET 5000000 LIMIT 1000000
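For context, each chunk of the migration script looks roughly like the sketch below; new_table and the column list are placeholders. The deeper the offset, the more rows the server has to walk past and discard before it can start copying, which is where the slowdown comes from.
-- Hypothetical chunk of the migration script; table and column names are placeholders.
-- ORDER BY plus a growing OFFSET forces the server to step over every previously
-- copied row on each iteration, so later chunks keep getting slower.
INSERT INTO new_table (id, payload)
SELECT id, payload
FROM huge_table
ORDER BY id
OFFSET 5000000 LIMIT 1000000;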
Related
I'm querying ClickHouse with a query that has ORDER BY and LIMIT 1, and the ORDER BY matches the table's sort order. The query returns 1 row as expected; however, 50+ rows were scanned to produce the result.
I would expect ClickHouse to scan only 1 row as the ORDER BY is in the table's sort order. What's happening here and what can I do to fix this?
SELECT * FROM comp_intel_scrapes
ORDER BY
    client_slug,
    client_hotel_id,
    argset_id,
    scrape_datetime,
    preferred_country,
    preferred_currency,
    adults,
    children,
    nights,
    min_checkin_date,
    max_checkin_date
LIMIT 1
----
Elapsed: 0.004s
Read: 54 rows (8.84KB)
By the way, ClickHouse Cloud is being used here.
It depends on the table engine.
The primary index of a MergeTree table is sparse: https://clickhouse.com/docs/en/guides/improving-query-performance/sparse-primary-indexes/sparse-primary-indexes-design/
Because of this, ClickHouse cannot read less than one granule, which is ~8192 rows by default.
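The granule size is controlled by the index_granularity setting of the MergeTree table (8192 rows by default). A minimal sketch of where it lives; the column types and the exact sorting key here are assumptions, not the real table definition:
-- Hypothetical table definition; column types and the sorting key are assumed.
-- index_granularity is the smallest unit ClickHouse reads, 8192 rows by default.
CREATE TABLE comp_intel_scrapes
(
    client_slug String,
    client_hotel_id UInt64,
    argset_id UInt64,
    scrape_datetime DateTime
    -- ... remaining columns
)
ENGINE = MergeTree
ORDER BY (client_slug, client_hotel_id, argset_id, scrape_datetime)
SETTINGS index_granularity = 8192;
SHOW CREATE TABLE comp_intel_scrapes shows the actual definition and the granularity in use.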
I have a table with approximately 9 million records. When I try to select records with a big offset (for pagination), the execution time grows to extreme values, or the query even exceeds the memory limit and fails.
Here are the logs for the same query with two different offset values.
SELECT * WHERE set_date >= '2019-10-11 11:05:00' AND set_date <= '2019-10-19 18:09:59' ORDER BY id ASC LIMIT 1 OFFSET 30
Elapsed: 0.729 sec. Processed 9.92 million rows, 3.06 GB (13.61 million rows/s., 4.19 GB/s.)
MemoryTracker: Peak memory usage (for query): 181.65 MiB.
SELECT * WHERE set_date >= '2019-10-11 11:05:00' AND set_date <= '2019-10-19 18:09:59' ORDER BY id ASC LIMIT 1 OFFSET 3000000
Elapsed: 6.301 sec. Processed 9.92 million rows, 3.06 GB (1.57 million rows/s., 485.35 MB/s.)
MemoryTracker: Peak memory usage (for query): 5.89 GiB.
All databases, including ClickHouse, implement OFFSET the same way: they read all the preceding rows and simply skip them in the result set. There is no optimization that ascends straight to OFFSET 3000000.
https://www.eversql.com/faster-pagination-in-mysql-why-order-by-with-limit-and-offset-is-slow/
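The usual workaround described in the article is keyset pagination: remember the last id returned on the previous page and filter on it instead of skipping rows with OFFSET. A sketch, with the table name (events) and the last-seen id made up for illustration:
-- 'events' and the id value are placeholders; the point is to seek past the
-- previous page by key instead of making the server skip 3000000 rows.
SELECT *
FROM events
WHERE set_date >= '2019-10-11 11:05:00'
  AND set_date <= '2019-10-19 18:09:59'
  AND id > 2999999 -- last id seen on the previous page
ORDER BY id ASC
LIMIT 1
This only works when pages are walked in id order, which matches the ORDER BY id ASC above.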
Try disabling optimize_read_in_order to fix the memory usage:
SELECT *
WHERE set_date >= '2019-10-11 11:05:00'
AND set_date <= '2019-10-19 18:09:59'
ORDER BY id ASC LIMIT 1 OFFSET 3000000
SETTINGS optimize_read_in_order = 0
"all_members" is a table in hive with 10m rows and 1 column: "membership_nbr". I want to sample 3000 rows. This is what I have done:
hive>create table sample_members as select * from all_members limit 1;
hive>insert overwrite table sample_members select membership_nbr from all_members tablesample(3000 rows);
hive>select count(*) from sample_members;
OK 45000
The result won't change if I replace 3000 rows with 300 rows.
Am I doing something wrong?
Table sampling with tablesample(3000 rows) won't fetch 3000 rows from the entire table; instead it fetches 3000 rows from each input split.
So your query might run 15 mappers, and each mapper will fetch 3000 rows: 3000 * 15 = 45000 rows in total. Likewise, if you change 3000 rows to 300 rows you will get 4500 rows of output after sampling.
So, for your requirement you have to specify tablesample(200 rows). Each mapper will then fetch 200 rows, and the 15 mappers together will fetch the 3000 sample rows.
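A sketch of the corrected statement, assuming the job still runs with 15 mappers:
-- 15 mappers x 200 rows per mapper = 3000 sampled rows
INSERT OVERWRITE TABLE sample_members
SELECT membership_nbr
FROM all_members TABLESAMPLE(200 ROWS);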
Refer to the link below for the various types of sampling:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Sampling
I have an orders table. The table belongs to a multi-tenant application, so there are orders from several merchants in the same table. The table stores hundreds of millions of records. There are two relevant columns for this question:
MerchantID, an integer storing the merchant's unique ID
TransactionID, a string identifying the transaction
I want to know whether there is an efficient index to do the following:
Enforce a unique constraint on Transaction ID for each Merchant ID. The constraint should be enforced in constant time.
Do constant time queries involving exact matches on both columns (for instance, SELECT * FROM <table> WHERE TransactionID = 'ff089f89feaac87b98a' AND MerchantID = 24)
Further info:
I am using Oracle 11g. Maybe this Oracle article is relevant to my question?
I cannot change the column's data type.
By constant time I mean an index performing with O(1) time complexity, like a hashmap.
Hash clusters can provide O(1) access time, but not O(1) constraint enforcement time. However, in practice the constant access time of a hash cluster is worse than the O(log N) access time of a regular b-tree index. Also, clusters are more difficult to configure and do not scale well for some operations.
Create Hash Cluster
drop table orders_cluster;
drop cluster cluster1;
create cluster cluster1
(
MerchantID number,
TransactionID varchar2(20)
)
single table hashkeys 10000; --This number is important, choose wisely!
create table orders_cluster
(
id number,
MerchantID number,
TransactionID varchar2(20)
) cluster cluster1(merchantid, transactionid);
--Add 1 million rows. 20 seconds.
begin
    for i in 1 .. 10 loop
        insert into orders_cluster
        select rownum + i * 100000, mod(level, 100) + i * 100000, level
        from dual connect by level <= 100000;
        commit;
    end loop;
end;
/
create unique index orders_cluster_idx on orders_cluster(merchantid, transactionid);
begin
dbms_stats.gather_table_stats(user, 'ORDERS_CLUSTER');
end;
/
Create Regular Table (For Comparison)
drop table orders_table;
create table orders_table
(
id number,
MerchantID number,
TransactionID varchar2(20)
) nologging;
--Add 1 million rows. 2 seconds.
begin
    for i in 1 .. 10 loop
        insert into orders_table
        select rownum + i * 100000, mod(level, 100) + i * 100000, level
        from dual connect by level <= 100000;
        commit;
    end loop;
end;
/
create unique index orders_table_idx on orders_table(merchantid, transactionid);
begin
dbms_stats.gather_table_stats(user, 'ORDERS_TABLE');
end;
/
Trace Example
SQL*Plus Autotrace is a quick way to find the explain plan and track I/O activity per statement. The number of I/O requests is labeled as "consistent gets" and is a decent way of measuring the amount of work done. This code demonstrates how the numbers were generated for other sections. The queries often need to be run more than once to warm things up.
SQL> set autotrace on;
SQL> select * from orders_cluster where merchantid = 100001 and transactionid = '2';
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 621801084
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 1 (0)| 00:00:01 |
|* 1 | TABLE ACCESS HASH| ORDERS_CLUSTER | 1 | 16 | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MERCHANTID"=100001 AND "TRANSACTIONID"='2')
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
485 bytes sent via SQL*Net to client
540 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
Find Optimal Hashkeys, Trade-Offs
For optimal read performance all the hash collisions should fit in one block (all Oracle I/O is done per block, usually 8K). Getting the ideal storage right is tricky and requires knowing the hash algorithm, the storage size (not the same as the block size), and the number of hash keys (the buckets). Oracle has a default algorithm and size, so it is possible to focus on only one attribute, the number of hash keys.
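The per-key storage can be set explicitly with the SIZE clause when the cluster is created. A hypothetical sized variant of the cluster above (the 512-byte figure is an illustration, not a recommendation):
--Hypothetical variant: SIZE is the expected space per hash key in bytes, which
--together with HASHKEYS determines how many keys end up sharing a block.
create cluster cluster1_sized
(
MerchantID number,
TransactionID varchar2(20)
)
size 512 single table hashkeys 10000;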
More hash keys lead to fewer collisions. This is good for TABLE ACCESS HASH performance, as there is then only one block to read. Below are the consistent gets for different hashkey counts; for comparison, an index access is also included. With enough hashkeys the number of blocks read decreases to the optimal number, 1.
Method           Consistent gets (for transactionid = 1, 20, 300, 4000, and 50000)
Index            4,  3,  3,  3,  3
Hashkeys 100     1, 31, 31, 31, 31
Hashkeys 1000    1,  3,  4,  4,  4
Hashkeys 10000   1,  1,  1,  1,  1
More hash keys also lead to more buckets, more wasted space, and a slower TABLE ACCESS FULL operation.
Table type       Space in MB
Heap table       24 MB
Hashkeys 100     26 MB
Hashkeys 1000    30 MB
Hashkeys 10000   81 MB
To reproduce my results, use a sample query like select * from orders_cluster where merchantid = 100001 and transactionid = '1'; and change the last value to 1, 20, 300, 4000, and 50000.
Performance Comparison
Consistent gets are predictable and easy to measure, but at the end of the day only the wall-clock time matters. Surprisingly, the index access, with 4 times more consistent gets, is still faster than the optimal hash cluster scenario.
--3.5 seconds for b-tree access.
declare
    v_count number;
begin
    for i in 1 .. 100000 loop
        select count(*)
        into v_count
        from orders_table
        where merchantid = 100000 and transactionid = '1';
    end loop;
end;
/
--3.8 seconds for hash cluster access.
declare
    v_count number;
begin
    for i in 1 .. 100000 loop
        select count(*)
        into v_count
        from orders_cluster
        where merchantid = 100000 and transactionid = '1';
    end loop;
end;
/
I also tried the test with variable predicates but the results were similar.
Does it Scale?
No, hash clusters do not scale. Despite the O(1) time complexity of TABLE ACCESS HASH and the O(log n) time complexity of INDEX UNIQUE SCAN, hash clusters never seem to outperform b-tree indexes.
I tried the above sample code with 10 million rows. The hash cluster was painfully slow to load and still under-performed the index on SELECT performance. I tried to scale it up to 100 million rows, but the insert was going to take 11 days.
The good news is that b-trees scale well. Adding 100 million rows to the above example only requires 3 levels in the index. I looked at DBA_INDEXES for a large database environment (hundreds of databases and a petabyte of data); the worst index had only 7 levels, and that was a pathological index on VARCHAR2(4000) columns. In most cases your b-tree indexes will stay shallow regardless of the table size.
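To check the depth of your own indexes, a query along these lines can be used; BLEVEL is the number of branch levels, so the height is BLEVEL + 1 (swap in USER_INDEXES if you lack DBA privileges):
--Indexes with 3 or more branch levels are rare; most stay at BLEVEL 1 or 2.
select owner, index_name, blevel, leaf_blocks
from dba_indexes
where blevel >= 3
order by blevel desc;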
In this case, O(log n) beats O(1).
But WHY?
Poor hash cluster performance is perhaps a victim of Oracle's attempt to simplify things and hide the kind of details necessary to make a hash cluster work well. Clusters are difficult to set up and use properly and would rarely provide a significant benefit anyway. Oracle has not put a lot of effort into them in the past few decades.
The commenters are correct that a simple b-tree index is best. But it's not obvious why that should be true and it's good to think about the algorithms used in the database.
I have a table in an Oracle db that gets a couple of million new rows every month. Each row has a column which states the date when it was created.
I'd like to run a query that gets the disk space growth over the last 6 months. In other words, the result would be a table with two columns where each row would have the month's name and disk space used during that month.
Thanks,
This article reports a method of getting the table growth: http://www.dba-oracle.com/t_table_growth_reports.htm
column "Percent of Total Disk Usage" justify right format 999.99
column "Space Used (MB)" justify right format 9,999,999.99
column "Total Object Size (MB)" justify right format 9,999,999.99
set linesize 150
set pages 80
set feedback off
select *
from (select to_char(end_interval_time, 'MM/YY') mydate,
             sum(space_used_delta) / 1024 / 1024 "Space Used (MB)",
             avg(c.bytes) / 1024 / 1024 "Total Object Size (MB)",
             round(sum(space_used_delta) / sum(c.bytes) * 100, 2) "Percent of Total Disk Usage"
      from
        dba_hist_snapshot sn,
        dba_hist_seg_stat a,
        dba_objects b,
        dba_segments c
      where begin_interval_time > trunc(sysdate) - &days_back
        and sn.snap_id = a.snap_id
        and b.object_id = a.obj#
        and b.owner = c.owner
        and b.object_name = c.segment_name
        and c.segment_name = '&segment_name'
      group by to_char(end_interval_time, 'MM/YY'))
order by to_date(mydate, 'MM/YY');
DBA_TABLES (or the equivalent) gives an AVG_ROW_LEN, so you could simply multiply that by the number of rows created per month.
The caveats are that it assumes the row length of new rows is similar to that of existing rows. If you've got a bunch of historical data that was 'small' (e.g. 50 bytes per row) but new rows are larger (150 bytes), then the estimates will be too low.
Also, how do updates figure into things? If a row starts at 50 bytes and grows to 150 two months later, how do you account for those 100 bytes?
Finally, tables don't grow with each row insert. Every so often the allocated space fills up and the table allocates another chunk. Depending on the table settings, that next chunk may be, for example, 50% of the existing table size. So you might not physically grow for three months, then see a massive jump, then not grow for another six months.
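With those caveats in mind, a sketch of the AVG_ROW_LEN estimate; the table name, the creation-date column, and the six-month window are assumptions:
--'my_orders' and 'created_date' are placeholders; avg_row_len comes from the
--optimizer statistics, so gather stats before trusting the estimate.
select to_char(o.created_date, 'MM/YYYY') mon,
       count(*) rows_added,
       round(count(*) * t.avg_row_len / 1024 / 1024) est_space_mb
from my_orders o,
     (select avg_row_len from user_tables where table_name = 'MY_ORDERS') t
where o.created_date >= add_months(trunc(sysdate, 'MM'), -6)
group by to_char(o.created_date, 'MM/YYYY'), t.avg_row_len
order by min(o.created_date);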