Performance tuning on reading huge table

I have a huge table with more than one hundred million rows, and I have to query it to return a set of data in as little time as possible.
So I have created a test environment with this table definition:
CREATE TABLE [dbo].[Test](
[Dim1ID] [nvarchar](20) NOT NULL,
[Dim2ID] [nvarchar](20) NOT NULL,
[Dim3ID] [nvarchar](4) NOT NULL,
[Dim4ID] [smalldatetime] NOT NULL,
[Dim5ID] [nvarchar](20) NOT NULL,
[Dim6ID] [nvarchar](4) NOT NULL,
[Dim7ID] [nvarchar](4) NOT NULL,
[Dim8ID] [nvarchar](4) NOT NULL,
[Dim9ID] [nvarchar](4) NOT NULL,
[Dim10ID] [nvarchar](4) NOT NULL,
[Dim11ID] [nvarchar](20) NOT NULL,
[Value] [decimal](21, 6) NOT NULL,
CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED
(
[Dim1ID] ASC,
[Dim2ID] ASC,
[Dim3ID] ASC,
[Dim4ID] ASC,
[Dim5ID] ASC,
[Dim6ID] ASC,
[Dim7ID] ASC,
[Dim8ID] ASC,
[Dim9ID] ASC,
[Dim10ID] ASC,
[Dim11ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
This table is the fact table of Star schema architecture (fact/dimensions). As you can see I have a clustered index on all the columns except for the “Value” column.
I have filled it with approx. 10,000,000 rows for testing purposes. The fragmentation is currently at 0.01%.
I would like to improve the performance when reading a set of rows from this table using this query:
DECLARE @Dim1ID nvarchar(20) = 'C1'
DECLARE @Dim9ID nvarchar(4) = 'VRT1'
DECLARE @Dim10ID nvarchar(4) = 'S1'
DECLARE @Dim6ID nvarchar(4) = 'FRA'
DECLARE @Dim7ID nvarchar(4) = '' -- empty = all
DECLARE @Dim8ID nvarchar(4) = '' -- empty = all
DECLARE @Dim2 TABLE ( Dim2ID nvarchar(20) NOT NULL )
INSERT INTO @Dim2 VALUES ('A1'), ('A2'), ('A3'), ('A4');
DECLARE @Dim3 TABLE ( Dim3ID nvarchar(4) NOT NULL )
INSERT INTO @Dim3 VALUES ('P1');
DECLARE @Dim4ID TABLE ( Dim4ID smalldatetime NOT NULL )
INSERT INTO @Dim4ID VALUES ('2009-01-01'), ('2009-01-02'), ('2009-01-03');
DECLARE @Dim11 TABLE ( Dim11ID nvarchar(20) NOT NULL )
INSERT INTO @Dim11 VALUES ('Var0001'), ('Var0040'), ('Var0060'), ('Var0099')
SELECT RD.Dim2ID,
RD.Dim3ID,
RD.Dim4ID,
RD.Dim5ID,
RD.Dim6ID,
RD.Dim7ID,
RD.Dim8ID,
RD.Dim9ID,
RD.Dim10ID,
RD.Dim11ID,
RD.Value
FROM dbo.Test RD
INNER JOIN @Dim2 R
ON RD.Dim2ID = R.Dim2ID
INNER JOIN @Dim3 C
ON RD.Dim3ID = C.Dim3ID
INNER JOIN @Dim4ID P
ON RD.Dim4ID = P.Dim4ID
INNER JOIN @Dim11 V
ON RD.Dim11ID = V.Dim11ID
WHERE RD.Dim1ID = @Dim1ID
AND RD.Dim9ID = @Dim9ID
AND ((@Dim6ID <> '' AND RD.Dim6ID = @Dim6ID) OR @Dim6ID = '')
AND ((@Dim7ID <> '' AND RD.Dim7ID = @Dim7ID) OR @Dim7ID = '')
AND ((@Dim8ID <> '' AND RD.Dim8ID = @Dim8ID) OR @Dim8ID = '')
I have tested this query and it returned 180 rows with these times:
1st execution: 1 min 32 sec; 2nd execution: 1 min.
I would like to return the data in a few seconds if possible.
I think I can add non-clustered indexes, but I am not sure of the best way to define them.
Would having the data stored in sorted order in this table improve performance?
Or are there solutions other than indexes?
Thanks.

Consider your datatypes as one problem. Do you need nvarchar? It's measurably slower.
Second problem: the PK is wrong for your query. It should be Dim1ID, Dim9ID first (or vice versa, based on selectivity), or some flavour with the JOIN columns in.
Third problem: the use of OR. This construct usually works, despite what naysayers who haven't tried it will post:
RD.Dim7ID = ISNULL(@Dim7ID, RD.Dim7ID)
This assumes @Dim7ID is NULL for "all", though. The optimiser will short-circuit it in most cases.
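As a sketch of that suggestion (note the assumption: the "all" sentinel changes from '' to NULL), the three OR constructs from the question could be replaced like this:

```sql
-- Hypothetical rewrite: the optional filter variables default to NULL (= all)
-- instead of '', so ISNULL() collapses each filter to an always-true equality
DECLARE @Dim1ID nvarchar(20) = 'C1';
DECLARE @Dim9ID nvarchar(4)  = 'VRT1';
DECLARE @Dim6ID nvarchar(4)  = 'FRA';
DECLARE @Dim7ID nvarchar(4)  = NULL;  -- NULL = all
DECLARE @Dim8ID nvarchar(4)  = NULL;  -- NULL = all

SELECT RD.Value
FROM dbo.Test RD
WHERE RD.Dim1ID = @Dim1ID
  AND RD.Dim9ID = @Dim9ID
  AND RD.Dim6ID = ISNULL(@Dim6ID, RD.Dim6ID)
  AND RD.Dim7ID = ISNULL(@Dim7ID, RD.Dim7ID)
  AND RD.Dim8ID = ISNULL(@Dim8ID, RD.Dim8ID);
```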

I'm with gbn on this. Typically in star-schema data warehouses, the dimension IDs are int, which is 4 bytes. Not only are all your dimensions larger than that, your nvarchar columns are both varying-length and use wide characters.
As far as indexing, just one clustered index may be fine, since in the case of your fact table you really don't have many facts. As gbn says, with your particular example, your index needs to lead with the columns you are going to be providing, so that the index can actually be used.
In a real-world case of a fact table with a number of facts, your clustered index is simply for data organization - you'll probably be expecting some non-clustered indexes for specific usages.
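As an illustration, a covering nonclustered index shaped for the question's query might look something like the sketch below (the column order is an assumption based on the stated equality and join predicates; verify against the actual plan):

```sql
-- Sketch only: equality predicates first, then the join columns,
-- with the remaining output columns covered via INCLUDE
CREATE NONCLUSTERED INDEX IX_Test_Query1
ON dbo.Test (Dim1ID, Dim9ID, Dim2ID, Dim3ID, Dim4ID, Dim11ID)
INCLUDE (Dim5ID, Dim6ID, Dim7ID, Dim8ID, Dim10ID, Value);
```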
But I'm worried that your query specifies an ID parameter. Typically in a DW environment you don't know the IDs; for selective queries you select based on the dimension attributes, and the IDs are meaningless surrogates:
SELECT *
FROM fact
INNER JOIN dim1
ON fact.dim1id = dim1.id
WHERE dim1.attribute = ''
Have you looked at Kimball's books on dimensional modeling? I think if you are going to a star schema, you should probably be familiar with his design techniques, as well as the various pitfalls he discusses with the too many and too few dimensions.

See this: Dynamic Search Conditions in T-SQL – Version for SQL 2008 (SP1 CU5 and later).
The quick answer, if you are on the right service pack of SQL Server 2008, is to try adding this to the end of the query:
OPTION(RECOMPILE)
When on the proper service pack of SQL Server 2008, OPTION(RECOMPILE) will build the execution plan based on the runtime values of the local variables.
For people still using SQL Server 2008 without the proper service packs, or still on 2005, see: Dynamic Search Conditions in T-SQL – Version for SQL 2005 and Earlier.
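Applied to the question's query, that just means appending the hint after the last predicate; an abbreviated sketch:

```sql
-- Abbreviated version of the question's query; the hint goes at the very end
SELECT RD.Dim2ID, RD.Dim3ID, RD.Value
FROM dbo.Test RD
WHERE RD.Dim1ID = @Dim1ID
  AND ((@Dim7ID <> '' AND RD.Dim7ID = @Dim7ID) OR @Dim7ID = '')
OPTION (RECOMPILE);  -- plan compiled using the variables' runtime values
```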

I'd be a little concerned about having all the non-value columns in your clustered index. That makes for large index rows in the non-leaf levels, that key will be duplicated in every nonclustered index, and the index only provides a benefit when [Dim1ID] is included in the query. So, even for the query you're optimizing, you're probably getting a full scan.
I would consider a clustered index on the most commonly used key, and if you have a lot of date-range queries (e.g., date between a and b), go with the date key. Then create non-clustered indexes on the other key values.
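A minimal sketch of that layout (assuming the existing clustered PK is first dropped or recreated as nonclustered, since a table can have only one clustered index):

```sql
-- Sketch: cluster on the date key, then index the other keys separately
CREATE CLUSTERED INDEX CIX_Test_Dim4 ON dbo.Test (Dim4ID);
CREATE NONCLUSTERED INDEX IX_Test_Dim1 ON dbo.Test (Dim1ID);
CREATE NONCLUSTERED INDEX IX_Test_Dim9 ON dbo.Test (Dim9ID);
```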


Query spilling to disk

We have a query which spills to disk (about 1 Terabyte!), here's the code:
WITH q (year, quarter) AS (
    SELECT * FROM temp.quarters
)
SELECT *,
       (SELECT price
        FROM prices
        WHERE EXTRACT(QUARTER FROM pricing_date::TIMESTAMP) = q.quarter
          AND EXTRACT(YEAR FROM pricing_date::TIMESTAMP) = q.year
        ORDER BY pricing_date
        LIMIT 1)
FROM q
ORDER BY q.year, q.quarter
The prices table has millions of rows. Is there a way to improve this query so that it doesn't spill as much? We think it may be because we're using the WITH clause instead of temp tables.
Table definition below:
CREATE TABLE public.record_pricing (
record_id int8 NOT NULL,
pricing_date date NOT NULL,
price numeric(26,10) NOT NULL
)
WITH (
appendonly=true
);
There are no indexes, no constraints
Thanks,
The answer is already in your question
There are no indexes, no constraints
To improve query time on such a large table, the use of indexes is imperative. Try adding expression indexes to your table, so that the query planner knows beforehand where to find year and quarter, e.g.
CREATE INDEX idx_pricing_quarter ON record_pricing (EXTRACT(QUARTER FROM pricing_date::TIMESTAMP));
CREATE INDEX idx_pricing_year ON record_pricing (EXTRACT(YEAR FROM pricing_date::TIMESTAMP));
.. or even
CREATE INDEX idx_pricing_year_quarter ON record_pricing
(EXTRACT(QUARTER FROM pricing_date::TIMESTAMP) ,
EXTRACT(YEAR FROM pricing_date::TIMESTAMP));
You might want to consider indexing pricing_date as well; take a look at the documentation:
CREATE INDEX idx_pricing_date ON record_pricing (pricing_date);
Note that indexes may slow down INSERTs into your table! But since this is most likely a data warehouse, that might not be a problem.
After that, your query should be significantly faster. The best way to see whether your query is using the indexes properly is to EXPLAIN it. Play with it and you will see the improvements yourself.
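For example, with hypothetical literal values for the year and quarter:

```sql
-- Check whether the expression indexes are actually used and how much
-- time each plan node takes (PostgreSQL / Greenplum syntax)
EXPLAIN ANALYZE
SELECT price
FROM record_pricing
WHERE EXTRACT(QUARTER FROM pricing_date::TIMESTAMP) = 1
  AND EXTRACT(YEAR FROM pricing_date::TIMESTAMP) = 2019
ORDER BY pricing_date
LIMIT 1;
```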
By the way, the CTE isn't the issue here, since it is a full scan of a presumably small table (temp.quarters), but if you want to get rid of it, try:
SELECT q.year,q.quarter,
(SELECT price
FROM record_pricing
WHERE EXTRACT(QUARTER FROM pricing_date::TIMESTAMP) = q.quarter AND
EXTRACT(YEAR FROM pricing_date::TIMESTAMP) = q.year
ORDER BY pricing_date LIMIT 1)
FROM quarters q;

MonetDB Query Performance - Multi-Table Joins vs. Single-Table Predicate Selections

I use MonetDB to execute IR tasks, and I have come across the following issue that I do not fully understand.
In this setting I created two very simple tables.
Dictionary Table: (3 267 008 tuples)
CREATE TABLE "ir"."dict" (
"tid" INTEGER NOT NULL,
"term" VARCHAR(100),
CONSTRAINT "pk_dict" PRIMARY KEY ("tid")
);
Dictionary History Table: (113 574 247 tuples)
CREATE TABLE "ir"."dict_histu" (
"tid" INTEGER,
"added" TIMESTAMP,
"removed" TIMESTAMP,
"df" INTEGER,
CONSTRAINT "fk_dict_histu" FOREIGN KEY ("tid") REFERENCES "ir"."dict" ("tid")
);
And executed the following queries.
Query 1:
SELECT dict_histu.tid, dict_histu.df FROM dict
JOIN dict_histu ON dict.tid = dict_histu.tid
WHERE dict.tid IN (25,26,27)
AND dict_histu.added <= timest
AND (dict_histu.removed IS NULL OR dict_histu.removed > timest)
LIMIT 100;
Execution time: ~1.2 s
Query 2:
SELECT tid, df FROM dict_histu
WHERE dict_histu.added <= %timest%
AND (dict_histu.removed is NULL OR dict_histu.removed > %timest%)
AND dict_histu.tid IN (25,26,27)
LIMIT 100;
Execution time: ~4.7 s
Intuitively, I would have assumed the second approach to be faster, because only one table is involved and the number of predicates and candidate rows to be eliminated is unchanged.
My question: Why does the first query outperform the second? What is happening behind the scenes (is there literature available)? How can this observation contribute to column-store database design in general (best practices)?
Unfortunately the trace logs are way too long to attach!
Thank you for your help!

Oracle SQL Query Performance, Function based Indexes

I have been trying to fine tune a SQL Query that takes 1.5 Hrs to process approx 4,000 error records. The run time increases along with the number of rows.
I figured out there is one condition in my SQL that is actually causing the issue:
AND (DECODE (aia.doc_sequence_value,
NULL, DECODE(aia.voucher_num,
NULL, SUBSTR(aia.invoice_num, 1, 10),
aia.voucher_num) ,
aia.doc_sequence_value) ||'_' ||
aila.line_number ||'_' ||
aida.distribution_line_number ||'_' ||
DECODE (aca.doc_sequence_value,
NULL, DECODE(aca.check_voucher_num,
NULL, SUBSTR(aca.check_number, 1, 10),
aca.check_voucher_num) ,
aca.doc_sequence_value)) = "P_ID"
(P_ID is a value from the first cursor's SQL.)
(Note that these are standard Oracle Applications (ERP) invoice tables.)
The P_ID column in the staging table is derived the same way as the expression above and is compared here again in the second SQL to get the latest data for that record. (Basically we are reprocessing the error records; the value of P_ID is something like "999703_1_1_9995248".)
Q1) Can I create a function-based index on the whole left-side derivation? If so, what is the syntax?
Q2) Would it be okay, or against Oracle's standard rules, to create a function-based index on standard Oracle tables? (Not modifying the tables themselves, just adding an index.)
Q3) If not, what is the best approach to solve this issue?
Briefly: no, you can't place a function-based index on that expression, because the input values are derived from four different tables (or table aliases).
What you might look into is a materialised view, but that's a big and potentially difficult hammer with which to solve a single-query optimisation problem.
You might investigate decomposing that string "999703_1_1_9995248" and applying the relevant parts to the separate expressions:
DECODE(aia.doc_sequence_value,
NULL,
DECODE(aia.voucher_num,
NULL, SUBSTR(aia.invoice_num, 1, 10),
aia.voucher_num) ,
aia.doc_sequence_value) = '999703' and
aila.line_number = '1' and
aida.distribution_line_number = '1' and
DECODE (aca.doc_sequence_value,
NULL,
DECODE(aca.check_voucher_num,
NULL, SUBSTR(aca.check_number, 1, 10),
aca.check_voucher_num) ,
aca.doc_sequence_value) = '9995248'
Then you can use indexes on the expressions and columns.
You could separate the four components of the P_ID value using regular expressions, or a combination of InStr() and SubStr().
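A sketch of the regular-expression route (REGEXP_SUBSTR exists in Oracle 10g and later; the inline p_id value is just the example from the question):

```sql
-- Split '999703_1_1_9995248' into its four underscore-separated parts
SELECT REGEXP_SUBSTR(p_id, '[^_]+', 1, 1) AS doc_part,
       REGEXP_SUBSTR(p_id, '[^_]+', 1, 2) AS line_part,
       REGEXP_SUBSTR(p_id, '[^_]+', 1, 3) AS dist_part,
       REGEXP_SUBSTR(p_id, '[^_]+', 1, 4) AS check_part
FROM (SELECT '999703_1_1_9995248' AS p_id FROM dual);
```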
Ad 1) Based on the SQL you've posted, you cannot create a function-based index on that. The reason is that a function-based index must be:
Deterministic, i.e. the function used in the index definition always returns the same result for given input arguments, and
Based only on columns from the table the index is created on. In your case, judging by the aliases, you have four tables (aia, aila, aida, aca).
Requirement #2 makes it impossible to build a function-based index for that expression.
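For contrast, a function-based index is possible when an expression touches only one table. A hypothetical sketch, assuming aia aliases the AP_INVOICES_ALL table:

```sql
-- Hypothetical: index just the single-table part of the expression,
-- so equality against its decomposed value can use the index
CREATE INDEX idx_inv_docseq ON ap_invoices_all (
  DECODE(doc_sequence_value,
         NULL, DECODE(voucher_num,
                      NULL, SUBSTR(invoice_num, 1, 10),
                      voucher_num),
         doc_sequence_value)
);
```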

How to define if table is a good candidate for a clustered columnstore index?

I have read (here,here and here) about clustered columnstore indexes introduced in SQL Server 2014. Basically, now:
Column store indexes can be updatable
Table schema can be modified (without drop column store indexes)
Structure of the base table can be columnar
Space is saved through compression (with a columnstore index, you can save between 40 and 50 percent of the initial space used for the table)
In addition, they support:
Row mode and Batch mode processing
BULK INSERT statement
More data types
As I understand it, there are some restrictions, like:
Unsupported data types
Other indexes cannot be created
But as it is said:
With a clustered columnstore index, all filter possibilities are already covered; the Query Processor, using Segment Elimination, will be able to consider only the segments required by the query clauses. On the columns where it cannot apply Segment Elimination, all scans will be faster than B-Tree index scans because the data are compressed, so fewer I/O operations will be required.
I am interested in the following:
Does the statement above say that a clustered column store index is always better for extracting data than a B-Tree index when a lot of duplicated values exist?
What about the performance between clustered column store index and non-clustered B-Tree covering index, when the table has many columns for example?
Can I have a combination of clustered and non-clustered columnstores indexes on one table?
And most importantly, can anyone tell how to determine whether a table is a good candidate for a columned stored index?
It is said that the best candidates are tables on which update/delete/insert operations are not performed often. For example, I have a table with a storage size above 17 GB (about 70 million rows) where new records are inserted and deleted constantly, but a lot of queries use its columns. Or I have a table with a storage size of about 40 GB (about 60 million rows) with many inserts per day; it is not queried often, but I want to reduce its size.
I know the answer is mostly in running production tests but before that I need to pick the better candidates.
One of the most important restrictions for Clustered Columnstore is their locking, you can find some details over here: http://www.nikoport.com/2013/07/07/clustered-columnstore-indexes-part-8-locking/
Regarding your questions:
1) Does the statement above say that a clustered columnstore index is always better for extracting data than a B-Tree index when a lot of duplicated values exist?
Not only are duplicates scanned faster in Batch Mode, but the data-reading mechanisms of Columnstore Indexes are also more effective when reading all data out of a segment.
2) What about the performance between a clustered columnstore index and a non-clustered B-Tree covering index, when the table has many columns, for example?
A Columnstore Index has significantly better compression than the Page or Row compression available for the row store; Batch Mode will make the biggest difference on the processing side; and, as already mentioned, even reading equally-sized pages & extents should be faster for Columnstore Indexes.
3) Can I have a combination of clustered and non-clustered columnstore indexes on one table?
No, at the moment this is impossible.
4) ... can anyone tell how to determine whether a table is a good candidate for a columnstore index?
Any table that you scan and process in large amounts (over 1 million rows), or maybe even a whole table of over 100K rows that is scanned entirely, might be a candidate to consider.
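For a table that fits this profile, creating the index is a single statement (SQL Server 2014+ syntax; the table name is a placeholder, and any existing clustered index would first have to be dropped or rebuilt with DROP_EXISTING):

```sql
-- Replaces the table's row-store structure with a clustered columnstore
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;
```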
There are also some restrictions on the technologies that can be used together with the table on which you want to build a Clustered Columnstore index; here is a query that I am using:
select object_schema_name( t.object_id ) as 'Schema'
, object_name (t.object_id) as 'Table'
, sum(p.rows) as 'Row Count'
, cast( sum(a.total_pages) * 8.0 / 1024. / 1024
as decimal(16,3)) as 'size in GB'
, (select count(*) from sys.columns as col
where t.object_id = col.object_id ) as 'Cols Count'
, (select count(*)
from sys.columns as col
join sys.types as tp
on col.system_type_id = tp.system_type_id
where t.object_id = col.object_id and
UPPER(tp.name) in ('VARCHAR','NVARCHAR')
) as 'String Columns'
, (select sum(col.max_length)
from sys.columns as col
join sys.types as tp
on col.system_type_id = tp.system_type_id
where t.object_id = col.object_id
) as 'Cols Max Length'
, (select count(*)
from sys.columns as col
join sys.types as tp
on col.system_type_id = tp.system_type_id
where t.object_id = col.object_id and
(UPPER(tp.name) in ('TEXT','NTEXT','TIMESTAMP','HIERARCHYID','SQL_VARIANT','XML','GEOGRAPHY','GEOMETRY') OR
(UPPER(tp.name) in ('VARCHAR','NVARCHAR') and (col.max_length = 8000 or col.max_length = -1))
)
) as 'Unsupported Columns'
, (select count(*)
from sys.objects
where type = 'PK' AND parent_object_id = t.object_id ) as 'Primary Key'
, (select count(*)
from sys.objects
where type = 'F' AND parent_object_id = t.object_id ) as 'Foreign Keys'
, (select count(*)
from sys.objects
where type in ('UQ','D','C') AND parent_object_id = t.object_id ) as 'Constraints'
, (select count(*)
from sys.objects
where type in ('TA','TR') AND parent_object_id = t.object_id ) as 'Triggers'
, t.is_tracked_by_cdc as 'CDC'
, t.is_memory_optimized as 'Hekaton'
, t.is_replicated as 'Replication'
, coalesce(t.filestream_data_space_id,0,1) as 'FileStream'
, t.is_filetable as 'FileTable'
from sys.tables t
inner join sys.partitions as p
ON t.object_id = p.object_id
INNER JOIN sys.allocation_units as a
ON p.partition_id = a.container_id
where p.data_compression in (0,1,2) -- None, Row, Page
group by t.object_id, t.is_tracked_by_cdc, t.is_memory_optimized, t.is_filetable, t.is_replicated, t.filestream_data_space_id
having sum(p.rows) > 1000000
order by sum(p.rows) desc

How to optimize a select from several tables with millions of rows

I have the following tables (Oracle 10g):
catalog (
id NUMBER PRIMARY KEY,
name VARCHAR2(255),
owner NUMBER,
root NUMBER REFERENCES catalog(id)
...
)
university (
id NUMBER PRIMARY KEY,
...
)
securitygroup (
id NUMBER PRIMARY KEY
...
)
catalog_securitygroup (
catalog REFERENCES catalog(id),
securitygroup REFERENCES securitygroup(id)
)
catalog_university (
catalog REFERENCES catalog(id),
university REFERENCES university(id)
)
Catalog: 500 000 rows, catalog_university: 500 000, catalog_securitygroup: 1 500 000.
I need to select any 50 rows from catalog with a specified root, ordered by name, for the current university and current securitygroup. Here is the query:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Here 100 is some catalog, 200 some securitygroup, and 300 some university. This query returns 50 rows out of ~170,000 in 3 minutes.
But the next query returns those rows in 2 seconds:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
I built the following indexes: (catalog.id, catalog.name, catalog.owner), (catalog_securitygroup.catalog, catalog_securitygroup.index), (catalog_university.catalog, catalog_university.university).
Plan for first query (using PLSQL Developer):
http://habreffect.ru/66c/f25faa5f8/plan2.jpg
Plan for second query:
http://habreffect.ru/f91/86e780cc7/plan1.jpg
What are the ways to optimize the query I have?
The indexes that can be useful and should be considered deal with
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
So the following fields can be interesting for indexes
c: id, root
cs: catalog, securitygroup
cu: catalog, university
So, try creating
(catalog_securitygroup.catalog, catalog_securitygroup.securitygroup)
and
(catalog_university.catalog, catalog_university.university)
EDIT:
I missed the ORDER BY - these fields should also be considered, so
(catalog.name, catalog.id)
might be beneficial (or some other composite index that could be used for sorting and the conditions - possibly (catalog.root, catalog.name, catalog.id))
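In DDL terms, these suggestions amount to something like the following sketch (the index names are invented):

```sql
-- Candidate composite indexes; the catalog index assumes root equality
-- first, then the sort column, so the ORDER BY can be satisfied by the index
CREATE INDEX idx_cs_cat_sg    ON catalog_securitygroup (catalog, securitygroup);
CREATE INDEX idx_cu_cat_univ  ON catalog_university (catalog, university);
CREATE INDEX idx_cat_root_name ON catalog (root, name, id);
```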
EDIT2
Although another question is accepted I'll provide some more food for thought.
I have created some test data and run some benchmarks.
The test cases are minimal in terms of record width (in catalog_securitygroup and catalog_university the primary keys are (catalog, securitygroup) and (catalog, university)). Here is the number of records per table:
test=# SELECT (SELECT COUNT(*) FROM catalog), (SELECT COUNT(*) FROM catalog_securitygroup), (SELECT COUNT(*) FROM catalog_university);
?column? | ?column? | ?column?
----------+----------+----------
500000 | 1497501 | 500000
(1 row)
The database is PostgreSQL 8.4, default Ubuntu install; hardware: i5, 4 GB RAM.
First I rewrote the query to
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root < 50
AND cs.catalog = c.id
AND cu.catalog = c.id
AND cs.securitygroup < 200
AND cu.university < 200
ORDER BY c.name
LIMIT 50 OFFSET 100
Note: the conditions are turned into less-than comparisons to maintain a comparable number of intermediate rows (the above query would return 198,801 rows without the LIMIT clause).
If run as above, without any extra indexes (save for PKs and foreign keys), it runs in 556 ms on a cold database (this is actually an indication that I oversimplified the sample data somehow; I would be happier with 2-4 s here without resorting to less-than operators).
This brings me to my point: any straight query that only joins and filters (a certain number of tables) and returns only a certain number of records should run under 1 s on any decent database, without needing cursors or denormalized data (one of these days I'll have to write a post on that).
Furthermore, if a query returns only 50 rows and does simple equality joins with restrictive equality conditions, it should run much faster still.
Now let's see if adding some indexes helps. The biggest potential in queries like this is usually the sort order, so let me try that:
CREATE INDEX test1 ON catalog (name, id);
This brings execution time for the query down to 22 ms on a cold database.
And that's the point - if you are trying to get only a page of data, you should only get a page of data, and execution times of queries such as this on normalized data with proper indexes should take less than 100 ms on decent hardware.
I hope I didn't oversimplify the case to the point of no comparison (as I stated before some simplification is present as I don't know the cardinality of relationships between catalog and the many-to-many tables).
So, the conclusion is
If I were you, I would not stop tweaking the indexes (and the SQL) until I got the query's performance below 200 ms, as a rule of thumb.
Only if I found an objective explanation of why it can't go below that value would I resort to denormalisation and/or cursors, etc...
First, I assume that your university and securitygroup tables are rather small. You posted the sizes of the large tables, but it's really the other sizes that are part of the problem.
Your problem stems from the fact that you can't join the smallest tables first. Your join order should be from small to large, but because your mapping tables don't include a securitygroup-to-university table, you can't join the smallest ones first. So you wind up starting with one or the other, joining to a big table, then to another big table, and only then, with that large intermediate result, going to a small table.
If you always have current_univ, current_secgrp and root as inputs, you want to use them to filter as soon as possible. The only way to do that is to change your schema somewhat. In fact, you can leave the existing tables in place if you have to, but you'll be adding to the space used with this suggestion.
You've normalized the data very well. That's great for update speed... not so great for querying. We denormalize to speed up querying (that's the whole reason for data warehouses; OK, that and history). Build a single mapping table with the following columns:
Univ_id, SecGrp_ID, Root, Catalog_id. Make it an index-organized table keyed on the first three columns.
Now when you query that index with all three key values, you'll finish the index scan with a complete list of allowable catalog IDs; from there it's just a single join to the catalog table to get the item details, and you're off and running.
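A possible sketch of that mapping table in Oracle (the names are invented; note that an index-organized table's primary key must cover all key columns, so catalog_id is appended after the three leading columns to keep the key unique):

```sql
-- Index-organized mapping table: rows are stored in PK order, so a lookup
-- on (univ_id, secgrp_id, root) is a single range scan of the primary index
CREATE TABLE catalog_access_map (
  univ_id    NUMBER NOT NULL,
  secgrp_id  NUMBER NOT NULL,
  root       NUMBER NOT NULL,
  catalog_id NUMBER NOT NULL,
  CONSTRAINT pk_catalog_access_map
    PRIMARY KEY (univ_id, secgrp_id, root, catalog_id)
) ORGANIZATION INDEX;
```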
The Oracle cost-based optimizer makes use of all the information that it has to decide what the best access paths are for the data and what the least costly methods are for getting that data. So below are some random points related to your question.
The first three tables that you've listed all have primary keys. Do the other tables (catalog_university and catalog_securitygroup) also have primary keys on them? A primary key defines a column or set of columns that are non-null and unique, and is very important in a relational database.
Oracle generally enforces a primary key by generating a unique index on the given columns. The Oracle optimizer is more likely to make use of a unique index if it available as it is more likely to be more selective.
If possible an index that contains unique values should be defined as unique (CREATE UNIQUE INDEX...) and this will provide the optimizer with more information.
The additional indexes that you have provided are no more selective than the existing indexes. For example, the index on (catalog.id, catalog.name, catalog.owner) is unique but is less useful than the existing primary key index on (catalog.id). If a query is written to select on the catalog.name column, it is possible to do an index skip scan, but this starts being costly (and may not even be possible in this case).
Since you are trying to select based on the catalog.root column, it might be worth adding an index on that column. This would mean that it could quickly find the relevant rows from the catalog table.
The timing for the second query could be a bit misleading. It might be taking 2 seconds to find 50 matching rows from catalog, but these could easily be the first 50 rows of the catalog table... finding 50 that match all your conditions might take longer, and not just because you need to join to other tables to get them. I would always use CREATE TABLE AS SELECT without restricting on ROWNUM when trying to performance tune. With a complex query I generally care about how long it takes to get all the rows back, and a simple select with ROWNUM can be misleading.
Everything about Oracle performance tuning is about providing the optimizer enough information and the right tools (indexes, constraints, etc) to do its job properly. For this reason it's important to get optimizer statistics using something like DBMS_STATS.GATHER_TABLE_STATS(). Indexes should have stats gathered automatically in Oracle 10g or later.
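For example (the schema name is a placeholder):

```sql
-- Gather table, column and index statistics so the CBO can cost plans properly
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_SCHEMA',  -- placeholder schema name
    tabname => 'CATALOG',
    cascade => TRUE);         -- also gather stats on the table's indexes
END;
/
```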
Somehow this grew into quite a long answer about the Oracle optimizer. Hopefully some of it answers your question. Here is a summary of what is said above:
Give the optimizer as much information as possible, e.g if index is unique then declare it as such.
Add indexes on your access paths
Find the correct timings for queries without limiting by ROWNUM. It will always be quicker to find the first 50 M&Ms in a jar than to find the first 50 red M&Ms.
Gather optimizer stats
Add unique/primary keys on all tables where they exist.
The use of ROWNUM here is wrong and causes all the rows to be processed: it assigns every row a row number and only then finds those between 0 and 50. What you want to look for in the explain plan is COUNT STOPKEY rather than just COUNT.
The query below should be an improvement, as it will only fetch the first 50 rows... but there is still the issue of the joins to look at:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
where rownum <= 50
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Also, assuming this for a web page or something similar, maybe there is a better way to handle this than just running the query again to get the data for the next page.
Try declaring a cursor. I don't know Oracle, but in SQL Server it would look like this:
declare @result
table (
id numeric,
name varchar(255)
);
declare __dyn_select_cursor cursor LOCAL SCROLL DYNAMIC for
--Select
select distinct
c.id, c.name
From [catalog] c
inner join catalog_university u
on u.catalog = c.id
and u.university = 300
inner join catalog_securitygroup s
on s.catalog = c.id
and s.securitygroup = 200
Where
c.root = 100
Order by name
--Cursor
declare @id numeric;
declare @name varchar(255);
open __dyn_select_cursor;
fetch relative 1 from __dyn_select_cursor into @id, @name;
declare @maxrowscount int
set @maxrowscount = 50
while (@@FETCH_STATUS = 0 and @maxrowscount <> 0)
begin
insert into @result values (@id, @name);
set @maxrowscount = @maxrowscount - 1;
fetch next from __dyn_select_cursor into @id, @name;
end
close __dyn_select_cursor;
deallocate __dyn_select_cursor;
--Select temp, final result
select
id,
name
from @result;
