Oracle optimizing query involving date calculation

Database
Table1
Id
Table2Id
...
Table2
Id
StartTime
Duration //in hours
Query
select * from Table1 join Table2 on Table2Id = Table2.Id
where starttime < :starttime and starttime + Duration/24 > :endtime
This query currently takes about 2 seconds to run, which is too long. There is an index on the id columns and a function-based index on starttime + duration/24. In SQL Developer the query plan shows no indexes being used. The query returns 475 rows for my test start and end times. Table2 has ~800k rows; Table1 has ~200k rows.
If the duration/24 calculation is removed from the query and replaced with a static value, the query time is cut in half. This does not retrieve exactly the same data, but it leads me to believe that the division is expensive.
I have also tested adding an endtime column to Table2, populated with (starttime + duration/24). The column was prepopulated via a single update; if it were used in production I would populate it via an update trigger.
select * from Table1 join Table2 on Table2Id = Table2.Id
where starttime < :starttime and endtime > :endtime
This query runs in about 600 ms and it uses an index for the join. It is less than ideal because of the additional column with redundant data.
Are there any methods of making this query faster?

Create a function index on both starttime and the expression starttime + Duration/24:
create index myindex on table2(starttime, starttime + Duration / 24);
A compound index covering the entire predicate of your query is more likely to be chosen; with the columns indexed individually, the optimizer probably decides that repeated table accesses by rowid, driven by a scan of one of those indexes, are actually slower than a full table scan.
Also make sure that you're not doing an implicit conversion from varchar to date, by ensuring that you're passing DATEs in your bind variables.
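If the binds are in fact strings, a minimal sketch of making the conversion explicit (the format mask here is only a placeholder for whatever format your application really passes):
select * from Table1 join Table2 on Table2Id = Table2.Id
where starttime < to_date(:starttime, 'YYYY-MM-DD HH24:MI:SS')
and starttime + Duration/24 > to_date(:endtime, 'YYYY-MM-DD HH24:MI:SS')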
Try lowering the optimizer_index_cost_adj system parameter. I believe the default is 100. Try setting that to 10 and see if your index is selected.
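For example, at session level only (a sketch; the right value depends on your system's actual I/O behaviour rather than on this one query):
alter session set optimizer_index_cost_adj = 10;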
Consider partitioning the table by starttime.

You have two criteria with range predicates (greater than/less than). An index range scan can start at one point in the index and end at another.
For a compound index on starttime and "Starttime+duration/24", since the leading column is starttime and the predicate is "less than bind value", it will start at the left most edge of the index (earliest starttime) and range scan all rows up to the point where the starttime reaches the limit. For each of those matches, it can evaluate the calculated value for "Starttime+duration/24" on the index against the bind value and pass or reject the row. I'd suspect most of the data in the table is old, so most entries have an old starttime and you'd end up scanning most of the index.
For a compound index on "Starttime+duration/24" and starttime, since the leading column is the function and the predicate is "greater than bindvalue", it will start partway through the index and work its way to the end. For each of those matches, it can evaluate the starttime on the index against the bind value and pass or reject the row. If the enddate passed in is recent, I suspect this would actually involve a much smaller amount of the index being scanned.
Even without the starttime as a second column on the index, the existing function based index on "Starttime+duration/24" should still be useful and used. Check the explain plan to make sure the bindvalue is either a date or converted to a date. If it is converted, make sure the appropriate format mask is used (eg an entered value of '1/Jun/09' may be converted to year 0009, so Oracle will see the condition as very relaxed and would tend not to use the index - plus the result could be wrong).
"In Sql Developer the query plan shows no indexes being used. " If the index wasn't being used to find the table2 rows, I suspect the optimizer thought most/all of table2 would be returned [which it obviously isn't, by your numbers]. I'd guess that it though most of table1 would be returned, and thus neither of your predicates did a lot of filtering. As I said above, I think the "less than" predicate isn't selective, but the "greater than" should be. Look at the explain plan, especially the ROWS value, to see what Oracle thinks
PS.
Adjusting the value means the optimizer changes the basis for its estimates. If a journey planner says a trip will take six hours because it assumes an average speed of 50, and you tell it to assume an average of 100, it will come out with three hours. It won't actually affect the speed you travel at, or how long the journey actually takes.
So you only want to change that value to make it more accurately reflect the actual value for your database (or session).

Oracle will not use an index if the selectivity of the where clause is not good enough. An index is used when the number of rows returned is only some percentage of the total number of rows in the table (the percentage varies, since Oracle counts the cost of reading the index as well as reading the table).
Also, when an indexed column is modified in the where clause, the index is effectively disabled. For example, UPPER(some_index_column) would disable the use of a plain index on some_index_column. This is why starttime + Duration/24 > :endtime does not use the index.
Can you try this
select * from Table1 join Table2 on Table1.Table2Id = Table2.Id
where starttime < :starttime and starttime > :endtime - Duration/24
This should allow the index to be used, and there is no need for an additional column.

Related

Efficient use of an index for a self-join with a group by

I'm trying to speed up the following
create table tab2 parallel 24 nologging compress for query high as
select /*+ parallel(24) index(a ix_1) index(b ix_2)*/
a.usr
,a.dtnum
,a.company
,count(distinct b.usr) as num
,count(distinct case when b.checked_1 = 1 then b.usr end) as num_che_1
,count(distinct case when b.checked_2 = 1 then b.usr end) as num_che_2
from tab a
join tab b on a.company = b.company
and b.dtnum between a.dtnum-1 and a.dtnum-0.0000000001
group by a.usr, a.dtnum, a.company;
by using indexes
create index ix_1 on tab(usr, dtnum, company);
create index ix_2 on tab(usr, company, dtnum, checked_1, checked_2);
but the execution plan tells me that it's going to be an index full scan for both indexes, and the calculations are very long (1 day is not enough).
About the data: table tab has over 3 million records. None of the single columns is unique. The unique values here are pairs of (usr, dtnum), where dtnum is a date with time written as a number in the format yyyy,mmddhh24miss. Columns checked_1 and checked_2 have values from the set (null, 0, 1, 2). Company holds an id for a company.
Each pair can only have one value of checked_1, checked_2 and company, since the pair is unique. Each user can be in multiple pairs with different dtnum.
Edit
@Roberto Hernandez: I've attached a picture of the execution plan. As for parallel 24: in our company we are told to create tables with the options 'parallel [num] nologging compress for query high'. I'm using 24, but I'm no expert in this field.
@Sayan Malakshinov: http://sqlfiddle.com/#!4/40b6b/2 Here I've simplified by giving data with checked_1 = checked_2, but in real life this may not be true.
@scaisEdge:
For
create index my_id1 on tab (company, dtnum);
create index my_id2 on tab (company, dtnum, usr);
I get
For table tab your join condition is based on the columns
company, dtnum
so your index should primarily be based on these columns:
create index my_id1 on tab (company, dtnum);
The indexes you are using are useless because they don't contain, in the left-most position, the columns used in the join/where condition.
Optionally you can also add usr (and the checked columns) in the right-most position to avoid the need for table access and let the DB engine retrieve all the information from the index entries alone:
create index my_id1 on tab (company, dtnum, usr, checked_1, checked_2);
Indexes (bitmap or otherwise) are not that useful for this execution. If you look at the execution plan, the optimizer thinks the group-by is going to reduce the output to 1 row. This results in serialization (PX SELECTOR), so I would question the quality of your statistics. What you may need is to create a column group on the three group-by columns, to improve the cardinality estimate of the group by.
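A sketch of creating such a column group with DBMS_STATS and then regathering statistics so the extension is populated (the use of the current schema and default gathering options are assumptions):
select dbms_stats.create_extended_stats(user, 'TAB', '(USR, DTNUM, COMPANY)') from dual;
exec dbms_stats.gather_table_stats(user, 'TAB');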

Function-based index does not work in Oracle when it is used with another operator

Consider this simple query:
select name, code
from item
where length(code) > 5
To avoid a full table access, a function-based index was created on length(code) with the following command:
create index index_len_code on item(length(code));
The optimizer detects the index and uses it (INDEX RANGE SCAN). Nonetheless, the optimizer does not use the index for the query below:
select i.name, i.code
from item i, item ii
where length(i.code) - length(ii.code) > 0
When I look at the execution plan, it shows a full table access rather than an index range scan, even though the index on length(code) exists.
What is wrong here?
If you have an EMP table with a column HIREDATE, and that column is indexed, then the optimizer may choose to use the index for accessing the table in a query with a condition like
... HIREDATE >= ADD_MONTHS(SYSDATE, -12)
to find employees hired in the last 12 months.
However, HIREDATE has to be alone on the left-hand side. If you add or subtract months or days to it, or if you wrap it within a function call like ADD_MONTHS, the index can't be used. The optimizer will not perform trivial arithmetic manipulations to convert the condition into one where HIREDATE by itself must satisfy an inequality.
The same happened in your second query. If you change the condition to
... length(i.code) > length(ii.code)
then the optimizer can use the function-based index on length(code). But even in your first query, if you change the condition to
... length(code) - 5 > 0
the index will NOT be used, because this is not an inequality condition on length(code). Again, the optimizer is not smart enough to perform trivial algebraic manipulations to rewrite this in a form where it's an inequality condition on length(code) itself.
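To summarize the contrast (a sketch; both predicates select the same rows, but only the first form matches the indexed expression):
where length(code) > 5          -- index_len_code can be range-scanned
where length(code) - 5 > 0      -- full table scan: the left-hand side no longer matches the index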

SQLite SELECT with max() performance

I have a table with about 1.5 million rows and three columns. Column 'timestamp' is of type REAL and indexed. I am accessing the SQLite database via PHP PDO.
The following three selects run in less than a millisecond:
select timestamp from trades
select timestamp + 1 from trades
select max(timestamp) from trades
The following select needs almost half a second:
select max(timestamp) + 1 from trades
Why is that?
EDIT:
Lasse has asked for an "explain query plan". I have run this within a PHP PDO query since I have no direct SQLite3 command-line tool access at the moment. I guess it does not matter; here is the result:
explain query plan select max(timestamp) + 1 from trades:
[selectid] => 0
[order] => 0
[from] => 0
[detail] => SCAN TABLE trades (~1000000 rows)
explain query plan select max(timestamp) from trades:
[selectid] => 0
[order] => 0
[from] => 0
[detail] => SEARCH TABLE trades USING COVERING INDEX tradesTimestampIdx (~1 rows)
The reason this query
select max(timestamp) + 1 from trades
takes so long is that wrapping the aggregate in an expression disqualifies SQLite's MIN/MAX index optimization, so the engine computes the MAX by scanning the whole table instead of doing a single index lookup.
In the query
select timestamp + 1 from trades
you are doing a calculation for each record, but the engine only needs to scan the entire table once. And in this query
select max(timestamp) from trades
the engine can satisfy the MAX from the index on timestamp with a single lookup, which is what your second query plan shows.
From the SQLite documentation:
Queries that contain a single MIN() or MAX() aggregate function whose argument is the left-most column of an index might be satisfied by doing a single index lookup rather than by scanning the entire table.
I emphasized might from the documentation because, as your query plans show, a full table scan may still be used for a query of the form SELECT MAX(x)+1 FROM table even when column x is the left-most column of an index; the added arithmetic is apparently enough to lose the optimization.
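One possible rewrite, assuming the covering index on timestamp is still in place: keep the bare aggregate inside a scalar subquery so the optimization can apply, and do the arithmetic outside of it (whether this changes the plan may depend on your SQLite version).
select (select max(timestamp) from trades) + 1;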

PostgreSQL - How to decrease SELECT statement execution time

My Postgres version: "PostgreSQL 9.4.1, compiled by Visual C++ build 1800, 32-bit"
The table I am going to deal with contains the columns
eventtime - timestamp without timezone
serialnumber - character varying(32)
sourceid - integer
and 4 other columns
Here is my select statement:
SELECT eventtime, serialnumber
FROM t_el_eventlog
WHERE
eventtime at time zone 'CET' > CURRENT_DATE and
sourceid = '14';
The execution time for the above query is 59647 ms.
And in my R script I have 5 queries of this kind (execution time = 59647 ms × 5).
Without the time zone 'CET' conversion the execution time is much lower, but in my case I must use time zone 'CET', and if I am right the high execution time is because of this time zone conversion.
(the query plan, the query text, and an EXPLAIN ANALYZE of the query without the time zone conversion were attached as links)
Is there any way I can decrease the query execution time for my select statement?
Since the distribution of the values is unknown to me, there is no clear way of solving the problem.
But one problem is obvious: There is an index for the eventtime column, but since the query operates with a function over that column, the index can't be used.
eventtime at time zone 'UTC' > CURRENT_DATE
Either the index has to be dropped and recreated with that function or the query has to be rewritten.
Recreate the index (example):
CREATE INDEX ON t_el_eventlog (timezone('UTC'::text, eventtime));
(this is the same as eventtime at time zone 'UTC')
This matches the filter with the function, the index can be used.
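A sketch of the rewritten query matching that index expression exactly ('UTC' here simply mirrors the index definition above; use whatever zone the index was created with):
SELECT eventtime, serialnumber
FROM t_el_eventlog
WHERE timezone('UTC'::text, eventtime) > CURRENT_DATE
AND sourceid = 14;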
I suspect sourceid does not have a great distribution, i.e. not very many distinct values. In that case, dropping the index on sourceid AND dropping the index on eventtime, and creating a new index over eventtime and sourceid, could be an idea:
CREATE INDEX ON t_el_eventlog (timezone('UTC'::text, eventtime), sourceid);
This is what the theory tells us. I made a few tests around that, with a table of around 10 million rows, the eventtime distribution within 36 hours, and only 20 different sourceids (1..20). The distribution is very random. The best results came from an index over (eventtime, sourceid) (no function index) and adjusting the query.
CREATE INDEX ON t_el_eventlog (eventtime, sourceid);
-- make sure there is no index on source id. we need to force postgres to this index.
-- make sure, postgres learns about our index
ANALYZE; VACUUM;
-- use timezone function on current date (guessing timezone is CET)
SELECT * FROM t_el_eventlog
WHERE eventtime > timezone('CET',CURRENT_DATE) AND sourceid = 14;
With the table having 10'000'000 rows, this query returns about 500'000 rows in only 400 ms (instead of roughly 1400 to 1700 ms with all other combinations).
Finding the best match between the indexes and the query is the quest. I suggest some research; a good resource is http://use-the-index-luke.com
this is what the query plan looks like with the last approach:
Index Only Scan using evlog_eventtime_sourceid_idx on evlog (cost=0.45..218195.13 rows=424534 width=0)
Index Cond: ((eventtime > timezone('CET'::text, (('now'::cstring)::date)::timestamp with time zone)) AND (sourceid = 14))
as you can see, this is a perfect match...

Optimized Query Execution Time

My Query is
SELECT unnest(array [repgroupname,repgroupname||'-'
||masteritemname,repgroupname||'-' ||masteritemname||'-'||itemname]) AS grp
,unnest(array [repgroupname,masteritemname,itemname]) AS disp
,groupname1
,groupname2
,groupname3
,sum(qty) AS qty
,sum(freeqty) AS freeqty
,sum(altqty) AS altqty
,sum(discount) AS discount
,sum(amount) AS amount
,sum(stockvalue) AS stockvalue
,sum(itemprofit) AS itemprofit
FROM (
SELECT repgroupname
,masteritemname
,itemname
,groupname1
,groupname2
,groupname3
,units
,unit1
,unit2
,altunits
,altunit1
,altunit2
,sum(s2.totalqty) AS qty
,sum(s2.totalfreeqty) AS freeqty
,sum(s2.totalaltqty) AS altqty
,sum(s2.totaltradis + s2.totaladnldis) AS discount
,sum(amount) AS amount
,sum(itemstockvalue) AS stockvalue
,sum(itemprofit1) AS itemprofit
FROM sales1 s1
INNER JOIN sales2 s2 ON s1.txno = s2.txno
INNER JOIN items i ON i.itemno = s2.itemno
GROUP BY repgroupname
,masteritemname
,itemname
,groupname1
,groupname2
,groupname3
,units
,unit1
,unit2
,altunits
,altunit1
,altunit2
ORDER BY itemname
) AS tt
GROUP BY grp
,disp
,groupname1
,groupname2
,groupname3
Here
The sales1 table has 144513 records.
The sales2 table has 438915 records.
The items table has 78512 records.
This query takes 6 seconds to produce the result.
How can I optimize this query?
I am using PostgreSQL 9.3.
That is a truly horrible query.
You should start by losing the ORDER BY in the sub-select - the ordering is discarded by the outer query.
Beyond that, ask yourself why you need to see a summary of every single row in the DBMS - does this serve any useful purpose? (If the query returns more than 20 rows, the answer is no.)
You might be able to make it go faster by ensuring that the foreign keys in the tables are indexed (indexes are THE most important bit of information to look at whenever you're talking about performance and you've told us nothing about them).
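For example, if the join keys are not indexed yet, these would be the first candidates (the index names are placeholders):
CREATE INDEX sales2_txno_idx ON sales2 (txno);
CREATE INDEX sales2_itemno_idx ON sales2 (itemno);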
Maintaining the result of the query as a regularly refreshed snapshot will mitigate the performance impact.
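On PostgreSQL 9.3 that snapshot can be kept as a materialized view refreshed on a schedule. A heavily abbreviated sketch (the view name is a placeholder and only a few of the aggregated columns are shown; the real view would select the full query above):
CREATE MATERIALIZED VIEW sales_summary AS
SELECT repgroupname, masteritemname, itemname, sum(amount) AS amount
FROM sales1 s1
JOIN sales2 s2 ON s1.txno = s2.txno
JOIN items i ON i.itemno = s2.itemno
GROUP BY repgroupname, masteritemname, itemname;
-- refresh periodically, e.g. from cron:
REFRESH MATERIALIZED VIEW sales_summary;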
