ORA-13249: SDO_NN cannot be evaluated without using index - oracle

I get an error when I run the following SQL script in Oracle:
SELECT D.ID
FROM DOOR D
JOIN STREET S ON (S.ID=D.STREET_ID)
WHERE
SDO_NN(D.LOCATION,SDO_UTIL.FROM_WKTGEOMETRY('POINT (11112.0111 321314.2222)'),'sdo_num_res=6') = 'TRUE'
or
S.ID IN (17);
but when I change 'or' to 'and', or delete 'or S.ID IN (17)', I get no error:
SELECT D.ID
FROM DOOR D
JOIN STREET S ON (S.ID=D.STREET_ID)
WHERE
SDO_NN(D.LOCATION,SDO_UTIL.FROM_WKTGEOMETRY('POINT (11112.0111 321314.2222)'),'sdo_num_res=6') = 'TRUE'
and
S.ID IN (17);
The type of the LOCATION field in the DOOR table is MDSYS.SDO_GEOMETRY, and the type of the ID field in the STREET table is NUMBER.
I want the first SQL to work. Can anyone help with the solution?

Would a hint do any good?
SELECT /*+ LEADING(d) USE_NL(d s) INDEX(d spatial_index) */
D.ID
FROM DOOR D JOIN STREET S ON (S.ID = D.STREET_ID)
WHERE SDO_NN (
D.LOCATION,
SDO_UTIL.FROM_WKTGEOMETRY ('POINT (11112.0111 321314.2222)'),
'sdo_num_res=6') =
'TRUE'
AND S.ID IN (17);

The problem here is that the SDO_NN operator (as opposed to the SDO_RELATE or SDO_WITHIN_DISTANCE operators) can only be solved via a spatial index.
With the first query, no actual filter is applied to select from the DOOR table, so the query optimizer naturally uses the spatial index.
With the second query, you restrict the DOOR table to only those rows (doors) for a given street (STREET.ID=17). That makes the optimizer choose the index that applies that filter (probably the index on DOOR.STREET_ID).
The or vs and makes for a very different query and a very different result. With or, you are asking for the 6 closest doors to the selected point on any street, plus all the doors on street 17 irrespective of proximity.
With and, you are asking for the 6 closest doors on street 17 only, which makes much more sense than the first syntax.
In order to make that work in your case, you need to use a hint to tell the optimizer to use the spatial index instead of the index on DOOR.STREET_ID:
SELECT /*+ INDEX (d <spatial index name>) */ D.ID
FROM DOOR D
JOIN STREET S ON (S.ID=D.STREET_ID)
WHERE SDO_NN(D.LOCATION,SDO_UTIL.FROM_WKTGEOMETRY('POINT (11112.0111 321314.2222)'),'sdo_num_res=6') = 'TRUE'
AND S.ID IN (17);
Notice the syntax of the hint: d refers to the alias of the DOOR table, since that is the one you are doing the spatial filtering on. <spatial index name> is the name of the spatial index you have defined on the DOOR table (not the literal string "spatial_index").
Hints have a "fragile" syntax: if you make any mistake, the hint is simply ignored. Also, they are not orders to the optimizer, just suggestions. If the optimizer finds a hint impossible to apply (for example, if you ask for an index that does not exist), it will ignore it too.
One way to confirm that hints have been accepted and used by the optimizer is to look at the query plan it has produced. If you use SQL*Plus or SQLcl, the EXPLAIN PLAN command will show you the plan without actually executing the statement. SQL Developer has dedicated buttons to show query plans.
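For example, a minimal sketch (the index name door_location_sidx is hypothetical):
EXPLAIN PLAN FOR
SELECT /*+ INDEX(d door_location_sidx) */ D.ID
FROM DOOR D
JOIN STREET S ON (S.ID = D.STREET_ID)
WHERE SDO_NN(D.LOCATION, SDO_UTIL.FROM_WKTGEOMETRY('POINT (11112.0111 321314.2222)'), 'sdo_num_res=6') = 'TRUE'
AND S.ID IN (17);
-- display the plan; a DOMAIN INDEX operation over the spatial index confirms the hint was used
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);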
EDIT: The INDEX hint might not be sufficient. It may be that the optimizer also needs to be told how to implement the join, as @littlefoot has indicated. So you may also need the LEADING and USE_NL hints. But try first with only the INDEX hint.

Related

select and calculate numdays from two tables in oracle

I'm doing a leave calculation. I have a Leave Request table and an Employee table.
Their relationship is that an employee can request many leaves, i.e.
the Leave Request table has Employee_Serial_ID as a foreign key (one-to-many). I have done the following query to select all leave requests and calculate the number of days:
SELECT (LR.DATE_TO - LR.DATE_FROM) as NumDays ,
LR.EMPLOYEE_SERIAL_ID, LR.ID as LEAVE_REQUEST_ID
FROM TBL_LEAVE_REQUEST LR ;
NUMDAYS  EMPLOYEE_SERIAL_ID  LEAVE_REQUEST_ID
      3  EMP_286             LEAVE_35
      2  EMP_243             LEAVE_36
      2  EMP_284             LEAVE_37
      3  EMP_243             LEAVE_38
     32  EMP_243             LEAVE_39
      0  EMP_303             LEAVE_40
      1  EMP_241             LEAVE_41
But I figured out that employees who have not requested leave will not be selected by this query.
I want to modify this query so that if an employee has requested a leave it shows the NumDays, and if not, it returns NumDays 0 for that employee.
You'll need to left join your leave request table to your actual employee table. This will give you an employee record, even if they don't have a leave request.
Since you haven't posted your schema, and you haven't specified what database you're actually using, I can't write much of the query for you. Your logic will look something like this:
SELECT
T.EMPLOYEE_ID
, ISNULL((LR.DATE_TO - LR.DATE_FROM), 0) as NumDays
, LR.EMPLOYEE_SERIAL_ID
, LR.ID as LEAVE_REQUEST_ID
FROM
TBL_EMPLOYEE T
LEFT JOIN TBL_LEAVE_REQUEST LR
on T.EMPLOYEE_ID = LR.EMPLOYEE_SERIAL_ID
;
The ISNULL function is used by MSSQL Server. Other databases require different functions.
If you're using Oracle, replace ISNULL( with NVL(.
If you're using PostgreSQL or MySQL, you'll want the command COALESCE(.
A note on ISNULL() and NVL() vs. COALESCE(): as @Ronnis pointed out, any ANSI-compliant database should support the COALESCE() function.
Looking into the documentation a little further, you may also get better query performance from COALESCE() than from NVL() or ISNULL(): COALESCE() short-circuits its evaluation, whereas the other two do not.
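Since the question is about Oracle, here is a minimal sketch of the Oracle variant of the query above (the column names on the employee side are assumptions, as the schema wasn't posted):
SELECT
T.EMPLOYEE_ID
, NVL(LR.DATE_TO - LR.DATE_FROM, 0) AS NumDays
, LR.EMPLOYEE_SERIAL_ID
, LR.ID AS LEAVE_REQUEST_ID
FROM TBL_EMPLOYEE T
LEFT JOIN TBL_LEAVE_REQUEST LR
ON T.EMPLOYEE_ID = LR.EMPLOYEE_SERIAL_ID;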

oracle not using defined indexes

As seen below, there is a simple join between my tables A and B.
In addition, there is a condition on each table, combined with the OR operator.
SELECT /*+ NO_EXPAND */ *
FROM IIndustrialCaseHistory B,
     IIndustrialCaseProduct A
WHERE (A.ProductId IN ('9_2') OR
       contains(B.KeyWords, '%some text goes here%') <= 0)
  AND (B.Id = A.IIndustrialCaseHistoryId)
On ProductId a b-tree index is defined, and on KeyWords there is a function-based index.
But I don't know why my execution plan does not use these indexes and performs a full table access.
As I found at this URL, the NO_EXPAND optimization hint could cause the execution plan to use the indexes (the NO_EXPAND hint prevents the cost-based optimizer from considering OR-expansion for queries having OR conditions or IN-lists in the WHERE clause). But I didn't see any use of the defined indexes.
What is Oracle's problem with my query?!
Unless there is something magical about the contains() function that I don't know about, Oracle cannot use an index to find a matching value that leads with a wildcard, i.e. a text string somewhere within a varchar2 column but not starting in the first position. In other words, B.KeyWords LIKE '%some text goes here%' cannot use an index, whereas B.KeyWords LIKE 'Some text starts here%' can be optimized via an index. The optimizer defaults back to the full table scan in the former case.
Also, although it may not be material: why use IN() if there is only one value in the list? Why not A.ProductId = '9_2'?
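As an aside on the contains() point above: Oracle Text's CONTAINS() is normally driven by a CTXSYS.CONTEXT domain index and takes a text query rather than a LIKE pattern. A minimal sketch of the usual indexed form (the index name is made up):
-- one-time setup: an Oracle Text domain index on the keyword column
CREATE INDEX keywords_ctx ON IIndustrialCaseHistory (KeyWords)
INDEXTYPE IS CTXSYS.CONTEXT;
-- CONTAINS returns a positive score for matching rows, so test for > 0
SELECT B.Id
FROM IIndustrialCaseHistory B
WHERE CONTAINS(B.KeyWords, 'some text goes here') > 0;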

Selecting data from one table or another in multiple queries PL/SQL

The easiest way to ask my question is with a hypothetical scenario.
Let's say we have 3 tables: Singapore_Prices, Produce_Val, and Bosses_Unreasonable_Demands.
Prices is a pretty simple table: an Item column containing a name, and a Price column containing a number.
Produce_Val is also a simple 2-column table: a Type column containing what type the produce is (fruit or veggie), and a Name column (Tomato, Pineapple, etc.).
Bosses_Unreasonable_Demands contains only one column, Fruit, which CAN contain the names of some fruits.
OK? Ok.
SO, my boss wants me to write a query that returns the prices of every fruit in his unreasonable demands table. Simple enough. BUT, if he doesn't have any entries in his table, he just wants me to output the prices of ALL fruits that exist in Produce_Val.
Now, assuming I don't know where the DBA who designed this silly hypothetical system lives (and therefore can't get him to fix it), our query would look like this:
if <Logic to determine if Bosses demands are empty>
Then
select Item, Price
from Singapore_Prices
where Item in (select Fruit from Bosses_Unreasonable_demands)
Else
select Item, Price
from Singapore_Prices
where Item in (select Name from Produce_val where type = 'Fruit')
end if;
(Well, we'd select those into a variable, and then output the variable, probably with bulk-collect shenanigans, but that's not important)
Which works. It is entirely functional, and won't be slow, even if we extend it out to 2000 stores other than Singapore. (Well, no slower than anything else that touches 2000-some tables.) BUT, I'm still doing two different select statements that are practically identical. My Comp Sci teacher rolls in their grave every time my fingers hit Ctrl-V. I can cut this code in half and only do one select statement. I KNOW I can.
I just have no earthly idea how. I can't use cursors as an in statement, I can't use nested tables or varrays, I can't use cleverly crafted strings, I... I just... I don't know. I don't know how to do this. Is there a way? Does it exist?
Or do I have to copy/paste forever?
Your best bet would be dynamic SQL, because you can't parameterize table or column names.
You will have a SQL query template, logic to determine the tables and columns that you want to query, and then blend them together and execute. See the sketch below.
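A minimal sketch of that idea, using the table and column names from the question and a ref cursor for the output (names and sizes are illustrative):
DECLARE
  v_sub VARCHAR2(200);
  v_rc  SYS_REFCURSOR;
BEGIN
  -- pick which subquery to splice in, depending on whether the boss table is empty
  SELECT CASE WHEN COUNT(*) > 0
              THEN 'SELECT fruit FROM bosses_unreasonable_demands'
              ELSE 'SELECT name FROM produce_val WHERE type = ''Fruit'''
         END
  INTO v_sub
  FROM bosses_unreasonable_demands;
  OPEN v_rc FOR
    'SELECT item, price FROM singapore_prices WHERE item IN (' || v_sub || ')';
  -- fetch from v_rc as usual
END;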
Another approach (still a lot of Ctrl-V-like code) is to use a UNION ALL set construction:
select 1st query where boss_condition
union all
select 2nd query where not boss_condition
Try this:
SELECT ITEM, PRICE, FRUIT_SOURCE
FROM (SELECT s.ITEM, s.PRICE, 'BOSS' AS FRUIT_SOURCE, x.BOSS_COUNT
      FROM BOSSES_UNREASONABLE_DEMANDS b
      INNER JOIN SINGAPORE_PRICES s
              ON s.ITEM = b.FRUIT
      CROSS JOIN (SELECT COUNT(*) AS BOSS_COUNT
                  FROM BOSSES_UNREASONABLE_DEMANDS) x
      UNION ALL
      SELECT s.ITEM, s.PRICE, 'NORMAL' AS FRUIT_SOURCE, x.BOSS_COUNT
      FROM PRODUCE_VAL p
      INNER JOIN SINGAPORE_PRICES s
              ON s.ITEM = p.NAME
      CROSS JOIN (SELECT COUNT(*) AS BOSS_COUNT
                  FROM BOSSES_UNREASONABLE_DEMANDS) x
      WHERE p.TYPE = 'Fruit')
WHERE (BOSS_COUNT > 0 AND FRUIT_SOURCE = 'BOSS')
   OR (BOSS_COUNT = 0 AND FRUIT_SOURCE = 'NORMAL')
Share and enjoy.
I think you can use nested tables. Assume you have a schema-level nested table type FRUIT_NAME_LIST (defined using CREATE TYPE).
SELECT fruit
BULK COLLECT INTO my_fruit_name_list
FROM bosses_unreasonable_demands
;
IF my_fruit_name_list.count = 0 THEN
SELECT name
BULK COLLECT INTO my_fruit_name_list
FROM produce_val
WHERE type='Fruit'
;
END IF;
SELECT item, price
FROM singapore_prices
WHERE item MEMBER OF my_fruit_name_list
;
(Or, WHERE item IN (SELECT column_value FROM TABLE(CAST(my_fruit_name_list AS fruit_name_list))), if you like that better.)
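Put together, a minimal end-to-end sketch of this approach (the type and variable names are illustrative):
-- one-time setup: a schema-level collection type
CREATE TYPE fruit_name_list AS TABLE OF VARCHAR2(255);
/
DECLARE
  my_fruit_name_list fruit_name_list;
BEGIN
  SELECT fruit BULK COLLECT INTO my_fruit_name_list
  FROM bosses_unreasonable_demands;
  IF my_fruit_name_list.COUNT = 0 THEN
    SELECT name BULK COLLECT INTO my_fruit_name_list
    FROM produce_val WHERE type = 'Fruit';
  END IF;
  -- use the collection directly in the final query
  FOR r IN (SELECT item, price
            FROM singapore_prices
            WHERE item MEMBER OF my_fruit_name_list) LOOP
    DBMS_OUTPUT.PUT_LINE(r.item || ': ' || r.price);
  END LOOP;
END;
/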

Using spatial locator for Oracle

I am able to run the query below, but when I swap the parameters of sdo_nn, I get the error "SDO_NN cannot be evaluated without using index".
Works:
SELECT
c.customer_id,
c.first_name,
c.last_name,
sdo_nn_distance (1) distance
FROM stores s,
customers c
WHERE sdo_nn
(c.cust_geo_location, s.store_geo_location, 'sdo_num_res=1', 1)= 'TRUE'
ORDER BY distance;
Does not work:
SELECT
c.customer_id,
c.first_name,
c.last_name,
sdo_nn_distance (1) distance
FROM stores s,
customers c
WHERE sdo_nn
(s.store_geo_location,c.cust_geo_location, 'sdo_num_res=1', 1)= 'TRUE'
ORDER BY distance;
Can anyone explain to me why the order of the parameters matters?
From Oracle's online docs, sdo_nn needs its first parameter to be spatially indexed; the second parameter does not have that constraint.
So when swapping the parameters, you need to make sure that the now-first parameter (i.e. s.store_geo_location) is spatially indexed. See this on how to create a spatial index in Oracle.
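For reference, a minimal sketch of what that setup looks like, assuming 2D longitude/latitude data; the dimension bounds, tolerance, SRID, and index name are placeholders:
-- register the geometry column's dimensions (required before indexing)
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('STORES', 'STORE_GEO_LOCATION',
        SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, 0.005),
                      SDO_DIM_ELEMENT('Y', -90, 90, 0.005)),
        8307);
-- create the spatial index on the column used as sdo_nn's first argument
CREATE INDEX stores_geo_sidx ON stores (store_geo_location)
INDEXTYPE IS MDSYS.SPATIAL_INDEX;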
Add an optimizer hint to tell the query parser which index to use, e.g.:
select
/*+ index(tableName,sdoIndexName) */
...
WARNING: Compiler hints fail SILENTLY.

How to optimize select from several tables with millions of rows

I have the following tables (Oracle 10g):
catalog (
id NUMBER PRIMARY KEY,
name VARCHAR2(255),
owner NUMBER,
root NUMBER REFERENCES catalog(id)
...
)
university (
id NUMBER PRIMARY KEY,
...
)
securitygroup (
id NUMBER PRIMARY KEY
...
)
catalog_securitygroup (
catalog REFERENCES catalog(id),
securitygroup REFERENCES securitygroup(id)
)
catalog_university (
catalog REFERENCES catalog(id),
university REFERENCES university(id)
)
Catalog: 500 000 rows, catalog_university: 500 000, catalog_securitygroup: 1 500 000.
I need to select any 50 rows from catalog with a specified root, ordered by name, for the current university and current securitygroup. Here is the query:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Where 100 is some catalog, 200 some securitygroup, and 300 some university. This query returns 50 rows out of ~170,000 in 3 minutes.
But the next query returns those rows in 2 seconds:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
I built the following indexes: (catalog.id, catalog.name, catalog.owner), (catalog_securitygroup.catalog, catalog_securitygroup.index), (catalog_university.catalog, catalog_university.university).
Plan for the first query (using PL/SQL Developer):
http://habreffect.ru/66c/f25faa5f8/plan2.jpg
Plan for the second query:
http://habreffect.ru/f91/86e780cc7/plan1.jpg
What are the ways to optimize the query I have?
The indexes that can be useful and should be considered deal with
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
So the following fields can be interesting for indexes
c: id, root
cs: catalog, securitygroup
cu: catalog, university
So, try creating
(catalog_securitygroup.catalog, catalog_securitygroup.securitygroup)
and
(catalog_university.catalog, catalog_university.university)
EDIT:
I missed the ORDER BY - these fields should also be considered, so
(catalog.name, catalog.id)
might be beneficial (or some other composite index that could be used for sorting and the conditions - possibly (catalog.root, catalog.name, catalog.id))
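In DDL, those suggestions would look something like this (the index names are made up):
CREATE INDEX cs_cat_sg_ix ON catalog_securitygroup (catalog, securitygroup);
CREATE INDEX cu_cat_univ_ix ON catalog_university (catalog, university);
-- composite index that can serve both the root filter and the ORDER BY
CREATE INDEX cat_root_name_id_ix ON catalog (root, name, id);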
EDIT 2:
Although another answer has been accepted, I'll provide some more food for thought.
I have created some test data and run some benchmarks.
The test cases are minimal in terms of record width (in catalog_securitygroup and catalog_university the primary keys are (catalog, securitygroup) and (catalog, university)). Here is the number of records per table:
test=# SELECT (SELECT COUNT(*) FROM catalog), (SELECT COUNT(*) FROM catalog_securitygroup), (SELECT COUNT(*) FROM catalog_university);
?column? | ?column? | ?column?
----------+----------+----------
500000 | 1497501 | 500000
(1 row)
The database is PostgreSQL 8.4, default Ubuntu install; hardware: i5, 4 GB RAM.
First, I rewrote the query to:
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root < 50
AND cs.catalog = c.id
AND cu.catalog = c.id
AND cs.securitygroup < 200
AND cu.university < 200
ORDER BY c.name
LIMIT 50 OFFSET 100
Note: the conditions are turned into less-than comparisons to maintain a comparable number of intermediate rows (the above query would return 198,801 rows without the LIMIT clause).
If run as above, without any extra indexes (save for PKs and foreign keys), it runs in 556 ms on a cold database (this is actually an indication that I oversimplified the sample data somehow; I would be happier with 2-4 s here without resorting to less-than operators).
This brings me to my point: any straight query that only joins and filters (a certain number of tables) and returns only a certain number of records should run under 1 s on any decent database, without resorting to cursors or to denormalizing data (one of these days I'll have to write a post on that).
Furthermore, if a query returns only 50 rows and does simple equality joins with restrictive equality conditions, it should run much faster still.
Now let's see what happens if I add some indexes. The biggest potential in queries like this is usually the sort order, so let me try that:
CREATE INDEX test1 ON catalog (name, id);
This brings the execution time of the query down to 22 ms on a cold database.
And that's the point: if you are trying to get only a page of data, you should only get a page of data, and execution times of queries like this on normalized data with proper indexes should be less than 100 ms on decent hardware.
I hope I didn't oversimplify the case to the point of no comparison (as I stated above, some simplification is present, since I don't know the cardinality of the relationships between catalog and the many-to-many tables).
So, the conclusion is:
if I were you, I would not stop tweaking the indexes (and the SQL) until the query's performance goes below 200 ms, as a rule of thumb.
Only if I found an objective explanation why it can't go below that value would I resort to denormalization and/or cursors, etc...
First, I assume that your University and SecurityGroup tables are rather small. You posted the sizes of the large tables, but it's really the other sizes that are part of the problem.
Your problem comes from the fact that you can't join the smallest tables first. Your join order should be from small to large, but because your mapping tables don't include a securitygroup-to-university table, you can't join the smallest ones first. So you wind up starting with one or the other, going to a big table, then to another big table, and only then, with that large intermediate result, to a small table.
If you always have current_univ, current_secgrp, and root as inputs, you want to use them to filter as soon as possible. The only way to do that is to change your schema some. In fact, you can leave the existing tables in place if you have to, but you'll be adding to the space used with this suggestion.
You've normalized the data very well. That's great for speed of update... not so great for querying. We denormalize to speed up querying (that's the whole reason for data warehouses: ok, that and history). Build a single mapping table with the following columns:
Univ_ID, SecGrp_ID, Root, Catalog_ID. Make it an index-organized table keyed on those columns.
Now when you query that index with all three input values, you'll finish the index scan with a complete list of allowable catalog IDs. Then it's just a single join to the catalog table to get the item details and you're off and running.
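A minimal sketch of that mapping table; the names are made up, and Catalog_ID is included in the key because an index-organized table's primary key must be unique:
CREATE TABLE catalog_access_map (
  univ_id    NUMBER,
  secgrp_id  NUMBER,
  root       NUMBER,
  catalog_id NUMBER,
  CONSTRAINT catalog_access_map_pk
    PRIMARY KEY (univ_id, secgrp_id, root, catalog_id)
) ORGANIZATION INDEX;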
The Oracle cost-based optimizer makes use of all the information that it has to decide what the best access paths are for the data and what the least costly methods are for getting that data. So below are some random points related to your question.
The first three tables that you've listed all have primary keys. Do the other tables (catalog_university and catalog_securitygroup) also have primary keys on them? A primary key defines a column or set of columns that are non-null and unique, and is very important in a relational database.
Oracle generally enforces a primary key by generating a unique index on the given columns. The Oracle optimizer is more likely to make use of a unique index if one is available, as it is likely to be more selective.
If possible, an index that contains unique values should be defined as unique (CREATE UNIQUE INDEX...); this provides the optimizer with more information.
The additional indexes that you have created are no more selective than the existing indexes. For example, the index on (catalog.id, catalog.name, catalog.owner) is unique but is less useful than the existing primary key index on (catalog.id). If a query is written to select on the catalog.name column, it is possible to do an index skip scan, but this starts being costly (and may not even be possible in this case).
Since you are trying to select based on the catalog.root column, it might be worth adding an index on that column. This would mean that it could quickly find the relevant rows from the catalog table. The timing for the second query could be a bit misleading: it might be taking 2 seconds to find 50 matching rows from catalog, but these could easily be the first 50 rows in the catalog table... finding 50 that match all your conditions might take longer, and not just because you need to join to other tables to get them. I would always use CREATE TABLE AS SELECT without restricting on rownum when trying to performance tune: with a complex query I generally care about how long it takes to get all the rows back, and a simple select with rownum can be misleading.
Everything about Oracle performance tuning is about providing the optimizer with enough information and the right tools (indexes, constraints, etc.) to do its job properly. For this reason it's important to gather optimizer statistics, using something like DBMS_STATS.GATHER_TABLE_STATS(). Index statistics should be gathered automatically in Oracle 10g or later.
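For example, a quick sketch (the schema name is a placeholder):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'YOUR_SCHEMA',
    tabname => 'CATALOG',
    cascade => TRUE); -- also gathers statistics on the table's indexes
END;
/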
Somehow this grew into quite a long answer about the Oracle optimizer. Hopefully some of it answers your question. Here is a summary of what is said above:
Give the optimizer as much information as possible, e.g. if an index is unique then declare it as such.
Add indexes on your access paths
Find the correct timings for queries without limiting by rownum. It will always be quicker to find the first 50 M&Ms in a jar than to find the first 50 red M&Ms.
Gather optimizer stats
Add unique/primary keys on all tables where they exist.
The use of rownum here is wrong and causes all the rows to be processed: it will process all the rows, assign them all a row number, and then find those between 0 and 50. What you want to look for in the explain plan is COUNT STOPKEY rather than just COUNT.
The query below should be an improvement as it will only get the first 50 rows... but there is still the issue of the joins to look at too:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
where rownum <= 50
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Also, assuming this for a web page or something similar, maybe there is a better way to handle this than just running the query again to get the data for the next page.
Try declaring a cursor. I don't know Oracle, but in SQL Server it would look like this:
declare @result
table (
id numeric,
name varchar(255)
);
declare __dyn_select_cursor cursor LOCAL SCROLL DYNAMIC for
--Select
select distinct
c.id, c.name
From [catalog] c
inner join catalog_university u
on u.catalog = c.id
and u.university = 300
inner join catalog_securitygroup s
on s.catalog = c.id
and s.securitygroup = 200
Where
c.root = 100
Order by name
--Cursor
declare @id numeric;
declare @name varchar(255);
declare @maxrowscount int;
set @maxrowscount = 50;
open __dyn_select_cursor;
fetch relative 1 from __dyn_select_cursor into @id, @name;
while (@@FETCH_STATUS = 0 and @maxrowscount <> 0)
begin
insert into @result values (@id, @name);
set @maxrowscount = @maxrowscount - 1;
fetch next from __dyn_select_cursor into @id, @name;
end
close __dyn_select_cursor;
deallocate __dyn_select_cursor;
--Select temp, final result
select
id,
name
from @result;
