I had the following problem with our Oracle 11 database, and although there's a fix,
I want to understand why it's behaving this way.
We have two schemas: a "dev" schema which contains all the tables, views, PL/SQL, etc., and
an "app" schema which contains synonyms to the dev objects, i.e. statements don't contain
schema names.
A dev view references tables (select * from a a1 -> b -> a a2 union select * from c)
which share a common column used for the selection, i.e. the selection
predicate is pushed into the accesses of table "a" (300k rows) and "b" (90k rows) via separate
index lookups, resulting in a very performant plan.
dev-fun is a deterministic, parallel-enabled function which simply does some string manipulation
without further database access.
The selection on the view looks like: select * from view where common-column = fun(string)
This works as expected on the dev schema, but when the same statement is executed on the app
schema, the plan becomes much more expensive, i.e. the result of fun(string) is not pushed down;
instead the tables are hash joined and the result is scanned for the element.
Still in the app schema, when I replace fun(string) with the literal function result, the plan
becomes cheap again.
To solve the problem, I duplicated the view in the app schema instead of referencing it via
a synonym, but in case of view/table changes that is a potential source of defects, as we
normally don't check the app schema ...
The call to the function is still via the synonym, and the view was duplicated as-is, i.e. it
accesses the synonyms for the underlying tables ... and the plan is the same as when it is
executed on the dev schema.
Apart from having select grants on all underlying tables, I've also tried granting "query rewrite" and "references" on the tables and "references" on the view. Furthermore, I've tried the authid options on the function. I have to admit that I haven't yet checked for row-level security, but we are not using it.
What else can I check for?
The Oracle version is 11.0.2.2. Opening an Oracle ticket would only be a theoretical option,
as we don't have direct support access and the layer in between is even more frustrating than
living with the maintenance issue.
I know that typically an explain plan would be helpful, but let's try it first without one, as
I suspect the problem lies somewhere else.
Update (14.10.2013):
Hinting to use nested loops doesn't work.
Function-based indexes aren't used.
Indexed access: select * from v_vt_betreuer where vtid = 11803056;
Hashed access: select * from v_vt_betreuer where vtid = VTNRVOLL_TO_VTID(11803056);
Copied view (i.e. when the view is copied into the app schema):
select * from v_vt_betreuer where vtid = VTNRVOLL_TO_VTID(11803056);
Try creating an index like this:
CREATE INDEX func_index ON agency(fun(common_column));
This is called a function-based index.
My guess is that queries of this type:
select a1.vtid, a2.*
from agency a1 join agency_employee b on (b.vtid = a1.vtid)
join agency a2 on (a2.vtid = b.employee_vtid)
are causing the query optimizer to do this:
select a1.vtid, a2.*
from agency a1 join agency_employee b on (func(b.vtid) = func(a1.vtid))
join agency a2 on (func(a2.vtid) = func(b.employee_vtid))
http://www.akadia.com/services/ora_function_based_index_2.html
http://www.oracle-base.com/articles/8i/function-based-indexes.php
If this approach does not help, check if you have ROW LEVEL SECURITY:
http://docs.oracle.com/cd/E16655_01/server.121/e17609/tdpsg_ols.htm
http://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvcntx.htm#i1007410
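Rather than guessing, you can query the data dictionary directly; this is a minimal sketch using the standard ALL_POLICIES view, which lists VPD/row-level-security policies on objects visible to the current user:

```sql
-- An empty result means no RLS/VPD policies apply to the
-- objects the current user can see.
SELECT object_owner, object_name, policy_name, function
FROM   all_policies;
```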
Are you sure that you are NOT using the actual function in schema NPS_WEBCC, but a synonym to a function in schema NPS_WEBCC_DEV?
The condition can't be pushed down if the DEV schema is not allowed to access objects in the APP schema.
You must grant permission on the synonym to the DEV schema, because the view is in the DEV schema. That is why it starts working when you copy the view into the APP schema.
Another problem may occur if you use extended statistics in the DEV schema based on the DEV function, but that should be sorted out after the permissions problem.
You can verify it by checking the explain plans of the following queries. They should give an optimized result:
-- q1
-- "v_vt_betreuer" is a synonym in app schema to a view in dev schema
select * from v_vt_betreuer where vtid = NPS_WEBCC_DEV.VTNRVOLL_TO_VTID(11803056);
-- q2
select * from NPS_WEBCC_DEV.v_vt_betreuer where vtid=NPS_WEBCC_DEV.VTNRVOLL_TO_VTID(11803056);
Update:
According to additional investigation, the most likely cause is that the MERGE VIEW grant is missing on the view. It must be granted for the view and all sub-views used inside it.
GRANT MERGE VIEW ON v_vt_betreuer TO NPS_WEBCC;
Related
I am working in an environment where we have separate tables for each client (this is something which I can't change due to security and other requirements). For example, if we have clients ACME and MEGAMART then we'd have an ACME_INFO table and MEGAMART_INFO tables and both tables would have the same structure (let's say ID, SOMEVAL1, SOMEVAL2).
I would like to have a way to easily access the different tables dynamically.
To this point I've dealt with this in a few ways including:
Using dynamic SQL in procedures/functions (not fun)
Creating a view which does a UNION ALL on all of the tables and which adds a CLIENT_ID column (i.e. "CREATE VIEW COMBINED_VIEW AS SELECT 'ACME' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM ACME_INFO UNION ALL SELECT 'MEGAMART' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM MEGAMART_INFO"), which performs surprisingly well but is a pain to maintain and kind of defeats some of the requirements which dictate that we have separate tables for each client.
SYNONYMs won't work because we need different connections to act on different clients
A view which refers to a package which has a package variable for the active client. This is just evil and doesn't even work out all that well.
What I'd really like is to be able to create a table function, macro, or something else where I can do something like
SELECT * FROM FN_CLIENT_INFO('ACME');
or even
UPDATE (SELECT * FROM FN_CLIENT_INFO('ACME')) SET SOMEVAL1 = 444 WHERE ID = 3;
I know that I can partially achieve this with a pipelined function, but this mechanism will need to be used by a reporting platform and if the reporting platform does something like
SELECT * FROM FN_CLIENT_INFO('ACME') WHERE SOMEVAL1 = 4
then I want it to run efficiently (assuming SOMEVAL1 has an index for example). This is where a macro would do well.
Macros seem like a good solution, but the above won't work due to protections put in place to prevent against SQL injection.
Is there a way to create a macro that somehow verifies that the passed in VARCHAR2 is a valid table name and therefore can be used or is there some other approach to address what I need?
I was thinking that if I had a function which could translate a client name to a DBMS_TF.TABLE_T then I could use a macro, but I haven't found a way to do that well.
A lesser-known method for such cases is to use a system-partitioned table. For instance, consider the following code:
Full example: https://dbfiddle.uk/UQsAgHCk
create table t_common(a int, b int)
partition by system (
partition ACME_INFO,
partition MEGAMART_INFO
);
insert into t_common partition(acme_info)
values(1,1);
insert into t_common partition(megamart_info)
values(2,2);
commit;
select * from t_common partition(acme_info);
select * from t_common partition(megamart_info);
As demonstrated, a common table can be used with different partitions for different clients, allowing it to be used as a regular table. We can create a system-partitioned table and utilize the exchange partition feature with older tables. Then, we can drop the older tables and create views with the same names, so that older code continues to work with views while all new code can work with the common table by specifying a partition.
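The migration described above might look roughly like this (a sketch; it assumes t_common was created with the same column list as the legacy tables, and that no dependent code runs during the swap):

```sql
-- Swap the legacy ACME_INFO rows into the matching partition of t_common;
-- an exchange is a data-dictionary operation, not a data copy.
ALTER TABLE t_common EXCHANGE PARTITION acme_info WITH TABLE acme_info;

-- The legacy table now holds the (empty) former partition segment.
DROP TABLE acme_info;

-- Recreate the old name as a view so existing code keeps working.
CREATE VIEW acme_info AS
  SELECT * FROM t_common PARTITION (acme_info);
```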
I have 3 tables:
table1: id, person_code
table2: id, address, person_code_foreing (same as person_code from table1), admission_date_1
table3: id, id_table2, admission_date_2, something
(the tables are fictional)
I'm trying to make a view that takes info from these 3 tables using left joins. I'm doing it this way because the first table has some records that don't have the person_code in the other tables, but I want the view to return that info as well:
CREATE OR REPLACE VIEW schema.my_view AS
SELECT t1.name, t2.adress, t3.something
from schema.table1@ambient1 t1
left join schema.table2@ambient1 t2
on t1.person_code = t2.person_code_foreing
left join schema.table3@ambient1 t3
on t3.id_table2 = t2.id
and t2.admission_date_1 = t3.admission_date_2;
This view needs to be created in another environment (ambient2).
I tried using a subquery, but there I also need a left join, and the whole thing is very confusing because I don't get it: are the subquery and the left join together the big no-no, or just the left join?
Has this happened to anyone?
How did you resolve it?
Thanks a lot.
ORA-02019 indicates that your database link (@ambient1) does not exist or is not visible to the current user. You can confirm this by checking the ALL_DB_LINKS view, which lists all links to which the user has access:
select owner, db_link from all_db_links;
Also keep in mind that Oracle will perform the joins in the database making the call, not the remote database, so you will almost certainly have to pull the entire contents of all three tables over the network to be written into TEMP for the join and then thrown away, every time you run a query. You will also lose the benefit of any indexes on the data and most likely wind up with full table scans on the temp tables within your local database.
I don't know if this is an option for you, but from a performance perspective and given that it isn't joining with anything in the local database, it would make much more sense to create the view in the remote database and just query that through the database link. That way all of the joins are performed efficiently where the data lives, only the result set is pushed over the network, and your client database SQL becomes much simpler.
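A sketch of that approach (the wrapper view name my_view_src is invented; the column list follows the question's view):

```sql
-- On ambient1, where the tables live: perform all joins locally.
CREATE OR REPLACE VIEW schema.my_view_src AS
SELECT t1.name, t2.adress, t3.something
FROM   schema.table1 t1
LEFT JOIN schema.table2 t2 ON t1.person_code = t2.person_code_foreing
LEFT JOIN schema.table3 t3 ON t3.id_table2 = t2.id
                          AND t2.admission_date_1 = t3.admission_date_2;

-- On ambient2: a thin wrapper, so only the finished result set
-- travels over the database link.
CREATE OR REPLACE VIEW schema.my_view AS
SELECT * FROM schema.my_view_src@ambient1;
```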
I managed to make it work, but apparently ambient2 doesn't like my left join, so I used only a subquery and the (+) operator. This is how it worked:
CREATE OR REPLACE VIEW schema.my_view AS
SELECT t1.name, all.adress, all.something
from schema.table1@ambient1 t1,(select * from
schema.table3@ambient1 t3, schema.table2@ambient1 t2
where t3.id_table2 = t2.id(+)
and (t1.admission_date_1=t2.admission_date_2 or t1.admission_date is null))
all
where t1.person_code = t2.person_code_foreing(+);
I tested whether a query using a right join works in ambient2 (with 2 tables created there), and it does, so I thought there was a problem with that environment.
I don't see why this kind of join raises that error in my case.
Are the versions different? I don't know, and I can't find any official documentation about it.
Maybe some of you have a clue..
It's a mystery to me :))
Thanks.
I understand that the performance of our queries is improved when we use EXISTS and NOT EXISTS in place of IN and NOT IN; however, is performance improved further when we replace NOT IN with an OUTER JOIN as opposed to NOT EXISTS?
For example, the following query selects all models from a PRODUCT table that are not in another table called PC. For the record, no model values in the PRODUCT or PC tables are null:
select model
from product
where not exists(
select *
from pc
where product.model = pc.model);
The following OUTER JOIN will display the same results:
select product.model
from product left join pc
on pc.model = product.model
where pc.model is null;
Seeing as these both return the same values, which option should we use to better improve the performance of our queries?
The query plan will tell you; it depends on the data and tables. In this case, the OUTER JOIN and NOT EXISTS forms produce the same plan.
However, regarding your opening sentence: NOT IN and NOT EXISTS are not the same if NULL is accepted on model. Here you say model cannot be null, so you might find they all have the same plan anyway. When relying on this, though, the database must be told there cannot be nulls (via a NOT NULL constraint), as opposed to there simply not being any. If you don't, it will build different plans for each query, which may result in different performance depending on your actual data. This is generally true, and particularly true for Oracle, which does not index NULLs.
Check out EXPLAIN PLAN
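A minimal sketch of checking this yourself, using the tables from the question (DBMS_XPLAN.DISPLAY is the standard way to show the plan produced by EXPLAIN PLAN):

```sql
-- Declare the columns NOT NULL so the optimizer can rely on it.
ALTER TABLE pc      MODIFY (model NOT NULL);
ALTER TABLE product MODIFY (model NOT NULL);

-- Generate and display the plan for the NOT EXISTS form;
-- repeat for the outer-join form and compare.
EXPLAIN PLAN FOR
  SELECT model FROM product
  WHERE NOT EXISTS (SELECT * FROM pc WHERE product.model = pc.model);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```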
I'm using Oracle SQL Developer version 4.02.15.21.
I need to write a query that accesses multiple databases. All I'm trying to do is get a list of all the IDs present in "TableX" (there is an instance of TableX in each of these databases, but with different values) in each database and union all of the results together into one big list.
My problem comes with accessing more than 4 databases: I get the error ORA-02020: too many database links in use. I cannot change the open_links maximum limit in the INIT.ORA file.
So I've tried dynamically opening/closing these links:
SELECT Local.PUID FROM TableX Local
UNION ALL
----
SELECT Xdb1.PUID FROM TableX@db1 Xdb1;
ALTER SESSION CLOSE DATABASE LINK db1
UNION ALL
----
SELECT Xdb2.PUID FROM TableX@db2 Xdb2;
ALTER SESSION CLOSE DATABASE LINK db2
UNION ALL
----
SELECT Xdb3.PUID FROM TableX@db3 Xdb3;
ALTER SESSION CLOSE DATABASE LINK db3
UNION ALL
----
SELECT Xdb4.PUID FROM TableX@db4 Xdb4;
ALTER SESSION CLOSE DATABASE LINK db4
UNION ALL
----
SELECT Xdb5.PUID FROM TableX@db5 Xdb5;
ALTER SESSION CLOSE DATABASE LINK db5
However, this produces ORA-02081: database link is not open, on whichever link was closed last.
Can someone please suggest an alternative or adjustment to the above?
Please provide a small sample of your suggestion with syntactically correct SQL if possible.
If you can't change the open_links setting, you cannot have a single query that selects from all the databases you want to query.
If your requirement is to query a large number of databases via database links, it seems highly reasonable to change the open_links setting. If you have one set of people telling you that you need to do X (query data from a large number of tables) and another set of people telling you that you cannot do X, it almost always makes sense to have those two sets of people talk and figure out which imperative wins.
If we can solve the problem without writing a single query, then you have options. You can write a bit of PL/SQL, for example, that selects the data from each table in turn and does something with it. Depending on the number of database links involved, it may make sense to write a loop that generates a dynamic SQL statement for each database link, executes the SQL, and then closes the database link.
If you need to provide a user with the ability to run a single query that returns all the data, you can write a pipelined table function that implements this sort of loop with dynamic SQL and then let the user query the pipelined table function. This isn't really a single query that fetches the data from all the tables, but it is as close as you're likely to get without modifying the open_links limit.
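A minimal sketch of the PL/SQL loop idea (the link names, procedure name, and the local staging table puid_results are all invented for illustration): each link is queried with dynamic SQL and closed before the next one is opened, so only one link is ever in use.

```sql
CREATE OR REPLACE PROCEDURE collect_puids AS
  TYPE link_list IS TABLE OF VARCHAR2(128);
  l_links link_list := link_list('db1', 'db2', 'db3', 'db4', 'db5');
BEGIN
  FOR i IN 1 .. l_links.COUNT LOOP
    -- Pull this database's rows into a local staging table.
    EXECUTE IMMEDIATE
      'INSERT INTO puid_results (puid) SELECT puid FROM TableX@'
      || l_links(i);
    -- A link can only be closed once its transaction has ended.
    COMMIT;
    EXECUTE IMMEDIATE 'ALTER SESSION CLOSE DATABASE LINK ' || l_links(i);
  END LOOP;
END;
/
```

After running the procedure, a plain SELECT on the staging table (plus a UNION ALL with the local TableX) gives the combined list.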
I need to take information from two different databases.
select * from TABLE_ONDB2 where column_on_db2 in ( select column_on_db1 from TABLE_ONDB1 );
The problem is that both are on different DB instances, so I am not able to figure out how to reference the table names, column names, etc.
I hope my question is clear.
I'd try to do it with a Database Link:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/ds_concepts002.htm
Note that this is not a SQL*Plus feature; it works by making a connection from DB2 to DB1 (the database itself does that).
You can then query both tables from DB2 with the '@db-link' name notation, e.g.:
select *
from TABLE_ONDB2
where column_on_db2
in (select column_on_db1 from TABLE_ONDB1@DB_LINK_NAME);
The benefit is that you can access the table in all the usual ways, including in a join.
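For completeness, creating such a link on DB2 looks roughly like this (the link name, credentials, and TNS alias are placeholders):

```sql
-- Run on DB2; 'DB1_TNS' must resolve to DB1 in tnsnames.ora.
CREATE DATABASE LINK db_link_name
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'DB1_TNS';
```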