Switching between multiple tables at runtime in Oracle - oracle

I am working in an environment where we have separate tables for each client (this is something which I can't change due to security and other requirements). For example, if we have clients ACME and MEGAMART then we'd have an ACME_INFO table and a MEGAMART_INFO table, and both tables would have the same structure (let's say ID, SOMEVAL1, SOMEVAL2).
I would like to have a way to easily access the different tables dynamically.
To this point I've dealt with this in a few ways including:
Using dynamic SQL in procedures/functions (not fun)
Creating a view which does a UNION ALL on all of the tables and which adds a CLIENT_ID column (i.e. "CREATE VIEW COMBINED_VIEW AS SELECT 'ACME' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM ACME_INFO UNION ALL SELECT 'MEGAMART' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM MEGAMART_INFO"), which performs surprisingly well but is a pain to maintain and kind of defeats some of the requirements which dictate that we have separate tables for each client.
SYNONYMs won't work because we need different connections to act on different clients
A view which refers to a package which has a package variable for the active client. This is just evil and doesn't even work out all that well.
What I'd really like is to be able to create a table function, macro, or something else where I can do something like
SELECT * FROM FN_CLIENT_INFO('ACME');
or even
UPDATE (SELECT * FROM FN_CLIENT_INFO('ACME')) SET SOMEVAL1 = 444 WHERE ID = 3;
I know that I can partially achieve this with a pipelined function, but this mechanism will need to be used by a reporting platform and if the reporting platform does something like
SELECT * FROM FN_CLIENT_INFO('ACME') WHERE SOMEVAL1 = 4
then I want it to run efficiently (assuming SOMEVAL1 has an index for example). This is where a macro would do well.
Macros seem like a good solution, but the above won't work due to protections put in place to protect against SQL injection.
Is there a way to create a macro that somehow verifies that the passed in VARCHAR2 is a valid table name and therefore can be used or is there some other approach to address what I need?
I was thinking that if I had a function which could translate a client name to a DBMS_TF.TABLE_T then I could use a macro, but I haven't found a way to do that well.

A lesser-known method for such cases is to use a system-partitioned table. For instance, consider the following code:
Full example: https://dbfiddle.uk/UQsAgHCk
create table t_common(a int, b int)
partition by system (
partition ACME_INFO,
partition MEGAMART_INFO
);
insert into t_common partition(acme_info)
values(1,1);
insert into t_common partition(megamart_info)
values(2,2);
commit;
select * from t_common partition(acme_info);
select * from t_common partition(megamart_info);
As demonstrated, a common table can be used with different partitions for different clients, allowing it to be used as a regular table. We can create a system-partitioned table and utilize the exchange partition feature with older tables. Then, we can drop the older tables and create views with the same names, so that older code continues to work with views while all new code can work with the common table by specifying a partition.
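That migration path can be sketched as follows (an untested sketch: it assumes the ID/SOMEVAL1/SOMEVAL2 layout from the question, and the steps would be repeated per client):

```sql
-- Create the common system-partitioned table with the same structure
CREATE TABLE t_common (id INT, someval1 INT, someval2 INT)
PARTITION BY SYSTEM (
  PARTITION acme_info,
  PARTITION megamart_info
);

-- Move the existing rows in without copying data (structures must match)
ALTER TABLE t_common EXCHANGE PARTITION acme_info WITH TABLE acme_info;

-- Replace the old table with a view of its partition so old code keeps working
DROP TABLE acme_info;
CREATE VIEW acme_info AS
  SELECT id, someval1, someval2 FROM t_common PARTITION (acme_info);
```

After the exchange, the old table is empty and can be dropped; existing code transparently reads the view, while new code addresses t_common by partition.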

Combining 2 tables with the number of users on a different table ORACLE SQL

Is this considered good practice?
I've been trying to merge data from 2 different tables (dsb_nb_users_option & dsb_nb_default_options) for the number of users existing on the dsb_nb_users table.
Would a JOIN statement be the best option?
Or would a sub-query work better for me to access the data lying in the dsb_nb_users table?
This might be an operation I will have to perform a few times, so I want to understand the mechanics of it.
INSERT INTO dsb_nb_users_option(dsb_nb_users_option.code, dsb_nb_users_option.code_value, dsb_nb_users_option.status)
SELECT dsb_nb_default_option.code, dsb_nb_default_option.code_value, dsb_nb_default_option.status
FROM dsb_nb_default_options
WHERE dsb_nb_users.user_id IS NOT NULL;
Thank you for your time!!
On the face of it, I see nothing wrong with your query to achieve your goal. That said, I see several things worth pointing out.
First, please learn to format your code - for your own sanity as well as that of others who have to read it. At the very least, put each column name on a line of its own, indented. A good IDE tool like SQL Developer will do this for you. Like this:
INSERT INTO dsb_nb_users_option (
    dsb_nb_users_option.code,
    dsb_nb_users_option.code_value,
    dsb_nb_users_option.status
)
SELECT
    dsb_nb_default_option.code,
    dsb_nb_default_option.code_value,
    dsb_nb_default_option.status
FROM
    dsb_nb_default_options
WHERE
    dsb_nb_users.user_id IS NOT NULL;
Now that I've made your code more easily readable, a couple of other things jump out at me. First, it is not necessary to prefix every column name with the table name. So your code gets even easier to read.
INSERT INTO dsb_nb_users_option (
    code,
    code_value,
    status
)
SELECT
    code,
    code_value,
    status
FROM
    dsb_nb_default_options
WHERE
    dsb_nb_users.user_id IS NOT NULL;
Note that there are times you need to qualify a column name, either because Oracle requires it to avoid ambiguity for the parser, or because the developer needs it to avoid ambiguity for themselves and those who follow. In this case we usually use table aliasing to shorten the code.
select a.userid,
       a.username,
       b.user_mobile_phone
from users a
join user_telephones b on a.userid = b.userid;
Finally, and more critical to your overall design, it appears that you are unnecessarily duplicating data across multiple tables. This goes against all the rules of data design. Have you read up on 'data normalization'? It's the foundation of relational database theory.

Difference between select * into newtable and create table newtable AS

I am working with a huge table which has 9 million records. I am trying to take a backup of the table, as below.
First Query
Create table newtable1 as select * from hugetable ;
Second Query
Select * into newtable2 from hugetable ;
Here the first query seems to be quite fast. I want to know the functional difference, why it is faster compared to the second query, and which one is to be preferred?
Quote from the manual:
This command (create table as) is functionally similar to SELECT INTO, but it is preferred since it is less likely to be confused with other uses of the SELECT INTO syntax. Furthermore, CREATE TABLE AS offers a superset of the functionality offered by SELECT INTO.
So both do the same thing, but CREATE TABLE AS is part of the SQL standard (and thus works on other DBMS as well), whereas SELECT INTO is non-standard. As the manual says, you should prefer CREATE TABLE AS over the non-standard syntax.
Performance wise both are identical, the differences you see are probably related to caching or other activities on your system.

Oracle accessing multiple databases

I'm using Oracle SQL Developer version 4.02.15.21.
I need to write a query that accesses multiple databases. All that I'm trying to do is get a list of all the IDs present in "TableX" (there is an instance of TableX in each of these databases, but with different values) in each database and union all of the results together into one big list.
My problem comes with accessing more than 4 databases -- I get this error: ORA-02020: too many database links in use. I cannot change the INIT.ORA file's open_links maximum limit.
So I've tried dynamically opening/closing these links:
SELECT Local.PUID FROM TableX Local
UNION ALL
----
SELECT Xdb1.PUID FROM TableX@db1 Xdb1;
ALTER SESSION CLOSE DATABASE LINK db1
UNION ALL
----
SELECT Xdb2.PUID FROM TableX@db2 Xdb2;
ALTER SESSION CLOSE DATABASE LINK db2
UNION ALL
----
SELECT Xdb3.PUID FROM TableX@db3 Xdb3;
ALTER SESSION CLOSE DATABASE LINK db3
UNION ALL
----
SELECT Xdb4.PUID FROM TableX@db4 Xdb4;
ALTER SESSION CLOSE DATABASE LINK db4
UNION ALL
----
SELECT Xdb5.PUID FROM TableX@db5 Xdb5;
ALTER SESSION CLOSE DATABASE LINK db5
However, this produces 'ORA-02081: database link is not open' on whichever link was closed last.
Can someone please suggest an alternative or adjustment to the above?
Please provide a small sample of your suggestion with syntactically correct SQL if possible.
If you can't change the open_links setting, you cannot have a single query that selects from all the databases you want to query.
If your requirement is to query a large number of databases via database links, it seems highly reasonable to change the open_links setting. If you have one set of people telling you that you need to do X (query data from a large number of databases) and another set of people telling you that you cannot do X, it almost always makes sense to have those two sets of people talk and figure out which imperative wins.
If we can solve the problem without writing a single query, then you have options. You can write a bit of PL/SQL, for example, that selects the data from each table in turn and does something with it. Depending on the number of database links involved, it may make sense to write a loop that generates a dynamic SQL statement for each database link, executes the SQL, and then closes the database link.
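That loop might look something like this (an untested sketch: the link names and the local staging table puid_staging are hypothetical):

```sql
DECLARE
  TYPE link_list IS TABLE OF VARCHAR2(30);
  links link_list := link_list('db1', 'db2', 'db3', 'db4', 'db5');
BEGIN
  FOR i IN 1 .. links.COUNT LOOP
    -- pull the remote rows into a local staging table, one link at a time
    EXECUTE IMMEDIATE
      'INSERT INTO puid_staging (puid) SELECT puid FROM TableX@' || links(i);
    -- a link cannot be closed while a transaction is pending on it
    COMMIT;
    EXECUTE IMMEDIATE 'ALTER SESSION CLOSE DATABASE LINK ' || links(i);
  END LOOP;
END;
/
```

Afterwards a plain SELECT against puid_staging (unioned with the local TableX) gives the combined list, and at most one link is ever open at a time.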
If you need to provide a user with the ability to run a single query that returns all the data, you can write a pipelined table function that implements this sort of loop with dynamic SQL and then let the user query the pipelined table function. This isn't really a single query that fetches the data from all the tables. But it is as close as you're likely to get without modifying the open_links limit.

Process SQL result set entirely

I need to work with a SQL result set in order to do some processing for each column (medians, standard deviations, several control statements included)
The SQL is dynamic so I don't know the number of columns, rows.
First I tried to use temporary tables, views, etc. to store the results; however, I did not manage to overcome the 30-character limit on Oracle column names when using the SQL below:
create table (or view or global temporary table) as select * from (
SELECT
DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) <-- exceeds the 30 character limit
FROM DMTTBF_MAT_MATURATO_BILL_POS
WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= '201301'
GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)
Second choice was to use some PL/SQL types to store the entire table information, so I could call it like in other programming languages (e.g. a matrix result[i][j]) but I could not find anything similar.
Third variant, using files for reading and writing: I did not try it yet; I'm still expecting a more elegant PL/SQL solution.
It's possible that I have the wrong approach here so any advice is more than welcome.
UPDATE: Modifying the input SQL is not an option. The program has to accept any select statement.
Note that you can alias both tables and fields. Using a table alias keeps references to it from producing walls of text in the query. Using one for a field gives it a new name in the output.
SELECT A.LONG_FIELD_NAME_HERE AS SHORTNAME
FROM REALLY_LONG_TABLE_NAME_HERE A
The auto-naming adds _1, _2, etc. to differentiate the same column name coming from different table references. This often pushes a field that is already borderline over the limit. Giving the fields names yourself bypasses this.
You can put the alias also in dynamic SQL:
sqlstr := 'create table (or view or global temporary table) as select * from (
SELECT
DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) AS "'||SUBSTR('SUM(MAT_N_NUM_EVENTI_CHZ + MAT_N_NUM_EVENTI)', 1, 30)||'"
FROM DMTTBF_MAT_MATURATO_BILL_POS
WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= ''201301''
GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)';
Note that the alias is wrapped in double quotes: a quoted identifier may contain characters like parentheses that would otherwise be invalid, as long as it stays within the length limit.

Synonyms causing an expensive plan

I had the following problem with our Oracle 11 database and, although there's a fix,
I want to understand why it's reacting this way.
We have two schemas: a "dev" schema which contains all the tables, views, PL/SQL, ... and
an "app" schema which contains synonyms to the dev objects, i.e. statements don't contain
schema names.
A dev-view references tables (select * from a a1 -> b -> a a2 union select * from c)
in which there's a common column which is used for the selection, i.e. the selection
predicate is pushed into the selections on tables "a" (300k rows) and "b" (90k rows) via separate
index accesses, resulting in a very performant plan.
dev-fun is a deterministic, parallel-enabled function which simply does some string manipulation
without further database access.
The selection on the view looks like: select * from view where common-column = fun(string)
This works as expected on the dev schema, but if this is executed on the app schema,
the plan becomes much more expensive, i.e. the result of fun(string) is not pushed down;
instead the tables are hash joined and the result is scanned for the element.
Still in the app schema, when I replace fun(string) with the function's result, the plan becomes
cheap again.
To solve the problem, I duplicated the view in the app schema instead of referencing it via
a synonym, but in case of view/table changes that is a potential source of defects, as we
normally don't check the app schema ...
The call to the function is still via the synonym and the view was duplicated as-is, i.e. it
accesses the synonyms for the underlying tables ... and the plan is the same as if it was executed
on the dev schema.
Apart from having select grants on all underlying tables, I've also tried granting "query rewrite" and "references" on the tables and "references" on the view. Furthermore, I've tried the authid options on the function. I have to admit that I haven't yet checked for row-level security, but we are not using it.
What else can I check for?
The Oracle version is 11.0.2.2. Opening an Oracle ticket would only be a theoretical option,
as we don't have direct support access, and the layer in between is even more frustrating than
living with the maintenance issue.
I know that typically an explain plan would be helpful, but let's try it first without one, as
I suspect the problem is somewhere else.
Update (14.10.2013):
Hinting to use nested loops doesn't work.
Function-based indexes aren't used.
Indexed access: select * from v_vt_betreuer where vtid = 11803056;
Hashed access: select * from v_vt_betreuer where vtid = VTNRVOLL_TO_VTID(11803056);
Copied view: i.e. when the view is copied into the app schema
select * from v_vt_betreuer where vtid = VTNRVOLL_TO_VTID(11803056);
Try creating an index like this:
CREATE INDEX func_index ON agency(fun(common_column))
This is called a function based index.
My guess is that this type of query:
select a1.vtid, a2.*
from agency a1 join agency_employee b on (b.vtid = a1.vtid)
join agency a2 on (a2.vtid = b.employee_vtid)
is causing the query optimizer to do this:
select a1.vtid, a2.*
from agency a1 join agency_employee b on (func(b.vtid) = func(a1.vtid))
join agency a2 on (func(a2.vtid) = func(b.employee_vtid))
http://www.akadia.com/services/ora_function_based_index_2.html
http://www.oracle-base.com/articles/8i/function-based-indexes.php
If this approach does not help, check if you have ROW LEVEL SECURITY:
http://docs.oracle.com/cd/E16655_01/server.121/e17609/tdpsg_ols.htm
http://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvcntx.htm#i1007410
Are you sure that you are NOT using the actual function in schema NPS_WEBCC, but a synonym to a function in schema NPS_WEBCC_DEV?
The condition can't be pushed if the DEV schema is not allowed to access objects in the APP schema.
You must grant permissions on the synonym to the DEV schema, because the view is in the DEV schema. That is why it starts working when you copy the view into the APP schema.
Another problem may occur if you use extended statistics in the DEV schema based on the DEV function, but that needs to be sorted out after the permissions problem.
You can verify it by checking the explain plans of the following queries. They should give an optimized result:
-- q1
-- "v_vt_betreuer" is a synonym in app schema to a view in dev schema
select * from v_vt_betreuer where vtid = NPS_WEBCC_DEV.VTNRVOLL_TO_VTID(11803056);
-- q2
select * from NPS_WEBCC_DEV.v_vt_betreuer where vtid=NPS_WEBCC_DEV.VTNRVOLL_TO_VTID(11803056);
UPD
According to additional investigation, the problem most likely happens because the MERGE VIEW grant is missing on the view. It must be granted for the view and all sub-views that are used inside it.
GRANT MERGE VIEW ON v_vt_betreuer TO NPS_WEBCC;
