Oracle accessing multiple databases

I'm using Oracle SQL Developer version 4.02.15.21.
I need to write a query that accesses multiple databases. All I'm trying to do is get a list of all the IDs present in "TableX" in each database (there is an instance of TableX in each of these databases, but with different values) and union all of the results together into one big list.
My problem comes with accessing more than 4 databases -- I get this error: ORA-02020: too many database links in use. I cannot change the INIT.ORA file's open_links maximum limit.
So I've tried dynamically opening/closing these links:
SELECT Local.PUID FROM TableX Local
UNION ALL
----
SELECT Xdb1.PUID FROM TableX@db1 Xdb1;
ALTER SESSION CLOSE DATABASE LINK db1
UNION ALL
----
SELECT Xdb2.PUID FROM TableX@db2 Xdb2;
ALTER SESSION CLOSE DATABASE LINK db2
UNION ALL
----
SELECT Xdb3.PUID FROM TableX@db3 Xdb3;
ALTER SESSION CLOSE DATABASE LINK db3
UNION ALL
----
SELECT Xdb4.PUID FROM TableX@db4 Xdb4;
ALTER SESSION CLOSE DATABASE LINK db4
UNION ALL
----
SELECT Xdb5.PUID FROM TableX@db5 Xdb5;
ALTER SESSION CLOSE DATABASE LINK db5
However, this produces 'ORA-02081: database link is not open' on whichever database link is being closed last.
Can someone please suggest an alternative or adjustment to the above?
Please provide a small sample of your suggestion with syntactically correct SQL if possible.

If you can't change the open_links setting, you cannot have a single query that selects from all the databases you want to query.
If your requirement is to query a large number of databases via database links, it seems highly reasonable to change the open_links setting. If you have one set of people telling you that you need to do X (query data from a large number of tables) and another set of people telling you that you cannot do X, it almost always makes sense to have those two sets of people talk and figure out which imperative wins.
If you can solve the problem without a single combined query, then you have options. You can write a bit of PL/SQL, for example, that selects the data from each table in turn and does something with it. Depending on the number of database links involved, it may make sense to write a loop that generates a dynamic SQL statement for each database link, executes the SQL, and then closes the database link.
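For illustration, a minimal PL/SQL sketch of that loop might look like the following (the link names db1..db5 and the staging table MY_RESULTS are assumptions for the example, not part of your setup):
DECLARE
  TYPE t_links IS TABLE OF VARCHAR2(30);
  -- Assumed database link names; adjust to your environment
  v_links t_links := t_links('db1', 'db2', 'db3', 'db4', 'db5');
BEGIN
  FOR i IN 1 .. v_links.COUNT LOOP
    -- Copy the remote rows into a local staging table (MY_RESULTS is hypothetical)
    EXECUTE IMMEDIATE
      'INSERT INTO my_results (puid) SELECT puid FROM TableX@' || v_links(i);
    COMMIT;  -- a link cannot be closed while the transaction that used it is open
    -- Release the link so the session never exceeds the open_links limit
    EXECUTE IMMEDIATE 'ALTER SESSION CLOSE DATABASE LINK ' || v_links(i);
  END LOOP;
END;
/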
If you need to provide a user with the ability to run a single query that returns all the data, you can write a pipelined table function that implements this sort of loop with dynamic SQL and then let the user query the pipelined table function. This isn't really a single query that fetches the data from all the tables, but it is as close as you're likely to get without modifying the open_links limit.
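A hedged sketch of such a pipelined function, reusing the same hypothetical link names and assuming PUID is numeric, might look like this. Note that a function invoked from SQL cannot normally commit, which is why an autonomous transaction pragma is used here so the links can be released; verify this behaves as expected in your environment.
CREATE OR REPLACE TYPE puid_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION all_puids
  RETURN puid_tab PIPELINED
IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- allows the COMMIT below when called from a query
  TYPE t_links IS TABLE OF VARCHAR2(30);
  TYPE t_nums  IS TABLE OF NUMBER;
  v_links t_links := t_links('db1', 'db2', 'db3', 'db4', 'db5');  -- assumed link names
  v_ids   t_nums;
BEGIN
  -- Local rows first
  FOR r IN (SELECT puid FROM TableX) LOOP
    PIPE ROW (r.puid);
  END LOOP;
  -- Then each remote database in turn, closing its link before moving on
  FOR i IN 1 .. v_links.COUNT LOOP
    EXECUTE IMMEDIATE 'SELECT puid FROM TableX@' || v_links(i)
      BULK COLLECT INTO v_ids;
    FOR j IN 1 .. v_ids.COUNT LOOP
      PIPE ROW (v_ids(j));
    END LOOP;
    COMMIT;  -- release the link before closing it
    EXECUTE IMMEDIATE 'ALTER SESSION CLOSE DATABASE LINK ' || v_links(i);
  END LOOP;
  RETURN;
END;
/
The user then gets something that feels like a single query:
SELECT * FROM TABLE(all_puids);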

Related

Switching between multiple tables at runtime in Oracle

I am working in an environment where we have separate tables for each client (this is something which I can't change due to security and other requirements). For example, if we have clients ACME and MEGAMART then we'd have an ACME_INFO table and a MEGAMART_INFO table, and both tables would have the same structure (let's say ID, SOMEVAL1, SOMEVAL2).
I would like to have a way to easily access the different tables dynamically.
To this point I've dealt with this in a few ways including:
Using dynamic SQL in procedures/functions (not fun)
Creating a view which does a UNION ALL on all of the tables and which adds a CLIENT_ID column (i.e. "CREATE VIEW COMBINED_VIEW AS SELECT 'ACME' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM ACME_INFO UNION ALL SELECT 'MEGAMART' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM MEGAMART_INFO"), which performs surprisingly well, but is a pain to maintain and kind of defeats some of the requirements which dictate that we have separate tables for each client.
SYNONYMs won't work because we need different connections to act on different clients
A view which refers to a package which has a package variable for the active client. This is just evil and doesn't even work out all that well.
What I'd really like is to be able to create a table function, macro, or something else where I can do something like
SELECT * FROM FN_CLIENT_INFO('ACME');
or even
UPDATE (SELECT * FROM FN_CLIENT_INFO('ACME')) SET SOMEVAL1 = 444 WHERE ID = 3;
I know that I can partially achieve this with a pipelined function, but this mechanism will need to be used by a reporting platform and if the reporting platform does something like
SELECT * FROM FN_CLIENT_INFO('ACME') WHERE SOMEVAL1 = 4
then I want it to run efficiently (assuming SOMEVAL1 has an index for example). This is where a macro would do well.
Macros seem like a good solution, but the above won't work due to protections put in place to guard against SQL injection.
Is there a way to create a macro that somehow verifies that the passed in VARCHAR2 is a valid table name and therefore can be used or is there some other approach to address what I need?
I was thinking that if I had a function which could translate a client name to a DBMS_TF.TABLE_T then I could use a macro, but I haven't found a way to do that well.
A lesser-known method for such cases is to use a system-partitioned table. For instance, consider the following code:
Full example: https://dbfiddle.uk/UQsAgHCk
create table t_common(a int, b int)
partition by system (
partition ACME_INFO,
partition MEGAMART_INFO
);
insert into t_common partition(acme_info)
values(1,1);
insert into t_common partition(megamart_info)
values(2,2);
commit;
select * from t_common partition(acme_info);
select * from t_common partition(megamart_info);
As demonstrated, one common table can hold a separate partition per client while still being used like a regular table. We can create a system-partitioned table and use the exchange partition feature with the older per-client tables. Then we can drop those older tables and create views with the same names, so that older code continues to work against the views while all new code works with the common table by specifying a partition.
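A hedged sketch of that migration for one client, assuming T_COMMON has been created with exactly the same column layout as the existing ACME_INFO table:
-- Swap the existing rows into the ACME_INFO partition of the common table
ALTER TABLE t_common
  EXCHANGE PARTITION acme_info WITH TABLE acme_info;

-- Replace the old table with a view of the same name, so existing code keeps working
DROP TABLE acme_info;
CREATE VIEW acme_info AS
  SELECT * FROM t_common PARTITION (acme_info);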

Trouble Understanding NEXTVAL and CURRVAL

I did search for a similar question, but if I overlooked an existing answer I am glad to be redirected there.
I am working to untangle an Oracle stored procedure in a legacy system written by a long-departed developer.
The focus of the procedure is to upload user data into the existing table structure via a bulk collection, saving the keystrokes of adding records one by one.
The procedure appears to work without error and the user group would like to expand it to allow additional data to load to separate but related tables.
The author is using the NEXTVAL and CURRVAL commands to add primary key information as new records are added using the CSV data.
But I am confused because my understanding of NEXTVAL/CURRVAL was that they required context and declaration to be used correctly.
For example, the procedure has the following:
SELECT seq_site.nextval INTO v_curr
FROM DUAL;
UPDATE temp_table
SET site_id = seq_site.currval
However, [SEQ_SITE] is not declared anywhere in the preceding lines of the procedure.
Am I inferring correctly that the clause [SELECT seq_site.nextval INTO v_curr] is the declaration for [SEQ_Record_count]?
(...v_curr is declared an integer early in the procedure declarations btw...)

SDO_JOIN table join on large tables no longer returns results after upgrade from Oracle 11.2.0.1 to 11.2.0.4

I have the same database on 2 different Oracle servers, one is 11.2.0.1.0 and the other is 11.2.0.4.0.
I have the same 2 geometry tables in both databases and run the following query on both servers. When run on the 11.2.0.1.0 version of Oracle, the query runs for a few minutes and I get results; the same query run on 11.2.0.4.0 finishes in about 3 seconds and returns no results.
The BLPUs table holds 36 million points and the PD_B2 table holds a polygon. I am trying to find all the points that fall in the polygon.
Other spatial queries do return rows, but they take hours and hours, whereas the table join suggested in the Oracle Spatial documentation takes 15 minutes to return all the points.
SELECT /*+ ordered */ a.uprn
FROM TABLE(SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')) c, blpus a, PD_B2 b
WHERE c.rowid1 = a.rowid
AND c.rowid2 = b.rowid;
The spatial queries below return an empty SDO_ROWIDSET() when run on the 11.2.0.4 server:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')
from dual;
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC')
from dual;
On the 11.2.0.1 server they return results.
I have discovered that a much smaller table of points will work on 11.2.0.4, so it seems that there is a size limit on 11.2.0.4 when using SDO_JOIN, whereas 11.2.0.1 seems to cope with the large table.
Does anyone know why this is or if there is an actual limit on table size when using SDO_JOIN?
This is strange. I see no reason why SDO_JOIN will not work the same way in 11.2.0.4. At least I have not seen that sort of behavior before. It looks like a bug to me and I suggest you file a service request with Oracle Support so we can take a look. You may need to provide a dump of the tables - or at least of a small enough subset that demonstrates the problem.
That said, there are a few things to check: did you apply the 11.2.0.4 patch on the same database? I.e. nothing changed in terms of table structures or content, grants, etc.?
When you say that the query returns no rows, does it do so immediately? Or does it perform some processing before completing without anything being returned?
How large is the PD_B2 table?
What happens when you do:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')
from dual;
Does this also return nothing?
What happens if you do:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC')
from dual;
Both should return something that looks like this:
SDO_ROWIDSET(SDO_ROWIDPAIR('AAAW3LAAGAAAADmAAu', 'AAAW3TAAGAAAAg7AAC'), SDO_ROWIDPAIR('AAAW3LAAGAAAADmABE', 'AAAW3TAAGAAAAgrAAA'),...)
You will see this if you run the query in sqlplus. [If you use a GUI (like TOAD or SQLDeveloper) then you may not see that. All those GUIs have problems dealing with objects or arrays.]
But the fact that your 11.2.0.4 tests complete very quickly probably means that you get an empty result back, maybe like this:
SDO_ROWIDSET()
and that confirms that something did not work.
Now, from what you say, PD_B2 only contains one row? If so then there is no reason whatsoever to go via SDO_JOIN. A straightforward spatial query is easier to write. SDO_JOIN only makes sense when both tables being joined contain multiple rows. Then again, if one of the tables is very small (like the PD_B2 table in your case), it falls back to a simple query anyway.
A simple query would look like this:
SELECT a.uprn
FROM blpus a, PD_B2 b
WHERE sdo_anyinteract (a.geoloc, b.geoloc) = 'TRUE';
Does this return what you expect in 11.2.0.4?
You may also want to examine the cost of the query and the query plan used. In sqlplus, do this:
set timing on
set autotrace traceonly
then run the above query. The effect is that sqlplus will not display any results, but it will still fetch and format the output; it just will not print it. At the end you will get a printout of the query plan used, as well as some execution statistics and the elapsed time. Please add those results to your question.
Running the query from within your application should have a similar profile: similar response time and database-side costs - assuming you do fetch all the 1.3 million rows.
BTW, what do you do with those results? Do you show them as a report? Do you save them into a table for later analysis? Surely you do not want to show them all on a map?
To clarify: SDO_JOIN is only for matching many to many geometries. For matching one to many, the simple SDO_ANYINTERACT() is what you need. As a matter of fact, SDO_JOIN will automatically fall back to a simple SDO_ANYINTERACT when one of the sets is very much smaller than the other. I can't see how SDO_JOIN could be faster in your circumstances (i.e. when searching for all objects that match one object) since it will perform the same as an SDO_ANYINTERACT and will require extra joins to the two tables.
Going back to the original issue, the difference in SDO_JOIN behavior between 11.2.0.1 and 11.2.0.4: that is, IMO, definitely a bug that you need to report to Oracle Support.

Temporarily storing multiple values in Oracle [duplicate]

This question already has an answer here:
Alternate method to global temp tables for Oracle Stored Procedure
(1 answer)
Closed 8 years ago.
I need a way to temporarily store and use multiple values returned from an Oracle query. In SQL Server, I stored my values in a temp table, did my work, then dropped the table. I'm discovering the Oracle equivalent isn't as clear cut.
Here's a SQL Server example of what I'm trying to do:
select id into #temp from SomeTable where SomeColumn = 'Some Value'
:
(do whatever I need to do with #temp data)
:
drop table #temp
I can code my way around SQL Server pretty well, but am almost clueless when it comes to Oracle syntax. I've been reading various Oracle references, and they haven't been very helpful. I did read that Oracle temp tables work differently from SQL Server's, and are often not recommended.
I'm looking into the temp table route, but if there's a better way to do this that doesn't use temp tables, I'm all ears. Anyone know a better way to do this in Oracle?
Thanks in advance.
As with many things, that depends. It depends on how much data you'll be retrieving and how you want to use it. If you don't have too much data to work with ("too much" meaning, oh, say, more than a couple thousand rows), you want to manipulate the data procedurally (i.e. in a PL/SQL procedure or script), AND you don't want to access it using DML (i.e. you don't want to say something like SELECT * FROM your_temp_data...), then loading the data into a PL/SQL collection, as @EgorSkriptunoff mentions above, might be a workable solution.
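A minimal sketch of the collection approach, using the table and column names from your SQL Server example as placeholders:
DECLARE
  -- Collection type matching the column we want to keep around
  TYPE t_id_tab IS TABLE OF SomeTable.id%TYPE;
  v_ids t_id_tab;
BEGIN
  SELECT id
    BULK COLLECT INTO v_ids
    FROM SomeTable
   WHERE SomeColumn = 'Some Value';

  -- do whatever you need to do with the collected ids
  FOR i IN 1 .. v_ids.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(v_ids(i));
  END LOOP;
END;
/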
However, if the temp data is large (more than a couple thousand rows) and/or you need to be able to do something like SELECT * FROM your_temp_data..., then your best bet is to use Oracle's global temporary tables. A GTT is a table which is used to hold data which should only last as long as either a single transaction or a complete session (i.e. as long as you're attached to the database). Documentation here and here, and another write-up on them here.
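A hedged example of a session-scoped GTT for the same data (table and column names are placeholders):
-- Created once as an ordinary schema object, not per session;
-- each session sees only its own rows, which vanish when the session ends
CREATE GLOBAL TEMPORARY TABLE my_temp_ids (
  id NUMBER
) ON COMMIT PRESERVE ROWS;

INSERT INTO my_temp_ids (id)
  SELECT id FROM SomeTable WHERE SomeColumn = 'Some Value';

-- ... do whatever you need with SELECT * FROM my_temp_ids ...

-- No DROP needed afterwards; just clear your session's rows
DELETE FROM my_temp_ids;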
Share and enjoy.

need to connect to two different db from sqlplus

I need to take information from two different databases.
select * from TABLE_ONDB2 where column_on_db2 in ( select column_on_db1 from TABLE_ONDB1 );
The problem is that both are on different DB instances, so I am not able to figure out how to put the table names and column names, etc.
I hope my question is clear.
I'd try to do it with a Database Link:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/ds_concepts002.htm
That is, however, not a SQL*Plus feature. It works by making a connection from DB2 to DB1 (the database is doing that).
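For illustration, creating such a link on DB2 might look like this (the link name, user, password and TNS alias are placeholders):
CREATE DATABASE LINK db_link_name
  CONNECT TO db1_user IDENTIFIED BY db1_password
  USING 'DB1_TNS_ALIAS';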
You can then query both tables from DB2 with the '@db-link' name notation, e.g.:
select *
from TABLE_ONDB2
where column_on_db2
in (select column_on_db1 from TABLE_ONDB1@DB_LINK_NAME);
                                         ^^^^^^^^^^^^^
The benefit is that you can access the table in all different ways, including as a join.
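For instance, a join across the link might look like this (the join condition is just an illustration):
select t2.*
from TABLE_ONDB2 t2
join TABLE_ONDB1@DB_LINK_NAME t1
  on t1.column_on_db1 = t2.column_on_db2;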
