I am having a bit of trouble with an INSERT ... SELECT across a database link in Oracle 10. I am using the following statement:
INSERT INTO LOCAL.TABLE_1 ( COL1, COL2)
SELECT COL1, COL2
FROM REMOTE.TABLE1@dblink s
WHERE COL1 IN ( SELECT COL1 FROM WORKING_TABLE)
When I run the statement the following is what gets run against the remote server on the DB Link:
SELECT /*+ OPAQUE_TRANSFORM */ "COL1", "COL2"
FROM "REMOTE"."TABLE1" "S"
If I run the select only and do not do the insert into the following is run:
SELECT /*+ */ "A1"."COL1"
, "A1"."COL2"
FROM "REMOTE"."TABLE1" "A1"
WHERE "A1"."COL1" =
ANY ( SELECT "A2"."COL1"
FROM "LOCAL"."TABLE1"#! "A2")
The issue is that in the INSERT case the entire table is being pulled across the dblink and then filtered locally, which takes a fair bit of time given the table size. Is there any reason adding the INSERT would change the behavior in this manner?
You may want to use the driving_site hint. There is a good explanation here:
http://www.dba-oracle.com/t_sql_dblink_performance.htm
When it comes to DML, Oracle ignores any DRIVING_SITE hint and executes the statement at the site of the DML target. So I doubt you would be able to change that (even using the WITH approach described below). A possible workaround is to create a synonym for LOCAL.TABLE_1 on the remote database and use that synonym in your INSERT statement.
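A rough sketch of that synonym workaround (untested; the reverse link LOCAL_LINK, the TNS alias, and the synonym name are illustrative assumptions, not details from the thread):
-- On the REMOTE database: a link pointing back to the local database,
-- and a synonym that masks the local target table.
CREATE DATABASE LINK local_link
CONNECT TO local_user IDENTIFIED BY local_password
USING 'LOCAL_TNS_ALIAS';
CREATE SYNONYM table_1_syn FOR LOCAL.TABLE_1@local_link;
-- Run the INSERT on the remote site: the big table is then filtered
-- where it lives, and only the matching rows cross the link.
INSERT INTO table_1_syn (col1, col2)
SELECT col1, col2
FROM REMOTE.TABLE1 s
WHERE col1 IN (SELECT col1 FROM working_table@local_link);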
Leveraging the WITH clause could optimize retrieval of your working set (note that the WITH clause belongs to the subquery, so it goes after INSERT INTO):
INSERT INTO LOCAL.TABLE_1 ( COL1, COL2)
WITH remote_rows AS
(SELECT /*+ DRIVING_SITE(s) */ COL1, COL2
FROM REMOTE.TABLE1@dblink s
WHERE COL1 IN ( SELECT COL1 FROM WORKING_TABLE))
SELECT COL1, COL2
FROM remote_rows
Oracle will ignore the DRIVING_SITE hint for INSERT statements, as DML is always executed locally. The way around this is to open a cursor with the DRIVING_SITE hint, then loop through it with BULK COLLECT/FORALL and insert into the local target table.
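A minimal sketch of that pattern, reusing the names from the question (untested):
DECLARE
-- DRIVING_SITE makes the remote database execute the query, so only
-- the matching rows are sent across the link.
CURSOR c IS
SELECT /*+ DRIVING_SITE(s) */ s.COL1, s.COL2
FROM REMOTE.TABLE1@dblink s
WHERE s.COL1 IN (SELECT COL1 FROM WORKING_TABLE);
TYPE t_rows IS TABLE OF c%ROWTYPE;
v_rows t_rows;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO v_rows LIMIT 1000;
FORALL i IN 1 .. v_rows.COUNT
INSERT INTO LOCAL.TABLE_1 (COL1, COL2)
VALUES (v_rows(i).COL1, v_rows(i).COL2);
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
COMMIT;
END;
/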
How big is WORKING_TABLE?
If it is small enough, you could try selecting from WORKING_TABLE into a collection, and then passing the elements of that collection as elements of an IN list.
declare
TYPE t_type IS TABLE OF VARCHAR2(60);
v_coll t_type;
begin
dbms_application_info.set_module('TEST','TEST');
--
select distinct object_type
bulk collect into v_coll
from user_objects;
--
IF v_coll.count > 20 THEN
raise_application_error(-20001,'You would need '||v_coll.count||' elements in the IN list, but only 20 are supported');
ELSE
-- Pad the collection with NULLs so that elements 1..20 all exist.
v_coll.extend(20);
END IF;
insert into abc (object_type, object_name)
select object_type, object_name
from user_objects@tmfprd
where object_type in
(v_coll(1), v_coll(2), v_coll(3), v_coll(4), v_coll(5),
v_coll(6), v_coll(7), v_coll(8), v_coll(9), v_coll(10),
v_coll(11), v_coll(12), v_coll(13), v_coll(14), v_coll(15),
v_coll(16), v_coll(17), v_coll(18), v_coll(19), v_coll(20)
);
--
dbms_output.put_line(sql%rowcount);
end;
/
An INSERT with a CARDINALITY hint seems to work in 11.2:
INSERT /*+ append */
INTO MIG_CGD30_TEST
SELECT /*+ cardinality(ZFD 400000) cardinality(CGD 60000000)*/
TRIM (CGD.NUMCPT) AS NUMCPT, TRIM (ZFD.NUMBDC_NEW) AS NUMBDC
FROM CGD30@DBL_MIG_THALER CGD,
ZFD10@DBL_MIG_THALER ZFD,
EVD01_ADS_DR3W2 EVD
Could you please tell me how to compare the differences between a table and my SELECT query, and insert those results into a separate table? My plan is to create one base table (named RESULT) using a SELECT statement and populate it with the current result set. Then the next day I would like to create a procedure which will compare the same SELECT with the RESULT table, and insert the differences into another table called DIFFERENCES.
Any ideas?
Thanks!
You can create the RESULT_TABLE using CTAS (CREATE TABLE ... AS SELECT) as follows:
CREATE TABLE RESULT_TABLE
AS SELECT ... -- YOUR QUERY
Then you can use the following procedure which calculates the difference between your query and data from RESULT_TABLE:
CREATE OR REPLACE PROCEDURE FIND_DIFF
AS
BEGIN
INSERT INTO DIFFERENCES
--data present in the query but not in RESULT_TABLE
(SELECT ... -- YOUR QUERY
MINUS
SELECT * FROM RESULT_TABLE)
UNION
--data present in the RESULT_TABLE but not in the query
(SELECT * FROM RESULT_TABLE
MINUS
SELECT ... );-- YOUR QUERY
END;
/
I have used UNION of the two MINUS differences, taken in both orders, so that deleted data is also inserted into the DIFFERENCES table. If that is not a requirement, remove the query after/before the UNION according to your needs.
-- Create a table with results from the query, and ID as primary key
create table result_t as
select id, col_1, col_2, col_3
from <some-query>;
-- Create a table with new rows, deleted rows or updated rows
create table differences_t as
select id
-- Old values
,b.col_1 as old_col_1
,b.col_2 as old_col_2
,b.col_3 as old_col_3
-- New values
,a.col_1 as new_col_1
,a.col_2 as new_col_2
,a.col_3 as new_col_3
-- Execute the query once again
from <some-query> a
-- Outer join to also detect new/deleted rows
full join result_t b using(id)
-- Null aware comparison
where decode(a.col_1, b.col_1, 1, 0) = 0
or decode(a.col_2, b.col_2, 1, 0) = 0
or decode(a.col_3, b.col_3, 1, 0) = 0;
I have a procedure with a few SELECT statements (from different tables) whose output is loaded into a temp table; all the records loaded into this temp table are then displayed as the output. Now I have a requirement that my procedure should not use this temp table.
Can you please let me know the options of achieving it?
Assuming that the SELECT queries have the same number of columns and matching datatypes, your best approach could be to use UNION ALL and a ref cursor to display the output. Hope the snippet below helps.
--You could also use nested table types here instead of temp tables; this example simply uses UNION ALL
DECLARE
p_lst sys_refcursor;
BEGIN
--Assuming that all the SELECT statements have same number of columns as well as datatype
OPEN p_lst FOR
(SELECT 'AV',1 FROM DUAL
UNION ALL
SELECT 'SH',2 FROM DUAL
UNION ALL
SELECT 'TK',3 FROM DUAL
);
END;
Assuming MySQL...
You could do something like
UPDATE [table1] AS t1
INNER JOIN [table2] AS t2
ON t1.[col1] = t2.[col1]
SET t1.[col2] = t2.[col2];
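Since the rest of this thread is Oracle, note that the Oracle way to write such an update-join is usually MERGE; a sketch using the same placeholder names as the snippet above:
MERGE INTO table1 t1
USING table2 t2
ON (t1.col1 = t2.col1)
WHEN MATCHED THEN
UPDATE SET t1.col2 = t2.col2;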
What is the data type of ORA_ROWSCN? It seems to be NUMBER but I cannot find it specified in the documentation.
declare
myscn ???;
begin
select ora_rowscn into myscn from t where ...;
end;
It is a NUMBER. You can see for yourself by creating a simple view and describing it (or selecting from *_tab_columns). Here's a simple sqlfiddle demonstration:
create table foo (
col1 number
);
create view vw_foo
as
select col1, ora_rowscn scn
from foo;
select *
from user_tab_cols
where table_name = 'VW_FOO';
If you want more detail than you'd probably ever care about on the format of the SCN (system change number), here is one decent article
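So the declaration from the question can simply use NUMBER; a minimal sketch (the table t and the ROWNUM filter are stand-ins, since the question's WHERE clause was elided):
declare
myscn number; -- ORA_ROWSCN is a NUMBER
begin
select ora_rowscn into myscn from t where rownum = 1;
dbms_output.put_line(myscn);
end;
/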
Oracle 12 introduced a nice feature (which should have been there long ago, btw!) - identity columns. So here's a script:
CREATE TABLE test (
a INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
b VARCHAR2(10)
);
-- Ok
INSERT INTO test (b) VALUES ('x');
-- Ok
INSERT INTO test (b)
SELECT 'y' FROM dual;
-- Fails
INSERT INTO test (b)
SELECT 'z' FROM dual UNION ALL SELECT 'zz' FROM DUAL;
First two inserts run without issues providing values for 'a' of 1 and 2. But the third one fails with ORA-01400: cannot insert NULL into ("DEV"."TEST"."A"). Why did this happen? A bug? Nothing like this is mentioned in the documentation part about identity column restrictions. Or am I just doing something wrong?
I believe the query below works; I haven't tested it!
INSERT INTO Test (b)
SELECT * FROM
(
SELECT 'z' FROM dual
UNION ALL
SELECT 'zz' FROM dual
);
Not sure if it helps you in any way.
For GENERATED ALWAYS AS IDENTITY, Oracle internally just uses a sequence, and the restrictions on a general sequence apply to it as well.
NEXTVAL is used to fetch the next available sequence value, and it is a pseudocolumn.
The below is from the Oracle documentation:
You cannot use CURRVAL and NEXTVAL in the following constructs:
A subquery in a DELETE, SELECT, or UPDATE statement
A query of a view or of a materialized view
A SELECT statement with the DISTINCT operator
A SELECT statement with a GROUP BY clause or ORDER BY clause
A SELECT statement that is combined with another SELECT statement with the UNION, INTERSECT, or MINUS set operator
The WHERE clause of a SELECT statement
DEFAULT value of a column in a CREATE TABLE or ALTER TABLE statement
The condition of a CHECK constraint
The subquery and set-operation rules above should answer your question.
As for the reason for the NULL: when a pseudocolumn (e.g. NEXTVAL) is used with a set operation or in any of the other constructs listed above, the output is NULL, because Oracle cannot evaluate it while combining multiple SELECTs.
Let us look at the query below:
select rownum from dual
union all
select rownum from dual
the result is
ROWNUM
1
1
In Oracle, I have a requirement wherein I need to insert records from Source to Target and then update the PROCESSED_DATE field of the source once the target has been updated.
One way is to use cursors and loop row by row to achieve this.
Is there any other, more efficient way to do the same?
No need for a cursor. Assuming you want to transfer those rows that have not yet been transferred (identified by a NULL value in processed_date):
insert into target_table (col1, col2, col3)
select col1, col2, col3
from source_table
where processed_date is null;
update source_table
set processed_date = current_timestamp
where processed_date is null;
commit;
To avoid updating rows that were inserted during the runtime of the INSERT or between the INSERT and the update, start the transaction in serializable mode.
Before you run the INSERT, start the transaction using the following statement:
set transaction isolation level SERIALIZABLE;
For more details see the manual:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10005.htm#i2067247
http://docs.oracle.com/cd/E11882_01/server.112/e25789/consist.htm#BABCJIDI
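Putting it together, the whole exchange as one serializable transaction (just the statements from above, in order):
set transaction isolation level SERIALIZABLE;
insert into target_table (col1, col2, col3)
select col1, col2, col3
from source_table
where processed_date is null;
update source_table
set processed_date = current_timestamp
where processed_date is null;
commit;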
A trigger should work: the target table can have an AFTER INSERT trigger that updates the source table's processed-date column, as sketched below.
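A rough sketch of that trigger (illustrative only: it assumes both tables share an id column and uses made-up names):
CREATE OR REPLACE TRIGGER trg_target_processed
AFTER INSERT ON target_table
FOR EACH ROW
BEGIN
-- Mark the matching source row as processed.
UPDATE source_table
SET processed_date = SYSDATE
WHERE id = :NEW.id;
END;
/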
My preferred solution in this sort of instance is to use a PL/SQL array along with batch DML, e.g.:
DECLARE
CURSOR c IS SELECT * FROM tSource;
TYPE tarrt IS TABLE OF c%ROWTYPE INDEX BY BINARY_INTEGER;
tarr tarrt;
BEGIN
OPEN c;
FETCH c BULK COLLECT INTO tarr;
CLOSE c;
FORALL i IN 1..tarr.COUNT
INSERT INTO tTarget VALUES tarr(i);
FORALL i IN 1..tarr.COUNT
UPDATE tSource SET processed_date = SYSDATE
WHERE tSource.id = tarr(i).id;
END;
The above code is an example only and makes some assumptions about the structure of your tables.
It first queries the source table, and will only insert and update those records - which means you don't need to worry about other sessions concurrently inserting more records into the source table while this is running.
It can also be easily changed to process the rows in batches (using the fetch LIMIT clause and a loop) rather than all-at-once like I have here.
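For reference, a sketch of that batched variant (same assumed table structure as the block above):
DECLARE
CURSOR c IS SELECT * FROM tSource;
TYPE tarrt IS TABLE OF c%ROWTYPE INDEX BY BINARY_INTEGER;
tarr tarrt;
BEGIN
OPEN c;
LOOP
-- Process 500 rows per round trip instead of all at once.
FETCH c BULK COLLECT INTO tarr LIMIT 500;
FORALL i IN 1..tarr.COUNT
INSERT INTO tTarget VALUES tarr(i);
FORALL i IN 1..tarr.COUNT
UPDATE tSource SET processed_date = SYSDATE
WHERE tSource.id = tarr(i).id;
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END;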
Got another answer from someone else. That solution seems much more reasonable than enabling the serializable isolation level, as all my new records will have PROCESSED_DATE as NULL (30 rows were inserted within the time the records were being inserted into the target table).
Also, the PROCESSED_DATE = NULL rows can be updated only by my job. No other user can update these records at any point in time.
declare
date_stamp date;
begin
select sysdate
into date_stamp
from dual;
update source set processed_date = date_stamp
where processed_date is null;
Insert into target
select * from source
where processed_date = date_stamp;
commit;
end;
/
Let me know any further thoughts. Thanks a lot for all your help on this.