Overcoming the restriction on bulk inserts over a database link - oracle

It appears as though there's an implementation restriction that forbids the use of forall .. insert on Oracle, when used over a database link. This is a simple example to demonstrate:
connect schema/password@db1
create table tmp_ben_test (
a number
, b number
, c date
, constraint pk_tmp_ben_test primary key (a, b)
);
Table created.
connect schema/password@db2
Connected.
declare
type r_test is record ( a number, b number, c date);
type t__test is table of r_test index by binary_integer;
t_test t__test;
cursor c_test is
select 1, level, sysdate
from dual
connect by level <= 10
;
begin
open c_test;
fetch c_test bulk collect into t_test;
forall i in t_test.first .. t_test.last
insert into tmp_ben_test@db1
values t_test(i)
;
close c_test;
end;
/
Very confusingly this fails in 9i with the following error:
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SCHEMA"."TMP_BEN_TEST"."A")
ORA-02063: preceding line from DB1
ORA-06512: at line 18
It was only after checking in 11g that I realised this was an implementation restriction.
ERROR at line 18:
ORA-06550: line 18, column 4:
PLS-00739: FORALL INSERT/UPDATE/DELETE not supported on remote tables
The really obvious way round this is to change forall .. to:
for i in t_test.first .. t_test.last loop
insert into tmp_ben_test@db1
values t_test(i);
end loop;
but I'd rather keep it down to a single insert if at all possible. Tom Kyte suggests the use of a global temporary table. Inserting the data into a GTT and then pushing it over a DB link seems like massive overkill for a set of data that is already in a user-defined type.
Just to clarify, this example is extremely simplistic compared to what is actually happening. There is no way we will be able to do a simple insert into, and there is no way all the operations could be done in a GTT. Large parts of the code have to work with the user-defined type.
Is there another, simpler or less DMLy, way around this restriction?

What restrictions do you face on the remote database? If you can create objects there you have a workaround: on the remote database create the collection type and a procedure which takes the collection as a parameter and executes the FORALL statement.
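For example, a minimal sketch of that workaround, with invented names (the remote_load package and its ins procedure are hypothetical), passes one index-by table per column so the FORALL runs against a local table on db1:
-- on db1 (hypothetical package; adjust names and types to the real data)
create or replace package remote_load as
type t_num is table of number index by binary_integer;
type t_date is table of date index by binary_integer;
procedure ins (p_a in t_num, p_b in t_num, p_c in t_date);
end remote_load;
/
create or replace package body remote_load as
procedure ins (p_a in t_num, p_b in t_num, p_c in t_date) is
begin
-- the forall now targets a local table, so the restriction no longer applies
forall i in p_a.first .. p_a.last
insert into tmp_ben_test (a, b, c)
values (p_a(i), p_b(i), p_c(i));
end ins;
end remote_load;
/
On db2 the collections are declared using the remote package's types, and a single remote call replaces the row-by-row inserts:
declare
l_a remote_load.t_num@db1;
l_b remote_load.t_num@db1;
l_c remote_load.t_date@db1;
begin
select 1, level, sysdate
bulk collect into l_a, l_b, l_c
from dual
connect by level <= 10;
remote_load.ins@db1(l_a, l_b, l_c);
end;
/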

If you create the t__test/r_test types in db2 and then create public synonyms for them on db1, you should be able to call a procedure from db1 to db2 that fills t_test and returns it to db1. Then you should be able to insert into your local table.
I'm assuming you would use packaged types and procedures in the real world, not anonymous blocks.
Also, it would not be the ideal solution for big datasets; for those, a GTT or similar would be better.

Related

Does Oracle support RETURNING in an SQL statement?

PostgreSQL supports a RETURNING clause, for instance as in
UPDATE some_table SET x = 'whatever' WHERE conditions RETURNING x, y, z
and MSSQL supports a variant of that syntax with the OUTPUT clause.
However, Oracle's "RETURNING INTO" seems intended for placing values into variables, from within the context of a stored procedure.
Is there a way to write an SQL equivalent of the one above that would work in Oracle, without involving a stored procedure?
Note: I am looking for a pure-SQL solution if one exists, not one that is language-specific or would require special handling in the code. The actual SQL is dynamic, and the code that makes the call is database-agnostic, with only the SQL being adapted.
Oracle does not directly support using the DML returning clause in a SELECT statement, but you can kind of fake that behavior by using a WITH function. Although the below code uses PL/SQL, the statement is still a pure SQL statement and can run anywhere a regular SELECT statement can run.
SQL> create table some_table(x varchar2(100), y number);
Table created.
SQL> insert into some_table values('something', 1);
1 row created.
SQL> commit;
Commit complete.
SQL> with function update_and_return return number is
2 v_y number;
3 --Necessary to avoid: ORA-14551: cannot perform a DML operation inside a query
4 pragma autonomous_transaction;
5 begin
6 update some_table set x = 'whatever' returning y into v_y;
7 --Necessary to avoid: ORA-06519: active autonomous transaction detected and rolled back
8 commit;
9 return v_y;
10 end;
11 select update_and_return from dual;
12 /
UPDATE_AND_RETURN
-----------------
1
Unfortunately there are major limitations with this approach that may make it impractical for non-trivial cases:
The DML must be committed within the statement.
The WITH function syntax requires both client and server versions of 12.1 and above.
Returning multiple columns or rows will require more advanced features. Multiple rows will require the function to return a collection, and the SELECT portion of the statement will have to use the TABLE function. Multiple columns will require a new type for each different result set. If you're lucky, you can use one of the built-in types, like SYS.ODCIVARCHAR2LIST. Otherwise you may need to create your own custom types.
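As a rough, untested sketch of the multi-row case, still assuming the some_table created above, the function could return a built-in collection type such as SYS.ODCINUMBERLIST and be queried through the TABLE operator:
with function update_and_return return sys.odcinumberlist is
v_ys sys.odcinumberlist;
--Necessary to avoid: ORA-14551, as in the single-row example
pragma autonomous_transaction;
begin
update some_table set x = 'whatever' returning y bulk collect into v_ys;
--Necessary to avoid: ORA-06519, as in the single-row example
commit;
return v_ys;
end;
select column_value as y from table(update_and_return);
/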
You can do it in SQL, with no need for PL/SQL, but this depends on your tool and/or language. Here's an example in SQL*Plus:
SQL> create table t0 as select * from dual;
Table created.
SQL> var a varchar2(2)
SQL> update t0 set dummy='v' returning dummy into :a;
1 row updated.
SQL> print a
A
--------------------------------
v

Oracle Forms compiler mark a type as function

I'm making a procedure inside an Oracle Form that calls a backtracking routine which inserts into a table (the solution table). The backtracking routine is fed by a varray (item_array) of items (beans). The problem is that the compiler says there is no function named item.
existing objects in the database:
CREATE TYPE item IS object( NUM_OPERACIO NUMBER, TITULS NUMBER);
CREATE TYPE item_array IS VARRAY(1000) OF item;
create table my_table (NUM_OPERACIO NUMBER, TITULS NUMBER);
insert into my_table ( NUM_OPERACIO,TITULS ) values (1,10);
insert into my_table ( NUM_OPERACIO,TITULS ) values (2,20);
insert into my_table ( NUM_OPERACIO,TITULS ) values (3,30);
procedure
PROCEDURE solver
IS
arr item_array;
BEGIN
SELECT item( NUM_OPERACIO,TITULS )
BULK COLLECT INTO arr
FROM my_table;
delete from solucion ;
backtra(arr,1,0,30);
END;
What can I do to solve this?
That is a PLS error number. From the documentation:
"PLS-00591: this feature is not supported in client-side programs
Cause: One of the following features was used in a wrong context: pragma AUTONOMOUS_TRANSACTION, dynamic SQL statements, (e.g. EXECUTE IMMEDIATE), and bulk binds. These listed features can only be used in server-side programs but not client-side programs."
Your code uses a bulk bind, so the error fits. Forms PL/SQL and database PL/SQL are similar, but they use different engines, and the features supported in both are only a subset of all the features in either. The solution is to move the array population into a database procedure and call it from Forms as needed.
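For instance, a minimal sketch, assuming the backtra procedure and the solucion table already exist in the database, would be to compile essentially the same code as a stored procedure and have Forms simply call it:
-- compiled in the database, not in Forms
CREATE OR REPLACE PROCEDURE solver
IS
arr item_array;
BEGIN
SELECT item(NUM_OPERACIO, TITULS)
BULK COLLECT INTO arr
FROM my_table;
delete from solucion;
backtra(arr, 1, 0, 30);
END solver;
/
The Forms-side code then reduces to a single call (begin solver; end;), which the Forms engine can handle because the bulk bind now happens server-side.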
Without knowing precisely what problem you're trying to solve, it's hard to give a better answer.

Alternate method to global temp tables for Oracle Stored Procedure

I have read and understand that Oracle offers only global temporary tables, unlike MS SQL Server which allows local #temp tables. The situation I have would call for me to create hundreds of global temp tables in order to complete the DB conversion I am working on from MS SQL to Oracle. I want to know if there is another method out there, within an Oracle stored procedure, other than creating all of these tables, which would then have to be maintained in the DB.
Thank You
" Most of the time the only thing the temp tables are used within a
stored proc and then truncated at the end. We do constant upgrades to
our applications and having them somewhat comparable ensures that when
a change is made in one version that it can be easily merged to the
other."
T-SQL Temp tables are essentially memory structures. They provide benefits in MSSQL which are less obvious in Oracle, because of differences in the two RDBMS architectures. So if you were looking to migrate then you would be well advised to take an approach more fitted to Oracle.
However, you have a different situation, and obviously keeping the two code bases in sync will make your life easier.
The closest thing to temporary tables as you want to use them are PL/SQL collections; specifically, nested tables.
There are a couple of ways of declaring these. The first is to use a SQL template - a cursor - and define a nested table type based on it. The second is to declare a record type and then define a nested table on that. In either case, populate the collection variable with a bulk operation.
declare
-- approach #1 - use a cursor
cursor c1 is
select *
from t23;
type nt1 is table of c1%rowtype;
recs1 nt1;
-- approach #1a - use a cursor with an explicit projection
cursor c1a is
select id, col_d, col_2
from t23;
type nt1a is table of c1a%rowtype;
recs1a nt1a;
-- approach #2 - use a PL/SQL record
type r2 is record (
my_id number
, some_date date
, a_string varchar2(30)
);
type nt2 is table of r2;
recs2 nt2;
begin
select *
bulk collect into recs1
from t23;
select id, col_d, col_2
bulk collect into recs2
from t23;
end;
/
Using a cursor offers the advantage of automatically reflecting changes in the underlying table(s), whereas the RECORD offers stability in the face of such changes. It just depends what you want :)
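As a sketch of how such a collection then stands in for a temp table (still using the hypothetical t23 table, and assuming a t23_archive table with the same columns), you would work on the rows in memory and write them back out in one bulk statement:
declare
cursor c1 is select * from t23;
type nt1 is table of c1%rowtype;
recs1 nt1;
begin
select * bulk collect into recs1 from t23;
-- work on the rows in memory, much as you would against a temp table
for i in 1 .. recs1.count loop
recs1(i).col_d := trunc(recs1(i).col_d); -- assumes col_d is a DATE column
end loop;
-- write the processed rows out in a single bulk operation
forall i in 1 .. recs1.count
insert into t23_archive values recs1(i);
end;
/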
There's a whole chapter in the PL/SQL reference manual. Read it to find out more.

Oracle PL/SQL: Calling a procedure from a trigger

I get this error whenever I try to fire a trigger after insert on the passengers table. The trigger is supposed to call a procedure that takes two parameters from the newly inserted values and, based on those, updates another table, the booking table. However, I am getting this error:
ORA-04091: table AIRLINESYSTEM.PASSENGER is mutating, trigger/function may not see it
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 11
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 15
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1", line 3
ORA-04088: error during execution of trigger 'AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1' (Row 3)
I compiled and tested the procedure in the SQL command line and it works fine. The problem seems to be with the trigger. This is the trigger code:
create or replace trigger "CALCULATE_FLIGHT_PRICE_T1"
AFTER
insert on "PASSENGER"
for each row
begin
CALCULATE_FLIGHT_PRICE(:NEW.BOOKING_ID);
end;
Why isn't the trigger calling the procedure?
You are using database triggers in a way they are not supposed to be used. The database trigger tries to read the table it is currently modifying. If Oracle allowed you to do so, you'd be performing dirty reads.
Fortunately, Oracle warns you about this, and you can modify your design.
The best solution would be to create an API. A procedure, preferably in a package, that allows you to insert passengers in exactly the way you would like it. In pseudo-PL/SQL-code:
procedure insert_passenger
( p_passenger_nr in number
, p_passenger_name in varchar2
, ...
, p_booking_id in number
, p_dob in number
)
is
begin
insert into passenger (...)
values
( p_passenger_nr
, p_passenger_name
, ...
, p_booking_id
, p_dob
);
calculate_flight_price
( p_booking_id
, p_dob
);
end insert_passenger;
/
Instead of your insert statement, you would now call this procedure. And your mutating table problem will disappear.
If you insist on using a database trigger, then you would need to avoid the select statement in cursor c_passengers. This doesn't make any sense: you have just inserted a row into table passengers and know all the column values. Then you call calculate_flight_price to retrieve the column DOB, which you already know.
Just add a parameter P_DOB to your calculate_flight_price procedure and call it with :new.dob, like this:
create or replace trigger calculate_flight_price_t1
after insert on passenger
for each row
begin
calculate_flight_price
( :new.booking_id
, :new.dob
);
end;
Oh my goodness... You are trying a Dirty Read in the cursor. This is a bad design.
If you allow a dirty read, it may return the wrong answer, and it may even return an answer that never existed in the table. In a multiuser database, a dirty read can be a dangerous feature.
The point here is that dirty read is not a feature; rather, it's a liability. In Oracle Database, it's just not needed. You get all of the advantages of a dirty read—no blocking—without any of the incorrect results.
Read more on "READ UNCOMMITTED isolation level" which allows dirty reads. It provides a standards-based definition that allows for nonblocking reads.
The other way round
You are misusing the trigger; the wrong kind of trigger is being used.
You insert/update a row in table A, and a trigger on table A (for each row) executes a query on table A (through a procedure)?!
Oracle throws ORA-04091, which is expected and normal behaviour. Oracle wants to protect you from yourself, since it guarantees that each statement is atomic (i.e. it will either fail or succeed completely) and that each statement sees a consistent view of the data.
You would expect the query (2) not to see the row inserted in (1); anything else would be a contradiction.
Solution: -- use before instead of after
CREATE OR REPLACE TRIGGER SOMENAME
BEFORE INSERT OR UPDATE ON SOMETABLE

select * through dblink

I have some trouble when trying to update a table by looping over a cursor which selects from a source table through a dblink.
I have two database DB1, DB2.
They are two different database instance.
And I am using the following statement in DB1:
CURSOR TestCursor IS
SELECT a.*, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2 a;
BEGIN
For C1 in TestCursor loop
INSERT into RPT.TARGET
(
/*The company_name and cust_id are select from SOURCE table from DB2*/
COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B
)
values
(
C1.COMPANY_NAME, C1.CUST_ID, C1.TEST_COL_A , C1.TEST_COL_B
) ;
End loop;
/*Some code...*/
End;
Everything works fine until I add a column "NEW_COL" to the SOURCE table on DB2.
The inserted data gets the wrong values.
The value of TEST_COL_A, as I expect, should be 'A'.
However, it contains the value of NEW_COL, the column I added to the SOURCE table.
And the value of TEST_COL_B contains 'A'.
Has anyone encountered the same issue?
It seems like Oracle caches the table's columns when it compiles.
Is there any way to add a column to the source table without recompiling?
According to this:
Oracle Database does not manage dependencies among remote schema objects other than local-procedure-to-remote-procedure dependencies.
For example, assume that a local view is created and defined by a query that references a remote table. Also assume that a local procedure includes a SQL statement that references the same remote table. Later, the definition of the table is altered.
Therefore, the local view and procedure are never invalidated, even if the view or procedure is used after the table is altered, and even if the view or procedure now returns errors when used. In this case, the view or procedure must be altered manually so that errors are not returned. In such cases, lack of dependency management is preferable to unnecessary recompilations of dependent objects.
In this case you aren't quite seeing errors, but the cause is the same. You also wouldn't have a problem if you used explicit column names instead of *, which is usually safer anyway. If you're using * you can't avoid recompiling (unless, I suppose, the * is the last item in the select list, in which case any extra columns on the end wouldn't cause a problem - as long as their names didn't clash).
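For example, a sketch of the cursor with an explicit column list (the column names are taken from the question's insert) would not be disturbed by columns later added to the remote table:
CURSOR TestCursor IS
SELECT a.COMPANY_NAME, a.CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2 a;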
I recommend that you use a single set-processing insert statement in DB1 rather than a row-at-a-time cursor FOR loop for the insert, for example:
INSERT into RPT.TARGET
select COMPANY_NAME, CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2
;
Rationale:
Set processing will almost always outperform row-at-a-time processing [which is really slow-at-a-time processing].
Set processing the insert is a scalable solution. If the application needs to scale to tens of thousands or millions of rows, the row-at-a-time solution will not likely scale.
Also, using the select * construct is dangerous for the reason you encountered [and other similar reasons].
