I am having trouble updating a table by looping over a cursor that selects from a source table through a dblink.
I have two databases, DB1 and DB2; they are two separate database instances.
I am using the following statement in DB1:
DECLARE
  CURSOR TestCursor IS
    SELECT a.*, 'A' TEST_COL_A, 'B' TEST_COL_B
    FROM rpt.SOURCE@DB2 a;
BEGIN
  FOR C1 IN TestCursor LOOP
    INSERT INTO RPT.TARGET
    (
      /* COMPANY_NAME and CUST_ID are selected from the SOURCE table on DB2 */
      COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B
    )
    VALUES
    (
      C1.COMPANY_NAME, C1.CUST_ID, C1.TEST_COL_A, C1.TEST_COL_B
    );
  END LOOP;
  /* Some code... */
END;
Everything worked fine until I added a column NEW_COL to the SOURCE table on DB2.
Now the inserted data gets the wrong values.
The value of TEST_COL_A should, as I expect, be 'A'; instead it contains the value of NEW_COL, the column I added to SOURCE, and TEST_COL_B contains 'A'.
Has anyone encountered the same issue?
It seems as if Oracle caches the table's columns when it compiles.
Is there any way to add a column to the source table without recompiling?
According to this:
Oracle Database does not manage dependencies among remote schema objects other than local-procedure-to-remote-procedure dependencies.
For example, assume that a local view is created and defined by a query that references a remote table. Also assume that a local procedure includes a SQL statement that references the same remote table. Later, the definition of the table is altered. Therefore, the local view and procedure are never invalidated, even if the view or procedure is used after the table is altered, and even if the view or procedure now returns errors when used. In this case, the view or procedure must be altered manually so that errors are not returned. In such cases, lack of dependency management is preferable to unnecessary recompilations of dependent objects.
In this case you aren't quite seeing errors, but the cause is the same. You also wouldn't have a problem if you used explicit column names instead of *, which is usually safer anyway. If you're using * you can't avoid recompiling (unless, I suppose, the * is the last item in the select list, in which case any extra columns on the end wouldn't cause a problem - as long as their names didn't clash).
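For example, the cursor could project its columns explicitly, using the column names already present in your insert:
CURSOR TestCursor IS
  SELECT a.COMPANY_NAME, a.CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
  FROM rpt.SOURCE@DB2 a;
With that projection, adding NEW_COL to the remote table cannot shift TEST_COL_A and TEST_COL_B out of position.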
I recommend that you use a single set-processing insert statement in DB1, rather than a row-at-a-time cursor FOR loop, for example:
INSERT INTO RPT.TARGET (COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B)
SELECT COMPANY_NAME, CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2;
Rationale:
Set processing will almost always outperform row-at-a-time processing [which is really slow-at-a-time processing].
A set-processing insert is a scalable solution. If the application needs to scale to tens of thousands or millions of rows, the row-at-a-time solution will not likely scale.
Also, the SELECT * construct is dangerous for the reason you encountered [and other similar reasons].
Related
I am using MyBatis to perform a massive batch insert on an Oracle DB.
My process is very simple: I am taking records from a list of files and inserting them into a specific table after performing some checks on the data.
- Each file contains an average of 180,000 records, and I can have more than one file.
- Some records can be present in more than one file.
- A record is identical to another one if EVERY column matches; in other words, I cannot simply perform a check on a specific field. I have defined a constraint in my DB which makes sure this condition is satisfied.
To put it simply, I want to just ignore the constraint exception Oracle will give me in case that constraint is violated.
Record is not present? --> insert
Record is already present? --> go ahead
Is this possible with MyBatis? Or can I accomplish something at the DB level?
I have control over both the application server and the DB, so please tell me the most efficient way to accomplish this task (even though I'd like to avoid being too DB-dependent...).
Of course, I'd like to avoid performing a SELECT before each insertion... given the number of records I am dealing with, it would ruin my application's performance.
Use the IGNORE_ROW_ON_DUPKEY_INDEX hint:
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table_name index_name) */
into table_name
select * ...
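For instance, with concrete names (both the table batch_target and its unique index batch_target_uk are illustrative; the hint requires Oracle 11.1 or later):
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(batch_target batch_target_uk) */
into batch_target (col1, col2, col3)
select col1, col2, col3
from staging_table;
Rows that would violate the unique index are silently skipped instead of raising ORA-00001.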
I'm not sure about JDBC, but at least in OCI it is possible. With batch operations you pass vectors as bind variables, and you also get back vector(s) of returned IDs and a vector of error codes.
You can also use MERGE on the database server side together with custom collection types. Something like:
merge into t
using (select * from TABLE(:var)) v
on (v.id = t.id)
when not matched then insert ...
Here :var is a bind variable whose SQL type is TABLE OF <recordname>; the TABLE() operator casts the bind variable from a collection into a table expression.
Another option is to use the DML error logging clause:
DBMS_ERRLOG.create_error_log (dml_table_name => 't');
insert into t(...) values(...) log errors reject limit unlimited;
Then, after the load, you will have to truncate the error logging table ERR$_T.
Another option would be to use external tables.
Any of these solutions looks like quite a lot of work compared to using SQL*Loader (sqlldr).
Ignore errors with an error logging table:
insert
into table_name
select *
from selected_table
LOG ERRORS INTO SANJI.ERROR_LOG('some comment')
REJECT LIMIT UNLIMITED;
and the error table schema is:
CREATE GLOBAL TEMPORARY TABLE SANJI.ERROR_LOG (
ora_err_number$ number,
ora_err_mesg$ varchar2(2000),
ora_err_rowid$ rowid,
ora_err_optyp$ varchar2(2),
ora_err_tag$ varchar2(2000),
n1 varchar2(128)
)
ON COMMIT PRESERVE ROWS;
I have read that Oracle supports only global temporary tables, unlike MS SQL, which allows local #temp tables. The situation I am in would require me to create hundreds of global temporary tables in order to complete the DB conversion I am working on from MS SQL to Oracle. I want to know if there is another method, within an Oracle stored procedure, other than creating all of these tables, which would then have to be maintained in the DB.
Thank You
" Most of the time the only thing the temp tables are used within a
stored proc and then truncated at the end. We do constant upgrades to
our applications and having them somewhat comparable ensures that when
a change is made in one version that it can be easily merged to the
other."
T-SQL Temp tables are essentially memory structures. They provide benefits in MSSQL which are less obvious in Oracle, because of differences in the two RDBMS architectures. So if you were looking to migrate then you would be well advised to take an approach more fitted to Oracle.
However, you have a different situation, and obviously keeping the two code bases in sync will make your life easier.
The closest thing to temporary tables as you want to use them are PL/SQL collections; specifically, nested tables.
There are a couple of ways of declaring these. The first is to use a SQL template - a cursor - and define a nested table type based on it. The second is to declare a record type and then define a nested table on that. In either case, populate the collection variable with a bulk operation.
declare
    -- approach #1 - use a cursor
    cursor c1 is
        select *
        from t23;
    type nt1 is table of c1%rowtype;
    recs1 nt1;

    -- approach #1a - use a cursor with an explicit projection
    cursor c1a is
        select id, col_d, col_2
        from t23;
    type nt1a is table of c1a%rowtype;
    recs1a nt1a;

    -- approach #2 - use a PL/SQL record
    type r2 is record (
        my_id number
        , some_date date
        , a_string varchar2(30)
    );
    type nt2 is table of r2;
    recs2 nt2;
begin
    select *
    bulk collect into recs1
    from t23;

    select id, col_d, col_2
    bulk collect into recs2
    from t23;
end;
/
Using a cursor offers the advantage of automatically reflecting changes in the underlying table(s), whereas the RECORD offers stability in the face of changes to the underlying table(s). It just depends what you want :)
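Once populated, a collection can be scanned much as you would scan a temp table. A minimal self-contained sketch, reusing the illustrative table t23 and record type from above:
declare
    type r2 is record (
        my_id number
        , some_date date
        , a_string varchar2(30)
    );
    type nt2 is table of r2;
    recs2 nt2;
begin
    select id, col_d, col_2
    bulk collect into recs2
    from t23;

    -- read the collection back row by row, as you would a temp table
    for i in 1 .. recs2.count loop
        dbms_output.put_line(recs2(i).my_id || ': ' || recs2(i).a_string);
    end loop;
end;
/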
There's a whole chapter on collections in the PL/SQL reference manual. Read it to find out more.
I have a question about a cascading trigger.
I have two tables: table data, which has three attributes (id_data, sum, and id_tool), and table tool, which has three attributes (id_tool, name, sum_total). The data and tool tables are joined on id_tool.
I want to create a trigger to keep sum_total up to date, so that when I insert into table data, sum_total in table tool (where tool.id_tool = data.id_tool) is updated too.
I created this trigger, but I get error ORA-04090:
create or replace trigger aft_ins_tool
after insert on data
for each row
declare
    v_stok number;
    v_jum number;
begin
    select sum into v_jum
    from data
    where id_data = :new.id_data;

    select sum_total into v_stok
    from tool
    where id_tool = (select id_tool
                     from data
                     where id_data = :new.id_data);

    if inserting then
        v_stok := v_stok + v_jum;
        update tool
        set sum_total = v_stok
        where id_tool = (select id_tool
                         from data
                         where id_data = :new.id_data);
    end if;
end;
/
Please give me your opinion.
Thanks.
The ora-04090 indicates that you already have an AFTER INSERT ... FOR EACH ROW trigger on that table. Oracle doesn't like that, because the order in which the triggers fire is unpredictable, which may lead to unpredictable results, and Oracle really doesn't like those.
So, your first step is to merge the two sets of code into a single trigger. Then the real fun begins.
Presumably there is only one row in data matching the current value of id_data (if not, your data model is really messed up and there's no hope for your situation). Anyway, that means the current row already gives you access to the values of :new.sum and :new.id_tool, so you don't need those queries on the data table; removing those selects removes the possibility of "mutating table" errors.
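For example, a minimal sketch of the merged trigger using only the :new values (column names as given in your question; untested against your schema):
create or replace trigger aft_ins_tool
after insert on data
for each row
begin
    update tool
    set sum_total = sum_total + :new.sum
    where id_tool = :new.id_tool;
end;
/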
As a general observation, maintaining aggregate or summary tables like this is generally a bad idea. Usually it is better just to query the information when it is needed. If you really have huge volumes of data then you should use a materialized view to maintain the summary, rather than hand-rolling something.
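A rough sketch of the materialized view approach (names are illustrative; a fast-refresh-on-commit aggregate MV needs a materialized view log and the COUNT columns shown, and a column named sum may need care since it shadows the SUM function):
create materialized view log on data
    with sequence, rowid (id_tool, sum) including new values;

create materialized view tool_totals
    refresh fast on commit
as
select id_tool,
       sum(sum) as sum_total,
       count(sum) as cnt_sum,
       count(*) as cnt_all
from data
group by id_tool;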
We are trying to copy the current row of a table to a mirror table by using a trigger before delete/update. Below is the working trigger:
BEFORE UPDATE OR DELETE
ON CurrentTable FOR EACH ROW
BEGIN
INSERT INTO MirrorTable
( EMPFIRSTNAME,
EMPLASTNAME,
CELLNO,
SALARY
)
VALUES
( :old.EMPFIRSTNAME,
:old.EMPLASTNAME,
:old.CELLNO,
:old.SALARY
);
END;
But the problem is we have more than 50 columns in the current table and don't want to mention all those column names. Is there a way to select all columns, like:
:old.*
SELECT * INTO MirrorTable FROM CurrentTable
Any suggestions would be helpful.
Thanks,
Realistically, no. You'll need to list all the columns.
You could, of course, dynamically generate the trigger code pulling the column names from DBA_TAB_COLUMNS. But that is going to be dramatically more work than simply typing in 50 column names.
If your table happens to be an object table, :new would be an instance of that object so you could insert that. But it would be rather rare to have an object table.
If your 'current' and 'mirror' tables have EXACTLY the same structure you may be able to use something like
INSERT INTO MirrorTable
SELECT *
FROM CurrentTable
WHERE CurrentTable.primary_key_column = :old.primary_key_column
Honestly, I think that this is a poor choice and wouldn't do it, but it's a more-or-less free world and you're free (more or less :-) to make your own choices.
Share and enjoy.
For what it's worth, I've been writing the same stuff and used this to generate the code:
SQL> set pagesize 0
SQL> select ':old.'||COLUMN_NAME||',' from all_tab_columns where table_name='BIGTABLE' and owner='BOB';
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
...
If you supply values for all columns, there is no need to mention the column names as well (and you may use NULL for empty columns):
INSERT INTO bigtable VALUES (
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
NULL,
NULL);
People creating tables with that many columns should have no desserts ;-)
I've just found the following code:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
I can create a sequence for the PK, but there's a bunch of existing code (classic asp, existing asp.net apps that aren't part of this project) that's not going to use it.
Should I just ignore it, or is there a way to fix it without going into the existing code?
I'm thinking that the best option is just to do:
insert into TABLE_NAME (id, ... )
VALUES ((select max(id) + 1 from TABLE_NAME), ...)
Options?
You can create a trigger on the table that overwrites the value for ID with a value that you fetch from a sequence.
That way you can still use the other existing code and have no problems with concurrent inserts.
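A minimal sketch, assuming an ID column named id and a sequence named table_name_seq (both illustrative):
create or replace trigger table_name_id_trg
before insert on TABLE_NAME
for each row
begin
    -- 11g and later; on older versions use: select table_name_seq.nextval into :new.id from dual;
    :new.id := table_name_seq.nextval;
end;
/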
If you cannot change the other software and it still does the SELECT MAX(id) + 1 insert, that is most unfortunate. What you can then do is: for your own inserts, use a sequence and populate the ID field with -1 * (sequence value).
This way your inserts will not interfere or conflict with the existing programs.
(Or do the insert without a value for ID and use a trigger to populate the ID with the negative value of a sequence, as sketched below.)
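A sketch of that trigger variant (names again illustrative; the WHEN clause leaves IDs supplied by the old programs untouched):
create or replace trigger table_name_neg_id_trg
before insert on TABLE_NAME
for each row
when (new.id is null)
begin
    :new.id := -1 * table_name_seq.nextval;
end;
/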
As others have said, you can override the max(id) + 1 value in a database trigger using a sequence. However, that could cause problems if any of the application code uses that value like this:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
insert into CHILD_TABLE (parent_id, ...)
VALUES (max(id) + 1, ...)
Use a sequence in a before-insert row trigger. SELECT MAX(id) + 1 doesn't work in a concurrent environment.
This quickly turns in to a discussion of application architecture, especially when the question boils down to "what should I do?"
Primary keys in Oracle really need to come from sequences and since you're dealing with complex insert logic (parent/child inserts, at least) in your application code, you should go into the existing code, as you say (since triggers probably won't help you).
On one extreme you could take away direct SQL access from applications and make them call services so the insert/update/delete code can be centralized. Or you could rewrite your code using some sort of MVC architecture. I'm assuming both are overkill for your situation.
Is the id column at least set to be a true primary key so there's a constraint that will keep duplicates from occurring? If not, start there.
Once the primary key is in place, or if it already is, it's only a matter of time until inserts start to fail; you'll know when they start to fail, right? If not, get on the error-logging.
Now fix the application code. While you're in there, you should at least write and call helper code so your database interactions are in as few places as possible. Then provide some leadership to the other developers and make sure they use the helper code too.
Big question: does anybody rely on the value of the PK? If not, I would recommend using a trigger, fetching the ID from a sequence and setting it. The inserts wouldn't specify an ID at all.
I am not sure, but the
insert into TABLE_NAME (id, ... )
VALUES ((select max(id) + 1 from TABLE_NAME), ...)
might cause problems when two sessions reach that code. It might be that Oracle reads the table (calculating max(id)) and then tries to get the lock on the PK for insertion. If that is the case, two concurrent sessions might try to use the same ID, causing an exception in the second session.
You could add some logging to the trigger to check whether inserts that already have an ID set are still being processed, so you know you still have to hunt down some place where the old code is used.
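For instance, a variation of the sequence trigger sketched earlier that also records rows arriving with an ID already set (the audit table id_trigger_audit and the sequence name are hypothetical):
create or replace trigger table_name_id_trg
before insert on TABLE_NAME
for each row
begin
    if :new.id is not null then
        -- a caller supplied its own ID: log it so the old code can be hunted down
        insert into id_trigger_audit (supplied_id, logged_at)
        values (:new.id, systimestamp);
    end if;
    :new.id := table_name_seq.nextval;
end;
/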
It can be done by fetching the max value into a variable and then using it in the insert, like:
declare
    v_max int;
begin
    select max(id) into v_max from TABLE_NAME;
    insert into TABLE_NAME values ((v_max + rownum), val1, val2, ...., valn);
    commit;
end;
/
This will generate sequential IDs for single-row as well as bulk inserts.