I'm doing a data migration from table1 to table2 and dropping some columns once the migration is complete. But when I re-run the script, it gives an "undeclared identifier" error for the dropped column.
Ex:
table1 has columns test1, test2, test3
table2 has columns col1, col2, col3
Now I'm filling table2 using table1's data:
FOR cur IN (SELECT * FROM table1) LOOP
  INSERT INTO table2 (col1, col2, col3)
  VALUES (cur.test1, cur.test2, cur.test3);
END LOOP;
Then I drop column test3 from table1.
This works fine on the first run, but on the second run cur.test3 is no longer available and the compiler reports cur.test3 as an undeclared identifier.
How do I fix this issue?
You cannot run a script referencing a column that does not exist. Either:
1. Do not drop the column.
2. If you do drop the column, edit the script to remove the dropped column so that it is syntactically valid before you try to run it.
3. If you do drop the column, recreate the column before running the script so that it is syntactically valid before you try to run it.
4. Convert all the queries to dynamic SQL (so that the queries are not checked at compile time but are evaluated at run time) and then catch any errors relating to missing columns. However, this option seems like an over-reaction to an issue that should be solved with option #1, #2 or #3; a sketch follows below.
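If you do go with option #4, here is a minimal sketch of the idea, using the table names from the question (the error handling shown is illustrative, not a drop-in solution):
BEGIN
  -- The statement is built as a string, so it is only parsed at run time;
  -- a dropped column no longer breaks compilation of the block.
  EXECUTE IMMEDIATE
    'INSERT INTO table2 (col1, col2, col3)
     SELECT test1, test2, test3 FROM table1';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -904 THEN  -- ORA-00904: invalid identifier (e.g. a dropped column)
      NULL;                 -- skip, or retry with a reduced column list
    ELSE
      RAISE;
    END IF;
END;
/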
Related
I have a Spring Batch program that reads data from one database, processes the data, then writes to an Oracle database. This batch program runs on a schedule, once a day. How can I avoid adding the same records each time it runs, and insert only new values from the source DB?
One option is to create a unique index (or a primary key, if possible, depending on whether you want to allow NULLs or not), which will cause Oracle to automatically reject all rows whose column(s) violate uniqueness.
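For example (a minimal sketch; the table, column and constraint names are placeholders):
alter table new_table
  add constraint new_table_uq unique (col1, col2);
-- or, as a plain unique index instead of a constraint:
create unique index new_table_uq_ix on new_table (col1, col2);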
Other options require some programming.
[EDIT: "silently" skip errors]
This is what I meant:
for cur_r in (select col1, col2
                from another_table@db_link)
loop
  begin  --> you need an inner BEGIN-EXCEPTION-END block
    insert into new_table (col1, col2)
    values (cur_r.col1, cur_r.col2);
  exception
    when dup_val_on_index then
      null;
  end;
end loop;
Another option uses pure SQL (i.e. no PL/SQL loop):
insert into new_table (col1, col2)
select col1, col2
  from another_table@db_link
 where (col1, col2) not in (select col1, col2
                              from new_table);
This option doesn't even require the unique index (though it wouldn't hurt) because NOT IN won't insert rows whose column values already exist in the target table.
It sounds like you're concerned about not processing the same source record multiple times. If that's the case, you can add a field on your source table indicating that the data has already been extracted, as sketched below.
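A minimal sketch of that approach, with a hypothetical processed_flag column (all names here are placeholders):
alter table source_table add (processed_flag char(1) default 'N');
-- the batch job reads only unprocessed rows ...
select * from source_table where processed_flag = 'N';
-- ... and marks them once they have been written to the target
update source_table
   set processed_flag = 'Y'
 where processed_flag = 'N';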
Oh, and - put a primary key on your tables. All of them. Even the ones where you don't think you need it. The primary key you add today is the one where you WON'T say at a later date, "Damn. I wish that table had a primary key". Don't ask me how I know...
Best of luck.
I guess you are using RepositoryItemReader as the source. If that is the case, you can add a custom method to the source Repository, including a condition to skip already-processed records, using @Query, and then use that method in the RepositoryItemReader.
It would be something like
#Query("SELECT r FROM Records r WHERE r.isNew = 1")
Collection<Record> findAllNewRecords();
And then configure the reader like this:
RepositoryItemReader<Record> recordsReader = new RepositoryItemReader<>();
recordsReader.setRepository(recordsRepository);
recordsReader.setMethodName("findAllNewRecords");
Hope it helps
create table T_UNIQUE_VALUE (a number, b number, c varchar2(100));
alter table T_UNIQUE_VALUE ADD CONSTRAINT C_UNIQUE_VALUE UNIQUE (a,b);
Define the error log table. Oracle will create err$_<table_name>:
BEGIN
DBMS_ERRLOG.create_error_log(dml_table_name => 'T_UNIQUE_VALUE');
END;
Test it by executing the insert twice:
insert into T_UNIQUE_VALUE values(1,1,'a') LOG ERRORS REJECT LIMIT UNLIMITED;
--check table T_UNIQUE_VALUE and err$_T_UNIQUE_VALUE
select * from err$_T_UNIQUE_VALUE; -- 1 row
select * from T_UNIQUE_VALUE; -- 1 row
Modify the Spring annotations:
@Modifying
@Query(value = "insert into T_UNIQUE_VALUE values(:a,:b,:c) LOG ERRORS REJECT LIMIT UNLIMITED", nativeQuery = true)
void insert_balbla(...)
I'm trying to insert data into a Time dimension table I created for my data warehouse. I manage to get the data to output to the server but cannot insert it into the table. I have supplied partial code that reproduces the same error. I have tried multiple approaches, which include:
- changing the row type from the table name to the cursor name (same error)
- placing the cursor (c1) into/AS a variable in the SELECT statement (same error)
I have attached a screenshot of running the code successfully printing it to the screen with DBMS_OUTPUT.PUT_LINE.
[Screenshot: "anonymous block completed" with the DBMS_OUTPUT results]
The below code produces the following error report:
ORA-06550: line 15, column 17:
PL/SQL: ORA-00913: too many values
ORA-06550: line 15, column 5:
PL/SQL: SQL Statement ignored
---The following script populates a time dimension table for a standard calendar---
DECLARE
  ---Declare cursor for columns in Time dimension table---
  cursor c1 is
    SELECT DAY_ID, DAY_TIME_SPAN, DAY_END_DATE AS r_time
      FROM time_calendar_dim_2;
  --Rowtype variable for the cursor (c1) to insert values into multiple columns--
  r_time c1%rowtype;
BEGIN
  ---initiating loop to insert multiple rows--
  FOR r_time IN c1 LOOP
    ---Insert values into the time-dimensional table---
    INSERT INTO time_calendar_dim_2 (DAY_ID, DAY_TIME_SPAN, DAY_END_DATE)
    VALUES
      (SEQ_Time_Dim_IDSTART.nextval, r_time.DAY_ID, r_time.DAY_TIME_SPAN, r_time.DAY_END_DATE);
  END LOOP;
END;
/
Presumably, this line:
INSERT INTO time_calendar_dim_2 VALUES
  (SEQ_Time_Dim_IDSTART.nextval, r_time.DAY_ID, r_time.DAY_TIME_SPAN, r_time.DAY_END_DATE,
   r_time.WEEK_DAY_FULL, r_time.WEEK_DAY_SHORT, r_time.DAY_NUM_OF_WEEK, r_time.DAY_NUM_OF_MONTH,
   r_time.DAY_NUM_OF_YEAR, r_time.MONTH_ID, r_time.MONTH_TIME_SPAN, r_time.MONTH_END_DATE,
   r_time.MONTH_SHORT_DESC, r_time.MONTH_LONG_DESC, r_time.MONTH_SHORT, r_time.MONTH_LONG,
   r_time.MONTH_NUM_OF_YEAR, r_time.QUARTER_ID, r_time.QUARTER_TIME_SPAN, r_time.QUARTER_END_DATE,
   r_time.QUARTER_NUM_OF_YEAR, r_time.HALF_NUM_OF_YEAR, r_time.YEAR_ID, r_time.YEAR_TIME_SPAN,
   r_time.YEAR_END_DATE);
fails because you're providing more values than time_calendar_dim_2 has columns (it's hard to say for sure without the DDL for time_calendar_dim_2).
Also, you should always explicitly enumerate the target columns in an INSERT statement, so instead of
INSERT INTO time_calendar_dim_2
VALUES (SEQ_Time_Dim_IDSTART.nextval, ...)
use
INSERT INTO time_calendar_dim_2 (idstart, ..)
VALUES (SEQ_Time_Dim_IDSTART.nextval, ...)
Otherwise, you're in for a nasty surprise if columns in your table are added or removed or (worse) the column ordering is not what you expect.
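Applied to the partial code in the question, the fix is to make the column list match the values list. A sketch, where time_dim_id is only a guess at what the sequence-populated ID column is called:
INSERT INTO time_calendar_dim_2 (time_dim_id, DAY_ID, DAY_TIME_SPAN, DAY_END_DATE)
VALUES (SEQ_Time_Dim_IDSTART.nextval,  -- hypothetical ID column fed by the sequence
        r_time.DAY_ID, r_time.DAY_TIME_SPAN, r_time.DAY_END_DATE);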
I'm getting an ORA-04091 mutating table error.
I have a trigger on tableA. Inside this trigger, it takes the start and end date fields on the record the trigger is firing for and breaks that range into the months it spans. I then loop over each month and add an exact duplicate record to tableB if it doesn't exist, or update its fields if it does. I was trying to do that with a MERGE where the USING clause is tableA (the table the trigger is firing on), but this causes the error.
I could check whether the record exists in tableB (using the :NEW values) and insert/update based on that, but since that's basically what a MERGE statement does, is there a way to use MERGE in this fashion without getting the mutating table error?
Assuming that the only information you need from A is the data in the row that is being modified, you can do something like
MERGE INTO b
USING( SELECT :new.col1, :new.col2, :new.col3, ... , :new.colN
FROM dual )
ON( ... )
...
That's basically the same thing that you'd do if you wanted to code a MERGE where the source was data from parameters passed into the procedure.
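Fleshed out with hypothetical column names (id as the match key, val as the payload), the trigger body might look something like this; note the aliases in the USING subquery, which let the ON and WHEN clauses reference the bound values:
MERGE INTO b
USING (SELECT :new.id AS id, :new.val AS val FROM dual) src
ON (b.id = src.id)
WHEN MATCHED THEN
  UPDATE SET b.val = src.val
WHEN NOT MATCHED THEN
  INSERT (id, val)
  VALUES (src.id, src.val);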
I know the database and table name and need to find a column name. For example, in the emp table: I know the value 7369 and the table name emp, and I need to find the column name empno. My table has hundreds of columns, so it is getting difficult to search each column.
You don't have any choice but to search every column. Please note, though, that this value could potentially appear in multiple columns and/or multiple times in a single column. There's no way to restrict how often it appears across an entire table.
This is the point of a database: everything is stored in a column and, most importantly, that column has meaning. If you disassociate the data stored in a column from its meaning then you will have to search everything.
Two steps, not using cursors or complex PL/SQL, only SQL*Plus.
Produce your search queries:
select 'select ' ||
       column_name ||
       ', count(*) from emp where ' ||
       column_name || ' = 7369 group by ' ||
       column_name || ';'
  from cols
 where table_name = 'EMP';
E.g.:
select SECOND, count(*) from TESTER where SECOND = 7369 group by SECOND;
(in my environment, SECOND was a column in table TESTER)
Capture the output, clean up the headers and the like, and run it.
It will return every column that matches, along with a count of how many rows matched.
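If you want SQL*Plus to do the capturing for you, a possible wrapper (the spool file name is arbitrary):
set heading off
set feedback off
set pagesize 0
spool find_7369.sql
select 'select ' || column_name || ', count(*) from emp where ' ||
       column_name || ' = 7369 group by ' || column_name || ';'
  from cols
 where table_name = 'EMP';
spool off
@find_7369.sql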
I'm having some trouble trying to update a table by looping over a cursor that selects from a source table through a database link.
I have two databases, DB1 and DB2.
They are two different database instances.
And I am using the following statement in DB1:
DECLARE
  CURSOR TestCursor IS
    SELECT a.*, 'A' TEST_COL_A, 'B' TEST_COL_B
      FROM rpt.SOURCE@DB2 a;
BEGIN
  FOR C1 IN TestCursor LOOP
    INSERT INTO RPT.TARGET
    (
      /* COMPANY_NAME and CUST_ID come from the SOURCE table on DB2 */
      COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B
    )
    VALUES
    (
      C1.COMPANY_NAME, C1.CUST_ID, C1.TEST_COL_A, C1.TEST_COL_B
    );
  END LOOP;
  /* Some code... */
END;
Everything worked fine until I added a column NEW_COL to the SOURCE table on DB2.
After that, the inserted data got the wrong values.
The value of TEST_COL_A should, as I expect, be 'A'.
However, it contains the value of NEW_COL, the column I added to SOURCE.
And the value of TEST_COL_B contains 'A'.
Has anyone encountered the same issue?
It seems like Oracle caches the table's columns when it compiles.
Is there any way to add a column to the source table without recompiling?
According to this:
Oracle Database does not manage dependencies among remote schema objects other than local-procedure-to-remote-procedure dependencies.
For example, assume that a local view is created and defined by a query that references a remote table. Also assume that a local procedure includes a SQL statement that references the same remote table. Later, the definition of the table is altered.
Therefore, the local view and procedure are never invalidated, even if the view or procedure is used after the table is altered, and even if the view or procedure now returns errors when used. In this case, the view or procedure must be altered manually so that errors are not returned. In such cases, lack of dependency management is preferable to unnecessary recompilations of dependent objects.
In this case you aren't quite seeing errors, but the cause is the same. You also wouldn't have a problem if you used explicit column names instead of *, which is usually safer anyway. If you're using * you can't avoid recompiling (unless, I suppose, the * is the last item in the select list, in which case any extra columns on the end wouldn't cause a problem - as long as their names didn't clash).
I recommend that you use a single set-processing INSERT statement in DB1 rather than a row-at-a-time cursor FOR loop, for example:
INSERT INTO RPT.TARGET (COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B)
SELECT COMPANY_NAME, CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
  FROM rpt.SOURCE@DB2;
Rationale:
- Set processing will almost always outperform row-at-a-time processing [which is really slow-at-a-time processing].
- Set processing the insert is a scalable solution. If the application needs to scale to tens of thousands of rows or millions of rows, the row-at-a-time solution will not likely scale.
- Also, using the SELECT * construct is dangerous, for the reason you encountered [and other similar reasons].