Oracle 11g: getting an invalid ROWID through V$LOGMNR_CONTENTS

I used LogMiner to get change data from an archive log, but I get the invalid ROWID 'AAAAAAAAAAAAAAAAAA'. How could this happen? It is just an INSERT operation.
build the dictionary into the redo logs
begin
sys.dbms_logmnr_d.build(options => dbms_logmnr_d.STORE_IN_REDO_LOGS);
end;
/
add logfile
begin
sys.dbms_logmnr.add_logfile(LogFileName => '/arch/archlog/SZO1ABS9/ARC0000286133_0846017616.0001',
Options => sys.dbms_logmnr.NEW);
end;
/
start logmnr
begin
sys.dbms_logmnr.start_logmnr(Options => sys.dbms_logmnr.DICT_FROM_REDO_LOGS +
sys.dbms_logmnr.COMMITTED_DATA_ONLY);
end;
/
fetch result
select scn,start_scn,commit_scn,timestamp,operation,row_id,sql_redo,sql_undo from v$logmnr_contents
where row_id = 'AAAAAAAAAAAAAAAAAA' and scn = '7590067871061';

I'm sure you got around this by now, but I recently had to deal with the same problem.
I'm not sure what the underlying cause is, but I worked out two approaches to fix it.
In a post-processing pass, I fixed up these ROWIDs by issuing a new query that extracts the table's primary-key value via DBMS_LOGMNR.MINE_VALUE; you can then query the table by its primary key to get the real ROWID.
If you have Flashback enabled, this fixup is apparently already taken care of: if you look at the same transaction in FLASHBACK_TRANSACTION_QUERY, or in a VERSIONS query on the table, you'll see the correct ROWID and no 'AAAA...'.
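A rough sketch of that post-processing pass, assuming a hypothetical table APP.ORDERS whose primary key is ORDER_ID (the owner, table and column names are placeholders, not from the original post):
-- 1. Pull the primary-key value out of the redo record for the bad ROWID.
select dbms_logmnr.mine_value(redo_value, 'APP.ORDERS.ORDER_ID') as pk_value
from v$logmnr_contents
where operation = 'INSERT'
and seg_owner = 'APP'
and seg_name = 'ORDERS'
and row_id = 'AAAAAAAAAAAAAAAAAA';
-- 2. Look up the real ROWID in the live table via that primary key.
select rowid
from app.orders
where order_id = :pk_value;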

Related

Trying to delete a row based upon condition defined in my trigger (SQL)

I am trying to create a row-level trigger to delete a row if a value in it is being set to NULL. My business rules state that if the value is being made null, then the row must be deleted. Also, I cannot use a global variable.
BEGIN
IF :NEW.EXHIBIT_ID IS NULL THEN
DELETE SHOWING
WHERE EXHIBIT_ID = :OLD.EXHIBIT_ID;
END IF;
I get the following errors:
ORA-04091: table ISA722.SHOWING is mutating, trigger/function may not see it
ORA-06512: at "ISA722.TRG_EXPAINT", line 7
ORA-04088: error during execution of trigger 'ISA722.TRG_EXPAINT'
When executing this query:
UPDATE SHOWING
SET EXHIBIT_ID = NULL
WHERE PAINT_ID = 5104
As already indicated, this is a terrible idea/design. Triggers are very poor methods for enforcing business rules; these should be enforced in the application or, better (IMO), by a stored procedure called by the application. In this case not only is it a bad idea, but it cannot be implemented as desired: within a trigger, Oracle does not permit accessing the table the trigger fired on. That is what "mutating" indicates. Think of trying to debug this or resolve a problem a week later. Nevertheless, this nonsense can be accomplished by creating a view and processing against it instead of the table.
-- setup
create table showing (exhibit_id integer, exhibit_name varchar2(50));
create view show as select * from showing;
-- trigger on VIEW
create or replace trigger show_iiur
instead of insert or update on show
for each row
begin
merge into showing
using (select :new.exhibit_id new_eid
, :old.exhibit_id old_eid
, :new.exhibit_name new_ename
from dual
) on (exhibit_id = old_eid)
when matched then
update set exhibit_name = new_ename
delete where new_eid is null
when not matched then
insert (exhibit_id, exhibit_name)
values (:new.exhibit_id, :new.exhibit_name);
end ;
-- test data
insert into show(exhibit_id, exhibit_name)
select 1,'abc' from dual union all
select 2,'def' from dual union all
select 3,'ghi' from dual;
-- 3 rows inserted
select * from show;
--- test
update show
set exhibit_name = 'XyZ'
where exhibit_id = 3;
-- 1 row updated
-- Now for the requested action. Turn the UPDATE into a DELETE
update show
set exhibit_id = null
where exhibit_name = 'def';
-- 1 row updated
select * from show;
-- table and view are the same (expect 0 rows)
select * from show MINUS select * from showing
UNION ALL
select * from showing MINUS select * from show;
Again, this is a bad option, yet you can do it. But just because you can doesn't mean you should, or that you'll be happy with the result. Good luck.
You have written a trigger that fires before or after a row change. That happens in the middle of the statement's execution, and you cannot delete a row from the same table at that moment.
So you must instead write an after-statement trigger, which fires only once the whole statement has run.
create or replace trigger mytrigger
after update of exhibit_id on showing
begin
delete from showing where exhibit_id is null;
end mytrigger;
Demo: https://dbfiddle.uk/?rdbms=oracle_18&fiddle=dd5ade700d49daf14f4cdc71aed48e17
What you can do is create an extra column like is_to_be_deleted in the same table, and do this:
UPDATE SHOWING
SET EXHIBIT_ID = NULL, is_to_be_deleted = 'Y'
WHERE PAINT_ID = 5104;
You can use this column to implement your business logic of not showing the null details.
Later, you can schedule a batch delete on that table to clean up these rows (or perhaps archive them).
Benefit: you avoid an extra, unnecessary trigger on that table.
Nobody will suggest using a trigger to do this type of delete, as it is expensive.
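A rough sketch of such a scheduled cleanup using DBMS_SCHEDULER; the job name and the nightly schedule are illustrative assumptions:
begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_FLAGGED_SHOWING_ROWS',   -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin delete from showing where is_to_be_deleted = ''Y''; commit; end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',          -- nightly at 02:00
    enabled         => true);
end;
/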

How to create a procedure which checks if there are any recently added records to the table and if there are then move them to archive table

I have to create a procedure which looks for any recently added records and, if there are any, moves them to an ARCHIVE table.
This is my statement which filters the recently added records:
SELECT
CL_ID,
CL_NAME,
CL_SURNAME,
CL_PHONE,
VEH_ID,
VEH_REG_NO,
VEH_MODEL,
VEH_MAKE_YEAR,
WD_ID,
WORK_DESC,
INV_ID,
INV_SERIES,
INV_NUM,
INV_DATE,
INV_PRICE
FROM
CLIENT,
INVOICE,
VEHICLE,
WORKS,
WORKS_DONE
WHERE
Client.CL_ID=Invoice.INV_CL_ID and
Invoice.INV_CL_ID = Client.CL_ID and
Client.CL_ID = Vehicle.VEH_CL_ID and
Vehicle.VEH_ID = Works_Done.WD_VEH_ID and
Works_done.WD_INV_ID = Invoice.INV_ID and
WORKS_DONE.WD_WORK_ID = Works.WORK_ID and
Works_done.Timestamp >= sysdate - 1;
You may need something like this (pseudo-code):
create or replace procedure moveRecords is
vLimitDate timestamp := systimestamp -1;
begin
insert into table2
select *
from table1
where your_date >= vLimitDate;
--
delete table1
where your_date >= vLimitDate;
end;
Here are the steps I've used for this sort of task in the past (a sketch follows below).
Create a global temporary table (GTT) to hold a set of ROWIDs
Perform a multitable direct path insert, which selects the rows to be archived from the source table and inserts their ROWIDs into the GTT and the rest of the data into the archive table.
Perform a delete from the source table, where the source table ROWID is in the GTT of rowids
Issue a commit.
The business with the GTT and the ROWIDs ensures that you have 100% guaranteed stability in the set of rows that you are selecting and then deleting from the source table, regardless of any changes that might occur between the start of your select and the start of your delete (other than someone causing a partitioned table row migration or shrinking the table).
You could alternatively achieve that through changing the transaction isolation level.
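A rough, untested sketch of those four steps; the table and column names SRC, SRC_ARCHIVE, GTT_ROWIDS and CREATED_AT are illustrative, not from the original question:
-- Step 1: a GTT to hold the ROWIDs of the rows being archived.
create global temporary table gtt_rowids (rid rowid)
on commit delete rows;
-- Step 2: multitable insert; ROWIDs go to the GTT, full rows to the archive.
insert all
into gtt_rowids (rid) values (rid)
into src_archive (id, payload, created_at) values (id, payload, created_at)
select rowid as rid, id, payload, created_at
from src
where created_at >= sysdate - 1;
-- Step 3: delete exactly the rows that were just archived.
delete from src
where rowid in (select rid from gtt_rowids);
-- Step 4: commit (which also purges the GTT rows).
commit;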
OK, maybe something like this...
The downside is that it can be slow for large tables.
The upside is that there is no dependence on date and time, so you can run it anytime and keep your archive synchronized with the live data:
create or replace procedure archive is
begin
insert into archive_table
(
select * from main_table
minus
select * from archive_table
);
end;

How to update string value by string comparison condition PL/SQL

I want to update all values in my table, but this can kill my database:
UPDATE Table_1
SET Value = 'Some string with but changed'
where value = 'Some string without changes';
Can I do this with a procedure, and can I guarantee that it will not run forever? I need some tips.
Edit
I read about cursors, but how can I use them here?
Your SQL seems fine and that is the preferred solution. A cursor will normally be far, far slower.
If you cannot create an index and the update above is really that slow, try the following. Since I don't have the rest of the table definition to work with, I assume your primary key is a single field named ID.
First, create a temporary table with only the matching records:
CREATE GLOBAL TEMPORARY TABLE temp ON COMMIT PRESERVE ROWS AS
SELECT *
FROM Table_1
WHERE value = 'Some string without changes';
Then, update using this temporary table:
UPDATE Table_1 SET
Table_1.Value = 'Some string with but changed'
WHERE EXISTS (
SELECT *
FROM Temp
WHERE Temp.ID = Table_1.ID
);
Another approach, if your DB is on a version higher than 11g R1: Oracle provides a package called DBMS_PARALLEL_EXECUTE, which is designed for large DML operations, or any process that can be split into chunks and run in parallel.
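A rough, untested sketch of chunking the update with DBMS_PARALLEL_EXECUTE; the task name, chunk size and degree of parallelism are illustrative assumptions:
declare
  l_task varchar2(30) := 'UPDATE_TABLE_1';   -- illustrative task name
  l_sql  varchar2(1000);
begin
  dbms_parallel_execute.create_task(l_task);
  -- Split TABLE_1 into ROWID ranges of roughly 10,000 rows each.
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => l_task,
    table_owner => user,
    table_name  => 'TABLE_1',
    by_row      => true,
    chunk_size  => 10000);
  -- Each chunk runs this statement with its own :start_id/:end_id binds
  -- and commits independently, so undo and redo stay small.
  l_sql := 'update table_1
               set value = ''Some string with but changed''
             where value = ''Some string without changes''
               and rowid between :start_id and :end_id';
  dbms_parallel_execute.run_task(
    task_name      => l_task,
    sql_stmt       => l_sql,
    language_flag  => dbms_sql.native,
    parallel_level => 4);
  dbms_parallel_execute.drop_task(l_task);
end;
/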

Writing Procedure to enforce constraints + Testing

I need to set a constraint so that the user is unable to enter any more records after he/she has entered 5 records in a single month. Would it be advisable to write a trigger or a procedure for that? Or is there any other way I can set up the constraint?
Instead of writing a trigger I have opted to write a procedure for the constraint, but how do I check whether the procedure is working?
Below is the procedure:
CREATE OR REPLACE PROCEDURE InsertReadingCheck
(
newReadNo In Int,
newReadValue In Int,
newReaderID In Int,
newMeterID In Int
)
AS
varRowCount Int;
BEGIN
Select Count(*) INTO varRowCount
From Reading
WHERE ReaderID = newReaderID
AND Trunc(ReadDate,'mm') = Trunc(Sysdate,'mm');
IF (varRowCount >= 5) THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('*************************************************');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE(' You attempting to enter more than 5 Records ');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('*************************************************');
ROLLBACK;
RETURN;
END;
ELSIF (varRowCount < 5) THEN
BEGIN
INSERT INTO Reading
VALUES(seqReadNo.NextVal, sysdate, newReadValue,
newReaderID, newMeterID);
COMMIT;
END;
END IF;
END;
Can anyone help me look through it?
This is the sort of thing that you should avoid putting in a trigger, especially the ROLLBACK and the COMMIT. That seems extremely dangerous (and I'm not even sure whether it's possible in a trigger): you might roll back other changes that you wished to commit, or vice versa.
Also, by putting this in a trigger you are going to get the following error:
ORA-04091: table XXXX is mutating, trigger/function may not see it
There are ways round this but they're excessive and involve doing something funky in order to get round Oracle's insistence that you do the correct thing.
This is the perfect opportunity to use a stored procedure to insert data into your table. You can check the number of current records prior to doing the insert meaning that there is no need to do a ROLLBACK.
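For illustration, a minimal reworking of the question's procedure along those lines (untested sketch): the unused newReadNo parameter is dropped, there is no COMMIT or ROLLBACK, and a failed check simply raises an error back to the caller:
create or replace procedure InsertReadingCheck (
  newReadValue in int,
  newReaderID  in int,
  newMeterID   in int
) as
  varRowCount int;
begin
  select count(*)
  into varRowCount
  from Reading
  where ReaderID = newReaderID
  and trunc(ReadDate, 'mm') = trunc(sysdate, 'mm');
  if varRowCount >= 5 then
    -- Raise an error instead of printing and rolling back.
    raise_application_error(-20001,
      'You are attempting to enter more than 5 records this month');
  end if;
  insert into Reading
  values (seqReadNo.NextVal, sysdate, newReadValue, newReaderID, newMeterID);
end;
/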
It depends upon your application: if inserts into that table are already issued from many places in your application, then a trigger is the better option.
This is a behavior constraint. It's a matter of opinion, but I would err on the side of keeping this kind of business logic OUT of your database. I would instead keep track of who added which records in the records table, and on what days/times. You can have a stored procedure to get this information, but then your code-behind should handle whether or not the user can see certain links (or functions) based on the data that's returned. Whether that means keeping the user from accessing the page(s) where they insert records, or giving them read-only views, is up to you.
One declarative way you could solve this problem that would obey all concurrency rules is to use a separate table to keep track of number of inserts per user per month:
create table inserts_check (
ReaderID integer not null,
month date not null,
number_of_inserts integer constraint max_number_of_inserts check (number_of_inserts <= 5),
primary key (ReaderID, month)
);
Then create a trigger on the table (or all tables) for which inserts should be capped at 5:
create or replace trigger inserts_check_trg
after insert on <table>
for each row
begin
MERGE INTO inserts_check t
USING (select :new.ReaderID as ReaderID, trunc(sysdate, 'MM') as month, 1 as number_of_inserts from dual) s
ON (t.ReaderID = s.ReaderID and t.month = s.month)
WHEN MATCHED THEN UPDATE SET t.number_of_inserts = t.number_of_inserts + 1
WHEN NOT MATCHED THEN INSERT (ReaderID, month, number_of_inserts)
VALUES (s.ReaderID, s.month, s.number_of_inserts);
end;
Once the user has made 5 inserts in a month, the next insert will violate the max_number_of_inserts check constraint and fail.
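One quick way to check it, assuming the trigger above is created on the READING table from the question and that seqReadNo exists (the reader and meter IDs here are arbitrary): the sixth insert within the same month should fail with ORA-02290 against the max_number_of_inserts check constraint.
begin
  for i in 1 .. 6 loop
    insert into Reading
    values (seqReadNo.NextVal, sysdate, 100, 42, 1);
  end loop;
end;
/
-- expected on the 6th iteration:
-- ORA-02290: check constraint (<schema>.MAX_NUMBER_OF_INSERTS) violated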

DB2 for IBM iSeries: IF EXISTS statement syntax

I am familiar with Sybase, which allows queries of the form IF EXISTS () THEN ... ELSE ... END IF (or very close). This is a powerful statement that allows "if exists, then update, else insert".
I am writing queries for DB2 on an IBM iSeries box. I have seen the CASE keyword, but I cannot make it work; I always receive the error "Keyword CASE not expected."
Sample:
IF EXISTS ( SELECT * FROM MYTABLE WHERE KEY = xxx )
THEN UPDATE MYTABLE SET VALUE = zzz WHERE KEY = xxx
ELSE INSERT INTO MYTABLE (KEY, VALUE) VALUES (xxx, zzz)
END IF
Is there a way to do this against DB2 on IBM iSeries? Currently I run two queries: first a SELECT, then my Java code decides whether to UPDATE or INSERT. I would rather write a single query, as my server is located far away (across the Pacific).
UPDATE:
DB2 for i, as of version 7.1, now has a MERGE statement which does what you are looking for.
>>-MERGE INTO--+-table-name-+--+--------------------+----------->
'-view-name--' '-correlation-clause-'
>--USING--table-reference--ON--search-condition----------------->
.------------------------------------------------------------------------.
V |
>----WHEN--+-----+--MATCHED--+----------------+--THEN--+-update-operation-+-+----->
'-NOT-' '-AND--condition-' +-delete-operation-+
+-insert-operation-+
'-signal-statement-'
See IBM i 7.1 InfoCenter DB2 MERGE statement reference page
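Applied to the MYTABLE example in the question, the upsert could look roughly like this (untested sketch; 123 and 'zzz' stand in for the xxx/zzz placeholders):
MERGE INTO MYTABLE T
USING (SELECT 123 AS KEY, 'zzz' AS VALUE
       FROM SYSIBM.SYSDUMMY1) S
ON (T.KEY = S.KEY)
WHEN MATCHED THEN
  UPDATE SET T.VALUE = S.VALUE
WHEN NOT MATCHED THEN
  INSERT (KEY, VALUE) VALUES (S.KEY, S.VALUE);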
DB/2 on the AS/400 does not have a conditional INSERT / UPDATE statement.
You could drop the SELECT statement by executing an INSERT directly and if it fails execute the UPDATE statement. Flip the order of the statements if your data is more likely to UPDATE than INSERT.
A faster option would be to create a temporary table in QTEMP, INSERT all of the records into the temporary table and then execute a bulk UPDATE ... WHERE EXISTS and INSERT ... WHERE NOT EXISTS at the end to merge all of the records into the final table. The advantage of this method is that you can wrap all of the statements in a batch to minimize round trip communication.
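A rough, untested sketch of that staging approach against the question's MYTABLE; the staging table name STAGE (referenced as SESSION.STAGE) is illustrative:
-- Stage all incoming rows in a declared temporary table (lives in QTEMP).
DECLARE GLOBAL TEMPORARY TABLE STAGE
  (KEY DECIMAL(9,0), VALUE CHAR(10))
  WITH REPLACE;
-- Batch the inserts into the staging table from the client, e.g.:
-- INSERT INTO SESSION.STAGE (KEY, VALUE) VALUES (?, ?)
-- Update rows whose keys already exist in the target table.
UPDATE MYTABLE AS T
   SET VALUE = (SELECT S.VALUE FROM SESSION.STAGE S WHERE S.KEY = T.KEY)
 WHERE EXISTS (SELECT 1 FROM SESSION.STAGE S WHERE S.KEY = T.KEY);
-- Insert rows whose keys are not in the target table yet.
INSERT INTO MYTABLE (KEY, VALUE)
SELECT S.KEY, S.VALUE
  FROM SESSION.STAGE S
 WHERE NOT EXISTS (SELECT 1 FROM MYTABLE T WHERE T.KEY = S.KEY);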
You can perform control-flow logic (IF...THEN...ELSE) in an SQL stored procedure. Here's sample SQL source code:
-- Warning! Untested code ahead.
CREATE PROCEDURE libname.UPSERT_MYTABLE (
IN THEKEY DECIMAL(9,0),
IN NEWVALUE CHAR(10) )
LANGUAGE SQL
MODIFIES SQL DATA
BEGIN
DECLARE FOUND CHAR(1);
-- Set FOUND to 'Y' if the key is found, 'N' if not.
-- (Perhaps there's a more direct way to do it.)
SET FOUND = 'N';
SELECT 'Y' INTO FOUND
FROM SYSIBM.SYSDUMMY1
WHERE EXISTS
(SELECT * FROM MYTABLE WHERE KEY = THEKEY);
IF FOUND = 'Y' THEN
UPDATE MYTABLE
SET VALUE = NEWVALUE
WHERE KEY = THEKEY;
ELSE
INSERT INTO MYTABLE
(KEY, VALUE)
VALUES
(THEKEY, NEWVALUE);
END IF;
END;
Once you create the stored procedure, you call it like you would any other stored procedure on this platform:
CALL UPSERT_MYTABLE( xxx, zzz );
This slightly over-complex SQL procedure will solve your problem:
IBM Technote
If you want to do a mass update from another table, then have a look at the MERGE statement, which is an incredibly powerful statement that lets you insert, update or delete depending on the values from another table.
IBM DB2 Syntax
