What is going on underneath FDA queries? - Oracle

Let's say I want to pull data from the TEST_TABLE table as of some point in time. I create a query with flashback query (FDA) syntax:
select * from TEST_TABLE as of timestamp (timestamp '2021-05-05 15:00:15');
I want to check what exactly this query looks like inside the Oracle engine, i.e. what the conditions of the query are, which tables the data is taken from, etc.
The execution plan returned this info:
Predicate Information (identified by operation id):
------------------------------------------
* 4 - filter(("STARTSCN"<=148411288669 OR "STARTSCN" IS NULL) AND "ENDSCN">148411288669 AND ("OPERATION"<>'D' OR "OPERATION" IS NULL) AND "ENDSCN"<=155682149589)
* 5 - filter("STARTSCN"<=148411288669 OR "STARTSCN" IS NULL)
* 7 - filter(("T"."VERSIONS_STARTSCN" IS NULL OR "T"."VERSIONS_STARTSCN"<=148411288669) AND ("T"."VERSIONS_ENDSCN" IS NULL OR "T"."VERSIONS_ENDSCN">148411288669) AND ("T"."VERSIONS_OPERATION" IS NULL
OR "T"."VERSIONS_OPERATION"<>'D'))
* 8 - filter(("ENDSCN"(+) IS NULL OR "ENDSCN"(+)>155682149589) AND ("STARTSCN"(+)<155682149589 OR "STARTSCN"(+) IS NULL))
* 9 - access("RID"(+)=ROWIDTOCHAR("T".ROWID))
But it's not quite what I'm looking for... When I add these conditions to the WHERE clause of a query against TEST_TABLE, the results are not the same.

If you are asking which tables are used by Flashback Data Archive, a.k.a. FDA, you first need to understand how Oracle handles a flashback query.
Let me show you an example. I will create a small flashback archive group and assign a table to it.
SQL> create flashback archive fda_test tablespace tbrepdata quota 1g retention 1 year ;
Flashback archive created.
SQL> grant flashback archive on fda_test to test ;
Grant succeeded.
SQL> grant flashback archive administer to test ;
Grant succeeded.
SQL> GRANT EXECUTE ON DBMS_FLASHBACK_ARCHIVE TO test;
Grant succeeded.
SQL> create table test.t1 ( c1 number, c2 number ) flashback archive fda_test ;
Table created.
SQL> insert into test.t1 values ( 1 , 1 ) ;
1 row created.
SQL> insert into test.t1 values ( 2 , 2 ) ;
1 row created.
SQL> insert into test.t1 values ( 3, 3 ) ;
1 row created.
SQL> commit ;
Commit complete.
SQL> update test.t1 set c1=4,c2=4 where c1=3 ;
1 row updated.
SQL> commit ;
Commit complete.
Now, if I do a query
SQL> col versions_startscn format 9999999999999999
SQL> col versions_endscn format 9999999999999999
SQL> r
1 SELECT versions_startscn,
2 --versions_starttime,
3 versions_endscn,
4 --versions_endtime,
5 versions_xid,
6 versions_operation,
7 c1,
8 c2
9* from test.t1 versions between scn minvalue and maxvalue
VERSIONS_STARTSCN   VERSIONS_ENDSCN VERSIONS_XID     V         C1         C2
----------------- ----------------- ---------------- - ---------- ----------
   13142361651647                   13001C0000AB0000 U          4          4
   13142361651581    13142361651647 20002A00BD960000 I          3          3
   13142361651581                   20002A00BD960000 I          2          2
   13142361651581                   20002A00BD960000 I          1          1
Let's check the plan
SQL> set autotrace traceonly
SQL> r
1 SELECT versions_startscn,
2 --versions_starttime,
3 versions_endscn,
4 --versions_endtime,
5 versions_xid,
6 versions_operation,
7 c1,
8 c2
9* from test.t1 versions between scn minvalue and maxvalue
Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 164 | 4264 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T1 | 164 | 4264 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
Statistics
----------------------------------------------------------
5 recursive calls
4 db block gets
22 consistent gets
0 physical reads
0 redo size
1091 bytes sent via SQL*Net to client
591 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
As you can see, Oracle is just accessing the table. Why? Because the data is still in the undo tablespace, as the undo blocks have not yet expired. With FDA enabled, Oracle always takes this approach for a flashback query:
If the data is still in the undo tablespace, it is recovered from there.
If the data is no longer available in the undo tablespace, the rows are retrieved from the underlying FDA history table.
The underlying table contains the archived data, based on the retention established for the archive group:
SQL> set lines 200
SQL> SELECT owner_name,
2 table_name,
3 flashback_archive_name,
4 archive_table_name,
5 status
6* FROM dba_flashback_archive_tables where owner_name = 'TEST' and table_name = 'T1'
OWNER_NAME                     TABLE_NAME                     FLASHBACK_ARCHIVE_NAME         ARCHIVE_TABLE_NAME             STATUS
------------------------------ ------------------------------ ------------------------------ ------------------------------ -------------
TEST                           T1                             FDA_TEST                       SYS_FBA_HIST_2779773           ENABLED
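Out of curiosity, you can also query that history table directly; its columns (STARTSCN, ENDSCN, XID, OPERATION, plus the base table's own columns) are exactly the ones that appear in the predicates of the execution plan in the question. A sketch, assuming the SYS_FBA_HIST_2779773 name returned above and sufficient privileges:
SQL> select startscn, endscn, operation, c1, c2 from sys_fba_hist_2779773 ;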
If you are sure that the data you are retrieving with AS OF TIMESTAMP is no longer in the undo tablespace, you can set a 10046 event to generate a trace file and see exactly how Oracle is getting the data.
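A minimal tracing sketch (standard 10046 event syntax; the tracefile identifier and the one-hour flashback window are merely illustrative):
SQL> alter session set tracefile_identifier = 'fda_test' ;
SQL> alter session set events '10046 trace name context forever, level 12' ;
SQL> select * from test.t1 as of timestamp (systimestamp - interval '1' hour) ;
SQL> alter session set events '10046 trace name context off' ;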
Although I do wonder what you are after in wanting that level of detail.

Related

How to fetch latest 100 rows (inserted or updated) from Oracle table?

Need queries for the following situations:
1. The table does not have any date column.
2. The table has a date column.
The query should work in Oracle 11g as well as in Oracle 12c.
Your table must have been created with the ROWDEPENDENCIES option; you can then fetch by the ORA_ROWSCN pseudocolumn:
Select * from t order by ora_rowscn desc fetch first 100 rows only
If your table was created without ROWDEPENDENCIES, you need to recreate it with that option. Note that FETCH FIRST is 12c syntax; an 11g-compatible variant is sketched below.
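On 11g, where FETCH FIRST is not available, the classic ROWNUM-over-a-sorted-subquery pattern does the same job (a sketch against the same table t):
select *
from (select t.* from t order by ora_rowscn desc)
where rownum <= 100;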
Oracle does track how many inserts, updates, and deletes are performed on a table in the DBA_TAB_MODIFICATIONS view. It is not kept in real time, so if you need to examine it over a small time window you have to flush the monitoring info first to see the figures easily.
execute dbms_stats.flush_database_monitoring_info;
-- Do work
select table_owner,table_name,inserts,updates,deletes from dba_tab_modifications where table_name = 'RDS_LOG';
TABLE_OWNER                    TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ------------------------------ ---------- ---------- ----------
MYUSER                         RDS_LOG                           4660111          0    1119531
Note that ROWNUM is evaluated before ORDER BY, so the sort must go into a subquery:
SELECT * FROM (SELECT * FROM your_table ORDER BY your_column DESC) WHERE ROWNUM < 100
SQL> create table dt_del_ex(id number);
Table created.
SQL> set serveroutput on
SQL> BEGIN
2
3 INSERT INTO dt_del_ex VALUES(1);
4
5 DBMS_OUTPUT.put_line(TO_CHAR(SQL%ROWCOUNT)||' rows inserted');
6
7 INSERT INTO dt_del_ex select rownum from dual connect by level <=10;
8
9 DBMS_OUTPUT.put_line(TO_CHAR(SQL%ROWCOUNT)||' rows inserted');
10
11 UPDATE dt_del_ex SET id = id + 3 WHERE id >= 9;
12
13 DBMS_OUTPUT.put_line(TO_CHAR(SQL%ROWCOUNT)||' rows updated');
14
15 DELETE FROM dt_del_ex WHERE id <= 10;
16
17 DBMS_OUTPUT.put_line(TO_CHAR(SQL%ROWCOUNT)||' rows deleted');
18
19 END;
20 /
1 rows inserted
10 rows inserted
2 rows updated
9 rows deleted

Trouble with Oracle 11.2.0.3 redo logs

I have a table in Oracle 11.2.0.3 that I want to capture in the redo logs. The issue is that it has an SDO_GEOMETRY column. This is a legacy table that I cannot change. But the good news is I do not need that SDO_GEOMETRY column.
So I have created a materialized view as shown below.
CREATE MATERIALIZED VIEW LOG ON LEGACY_TABLE_NAME
WITH PRIMARY KEY
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LEGACY_TABLE_NAME_MV
NOLOGGING
NOCACHE
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
WITH PRIMARY KEY
AS
SELECT <list of non sdo_geometry columns> FROM LEGACY_TABLE_NAME;
The issue shows up when I do an update and look at the redo logs. Instead of seeing an update statement, I see delete and insert statements. Since I am using a primary key, I would expect to see the update statement.
Does anyone know what I need to do to ensure that I see an update statement in the redo logs?
Thanks
I think you are confusing what a redo log stores with what a materialized view log does.
Let's try to make a test for both scenarios:
We'll use LogMiner to verify the content of the redo log files, which we can inspect through V$LOGMNR_CONTENTS.
Example of Materialized view Log and Update operations
Oracle version: 12.2
Redo Log Contents
SQL> create table cpl_rep.test_redo_logs ( c1 number primary key , c2 number ) ;
Table created.
SQL> insert into cpl_rep.test_redo_logs values ( 1 , 1 );
1 row created.
SQL> insert into cpl_rep.test_redo_logs values ( 2 , 2 );
1 row created.
SQL> commit ;
Commit complete.
SQL> update cpl_rep.test_redo_logs set c1=3 , c2=3 where c1 = 2 ;
1 row updated.
SQL> commit ;
Commit complete.
SQL> select * from cpl_rep.test_redo_logs ;
C1 C2
---------- ----------
1 1
3 3
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Sat Aug 8 21:53:05 2020
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> alter system switch logfile ;
System altered.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Now let's start a LogMiner session by loading the redo log files into LogMiner:
SQL> exec dbms_logmnr.add_logfile('/bbdd_odcgrc1r/redo1/redo11.ora' , 1);
SQL> exec dbms_logmnr.add_logfile('/bbdd_odcgrc1r/redo2/redo21.ora' , 1);
SQL> exec dbms_logmnr.add_logfile('/bbdd_odcgrc1r/redo1/redo12.ora' , 1);
SQL> exec dbms_logmnr.add_logfile('/bbdd_odcgrc1r/redo2/redo22.ora' , 1);
SQL> exec dbms_logmnr.add_logfile('/bbdd_odcgrc1r/redo1/redo13.ora' , 1);
SQL> exec dbms_logmnr.add_logfile('/bbdd_odcgrc1r/redo2/redo23.ora' , 1);
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
PL/SQL procedure successfully completed.
SQL> select count(*) from v$logmnr_contents where seg_name = upper('test_redo_logs') ;
COUNT(*)
----------
4
SQL> select sql_redo , seg_name from v$logmnr_contents where seg_name = upper('test_redo_logs') ;
SQL_REDO
--------------------------------------------------------------------------------
SEG_NAME
--------------------------------------------------------------------------------
create table cpl_rep.test_redo_logs ( c1 number primary key , c2 number ) ;
TEST_REDO_LOGS
insert into "CPL_REP"."TEST_REDO_LOGS"("C1","C2") values ('1','1');
TEST_REDO_LOGS
insert into "CPL_REP"."TEST_REDO_LOGS"("C1","C2") values ('2','2');
TEST_REDO_LOGS
SQL_REDO
--------------------------------------------------------------------------------
SEG_NAME
--------------------------------------------------------------------------------
update "CPL_REP"."TEST_REDO_LOGS" set "C1" = '3', "C2" = '3' where "C1" = '2' an
d "C2" = '2' and ROWID = 'AAGKh2AAAAAJIH1AAB';
TEST_REDO_LOGS
As you can see above, the UPDATE appears in the SQL_REDO column of V$LOGMNR_CONTENTS like any other DML operation. So the redo files do store update operations, as long as the operation is done in LOGGING mode or the database is in FORCE LOGGING mode; in the latter case it doesn't matter how the operation is done, because it will always be logged.
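If in doubt, both settings are easy to check at the database level (a quick look, assuming access to V$DATABASE):
SQL> select log_mode, force_logging from v$database ;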
Materialized View Log
Let's create a materialized view log and a materialized view as you did in your question. However, to be able to inspect the content of the MLOG$ table, I will make the refresh on demand instead of on commit.
SQL> create table x ( c1 number primary key , c2 number ) ;
Table created.
SQL> insert into x values ( 1 , 1 ) ;
1 row created.
SQL> insert into x values ( 2 , 2 );
1 row created.
SQL> commit ;
Commit complete.
SQL> create materialized view log on x with primary key including new values ;
Materialized view log created.
SQL> create materialized view mv_x nologging nocache build immediate refresh fast on demand with primary key as select c1 , c2 from x ;
Materialized view created.
SQL> select * from x ;
C1 C2
---------- ----------
1 1
2 2
SQL> select * from mv_x ;
C1 C2
---------- ----------
1 1
2 2
SQL> insert into x values ( 3 , 3 );
1 row created.
SQL> commit ;
Commit complete.
SQL> update x set c1=4 , c2=4 where c1=3 ;
1 row updated.
SQL> commit ;
Commit complete.
As we created the materialized view with refresh on demand, let's now look at the content of the MLOG$ table:
SQL> select * from x ;
C1 C2
---------- ----------
1 1
2 2
4 4
SQL> select * from mv_x ;
C1 C2
---------- ----------
1 1
2 2
SQL> select * from mlog$_x
        C1 SNAPTIME$ D O CHANGE_VEC                        XID$$
---------- --------- - - ---------- ----------------------------
         3 01-JAN-00 I N FE                    39406677128122001
         3 01-JAN-00 D O 00                    44473269658586765
         4 01-JAN-00 I N FF                    44473269658586765
Then I refresh
SQL> select * from x ;
C1 C2
---------- ----------
1 1
2 2
4 4
SQL> select * from mv_x ;
C1 C2
---------- ----------
1 1
2 2
SQL> exec dbms_mview.refresh('MV_X') ;
PL/SQL procedure successfully completed.
SQL> select * from mv_x ;
C1 C2
---------- ----------
1 1
2 2
4 4
SQL> select * from mlog$_x
2 ;
no rows selected
The reason why you don't see UPDATE in DMLTYPE$$ is that you chose PRIMARY KEY in the WITH clause when creating your materialized view log. In that case only D or I will appear in the DMLTYPE$$ column, but for an update you get two rows with the same transaction ID (the XID$$ values match in the example above).
However, check what happens when I use ROWID instead of PRIMARY KEY:
SQL> create materialized view log on x with rowid including new values ;
Materialized view log created.
SQL> create materialized view mv_x nologging nocache build immediate refresh fast on demand with rowid as select c1 , c2 from cpl_rep.x ;
Materialized view created.
SQL> select * from cpl_rep.mv_x ;
C1 C2
---------- ----------
1 1
2 2
3 3
SQL> select * from x ;
C1 C2
---------- ----------
1 1
2 2
3 3
SQL> insert into x values ( 4 , 4 ) ;
1 row created.
SQL> commit ;
Commit complete.
SQL> insert into x values ( 5 , 5 );
1 row created.
SQL> commit ;
Commit complete.
SQL> update x set c1=6 , c2=6 where c1=5 ;
1 row updated.
SQL> commit ;
Commit complete.
SQL> select * from x ;
C1 C2
---------- ----------
1 1
2 2
3 3
4 4
6 6
SQL> select * from mv_x ;
C1 C2
---------- ----------
1 1
2 2
3 3
Let's see the content of the MLOG$ table now:
SQL> col m_row$$ for a18
SQL> select * from mlog$_x ;
M_ROW$$ SNAPTIME$ D O CHANGE_VEC XID$$
------------------ --------- - - ---------- ----------------------------
AAGKh/AAAAAJJWnAAD 01-JAN-00 I N FE 3659458165104006
AAGKh/AAAAAJJWnAAE 01-JAN-00 I N FE 44754731750395757
AAGKh/AAAAAJJWnAAE 01-JAN-00 U U 06 12948119511653544
AAGKh/AAAAAJJWnAAE 01-JAN-00 U N 06 12948119511653544
I now have 4 rows: 2 for the inserts, and 2 for the update, one carrying the old values (OLD_NEW$$ = 'U') and one the new values (OLD_NEW$$ = 'N'); the two update rows are in fact the same transaction (same XID$$ value).
I hope this clarifies how the MLOG$ tables are populated when you choose ROWID or PRIMARY KEY. Note that materialized view log tables using primary keys also have RUPD$_ tables. The RUPD$_ table supports updatable materialized views, which are only possible on log tables with primary keys.
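If you want to see which log table backs a given master table, the data dictionary has it (a sketch, assuming the objects live in your own schema):
SQL> select master, log_table from user_mview_logs where master = 'X' ;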

SL No generation

There is a master table named PROJECTS_MASTER.
In another form, if I select project 1, the SL number needs to be generated from the project master (21819001). After one record is saved successfully, selecting project 1 again needs to generate 21819002, then 21819003, and so on.
Likewise, after those records are saved, if I select project 3 the SL number should be generated as 41819001, and so on.
I am using Oracle Forms 10g.
Here's an example of what you might do - create an additional table which contains the most recent SL_NO for each project. That number is maintained by an autonomous-transaction function, which makes sure that you won't get duplicates in a multi-user environment.
PROJECTS_MASTER table and some sample data:
SQL> create table projects_master
2 (project number primary key);
Table created.
SQL> insert into projects_master
2 select 21819001 from dual union all
3 select 41819001 from dual;
2 rows created.
This table will contain the most recent SL_NO for each project
SQL> create table projects_sl_no
2 (project number constraint fk_prsl references projects_master (project),
3 sl_no number);
Table created.
The function itself; if a row for PAR_PROJECT exists, it'll update it. Otherwise, it'll create a brand new row in the PROJECTS_SL_NO table.
SQL> create or replace function f_proj_sl_no
2 (par_project in projects_master.project%type)
3 return projects_sl_no.sl_no%type
4 is
5 pragma autonomous_transaction;
6 l_sl_no projects_sl_no.sl_no%type;
7 begin
8 select b.sl_no + 1
9 into l_sl_no
10 from projects_sl_no b
11 where b.project = par_project
12 for update of b.sl_no;
13
14 update projects_sl_no b
15 set b.sl_no = l_sl_no
16 where b.project = par_project;
17
18 commit;
19 return (l_sl_no);
20 exception
21 when no_data_found
22 then
23 lock table projects_sl_no in exclusive mode;
24
25 insert into projects_sl_no (project, sl_no)
26 values (par_project, par_project);
27
28 commit;
29 return (par_project);
30 end f_proj_sl_no;
31 /
Function created.
Testing:
SQL> select f_proj_sl_no(21819001) sl_no from dual;
SL_NO
----------
21819001
SQL> select f_proj_sl_no(21819001) sl_no from dual;
SL_NO
----------
21819002
SQL> select f_proj_sl_no(21819001) sl_no from dual;
SL_NO
----------
21819003
SQL> select f_proj_sl_no(41819001) sl_no from dual;
SL_NO
----------
41819001
SQL> select * from projects_sl_no;
PROJECT SL_NO
---------- ----------
21819001 21819003
41819001 41819001
SQL>
As you use Forms, you could call the F_PROJ_SL_NO function in a WHEN-VALIDATE-RECORD or PRE-INSERT trigger, or even create a before-insert database trigger if you're inserting into a more complex table than the ones I created for testing purposes. Anyway, that should be the simpler part of the story; a sketch follows.
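For illustration, the Forms-side call could be as small as this PRE-INSERT trigger body (the :PRJ block and item names are assumptions; adjust them to your form):
:PRJ.SL_NO := f_proj_sl_no(:PRJ.PROJECT);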

How to create a unique id for an existing table in PL SQL?

The situation is that, when I import a file into the database, one of the first things I usually do is assign a unique ID to each record.
I normally do the following in T-SQL:
ALTER TABLE MyTable
ADD ID INT IDENTITY(1,1)
I am wondering if there is something similar in PL/SQL?
All my search results come back with multiple steps.
Then I'd like to know what PL/SQL programmers typically do to ID records after importing a file. Do they do that?
The main purpose for me to ID these records is to trace them back after manipulation/copying.
Again, I understand there is a solution out there; my further question is whether PL/SQL programmers actually do that, or whether there is an alternative that makes this step unnecessary in PL/SQL?
OK then: as you're on Oracle 11g, there's no identity column there, so it's back to multiple steps. Here's an example:
I'm creating a table that simulates your imported table:
SQL> create table tab_import as
2 select ename, job, sal
3 from emp
4 where deptno = 10;
Table created.
Add the ID column:
SQL> alter table tab_import add id number;
Table altered.
Create a sequence which will be used to populate the ID column:
SQL> create sequence seq_imp;
Sequence created.
Update current rows:
SQL> update tab_import set
2 id = seq_imp.nextval;
3 rows updated.
Create a trigger which will take care about future inserts (if any):
SQL> create or replace trigger trg_bi_imp
2 before insert on tab_import
3 for each row
4 begin
5 :new.id := seq_imp.nextval;
6 end;
7 /
Trigger created.
Check what's in the table at the moment:
SQL> select * from tab_import;
ENAME JOB SAL ID
---------- --------- ---------- ----------
CLARK MANAGER 2450 1
KING PRESIDENT 5000 2
MILLER CLERK 1300 3
Let's import some more rows:
SQL> insert into tab_import (ename, job, sal)
2 select ename, job, sal
3 from emp
4 where deptno = 20;
3 rows created.
The trigger had silently populated the ID column:
SQL> select * From tab_import;
ENAME JOB SAL ID
---------- --------- ---------- ----------
CLARK MANAGER 2450 1
KING PRESIDENT 5000 2
MILLER CLERK 1300 3
SMITH CLERK 800 4
JONES MANAGER 2975 5
FORD ANALYST 3000 6
6 rows selected.
SQL>
In short, you need to:
alter the table and add the ID column,
create a sequence,
create a trigger.
The end (though see the 12c note below).
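For completeness: on Oracle 12c and later, the whole sequence-plus-trigger dance collapses into a single statement, because identity columns exist there (a sketch, assuming a table that does not have the ID column yet):
SQL> alter table tab_import add id number generated always as identity;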
The answer given by @Littlefoot would be my recommendation too - but still I thought I could mention the following variant, which will work only if you do not intend to add more rows to the table later.
ALTER TABLE MyTable add id number(38,0);
update MyTable set id = rownum;
commit;
My test:
SQL> create table tst as select * from all_tables;
Table created.
SQL> alter table tst add id number(38,0);
Table altered.
SQL> update tst set id = rownum;
3815 rows updated.
SQL> alter table tst add constraint tstPk primary key (id);
Table altered.
SQL>
SQL> select id from tst where id < 15;
ID
----------
1
2
3
4
5
6
7
8
9
10
11
ID
----------
12
13
14
14 rows selected.
But as mentioned initially, this only fixes numbering for the rows you have at the time of the update - you're not going to get new id values for rows added later. If you need that, go for the sequence solution.
You can add an id column to a table with a single statement (Oracle 11g, see dbfiddle):
alter table test_
add id raw( 16 ) default sys_guid() ;
Example:
-- create a table without an id column
create table test_ ( str )
as
select dbms_random.string( 'x', 16 )
from dual
connect by level <= 10 ;
select * from test_ ;
STR
ULWL9EXFG6CIO72Z
QOM0W1R9IJ2ZD3DW
YQWAP4HZNQ57C2UH
EETF2AXD4ZKNIBBF
W9SECJYDER793MQW
alter table test_
add id raw( 16 ) default sys_guid() ;
select * from test_ ;
STR ID
ULWL9EXFG6CIO72Z 0x782C6EBCAE2D7B9FE050A00A02005D65
QOM0W1R9IJ2ZD3DW 0x782C6EBCAE2E7B9FE050A00A02005D65
YQWAP4HZNQ57C2UH 0x782C6EBCAE2F7B9FE050A00A02005D65
EETF2AXD4ZKNIBBF 0x782C6EBCAE307B9FE050A00A02005D65
W9SECJYDER793MQW 0x782C6EBCAE317B9FE050A00A02005D65
Testing
-- Are the id values unique and not null? Yes.
alter table test_
add constraint pkey_test_ primary key ( id ) ;
-- When we insert more rows, will the id be generated? Yes.
begin
for i in 1 .. 100
loop
insert into test_ (str) values ( 'str' || to_char( i ) ) ;
end loop ;
end ;
/
select * from test_ order by id desc ;
-- last 10 rows of the result
STR ID
str100 0x782C806E16A5E998E050A00A02005D81
str99 0x782C806E16A4E998E050A00A02005D81
str98 0x782C806E16A3E998E050A00A02005D81
str97 0x782C806E16A2E998E050A00A02005D81
str96 0x782C806E16A1E998E050A00A02005D81
str95 0x782C806E16A0E998E050A00A02005D81
str94 0x782C806E169FE998E050A00A02005D81
str93 0x782C806E169EE998E050A00A02005D81
str92 0x782C806E169DE998E050A00A02005D81
str91 0x782C806E169CE998E050A00A02005D81
Regarding your other questions:
{1} Then I'd like to know what PL/SQL programmers typically do to ID records after importing a file. Do they do that? The main purpose for me to ID these records is to trace them back after manipulation/copying.
-> As you know, the purpose of an id is: to identify a row. We don't "do anything to IDs". Thus, your usage of IDs seems legit.
{2} Again, I understand there is a solution out there; my further question is whether PL/SQL programmers actually do that, or whether there is an alternative that makes this step unnecessary in PL/SQL?
-> Not quite sure what you are asking here. Although there is a ROWID pseudocolumn (see the documentation), we should not use it to identify rows:
"You should not use ROWID as the primary key of a table. If you delete
and reinsert a row with the Import and Export utilities, for example,
then its rowid may change. If you delete a row, then Oracle may
reassign its rowid to a new row inserted later."

How to implement multidimensional sequences

For example, here is a yearly sequence. The no column increments within each year:
| no | year |
+----+------+
| 1 | 2016 |
| 2 | 2016 |
| 3 | 2016 |
| 1 | 2017 |
| 2 | 2017 |
| 4 | 2016 |
For now I have created a sequence for each year, but the problem is that Oracle will not automatically create a new sequence for the next year.
Another problem is if I want to use a 3D sequence, incrementing within year and type:
| no | year | type |
+----+------+------+
| 1 | 2016 | a |
| 2 | 2016 | a |
| 1 | 2016 | b |
| 1 | 2017 | b |
| 2 | 2017 | b |
| 1 | 2017 | c |
This would be too many sequences in the database.
I do not recommend max(no) because of parallel access issues.
I tried to lock the table before getting max(no) in a trigger, but it resulted in a deadlock.
The only way to do this is with a code control table ...
create table code_control
(year number(4,0) not null
, type varchar2(1) not null
, last_number number(38,0) default 1 not null
, primary key (year,type)
)
organization index
/
... which is maintained like this ...
create or replace function get_next_number
(p_year in number, p_type in varchar2)
return number
is
pragma autonomous_transaction;
cursor cur_cc is
select last_number + 1
from code_control cc
where cc.year= p_year
and cc.type = p_type
for update of last_number;
next_number number;
begin
open cur_cc;
fetch cur_cc into next_number;
if cur_cc%found then
update code_control
set last_number = next_number
where current of cur_cc;
else
insert into code_control (year,type)
values (p_year, p_type)
returning last_number into next_number;
end if;
commit;
return next_number;
end;
/
The important thing is the SELECT ... FOR UPDATE. Pessimistic locking guarantees uniqueness in a multi-user environment. The PRAGMA ensures that maintaining code_control doesn't pollute the broader transaction. It allows us to call the function in a trigger without deadlocks.
Here is a table with a key like yours:
create table t42
(year number(4,0) not null
, type varchar2(1) not null
, id number(38,0)
, primary key (year,type, id)
)
/
create or replace trigger t42_trg
before insert on t42 for each row
begin
:new.id := get_next_number(:new.year, :new.type);
end;
/
There's nothing up my sleeve before I populate t42:
SQL> select * from code_control;
no rows selected
SQL> select * from t42;
no rows selected
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'B');
1 row created.
SQL> insert into t42 (year, type) values (2016, 'A');
1 row created.
SQL> insert into t42 (year, type) values (2017, 'A');
1 row created.
SQL> select * from t42;
      YEAR T         ID
---------- - ----------
      2016 A          1
      2016 A          2
      2016 A          3
      2016 A          4
      2016 B          1
      2017 A          1
6 rows selected.
SQL> select * from code_control;
      YEAR T LAST_NUMBER
---------- - -----------
      2016 A           4
      2016 B           1
      2017 A           1
SQL>
So the obvious objection to this implementation is scalability: inserting transactions are serialized on the code_control table. That's absolutely true. However, the lock is held for the shortest possible time, so this should not be an issue even if the t42 table is populated many times a second.
Still, if the table is subjected to massive numbers of concurrent inserts, the locking may become an issue. It is crucial that the table has sufficient interested transaction slots (INITRANS, MAXTRANS) to cope with concurrent demands. Very busy systems may need a smarter implementation (perhaps generating the IDs in batches); otherwise, abandon the compound key in favour of a sequence, because sequences do scale in multi-user environments. A sequence-based sketch follows.
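If you do abandon the compound key, the sequence-based variant is trivial (a sketch; ids are then unique across all years and types rather than dense within each group, and the direct :new.id := sequence assignment needs 11g or later):
SQL> create sequence t42_seq;
SQL> create or replace trigger t42_trg
  2  before insert on t42 for each row
  3  begin
  4  :new.id := t42_seq.nextval;
  5  end;
  6  /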
