Is it ever possible to design an Oracle trigger that modifies the same table (guaranteed not the same row)? - oracle

I need to write a TRIGGER that preserves the old value of a column before it is updated, by inserting or updating the old value into another row in the same table. (yes, I know).
The following MERGE/DUAL trickery has served me well, but because in this case I'm inserting into or updating the same table the trigger is defined on, Oracle complains at runtime. Also, for some reason, I found it unusually difficult to write code that compiles without errors.
Two questions:
Is it ever possible to modify the same table the trigger is on, even when I can guarantee that the trigger will never touch the row that fired it? Or do I have to do something like inserting pending changes into another table, so that a second trigger can merge them back into the original table? (This table is a customer interface, so I can't re-architect this to use a second table for permanently storing old values.)
What's with the compiler errors that don't let me use :old.event_key, but do let me use :old.property_val in the MERGE statement? (Declaring a variable old_event_key and assigning it the value of :old.event_key seems to work.) Is there some sort of hidden intermediate language that knows when a column is (part of) the primary key and prevents you from referencing it via :old.?
Here is the offending code:
create or replace trigger remember_old_status
before update on event_properties
for each row
when (old.property_name = 'CURRENT-STATUS')
declare
old_event_key varchar2(20);
begin
old_event_key := :old.event_key;
merge into event_properties eprop
using (select 1 from dual) dummy
on ( eprop.event_key = old_event_key
AND eprop.property_name = 'PREVIOUS-STATUS')
when matched then
update set property_val = :old.property_val
when not matched then
insert (event_key, property_name, property_val)
values (old_event_key, 'PREVIOUS-STATUS', :old.property_val);
end;
And here's the table:
CREATE TABLE "CUST"."EVENT_PROPERTIES"
( "EVENT_KEY" VARCHAR2(20 BYTE) CONSTRAINT "NN_FLE_FLK" NOT NULL ENABLE,
"PROPERTY_NAME" VARCHAR2(20 BYTE) CONSTRAINT "NN_FLE_PN" NOT NULL ENABLE,
"PROPERTY_VAL" VARCHAR2(80 BYTE),
CONSTRAINT "PX_EVENT_PROPERTIES" PRIMARY KEY ("EVENT_KEY", "PROPERTY_NAME") DEFERRABLE
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CUST_TS" ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CUST_TS" ;
And here are the error messages:
ORA-04091: table CUST.EVENT_PROPERTIES is mutating, trigger/function may not see it
ORA-06512: at "CUST.REMEMBER_OLD_STATUS", line 5

You could use a compound trigger to do this, storing the old values in variables in a before each row section and then merging in an after statement section.
This assumes you'll only ever update one row at a time:
create or replace trigger remember_old_status
for update on event_properties
compound trigger
old_rec event_properties%rowtype;
before each row is
begin
if (:old.property_name = 'CURRENT-STATUS') then
old_rec.event_key := :old.event_key;
old_rec.property_name := :old.property_name;
old_rec.property_val := :old.property_val;
end if;
end before each row;
after statement is
begin
if (old_rec.property_name = 'CURRENT-STATUS') then
merge into event_properties eprop
using (
select old_rec.event_key as event_key,
'PREVIOUS-STATUS' as property_name,
old_rec.property_val as property_val
from dual
) dummy
on (eprop.event_key = dummy.event_key
and eprop.property_name = dummy.property_name)
when matched then
update set property_val = old_rec.property_val
when not matched then
insert (event_key, property_name, property_val)
values (dummy.event_key, dummy.property_name, dummy.property_val);
end if;
end after statement;
end remember_old_status;
/
Quick test:
insert into event_properties values('SOME_EVENT', 'CURRENT-STATUS', 'A');
1 row inserted.
update event_properties set property_val = 'B' where event_key = 'SOME_EVENT' and property_name = 'CURRENT-STATUS';
1 row updated.
select * from event_properties;
EVENT_KEY PROPERTY_NAME PROPERTY_VAL
-------------------- -------------------- --------------------------------------------------------------------------------
SOME_EVENT CURRENT-STATUS B
SOME_EVENT PREVIOUS-STATUS A
update event_properties set property_val = 'C' where event_key = 'SOME_EVENT' and property_name = 'CURRENT-STATUS';
1 row updated.
select * from event_properties;
EVENT_KEY PROPERTY_NAME PROPERTY_VAL
-------------------- -------------------- --------------------------------------------------------------------------------
SOME_EVENT CURRENT-STATUS C
SOME_EVENT PREVIOUS-STATUS B
If you want to deal with multiple rows updated in one statement, then the before each row section can populate a collection instead, and you can then use that in the after statement section:
create or replace trigger remember_old_status
for update on event_properties
compound trigger
type t_type is table of event_properties%rowtype;
old_recs t_type := t_type();
before each row is
begin
if (:old.property_name = 'CURRENT-STATUS') then
old_recs.extend();
old_recs(old_recs.count).event_key := :old.event_key;
old_recs(old_recs.count).property_name := :old.property_name;
old_recs(old_recs.count).property_val := :old.property_val;
end if;
end before each row;
after statement is
begin
-- skip the merge if this statement did not touch any CURRENT-STATUS rows
if old_recs.count > 0 then
forall i in old_recs.first..old_recs.last
merge into event_properties eprop
using (
select old_recs(i).event_key as event_key,
'PREVIOUS-STATUS' as property_name,
old_recs(i).property_val as property_val
from dual
) dummy
on (eprop.event_key = dummy.event_key
and eprop.property_name = dummy.property_name)
when matched then
update set property_val = old_recs(i).property_val
when not matched then
insert (event_key, property_name, property_val)
values (dummy.event_key, dummy.property_name, dummy.property_val);
end if;
end after statement;
end remember_old_status;
/
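A quick multi-row check along the same lines as the single-row test above (ANOTHER_EVENT is just an illustrative second key, not from the original data):
insert into event_properties values('ANOTHER_EVENT', 'CURRENT-STATUS', 'X');
update event_properties set property_val = 'Z' where property_name = 'CURRENT-STATUS';
select * from event_properties order by event_key, property_name;
Each event key should now have a PREVIOUS-STATUS row holding the value it had before the multi-row update, alongside its updated CURRENT-STATUS row.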

Related

Need to ignore "start with" sequence for identity column when comparing schema in oracle db

We have an Oracle database (18c) on several servers, and need to sync the schema from the dev servers to the prod servers. Since it is only the schema that needs to be synced, and not the content of the tables, we do not need to know the next sequence number of the primary key columns. (And we certainly do not want to update the prod servers with this sequence number.)
I have tried both SQL Developer's Diff Tool and dbForge Schema Compare for Oracle, but they both list tables where only this sequence number differs as tables that need to be updated.
I have not found a setting in SQL Developer's Diff Tool that handles this. dbForge Schema Compare for Oracle has an "Ignore START WITH in sequences" option, but it does not seem to work as I expected, since it still marks tables that are equal except for the sequence number as tables that need an update.
For new tables that only exist in the source db - the sync script will be like this:
CREATE TABLE TEST (
ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(
START WITH 102),
TEXT VARCHAR2(4000 BYTE),
CONSTRAINT TEST_ID_PK PRIMARY KEY (ID))
LOGGING;
We need that script without the (START WITH 102) part in it.
For a table that exists in both the source and target DB (with no change other than the sequence number), the sync script will be like this:
ALTER TABLE TEST
MODIFY(ID GENERATED BY DEFAULT ON NULL AS IDENTITY(
START WITH 114
INCREMENT BY 1
MAXVALUE 9999999999999999999999999999
MINVALUE 1
CACHE 20
NOCYCLE
NOORDER));
The reality here is that this is a table that does not need an update, and I thought that Ignore START WITH in sequence would handle this, but apparently not.
Anyone out there have a solution for us?
Well, I believe it is a very bad idea to use SQL Developer, or any other IDE tool for that matter, to create scripts to be deployed on production. You are describing a clear case of lacking real version control software, like Git or SVN. You shouldn't need to compare databases unless something is wrong, and never for creating DDL scripts.
In this specific case, I would use DBMS_METADATA to create the DDLs
Example
SQL> create table t ( c1 number generated by default on null as identity ( start with 1 increment by 1 ) , c2 number ) ;
Table created.
SQL> insert into t values ( null , 1 ) ;
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> select * from t ;
C1 C2
---------- ----------
1 1
2 1
3 1
4 1
In this case SQL Developer shows START WITH 5, because that is the next value of the identity column. You can use DBMS_METADATA.GET_DDL to get the right DDL without this clause.
SQL> begin
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
end;
/
PL/SQL procedure successfully completed.
SQL> select dbms_metadata.get_ddl('TABLE','T') from dual
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYS"."T"
( "C1" NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 99
99999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE
NOKEEP NOSCALE NOT NULL ENABLE,
"C2" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
There are several options, for example to omit the storage attributes. I always use this one:
SQL> BEGIN
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SEGMENT_ATTRIBUTES', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'STORAGE', false);
END;
/
PL/SQL procedure successfully completed.
SQL> select dbms_metadata.get_ddl('TABLE','T') from dual ;
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYS"."T"
( "C1" NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 99
99999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE
NOKEEP NOSCALE NOT NULL ENABLE,
"C2" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "SYSTEM" ;
SQL>
For comparison purposes, you might want to have a look into DBMS_COMPARISON
https://docs.oracle.com/database/121/ARPLS/d_comparison.htm#ARPLS868
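As a rough sketch of how DBMS_COMPARISON could be used to check a table between two databases (the comparison name, schema, table and database link names below are illustrative assumptions, not from the question):
set serveroutput on
BEGIN
  -- one-off setup: define what to compare (local TEST vs TEST over a db link to prod)
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'CMP_TEST',
    schema_name     => 'DEV_SCHEMA',
    object_name     => 'TEST',
    dblink_name     => 'PROD_LINK');
END;
/
DECLARE
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
  consistent BOOLEAN;
BEGIN
  -- run the comparison and report whether the two copies match
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'CMP_TEST',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
  IF consistent THEN
    DBMS_OUTPUT.PUT_LINE('Tables match');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences found, scan id ' || scan_info.scan_id);
  END IF;
END;
/
Note that DBMS_COMPARISON compares row data rather than DDL, so it complements a schema diff rather than replacing it.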

Sample data - "Issue while executing stored procedure which consists both update and insert statements"

Below are the sample table and file details for the question I asked, "Issue while executing stored procedure which consists both update and insert statements". Below are the steps I follow before executing the procedure.
I get a file from the vendor which contains data in the format below.
6437,,01/01/2017,3483.92,,
14081,,01/01/2017,8444.23,,
I load this data into the table NMAC_PTMS_NOTEBK_SG. In the file above, the first column is the asset.
I update the table with an extra column named lse_id corresponding to that asset. The NMAC_PTMS_NOTEBK_SG table then has data in the format below.
LSE_ID AST_ID PRPRTY_TAX_DDCTN_CD LIEN_DT ASES_PRT_1_AM ASES_PRT_2_AM
5868087 5049 Null 01-01-2017 3693.3 NULL
Now my procedure starts. The logic in my procedure should be such that I take the lse_id from NMAC_PTMS_NOTEBK_SG and compare it against the MJL table (here lse_id = app_lse_s). Below is the structure of the MJL table.
CREATE TABLE LPR_LP_TEST.MJL
(
APP_LSE_S CHAR(10 BYTE) NOT NULL,
DT_ENT_S TIMESTAMP(3) NOT NULL,
DT_FOL_S TIMESTAMP(3),
NOTE_TYPE_S CHAR(4 BYTE) NOT NULL,
PRCS_C CHAR(1 BYTE) NOT NULL,
PRIO_C CHAR(1 BYTE) NOT NULL,
FROM_S CHAR(3 BYTE) NOT NULL,
TO_S CHAR(3 BYTE) NOT NULL,
NOTE_TITLE_S VARCHAR2(41 BYTE) NOT NULL,
INFO_S VARCHAR2(4000 BYTE),
STAMP_L NUMBER(10) NOT NULL,
PRIVATE_C CHAR(1 BYTE),
LSE_ACC_C CHAR(1 BYTE),
COL_STAT_S CHAR(4 BYTE),
INFO1_S VARCHAR2(250 BYTE),
INFO2_S VARCHAR2(250 BYTE),
INFO3_S VARCHAR2(250 BYTE),
INFO4_S VARCHAR2(250 BYTE),
NTBK_RSN_S CHAR(4 BYTE)
)
TABLESPACE LPR_LP_TEST
PCTUSED 0
PCTFREE 25
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE UNIQUE INDEX LPR_LP_TEST.MJL_IDX0 ON LPR_LP_TEST.MJL
(APP_LSE_S, DT_ENT_S)
LOGGING
TABLESPACE LPR_LP_TEST
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_AIUD"
AFTER INSERT OR UPDATE OR DELETE ON mjl
BEGIN
mpkg_trig_mjl.mp_mjl_aiud;
END mt_mjl_aiud;
/
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_AIUDR"
AFTER INSERT OR UPDATE OR DELETE ON mjl FOR EACH ROW
BEGIN
mpkg_trig_mjl.mp_mjl_aiudr (INSERTING, UPDATING, DELETING,
:NEW.app_lse_s, :NEW.prcs_c, :NEW.note_type_s,
:OLD.app_lse_s, :OLD.prcs_c, :OLD.note_type_s);
END mt_mjl_aiudr;
/
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_BIUD"
BEFORE INSERT OR UPDATE OR DELETE ON mjl
BEGIN
mpkg_trig_mjl.mp_mjl_biud;
END mt_mjl_biud;
/
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_OBIUR"
BEFORE INSERT OR UPDATE ON mjl FOR EACH ROW
BEGIN
IF INSERTING THEN
:NEW.stamp_l := mpkg_util.mp_time_ticker;
ELSE
IF :OLD.stamp_l > 999999990 THEN
:NEW.stamp_l := 1;
ELSE
:NEW.stamp_l := :OLD.stamp_l + 1;
END IF;
END IF;
END mt_mjl_obiur;
/
Below is the procedure I am using, which you provided in a previous post; it is almost working well for me.
CREATE OR REPLACE PROCEDURE LPR_LP_TEST.SP_PTMS_NOTES
(
p_app_lse_s IN mjl.app_lse_s%TYPE,
--p_dt_ent_s IN mjl.dt_ent_s%TYPE,
--p_note_type_s IN mjl.note_type_s%TYPE,
--p_prcs_c IN mjl.prcs_c%TYPE,
--p_prio_c IN mjl.prio_c%TYPE,
--p_note_title_s IN mjl.note_title_s%TYPE,
--p_info1_s IN mjl.info1_s%TYPE,
--p_info2_s IN mjl.info2_s%TYPE
)
AS
--v_rowcount_i number;
--v_lien_date mjl.info1_s%TYPE;
--v_lien_date NMAC_PTMS_NOTEBK_SG.LIEN_DT%TYPE;
--v_asst_amount mjl.info2_s%TYPE;
v_app_lse_s mjl.app_lse_s%TYPE;
BEGIN
v_app_lse_s := trim(p_app_lse_s);
-- I hope this dbms_output line is for temporary debug purposes only
-- and will be removed in the production version!
dbms_output.put_line(v_app_lse_s);
merge into mjl tgt
using (select lse_id app_lse_s,
sysdate dt_ent_s,
'SPPT' note_type_s,
'Y' prcs_c,
'1' prio_c,
'Property Tax Assessment' note_title_s,
lien_dt info1_s,
ases_prt_1_am info2_s
from nmac_ptms_notebk_sg
where lse_id = v_app_lse_s) src
on (trim(tgt.app_lse_s) = trim(src.app_lse_s))
-- and tgt.dt_ent_s = src.dt_ent_s)
when matched then
update set --tgt.dt_ent_s = src.dt_ent_s,
tgt.note_title_s = src.note_title_s,
tgt.info1_s = src.info1_s,
tgt.info2_s = src.info2_s
where --tgt.dt_ent_s != src.dt_ent_s
tgt.note_title_s != src.note_title_s
or tgt.info1_s != src.info1_s
or tgt.info2_s != src.info2_s
when not matched then
insert (tgt.app_lse_s,
tgt.dt_ent_s,
tgt.note_type_s,
tgt.prcs_c,
tgt.prio_c,
tgt.from_s,
tgt.to_s,
tgt.note_title_s,
tgt.info1_s,
tgt.info2_s)
values (src.app_lse_s,
src.dt_ent_s,
src.note_type_s,
src.prcs_c,
src.prio_c,
src.from_s,
src.to_s,
src.note_title_s,
src.info1_s,
src.info2_s);
commit;
end;
Now the logic should be that I pass the lse_id from the file, which I have already loaded, to the procedure.
If the lse_id I am passing matches the app_lse_s in the mjl table, then I need to update that row and some of the hardcoded fields, which I am doing correctly.
If the lse_id does not match, then I have to insert a new row for that lease with the hardcoded fields.
The issue which I am facing is the dt_ent_s in the mjl table is a
unique constraint.
Please let me know if the above makes any sense to you...
"The issue which I am facing is the dt_ent_s in the mjl table is a unique constraint."
Actually it's not, it's part of a compound unique key. So really your ON clause should match on
on (tgt.app_lse_s = src.app_lse_s
and tgt.dt_ent_s = src.dt_ent_s)
Incidentally, the use of trim() in the ON clause is worrying, especially trim(tgt.app_lse_s). If you're inserting values with trailing or leading spaces your "unique key" will produce multiple hits when you trim them. You should trim the spaces when you load the data from the file and insert trimmed values in your table.
"ORA-00001: unique constraint (LPR_LP_TEST.MJL_IDX0) violated"
MJL_IDX0 must be a unique index. That means you need to include its columns in any consideration of unique records.
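If you want to confirm exactly which columns that index covers, a quick dictionary query (a sketch, using the owner from your DDL):
select column_name, column_position
from all_ind_columns
where index_owner = 'LPR_LP_TEST'
and index_name = 'MJL_IDX0'
order by column_position;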
Clearly there is a difference between your straight INSERT logic and your MERGE INSERT logic. You need to compare the two statements and figure out what the difference is.

Get complete ddl for index in oracle

I am using Oracle 11g/12c. I want to get the DDL of indexes in my database. For this I used the query:
SELECT DBMS_METADATA.GET_DDL('INDEX','SYS_IL0000091971C00001$$','CCEEXPERTS') FROM dual
Here 'SYS_IL0000091971C00001$$' is my index name and 'CCEEXPERTS' is my owner name.
From this I get the ddl -
CREATE UNIQUE INDEX "CCEEXPERTS"."SYS_IL0000091971C00001$$" ON "CCEEXPERTS"."DATABLOB" (
And my actual ddl is -
CREATE UNIQUE INDEX "CCEEXPERTS"."SYS_IL0000091971C00001$$" ON "CCEEXPERTS"."DATABLOB" (
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"
PARALLEL (DEGREE 0 INSTANCES 0) ;
In the DDL returned, after "CCEEXPERTS"."DATABLOB" ( there is a newline character, and from there the DDL is truncated.
How can I get the complete DDL? Please help me...
Thanks in advance.
In SQL*Plus, set these before running the query:
set long 100000
set longchunksize 100000
Oracle has a DBMS_METADATA package to retrieve and customize the DDL returned. With default settings, for all indexes:
select DBMS_METADATA.GET_DDL('INDEX', index_name)
from all_indexes
where owner in (USER, 'USER_OTHER_THAN_LOGGED_IN_USER');
A PL/SQL block example:
set serveroutput on
DECLARE
V_DDL CLOB;
BEGIN
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'PRETTY', true);
dbms_metadata.set_transform_param(dbms_metadata.session_transform,'TABLESPACE',false);
V_DDL := DBMS_METADATA.GET_DDL('VIEW', 'A_VIEW_NAME');
DBMS_OUTPUT.PUT_LINE(V_DDL);
END;
/
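Adapted to the index from the question, the same pattern would be something like this (a sketch; the transform calls are optional):
set serveroutput on
DECLARE
  V_DDL CLOB;
BEGIN
  -- keep the storage clauses out of the output, and pretty-print it
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'PRETTY', true);
  V_DDL := DBMS_METADATA.GET_DDL('INDEX', 'SYS_IL0000091971C00001$$', 'CCEEXPERTS');
  DBMS_OUTPUT.PUT_LINE(V_DDL);
END;
/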

Toad for Oracle, Editing ID Column to be an auto-increment ID

I work with Toad for Oracle 12.1 for my database. I have a table called TBLEMPLOYEE which already contains some data and has a column called ID whose values increase from 1 to N.
ID Name Gender DateOfBirth Type
------------------------------------
1 Mark Male 10/10/1982 1
2 Mary Female 11/11/1981 2
3 Esther Female 12/12/1984 2
4 Matthew Male 9/9/1983 1
5 John Male 5/5/1985 1
6 Luke Male 6/6/1986 1
Now I want to change the ID column so that it gets an auto-incremented value when I add new data to the table.
I know that in Toad we can do this when we create a new table with that behavior. For instance, using Create Table, we can set the new column's Default / Virtual / Identity setting to Identity:
Toad will then show a UI with a bunch of settings to do that:
And it will be automatically translated to something like:
(START WITH 1 INCREMENT BY 1 MINVALUE 1 MAXVALUE 9999999999999999999999999999 CACHE 20 NOCYCLE ORDER NOKEEP)
In the Default / Virtual / Identity settings.
But I can't seem to do the same when I do Alter Table instead of Create Table.
Why is that so?
And since I already have some data in the TBLEMPLOYEE, I want to avoid creating a new table and re-inserting the data if possible.
How can I do that?
This is the current SQL script (if it helps):
ALTER TABLE MYSCHEMA.TBLEMPLOYEE
DROP PRIMARY KEY CASCADE;
DROP TABLE MYSCHEMA.TBLEMPLOYEE CASCADE CONSTRAINTS;
CREATE TABLE MYSCHEMA.TBLEMPLOYEE
(
ID NUMBER NOT NULL,
NAME VARCHAR2(80 BYTE) NOT NULL,
GENDER VARCHAR2(6 BYTE),
DATEOFBIRTH DATE,
EMPLOYEETYPE INTEGER NOT NULL
)
TABLESPACE USERS
RESULT_CACHE (MODE DEFAULT)
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
ALTER TABLE MYSCHEMA.TBLEMPLOYEE ADD (
PRIMARY KEY
(ID)
USING INDEX
TABLESPACE USERS
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
ENABLE VALIDATE);
First of all, your sequence should start with the max value + 1 from the table e.g.
(START WITH 7 INCREMENT BY 1 MINVALUE 1 MAXVALUE 9999999999999999999999999999 CACHE 20 NOCYCLE ORDER NOKEEP)
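If you would rather derive that starting value from the data than hard-code it, a small sketch (seq_tblemployee_id is just an illustrative name):
DECLARE
  v_start NUMBER;
BEGIN
  -- start the sequence just past the highest existing ID
  SELECT NVL(MAX(id), 0) + 1 INTO v_start FROM tblemployee;
  EXECUTE IMMEDIATE 'CREATE SEQUENCE seq_tblemployee_id START WITH ' || v_start ||
                    ' INCREMENT BY 1 NOCACHE NOCYCLE';
END;
/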
If you want to automatically populate the value for the ID and you're not running on Oracle 12c, I suggest you use a trigger:
drop sequence seq_mytest_id;
truncate table my_test_t;
drop table my_test_t;
create table my_test_t (id number, string varchar2(30));
-- prepopulate with fixed values for the id
insert into my_test_t(id, string) values (1,'test');
insert into my_test_t(id, string) values (2,'test');
insert into my_test_t(id, string) values (3,'test');
insert into my_test_t(id, string) values (4,'test');
insert into my_test_t(id, string) values (5,'test');
insert into my_test_t(id, string) values (6,'test');
commit;
--Now create the sequence and the trigger for automatically
--populating the ID column
create sequence seq_mytest_id start with 7 increment by 1 nocycle nocache;
create trigger t_mytest_bi before insert on my_test_t for each row
begin
select seq_mytest_id.nextval into :new.id from dual;
end;
/
-- Test the trigger
insert into my_test_t(string) values ('test');
insert into my_test_t(string) values ('test2');
commit;
select * from my_test_t;
If you're running on Oracle 12c you can define your column as an identity column
https://oracle-base.com/articles/12c/identity-columns-in-oracle-12cr1
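On 12c there is also a middle ground that avoids the trigger without recreating the table: a sequence can be used as the column default. A sketch reusing the sequence from the example above, assuming the existing column has no NULL values:
-- 12c: populate ID from the sequence whenever no explicit value is supplied
ALTER TABLE my_test_t MODIFY (id DEFAULT ON NULL seq_mytest_id.NEXTVAL);
-- the before-insert trigger is then no longer needed
DROP TRIGGER t_mytest_bi;
INSERT INTO my_test_t (string) VALUES ('no trigger needed');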
Hope it helps,
R

How to fetch the system generated check constraint name of table column in oracle

I created my TEST_TABLE table using the query below in Oracle:
CREATE TABLE "PK"."TEST_TABLE"
( "MYNAME" VARCHAR2(50),
"MYVAL1" NUMBER(12,0),
"MYVAL2" NUMBER(12,0),
"MYVAL3" NUMBER(12,0) NOT NULL,
CHECK ("MYVAL1" IS NOT NULL) DEFERRABLE ENABLE NOVALIDATE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
After this, I want to drop the check constraint applied on column MYVAL1.
For this, first I need to fetch the check constraint name on column MYVAL1. Then I can run the ALTER command to drop that constraint.
So how can I fetch the exact system-generated check constraint name on column MYVAL1?
I tried to fetch the data using the query below, but as the search condition is a LONG data type column, it was throwing the error below:
select * from user_constraints
where table_name = 'TEST_TABLE'
and to_lob(search_condition) like '%"MYVAL1" IS NOT NULL%';
ERROR:
ORA-00932: inconsistent datatypes: expected - got LONG
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 23 Column: 6
Any clue?
There are two ways. First (recommended): give names to constraints when creating them. Second: search the ALL_CONS_COLUMNS (or USER_CONS_COLUMNS) system view.
You need something like this:
select constraint_name
from all_cons_columns
where table_name = 'TEST_TABLE'
and owner = 'PK'
and column_name = 'MYVAL1'
See documentation: https://docs.oracle.com/cloud/latest/db121/REFRN/refrn20045.htm#REFRN20045
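For the first (recommended) option, a sketch of naming the check constraint at creation time so it can later be dropped by name (chk_test_myval1 is just an illustrative name):
CREATE TABLE "PK"."TEST_TABLE" (
  "MYNAME"  VARCHAR2(50),
  "MYVAL1"  NUMBER(12,0),
  "MYVAL2"  NUMBER(12,0),
  "MYVAL3"  NUMBER(12,0) NOT NULL,
  CONSTRAINT chk_test_myval1 CHECK ("MYVAL1" IS NOT NULL) DEFERRABLE ENABLE NOVALIDATE
);
ALTER TABLE "PK"."TEST_TABLE" DROP CONSTRAINT chk_test_myval1;
With the second approach, once the query above returns the system-generated name, the same ALTER TABLE ... DROP CONSTRAINT statement applies, just with that name substituted in.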
