Need to ignore "start with" sequence for identity column when comparing schema in oracle db - oracle

We have an Oracle database (18c) on several servers and need to sync the schema from the dev to the prod servers. Since only the schema needs to be synced, not the table contents, we do not need to know the next sequence number of the primary key columns. (And we certainly do not want to update the prod servers with this sequence number.)
We have tried both SQL Developer's Diff Tool and dbForge Schema Compare for Oracle, but both list tables where only this sequence number differs as tables that need to be updated.
I have not found a setting in the SQL Developer Diff Tool that handles this. dbForge Schema Compare for Oracle has an "Ignore START WITH in sequences" option, but it does not seem to work the way I expected: it still marks tables that are identical except for the sequence number as needing an update.
For new tables that only exist in the source DB, the sync script looks like this:
CREATE TABLE TEST (
  ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 102),
  TEXT VARCHAR2(4000 BYTE),
  CONSTRAINT TEST_ID_PK PRIMARY KEY (ID))
LOGGING;
We need that script without the (START WITH 102) part in it.
For a table that exists in both the source and target DB (with no change other than the sequence number), the sync script looks like this:
ALTER TABLE TEST
  MODIFY (ID GENERATED BY DEFAULT ON NULL AS IDENTITY (
    START WITH 114
    INCREMENT BY 1
    MAXVALUE 9999999999999999999999999999
    MINVALUE 1
    CACHE 20
    NOCYCLE
    NOORDER));
In reality this table does not need an update at all; I thought "Ignore START WITH in sequences" would handle this, but apparently not.
Does anyone out there have a solution for us?

Well, I believe it is a very bad idea to use SQL Developer, or any other IDE tool for that matter, to create scripts to be deployed to production. You are describing a clear case of lacking real version control software, like Git or SVN. You shouldn't need to compare databases unless something is wrong, and never for creating DDL scripts.
In this specific case, I would use DBMS_METADATA to create the DDL.
Example
SQL> create table t ( c1 number generated by default on null as identity ( start with 1 increment by 1 ) , c2 number ) ;
Table created.
SQL> insert into t values ( null , 1 ) ;
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> select * from t ;
        C1         C2
---------- ----------
         1          1
         2          1
         3          1
         4          1
In this case SQL Developer shows START WITH 5, because that is the next value of the identity column. You can use DBMS_METADATA.GET_DDL to get DDL that keeps the original START WITH value instead of the current high-water mark.
SQL> begin
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
end;
/
PL/SQL procedure successfully completed.
SQL> select dbms_metadata.get_ddl('TABLE','T') from dual ;
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYS"."T"
( "C1" NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 99
99999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE
NOKEEP NOSCALE NOT NULL ENABLE,
"C2" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
There are several transform parameters; you can, for example, suppress the storage attributes. I always use this combination:
SQL> BEGIN
  2    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
  3    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
  4    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SEGMENT_ATTRIBUTES', true);
  5    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'STORAGE', false);
  6  END;
  7  /
PL/SQL procedure successfully completed.
SQL> select dbms_metadata.get_ddl('TABLE','T') from dual ;
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYS"."T"
( "C1" NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 99
99999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE
NOKEEP NOSCALE NOT NULL ENABLE,
"C2" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "SYSTEM" ;
SQL>
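Note that GET_DDL still emits a START WITH clause. If you want generated scripts (and diffs of them) to ignore that value entirely, one workaround is to normalize it after extraction. A minimal sketch, assuming the "START WITH <n>" formatting shown in the output above:

```sql
-- Sketch: normalize the START WITH value in DBMS_METADATA output so that
-- generated scripts, and diffs of them, ignore the identity high-water mark.
-- The pattern assumes the "START WITH <n>" formatting shown above.
SELECT REGEXP_REPLACE(
         DBMS_METADATA.GET_DDL('TABLE', 'T'),
         'START WITH [0-9]+',
         'START WITH 1'
       ) AS normalized_ddl
FROM dual;
```

REGEXP_REPLACE accepts the CLOB returned by GET_DDL directly, so no intermediate conversion is needed.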
For comparison purposes, you might want to have a look at DBMS_COMPARISON:
https://docs.oracle.com/database/121/ARPLS/d_comparison.htm#ARPLS868
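Keep in mind that DBMS_COMPARISON compares row data rather than object definitions, so it solves a different problem than schema diffing. Still, a minimal sketch of its use (the comparison name and the database link PROD_LINK are hypothetical) looks like this:

```sql
-- Hypothetical sketch: compare rows of TEST between the local database and
-- a remote database reachable via the database link PROD_LINK.
-- DBMS_COMPARISON requires an index (e.g. the primary key) on the table.
DECLARE
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
  consistent BOOLEAN;
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'CMP_TEST',
    schema_name     => USER,
    object_name     => 'TEST',
    dblink_name     => 'PROD_LINK');

  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'CMP_TEST',
    scan_info       => scan_info);

  IF NOT consistent THEN
    DBMS_OUTPUT.PUT_LINE('Divergent rows found, scan id: ' || scan_info.scan_id);
  END IF;
END;
/
```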

Related

Is it ever possible to design an Oracle trigger that modifies same table (guaranteed not same row)?

I need to write a TRIGGER that preserves the old value of a column before it is updated, by inserting or updating that old value into another row of the same table. (Yes, I know.)
The following MERGE/DUAL trickery has served me well, but because in this case I'm inserting into or updating the same table, Oracle complains at runtime. Also, for some reason, I found it unusually difficult to write code that compiles without errors.
Two questions:
Is it ever possible to modify the same table that the trigger is on, even when I can guarantee that the trigger will never update the row that triggered the trigger? Or do I have to do something like (e.g.): insert pending changes into another table, so that a 2nd trigger can merge them back into the original table? (This table is a customer interface, so I can't re-architect this to use a second table for permanently storing old values.)
What's with the compiler errors that don't let me use :old.event_key, but do let me use :old.property_val in the MERGE statement? (declaring a variable old_event_key and assigning it to the value of :old.event_key seems to work) Is there some sort of hidden intermediate language that knows when a column is (part of) the primary key, and prevents you from referencing it via :old.?
Here is the offending code:
create or replace trigger remember_old_status
before update on event_properties
for each row
when (old.property_name = 'CURRENT-STATUS')
declare
  old_event_key varchar2(20);
begin
  old_event_key := :old.event_key;
  merge into event_properties eprop
  using (select 1 from dual) dummy
  on (eprop.event_key = old_event_key
      and eprop.property_name = 'PREVIOUS-STATUS')
  when matched then
    update set property_val = :old.property_val
  when not matched then
    insert (event_key, property_name, property_val)
    values (old_event_key, 'PREVIOUS-STATUS', :old.property_val);
end;
And here's the table:
CREATE TABLE "CUST"."EVENT_PROPERTIES"
( "EVENT_KEY" VARCHAR2(20 BYTE) CONSTRAINT "NN_FLE_FLK" NOT NULL ENABLE,
"PROPERTY_NAME" VARCHAR2(20 BYTE) CONSTRAINT "NN_FLE_PN" NOT NULL ENABLE,
"PROPERTY_VAL" VARCHAR2(80 BYTE),
CONSTRAINT "PX_EVENT_PROPERTIES" PRIMARY KEY ("EVENT_KEY", "PROPERTY_NAME") DEFERRABLE
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CUST_TS" ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CUST_TS" ;
And here are the error messages:
ORA-04091: table CUST.EVENT_PROPERTIES is mutating, trigger/function may not see it
ORA-06512: at "CUST.REMEMBER_OLD_STATUS", line 5
You could use a compound trigger to do this: store the old values in variables in a before-each-row section, then merge in an after-statement section.
This assumes you'll only ever update one row at a time:
create or replace trigger remember_old_status
for update on event_properties
compound trigger

  old_rec event_properties%rowtype;

  before each row is
  begin
    if (:old.property_name = 'CURRENT-STATUS') then
      old_rec.event_key := :old.event_key;
      old_rec.property_name := :old.property_name;
      old_rec.property_val := :old.property_val;
    end if;
  end before each row;

  after statement is
  begin
    if (old_rec.property_name = 'CURRENT-STATUS') then
      merge into event_properties eprop
      using (
        select old_rec.event_key as event_key,
               'PREVIOUS-STATUS' as property_name,
               old_rec.property_val as property_val
        from dual
      ) dummy
      on (eprop.event_key = dummy.event_key
          and eprop.property_name = dummy.property_name)
      when matched then
        update set property_val = old_rec.property_val
      when not matched then
        insert (event_key, property_name, property_val)
        values (dummy.event_key, dummy.property_name, dummy.property_val);
    end if;
  end after statement;

end remember_old_status;
/
Quick test:
insert into event_properties values('SOME_EVENT', 'CURRENT-STATUS', 'A');
1 row inserted.
update event_properties set property_val = 'B' where event_key = 'SOME_EVENT' and property_name = 'CURRENT-STATUS';
1 row updated.
select * from event_properties;
EVENT_KEY PROPERTY_NAME PROPERTY_VAL
-------------------- -------------------- --------------------------------------------------------------------------------
SOME_EVENT CURRENT-STATUS B
SOME_EVENT PREVIOUS-STATUS A
update event_properties set property_val = 'C' where event_key = 'SOME_EVENT' and property_name = 'CURRENT-STATUS';
1 row updated.
select * from event_properties;
EVENT_KEY PROPERTY_NAME PROPERTY_VAL
-------------------- -------------------- --------------------------------------------------------------------------------
SOME_EVENT CURRENT-STATUS C
SOME_EVENT PREVIOUS-STATUS B
If you want to handle multiple rows updated by one statement, the before-each-row section can populate a collection instead, which you then use in the after-statement section.
create or replace trigger remember_old_status
for update on event_properties
compound trigger

  type t_type is table of event_properties%rowtype;
  old_recs t_type := t_type();

  before each row is
  begin
    if (:old.property_name = 'CURRENT-STATUS') then
      old_recs.extend();
      old_recs(old_recs.count).event_key := :old.event_key;
      old_recs(old_recs.count).property_name := :old.property_name;
      old_recs(old_recs.count).property_val := :old.property_val;
    end if;
  end before each row;

  after statement is
  begin
    -- guard: forall over an empty collection would raise an error when the
    -- statement touched no CURRENT-STATUS rows
    if old_recs.count > 0 then
      forall i in old_recs.first..old_recs.last
        merge into event_properties eprop
        using (
          select old_recs(i).event_key as event_key,
                 'PREVIOUS-STATUS' as property_name,
                 old_recs(i).property_val as property_val
          from dual
        ) dummy
        on (eprop.event_key = dummy.event_key
            and eprop.property_name = dummy.property_name)
        when matched then
          update set property_val = old_recs(i).property_val
        when not matched then
          insert (event_key, property_name, property_val)
          values (dummy.event_key, dummy.property_name, dummy.property_val);
    end if;
  end after statement;

end remember_old_status;
/

Conflict creating table with Index and Primary Key

I'm a veteran SQL Server dev who recently moved to a project requiring Oracle, and I'm confused by the error [ORA-02260: table can have only one primary key] that I'm getting on Oracle 11.
I'm attempting to create a reference table, with an index and a primary key.
However, getting errors that my column Partner_ID is already declared. I know I'm missing something simple, but the docs and other sources I've viewed here have not given me a clue. Please help me understand what I'm doing wrong.
Thank you
ALTER TABLE REF_PARTNER
DROP PRIMARY KEY CASCADE;
DROP TABLE REF_PARTNER CASCADE CONSTRAINTS;
CREATE TABLE REF_PARTNER
(
PARTNER_ID NUMBER(10) PRIMARY KEY NOT NULL,
GLOBAL_APPID VARCHAR2(256 BYTE) NOT NULL,
FRIENDLY_NAME VARCHAR2(256 BYTE) NOT NULL,
CREATE_DTS DATE,
MODIFIED_DTS DATE,
LAST_MODIFIED_USER VARCHAR2(40 BYTE)
)
TABLESPACE DATA_1
PCTUSED 0
PCTFREE 5
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
MONITORING;
BEGIN
EXECUTE IMMEDIATE 'DROP SEQUENCE PARTNER_SEQ';
EXCEPTION WHEN OTHERS THEN NULL;
END;
CREATE SEQUENCE PARTNER_SEQ START WITH 1 INCREMENT BY 1 MINVALUE 1 NOMAXVALUE NOCYCLE CACHE 200;
--CREATE UNIQUE INDEX REF_PARTNER_IDX ON REF_PARTNER
--(PARTNER_ID)
--LOGGING
--TABLESPACE INDEX_1
--PCTFREE 10
--INITRANS 2
--MAXTRANS 255
--STORAGE (
-- INITIAL 64K
-- NEXT 64K
-- MAXSIZE UNLIMITED
-- MINEXTENTS 1
-- MAXEXTENTS UNLIMITED
-- PCTINCREASE 0
-- BUFFER_POOL DEFAULT
-- );
--ALTER TABLE REF_PARTNER ADD (
-- CONSTRAINT REF_PARTNER_PK
-- PRIMARY KEY
-- (PARTNER_ID)
-- USING INDEX REF_PARTNER_PK
-- ENABLE VALIDATE);
I assume the error you get is
ORA-01408: such column list already indexed
This is because you create the table with partner_id as the primary key, which automatically creates a unique index on partner_id.
There is no need to create another unique index on partner_id after you have declared it the primary key.
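A minimal sketch of the conflict and of one way to get a named index anyway (the demo table and index names below are hypothetical, not from the question):

```sql
-- Sketch of the conflict: PRIMARY KEY already creates a unique index.
CREATE TABLE ref_partner_demo (
  partner_id NUMBER(10) PRIMARY KEY   -- implicitly creates a unique index
);

-- This would fail with ORA-01408: such column list already indexed
-- CREATE UNIQUE INDEX ref_partner_demo_idx ON ref_partner_demo (partner_id);

-- If you want control over the index name, create the index first and then
-- attach the primary key constraint to it with USING INDEX:
CREATE TABLE ref_partner_demo2 (partner_id NUMBER(10) NOT NULL);
CREATE UNIQUE INDEX ref_partner_demo2_idx ON ref_partner_demo2 (partner_id);
ALTER TABLE ref_partner_demo2
  ADD CONSTRAINT ref_partner_demo2_pk PRIMARY KEY (partner_id)
  USING INDEX ref_partner_demo2_idx;
```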

Toad for Oracle, Editing ID Column to be an auto-increment ID

I work with Toad for Oracle 12.1 for my database. I have a table called TBLEMPLOYEE which already contains some data and has a column called ID whose values increase from 1 to N.
ID Name Gender DateOfBirth Type
------------------------------------
1 Mark Male 10/10/1982 1
2 Mary Female 11/11/1981 2
3 Esther Female 12/12/1984 2
4 Matthew Male 9/9/1983 1
5 John Male 5/5/1985 1
6 Luke Male 6/6/1986 1
Now I want to change the ID column so that it auto-increments when I add new data to the table.
I know that in Toad we can do this when creating a new table with that behavior. For instance, using Create Table, in a newly created column we can set the Default / Virtual / Identity settings to Identity:
And Toad will show a UI with bunch of settings to do that:
And will be automatically translated to something like:
(START WITH 1 INCREMENT BY 1 MINVALUE 1 MAXVALUE 9999999999999999999999999999 CACHE 20 NOCYCLE ORDER NOKEEP)
In the Default / Virtual / Identity settings.
But I can't seem to do the same when I do Alter Table instead of Create Table.
Why is that so?
And since I already have some data in the TBLEMPLOYEE, I want to avoid creating a new table and re-inserting the data if possible.
How can I do that?
This is the current SQL script (if this may help):
ALTER TABLE MYSCHEMA.TBLEMPLOYEE
DROP PRIMARY KEY CASCADE;
DROP TABLE MYSCHEMA.TBLEMPLOYEE CASCADE CONSTRAINTS;
CREATE TABLE MYSCHEMA.TBLEMPLOYEE
(
ID NUMBER NOT NULL,
NAME VARCHAR2(80 BYTE) NOT NULL,
GENDER VARCHAR2(6 BYTE),
DATEOFBIRTH DATE,
EMPLOYEETYPE INTEGER NOT NULL
)
TABLESPACE USERS
RESULT_CACHE (MODE DEFAULT)
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
ALTER TABLE MYSCHEMA.TBLEMPLOYEE ADD (
PRIMARY KEY
(ID)
USING INDEX
TABLESPACE USERS
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
ENABLE VALIDATE);
First of all, your sequence should start with the table's max value + 1, e.g.
(START WITH 7 INCREMENT BY 1 MINVALUE 1 MAXVALUE 9999999999999999999999999999 CACHE 20 NOCYCLE ORDER NOKEEP)
If you want to automatically populate the ID value and you're not running Oracle 12c, I suggest you use a trigger:
drop sequence seq_mytest_id;
truncate table my_test_t;
drop table my_test_t;
create table my_test_t (id number, string varchar2(30));
-- prepopulate with fixed values for the id
insert into my_test_t(id, string) values (1,'test');
insert into my_test_t(id, string) values (2,'test');
insert into my_test_t(id, string) values (3,'test');
insert into my_test_t(id, string) values (4,'test');
insert into my_test_t(id, string) values (5,'test');
insert into my_test_t(id, string) values (6,'test');
commit;
--Now create the sequence and the trigger for automatically
--populating the ID column
create sequence seq_mytest_id start with 7 increment by 1 nocycle nocache;
create trigger t_mytest_bi before insert on my_test_t for each row
begin
select seq_mytest_id.nextval into :new.id from dual;
end;
/
-- Test the trigger
insert into my_test_t(string) values ('test');
insert into my_test_t(string) values ('test2');
commit;
select * from my_test_t;
If you're running Oracle 12c, you can define your column as an identity column:
https://oracle-base.com/articles/12c/identity-columns-in-oracle-12cr1
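Note that 12c does not let you convert an existing populated column into an identity column directly. A common workaround, sketched below with a hypothetical sequence name, is to give the existing column a sequence-backed default, which 12c allows:

```sql
-- Sketch for 12c: instead of converting ID into an identity column,
-- default it to a sequence (sequence name is hypothetical).
CREATE SEQUENCE tblemployee_seq START WITH 7 INCREMENT BY 1;

-- DEFAULT ON NULL fills in the sequence value whenever an insert
-- omits ID or supplies NULL.
ALTER TABLE tblemployee
  MODIFY (id DEFAULT ON NULL tblemployee_seq.NEXTVAL);

-- Inserts can now omit the ID entirely:
-- INSERT INTO tblemployee (name, gender, dateofbirth, employeetype)
-- VALUES ('Peter', 'Male', DATE '1987-07-07', 1);
```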
Hope it helps,
R

Oracle SQL Developer 4.0 - How to remove double quotes in scripts

I'm running Oracle SQL Developer 4.0. When I script out a table, the owner, table name, column names, constraint names, etc. are enclosed in double quotes. I looked through Tools -> Preferences and was not able to find any option to turn this off. Does anyone know how to script a table without these quotes?
Thank you
I don't think you can do it using those methods. They both appear to use dbms_metadata.get_ddl under the hood, and that doesn't have an option to leave identifiers unquoted. It looks like export uses that package too; Data Modeler has an option to quote identifiers, but I'm not sure whether that helps you.
You can get rid of them by querying from a worksheet if you want though, as long as the DDL is less than 32K. With default settings:
create table t42 (id number, str varchar2(10) default 'ABC',
constraint t42_pk primary key (id));
create index i42 on t42(str);
set long 1000
select dbms_metadata.get_ddl('TABLE', 'T42', user) from dual;
select dbms_metadata.get_dependent_ddl('INDEX', 'T42', user) from dual;
CREATE TABLE "STACKOVERFLOW"."T42"
( "ID" NUMBER,
"STR" VARCHAR2(10) DEFAULT 'ABC',
CONSTRAINT "T42_PK" PRIMARY KEY ("ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
TABLESPACE "USERS" ENABLE
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "USERS"
CREATE UNIQUE INDEX "STACKOVERFLOW"."T42_PK" ON "STACKOVERFLOW"."T42" ("ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255
TABLESPACE "USERS"
CREATE INDEX "STACKOVERFLOW"."I42" ON "STACKOVERFLOW"."T42" ("STR")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
TABLESPACE "USERS"
With a little manipulation:
begin
dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,
'PRETTY', false);
dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,
'SQLTERMINATOR', true);
dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,
'CONSTRAINTS_AS_ALTER', true);
dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,
'SEGMENT_ATTRIBUTES', false);
end;
/
select replace(dbms_metadata.get_ddl('TABLE', 'T42', user), '"', null)
from dual;
select replace(dbms_metadata.get_dependent_ddl('INDEX', 'T42', user), '"', null)
from dual;
CREATE TABLE STACKOVERFLOW.T42 (ID NUMBER, STR VARCHAR2(10) DEFAULT 'ABC') ;
ALTER TABLE STACKOVERFLOW.T42 ADD CONSTRAINT T42_PK PRIMARY KEY (ID) ENABLE;
CREATE UNIQUE INDEX STACKOVERFLOW.T42_PK ON STACKOVERFLOW.T42 (ID) ;
CREATE INDEX STACKOVERFLOW.I42 ON STACKOVERFLOW.T42 (STR) ;
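Be aware that a blanket replace also strips double quotes that occur inside string literals (for example in a DEFAULT clause). A slightly safer sketch only unquotes plain uppercase identifiers, which is what dbms_metadata emits for standard names:

```sql
-- Sketch: strip quotes only around standard uppercase identifiers, leaving
-- quotes inside string literals (and around lowercase names) untouched.
SELECT REGEXP_REPLACE(
         DBMS_METADATA.GET_DDL('TABLE', 'T42', user),
         '"([A-Z][A-Z0-9_$#]*)"',
         '\1'
       )
FROM dual;
```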

Performance issues only on the production database

I'm having some performance issues when querying a table in a production database. While the query runs in 2.1 seconds in the test database (returning 8,640 of 28 million records), in production it takes 2.05 minutes (returning 8,640 of 31 million records). I'm having a hard time finding the problem since I'm not an Oracle expert.
Since the explain plan in both databases shows the correct index usage, I'm inclined to think the problem lies in how the table and indexes were created.
I've noticed some small differences between the SQL scripts used for the table creation:
Test database:
create table TB_PONTO_ENE
(
cd_ponto NUMBER(10) not null,
cd_fonte NUMBER(10),
cd_medidor NUMBER(10),
cd_usuario NUMBER(10),
dt_hr_insercao DATE,
dt_hr_instante DATE not null,
dt_hr_hora DATE,
dt_hr_dia DATE,
dt_hr_mes DATE,
dt_hr_instante_hv DATE,
dt_hr_hora_hv DATE,
dt_hr_dia_hv DATE,
dt_hr_mes_hv DATE,
vl_eneat_del FLOAT,
vl_eneat_rec FLOAT,
vl_enere_del FLOAT,
vl_enere_rec FLOAT,
vl_eneat_del_cp FLOAT,
vl_eneat_rec_cp FLOAT,
vl_enere_del_cp FLOAT,
vl_enere_rec_cp FLOAT
)
tablespace TELEMEDICAO
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 64K
minextents 1
maxextents unlimited
);
alter table TB_PONTO_ENE
add constraint CP_TB_PONTO_ENE primary key (CD_PONTO, DT_HR_INSTANTE)
using index
tablespace TELEMEDICAO
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 64K
minextents 1
maxextents unlimited
);
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_FONTE foreign key (CD_FONTE)
references TB_FONTE (CD_FONTE) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_MEDIDOR foreign key (CD_MEDIDOR)
references TB_MEDIDOR (CD_MEDIDOR) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_PONTO foreign key (CD_PONTO)
references TB_PONTO (CD_PONTO) on delete cascade;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_USUARIO foreign key (CD_USUARIO)
references TB_USUARIO (CD_USUARIO) on delete set null
disable;
Production database:
create table TB_PONTO_ENE
(
cd_ponto NUMBER(10) not null,
cd_fonte NUMBER(10),
cd_medidor NUMBER(10),
cd_usuario NUMBER(10),
dt_hr_insercao DATE,
dt_hr_instante DATE not null,
dt_hr_hora DATE,
dt_hr_dia DATE,
dt_hr_mes DATE,
dt_hr_instante_hv DATE,
dt_hr_hora_hv DATE,
dt_hr_dia_hv DATE,
dt_hr_mes_hv DATE,
vl_eneat_del FLOAT,
vl_eneat_rec FLOAT,
vl_enere_del FLOAT,
vl_enere_rec FLOAT,
vl_eneat_del_cp FLOAT,
vl_eneat_rec_cp FLOAT,
vl_enere_del_cp FLOAT,
vl_enere_rec_cp FLOAT
)
tablespace TELEMEDICAO
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 64K
next 5M
minextents 1
maxextents unlimited
pctincrease 0
);
alter table TB_PONTO_ENE
add constraint CP_TB_PONTO_ENE primary key (CD_PONTO, DT_HR_INSTANTE)
using index
tablespace MEDICAO_NDX
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 64K
next 1M
minextents 1
maxextents unlimited
pctincrease 0
);
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_FONTE foreign key (CD_FONTE)
references TB_FONTE (CD_FONTE) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_MEDIDOR foreign key (CD_MEDIDOR)
references TB_MEDIDOR (CD_MEDIDOR) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_PONTO foreign key (CD_PONTO)
references TB_PONTO (CD_PONTO) on delete cascade;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_USUARIO foreign key (CD_USUARIO)
references TB_USUARIO (CD_USUARIO) on delete set null;
The production database puts the indexes in another tablespace. Another difference is the next 5M in the storage clause (no value is defined in the test database).
When looking at the index properties, I also see some differences:
Test database:
AVG_DATA_BLOCKS_PER_KEY 1
AVG_LEAF_BLOCKS_PER_KEY 1
BLEVEL 2
BUFFER_POOL DEFAULT
CLUSTERING_FACTOR 611494
COMPRESSION DISABLED
DEGREE 1
DISTINCT_KEYS 28568389
DROPPED NO
GENERATED N
GLOBAL_STATS YES
INDEX_NAME CP_TB_PONTO_ENE
INDEX_TYPE NORMAL
INITIAL_EXTENT 65536
INI_TRANS 2
INSTANCES 1
IOT_REDUNDANT_PKEY_ELIM NO
JOIN_INDEX NO
LAST_ANALYZED 21/07/2010 22:08:34
LEAF_BLOCKS 85809
LOGGING YES
MAX_EXTENTS 2147483645
MAX_TRANS 255
MIN_EXTENTS 1
NUM_ROWS 28568389
PARTITIONED NO
PCT_FREE 10
SAMPLE_SIZE 377209
SECONDARY N
STATUS VALID
TABLESPACE_NAME TELEMEDICAO
TABLE_NAME TB_PONTO_ENE
TABLE_TYPE TABLE
TEMPORARY N
UNIQUENESS UNIQUE
USER_STATS NO
Production database:
AVG_DATA_BLOCKS_PER_KEY 1
AVG_LEAF_BLOCKS_PER_KEY 1
BLEVEL 2
BUFFER_POOL DEFAULT
CLUSTERING_FACTOR 10154395
COMPRESSION DISABLED
DEGREE 1
DISTINCT_KEYS 14004395
GENERATED N
GLOBAL_STATS YES
INDEX_NAME CP_TB_PONTO_ENE
INDEX_TYPE NORMAL
INITIAL_EXTENT 65536
INI_TRANS 2
INSTANCES 1
JOIN_INDEX NO
LAST_ANALYZED 05/03/2010 08:45:19
LEAF_BLOCKS 42865
LOGGING YES
MAX_EXTENTS 2147483645
MAX_TRANS 255
MIN_EXTENTS 1
NEXT_EXTENT 1048576
NUM_ROWS 14004395
PARTITIONED NO
PCT_FREE 10
PCT_INCREASE 0
SAMPLE_SIZE 2800879
SECONDARY N
STATUS VALID
TABLESPACE_NAME MEDICAO_NDX
TABLE_NAME TB_PONTO_ENE
TABLE_TYPE TABLE
TEMPORARY N
UNIQUENESS UNIQUE
USER_STATS NO
Two other things have come to my attention: the explain plan for select count(*) from the table shows that the index is used in the test database, but a full table scan in the production database. That led to another observation: the test database index is 160MB while the production index is more than 1GB (and we don't delete from this table).
Can anyone point me to the solution?
UPDATE
Here are the execution plans:
Test database:
Execution Plan
----------------------------------------------------------
Plan hash value: 1441290166
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 18767 (4)| 00:03:46 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| IDX_HV_TB_PONTO_ENE | 28M| 18767 (4)| 00:03:46 |
-------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
111 recursive calls
0 db block gets
83586 consistent gets
83533 physical reads
0 redo size
422 bytes sent via SQL*Net to client
399 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
1 rows processed
Production database
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=RULE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TB_PONTO_ENE'
Statistics
----------------------------------------------------------
1 recursive calls
3 db block gets
605327 consistent gets
603698 physical reads
180 redo size
201 bytes sent via SQL*Net to client
242 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
UPDATE 2
The production server is running Oracle 9.2.0.
UPDATE 3
Here are the statistics for the execution with the optimizer mode set to CHOOSE:
SQL> SELECT dt_hr_instante, vl_eneat_del,vl_eneat_rec,vl_enere_del, vl_enere_rec FROM tb_ponto_ene WHERE cd_ponto = 31 AND dt_hr_instante BETWEEN to_date('01/06/2010 00:05:00','dd/mm/yyyy hh24:mi:ss') AND to_date('01/07/2010 00:00:00', 'dd/mm/yyyy hh24:mi:ss');
8640 rows selected.
Elapsed: 00:01:49.51
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=36)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TB_PONTO_ENE' (Cost=4 Card=1 Bytes=36)
2 1 INDEX (RANGE SCAN) OF 'CP_TB_PONTO_ENE' (UNIQUE) (Cost=3 Card=1)
Statistics
----------------------------------------------------------
119 recursive calls
0 db block gets
9169 consistent gets
7438 physical reads
0 redo size
308524 bytes sent via SQL*Net to client
4267 bytes received via SQL*Net from client
577 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
8640 rows processed
The test database's index properties include the IOT_REDUNDANT_PKEY_ELIM and DROPPED columns, but the production index's properties do not. These columns were added in Oracle 10g.
Is the production database perhaps running the old 9i version and the test database 10g? If so, I'd consider that a more significant difference than anything else.
That said, if "select count(*) from thetable" is not using a primary key index, that is very odd. The index statistics are badly out of date (14,004,395 rows when you suggest there are over 30 million, and last gathered in March). If the table has doubled in size in the last six months, and its stats are even older, then that might be the issue.
The autotrace plan for production says the RULE optimizer is in use. If you look at the Oracle 9i Performance Tuning Guide, section "RBO Path 15: Full Table Scan", it clearly states that a full table scan will be used.
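Given the RULE optimizer in the production plan and the stale statistics, refreshing statistics so the cost-based optimizer can work is a reasonable first step. A sketch (the table name is from the question; the parameter choices are typical defaults, not a tuning recommendation):

```sql
-- Refresh table and index statistics so the CBO sees current row counts.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'TB_PONTO_ENE',
    cascade          => TRUE,   -- gather the index statistics as well
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/

-- And make sure the session is not forced into rule-based optimization:
ALTER SESSION SET optimizer_mode = CHOOSE;  -- 9i-era setting
```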