I'm having some performance issues when querying a table in a production database. While the query runs in 2.1 seconds in the test database (returning 8640 of 28 million records), in production it takes 2.05 minutes (returning 8640 of 31 million records). I'm having a hard time finding the problem since I'm not an Oracle expert.
Since the explain plan in both databases shows the correct index usage, I'm inclined to think that the problem lies in the table/index creation.
I've noticed some small differences between the SQL scripts used for the table creation:
Test database:
create table TB_PONTO_ENE
(
cd_ponto NUMBER(10) not null,
cd_fonte NUMBER(10),
cd_medidor NUMBER(10),
cd_usuario NUMBER(10),
dt_hr_insercao DATE,
dt_hr_instante DATE not null,
dt_hr_hora DATE,
dt_hr_dia DATE,
dt_hr_mes DATE,
dt_hr_instante_hv DATE,
dt_hr_hora_hv DATE,
dt_hr_dia_hv DATE,
dt_hr_mes_hv DATE,
vl_eneat_del FLOAT,
vl_eneat_rec FLOAT,
vl_enere_del FLOAT,
vl_enere_rec FLOAT,
vl_eneat_del_cp FLOAT,
vl_eneat_rec_cp FLOAT,
vl_enere_del_cp FLOAT,
vl_enere_rec_cp FLOAT
)
tablespace TELEMEDICAO
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 64K
minextents 1
maxextents unlimited
);
alter table TB_PONTO_ENE
add constraint CP_TB_PONTO_ENE primary key (CD_PONTO, DT_HR_INSTANTE)
using index
tablespace TELEMEDICAO
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 64K
minextents 1
maxextents unlimited
);
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_FONTE foreign key (CD_FONTE)
references TB_FONTE (CD_FONTE) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_MEDIDOR foreign key (CD_MEDIDOR)
references TB_MEDIDOR (CD_MEDIDOR) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_PONTO foreign key (CD_PONTO)
references TB_PONTO (CD_PONTO) on delete cascade;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_USUARIO foreign key (CD_USUARIO)
references TB_USUARIO (CD_USUARIO) on delete set null
disable;
Production database:
create table TB_PONTO_ENE
(
cd_ponto NUMBER(10) not null,
cd_fonte NUMBER(10),
cd_medidor NUMBER(10),
cd_usuario NUMBER(10),
dt_hr_insercao DATE,
dt_hr_instante DATE not null,
dt_hr_hora DATE,
dt_hr_dia DATE,
dt_hr_mes DATE,
dt_hr_instante_hv DATE,
dt_hr_hora_hv DATE,
dt_hr_dia_hv DATE,
dt_hr_mes_hv DATE,
vl_eneat_del FLOAT,
vl_eneat_rec FLOAT,
vl_enere_del FLOAT,
vl_enere_rec FLOAT,
vl_eneat_del_cp FLOAT,
vl_eneat_rec_cp FLOAT,
vl_enere_del_cp FLOAT,
vl_enere_rec_cp FLOAT
)
tablespace TELEMEDICAO
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 64K
next 5M
minextents 1
maxextents unlimited
pctincrease 0
);
alter table TB_PONTO_ENE
add constraint CP_TB_PONTO_ENE primary key (CD_PONTO, DT_HR_INSTANTE)
using index
tablespace MEDICAO_NDX
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 64K
next 1M
minextents 1
maxextents unlimited
pctincrease 0
);
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_FONTE foreign key (CD_FONTE)
references TB_FONTE (CD_FONTE) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_MEDIDOR foreign key (CD_MEDIDOR)
references TB_MEDIDOR (CD_MEDIDOR) on delete set null;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_PONTO foreign key (CD_PONTO)
references TB_PONTO (CD_PONTO) on delete cascade;
alter table TB_PONTO_ENE
add constraint CE_PENE_CD_USUARIO foreign key (CD_USUARIO)
references TB_USUARIO (CD_USUARIO) on delete set null;
The production database puts the indexes in another tablespace. Another difference is the NEXT 5M in the storage clause (no NEXT value is defined in the test database).
When looking at the index properties, I also see some differences:
Test database:
AVG_DATA_BLOCKS_PER_KEY 1
AVG_LEAF_BLOCKS_PER_KEY 1
BLEVEL 2
BUFFER_POOL DEFAULT
CLUSTERING_FACTOR 611494
COMPRESSION DISABLED
DEGREE 1
DISTINCT_KEYS 28568389
DROPPED NO
GENERATED N
GLOBAL_STATS YES
INDEX_NAME CP_TB_PONTO_ENE
INDEX_TYPE NORMAL
INITIAL_EXTENT 65536
INI_TRANS 2
INSTANCES 1
IOT_REDUNDANT_PKEY_ELIM NO
JOIN_INDEX NO
LAST_ANALYZED 21/07/2010 22:08:34
LEAF_BLOCKS 85809
LOGGING YES
MAX_EXTENTS 2147483645
MAX_TRANS 255
MIN_EXTENTS 1
NUM_ROWS 28568389
PARTITIONED NO
PCT_FREE 10
SAMPLE_SIZE 377209
SECONDARY N
STATUS VALID
TABLESPACE_NAME TELEMEDICAO
TABLE_NAME TB_PONTO_ENE
TABLE_TYPE TABLE
TEMPORARY N
UNIQUENESS UNIQUE
USER_STATS NO
Production database:
AVG_DATA_BLOCKS_PER_KEY 1
AVG_LEAF_BLOCKS_PER_KEY 1
BLEVEL 2
BUFFER_POOL DEFAULT
CLUSTERING_FACTOR 10154395
COMPRESSION DISABLED
DEGREE 1
DISTINCT_KEYS 14004395
GENERATED N
GLOBAL_STATS YES
INDEX_NAME CP_TB_PONTO_ENE
INDEX_TYPE NORMAL
INITIAL_EXTENT 65536
INI_TRANS 2
INSTANCES 1
JOIN_INDEX NO
LAST_ANALYZED 05/03/2010 08:45:19
LEAF_BLOCKS 42865
LOGGING YES
MAX_EXTENTS 2147483645
MAX_TRANS 255
MIN_EXTENTS 1
NEXT_EXTENT 1048576
NUM_ROWS 14004395
PARTITIONED NO
PCT_FREE 10
PCT_INCREASE 0
SAMPLE_SIZE 2800879
SECONDARY N
STATUS VALID
TABLESPACE_NAME MEDICAO_NDX
TABLE_NAME TB_PONTO_ENE
TABLE_TYPE TABLE
TEMPORARY N
UNIQUENESS UNIQUE
USER_STATS NO
Two other things have come to my attention: the explain plan for select count(*) from the table shows that the index is used in the test database, but shows a full table scan in the production database. That led me to another observation: the test database index is 160 MB while the production index is more than 1 GB (and we don't do deletes on this table).
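For reference, I measured the index segments with a query along these lines, run as the owning user on each database:
SELECT segment_name, ROUND(bytes / 1024 / 1024) AS size_mb
FROM user_segments
WHERE segment_name = 'CP_TB_PONTO_ENE';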
Can anyone point me to the solution?
UPDATE
Here are the execution plans:
Test database:
Execution Plan
----------------------------------------------------------
Plan hash value: 1441290166
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 18767 (4)| 00:03:46 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| IDX_HV_TB_PONTO_ENE | 28M| 18767 (4)| 00:03:46 |
-------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
111 recursive calls
0 db block gets
83586 consistent gets
83533 physical reads
0 redo size
422 bytes sent via SQL*Net to client
399 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
1 rows processed
Production database:
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=RULE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TB_PONTO_ENE'
Statistics
----------------------------------------------------------
1 recursive calls
3 db block gets
605327 consistent gets
603698 physical reads
180 redo size
201 bytes sent via SQL*Net to client
242 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
UPDATE 2
The production server is running Oracle 9.2.0.
UPDATE 3
Here are the statistics for the execution with the optimizer mode set to CHOOSE:
SQL> SELECT dt_hr_instante, vl_eneat_del,vl_eneat_rec,vl_enere_del, vl_enere_rec FROM tb_ponto_ene WHERE cd_ponto = 31 AND dt_hr_instante BETWEEN to_date('01/06/2010 00:05:00','dd/mm/yyyy hh24:mi:ss') AND to_date('01/07/2010 00:00:00', 'dd/mm/yyyy hh24:mi:ss');
8640 rows selected.
Elapsed: 00:01:49.51
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=36)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TB_PONTO_ENE' (Cost=4 Card=1 Bytes=36)
2 1 INDEX (RANGE SCAN) OF 'CP_TB_PONTO_ENE' (UNIQUE) (Cost=3 Card=1)
Statistics
----------------------------------------------------------
119 recursive calls
0 db block gets
9169 consistent gets
7438 physical reads
0 redo size
308524 bytes sent via SQL*Net to client
4267 bytes received via SQL*Net from client
577 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
8640 rows processed
The test database's index properties include the IOT_REDUNDANT_PKEY_ELIM and DROPPED columns, but the production index properties do not. Those columns were added in Oracle 10g.
Is the production database perhaps running the old 9i version while the test database runs 10g? If so, I'd consider that a more significant difference than anything else.
That said, if select count(*) from the table is not using the primary key index, that is very odd. The index statistics are badly out of date (14,004,395 rows when you suggest there are over 30 million, and last gathered in March). If the table has doubled in size in the last six months, and its stats are even older, that could well be the issue.
The autotrace plan for production says RULE optimizer. If you look at the Oracle Performance Tuning documentation (9i), section "RBO Path 15: Full Table Scan", it clearly states that a full table scan will be used.
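A rough sketch of what regathering the statistics could look like on 9i; the owner name below is a placeholder, and cascade => TRUE refreshes the index statistics along with the table's:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'YOUR_SCHEMA',  -- placeholder, substitute the real owner
    tabname          => 'TB_PONTO_ENE',
    estimate_percent => 10,             -- sample 10% of the rows
    cascade          => TRUE);          -- also gather stats on the indexes
END;
/
ALTER SESSION SET optimizer_mode = CHOOSE;
With fresh statistics and the session no longer forced to RULE, the cost-based optimizer should at least have a chance to pick the index.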
We have an Oracle database (18c) on several servers, and need to sync the schema from dev to prod servers. Since it is only the schema that needs to be synced, and not the content of the tables, we do not need to know the next sequence number of the primary key columns. (And we certainly do not want to update the prod servers with this sequence number.)
We have tried both SQL Developer's Diff Tool and dbForge Schema Compare for Oracle, but they both list tables where only this sequence number differs as tables that need to be updated.
I have not found a setting in SQL Developer's Diff Tool that handles this. dbForge Schema Compare for Oracle has an Ignore START WITH in sequences option, but it does not seem to work as I thought, since it still marks tables that are equal except for the sequence number as tables that need an update.
For new tables that only exist in the source db, the sync script will be like this:
CREATE TABLE TEST (
ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(
START WITH 102),
TEXT VARCHAR2(4000 BYTE),
CONSTRAINT TEST_ID_PK PRIMARY KEY (ID))
LOGGING;
We need that script without the (START WITH 102) part in it.
For a table that exists in both the source and target db (with no other change than the sequence number), the sync script will be like this:
ALTER TABLE TEST
MODIFY(ID GENERATED BY DEFAULT ON NULL AS IDENTITY(
START WITH 114
INCREMENT BY 1
MAXVALUE 9999999999999999999999999999
MINVALUE 1
CACHE 20
NOCYCLE
NOORDER));
The reality here is that this is a table that does not need an update, and I thought that Ignore START WITH in sequence would handle this, but apparently not.
Anyone out there have a solution for us?
Well, I believe it is a very bad idea to use SQL Developer, or any other IDE tool for that matter, to create scripts to be deployed on production. You are describing a clear case of lacking real version control software, like Git or SVN. You shouldn't need to compare databases unless something is wrong, and never just to create DDL scripts.
In this specific case, I would use DBMS_METADATA to create the DDLs.
Example
SQL> create table t ( c1 number generated by default on null as identity ( start with 1 increment by 1 ) , c2 number ) ;
Table created.
SQL> insert into t values ( null , 1 ) ;
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> r
1* insert into t values ( null , 1 )
1 row created.
SQL> select * from t ;
C1 C2
---------- ----------
1 1
2 1
3 1
4 1
In this case SQL Developer shows START WITH 5, because that is the next value of the identity column. You can use DBMS_METADATA.GET_DDL to get the right DDL without this clause.
SQL> begin
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
end;
/
PL/SQL procedure successfully completed.
SQL> select dbms_metadata.get_ddl('TABLE','T') from dual
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYS"."T"
( "C1" NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 99
99999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE
NOKEEP NOSCALE NOT NULL ENABLE,
"C2" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
There are several transform options, for example to leave out the storage attributes. I always use this one:
SQL> BEGIN
  2    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
  3    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
  4    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SEGMENT_ATTRIBUTES', true);
  5    DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'STORAGE', false);
  6  END;
  7  /
PL/SQL procedure successfully completed.
SQL> select dbms_metadata.get_ddl('TABLE','T') from dual ;
DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYS"."T"
( "C1" NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 99
99999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE
NOKEEP NOSCALE NOT NULL ENABLE,
"C2" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "SYSTEM" ;
SQL>
For comparison purposes, you might want to have a look into DBMS_COMPARISON
https://docs.oracle.com/database/121/ARPLS/d_comparison.htm#ARPLS868
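Note that DBMS_COMPARISON compares row data across a database link rather than DDL, so it addresses a different problem than a schema diff. A minimal sketch, assuming a database link named PROD_LINK and reusing the TEST table from the question, which has a primary key (DBMS_COMPARISON requires a usable index); both names are assumptions:
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'CMP_TEST',
    schema_name     => USER,
    object_name     => 'TEST',
    dblink_name     => 'PROD_LINK');
END;
/
DECLARE
  l_consistent BOOLEAN;
  l_scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  l_consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'CMP_TEST',
    scan_info       => l_scan_info);
  DBMS_OUTPUT.PUT_LINE(CASE WHEN l_consistent THEN 'in sync' ELSE 'differences found' END);
END;
/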
Hi, I have created a table that contains a CLOB column:
CREATE TABLE MQ_GET_CLOB
(
SEQ_NO NUMBER NOT NULL,
MESSAGE_TEXT CLOB,
MESSAGE_LENGTH NUMBER NOT NULL,
STATUS VARCHAR2(15 BYTE)
)
TABLESPACE DATA_L1
LOB (MESSAGE_TEXT) STORE AS
(TABLESPACE DATA_L1
STORAGE (INITIAL 6144)
CHUNK 4000
NOCACHE LOGGING);
After creating the table, when I looked at the metadata, I found that the LOB storage clause had been skipped.
CREATE TABLE MQ_GET_CLOB
(
SEQ_NO NUMBER NOT NULL,
MESSAGE_TEXT CLOB,
MESSAGE_LENGTH NUMBER NOT NULL,
STATUS VARCHAR2(15 BYTE)
)
TABLESPACE DATA_L1
PCTUSED 40
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
Which leads to
ORA-01658: unable to create INITIAL extent for segment in tablespace
SYSAUX
I am not sure why the table is using the SYSAUX tablespace.
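To check where the LOB segment is actually recorded, I queried USER_LOBS like this:
SELECT column_name, segment_name, tablespace_name
FROM user_lobs
WHERE table_name = 'MQ_GET_CLOB';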
Has anyone faced the same issue before? Is there something I have missed?
Thanks.
I'm a veteran SQL Server dev who recently moved to a project requiring Oracle, and I'm confused by the error [ORA-02260: table can have only one primary key] that I'm getting on Oracle 11.
I'm attempting to create a reference table, with an index and a primary key.
However, I'm getting errors saying that my column Partner_ID is already declared. I know I'm missing something simple, but the docs and other sources I've viewed here have not given me a clue. Please help me understand what I'm doing wrong.
Thank you
ALTER TABLE REF_PARTNER
DROP PRIMARY KEY CASCADE;
DROP TABLE REF_PARTNER CASCADE CONSTRAINTS;
CREATE TABLE REF_PARTNER
(
PARTNER_ID NUMBER(10) PRIMARY KEY NOT NULL,
GLOBAL_APPID VARCHAR2(256 BYTE) NOT NULL,
FRIENDLY_NAME VARCHAR2(256 BYTE) NOT NULL,
CREATE_DTS DATE,
MODIFIED_DTS DATE,
LAST_MODIFIED_USER VARCHAR2(40 BYTE)
)
TABLESPACE DATA_1
PCTUSED 0
PCTFREE 5
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
MONITORING;
BEGIN
EXECUTE IMMEDIATE 'DROP SEQUENCE PARTNER_SEQ';
EXCEPTION WHEN OTHERS THEN NULL;
END;
CREATE SEQUENCE PARTNER_SEQ START WITH 1 INCREMENT BY 1 MINVALUE 1 NOMAXVALUE NOCYCLE CACHE 200;
--CREATE UNIQUE INDEX REF_PARTNER_IDX ON REF_PARTNER
--(PARTNER_ID)
--LOGGING
--TABLESPACE INDEX_1
--PCTFREE 10
--INITRANS 2
--MAXTRANS 255
--STORAGE (
-- INITIAL 64K
-- NEXT 64K
-- MAXSIZE UNLIMITED
-- MINEXTENTS 1
-- MAXEXTENTS UNLIMITED
-- PCTINCREASE 0
-- BUFFER_POOL DEFAULT
-- );
--ALTER TABLE REF_PARTNER ADD (
-- CONSTRAINT REF_PARTNER_PK
-- PRIMARY KEY
-- (PARTNER_ID)
-- USING INDEX REF_PARTNER_PK
-- ENABLE VALIDATE);
I assume the error you get is
ORA-01408: such column list already indexed.
This is because you create the table with partner_id as the primary key. This automatically creates a unique index on partner_id.
There is no need to create a unique index on partner_id after you have declared it to be the primary key.
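You can confirm this by querying the data dictionary; the unique index created implicitly for the primary key shows up there:
SELECT index_name, uniqueness, tablespace_name
FROM user_indexes
WHERE table_name = 'REF_PARTNER';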
I work with Toad for Oracle 12.1 for my database. I have a table called TBLEMPLOYEE that already contains some data, with a column called ID whose values increase from 1 to N.
ID Name Gender DateOfBirth Type
------------------------------------
1 Mark Male 10/10/1982 1
2 Mary Female 11/11/1981 2
3 Esther Female 12/12/1984 2
4 Matthew Male 9/9/1983 1
5 John Male 5/5/1985 1
6 Luke Male 6/6/1986 1
Now I want to change the column ID so that it auto-increments when I add new data to the table.
I know that in Toad we can get this behavior when creating a new table. For instance, using Create Table, we can set the new column's Default / Virtual / Identity setting to Identity:
And Toad will show a UI with a bunch of settings to do that:
And it will be automatically translated to something like:
(START WITH 1 INCREMENT BY 1 MINVALUE 1 MAXVALUE 9999999999999999999999999999 CACHE 20 NOCYCLE ORDER NOKEEP)
In the Default / Virtual / Identity settings.
But I can't seem to do the same when I do Alter Table instead of Create Table.
Why is that so?
And since I already have some data in TBLEMPLOYEE, I want to avoid creating a new table and re-inserting the data if possible.
How can I do that?
This is the current SQL script (if this may help):
ALTER TABLE MYSCHEMA.TBLEMPLOYEE
DROP PRIMARY KEY CASCADE;
DROP TABLE MYSCHEMA.TBLEMPLOYEE CASCADE CONSTRAINTS;
CREATE TABLE MYSCHEMA.TBLEMPLOYEE
(
ID NUMBER NOT NULL,
NAME VARCHAR2(80 BYTE) NOT NULL,
GENDER VARCHAR2(6 BYTE),
DATEOFBIRTH DATE,
EMPLOYEETYPE INTEGER NOT NULL
)
TABLESPACE USERS
RESULT_CACHE (MODE DEFAULT)
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
ALTER TABLE MYSCHEMA.TBLEMPLOYEE ADD (
PRIMARY KEY
(ID)
USING INDEX
TABLESPACE USERS
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
ENABLE VALIDATE);
First of all, your sequence should start with the max value + 1 from the table, e.g.
(START WITH 7 INCREMENT BY 1 MINVALUE 1 MAXVALUE 9999999999999999999999999999 CACHE 20 NOCYCLE ORDER NOKEEP)
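If you don't want to hard-code the starting value, a small sketch like this derives it from the table (the sequence name is just an example):
DECLARE
  l_start NUMBER;
BEGIN
  SELECT NVL(MAX(id), 0) + 1 INTO l_start FROM tblemployee;
  EXECUTE IMMEDIATE 'CREATE SEQUENCE seq_tblemployee_id START WITH '
                    || l_start || ' INCREMENT BY 1 NOCACHE NOCYCLE';
END;
/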
If you want to automatically populate the value for the ID and you're not running on Oracle 12c, I suggest you use a trigger:
drop sequence seq_mytest_id;
truncate table my_test_t;
drop table my_test_t;
create table my_test_t (id number, string varchar2(30));
-- prepopulate with fixed values for the id
insert into my_test_t(id, string) values (1,'test');
insert into my_test_t(id, string) values (2,'test');
insert into my_test_t(id, string) values (3,'test');
insert into my_test_t(id, string) values (4,'test');
insert into my_test_t(id, string) values (5,'test');
insert into my_test_t(id, string) values (6,'test');
commit;
--Now create the sequence and the trigger for automatically
--populating the ID column
create sequence seq_mytest_id start with 7 increment by 1 nocycle nocache;
create trigger t_mytest_bi before insert on my_test_t for each row
begin
select seq_mytest_id.nextval into :new.id from dual;
end;
/
-- Test the trigger
insert into my_test_t(string) values ('test');
insert into my_test_t(string) values ('test2');
commit;
select * from my_test_t;
If you're running on Oracle 12c, you can define your column as an identity column:
https://oracle-base.com/articles/12c/identity-columns-in-oracle-12cr1
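As a minimal sketch of the 12c approach from that link, reusing the example table from above (no sequence or trigger needed; the column populates itself):
CREATE TABLE my_test_t (
  id     NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 7),
  string VARCHAR2(30)
);
INSERT INTO my_test_t (string) VALUES ('test'); -- id is assigned automatically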
Hope it helps,
R
First, some basics.
Java 6
OJDBC6
Oracle 10.2.0.4 (also the same result in 11g version)
I am seeing a SQL statement behave differently when executed from Java with the OJDBC6 client versus the tool SQL Gate, which probably uses a native/OCI driver. For some reason the optimizer chooses a hash join for the statement executed from Java, but not for the other.
Here is the table:
CREATE TABLE DPOWNERA.XXX_CHIP (
xxxCH_ID NUMBER(22) NOT NULL,
xxxCHP_ID NUMBER(22) NOT NULL,
xxxSP_ID NUMBER(22) NULL,
xxxCU_ID NUMBER(22) NULL,
xxxFT_ID NUMBER(22) NULL,
UEMTE_ID NUMBER(38) NULL,
xxxCH_CHIPID VARCHAR2(30) NOT NULL
)
The index:
ALTER TABLE DPOWNERA.XXX_CHIP ADD
(
CONSTRAINT IX_AK1_XXX_CHIPV2
UNIQUE ( XXXCH_CHIPID )
USING INDEX
TABLESPACE DP_DATA01
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 128 K
NEXT 128 K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
)
);
Here is the SQL I used:
SELECT *
FROM (SELECT m2.*,
rownum rnum
FROM (SELECT m_chip.xxxch_id,
m_chip.xxxch_chipid
FROM xxx_chip m_chip
ORDER BY m_chip.xxxch_chipid) m2
WHERE rownum < 101)
WHERE rnum >= 1;
And finally excerpts from the explain plan:
SQL Tool Query:
OPERATION OBJECT_NAME COST CARDINALITY CPU_COST
---------------- ------------------- ----- ----------- ----------
SELECT STATEMENT NULL 2 10 11740
VIEW NULL 2 10 11740
COUNT NULL NULL NULL NULL
VIEW NULL 2 10 11740
NESTED LOOPS NULL 2 10 11740
TABLE ACCESS XXX_CHIP 1 1000000 3319
INDEX IX_AK1_XXX_CHIPV2 1 10 2336
TABLE ACCESS XXX_CUSTOMER 1 1 842
INDEX IX_PK_XXX_CUSTOMER 1 1 105
Java Query, OJDBC Thin client:
OPERATION OBJECT_NAME COST CARDINALITY CPU_COST
---------------- ------------------- ----- ----------- ----------
SELECT STATEMENT NULL 15100 100 1538329415
VIEW NULL 15100 100 1538329415
COUNT NULL NULL NULL NULL
VIEW NULL 15100 1000000 1538329415
SORT NULL 15100 1000000 1538329415
HASH JOIN NULL 1639 1000000 424719850
VIEW index$_join$_004 3 3 2268646
HASH JOIN NULL NULL NULL NULL
INDEX IX_AK1_XXX_CUSTOMER 1 3 965
INDEX IX_PK_XXX_CUSTOMER 1 3 965
TABLE ACCESS xxx_CHIP 1614 1000000 320184788
So, I am at a loss as to why the hash join is chosen by the optimizer.
My guess is that the varchar2 is treated differently.
I found an answer, and it was simpler than I thought. It all has to do with the VARCHAR2 datatype of the index column. My database was set to language and country "en", "US", but locally I have another language and region. Therefore the optimizer rightly discarded the index, since it wasn't configured with the same language and country as the client.
So what I did to test it was to start my Eclipse with some extra -D parameters in my eclipse.ini file:
-Duser.language=en
-Duser.country=US
-Duser.region=US
Then, in the Data Source Explorer in Eclipse, I created a connection, ran my statement, and it worked like a charm.
So the lesson learned is to always make sure the client and the database are language-compatible. We will probably change to UTF-8 in the database so it is the same for every installation; otherwise you have to configure it per installation depending on country and language.
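One quick check is to compare the session NLS settings from both clients; if NLS_SORT or NLS_COMP is not BINARY for one of them, a plain index on a VARCHAR2 column can be ignored:
SELECT parameter, value
FROM nls_session_parameters
WHERE parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_SORT', 'NLS_COMP');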
Hope this will help someone. If the answer is unclear, please post a comment.