What is the disadvantage of using SHRINK SPACE for an Oracle Table?

I am going to develop a reporting database by querying the production database once using a stored procedure.
The stored procedure will then write the result into its own output tables.
Following is the schema for the output table:
Create table Output (
Customer_ID NUMBER(15) not null,
STD_HASH RAW(1000),
VALID_PERIOD_START TIMESTAMP(6) NOT NULL,
VALID_PERIOD_END TIMESTAMP(6) NOT NULL,
Address VARCHAR2(30 CHAR),
period for valid_period (valid_period_start, valid_period_end),
Constraint Output_PK Primary Key ( Customer_ID, valid_period_start, valid_period_end )
);
The stored procedure performs a lot of update and delete statements on the output tables, and those tables are very big; the largest one I have right now is 8 GB. I am thinking of altering those output tables with the "SHRINK SPACE" option at the end of the stored procedure to reclaim some space.
Following is the statement that I am going to apply:
Alter table OUTPUT1 ENABLE ROW MOVEMENT;  -- Temporarily enable row movement for the table
Alter table OUTPUT1 SHRINK SPACE;
Alter table OUTPUT1 DISABLE ROW MOVEMENT; -- Disable row movement again
Those output tables are temporal tables that use the valid period and the ID from production as the primary key.
However, I am very new to the SHRINK SPACE option. Can anyone tell me what the disadvantages of using it are?
The tables will not be updated from anywhere other than this stored procedure, which will be scheduled to run on a daily basis.
Thanks in advance!

Yes, shrink has some restrictions:
(1) It locks the table during the shrink operation.
(2) It changes the rowids of the rows in the table.
(3) It requires a tablespace with automatic segment space management (ASSM) enabled.
There are other alternatives for reclaiming the unused space:
(1) export/import
(2) dbms_redefinition
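If the lock in restriction (1) is a concern, the shrink can be split into two steps so that the exclusive lock is only needed briefly at the end. A minimal sketch, reusing the OUTPUT1 table from the question:
ALTER TABLE OUTPUT1 ENABLE ROW MOVEMENT;
ALTER TABLE OUTPUT1 SHRINK SPACE COMPACT;  -- defragments rows online; does not move the high water mark
ALTER TABLE OUTPUT1 SHRINK SPACE;          -- moves the high water mark; needs only a short exclusive lock
ALTER TABLE OUTPUT1 DISABLE ROW MOVEMENT;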

Related

Oracle Automatic LIST partition using virtual column not allowing REFERENCE partition on child table

I’ve made an attempt to create a partition on a test table using a virtual column. This approach works well for PARENT or standalone tables. However, I cannot create a REFERENCE partition on the CHILD table if the PARENT table is PARTITIONED using a virtual column. I get the following error when creating the CHILD table:
ORA-14659: Partitioning method of the parent table is not supported
Oracle Version details:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
Please find the script below.
--######################PARENT TABLE###########################################
DROP TABLE BILL_HEADER_TST;
CREATE TABLE BILL_HEADER_TST
(
BILL_HDR_SID NUMBER (30) NOT NULL,
TCN VARCHAR2 (21 BYTE) NOT NULL,
TCN_DATE DATE,
PROGRAM_CID NUMBER,
CONSTRAINT XPKBILL_HEADER_TST PRIMARY KEY (BILL_HDR_SID),
PARTN_KEY NUMBER
AS ( PROGRAM_CID
|| TO_NUMBER (TO_CHAR (TCN_DATE, 'YYYYMM')))
VIRTUAL
)
PARTITION BY LIST (PARTN_KEY) AUTOMATIC (PARTITION PDEFAULT VALUES (1201401));
------------------LOCAL INDEXES------------------------------------------------
CREATE INDEX XIE33BILL_HEADER_TST
ON BILL_HEADER_TST (TCN_DATE)
LOCAL;
CREATE INDEX XIE38BILL_HEADER_TST
ON BILL_HEADER_TST (PROGRAM_CID)
LOCAL;
---------------------INDEXES---------------------------------------------------
CREATE UNIQUE INDEX XAK1BILL_HEADER_TST
ON BILL_HEADER_TST (TCN)
LOGGING
NOPARALLEL;
--#############CHILD TABLE#####################################################
DROP TABLE BILL_LINE_TST;
CREATE TABLE BILL_LINE_TST
(
BILL_LINE_SID NUMBER (30) NOT NULL,
BILL_HDR_SID NUMBER (30) NOT NULL,
CLM_TYPE_CID NUMBER (3),
PROGRAM_CID NUMBER,
CONSTRAINT XPKBILL_LINE_TST PRIMARY KEY (BILL_LINE_SID),
CONSTRAINT XFK17_BILL_LINE_TST FOREIGN KEY
(BILL_HDR_SID)
REFERENCES BILL_HEADER_TST (BILL_HDR_SID) ON DELETE CASCADE
)
PARTITION BY REFERENCE (XFK17_BILL_LINE_TST)
ENABLE ROW MOVEMENT;
From the SQL Language manual
Automatic list partitioning is subject to the restrictions listed in "Restrictions on List Partitioning". The following additional restrictions apply:
An automatic list-partitioned table must have at least one partition when created. Because new partitions are automatically created for new, and unknown, partitioning key values, an automatic list-partitioned table cannot have a DEFAULT partition.
Automatic list partitioning is not supported for index-organized tables or external tables.
Automatic list partitioning is not supported for tables containing varray columns.
You cannot create a local domain index on an automatic list-partitioned table. You can create a global domain index on an automatic list-partitioned table.
An automatic list-partitioned table cannot be a child table or a parent table for reference partitioning.
Automatic list partitioning is not supported at the subpartition level.
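The second-to-last restriction is exactly what raises ORA-14659 here. One possible workaround (my assumption, not stated in the documentation quote): drop the AUTOMATIC keyword on the parent and manage the list partitions explicitly, since a plain list-partitioned parent is supported for reference partitioning. A sketch reusing the DDL from the question:
--###################### PARENT TABLE (no AUTOMATIC) ##########################
CREATE TABLE BILL_HEADER_TST
(
BILL_HDR_SID NUMBER (30) NOT NULL,
TCN VARCHAR2 (21 BYTE) NOT NULL,
TCN_DATE DATE,
PROGRAM_CID NUMBER,
CONSTRAINT XPKBILL_HEADER_TST PRIMARY KEY (BILL_HDR_SID),
PARTN_KEY NUMBER
AS ( PROGRAM_CID
|| TO_NUMBER (TO_CHAR (TCN_DATE, 'YYYYMM')))
VIRTUAL
)
PARTITION BY LIST (PARTN_KEY) (PARTITION PDEFAULT VALUES (1201401));
-- New PARTN_KEY values now need explicit ALTER TABLE ... ADD PARTITION maintenance,
-- but the CHILD table with PARTITION BY REFERENCE (XFK17_BILL_LINE_TST) is then accepted.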

Move LOB tablespace on partitioned tables

Consider following tables:
CREATE TABLE TAB_ONE
(
<irrelevant columns>
MESSAGE CLOB,
REC_DATE DATE,
<irrelevant columns>
) TABLESPACE DATA_TS;
and
CREATE TABLE TAB_TWO
(
<irrelevant columns>
RESPONSE CLOB,
PART_ID NUMBER,
<irrelevant columns>
CONSTRAINT "FK_01"
FOREIGN KEY ("PART_ID") REFERENCES <IRRELEVANT_TABLE> ("SEQ_ID")
) TABLESPACE DATA_TS PARTITION BY REFERENCE (FK_01) ENABLE ROW MOVEMENT;
now, the task is to move all the (C)LOBs from DATA_TS to newly allocated tablespace called LOB_TS.
For the first table, it is easy enough:
ALTER TABLE TAB_ONE MOVE LOB ("MESSAGE") store as (tablespace LOB_TS compress low);
For the other one, partitioning causes all the trouble. The aforementioned command does not work for obvious reasons, so I managed to find another:
ALTER TABLE TAB_TWO MOVE PARTITION SYS_P18485 LOB (RESPONSE) STORE AS ( TABLESPACE LOB_TS COMPRESS LOW );
ALTER TABLE TAB_TWO MOVE PARTITION SYS_P18299 LOB (RESPONSE) STORE AS ( TABLESPACE LOB_TS COMPRESS LOW );
(one for each partition the table TAB_TWO has)
These ALTER TABLE statements do not fail per se; SQL Developer proudly states "Table TAB_TWO altered."
But then I ran SELECT * FROM USER_LOBS WHERE TABLE_NAME = 'TAB_TWO' and found out that the CLOB stayed in the previous tablespace and did not move.
Of course, the idea of copying data via Create Table as Select, dropping the table, creating a new table and restoring the data occurred to me, but I would prefer a cleaner solution without the need of duplicating large amounts of data to different tables.
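One thing worth checking (this is an assumption on my part, not something confirmed in the question): for a partitioned table, USER_LOBS reports the table-level default LOB attributes, while the per-partition LOB segments are listed in USER_LOB_PARTITIONS, so the partition-level moves may in fact have succeeded even though USER_LOBS still shows DATA_TS. A sketch of the check, plus an ALTER to change the default used by future partitions:
SELECT partition_name, lob_partition_name, tablespace_name
FROM user_lob_partitions
WHERE table_name = 'TAB_TWO'
AND column_name = 'RESPONSE';
-- change the table-level default so newly created partitions also place RESPONSE in LOB_TS
ALTER TABLE TAB_TWO MODIFY DEFAULT ATTRIBUTES LOB (RESPONSE) (TABLESPACE LOB_TS);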

PL/SQL Creating a table with indexes

I have the following PL/SQL code:
DROP TABLE TAB_PARAM;
BEGIN
EXECUTE IMMEDIATE 'CREATE TABLE TAB_PARAM
(
TABLE_OWNER VARCHAR2(30) NOT NULL,
TABLE_NAME VARCHAR2(30) NOT NULL,
COLUMN_NAME VARCHAR2(30) NOT NULL,
PATTERN VARCHAR2(1024),
TYPE_METHODE VARCHAR2(30) NOT NULL,
SEPARATEUR VARCHAR2(20),
ID VARCHAR2(30),
CONSTRAINT PK_TAB_PARAM PRIMARY KEY (TABLE_OWNER,TABLE_NAME,COLUMN_NAME) USING INDEX TABLESPACE IND_PARC_256M NOLOGGING
)
TABLESPACE TAB_PARC_256M NOLOGGING NOCACHE NOMONITORING NOPARALLEL';
commit;
END;
/
I don't understand the part :
CONSTRAINT PK_TAB_PARAM PRIMARY KEY (TABLE_OWNER,TABLE_NAME,COLUMN_NAME) USING INDEX TABLESPACE IND_PARC_256M NOLOGGING
Nor the part:
TABLESPACE TAB_PARC_256M NOLOGGING NOCACHE NOMONITORING NOPARALLEL';
I know that it is setting the ID as primary key of TAB_PARAM but then I do not get the index part.
Can anyone help me understand this code please?
Sometimes you create a table in a disposable way, for example when you need to load data from a file via SQL*Loader into a table and afterwards simply select all records from it. That kind of operation does not need indexes or a primary key. Even so, it is generally good practice to create a primary key when you create a table, and Oracle will create the supporting index for you. You can create more indexes whenever you want after the table has been created. Now suppose you want to update the table using a WHERE condition on a column that is not part of the primary key; it may be a better decision to create an index on that column. In addition, when you create indexes or tables you can associate them with tablespaces (created beforehand), and it can be good practice to keep separate tablespaces for tables and indexes. All of these operations are DDL statements. In Oracle you can check data files, tables and indexes with these queries:
select * from dba_data_files;
select * from dba_tables;
select * from dba_indexes;
For a primary key Oracle implicitly creates a unique index in the table's tablespace.
The USING INDEX TABLESPACE part lets us choose the tablespace for this implicitly created index.
NOLOGGING - data is modified with minimal redo logging (just enough to mark new extents invalid and to record dictionary changes).
NOCACHE - Oracle does not keep the blocks in the buffer cache.
NOPARALLEL - parallel execution is not allowed on this table (the default).
NOMONITORING - disables modification statistics collection; it is now deprecated, as another mechanism collects those statistics.
Edit: from the Oracle documentation
Oracle Database uses an existing index if it contains a unique set of values before enforcing the primary key constraint. The existing
index can be defined as unique or nonunique. When a DML operation is
performed, the primary key constraint is enforced using this existing
index.
If there is already an index covering the PK columns, Oracle will use it; otherwise Oracle Database generates a unique index.
So you can create a primary key in Oracle without creating a new index if an index on those columns already exists; specifying the index is a performance consideration. The purpose of an index on a table is to 'read' the data faster.
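A small sketch of that behaviour (the table and index names here are made up for illustration): a nonunique index is created first, and the primary key constraint is then told to reuse it instead of creating its own.
CREATE TABLE demo_param (owner VARCHAR2(30), name VARCHAR2(30));
CREATE INDEX demo_param_ix ON demo_param (owner, name);  -- nonunique index, created first
ALTER TABLE demo_param
ADD CONSTRAINT demo_param_pk PRIMARY KEY (owner, name)
USING INDEX demo_param_ix;                                -- the constraint reuses the existing index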
From the Oracle documentation:
An index is a schema object that contains an entry for each value that appears in the indexed column(s) of the table or cluster and provides direct, fast access to rows.
As for the TABLESPACE, it is where the database object lives, so when you create a table you specify in which tablespace you want it to exist. Read more in the Oracle documentation.
As for NOLOGGING NOCACHE NOMONITORING, they keep the table's changes from being fully logged in the redo logs and its blocks from being cached; these are also performance-related.

Move an Oracle table to being Index Organised

I have an Oracle table in a live production environment and the table is over half a gig in size. Is it possible to change this normal Oracle table from being heap organised to index organised or is this only achievable by moving the data from this table to another new table which is index organised? Either way, I would be grateful if you could you please list the steps involved in this procedure.
There is no way to alter a table to make it an index-organized table. Instead you can redefine the table (using DBMS_REDEFINITION) or create a new table using CTAS.
Example:
create table t2 (
id number, first_name varchar2(20),
constraint pk_id primary key (id)
)
organization index
as select * from t1;
I have never used DBMS_REDEFINITION, but with CTAS creating the table is not the only step if this is production.
List all indexes, constraints, foreign keys and so on based on the system views.
Prepare the CREATE INDEX, constraint and foreign key ALTER statements; prepare the list of triggers and procedures that depend on the table.
Create the table as select (consider locking the source table before this step if you can).
Create all indexes and constraints with the statements prepared in step 2.
Swap the table names and swap the foreign keys (this step may cause errors if an insert hits the foreign keys at that moment; if you expect that, lock the table and the tables referencing it by foreign key).
Compile the dependent objects from step 2 (if you locked tables, unlock them here).
(If you have not locked in step 3) Insert into the table: select * from new minus select * from old; or, if you have a timestamp of when rows were inserted, just insert the new rows.
I hope the list is complete.
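If an online change is preferred, the DBMS_REDEFINITION route mentioned above avoids most of this manual bookkeeping. A rough sketch under assumptions (schema SCOTT, original heap table T1 with a primary key on ID, and a hand-built interim IOT named T1_IOT); check the package documentation before trying this on a production table:
-- interim table with the target index-organized structure
CREATE TABLE t1_iot (
id number,
first_name varchar2(20),
constraint pk_t1_iot primary key (id)
) ORGANIZATION INDEX;
DECLARE
num_errors PLS_INTEGER;
BEGIN
-- verify the table can be redefined online using its primary key
DBMS_REDEFINITION.can_redef_table('SCOTT', 'T1', DBMS_REDEFINITION.cons_use_pk);
-- start copying rows into the interim table
DBMS_REDEFINITION.start_redef_table('SCOTT', 'T1', 'T1_IOT',
options_flag => DBMS_REDEFINITION.cons_use_pk);
-- copy triggers, grants and constraints; indexes are skipped because the IOT already has its PK index
DBMS_REDEFINITION.copy_table_dependents('SCOTT', 'T1', 'T1_IOT',
copy_indexes => 0,
num_errors => num_errors);
-- swap the definitions; T1 is now index-organized
DBMS_REDEFINITION.finish_redef_table('SCOTT', 'T1', 'T1_IOT');
END;
/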

Oracle 11G - Performance effect of indexing at insert

Objective
Verify whether it is true that inserting records without a PK/index and then creating them later is faster than inserting with the PK/index in place.
Note
The point here is not that index maintenance takes more time (that is obvious), but whether the total cost of (insert without index + create index) is higher than (insert with index), because I was taught to insert without the index and create it later, as that was supposed to be faster.
Environment
Windows 7 64 bit on DELL Latitude core i7 2.8GHz 8G memory & SSD HDD
Oracle 11G R2 64 bit
Background
I was taught that inserting records without the PK/index and creating them after the insert would be faster than inserting with the PK/index.
However, inserting 1 million records with the PK/index in place was actually faster than creating the PK/index afterwards: approximately 4.5 seconds vs 6 seconds in the experiments below. Increasing the row count to 3 million (999000 -> 2999000) gave the same result.
Conditions
The table DDL is below. One bigfile tablespace is used for both data and index.
(I also tested a separate index tablespace, with the same result and inferior overall performance.)
The buffer cache and shared pool are flushed before each run.
Each experiment was run 3 times and the results were confirmed to be similar.
SQL to flush:
ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;
Question
Is it actually true that "insert without PK/Index + PK/Index creation later" is faster than "insert with PK/Index"?
Did I make mistakes or missed some conditions in the experiment?
Insert records with PK/Index
TRUNCATE TABLE TBL2;
ALTER TABLE TBL2 DROP CONSTRAINT PK_TBL2_COL1 CASCADE;
ALTER TABLE TBL2 ADD CONSTRAINT PK_TBL2_COL1 PRIMARY KEY(COL1) ;
SET timing ON
INSERT INTO TBL2
SELECT i+j, rpad(TO_CHAR(i+j),100,'A')
FROM (
WITH DATA2(j) AS (
SELECT 0 j FROM DUAL
UNION ALL
SELECT j+1000 FROM DATA2 WHERE j < 999000
)
SELECT j FROM DATA2
),
(
WITH DATA1(i) AS (
SELECT 1 i FROM DUAL
UNION ALL
SELECT i+1 FROM DATA1 WHERE i < 1000
)
SELECT i FROM DATA1
);
commit;
1,000,000 rows inserted.
Elapsed: 00:00:04.328 <----- Insert records with PK/Index
Insert records without PK/Index and create them after
TRUNCATE TABLE TBL2;
ALTER TABLE &TBL_NAME DROP CONSTRAINT PK_TBL2_COL1 CASCADE;
SET TIMING ON
INSERT INTO TBL2
SELECT i+j, rpad(TO_CHAR(i+j),100,'A')
FROM (
WITH DATA2(j) AS (
SELECT 0 j FROM DUAL
UNION ALL
SELECT j+1000 FROM DATA2 WHERE j < 999000
)
SELECT j FROM DATA2
),
(
WITH DATA1(i) AS (
SELECT 1 i FROM DUAL
UNION ALL
SELECT i+1 FROM DATA1 WHERE i < 1000
)
SELECT i FROM DATA1
);
commit;
ALTER TABLE TBL2 ADD CONSTRAINT PK_TBL2_COL1 PRIMARY KEY(COL1) ;
1,000,000 rows inserted.
Elapsed: 00:00:03.454 <---- Insert without PK/Index
table TBL2 altered.
Elapsed: 00:00:02.544 <---- Create PK/Index
Table DDL
CREATE TABLE TBL2 (
"COL1" NUMBER,
"COL2" VARCHAR2(100 BYTE),
CONSTRAINT "PK_TBL2_COL1" PRIMARY KEY ("COL1")
) TABLESPACE "TBS_BIG" ;
The current test case is probably good enough for you to overrule the "best practices". There are too many variables involved to make a blanket statement that "it's always best to leave the indexes enabled". But you're probably close enough to say it's true for your environment.
Below are some considerations for the test case. I've made this a community wiki in the hopes that others will add to the list.
Direct-path inserts. Direct-path writes use different mechanisms and may work completely differently. Direct-path inserts can often be significantly faster than regular inserts, although they have some complicated restrictions (for example, triggers must be disabled) and disadvantages (the data is not immediately backed up). One particular way this affects the scenario is that NOLOGGING for indexes only applies during index creation. So even if a direct-path insert is used, an enabled index will always generate REDO and UNDO. (See the sketch after this list.)
Parallelism. Large insert statements often benefit from parallel DML. Usually it's not worth worrying about the performance of bulk loads until it takes more than several seconds, which is when parallelism starts to be useful.
Bitmap indexes are not meant for large DML. Inserts or updates to a table with a bitmap index can lock the whole table and lead to disastrous performance. It might be helpful to limit the test case to b-tree indexes.
Add alter system switch logfile;? Log file switches can sometimes cause performance issues. The tests would be somewhat more consistent if they all started with empty logfiles.
Move data generation logic into a separate step. Hierarchical queries are useful for generating data but they can have their own performance issues. It might be better to create an intermediate table to hold the results, and then only test inserting from the intermediate table into the final table.
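For the direct-path point above, the test insert could be repeated with the APPEND hint. A sketch based on the question's TBL2 (timings will obviously depend on your system):
-- direct-path insert: writes above the high water mark, bypasses the buffer cache,
-- and combined with NOLOGGING generates minimal redo (but note the backup caveat above)
INSERT /*+ APPEND */ INTO TBL2
SELECT LEVEL, RPAD(TO_CHAR(LEVEL), 100, 'A')
FROM dual
CONNECT BY LEVEL <= 1000000;
COMMIT;  -- required before the same session can query TBL2 after a direct-path insert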
It's true that it is faster to modify a table if you do not also have to modify one or more indexes and possibly perform constraint checking as well, but it is also largely irrelevant if you then have to add those indexes. You have to consider the complete change to the system that you wish to effect, not just a single part of it.
Obviously if you are adding a single row into a table that already contains millions of rows then it would be foolish to drop and rebuild indexes.
However, even if you have a completely empty table into which you are going to add several million rows it can still be slower to defer the indexing until afterwards.
The reason for this is that such an insert is best performed with the direct path mechanism, and when you use direct path inserts into a table with indexes on it, temporary segments are built that contain the data required to build the indexes (data plus rowids). If those temporary segments are much smaller than the table you have just loaded then they will also be faster to scan and to build the indexes from.
The alternative, if you have five indexes on the table, is to incur five full table scans after you have loaded it in order to build the indexes.
Obviously there are huge grey areas involved here, but well done for:
Questioning authority and general rules of thumb, and
Running actual tests to determine the facts in your own case.
Edit:
Further considerations: suppose you run a backup while the indexes are dropped. Following an emergency restore, you then have to have a script that verifies that all indexes are in place, while the business is breathing down your neck to get the system back up.
Also, if you are absolutely determined not to maintain indexes during a bulk load, do not drop the indexes; mark them unusable instead (or disable the constraint, for an index that backs one). This preserves the metadata for the indexes' existence and definition, and allows a simpler rebuild process. Just be careful that you do not accidentally bring the indexes back into use by truncating the table, as that makes unusable indexes usable again.
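A sketch of that approach (the index name is illustrative, and a unique index that backs an enabled constraint cannot be skipped this way):
ALTER INDEX tbl2_col2_ix UNUSABLE;               -- stop maintaining the index during the load
ALTER SESSION SET skip_unusable_indexes = TRUE;  -- the default since 10g
-- ... bulk load here ...
ALTER INDEX tbl2_col2_ix REBUILD;                -- rebuild once, from a single table scan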
Oracle has to do more work while inserting data into a table that has an index. In general, inserting without an index is faster than inserting with one.
Think of it this way:
Inserting rows into a regular heap-organized table with no particular row order is simple: find a table block with enough free space and put the rows in it, in no particular order.
But when there are indexes on the table, there is much more work to do. Adding a new entry to the index is not that simple: Oracle has to traverse the index blocks to find the specific leaf block, because the new entry cannot go into just any block. Once the correct leaf block is found, it checks for enough free space and then makes the new entry. If there is not enough space, it has to split the block and distribute the entries between the old and new blocks. All of this work is overhead and consumes more time overall.
Let's see a small example,
Database version :
SQL> SELECT banner FROM v$version where ROWNUM =1;
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
OS : Windows 7, 8GB RAM
With Index
SQL> CREATE TABLE t(A NUMBER, CONSTRAINT PK_a PRIMARY KEY (A));
Table created.
SQL> SET timing ON
SQL> INSERT INTO t SELECT LEVEL FROM dual CONNECT BY LEVEL <=1000000;
1000000 rows created.
Elapsed: 00:00:02.26
So, it took 00:00:02.26. Index details:
SQL> column index_name format a10
SQL> column table_name format a10
SQL> column uniqueness format a10
SQL> SELECT index_name, table_name, uniqueness FROM user_indexes WHERE table_name = 'T';
INDEX_NAME TABLE_NAME UNIQUENESS
---------- ---------- ----------
PK_A T UNIQUE
Without Index
SQL> DROP TABLE t PURGE;
Table dropped.
SQL> CREATE TABLE t(A NUMBER);
Table created.
SQL> SET timing ON
SQL> INSERT INTO t SELECT LEVEL FROM dual CONNECT BY LEVEL <=1000000;
1000000 rows created.
Elapsed: 00:00:00.60
So, it took only 00:00:00.60, which is faster compared to 00:00:02.26.
