Table compression requirements in Oracle

I have a tablespace with a total size of 600 MB (this is my test machine) and 110 MB of free space. I am trying to compress a table that is 120 MB in size. When I execute the command:
alter table DOWNLOADSESSIONLOG move compress;
I get the following error:
(Error): ORA-01659: unable to allocate MINEXTENTS beyond 16 in tablespace WAP.
I have searched and everyone says to increase the tablespace, but I want to know how much space I actually need, because I don't want to add 200-300 MB of extra space unnecessarily. I want to test this on my test machine and then do it on the live system, which has a 60 GB tablespace containing one 47 GB table that I want to compress. Is there a calculation for how much space to add? Otherwise I will have to give the live system a lot of space unnecessarily, which is difficult.
I would appreciate any ideas.

I would create a new throw-away tablespace in your test environment that has the same characteristics as your target tablespace in the production environment and then move/compress your table into that. This will give you the best estimate of how much additional space will be necessary. You can move the table back to the original tablespace and drop the new tablespace once you have this number.
Remember that you'll need at least (original size) + (compressed size) available in the tablespace for the duration of the move.
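As a minimal sketch of that approach (the datafile path, sizes and scratch tablespace name are placeholders for your environment; WAP is the original tablespace from the error message):
CREATE TABLESPACE scratch_ts
  DATAFILE '/u01/oradata/testdb/scratch01.dbf' SIZE 200M AUTOEXTEND ON NEXT 50M;

ALTER TABLE downloadsessionlog MOVE TABLESPACE scratch_ts COMPRESS;

-- this is the compressed size you are after
SELECT bytes/1024/1024 AS mb
FROM   user_segments
WHERE  segment_name = 'DOWNLOADSESSIONLOG';

-- put the table back and clean up; remember to rebuild any indexes,
-- which go UNUSABLE after a move
ALTER TABLE downloadsessionlog MOVE TABLESPACE wap;
DROP TABLESPACE scratch_ts INCLUDING CONTENTS AND DATAFILES;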

The key thing about compression is that it works by removing duplicate values in each block. So your test table needs to have a representative spread of data.
Two extreme tables ...
SQL> create table totally_similar
2 ( txt varchar2(1000) )
3 /
Table created.
SQL> insert into totally_similar
2 select rpad('a', 1000, 'a')
3 from dual connect by level <= 1000
4 /
1000 rows created.
SQL> create table totally_different
2 ( txt varchar2(1000) )
3 /
Table created.
SQL>
SQL> insert into totally_different
2 select dbms_random.string('A',1000)
3 from dual connect by level <= 1000
4 /
1000 rows created.
SQL>
Before we compress, let's just check the table sizes ...
SQL> select segment_name
2 , sum(bytes)
3 , sum(blocks)
4 from user_segments
5 where segment_name in ('TOTALLY_SIMILAR', 'TOTALLY_DIFFERENT')
6 group by segment_name
7 /
SEGMENT_NAME         SUM(BYTES) SUM(BLOCKS)
-------------------- ---------- -----------
TOTALLY_SIMILAR         2097152         256
TOTALLY_DIFFERENT       2097152         256
SQL>
If we compress them we get two radically different results ...
SQL> alter table totally_similar move compress
2 /
Table altered.
SQL> alter table totally_different move compress
2 /
Table altered.
SQL> select segment_name
2 , sum(bytes)
3 , sum(blocks)
4 from user_segments
5 where segment_name in ('TOTALLY_SIMILAR', 'TOTALLY_DIFFERENT')
6 group by segment_name
7 /
SEGMENT_NAME         SUM(BYTES) SUM(BLOCKS)
-------------------- ---------- -----------
TOTALLY_SIMILAR           65536           8
TOTALLY_DIFFERENT       2097152         256
SQL>
Note that TOTALLY_SIMILAR is eight blocks big, even though every single row was the same. So you need to understand the distribution of your data before you can calculate the compression ratio. The Oracle documentation has this to say:
The compression factor that can be achieved depends on the cardinality of a specific column or column pairs (representing the likelihood of column value repetitions) and on the average row length of those columns. Oracle table compression not only compresses duplicate values of a single column but tries to use multi-column value pairs whenever possible.
Its advice for estimating the return is that a sample of 1000 blocks of the target table should give a good enough prediction (although more blocks give a more accurate forecast). It is hard to tell without knowing your block size, but it seems likely that your test table is much bigger than it needs to be. The important thing is whether the data in the test table is representative of your target table. So, did you create it using an export or a sample from the target table, e.g.
create table test_table as select * from big_table sample block (1)
/
You will need to adjust the percentage in the SAMPLE() clause to ensure you get at least 1000 blocks.
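A rough way to pick that percentage, assuming the block count of the big table is already in the dictionary (the table names here are just the hypothetical ones from the example above):
select blocks,
       round(100 * 1000 / blocks, 2) as pct_for_1000_blocks
from   user_segments
where  segment_name = 'BIG_TABLE';

-- and verify the sample afterwards
select blocks from user_segments where segment_name = 'TEST_TABLE';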
Edit:
In most cases compression should actually speed up data retrieval but YMMV. The cost of compression is paid when inserting or updating the data. How much that tax is and whether you can do anything to avoid it rather depends on the profile of your table.
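If you want to gauge that load-time cost in your own environment, one hedged way is to time a direct-path load into a compressed and an uncompressed copy of the table. Basic table compression only compresses blocks written by direct-path operations, so conventional inserts would not show it; the table names below are hypothetical:
create table target_nocomp as select * from big_table where 1 = 0;
create table target_comp compress as select * from big_table where 1 = 0;

set timing on
insert /*+ append */ into target_nocomp select * from big_table;
commit;
insert /*+ append */ into target_comp select * from big_table;
commit;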

Related

Why is space not returned to the tablespace after TRUNCATE TABLE when I check the object size?

I always use TRUNCATE TABLE to free up space, and the space has always been returned as free space: when I check the free space in the Oracle tablespace afterwards, I see it increase. I am using the TOAD program to run TRUNCATE TABLE.
But this time is different. I truncated 2 tables but the space was not freed, and when I checked the table size after the truncate it is still the same size as before, around 400 MB.
Why is the space not freed, and how can I free it when the table has already been truncated and has no data in it?
I also moved the table to another tablespace after the truncate, but it moved with the same size:
ALTER TABLE CAS_NOSHOW MOVE TABLESPACE TRNG;
How can I free up the space? Please help.
I am using an Oracle 10g database.
If you're looking to recover space above and below the high water mark you can shrink the table.
To do this you need to have row movement enabled.
create table t as
select rownum x, lpad('x', 500, 'x') xx
from dual connect by level <= 10000;

-- size before the delete
select bytes from user_segments
where segment_name = 'T';

delete from t where x <= 9900;
commit;

-- size after the delete: still the same, the high water mark has not moved
select bytes from user_segments
where segment_name = 'T';

alter table t enable row movement;

alter table t shrink space cascade;

-- size after the shrink
select bytes from user_segments
where segment_name = 'T';
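If the table is busy, a two-phase variant worth knowing is SHRINK SPACE COMPACT, which relocates the rows but leaves the high water mark alone, followed later by a plain SHRINK SPACE to finish quickly. CASCADE, as used above, also shrinks the dependent indexes.
alter table t shrink space compact;
alter table t shrink space;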

What is the data length of CLOB in oracle?

Could you please let me know what the data length is for the second column (column_id 2), of CLOB data type, in the Employee table? I have seen some blogs saying the maximum data length is (4 GB - 1) * (database block size).
I'm new to data design.
Table : Employee
Column_Name    Data_Type  Nullable  Column_Id
-----------    ---------  --------  ---------
Emp_ID         NUMBER     No        1
Emp_details    CLOB       No        2
Please help me.
To get the CLOB size for a given column in a given row, use the DBMS_LOB.GETLENGTH function:
select dbms_lob.getlength(emp_details) from employee where emp_id = 1;
To get the space a given CLOB column actually occupies in the tablespace, you need to identify both segments implementing the LOB (the LOB segment and its index).
You can compare both sizes with the following query:
select v1.col_size, v2.seg_size from
(select sum(dbms_lob.getlength(emp_details)) as col_size from employee) v1,
(select sum(bytes) as seg_size from user_segments where segment_name in
(
(select segment_name from user_lobs where table_name='EMPLOYEE' and column_name='EMP_DETAILS')
union
(select index_name from user_lobs where table_name='EMPLOYEE' and column_name='EMP_DETAILS')
)
) v2
;
LOBs are not stored in the table itself but outside it, in a dedicated structure called a LOB segment, together with a LOB index. As #pifor explains, you can inspect those structures in the dictionary view user_lobs.
The LOB segment uses blocks of usually 8192 bytes (check the tablespace in user_lobs), so the minimum space allocated for a single LOB value is 8K. For 10,000 bytes you need two 8K blocks, and so on.
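A query along these lines (table and column names taken from the question) shows the LOB's physical settings in the dictionary: its segment, its index, the chunk size and whether in-row storage is enabled:
select column_name, segment_name, index_name, tablespace_name, chunk, in_row
from   user_lobs
where  table_name = 'EMPLOYEE'
and    column_name = 'EMP_DETAILS';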
Please note that if your database uses a Unicode character set (as most modern Oracle databases do), the size of a CLOB is roughly 2x what you might expect, because CLOBs are stored in a 16-bit encoding.
This gets a bit better if you compress the LOBs, but your Oracle license needs to cover "Advanced Compression".
For very small LOBs (less than roughly 4000 bytes), you can avoid the 8K overhead and store them in the table block along with the other columns (ENABLE STORAGE IN ROW), as sketched below.
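As a sketch only (the names come from the question; SECUREFILE and COMPRESS assume Oracle 11g+ and, for COMPRESS, the Advanced Compression option), those storage options would look something like this at table creation:
create table employee (
  emp_id      number primary key,
  emp_details clob
)
lob (emp_details) store as securefile (
  enable storage in row   -- small values stay in the table block
  compress medium         -- requires the Advanced Compression option
);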

Oracle - how to see how many blocks have been used in a table

I have very limited experience with Oracle, and I imagine I am after a rather simple query. I have a table which contains 1 million rows. I'm trying to prove that compressing the data uses less space, but I'm not sure how to do this. Based on the table creation below, could someone please show me what I need to write to see the blocks used before and after compression?
CREATE TABLE OrderTableCompressed(OrderID, StaffID, CustomerID, TotalOrderValue)
as (select level, ceil(dbms_random.value(0, 1000)),
ceil(dbms_random.value(0,10000)),
round(dbms_random.value(0,10000),2)
from dual
connect by level <= 1000000);
ALTER TABLE OrderTableCompressed ADD CONSTRAINT OrderID_PKC PRIMARY KEY (OrderID);
--QUERY HERE THAT SHOWS BLOCKS USED/TIME TAKEN
SELECT COUNT(ORDERID) FROM OrderTableCompressed;
ALTER TABLE OrderTableCompressed COMPRESS;
--QUERY HERE THAT SHOWS BLOCKS USED/TIME TAKEN WHEN COMPRESSED
SELECT COUNT(ORDERID) FROM OrderTableCompressed;
I know how the compression works etc.; it's just about applying the code to prove my theory. Thanks for any help.
--QUERY HERE THAT SHOWS BLOCKS USED
SELECT blocks, bytes/1024/1024 as MB
FROM user_segments
where segment_name = 'ORDERTABLECOMPRESSED';
Now compress the table. (Note the MOVE: without it you only change the table's compression attribute, so the existing data stays uncompressed and only subsequent direct-path inserts will create compressed blocks.)
ALTER TABLE OrderTableCompressed MOVE COMPRESS;
Verify blocks:
--QUERY HERE THAT SHOWS BLOCKS USED TAKEN WHEN COMPRESSED
SELECT blocks, bytes/1024/1024 as MB
FROM user_segments
where segment_name = 'ORDERTABLECOMPRESSED';
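USER_SEGMENTS reports allocated space. If you also want the block count the rows actually occupy, one option is to gather statistics after each step and compare the two figures, e.g.:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERTABLECOMPRESSED');

SELECT t.blocks          AS used_blocks,
       s.blocks          AS allocated_blocks,
       s.bytes/1024/1024 AS allocated_mb
FROM   user_tables   t
JOIN   user_segments s ON s.segment_name = t.table_name
WHERE  t.table_name = 'ORDERTABLECOMPRESSED';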

Oracle 11G - Performance effect of indexing at insert

Objective
Verify whether it is true that inserting records without a PK/index and creating them later is faster than inserting with the PK/index in place.
Note
The point here is not that indexing takes more time (that is obvious), but whether the total cost of (insert without index + create index) is higher than (insert with index), because I was taught to insert without the index and create it later, as that should be faster.
Environment
Windows 7 64 bit on DELL Latitude core i7 2.8GHz 8G memory & SSD HDD
Oracle 11G R2 64 bit
Background
I was taught that inserting records without a PK/index and creating them after the insert would be faster than inserting with the PK/index in place.
However, inserting 1 million records with the PK/index in place was actually faster than creating the PK/index afterwards: approximately 4.5 seconds vs 6 seconds in the experiments below. Increasing the row count to 3 million (999000 -> 2999000) gave the same result.
Conditions
The table DDL is below. One bigfile tablespace is used for both data and index.
(A separate index tablespace was also tested, with the same result and inferior overall performance.)
The buffer/shared pool was flushed before each run.
Each experiment was run 3 times and the results were confirmed to be similar.
SQL to flush:
ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;
Question
Is it actually true that "insert without PK/index + create PK/index later" is faster than "insert with PK/index"?
Did I make a mistake or miss some conditions in the experiment?
Insert records with PK/Index
TRUNCATE TABLE TBL2;
ALTER TABLE TBL2 DROP CONSTRAINT PK_TBL2_COL1 CASCADE;
ALTER TABLE TBL2 ADD CONSTRAINT PK_TBL2_COL1 PRIMARY KEY(COL1) ;
SET timing ON
INSERT INTO TBL2
SELECT i+j, rpad(TO_CHAR(i+j),100,'A')
FROM (
WITH DATA2(j) AS (
SELECT 0 j FROM DUAL
UNION ALL
SELECT j+1000 FROM DATA2 WHERE j < 999000
)
SELECT j FROM DATA2
),
(
WITH DATA1(i) AS (
SELECT 1 i FROM DUAL
UNION ALL
SELECT i+1 FROM DATA1 WHERE i < 1000
)
SELECT i FROM DATA1
);
commit;
1,000,000 rows inserted.
Elapsed: 00:00:04.328 <----- Insert records with PK/Index
Insert records without PK/Index and create them after
TRUNCATE TABLE TBL2;
ALTER TABLE TBL2 DROP CONSTRAINT PK_TBL2_COL1 CASCADE;
SET TIMING ON
INSERT INTO TBL2
SELECT i+j, rpad(TO_CHAR(i+j),100,'A')
FROM (
WITH DATA2(j) AS (
SELECT 0 j FROM DUAL
UNION ALL
SELECT j+1000 FROM DATA2 WHERE j < 999000
)
SELECT j FROM DATA2
),
(
WITH DATA1(i) AS (
SELECT 1 i FROM DUAL
UNION ALL
SELECT i+1 FROM DATA1 WHERE i < 1000
)
SELECT i FROM DATA1
);
commit;
ALTER TABLE TBL2 ADD CONSTRAINT PK_TBL2_COL1 PRIMARY KEY(COL1) ;
1,000,000 rows inserted.
Elapsed: 00:00:03.454 <---- Insert without PK/Index
table TBL2 altered.
Elapsed: 00:00:02.544 <---- Create PK/Index
Table DDL
CREATE TABLE TBL2 (
"COL1" NUMBER,
"COL2" VARCHAR2(100 BYTE),
CONSTRAINT "PK_TBL2_COL1" PRIMARY KEY ("COL1")
) TABLESPACE "TBS_BIG" ;
The current test case is probably good enough for you to overrule the "best practices". There are too many variables involved to make a blanket statement that "it's always best to leave the indexes enabled". But you're probably close enough to say it's true for your environment.
Below are some considerations for the test case. I've made this a community wiki in the hopes that others will add to the list.
Direct-path inserts. Direct-path writes use different mechanisms and may work completely differently. Direct-path inserts can often be significantly faster than regular inserts, although they have some complicated restrictions (for example, triggers must be disabled) and disadvantages (the data is not immediately backed-up). One particular way it affects this scenario is that NOLOGGING for indexes only applies during index creation. So even if a direct-path insert is used, an enabled index will always generate REDO and UNDO.
Parallelism. Large insert statements often benefit from parallel DML. Usually it's not worth worrying about the performance of bulk loads until it takes more than several seconds, which is when parallelism starts to be useful.
Bitmap indexes are not meant for large DML. Inserts or updates to a table with a bitmap index can lock the whole table and lead to disastrous performance. It might be helpful to limit the test case to b-tree indexes.
Add alter system switch logfile;? Log file switches can sometimes cause performance issues. The tests would be somewhat more consistent if they all started with empty logfiles.
Move the data generation logic into a separate step. Hierarchical queries are useful for generating data but they can have their own performance issues. It might be better to create an intermediate table to hold the results, and then only test inserting the intermediate table's rows into the final table (see the sketch after this list).
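A hedged sketch of that last point, reusing the generator from the question to build a staging table once so that only the final insert is timed (the staging table name is made up):
create table tbl2_src as
select i + j as col1, rpad(to_char(i + j), 100, 'A') as col2
from   (select (level - 1) * 1000 as j from dual connect by level <= 1000),
       (select level as i from dual connect by level <= 1000);

set timing on
insert into tbl2 select col1, col2 from tbl2_src;
commit;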
It's true that it is faster to modify a table if you do not also have to modify one or more indexes and possibly perform constraint checking as well, but it is also largely irrelevant if you then have to add those indexes. You have to consider the complete change to the system that you wish to effect, not just a single part of it.
Obviously if you are adding a single row into a table that already contains millions of rows then it would be foolish to drop and rebuild indexes.
However, even if you have a completely empty table into which you are going to add several million rows it can still be slower to defer the indexing until afterwards.
The reason for this is that such an insert is best performed with the direct path mechanism, and when you use direct path inserts into a table with indexes on it, temporary segments are built that contain the data required to build the indexes (data plus rowids). If those temporary segments are much smaller than the table you have just loaded then they will also be faster to scan and to build the indexes from.
The alternative, if you have five indexes on the table, is to incur five full table scans after you have loaded it in order to build the indexes.
Obviously there are huge grey areas involved here, but well done for:
Questioning authority and general rules of thumb, and
Running actual tests to determine the facts in your own case.
Edit:
Further considerations: suppose a backup runs while the indexes are dropped. Following an emergency restore, you then have to have a script that verifies that all indexes are back in place, while the business is breathing down your neck to get the system back up.
Also, if you are absolutely determined not to maintain indexes during a bulk load, do not drop the indexes -- disable them instead (for example by marking them UNUSABLE). This preserves the metadata for the indexes' existence and definition, and allows a simpler rebuild process. Just be careful that you do not accidentally re-enable the indexes by truncating the table, as this will render disabled indexes usable again.
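For a non-unique index that might look like the sketch below; the index name and the load are made up for illustration, and a unique or primary key index cannot be skipped this way, so its constraint would have to be handled separately:
alter index tbl2_col2_idx unusable;
alter session set skip_unusable_indexes = true;  -- the default from 10g onwards

insert /*+ append */ into tbl2
select level, rpad(to_char(level), 100, 'A')
from dual connect by level <= 1000000;
commit;

alter index tbl2_col2_idx rebuild;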
Oracle has to do more work while inserting data into a table that has an index. In general, inserting without an index is faster than inserting with one.
Think of it this way:
Inserting rows into a regular heap-organized table with no particular row order is simple: find a table block with enough free space and put the rows wherever they fit.
But when there are indexes on the table, there is much more work to do. Adding a new entry to the index is not that simple: Oracle has to traverse the index blocks to find the specific leaf block, because the new entry cannot go into just any block. Once the correct leaf block is found, it checks for enough free space and then makes the new entry. If there is not enough space, it has to split the block and distribute the entries between the old and the new block. All of this work is overhead and consumes more time overall.
Let's see a small example,
Database version :
SQL> SELECT banner FROM v$version where ROWNUM =1;
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
OS : Windows 7, 8GB RAM
With Index
SQL> CREATE TABLE t(A NUMBER, CONSTRAINT PK_a PRIMARY KEY (A));
Table created.
SQL> SET timing ON
SQL> INSERT INTO t SELECT LEVEL FROM dual CONNECT BY LEVEL <=1000000;
1000000 rows created.
Elapsed: 00:00:02.26
So, it took 00:00:02.26. Index details:
SQL> column index_name format a10
SQL> column table_name format a10
SQL> column uniqueness format a10
SQL> SELECT index_name, table_name, uniqueness FROM user_indexes WHERE table_name = 'T';
INDEX_NAME TABLE_NAME UNIQUENESS
---------- ---------- ----------
PK_A T UNIQUE
Without Index
SQL> DROP TABLE t PURGE;
Table dropped.
SQL> CREATE TABLE t(A NUMBER);
Table created.
SQL> SET timing ON
SQL> INSERT INTO t SELECT LEVEL FROM dual CONNECT BY LEVEL <=1000000;
1000000 rows created.
Elapsed: 00:00:00.60
So, it took only 00:00:00.60 which is faster compared to 00:00:02.26.

Data loading in Oracle

I am facing a problem loading data. I have to copy 800,000 rows from one table to another in an Oracle database.
I tried with 10,000 rows first, but the time it took was not satisfactory. I tried using BULK COLLECT and INSERT INTO ... SELECT, but in both cases the response time is around 35 minutes. This is not the response time I am looking for.
Does anyone have any suggestions?
Anirban,
Using an "INSERT INTO SELECT" is the fastest way to populate your table. You may want to extend it with one or two of these hints:
APPEND: to use direct path loading, circumventing the buffer cache
PARALLEL: to use parallel processing if your system has multiple CPUs and this is a one-time operation, or an operation that takes place at a time when it doesn't matter that one "selfish" process consumes more resources.
Just using the APPEND hint, my laptop copies 800,000 very small rows in under 5 seconds:
SQL> create table one_table (id,name)
2 as
3 select level, 'name' || to_char(level)
4 from dual
5 connect by level <= 800000
6 /
Table created.
SQL> create table another_table as select * from one_table where 1=0
2 /
Table created.
SQL> select count(*) from another_table
2 /
COUNT(*)
----------
0
1 row selected.
SQL> set timing on
SQL> insert /*+ append */ into another_table select * from one_table
2 /
800000 rows created.
Elapsed: 00:00:04.76
You mention that this operation takes 35 minutes in your case. Can you post some more details, so we can see what exactly is taking 35 minutes?
Regards,
Rob.
I would agree with Rob. INSERT INTO ... SELECT is the fastest way to do this.
What exactly do you need to do? If you're trying to replace a table by copying its contents to a new table and then deleting the old one, you might be better off just renaming the table:
alter table old_table_name
rename to new_table_name;
INSERT INTO SELECT is the fastest way to do it.
If possible/necessary, disable all indexes on the target table first.
If you have no existing data in the target table, you can also try CREATE TABLE ... AS SELECT, as sketched below.
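A rough sketch of that, with NOLOGGING and PARALLEL added (table names are placeholders; take a backup afterwards, because NOLOGGING changes cannot be recovered from the redo):
create table target_table
nologging
parallel 4
as
select * from source_table;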
As above, I would recommend INSERT INTO ... SELECT ... or CREATE TABLE ... AS SELECT ... as the fastest way to copy a large volume of data between two tables.
You want to look up the direct-load (direct-path) insert in your Oracle documentation. This adds two items to your statements: PARALLEL and NOLOGGING. Repeat the tests, but do the following:
CREATE TABLE Table2 AS SELECT * FROM Table1 WHERE 1=2;
ALTER TABLE Table2 NOLOGGING;
ALTER TABLE Table2 PARALLEL 10;
ALTER TABLE Table1 PARALLEL 10;
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND */ INTO Table2 SELECT * FROM Table1;
COMMIT;
ALTER TABLE Table2 LOGGING;
This turns off redo logging for the direct-path insert into the table. If you have to restore from a backup taken before the load, the NOLOGGING changes cannot be recovered from the redo, so take a fresh backup afterwards. PARALLEL uses N worker processes to copy the data in blocks. You'll have to experiment with the number of parallel workers to get the best results on your system.
Is the table you are copying to of the same structure as the other table? Does it already have data, or are you creating a new one? Can you use exp/imp? exp can be given a query to limit what it exports, and the result can then be imported into the database. What is the total size of the table you are copying from? If you are copying most of the data from one table to a second, it may be less work to copy the full table with exp/imp and then remove the unwanted rows.
Try dropping all indexes/constraints on your destination table and then re-creating them after the data load.
Use a direct-path (APPEND) insert with the table set to NOLOGGING if you run in NOARCHIVELOG mode, or be prepared to take a backup right after the operation.
