Compress Oracle table - oracle

I need to compress a table. I used alter table tablename compress to compress the table. After doing this the table size remained the same.
How should I be compressing the table?

To compress the old blocks of the table use:
alter table table_name move compress;
This reinserts the records into new blocks in compressed form and discards the old blocks, so you will gain space. It also invalidates the indexes, so you will need to rebuild them.
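For example, a minimal sketch of the full sequence, assuming a table called mytable whose unusable indexes you look up and rebuild afterwards (the table and index names are placeholders):
alter table mytable move compress;
-- the move leaves the table's indexes UNUSABLE; find and rebuild them
select index_name from user_indexes where table_name = 'MYTABLE' and status = 'UNUSABLE';
alter index mytable_idx1 rebuild;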

Compress does not affect already stored rows. Please check the official documentation:
"You specify table compression with the COMPRESS clause of the CREATE TABLE statement. You can enable compression for an existing table by using this clause in an ALTER TABLE statement. In this case, the only data that is compressed is the data inserted or updated after compression is enabled..."
ALTER TABLE t MOVE COMPRESS is a valid answer. But if you use different non-default options, especially with a big data volume, do regression tests before using ALTER TABLE ... MOVE.
There have historically been problems (performance degradation and bugs) with it. If you have access, search the Oracle bug database to see if there are known problems for the features and version you use.
You are on the safer side if you: create a new table, insert the data from the original (old) table, drop the old table, and rename the new table to the old table name.
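A minimal sketch of that safer sequence, assuming a table called mytable (the names are placeholders, and you would also need to recreate indexes, constraints, and grants on the new table before dropping the old one):
create table mytable_new compress as select * from mytable;
-- recreate indexes, constraints, and grants on mytable_new here
drop table mytable;
rename mytable_new to mytable;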

Related

Truncate a table with multiple list partitions in oracle without having to mention the partition name

I have a table with multiple list partitions. I basically want to truncate all the partitions at once without having to mention the partition names. I did try using a plain truncate table table_name and it seems to work. I am quite new to partitions in Oracle and am not too sure if this is the right way of doing it.
I also know from reading that I can truncate multiple partitions using the alter table ... truncate partition command.
Thanks,
Kavin
Yes, truncating the table truncates all partitions.
https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/TRUNCATE-TABLE.html#GUID-B76E5846-75B5-4876-98EC-439E15E4D8A4
If table is partitioned, then all partitions or subpartitions, as well as the LOB data and LOB index segments for each partition or subpartition, are truncated.
Note some of the documented side-effects also, such as UNUSABLE indexes being made USABLE.
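For comparison, if you ever need to truncate only one partition rather than the whole table, the per-partition form mentioned in the question looks like this (table and partition names are placeholders):
alter table table_name truncate partition p_jan;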

Removing bloat from greenplum table

I have created some tables in Greenplum and am performing insert, update, and delete operations on them. I am also performing the vacuum operation regularly, yet I still found bloat in the tables. I found a solution to remove the bloat here: https://discuss.pivotal.io/hc/en-us/articles/206578327-What-are-the-different-option-to-remove-bloat-from-a-table
However, if I truncate the table and reinsert the data, it removes the bloat. Is it good practice to truncate the data from the table?
If you are performing UPDATE and DELETE statements on a heap table (default storage) and running VACUUM regularly, you will get some bloat by design. Heap storage, which is similar to the default PostgreSQL storage mechanism, provides read consistency using Multi-Version Concurrency Control (MVCC).
When you UPDATE or DELETE a record, the old value is still in the table and is able to be read by transactions that are still inflight and started before you issued the UPDATE or DELETE command. This provides the read consistency to the table.
When you execute a VACUUM statement, the database will mark the stale rows as available to be overwritten. It doesn't shrink the files. It just marks rows so they can be overwritten. The next time you execute an INSERT or UPDATE, the stale rows are now able to be used for the new data.
So if you UPDATE or DELETE 10% of a table between runs of VACUUM, you will probably have about 10% bloat.
Greenplum also has Append-Optimized (AO) storage, which doesn't use MVCC and uses a visibility map instead. The files are a bit smaller too, so you should get better performance. The stale rows are hidden with the visibility map and VACUUM won't do anything until you hit the gp_appendonly_compaction_threshold percentage. The default is 10%. When you have 10% bloat in an AO table and execute VACUUM, the table will automatically get rebuilt for you.
Append-Optimized is called "appendonly" for backwards compatibility reasons but it does allow UPDATE and DELETE. Here is an example of an AO table:
CREATE TABLE sales
(txn_id int, qty int, date date)
WITH (appendonly=true)
DISTRIBUTED BY (txn_id);
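Once the stale rows in such a table exceed the threshold, a plain VACUUM compacts it; a sketch using the sales table above:
-- rebuilds sales automatically once bloat exceeds gp_appendonly_compaction_threshold
VACUUM sales;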
Instead of truncating, it is better to drop the table, recreate it, and then insert the data.
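A minimal sketch of that drop-and-recreate approach for the sales table above (the WITH and DISTRIBUTED BY clauses are assumptions to match the earlier example):
-- copy the live rows into a fresh AO table, then swap the names
CREATE TABLE sales_new
WITH (appendonly=true)
AS SELECT * FROM sales
DISTRIBUTED BY (txn_id);
DROP TABLE sales;
ALTER TABLE sales_new RENAME TO sales;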

Delete rows from partition table - Best way

I want to delete around 1 million records from a partitioned table whose size is around 10-13 million rows. As of now only 2 partitions exist in the table, containing July data and August data, and I want to delete from the July partition. Can you please let me know if a simple delete from the table partition (0715) is OK to do? Are there possibilities of fragmentation, or is there a better way out?
Thank you
DELETE is a rather costly operation on large partitioned tables (but 10M rows is not really large). Typically you try to avoid it and remove the data partition-wise using drop partition.
The simplest scheme is a rolling window, where you define a range partitioning scheme and drop the oldest partition once it passes the retention interval.
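A minimal sketch of that rolling-window drop (table and partition names are placeholders; UPDATE GLOBAL INDEXES keeps any global indexes usable):
ALTER TABLE TAB DROP PARTITION p_2015_07 UPDATE GLOBAL INDEXES;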
If you need more control, you may use the CTAS and exchange-back approach.
Instead of deleting a large part of a partition, create a copy of it:
create table TMP as
select * from TAB PARTITION (ppp)
where <predicate to filter out records to be omitted for partition ppp>
Create indexes on the TMP table in the same structure as the LOCAL indexes of the partitioned table.
Then exchange the temporary table with the partition:
ALTER TABLE TAB
EXCHANGE PARTITION ppp WITH TABLE TMP including indexes
WITHOUT VALIDATION
Note that there is no fragmentation as a result; on the contrary, you may use this to reorganize the partition data (e.g. with ORDER BY in the CTAS or with COMPRESS, etc.).
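For example, the CTAS step could compress and order the kept rows at the same time (a sketch; the ORDER BY column is a placeholder):
create table TMP compress as
select * from TAB PARTITION (ppp)
where <predicate to filter out records to be omitted for partition ppp>
order by some_col;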
You can truncate the partition in the given table. You can also perform a delete if you want to remove only a few rows from the partition. Please share your table structure along with the partition details so that it will be easier for people here to assist you.

Oracle 11g Deleting large amount of data without generating archive logs

I need to delete a large amount of data from my database on a regular basis. The process generates a huge volume of archive logs. We had a database crash at one point because there was no storage space available on the archive destination. How can I avoid generating logs while I delete data?
The data to be deleted is already marked as inactive in the database. Application code ignores inactive data. I do not need the ability to rollback the operation.
I cannot partition the data in such a way that inactive data falls in one partition that can be dropped. I have to delete the data with delete statements.
I can ask DBAs to set certain configuration at table level/schema level/tablespace level/server level if needed.
I am using Oracle 11g.
What proportion of the data in the table would be deleted, and what volume? Are there any referential integrity constraints to manage, or is this table childless?
Depending on the answers, you might consider:
"CREATE TABLE keep_data UNRECOVERABLE AS SELECT * FROM ... WHERE
[keep condition]"
Then drop the original table
Then rename keep_table to original table
Rebuild the indexes (again with UNRECOVERABLE to prevent redo), constraints, etc.
The problem with this approach is that it's a multi-step DDL process, which you will have a job to make fault tolerant and reversible.
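A minimal sketch of those steps, with placeholder table, predicate, index, and column names standing in for your own "keep condition" and schema:
CREATE TABLE keep_data UNRECOVERABLE AS
SELECT * FROM big_table WHERE status = 'ACTIVE';  -- big_table and the predicate are placeholders
DROP TABLE big_table;
RENAME keep_data TO big_table;
-- recreate each index with NOLOGGING (the current equivalent of UNRECOVERABLE) to limit redo
CREATE INDEX big_table_ix1 ON big_table (status) NOLOGGING;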
A safer option might be to use data-pump to:
Data-pump expdp to extract the "Keep" data
TRUNCATE the table
Data-pump impdp import of data from step 1, with direct-path
At this point I suggest you read the Oracle manual on Data Pump, particularly the section on Direct Path Loads to be sure this will work for you.
My preferred option would be partitioning.
Of course, the best way would be TenG's solution (CTAS, drop and rename table), but it seems that's impossible for you.
Your only problems are the amount of archive logs and the database crash. In this case, maybe you could split your delete statement into batches (for example, 10,000 rows at a time).
Something like:
declare
  e number;
  f number;
begin
  select count(*) into e from myTable where [delete condition];
  f := trunc(e / 10000) + 1;
  for i in 1 .. f
  loop
    delete from myTable where [delete condition] and rownum <= 10000;
    commit;
    dbms_lock.sleep(600); -- pause so old archive logs can be purged, if possible
  end loop;
end;
After this operation, you should reorganize your table, which is surely fragmented.
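One way to do that reorganization without taking the indexes offline (a sketch, assuming the table lives in an ASSM tablespace):
alter table myTable enable row movement;
alter table myTable shrink space;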
Alter the table to set NOLOGGING, delete the rows, then turn logging back on.

Can't drop Oracle index partition -- any alternative besides dropping entire index and rebuilding?

So, I have a .NET program doing batch loading of records into partitioned tables using array bound stored procedure calls via Oracle ODP.NET, but that's neither here nor there.
What I would like to know is: because I have a partitioned index on said tables, the speed of the batch load is pretty slow. I fully understand that I cannot drop an index partition, but I would obviously prefer not to have to drop and rebuild the entire index since that will take considerably more time to execute. Is this my only recourse?
Is there a fairly simple way to drop the partition itself and then rebuild the partition and index partition that would save time and go about accomplishing my goal?
Are you loading an entire partition at once? Or are you merely adding new rows to an existing partition? Are all the indexes equipartitioned with the table?
Normally, if you are loading data into a partitioned table, your partitioning scheme is chosen so that each load will put data into a fresh partition. If that is the case, you can use partition exchange to load the data. In a nutshell, you load data into an (unindexed) staging table whose structure matches the real table, you create the indexes to match the indexes on the real table, and then do
ALTER TABLE partitioned_table
EXCHANGE PARTITION new_partition_name
WITH TABLE staging_table_name
WITHOUT VALIDATION;
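A sketch of the preceding staging-table setup, with placeholder names (the staging table must match the partitioned table's columns, and its indexes must match the LOCAL indexes):
-- empty copy of the partitioned table's structure
CREATE TABLE staging_table_name AS
SELECT * FROM partitioned_table WHERE 1 = 0;
-- bulk-load the new rows into staging_table_name here
-- index it to match each LOCAL index of partitioned_table
CREATE INDEX staging_ix1 ON staging_table_name (key_col);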
