How can I reduce oracle database storage?

I have a problem with a lack of hard disk space,
so I deleted all records from a big table in my Oracle database.
But my hard disk space has not changed.
How can I fix it?

Try ALTER TABLE table_name SHRINK SPACE CASCADE; and rebuild your indexes.
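Note that SHRINK SPACE only works on segments in ASSM tablespaces and needs row movement enabled first; a minimal sketch with a placeholder table name:
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE CASCADE;   -- CASCADE also shrinks the dependent indexes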

Deleting all rows from a table does not reset the high water mark and does not free up any space. This is because with DML operations Oracle assumes the space will be re-used later and doesn't bother to release it.
You probably want to run a TRUNCATE statement instead. It's also much faster than a DELETE, as it doesn't generate undo or redo for the rows it removes. But beware that truncating a table is a DDL statement that cannot be rolled back.
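A minimal sketch (big_table is a placeholder name):
TRUNCATE TABLE big_table;                    -- DROP STORAGE is the default: extents are released back to the tablespace
-- TRUNCATE TABLE big_table REUSE STORAGE;   -- keeps the extents allocated if the table will be refilled soon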
Truncating a table releases space to the tablespace, and that space is available to other objects. But it still doesn't reduce the amount of operating system space used: lowering disk usage requires shrinking the datafiles.
Shrinking datafiles can be tricky. A datafile can only be shrunk down to its last used extent, so it's possible for a file to contain a ton of empty space yet not be shrinkable, because a single block of data sits at the end of the file. In that case the datafile must be defragmented. That can be done with Oracle Enterprise Manager, and you can find many scripts to do it; they usually involve moving or rebuilding all the objects.
The simplest way to save some space is to try to shrink every file. This won't free up all of the available space, but in practice most of the free space usually sits after the last extent and can be reclaimed.
Run the script below; it will generate a batch of ALTER statements that try to shrink the datafiles by a large amount. Most of the statements will fail, but that's OK. Cut the number in half and try again, repeatedly. This is a "dumb" way to do it, but it can usually free up some space quickly.
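-- for every datafile larger than the value in decrease_size (64 GB here), generate a
-- resize statement that tries to free that many GB; adjust the value and re-run as needed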
with decrease_size as (select 64 gb from dual)
select 'alter database datafile '''||file_name||''' resize '||to_char(round(bytes/1024/1024/1024)-(select gb from decrease_size))||'g;'
from dba_data_files
where bytes > (select gb from decrease_size) * 1024*1024*1024
order by 1 desc;

Related

Drop columns from table using option CHECKPOINT

Need to drop 3 columns from a table having about 3 million rows. It is taking about 3 hours to drop the 3 columns. I'm thinking of using CHECKPOINT but not sure what CHECKPOINT number I need to use. Also, is it safe to use the CHECKPOINT option?
We are on Oracle 12.2
So far, I've tried this:
ALTER TABLE <table1> SET UNUSED (<col1>, <col2>, <col3>);
ALTER TABLE <table1> DROP UNUSED COLUMNS;
Is your goal to improve performance, or to conserve UNDO space used?
CHECKPOINT will not improve performance. Your command must alter all 3 million rows of data, regardless. CHECKPOINT will limit the amount of UNDO rows that exist at any one time, but it won't limit the total number of UNDO rows that need to be created over the course of the transaction. If anything, checkpointing - which will clear UNDO records and write more to REDO - will introduce even more disk I/O operations to your transaction and slow it down further.
CHECKPOINT is really only useful if you have limited disk capacity for your UNDO tablespace, in which case the number of rows should depend on the amount of UNDO space used per row and the total amount of space you can allow for the transaction. That may take experimentation to determine - start high and work down until the transaction completes - you want as few checkpoints as possible while staying within your UNDO storage threshold.
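For reference, the checkpoint frequency is a row count given in the statement itself (the value below is only an illustration, using the question's placeholder table name):
ALTER TABLE table1 DROP UNUSED COLUMNS CHECKPOINT 10000;
-- if the operation is interrupted after a checkpoint, resume it with:
ALTER TABLE table1 DROP COLUMNS CONTINUE;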
Also, per the documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ALTER-TABLE.html#GUID-552E7373-BF93-477D-9DA3-B2C9386F2877) anything that goes wrong after a checkpoint has been applied and before the transaction is complete may leave the table in an unstable/unusable state. Therefore it is not entirely safe. Use with caution, and be prepared to restore from backup if things go wrong.
If this is a one off operation, just re-create the table without the columns.
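If you go that route, a rough sketch (names are placeholders; indexes, constraints, grants and triggers have to be recreated by hand):
CREATE TABLE table1_new AS
  SELECT col_keep1, col_keep2   -- only the columns you are keeping
  FROM   table1;
-- recreate indexes, constraints and grants on table1_new, then:
DROP TABLE table1;
RENAME table1_new TO table1;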

Oracle 12c: Wasted Disk Space and Performance

The nature of my application involves daily deleting and bulk inserting of large datasets into an Oracle 12c database. My tables are interval-partitioned by a date field and partitioned-indexed. I use a stored procedure to gather statistics for the affected partitions after each run. Lately, I found that the runs have been slowing down considerably and was wondering if this was due to the increasing size of the database.
I have searched for how to calculate the total disk space that my tables use and usually arrive at this:
select sum(bytes)/1024/1024/1024
from dba_segments
where owner='SCHEMA' and segment_name in ('TABLE_A', 'TABLE_B');
However, the numbers were huge and did not reflect the actual data volume used. When we exported the tables for restoration to another database, the export file was much smaller than that query suggests. I dug deeper and arrived at this query instead:
select partition_name,
blocks*8/1024 size_m,
num_rows*avg_row_len/1024/1024 occ_m,
blocks*8/1024 - num_rows*avg_row_len/1024/1024 wast_m
from dba_tab_partitions
where table_name='TABLE_A';
This query suggests that there is a "wasted" space concept where after performing bulk inserts and deleting the data before it is replaced again, the space used is not reclaimed.
Thus I have the following questions:
Does the "wasted" space contribute to performance degradation when I perform delete from table where ..?
Is there a difference between performing a delete from table where .. as compared to dropping the partitions with regard to "wasted" space?
Is performing table reorganization / defragmentation on a regular basis to reclaim table space a recommended practice?
Does the "wasted" space contribute to performance degradation when I perform delete from table where ..?
Yes. When you delete from the table, Oracle has to perform a full table scan or index range scan (index leaf nodes may point to empty blocks) on the underlying table up to the high water mark, which makes your delete slow.
Is there a difference between performing a delete from table where .. as compared to dropping the partitions with regard to "wasted" space?
Deleting is a slow process. It has to create before images (undo), update indexes, write redo logs and remove the data. Since DDL (DROP) generates almost no undo or redo (only a tiny amount for the metadata), it is faster than DML (DELETE).
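For illustration (the partition and column names are hypothetical):
delete from table_a where load_date < date '2024-01-01';        -- row by row: undo, redo, index maintenance, HWM untouched
alter table table_a drop partition p_2024_01 update indexes;    -- removes the whole segment almost instantly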
Is performing table reorganization/defragmentation on a regular basis to reclaim table space a recommended practice?
Objects with fragmented free space can result in much wasted space, and can impact database performance. The preferred way to defragment and reclaim this space is to perform an online segment shrink.
For details: Reclaiming Unused Space
The following blog post demonstrates the performance impact on DML caused by wasted space and how to get rid of it:
Defragmentation Can Degrade Query Performance
If you're doing deletes or updates, your space is getting fragmented. You can read about it in the documentation.
To improve your process you can either perform cleaning operations such as a shrink, or simply recreate the table as part of the big insert: instead of doing the delete and then the insert, create a new table as a SELECT of the rows you want to keep from the old one, insert the new data set into it, then swap the names and drop the old table.
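A sketch of that pattern, with hypothetical table names:
create table sales_fact_keep as
  select * from sales_fact
  where load_date >= date '2024-01-01';   -- the rows you would otherwise have kept
insert /*+ append */ into sales_fact_keep
  select * from sales_fact_staging;       -- the new data set
commit;
rename sales_fact to sales_fact_old;
rename sales_fact_keep to sales_fact;
drop table sales_fact_old purge;
-- remember to recreate indexes, constraints and grants on the new table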
As for your second question, I think the answer is here: dropping a partition will reset the HWM, while a delete will not.
This query suggests that there is a "wasted" space concept where after performing bulk inserts and deleting the data before it is replaced again, the space used is not reclaimed.
This is correct.
A direct path insert uses space above the high water mark for the segment. Subsequent deletes remove rows, but do not reset the high water mark.
It would be best to be able to truncate the segment prior to performing another direct path insert, as this resets the high water mark as well as removing all the rows.
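For example (names are hypothetical), when the old contents are no longer needed before the next load:
truncate table table_a;   -- resets the high water mark and removes all rows
-- or, for a single partition: alter table table_a truncate partition p_2024_01;
insert /*+ append */ into table_a select * from table_a_staging;
commit;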

Oracle Tablespaces maxsize "unlimited" not really unlimited

I recently needed to import a .dmp into a new user I created. I also created a new tablespace for the user with the following command:
create tablespace my_tablespace
datafile 'C:\My\Oracle\Install\DataFile01.dbf' size 10M
autoextend on
next 512K
maxsize unlimited;
While the import was running I got an error:
ORA-01652 Unable to extend my_tablespace segment by in tablespace
When I examined the data files in the dba_data_files table, I observed the maxsize was around 34gb. Because I knew the general size of the database, I was able to import the .dmp without any issues after adding multiple datafiles to the tablespace.
Why did I need to add multiple datafiles to the tablespace when the first one I added was set to automatically grow to an unlimited size? Why was the maximum size 34gb and not unlimited? Is there a hard cap of 34gb?
As you've discovered, and as Alex Poole pointed out, there are limits to an individual data file size. Smallfiles are limited to 128GB and bigfiles are limited to 128TB, depending on your block size. (But you do not want to change your block size just to increase those limits.) The size limit in the create tablespace command is only there if you want to further limit the size.
This can be a bit confusing. You probably don't care about managing files and want it to "just work". Managing database storage is always gonna be annoying, but here are some things you can do:
Keep your tablespaces to a minimum. There are some rare cases where it's helpful to partition data into lots of small tablespaces. But those rare benefits are usually outnumbered by the pain you will experience managing all those objects.
Get in the habit of always adding more than one data file. If you're using ASM (which I wouldn't recommend if this is a local instance), then there is almost no reason not to go "crazy" when adding datafiles. Even if you're not using ASM you should still go a little crazy. As long as you keep the original size low, you're not close to the DB_FILES limit, and you're not dealing with one of the special tablespaces like UNDO and TEMP, there is no penalty for adding more files. Don't worry too much about allocating more potential space than your hard drive contains. This drives some DBAs crazy, but you have to weigh the chance of running out of OS space versus the chance of running out of space in a hundred files. (In either case, your application will crash.)
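For example, continuing the file naming from the question (sizes are only illustrative):
alter tablespace my_tablespace
  add datafile 'C:\My\Oracle\Install\DataFile02.dbf' size 10M
  autoextend on next 512K maxsize unlimited;
alter tablespace my_tablespace
  add datafile 'C:\My\Oracle\Install\DataFile03.dbf' size 10M
  autoextend on next 512K maxsize unlimited;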
Set the RESUMABLE_TIMEOUT parameter. Then SQL statements will be suspended, may generate an alert, will be listed in DBA_RESUMABLE, and will wait patiently for more space. This is very useful in data warehouses.
Why is it called "UNLIMITED"?
I would guess the keyword UNLIMITED is a historical mistake. Oracle has had the same file size limitation since at least version 7, and perhaps earlier. Oracle 7 was released in 1992, when a 1GB hard drive cost $1995. Maybe every operating system at the time had a file size limitation lower than that. Perhaps it was reasonable back then to think of 128GB as "unlimited".
An unlimited maxsize is not enough for this operation on its own; your resumable timeout must also be long enough. Note that RESUMABLE_TIMEOUT is specified in seconds, and a value of 0 disables resumable space allocation, so set it to a generously large value instead, for example:
alter system set resumable_timeout=7200;

Oracle 10g Table size after deletion of multiple rows

As part of maintenance, we remove a considerable number of records from one table and wish to make that space available to other tables. The thing is, when I check the size of the table it still shows the same size it had before the delete. What am I missing here?
To check the size, I'm looking at dba_segments.
There are some concepts you need to know about tables before you can really understand space usage. I will try and give you the 5 second tour ...
A table is made up of extents; these can vary in size, but let's just assume they are 1MB chunks here.
When you create a table, normally 1 extent is added to it, so even though the table has no rows, it will occupy 1MB.
When you insert rows, Oracle maintains an internal pointer for the table known as the high water mark (HWM). Below the HWM there are formatted data blocks, and above it there are unformatted blocks. It starts at zero.
If you take a new table as an example, and start inserting rows, the HWM will move up, giving more and more of that first extent to be used by the table. When the extent is all used up, another one will be allocated and the process repeats.
Let's say you fill three 1MB extents and then delete all the rows - the table is empty, but Oracle does not move the HWM back down, or free those used extents. So the table will take 3MB on disk even though it is empty, but there is 3MB of free space to be reused within it. New inserts will go into that space below the HWM until it is filled up again, before the HWM is moved again.
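A quick way to see this with a throwaway table:
create table t1 as select * from all_objects;   -- allocates a number of extents
select bytes/1024/1024 as mb from user_segments where segment_name = 'T1';
delete from t1;
commit;
select bytes/1024/1024 as mb from user_segments where segment_name = 'T1';  -- same size: the HWM has not moved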
To recover the space, if your table is in an ASSM tablespace, you can use the commands:
alter table t1 enable row movement;
alter table t1 shrink space;
If you are not using ASSM, then you need to think about a table reorg to recover the space.
If you want to "reclaim" the space, the easiest method is:
ALTER TABLE table MOVE TABLESPACE different_tablespace;
ALTER TABLE table MOVE TABLESPACE original_tablespace;
Providing you have:
Some downtime in which to do it
A second tablespace with enough space to transfer the table into.
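A sketch of that approach (names are placeholders); note that moving the table marks its indexes UNUSABLE, so rebuild them afterwards:
ALTER TABLE big_table MOVE TABLESPACE different_tablespace;
ALTER TABLE big_table MOVE TABLESPACE original_tablespace;
ALTER INDEX big_table_pk REBUILD;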
Take a look at this site about the table size after deleting rows.
Space is effectively reused when you delete. Your database will not show any new free space in dba_free_space -- it will have more blocks on freelists and more empty holes in index structures.
SELECT SUM(BYTES)
FROM DBA_SEGMENTS
WHERE SEGMENT_NAME LIKE 'YOUR TABLE NAME%';
You will get the right answer.

How to reduce undo tablespace size in oracle?

The undo tablespace size is 30 GB even though there is no activity going on in the database.
As the documentation says, we are quite limited when it comes to UNDO tablespaces: there is no syntax for shrinking an UNDO tablespace, even in 11g. The UNDO tablespace should be sized to fit our largest transaction. This means that if we have a huge batch process which runs once a year, then the UNDO tablespace ought to be large enough for it.
Why don't Oracle provide tools for shrinking the UNDO tablespace? Because if we have had the transactions to stretch it to 30GB once we are likely to have that load again. Freeing up the disk space won't help us, because the UNDO tablespace is going to try to reclaim it. If we have used that space for some other purpose then our huge annual transaction will fall over.
Now, if you think you have had some abnormal data processing which has distorted your tablespace, and you are convinced you're never going to need that much UNDO ever again, and you really need the disk space, then you can use the ALTER DATABASE syntax to shrink the individual data files.
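For example (the undo tablespace name, file name and target size are placeholders):
select file_name, round(bytes/1024/1024/1024) as gb
from dba_data_files
where tablespace_name = 'UNDOTBS1';
alter database datafile '/u01/oradata/ORCL/undotbs01.dbf' resize 2g;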
