How to reduce undo tablespace size in Oracle?

The undo tablespace is 30 GB even though there is no activity on the database.

As the documentation says, we are quite limited when it comes to UNDO tablespaces: there is no syntax for shrinking an UNDO tablespace, even in 11g. Without intervention, the UNDO tablespace should be sized to fit our largest transaction. This means that if we have a huge batch process which runs once a year, then the UNDO tablespace ought to be large enough for it.
Why doesn't Oracle provide tools for shrinking the UNDO tablespace? Because if we have had enough transactions to stretch it to 30 GB once, we are likely to have that load again. Freeing up the disk space won't help us, because the UNDO tablespace is going to try to reclaim it. If we have used that space for some other purpose in the meantime, our huge annual transaction will fall over.
Now, if you think some abnormal data processing has distorted your tablespace, you are convinced you're never going to need that much UNDO again, and you really need the disk space, then you can use the ALTER DATABASE syntax to shrink the individual data files.
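A minimal sketch of that approach; the tablespace name UNDOTBS1 and the datafile path are assumptions, so look up your own undo datafiles first:

-- Find the undo datafiles and their current sizes.
select file_name, bytes/1024/1024/1024 as gb
from dba_data_files
where tablespace_name = 'UNDOTBS1';

-- Shrink one file; this fails with ORA-03297 if used extents
-- sit beyond the requested size.
alter database datafile 'C:\My\Oracle\Install\UNDOTBS01.DBF' resize 2g;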

Related

How can I reduce oracle database storage?

I have a problem with a lack of hard disk space,
so I deleted all records from a big table in my Oracle database.
But my hard disk space usage has not changed.
How can I fix it?
Try ALTER TABLE table_name SHRINK SPACE CASCADE; and rebuild your indexes.
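For example, a minimal sketch (big_table is a placeholder, and the segment must live in an ASSM tablespace):

-- Shrinking a heap table requires row movement to be enabled.
alter table big_table enable row movement;

-- CASCADE shrinks the dependent segments (e.g. indexes) as well.
alter table big_table shrink space cascade;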
Deleting all rows from a table does not reset the high water mark and does not free up any space. This is because with DML operations Oracle assumes the space will be re-used later and doesn't bother to release it.
You probably want to run a TRUNCATE statement instead. It's also much faster than a DELETE because it generates almost no redo or undo (only a tiny amount for the dictionary changes). But beware that truncating a table is a DDL statement that cannot be rolled back.
TRUNCATEing tables releases space to the tablespace and that space is available to other objects. But it still doesn't reduce the amount of operating system space used. Lowering disk space requires shrinking datafiles.
Shrinking datafiles can be tricky. Datafiles can only be shrunk to the last extent. It's possible for a datafile to have a ton of empty space but not be shrinkable, if a single block of data is at the end of the file. In that case the datafile must be defragmented. That can be done by Oracle Enterprise Manager and you can find many other scripts to do it. They usually involve moving or rebuilding all the objects.
The simplest way to save some space is to try to shrink every file. This won't free up all available space, but in practice most of the free space usually sits after the last extent and can be reclaimed.
Run the script below; it generates a bunch of ALTER statements that try to shrink the datafiles by a large amount. Most of the statements will fail, but that's OK. Cut the number in half and try again, repeatedly. This is a "dumb" way to do it, but it can usually free up some space quickly.
-- GB to shave off each file; halve this value and re-run as needed.
with decrease_size as (select 64 gb from dual)
select 'alter database datafile '''||file_name||''' resize '
       ||to_char(round(bytes/1024/1024/1024) - (select gb from decrease_size))||'g;'
from dba_data_files
where bytes > (select gb from decrease_size) * 1024*1024*1024
order by 1 desc;

Oracle 12c: Wasted Disk Space and Performance

The nature of my application involves daily deleting and bulk inserting of large datasets into an Oracle 12c database. My tables are interval-partitioned by a date field and have partitioned indexes. I use a stored procedure to gather statistics for the affected partitions after each run. Lately, the runs have been slowing down considerably and I was wondering if this is due to the increasing size of the database.
I have searched for how to calculate the total disk space that my tables use and usually arrive at this:
select sum(bytes)/1024/1024/1024
from dba_segments
where owner='SCHEMA' and segment_name in ('TABLE_A', 'TABLE_B');
However, the numbers were huge and did not reflect the actual data volume. When we exported the tables for restoration to another database, the dump file was much smaller than that query suggests. I dug deeper and arrived at this query instead:
select partition_name,
       blocks*8/1024 size_m,   -- allocated space in MB, assuming an 8 KB block size
       num_rows*avg_row_len/1024/1024 occ_m,   -- MB actually occupied by rows
       blocks*8/1024 - num_rows*avg_row_len/1024/1024 wast_m   -- "wasted" MB
from dba_tab_partitions
where table_name='TABLE_A';
This query suggests that there is a "wasted" space effect: after bulk inserts, then deletes before the data is replaced again, the space that was used is not reclaimed.
Thus I have the following questions:
a. Does the "wasted" space contribute to performance degradation when I perform delete from table where ..?
b. Is there a difference between performing a delete from table where .. as compared to dropping the partitions with regard to "wasted" space?
c. Is performing table reorganization / defragmentation on a regular basis to reclaim table space a recommended practice?
Does the "wasted" space contribute to performance degradation when I perform delete from table where ..?
Yes. When you delete from a table, Oracle has to perform a full table scan or index range scan (index leaf nodes may point to empty blocks) on the underlying table up to the high water mark, which makes your delete slow.
Is there a difference between performing a delete from table where .. as compared to dropping the partitions with regard to "wasted" space?
Deleting is a slow process. It has to create before-images (undo), update indexes, write redo logs, and remove the data. Since DDL (DROP) generates almost no redo/undo (only a tiny amount for the metadata), it is faster than DML (DELETE).
Is performing table reorganization/defragmentation on a regular basis to reclaim table space a recommended practice?
Objects with fragmented free space can waste a lot of space and can impact database performance. The preferred way to defragment and reclaim this space is to perform an online segment shrink.
For details: Reclaiming Unused Space
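A sketch of the two-phase form of an online shrink (the table name is a placeholder, and row movement must already be enabled):

-- Phase 1: compact the rows online without moving the high water mark.
alter table big_table shrink space compact;

-- Phase 2: move the high water mark and release the space;
-- this takes a brief exclusive lock at the end.
alter table big_table shrink space;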
The following blog post demonstrates the performance impact of wasted space during DML and how to get rid of it.
Defragmentation Can Degrade Query Performance
If you're doing deletes or updates, your space is getting fragmented. You can read about it in the documentation.
To improve your process you can either perform cleaning operations like a shrink, or simply recreate the table around big loads. That is, instead of doing a delete followed by an insert, create a new table as a select of the rows you want to keep from the old one, insert the new set into the new table, then swap the names and drop the old table, as sketched below.
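A sketch of that recreate-and-swap approach; all names and the keep-condition are placeholders, and indexes, constraints, and grants must be recreated on the new table:

-- Keep only the rows that should survive.
create table my_table_new as
select * from my_table where status <> 'OBSOLETE';

-- Swap names and drop the old segment, releasing its space.
rename my_table to my_table_old;
rename my_table_new to my_table;
drop table my_table_old purge;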
As for your second question, I think the answer is here: dropping a partition will lower the HWM, while a delete will not.
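For example (the partition name is hypothetical):

-- Dropping a partition removes its whole segment at once;
-- UPDATE GLOBAL INDEXES keeps any global indexes usable.
alter table table_a drop partition p_20150101 update global indexes;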
This query suggests that there is a "wasted" space effect: after bulk inserts, then deletes before the data is replaced again, the space that was used is not reclaimed.
This is correct.
A direct path insert uses space above the high water mark for the segment. Subsequent deletes remove rows, but do not reset the high water mark.
It would be best to truncate the segment before performing another direct-path insert, as this resets the high water mark as well as removing all the rows.
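A sketch of that load pattern, with hypothetical table names:

-- Reset the high water mark before the next load
-- (or per partition: alter table table_a truncate partition ...).
truncate table table_a;

-- A direct-path insert loads blocks above the freshly reset high water mark.
insert /*+ append */ into table_a
select * from staging_table;
commit;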

Oracle Tablespaces maxsize "unlimited" not really unlimited

I recently needed to import a .dmp into a new user I created. I also created a new tablespace for the user with the following command:
create tablespace my_tablespace
datafile 'C:\My\Oracle\Install\DataFile01.dbf' size 10M
autoextend on
next 512K
maxsize unlimited;
While the import was running I got an error:
ORA-01652 Unable to extend my_tablespace segment by in tablespace
When I examined the data files in the dba_data_files view, I observed the maxsize was around 34 GB. Because I knew the general size of the database, I was able to import the .dmp without any issues after adding multiple datafiles to the tablespace.
Why did I need to add multiple datafiles to the tablespace when the first one I added was set to automatically grow to an unlimited size? Why was the maximum size 34 GB and not unlimited? Is there a hard cap of 34 GB?
As you've discovered, and as Alex Poole pointed out, there are limits to an individual data file's size. Smallfiles are limited to 128 GB and bigfiles are limited to 128 TB, depending on your block size. (But you do not want to change your block size just to increase those limits.) With the default 8 KB block size a smallfile datafile tops out at about 4 million blocks, which is roughly 34 GB, exactly the ceiling you hit. The size limit in the create tablespace command is only there if you want to further restrict the size.
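To see the ceiling each of your files actually has, a query along these lines works (these are standard DBA_DATA_FILES columns):

-- Current size, autoextend ceiling, and autoextend flag per datafile.
select file_name,
       bytes/1024/1024/1024 as current_gb,
       maxbytes/1024/1024/1024 as max_gb,
       autoextensible
from dba_data_files
order by tablespace_name, file_name;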
This can be a bit confusing. You probably don't care about managing files and just want it to work. Managing database storage is always going to be annoying, but here are some things you can do:
Keep your tablespaces to a minimum. There are some rare cases where it's helpful to partition data into lots of small tablespaces, but those rare benefits are usually outweighed by the pain you will experience managing all those objects.
Get in the habit of always adding more than one data file. If you're using ASM (which I wouldn't recommend for a local instance), there is almost no reason not to go "crazy" when adding datafiles. Even if you're not using ASM you should still go a little crazy. As long as you set the initial size low, you're not close to the MAX_FILES limit, and you're not dealing with one of the special tablespaces like UNDO and TEMP, there is no penalty for adding more files. Don't worry too much about allocating more potential space than your hard drive contains. This drives some DBAs crazy, but you have to weigh the chance of running out of OS space against the chance of running out of space in one of a hundred files. (In either case, your application will crash.)
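For example, reusing the question's own naming convention for a second file (sizes are illustrative):

-- Add another autoextending datafile to the existing tablespace.
alter tablespace my_tablespace
  add datafile 'C:\My\Oracle\Install\DataFile02.dbf'
  size 10M autoextend on next 512K maxsize unlimited;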
Set the RESUMABLE_TIMEOUT parameter. SQL statements that run out of space will then be suspended instead of failing, may generate an alert, will be listed in DBA_RESUMABLE, and will wait patiently for more space. This is very useful in data warehouses.
Why is it called "UNLIMITED"?
I would guess the keyword UNLIMITED is a historical mistake. Oracle has had the same file size limitation since at least version 7, and perhaps earlier. Oracle 7 was released in 1992, when a 1GB hard drive cost $1995. Maybe every operating system at the time had a file size limitation lower than that. Perhaps it was reasonable back then to think of 128GB as "unlimited".
An unlimited MAXSIZE is not enough for this operation on its own; your resumable timeout must also be long enough. The value is set in seconds, and a value of 0 (the default) disables resumable space allocation entirely, so to let statements wait a long time for space, use something like:
alter system set resumable_timeout = 86400;
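While a statement is suspended you can watch it wait; for example:

-- Suspended statements appear here with the error that stalled them.
select name, status, error_msg from dba_resumable;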

Is it good to use default tablespace for high volume tables?

In our application (Oracle-based) we handle a high volume of data. For a few major tables we use a separate tablespace, but the remaining tables use the default tablespace.
My questions are:
a. Is it good (in terms of performance) to have a separate tablespace for every table where the number of records is more than a million?
b. Or can we define a separate tablespace, instead of the default tablespace, for the remaining tables?
c. Does it affect performance if the default tablespace is used for the high-volume tables?
Any suggestion would be appreciated.
Usually nowadays you don't have dedicated discs for your tablespaces; data is stored in a storage area network (SAN) and you don't have any fixed relation to a physical file system. Thus the distribution of tablespaces is less sensitive or critical than in earlier days, as long as you don't have very special or very big data.
For example, I have an application where I receive about 1 billion records, i.e. approximately 150 GB, every day. There I use one tablespace for each daily partition (i.e. 150 cycling tablespaces). The main reason is easier maintenance, for example truncating old data.
SANs seem to have killed this debate. However, when you consider things like locally managed tablespaces, there is a trend to use these so that tables inherit storage properties. For example, it is very common these days to see tablespaces like LM_SMALL_TABLE, LM_MEDIUM_TABLE, LM_LARGE_TABLE (and similar for indexes), or LM_16K_TABLE, LM_1M_TABLE, LM_10M_TABLE, LM_100M_TABLE and similar for indexes. These will have initial and next extents set to 16 KB, 1 MB, etc. Tables are then placed in the tablespace appropriate to their expected volume. You sometimes see databases where archived/read-only data is moved to cheaper disk by moving the table/partition to such a tablespace. The only time I've seen one tablespace per table was on 8i, where the client wanted this so that a particular table could be restored from a backup by restoring just that tablespace's datafiles.
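A sketch of how one of those uniform-extent tablespaces might be created, in the spirit of the LM_1M_TABLE convention described above (the name, path, and sizes are illustrative):

-- Locally managed tablespace where every extent is exactly 1 MB.
create tablespace lm_1m_table
  datafile '/u01/oradata/ORCL/lm_1m_table01.dbf' size 100M autoextend on
  extent management local uniform size 1M;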

reclaim the space after a large delete in oracle

In my application we upload huge amounts of data. If any data is uploaded wrongly, we delete it. When will the space from deleted data be reclaimed? Is there any impact on performance? Is there any way to reclaim the space after a large delete in Oracle, and how?
In Oracle, deleting data does not automatically reclaim disk space. The database will retain its storage until you do something administrative to the tablespace. The expectation of this behavior is that if you needed the storage at one time, you will likely need it again and it would therefore be more efficient to simply keep the allocation.
As far as performance impact, less data to process will generally make queries go faster. :)
To reclaim the space, you have a few choices. This article from ORACLE-BASE has a pretty comprehensive look at this situation.
Also, why would you insert data, then determine it is "bad" to then delete it? Wouldn't you be better off avoiding putting the data in from the beginning?
You need to research the High Water Mark (HWM).
Basically it is the maximum number of blocks that has ever been used by a specific table. If you load a large volume of data then you may well raise the HWM; deleting those records does not then reduce it.
Here is a great article on how to adjust the HWM and if, once you understand it, you think it may be affecting your environment then use the tips included to reduce your HWM.
Hope it helps...
There is no way to reclaim the space with a delete alone. But if, after the delete, the remaining data is relatively small, then create a new table with that data, drop the old table, and rename the new table to the old name.
I know it is an old question, but for future reference:
DELETE empties the table, but the space remains allocated ("locked") for future inserts.
TRUNCATE TABLE does the trick (you need to disable all foreign keys referencing the table for it to work).
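A sketch of that sequence, with hypothetical table and constraint names:

-- Truncate fails with ORA-02266 while enabled foreign keys reference the table.
alter table child_table disable constraint fk_child_parent;

truncate table parent_table;

alter table child_table enable constraint fk_child_parent;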
