If I have two datafiles attached to a tablespace, and BOTH are set to AUTOEXTEND and BOTH are set to unlimited, will Oracle know to extend both datafiles, or will it only extend one of them? I have read through many manuals, but none of them answer this question. As to why it is set up like this: well, it's an inherited system that I am starting to tune.
It depends on how much disk space you have (among other things). "Unlimited" in this context means "up to the largest supported data file size on your platform", not "grab all the disk space even if you can't use it." If you have enough disk space for two maximum-sized datafiles, you should be fine.
Also (possibly dumb question) since this is an existing application, why can't you just look and see what it did?
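One way to do that, for instance, is to check DBA_EXTENTS and see which file each extent actually landed in; a rough sketch (the tablespace name is a placeholder):

select file_id, count(*) as extent_count, round(sum(bytes)/1024/1024) as mb_used
from dba_extents
where tablespace_name = 'YOUR_TABLESPACE'
group by file_id
order by file_id;

If both files show a growing extent count over time, both are being used.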
-- MarkusQ
If you have two datafiles in the same tablespace, Oracle will only fill/extend the second when the first one is filled to its MAXSIZE.
If you have UNLIMITED on the first, there will be no reason (at least none that I can think of right now) why Oracle itself would fill/use/extend the second datafile.
Even if you have run out of space on the first datafile's filesystem/ASM/whatever, I'm quite sure Oracle will still try to extend the first datafile (at least that's what I think I saw just a few hours ago :P).
Please anyone, correct me if I'm wrong.
Related
I recently needed to import a .dmp into a new user I created. I also created a new tablespace for the user with the following command:
create tablespace my_tablespace
datafile 'C:\My\Oracle\Install\DataFile01.dbf' size 10M
autoextend on
next 512K
maxsize unlimited;
While the import was running I got an error:
ORA-01652 Unable to extend my_tablespace segment by in tablespace
When I examined the data files in the DBA_DATA_FILES view, I observed that the maxsize was around 34 GB. Because I knew the general size of the database, I was able to import the .dmp without any issues after adding multiple datafiles to the tablespace.
Why did I need to add multiple datafiles to the tablespace when the first one I added was set to automatically grow to an unlimited size? Why was the maximum size 34 GB and not unlimited? Is there a hard cap of 34 GB?
As you've discovered, and as Alex Poole pointed out, there are limits to an individual data file size. Smallfiles are limited to 128GB and bigfiles are limited to 128TB, depending on your block size. (But you do not want to change your block size just to increase those limits.) The size limit in the create tablespace command is only there if you want to further limit the size.
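To see what cap each file will actually grow to, you can query DBA_DATA_FILES; a quick sketch (the tablespace name is illustrative):

select file_name,
       round(bytes/1024/1024)    as current_mb,
       autoextensible,
       round(maxbytes/1024/1024) as max_mb
from dba_data_files
where tablespace_name = 'MY_TABLESPACE';

With an 8 KB block size, a smallfile data file tops out at roughly four million blocks, i.e. 32 GiB, which is the "around 34 GB" (decimal) MAXBYTES figure you observed.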
This can be a bit confusing. You probably don't care about managing files and want it to "just work". Managing database storage is always gonna be annoying, but here are some things you can do:
Keep your tablespaces to a minimum. There are some rare cases where it's helpful to partition data into lots of small tablespaces. But those rare benefits are usually outweighed by the pain you will experience managing all those objects.
Get in the habit of always adding more than one data file. If you're using ASM (which I wouldn't recommend if this is a local instance), then there is almost no reason not to go "crazy" when adding datafiles. Even if you're not using ASM you should still go a little crazy. As long as you set the original size to something low, you're not close to the MAX_FILES limit, and you're not dealing with one of the special tablespaces like UNDO and TEMP, there is no penalty for adding more files. Don't worry too much about allocating more potential space than your hard drive contains. This drives some DBAs crazy, but you have to weigh the chance of running out of OS space against the chance of running out of space in a hundred files. (In either case, your application will crash.)
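For example, extra files could be added to the tablespace from the earlier example along these lines (the file names are just illustrative):

alter tablespace my_tablespace
  add datafile 'C:\My\Oracle\Install\DataFile02.dbf' size 10M
  autoextend on next 512K maxsize unlimited;

alter tablespace my_tablespace
  add datafile 'C:\My\Oracle\Install\DataFile03.dbf' size 10M
  autoextend on next 512K maxsize unlimited;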
Set the RESUMABLE_TIMEOUT parameter. Then SQL statements will be suspended, may generate an alert, will be listed in DBA_RESUMABLE, and will wait patiently for more space. This is very useful in data warehouses.
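A rough sketch of that, assuming a one-hour timeout and a made-up statement name:

alter system set resumable_timeout = 3600;  -- suspend, rather than fail, for up to an hour

-- or just for one session, with a label you can look for later:
alter session enable resumable timeout 3600 name 'warehouse load';

-- suspended statements sit here until space is added or the timeout expires:
select name, status, error_msg from dba_resumable;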
Why is it called "UNLIMITED"?
I would guess the keyword UNLIMITED is a historical mistake. Oracle has had the same file size limitation since at least version 7, and perhaps earlier. Oracle 7 was released in 1992, when a 1GB hard drive cost $1995. Maybe every operating system at the time had a file size limitation lower than that. Perhaps it was reasonable back then to think of 128GB as "unlimited".
An unlimited MAXSIZE is not enough on its own for this operation; your resumable timeout must also be long enough. The value is set in seconds; if you want it to be unlimited:
alter system set resumable_timeout=0;
I'll need to migrate our main database to a new server and storage subsystem in a couple of weeks. Oracle 11 is currently running on Windows, and we will install a brand new SuSE for it. There will be no other major changes. Memory will be the same, and the server is just a bit newer.
My main concern is the time it will take to create the indexes. Our last experience recreating some indexes took a very long time, and since then I have been researching how to optimize it.
The current server has 128 GB of memory and we're using Oracle ASSM (51 GB for the SGA and 44 GB for the PGA), and Spotlight on Oracle reports no physical read activity on the datafiles. Everything is cached in memory, and performance is great. Spotlight also reports that PGA consumption is only 500 MB.
I know my biggest table has 100 million rows and occupies 15 GB. Its indexes, however, occupy 53 GB. When I recreate one of these, I can see a lot of write activity in the TEMP tablespace.
So the question is: how can I use all available memory in order to minimize TEMP activity?
I'm not really sure if this is relevant, but we see an average of 300-350 user connections, and I raised the initialization parameters to allow a maximum of 700 sessions.
Best regards,
You should consider setting WORKAREA_SIZE_POLICY to MANUAL for the session that will be doing the index rebuilds, and then setting SORT_AREA_SIZE to a sufficiently large number. (Max is O/S dependent, but 2GB would be a good starting point.)
Also, though you didn't make any mention of it, you should consider using NOLOGGING for the rebuilds to improve performance.
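A minimal sketch of both suggestions, run in the rebuild session (the index name is a placeholder, and 2147483647 is just the ~2 GB starting point mentioned above):

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;  -- roughly 2 GB for each sort work area

alter index big_index_01 rebuild nologging;

alter index big_index_01 logging;  -- switch logging back on once the rebuild is done

Keep in mind that a NOLOGGING rebuild is not recoverable from the redo stream, so take a backup afterwards.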
Hope that helps.
In my application we upload huge amounts of data. If any data is uploaded wrongly, we delete it. When is the space from the deleted data reclaimed? Is there any impact on performance? Is there any way to reclaim the space after a large delete in Oracle, and how do I reclaim it?
In Oracle, deleting data does not automatically reclaim disk space. The database will retain its storage until you do something administrative to the tablespace. The expectation of this behavior is that if you needed the storage at one time, you will likely need it again and it would therefore be more efficient to simply keep the allocation.
As far as performance impact, less data to process will generally make queries go faster. :)
To reclaim the space, you have a few choices. This article from ORACLE-BASE has a pretty comprehensive look at this situation.
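For a quick flavour of one option covered there, an online shrink looks roughly like this (the table name is a placeholder, and the segment must live in a tablespace using automatic segment space management):

alter table big_table enable row movement;
alter table big_table shrink space cascade;  -- compacts the rows and lowers the high water mark; CASCADE shrinks the indexes too
alter table big_table disable row movement;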
Also, why would you insert data, then determine it is "bad" to then delete it? Wouldn't you be better off avoiding putting the data in from the beginning?
You need to research the High Water Mark (HWM).
Basically, it is the maximum number of blocks that has ever been used by a specific table. If you load a large volume of data then you may well raise the HWM; deleting those records afterwards does not lower the HWM again.
Here is a great article on how to adjust the HWM; once you understand it, if you think it may be affecting your environment, then use the tips it includes to reduce your HWM.
Hope it helps...
There is no way to reclaim the space automatically after a delete. But if, after the delete, the remaining data is relatively small, you can create a new table with that data, drop the old table, and rename the new table to the old name.
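A rough sketch of that approach (the names are made up, and any indexes, constraints, triggers and grants have to be recreated on the new table):

create table my_table_keep as
  select * from my_table;     -- run after the unwanted rows have been deleted

drop table my_table purge;    -- PURGE bypasses the recycle bin so the space is released immediately

rename my_table_keep to my_table;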
I know it is an old question, but for future reference:
DELETE empties the table, but the space stays allocated ("locked") for future inserts.
TRUNCATE TABLE does the trick (you need to disable all foreign keys referencing the table for it to work).
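A minimal sketch, assuming a single child table references it (the table and constraint names are made up):

alter table child_table disable constraint fk_child_parent;

truncate table parent_table;  -- deallocates the space back to the tablespace (DROP STORAGE is the default)

alter table child_table enable constraint fk_child_parent;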
I am working on a data warehouse system which was upgraded about a year ago to Oracle 10g (now 10.2.0.5).
The database is set up with workarea_size_policy=auto and pga_aggregate_target=1G. Most of the ETL process is written in PL/SQL and this code generally sets workarea_size_policy=manual and sets the SORT_AREA_SIZE and HASH_AREA_SIZE for particular sessions when building specific parts of the warehouse.
The values chosen for the SORT_AREA_SIZE and HASH_AREA_SIZE are different for different parts of the build. These sizes are probably based on the expected amount of data that will be processed in each area.
The problem I am having is that this code is starting to cause a number of ORA-600 errors to occur. It is making me wonder if we should even be overriding the automatic settings at all.
The code that sets the manual settings was written many years ago by a developer who is no longer here. It was probably originally written for Oracle 8 with an amendment for Oracle 9 to set the workarea_size_policy to manual. No one really knows how the values used for HASH_AREA_SIZE and SORT_AREA_SIZE were found. They could be completely inappropriate for all I know.
After that long preamble, I've got a few questions.
How do I know when (if ever) I should be overriding the automatic settings with workarea_size_policy=manual?
How do I find appropriate values for HASH_AREA_SIZE, SORT_AREA_SIZE, etc?
How do I benchmark that particular settings are actually providing any sort of benefit?
I'm aware that this is a pretty broad question but help would be appreciated.
I suggest you comment out the manual settings and do a test run only with automatic (dynamic) settings, like PGA_AGGREGATE_TARGET.
Management of Sort and Hash memory areas has improved a lot since Oracle 8!
It's hard to predetermine the memory requirements of your procedures, so the best is to test them with representative volumes of data and see how it goes.
You can then create an AWR report covering the timeframe of the execution of the procedures. There's a section in the report named PGA Memory Advisory. That will tell you if you need more memory assigned to PGA_AGGREGATE_TARGET, based on your current data volumes.
See sample here:
In this case you can clearly see that there's no need to go over the current 103 MB assigned, and you could actually stay at 52 MB without impacting the application.
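If you don't want to wait for an AWR snapshot, broadly similar figures are available live from V$PGA_TARGET_ADVICE, for example:

select round(pga_target_for_estimate/1024/1024) as target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from v$pga_target_advice
order by pga_target_factor;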
Depending on the volumes we're talking about, if you can't assign more memory, some sort or hash operations might spill to the TEMPORARY tablespace, so make sure you have a properly sized one, possibly spread across as many disks / volumes as possible (see SAME configuration, also here).
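If it does spill, adding temp space is straightforward; a sketch, with an illustrative path and sizes:

alter tablespace temp
  add tempfile '/u01/oradata/DWH/temp02.dbf' size 8G
  autoextend on next 1G maxsize 32G;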
The undo tablespace size is 30 GB even when there is no activity going on in the database.
As the documentation says, we are quite limited when it comes to UNDO tablespaces: there is no syntax for shrinking an UNDO tablespace, even in 11g. Without intervention, the UNDO tablespace has to be sized to fit our largest transaction. This means that if we have a huge batch process which runs once a year, then the UNDO tablespace ought to be large enough for it.
Why don't Oracle provide tools for shrinking the UNDO tablespace? Because if we have had transactions which stretched it to 30 GB once, we are likely to have that load again. Freeing up the disk space won't help us, because the UNDO tablespace is going to try to reclaim it. If we have used that space for some other purpose, then our huge annual transaction will fall over.
Now, if you think you have had some abnormal data processing which has distorted your tablespace, and you are convinced you're never going to need that much UNDO ever again, and you really need the disk space, then you can use the ALTER DATABASE syntax to shrink the individual data files.
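That last step is a per-file operation, roughly as follows (the tablespace name, file path and target size are placeholders; Oracle will raise ORA-03297 if allocated undo extents still sit above the requested size):

select file_name, round(bytes/1024/1024) as mb
from dba_data_files
where tablespace_name = 'UNDOTBS1';

alter database datafile '/u01/oradata/ORCL/undotbs01.dbf' resize 2G;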