Oracle Tablespaces maxsize "unlimited" not really unlimited

I recently needed to import a .dmp into a new user I created. I also created a new tablespace for the user with the following command:
create tablespace my_tablespace
datafile 'C:\My\Oracle\Install\DataFile01.dbf' size 10M
autoextend on
next 512K
maxsize unlimited;
While the import was running I got an error:
ORA-01652 Unable to extend my_tablespace segment by in tablespace
When I examined the data files in the DBA_DATA_FILES view, I observed that the maximum size was around 34GB. Because I knew the general size of the database, I was able to import the .dmp without any issues after adding multiple datafiles to the tablespace.
Why did I need to add multiple datafiles to the tablespace when the first one I added was set to automatically grow to an unlimited size? Why was the maximum size 34gb and not unlimited? Is there a hard cap of 34gb?

As you've discovered, and as Alex Poole pointed out, there are limits to an individual data file size. Smallfiles are limited to 128GB and bigfiles are limited to 128TB, depending on your block size. (But you do not want to change your block size just to increase those limits.) The size limit in the create tablespace command is only there if you want to further limit the size.
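The cap you hit matches the smallfile limit: a smallfile datafile can contain at most 4,194,303 blocks, which with the default 8K block size works out to roughly 34 billion bytes - the ~34GB you saw in DBA_DATA_FILES. A minimal sketch, assuming an 8K block size and a hypothetical file path, of checking the block size and of using a bigfile tablespace instead when you'd rather manage a single large file:
-- confirm the database block size (8192 assumed here)
select value from v$parameter where name = 'db_block_size';
-- a bigfile tablespace has exactly one datafile, but with a far higher per-file cap
create bigfile tablespace my_big_tablespace
datafile 'C:\My\Oracle\Install\BigDataFile01.dbf' size 10M
autoextend on next 512K maxsize unlimited;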
This can be a bit confusing. You probably don't care about managing files and want it to "just work". Managing database storage is always gonna be annoying, but here are some things you can do:
Keep your tablespaces to a minimum. There are some rare cases where it's helpful to partition data into lots of small tablespaces, but those rare benefits are usually outweighed by the pain you will experience managing all those objects.
Get in the habit of always adding more than one data file. If you're using ASM (which I wouldn't recommend if this is a local instance), there is almost no reason not to go "crazy" when adding datafiles. Even if you're not using ASM you should still go a little crazy. As long as you set the original size low, you're not close to the DB_FILES limit, and you're not dealing with one of the special tablespaces like UNDO and TEMP, there is no penalty for adding more files. Don't worry too much about allocating more potential space than your hard drive contains. This drives some DBAs crazy, but you have to weigh the chance of running out of OS space against the chance of running out of space in a hundred files. (In either case, your application will crash.)
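For the import above, something like this up front would have avoided running out of space; a minimal sketch reusing the hypothetical file paths from the question:
alter tablespace my_tablespace
add datafile 'C:\My\Oracle\Install\DataFile02.dbf' size 10M
autoextend on next 512K maxsize unlimited;
alter tablespace my_tablespace
add datafile 'C:\My\Oracle\Install\DataFile03.dbf' size 10M
autoextend on next 512K maxsize unlimited;
-- each file is still subject to the per-file cap, but together they give the tablespace far more headroom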
Set the RESUMABLE_TIMEOUT parameter. Then SQL statements will be suspended, may generate an alert, will be listed in DBA_RESUMABLE, and will wait patiently for more space. This is very useful in data warehouses.
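A minimal sketch of enabling this, assuming a one-hour timeout is acceptable (the value is in seconds):
alter system set resumable_timeout = 3600;
-- statements that are currently suspended and waiting for space show up here:
select session_id, status, name, error_msg from dba_resumable;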
Why is it called "UNLIMITED"?
I would guess the keyword UNLIMITED is a historical mistake. Oracle has had the same file size limitation since at least version 7, and perhaps earlier. Oracle 7 was released in 1992, when a 1GB hard drive cost $1995. Maybe every operating system at the time had a file size limitation lower than that. Perhaps it was reasonable back then to think of 128GB as "unlimited".

An unlimited MAXSIZE alone is not enough for this operation; your RESUMABLE_TIMEOUT must also be high enough. Note that the value is specified in seconds, not milliseconds, and that setting it to 0 actually disables resumable space allocation rather than making the wait unlimited, so use a generously large value instead, for example:
alter system set resumable_timeout=86400;  -- 24 hours

Related

Is it good to use default tablespace for high volume tables?

In our application (Oracle based) we handle a high volume of data. For a few major tables we use separate tablespaces, but the remaining tables use the default tablespace.
My questions are:
a. Is it good (in terms of performance) to have a separate tablespace for every table where the number of records is more than a million?
b. Or should we define a separate tablespace, instead of the default tablespace, for the remaining tables?
c. Does it affect performance if the default tablespace is used for the high-volume tables?
Any suggestion would be appreciated.
Usually nowadays you don't have dedicated disks for your tablespaces; data is stored in a storage area network (SAN) and you don't have any fixed relation to a physical file system. Thus the distribution of tablespaces is less sensitive or critical than in earlier days, as long as you don't have very special or very big data.
For example, I have an application where I receive about 1 billion records every day, i.e. approximately 150GB. There I use one tablespace for each daily partition (i.e. 150 cycling tablespaces). The main reason is easier maintenance, for example truncating old data.
SANs seem to have killed this debate; however, when you consider things like locally managed tablespaces, there is a trend to use these to get tables to inherit storage properties. For example, it is very common these days to see tablespaces like LM_SMALL_TABLE, LM_MEDIUM_TABLE, LM_LARGE_TABLE (and similar for indexes), or LM_16k_TABLE, LM_1M_TABLE, LM_10M_TABLE, LM_100M_TABLE and similar for indexes. These will have initial and next extents set to 16k, 1m, etc. Tables are then placed in the tablespace that is appropriate to the expected volume. You sometimes see databases where archived/read-only data is moved to cheaper disk by moving the table/partition to such tablespaces. The only time I've seen one tablespace per table was on 8i, where the client wanted this so that a particular table could be restored from a backup by restoring just the tablespace/datafiles.
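A minimal sketch of what such size-banded, locally managed tablespaces might look like; the names, file paths and sizes here are illustrative, not a standard:
create tablespace lm_small_table
datafile 'C:\My\Oracle\Install\LMSmallTable01.dbf' size 100M autoextend on
extent management local uniform size 64K;
create tablespace lm_large_table
datafile 'C:\My\Oracle\Install\LMLargeTable01.dbf' size 1G autoextend on
extent management local uniform size 10M;
-- a small lookup table inherits the small uniform extents:
create table country_code (code varchar2(3), descr varchar2(100)) tablespace lm_small_table;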

How to use currently available memory to minimize CREATE INDEX time?

I'll need to migrate our main database to a new server and storage subsystem in a couple of weeks. Oracle 11 is currently running on Windows, and we will install a brand new SuSE for it. There will be no other major changes. Memory will be the same, and the server is just a bit newer.
My main concern is the time it will take to create the indexes. Our last experience recreating some indexes took a very long time, and since then I have been researching how to optimize it.
The current server has 128GB of memory and we're using Oracle ASSM (51GB for SGA and 44GB for PGA), and Spotlight On Oracle reports no physical read activity on the datafiles. Everything is cached in memory, and the performance is great. Spotlight also reports that PGA consumption is only 500 MB.
I know my biggest table has 100 million rows and occupies 15GB. Its indexes, however, occupy 53GB. When I recreate one of these, I can see a lot of write activity in the TEMP tablespace.
So the question is: how can I use all available memory in order to minimize TEMP activity?
I'm not sure if this is relevant, but we see an average of 300-350 user connections, and I have raised the initialization parameter for max sessions to 700.
Best regards,
You should consider setting WORKAREA_SIZE_POLICY to MANUAL for the session that will be doing the index rebuilds, and then setting SORT_AREA_SIZE to a sufficiently large number. (Max is O/S dependent, but 2GB would be a good starting point.)
Also, though you didn't make any mention of it, you should also consider using NOLOGGING to improve performance.
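A minimal sketch of that session-level setup, using a hypothetical index name (BIG_TABLE_IX) for the rebuild:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2000000000;  -- roughly 2GB; the real ceiling is O/S dependent
alter index big_table_ix rebuild nologging;
-- NOLOGGING changes are not in the redo stream, so take a fresh backup afterwards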
Hope that helps.

How to reduce undo tablespace size in oracle?

The undo tablespace size is 30 GB even though there is no activity going on in the database.
As the documentation says we are quite limited when it comes to UNDO tablespaces: there is no syntax for shrinking an UNDO tablespace, even in 11g. Without intervention the UNDO tablespace should be sized to fit our largest transaction. This means if we have a huuuge batch process which runs once a year then the UNDO tablespace ought to be large enough for it.
Why don't Oracle provide tools for shrinking the UNDO tablespace? Because if we have had the transactions to stretch it to 30GB once we are likely to have that load again. Freeing up the disk space won't help us, because the UNDO tablespace is going to try to reclaim it. If we have used that space for some other purpose then our huge annual transaction will fall over.
Now, if you think you have had some abnormal data processing which has distorted your tablespace, and you are convinced you're never going to need that much UNDO ever again, and you really need the disk space, then you can use the ALTER DATABASE syntax to shrink the individual data files.
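A minimal sketch of that last resort, assuming the default UNDOTBS1 tablespace name and a hypothetical file path; note that the resize will only succeed down to the highest currently allocated block in the file:
-- find the undo datafiles and their current sizes
select file_name, bytes/1024/1024 as size_mb from dba_data_files where tablespace_name = 'UNDOTBS1';
-- shrink an individual file (path and target size are illustrative)
alter database datafile 'C:\My\Oracle\Install\Undotbs01.dbf' resize 2G;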

Is there any logical reason of having different tablespace for indexes?

Hi, can someone let me know why we create different tablespaces for indexes and data?
It is a widespread belief that keeping indexes and tables in separate tablespaces improves performance. This is now considered a myth by many respectable experts (see this Ask Tom thread - search for "myth"), but is still a common practice because old habits die hard!
Third party edit
Extract from Ask Tom: "Index Tablespace", from 2001, for Oracle version 8.1.6. The question:
Is it still a good idea to keep indexes in their own tablespace?
Does this enhance performance or is it more of a recovery issue?
Does the answer differ from one platform to another?
First part of the Reply
Yes, no, maybe.
The idea, born in the 1980s when systems were tiny and user counts were in the single digits, was that you separated indexes from data into separate tablespaces on different disks.
In that fashion, you positioned the head of the disk in the index tablespace and the head of the disk in the data tablespace, and that would be better than seeking twice on the same disk.
Drives back then were really slow at seeking and typically measured in the 10's to 100's of megabytes (if you were lucky).
Today, with logical volumes, raid, NN gigabyte (nn is rapidly becoming NNN gigabytes) drives, hundreds/thousands of concurrent users, thousands of tables, 10's of thousands of indexes - this sort of "optimization" is sort of impossible.
What you strive for today is to be able to manage things, to spread IO out evenly, avoiding hot spots.
Since I believe all things should be in locally managed tablespaces with UNIFORM extent sizes, I would say that yes, indexes would be in a different tablespace from the data, but only because they are a different SIZE than the data. My table with 50 columns and an average row size of 4k might belong in a tablespace that has 5meg extents, whereas the index on a single number column might belong in a tablespace with 512k or 1m extents.
I tend to keep my indexes separate from the data, but for the above sizing reason. The tablespaces frequently end up on the same exact mount points. You strive for even IO across your disks, and you may end up with indexes and data on the same devices.
It made sense in the 80s, when there were not too many users and database sizes were not too big. At that time it was useful to store indexes and tables on different physical volumes.
Now there are logical volumes, RAID and so on, and it is not necessary to store the indexes and tables in different tablespaces.
But all tablespaces should be locally managed with uniform extent sizes. From this point of view the indexes may still end up in a different tablespace: a table with 50 columns could be stored in a tablespace with a 5MB extent size, while a tablespace with a 512KB extent size would be enough for its indexes.
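A minimal sketch of that separation by extent size; the tablespace names, file paths and sizes are illustrative only:
create tablespace wide_table_data
datafile 'C:\My\Oracle\Install\WideTableData01.dbf' size 1G autoextend on
extent management local uniform size 5M;
create tablespace narrow_index_data
datafile 'C:\My\Oracle\Install\NarrowIndexData01.dbf' size 200M autoextend on
extent management local uniform size 512K;
-- the table and its index live in tablespaces sized to match their extents
create table orders (order_id number, payload varchar2(4000)) tablespace wide_table_data;
create index orders_id_ix on orders (order_id) tablespace narrow_index_data;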
Performance. It should be analyzed case by case. I think that keeping everything together in one tablespace is becoming another myth too! There should be enough spindles, enough LUNs, and you have to take care of queuing in the operating system. If someone thinks that one tablespace is enough and is the same as many tablespaces, without taking all the other factors into consideration, that is again another myth. It depends!
High availability. Using separate tablespaces can improve the availability of the system in case of file corruption, file system corruption or block corruption. If the problem occurs only in the index tablespace, there is a chance to do the recovery online while our application is still available to the customers. See also: http://richardfoote.wordpress.com/2008/05/02/indexes-in-their-own-tablespace-recoverability-advantages-get-back/
Manageability and costs. Using separate tablespaces for indexes, data, BLOBs, CLOBs, and possibly some individual tables, can be important for manageability and costs. We can use our storage system to keep BLOBs and CLOBs, or archived data, on a different tier of storage with a different quality of service.
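A minimal sketch of that kind of tiering, assuming a hypothetical ARCHIVE_TBS tablespace already created on the cheaper storage and hypothetical object names:
-- move an old, read-only table onto the cheaper tier and fix up its index
alter table orders_2015 move tablespace archive_tbs;
-- moving the table leaves its indexes unusable, hence the rebuild
alter index orders_2015_ix rebuild tablespace archive_tbs;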

Oracle datafile autoextend question

If I have 2 datafiles attached to a tablespace, and BOTH are set to AUTOEXTEND and BOTH are set to unlimited, will Oracle know to extend both datafiles, or only extend one of them. I have read through many manuals, but none of them answer this question. As to why this is set like this, well, it's an inherited system that I am starting to tune.
It depends on how much disk space you have (among other things). "Unlimited" in this context means "to the largest supported data file size on your platform", not "grab up all the disk space even if you can't use it." If you have enough disk space for two maximum-sized datafiles, you should be fine.
Also (possibly dumb question) since this is an existing application, why can't you just look and see what it did?
-- MarkusQ
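One way to "just look and see" is to query DBA_DATA_FILES before and after a big load; a minimal sketch, assuming the tablespace in question is called MY_TABLESPACE:
select file_name, autoextensible, bytes/1024/1024 as current_mb, maxbytes/1024/1024 as max_mb
from dba_data_files
where tablespace_name = 'MY_TABLESPACE'
order by file_name;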
If you have 2 datafiles in the same tablespace, Oracle will only fill/extend the 2nd when the first one is filled to its maxsize.
If you have unlimited on the first, there will be no reason (at least that I can think of right now) for Oracle itself to fill/use/extend the 2nd datafile.
Even if you have run out of space on the first datafile's filesystem/ASM/whatever, I'm quite sure Oracle will still try to extend the 1st datafile (at least that's what I think I saw just a few hours ago :P).
Please anyone, correct me if I'm wrong.
