Is there any logical reason for having a different tablespace for indexes? - oracle

Hi, can someone let me know why we create a separate tablespace for indexes and data?

It is a widespread belief that keeping indexes and tables in separate tablespaces improves performance. This is now considered a myth by many respectable experts (see this Ask Tom thread - search for "myth"), but is still a common practice because old habits die hard!
Third party edit
Extract from Ask Tom: "Index Tablespace", from 2001, for Oracle version 8.1.6. The question:
Is it still a good idea to keep indexes in their own tablespace?
Does this enhance performance, or is it more of a recovery issue?
Does the answer differ from one platform to another?
First part of the reply:
Yes, no, maybe.
The idea, born in the 1980s when systems were tiny and user counts were in the single
digits, was that you separated indexes from data into separate tablespaces on different
disks.
In that fashion, you positioned the head of one disk in the index tablespace and the head
of the other disk in the data tablespace, and that would be better than seeking twice on
the same disk.
Drives back then were really slow at seeking and typically measured in the 10's to 100's
of megabytes (if you were lucky)
Today, with logical volumes, raid, NN gigabyte (nn is rapidly becoming NNN gigabytes)
drives, hundreds/thousands of concurrent users, thousands of tables, 10's of thousands of
indexes - this sort of "optimization" is sort of impossible.
What you strive for today is to be able to manage things, to spread IO out evenly
avoiding hot spots.
Since I believe all things should be in locally managed tablespaces with UNIFORM extent
sizes, I would say that yes, indexes would be in a different tablespace from the data but
only because they are a different SIZE than the data. My table with 50 columns and an
average row size of 4k might belong in a tablespace that has 5meg extents whereas the
index on a single number column might belong in a tablespace with 512k or 1m extents.
I tend to keep my indexes separate from the data but for the above sizing reason. The
tablespaces frequently end up on the same exact mount points. You strive for even io
across your disks and you may end up with indexes and data on the same devices.
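The sizing argument above can be sketched with locally managed tablespaces using UNIFORM extents. Tablespace names, file paths, and the example table are illustrative, not from the thread:

```sql
-- Hypothetical example: two locally managed tablespaces with UNIFORM
-- extents, sized differently for wide table rows vs. narrow index entries.
CREATE TABLESPACE data_5m
  DATAFILE '/u01/oradata/db/data_5m_01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;

CREATE TABLESPACE idx_512k
  DATAFILE '/u01/oradata/db/idx_512k_01.dbf' SIZE 256M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;

-- The wide table goes in the large-extent tablespace...
CREATE TABLE orders (order_id NUMBER, details VARCHAR2(4000))
  TABLESPACE data_5m;

-- ...and its single-column index in the small-extent one.
CREATE INDEX orders_ix ON orders (order_id) TABLESPACE idx_512k;
```

The separation here is driven purely by extent sizing, not by any expectation of faster I/O; both files may well sit on the same mount point.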

This made sense in the 80s, when there were not many users and databases were not very big. At that time it was useful to store indexes and tables on different physical volumes.
Now there are logical volumes, RAID and so on, and it is no longer necessary to store indexes and tables in different tablespaces.
But all tablespaces should be locally managed with uniform extent sizes. From this point of view, indexes may still belong in a different tablespace: a table with 50 columns might be stored in a tablespace with a 5 MB uniform extent size, while a 512 KB extent size would be enough for the tablespace holding its indexes.

Performance: it should be analyzed case by case. I think that keeping everything together in one tablespace is becoming another myth too! There should be enough spindles, enough LUNs, and attention to queuing in the operating system. Assuming that a single tablespace performs the same as many tablespaces, without considering all the other factors, is just another myth. It depends!
High availability: using separate tablespaces can improve the availability of the system in case of file corruption, file system corruption, or block corruption. If the problem occurs only in the index tablespace, there is a chance to do the recovery online while the application remains available to the customers. See also: http://richardfoote.wordpress.com/2008/05/02/indexes-in-their-own-tablespace-recoverability-advantages-get-back/
Using separate tablespaces for indexes, data, BLOBs, CLOBs, and possibly some individual tables can also matter for manageability and cost. We can use the storage system to keep BLOBs, CLOBs, or archive data on a different tier of storage with a different quality of service.
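The online-recovery scenario can be sketched like this, assuming the corrupted datafile belongs only to an index tablespace (the file number is hypothetical, and the restore step itself would typically be done with RMAN):

```sql
-- Hypothetical sketch: datafile 7 holds only the index tablespace.
-- Take just that file offline; the tables themselves stay readable,
-- so the application remains available (at reduced performance).
ALTER DATABASE DATAFILE 7 OFFLINE;

-- Restore the file from backup (e.g. via RMAN), then apply redo and
-- bring it back online while the rest of the database stays open.
RECOVER DATAFILE 7;
ALTER DATABASE DATAFILE 7 ONLINE;
```

If the same corruption hit a datafile containing table data, the affected tables would be unreadable until recovery completed, which is the availability advantage being claimed.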

Related

Will Shrinking and lowering the high water mark cause issues in OLTP systems

Newb here. We have an old Oracle 10g instance that we have to keep alive until it is replaced. The nightly jobs have been very slow, causing some issues. Every other week there is a large process that does large amounts of DML (deletes, inserts, updates). Some of these tables have 2+ million rows. I noticed that on some of the tables the HWM is higher than expected, and in Toad I ran a database advisor check that recommended shrinking some tables. But I am concerned that the tables may need the space for DML operations; will shrinking them make the process faster or slower?
We cannot add CPU due to licensing costs.
If you are accessing the tables with full scans and have a lot of empty space below the HWM, then yes, definitely reorg those (alter table move). There is no downside, only benefit. But if your slow jobs are using indexes, then the benefit will be minimal.
Don't assume that your slow jobs are due to space fragmentation. Use ASH (v$active_session_history) and SQL monitor (v$sql_plan_monitor) data or a graphical tool that utilizes this data to explore exactly what your queries are doing. Understand how to read execution plans and determine whether the correct plan is being used for your data. Tuning is unfortunately not a simple thing that can be addressed with a question on this forum.
In general, shrinking tables or rebuilding indexes should speed up reads of the table, or anything that does full table scans. It should not affect other DML operations.
When selecting or searching data, all of the empty blocks in the table and any indexes used by the query must still be read, so rebuilding them to reduce empty space and lower the high water mark will generally improve performance. This is especially true in indexes, where space lost to deleted rows is not recovered for reuse.
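A minimal sketch of the reorg options mentioned above. Note that `ALTER TABLE ... MOVE` leaves dependent indexes in an UNUSABLE state, so each one needs a rebuild afterwards; object names here are hypothetical:

```sql
-- Option 1: rebuild the table segment, lowering the HWM.
ALTER TABLE big_table MOVE;

-- MOVE invalidates the table's indexes; rebuild each one.
ALTER INDEX big_table_pk REBUILD;

-- Option 2 (10g+): shrink in place. Keeps indexes usable and the
-- table online, at the cost of row-by-row relocation, but requires
-- row movement to be enabled first.
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE;
```

Either way, confirm first (e.g. with ASH data, as suggested above) that full scans over empty blocks are actually where the time is going.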

Is it good to use default tablespace for high volume tables?

In our application (Oracle based) we handle a high volume of data. For a few major tables we use a separate tablespace, but for the remaining tables the default tablespace is being used.
My query is,
a. Is it good (in terms of performance) to have a separate tablespace for every table where the number of records is more than a million?
b. Or can we define a separate tablespace, instead of the default tablespace, for the remaining tables?
c. Does it affect performance if the default tablespace is used for the high-volume tables?
Any suggestion would be appreciated.
Usually nowadays you don't have a dedicated disk for your tablespaces; data is stored in a storage network (SAN) and you don't have any fixed relation to a physical file system. Thus the distribution of tablespaces is less sensitive or critical than in earlier days - as long as you don't have very special or very big data.
For example, I have an application where I receive about 1 billion records every day, i.e. approx. 150 GB. There I use one tablespace per daily partition (i.e. 150 cycling tablespaces). The main reason is easier maintenance, for example truncating old data.
SANs seem to have killed this debate. However, when you consider things like locally managed tablespaces, there is a trend to use them to have tables inherit storage properties. For example, it is very common these days to see tablespaces like LM_SMALL_TABLE, LM_MEDIUM_TABLE, LM_LARGE_TABLE (and similar for indexes), or LM_16K_TABLE, LM_1M_TABLE, LM_10M_TABLE, LM_100M_TABLE and similar for indexes. These will have initial and next extents set to 16k, 1m, etc. Tables are then placed in the tablespace that is appropriate to their expected volume. You sometimes see databases where archived/read-only data is moved to cheaper disk by moving the table/partition to such tablespaces. The only time I've seen one tablespace per table was on 8i, where the client wanted this so that a particular table could be restored from a backup by restoring just that tablespace/datafile.
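The size-tiered convention described above might look like the following sketch (tablespace names follow the pattern in the answer; file paths, sizes, and the example table are illustrative):

```sql
-- Small-extent tier for lookup-style tables.
CREATE TABLESPACE lm_16k_table
  DATAFILE '/u01/oradata/db/lm_16k_01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16K;

-- Larger tier for high-volume tables.
CREATE TABLESPACE lm_1m_table
  DATAFILE '/u01/oradata/db/lm_1m_01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Each table is placed according to its expected volume:
CREATE TABLE lookup_codes (code VARCHAR2(10), descr VARCHAR2(100))
  TABLESPACE lm_16k_table;
```

The benefit is administrative: every segment in a tier has identically sized extents, so space is never fragmented into unusable holes.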

How expensive is a query in terms of TEMP tablespace?

I have a few sprocs that execute some number of more complex queries and liberally use collections.
My DBA is complaining that they occasionally consume a S#$%ton of in-memory TEMP tablespace.
I can perform optimizations on the queries, but I also wish to be as noninvasive as possible, and to do this I need to see the effects my changes have on the TEMP tablespace.
QUESTION:
How can I see what cost my query has on the TEMP tablespace?
One thing to consider is I don't have DBA access.
Thanks in advance.
Depends what you mean by the cost your query has on temp.
If you can select from v$tempseg_usage, you can see how much space you are consuming in temp - on a DEV database there is no reason your DBA cannot give you access to that view.
As gpeche mentioned, AUTOTRACE will give you a good idea of how many IOs you are doing from temp - combined with the space usage, that gives a good picture of what is going on.
Large collections are generally a bad idea - they consume a lot of memory in the PGA (which is very different from TEMP) which is shared by all the other sessions - this will be what your DBA is concerned about. How large is large depends on your system - low thousands of small records probably isn't too bad, but 100's of thousands or millions of records in a collection and I would be getting worried.
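If you are granted access to v$tempseg_usage, a query along these lines shows your own session's temp consumption. This is a sketch: it assumes an 8K block size and a 10g+ database (for SYS_CONTEXT('USERENV', 'SID')):

```sql
-- Temp space used by the current session, in MB (8K block size assumed).
SELECT tablespace,
       segtype,
       ROUND(blocks * 8192 / 1024 / 1024) AS used_mb
FROM   v$tempseg_usage
WHERE  session_addr = (SELECT saddr
                       FROM   v$session
                       WHERE  sid = SYS_CONTEXT('USERENV', 'SID'));
```

Run it from a second session while the procedure executes in the first, and you can watch the sort/hash segments grow and shrink.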
Before doing all kinds of interesting queries and tricks, estimate the data volume that should be sorted, after filtering. If this is larger than what fits in the sort area, the sort will move blocks from memory to temp and read them back later. Add a little overhead to the raw data size; use 30% overhead. This should give a reasonable estimation for the needed total sort size.
Use the same strategy for collections. There has to be room for the data somewhere; there is no magic/compression that makes your data volume smaller. If you have memory for 1,000 rows max and try to use it with 1,000,000 rows, it won't fit. In that case talk to your DBA and try to find a solution. It could be that you end up partitioning your workload.
Without having DBA access, I would try with AUTOTRACE. It will not give you TEMP tablespace consumption, but you can get a lot of useful information for tuning your queries (logical reads, number of sorts to disk, recursive SQL, redo consumption, network roundtrips). Note that you need some privileges granted to use AUTOTRACE, but not full DBA rights.
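In SQL*Plus, AUTOTRACE is enabled like this (it needs the PLUSTRACE role or select privileges on a few v$ views, which a DBA can grant without handing out full DBA rights; the query shown is just a placeholder):

```sql
-- Show the execution plan and statistics without printing the rows.
SET AUTOTRACE TRACEONLY

SELECT *                 -- your own query goes here
FROM   employees         -- hypothetical table
ORDER  BY last_name;

-- In the statistics section, "sorts (disk)" > 0 means the sort
-- spilled out of memory into the TEMP tablespace.
SET AUTOTRACE OFF
```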
While your query is running you can query v$sql_workarea_active, or after it has run you can query v$sql_workarea.
These will show you the temp tablespace usage in terms of memory used, disk space used, and (most importantly) the number of passes (space usage is only part of the issue -- multipass sorts are very expensive), and correlate the usage to steps in the explain plan.
You can then consider whether modifying memory management would help you reduce temp tablespace usage both in terms of absolute space used and in the pass count.
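A sketch of those two dictionary queries (column selection is illustrative; `:sql_id` is a bind variable you would supply):

```sql
-- While the statement is running: active work areas and temp usage.
SELECT sql_id,
       operation_type,
       ROUND(actual_mem_used / 1024 / 1024) AS mem_mb,
       ROUND(tempseg_size / 1024 / 1024)    AS temp_mb,
       number_passes
FROM   v$sql_workarea_active;

-- After it has finished: how each work area executed, per plan step.
-- Multipass executions are the expensive ones to hunt down.
SELECT sql_id,
       operation_type,
       optimal_executions,
       onepass_executions,
       multipasses_executions
FROM   v$sql_workarea
WHERE  sql_id = :sql_id;
```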

Tablespaces in Oracle

Does adding tablespaces to a database decrease the performance of an Oracle 9 database?
We consider that the number of request remains constant.
Thx,
Eric
No, tablespaces allow you to logically separate objects, and adding more makes no difference - you have the same amount of data stored on the same amount of disk.
How tables and indexes are organised across tablespaces can make a difference; particularly having indexes in different tablespaces to their tables, though that's really to do with the underlying data files and how those are organised on disk. (I think there's some debate about whether even that makes much difference any more, as disk technology has improved, and with the widespread use of SAN/NAS.)

SQL Server 2008 large table performance

I have this relatively large table in a separate filegroup (2 GB, well, it's not THAT large but large enough I think to start thinking about performance as it's a heavy duty table).
This is the only table in this filegroup.
Right now the filegroup contains only one datafile.
Assuming the table is well-indexed and that index fragmentation is almost zero, would it increase performance (for select and insert statements) if I split the filegroup into two datafiles, BUT having those two datafiles reside on the same physical disk (as I don't have an array of disks at my disposal) ?
Or is a split into multiple files only an improvement when you can split those files over separate physical disks ?
Thanks for any replies.
ps: must add that we're using standard edition so table partitioning is a no-go
Mathieu
You really need to have separate spindles/LUNs if you're going to split index/data.
For busting the "one thread per file" myth, read these from Paul Randal.
For the situation you have described, I doubt you could measure the difference accurately, since it would be insignificant. You would need a high-end database with specific heavy workloads to entertain the thought that you are suffering SGAM/GAM contention.
GBN is right in indicating that you need it on separate spindles to see a suitable difference.
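For reference, the split described in the question would be done like this in T-SQL (database, filegroup, logical file, and path names are all hypothetical):

```sql
-- Add a second data file to the existing filegroup. SQL Server uses
-- proportional fill across the files in a filegroup, so the table's
-- pages spread over both files as they grow - but on a single
-- physical disk this rarely yields a measurable gain.
ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Big2,
    FILENAME = 'D:\Data\MyDb_Big2.ndf',
    SIZE = 1GB,
    FILEGROWTH = 256MB
) TO FILEGROUP BigTableFG;
```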
