Resizing Oracle Datafiles

I have a tablespace with 3 datafiles (autoextend enabled). Currently Datafile_1 and Datafile_2 are 32GB in size and Datafile_3 is 10GB.
I dropped one huge table and Datafile_2's usage dropped to 4GB. Using a query[1] over the dba_extents view I could see the HWM was still at 32GB. I was able to move/shrink all the objects at the end of the datafile. Running the same query again, I could see the HWM had dropped to about 4GB.
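For reference, the move/shrink was done with statements along these lines (the object names here are placeholders):
ALTER TABLE some_table MOVE;
-- indexes on a moved table become UNUSABLE and must be rebuilt
ALTER INDEX some_index REBUILD;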
Then I tried:
ALTER DATABASE DATAFILE '+DATA/myDatabase/datafile/datafile_2' resize 5G;
And got:
ORA-03297: file contains used data beyond requested RESIZE value
I tried increasing the requested size, up to 30GB, without success.
I did some research and found this article:
http://www.dbi-services.com/index.php/blog/entry/resize-your-oracle-datafiles-down-to-the-minimum-without-ora-03297
Here, instead of dba_extents, the author queries sys.x$ktfbue directly to view the metadata. His script showed that the Datafile_2 HWM was still at 32GB. For some reason, MAX(block_id) in sys.x$ktfbue differs from the one in dba_extents.
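The core of his script amounts to something like this (run as SYS, since X$ tables are only visible there; the column names are my reading of the linked article, so verify them before relying on this):
SELECT (MAX(ktfbuebno + ktfbueblks - 1) * 8192) / 1024 / 1024 "HWM (MB)"
FROM sys.x$ktfbue
WHERE ktfbuefno = 8;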
I then did some more research to find out how to map a block_id in x$ktfbue to a database object, and found another script for that. After running it I could see that all the blocks at the end of Datafile_2 were marked as "free space".
So now I am in a situation where I don't know what to do. Maybe having more than one datafile is the problem? Any tips?
[1] This is the query (8192 being the database block size):
SELECT (MAX((block_id + blocks-1)*8192))/1024/1024 "HWM (MB)" FROM dba_extents WHERE file_id=8;
*file_id 8 represents Datafile_2
EDIT: I tried one other thing. I created a new tablespace and moved all the objects from Datafile_3 to it. Via DBA_EXTENTS I can now see that Datafile_3 is completely empty. I also purged the recyclebin.
Then I tried to resize it, again without success (ORA-03297), and also tried to drop it, which got "ORA-03262: the file is not empty".
Finally I decided to sum all the free extents for Datafile_3 in DBA_FREE_SPACE and compare the total to the value in DBA_DATA_FILES. There is a difference of 1MB somewhere! I am going crazy. :)
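The comparison was essentially this (file_id 9 stands in for Datafile_3 here; the real id may differ):
SELECT (SELECT bytes FROM dba_data_files WHERE file_id = 9)
     - (SELECT NVL(SUM(bytes), 0) FROM dba_free_space WHERE file_id = 9)
       AS unaccounted_bytes
FROM dual;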

Related

Temp tablespace runs out of space and prompts ORA-01652 when a select is executed

I am facing an issue while executing a huge query, where the temp tablespace of the Oracle instance runs out of space. The query is at the following link.
https://dl.dropboxusercontent.com/u/96203352/Query/title_block.sql
The temp tablespace is 30 GB and, due to the client's concerns, I cannot extend it further. Therefore, I tried to reduce the sort operations, but it all went in vain. Is there any way to optimize this query or reduce its sort operations?
The PLAN table statistics are at the following link.
https://dl.dropboxusercontent.com/u/96203352/Query/PLAN_TABLE_INFO.txt
As the query and the explain plan are way too large to be posted in this question, I have to share them via links. Sorry for the inconvenience.
One more thing: I cannot remove DISTINCT from the select statement, as there is duplication in the data returned.
Please help.
The query plan says it all at the very top: all the temp space is being used by the DISTINCT operation. The reason that operation requires so much memory is that your query rows are so fat... around 10,000 bytes each!!!
I'm not sure why your query rows are so fat, but one thing I would try would be to change the query to do the DISTINCT operation first and then, in a later step, CAST the necessary columns to VARCHAR2(3999). Logically that shouldn't affect the results, but I've seen enough strange behaviour with CAST over the years that I wouldn't trust it enough not to at least try this.
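As a sketch of what I mean (table and column names are made up, since I haven't seen the full query):
SELECT CAST(d.col1 AS VARCHAR2(3999)) AS col1,
       CAST(d.col2 AS VARCHAR2(3999)) AS col2
FROM (SELECT DISTINCT t.col1, t.col2
        FROM some_big_table t) d;
This way the DISTINCT sorts the original, narrower columns, and the fat VARCHAR2 values only come into existence after the duplicates are gone.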

Getting this error when trying to create a user in IDM: ORA-01653: unable to extend table WAVESET.USERATTR by 512 in tablespace WAVESET_DATA

Please let me know what to do here to get the user created in IDM. I have already deleted some old files on the server to make some space available.
Your tablespace WAVESET_DATA is full and you do not have auto extend set on the datafiles.
Either add a new datafile or increase the size of the existing ones.
In a nutshell, you are out of space for Oracle but not necessarily out of disk space. Be extremely careful when deleting old files: anything in the app or Oracle folders should be left alone until you know what it is.
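Hypothetical statements for both options (adjust the paths and sizes to your environment):
-- add a new datafile to the tablespace
ALTER TABLESPACE WAVESET_DATA ADD DATAFILE '/u01/oradata/waveset_data02.dbf' SIZE 1G;
-- or grow an existing one, optionally with autoextend
ALTER DATABASE DATAFILE '/u01/oradata/waveset_data01.dbf' RESIZE 2G;
ALTER DATABASE DATAFILE '/u01/oradata/waveset_data01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE 4G;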

How to know the usage of temporary tablespace in Oracle

I am trying to compare two tables which are very large in my system (Oracle 10g).
The way I compare them is the "MINUS" operation.
Because of the large size of the tables, I want to know the usage of the temporary tablespace in real time.
I googled some ways to get the usage of the temporary tablespace, but I am not sure which one is right. Here are the three ways:
1. SELECT TABLESPACE_NAME, BYTES_USED, BYTES_FREE FROM V$TEMP_SPACE_HEADER;
2. SELECT BYTES_USED, BYTES_CACHED FROM V$TEMP_EXTENT_POOL;
   What is the difference between BYTES_USED and BYTES_CACHED?
3. SELECT USED_EXTENTS, USED_BLOCKS FROM V$SORT_SEGMENT;
These three ways really confuse me, and I don't know what the difference is.
Look at the dynamic performance views v$sql_workarea and v$sql_workarea_active -- they will tell you not only how much space is being used by the query, but how much of it is attributable to different phases in the execution plan, what sort of work area it is (hash join etc.), and how it is being used (one-pass etc.). It'll be a much more effective method of performance tuning.
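For example, something along these lines shows the active work areas and the temp space each one is consuming (check the column list for your version):
SELECT sid, operation_type, policy,
       actual_mem_used, number_passes, tempseg_size
FROM v$sql_workarea_active
ORDER BY tempseg_size DESC NULLS LAST;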
The V$SORT_SEGMENT view can be used to get used/free extent and used/free block information for TEMPORARY tablespaces.
The V$TEMP_SPACE_HEADER and V$TEMP_EXTENT_POOL views provide almost the same used-bytes information. However, V$TEMP_EXTENT_POOL is the reliable one, because the former is updated only when the DB is restarted or the tablespace is recreated.
Note: From Oracle 11g, DBA_TEMP_FREE_SPACE view can be used to get TEMPORARY tablespace information.
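On 11g and later that can be as simple as (column names per the documented view):
SELECT tablespace_name, tablespace_size, allocated_space, free_space
FROM dba_temp_free_space;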

What would make a FoxPro memo table lose its records?

I have an old FoxPro database that I work with. The database is about 100MB in size and, due to corruption and an index issue, all of a sudden the new table (the table after the corruption) is about 4KB in size.
I understand that the data is corrupted, but why would the data disappear?
If any FoxPro experts could tell me why the data is missing, I would really appreciate it.
BTW: FoxPro is still very fast compared to a lot of the bells-and-whistles databases out there.
The last data truncation/error occurred after a power outage and the data is just gone. The file size decreased to 4KB.
Maybe CHR(0) in the corruption, though I wouldn't expect the file to shrink unless you also did something to rewrite the file. Maybe PACK?
A DBF file has a header followed by data. If the header is corrupted, it loses track of where the data is.
I have had instances in the past where Windows has mis-reported the physical size of a FoxPro table, reporting one file to be BIGGER than it actually was and another SMALLER than it actually was.
The data MAY actually still be there; the trick would be getting FoxPro to recognise that there are more records in the table than are recorded in the table header.
Questions:
Have you packed the table?
Have you tried one of the table recovery tools, like DBF Recovery, on the file?
If the answer is no to both of the above, then it may be worth a try!
Good luck

How does the Oracle DB writer decide whether or not to do multiblock / sequential writes

We have a test system which matches our production system like for like. 6 months ago we did some testing on new hardware, and found the performance limit of our system.
However, now we are re-doing the testing with a view to adding further hardware, and we have found that the system doesn't perform as it used to.
The reason is that on one specific volume we are now doing random I/O which used to be sequential. Further to this, it turns out that the activity on this volume by Oracle, which is 100% writes, is actually in 8k blocks, whereas before it was up to 128k.
So something has caused the Oracle DB writer to stop batching up its writes.
We've extensively checked our config, and cannot see any difference between our test and production systems. We've also opened a call with Oracle, but at this stage information is slow in coming.
So, ultimately, these are two related questions:
Can you rely on oracle multiblock writes? Is that a safe thing to engineer/tune your system for?
Why would oracle change its behaviour?
We're not at this stage necessarily blaming Oracle - it may well be reacting to something in the environment - but what?
The OS/arch is solaris/sparc.
Oh, I forgot to mention: the insert table has no indexes and only a couple of foreign keys - it's designed as a bucket for as fast an insert as possible. It's also partitioned on the key field.
Thanks for any tips!
More description of the workload would allow some hypotheses.
If you are updating random blocks, then the DBWR process(es) are going to have little choice but to do single-block writes. Indexes especially are likely to have writes all over the place. If you have an index of character values and need to insert a new 'M' record where there isn't room, it will get a new block for the index and split the current block. You'll have some of those 'M' records in the original block, and some in the new block (which will be the last [used] block in the last extent).
I suspect you are most likely to get multi-block writes when bulk inserting into tables, as new blocks will be allocated and written to. Potentially, initially you had (say) 1GB of extents allocated and were writing into that space. Now you might have reached the limit of that and be creating new extents (say 50MB), which may be coming from scattered file locations (e.g. other tables that have been dropped).
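One way to watch this from inside the database, rather than at the OS level, is to sample the system-wide write statistics and see what fraction of write requests are multi-block (statistic names may vary slightly between versions):
SELECT name, value
FROM v$sysstat
WHERE name IN ('physical write total IO requests',
               'physical write total multi block requests',
               'physical write total bytes');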
