Many months ago, while shrinking tablespaces, I mistakenly deleted a tablespace without taking a backup. Many indexes belong to tables that were part of that tablespace.
I never had any problems inserting or deleting records in the tables that belonged to that tablespace. But since yesterday I have noticed that data is not being inserted into many tables: the inserts fail with "unique index violated" errors on indexes that belong to that tablespace.
Since this morning, I have been receiving this error, caused by the file I deleted:
ORA-00376: file 663 cannot be read at this time
ORA-01110: data file 663: '/oradata/db3/pm/pm4h_db_dat_w_150316_02.dbf'
How can I overcome this error?
I have also now tried to recover the file, which gives me this error:
SQL> recover datafile '/oradata/db3/pm/pm4h_db_dat_w_150316_00.dbf';
ORA-00283: recovery session canceled due to errors
ORA-01122: database file 661 failed verification check
ORA-01110: data file 661: '/oradata/db3/pm/pm4h_db_dat_w_150316_00.dbf'
ORA-01207: file is more recent than control file - old control file
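For reference, these standard data dictionary views show which datafiles the database currently considers unreadable and why (the file numbers below are the ones from the errors above):

SELECT file#, online_status, error
FROM v$recover_file;

SELECT file_id, file_name, online_status
FROM dba_data_files
WHERE file_id IN (661, 663);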
Related
Oracle is throwing this error when dropping and creating a table, something I regularly do without problems.
ORA-12801: error signaled in parallel query server P000, instance MyURL:bwup (3)
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS3'
I would like to understand conceptually what is happening here and how I could fix it without autoextending the tablespace. I purged the DB in order to reclaim the space occupied by those recoverable tables I had recently dropped, and the error stopped, but I don't understand why. Is the tablespace Oracle is talking about shared across different processes?
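As far as I can tell, undo is configured at the instance level (so presumably shared by all sessions); this is how I checked the configuration (standard parameters, nothing specific to my system):

SELECT name, value
FROM v$parameter
WHERE name IN ('undo_management', 'undo_tablespace');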
I have reached the file size limit on my SQL Server 2008 R2 Express database, which I believe is 10 GB. I know this because I see Event ID 1101 in the event log.
Could not allocate a new page for database 'ExchangeBackup' because of insufficient disk space in filegroup 'PRIMARY'
I have removed some historic data to work around the problem for now, but it is only a temporary fix. One table (PP4_MailBackup) is much larger than the others, so when I created this database 12 months ago I converted it to a FILESTREAM table, and its data is stored outside the filegroup in the file system. This appeared to be working successfully until I received the error and new data was no longer being added to my database.
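For context, the table follows this general shape (the column names here are illustrative, not my exact schema); the varbinary(max) FILESTREAM column's contents live on the file system rather than in the PRIMARY data file:

CREATE TABLE dbo.PP4_MailBackup_Example
(
    RowId    uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE,  -- FILESTREAM tables require a unique ROWGUIDCOL
    MailData varbinary(max) FILESTREAM NULL                -- contents stored on the file system, not in the .mdf
);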
When I run a report on table sizes, the Reserved (KB) column adds up to almost 10 GB.
The folder that holds my FileStream data is 176 GB
The database .mdf file is indeed 10GB.
Does anyone have any idea why the table PP4_MailBackup is still using nearly 7GB?
Here is the "Standard Reports -> Disk Usage report" for this database:
Thanks in advance
David
Update
Here is some more info.
There are 868,520 rows in this table.
This command returns 1, so I'm assuming ANSI_PADDING is on. I have never changed this from the default.
SELECT SESSIONPROPERTY('ANSI_PADDING')
The columns are defined like this:
Even if every record in every column filled the full column size, by my rough calculation the table would be around 4,125,470,000 bytes. I understand that the nvarchar columns use only the space actually required.
I'm still missing a lot of space.
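For anyone wanting to reproduce the numbers, these standard commands report the same per-table usage (run them in the context of this database):

EXEC sp_spaceused N'PP4_MailBackup';

SELECT index_id, page_count, record_count, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'PP4_MailBackup'), NULL, NULL, 'DETAILED');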
Not really an answer but more of a conclusion.
I have given up on this problem and resigned myself to removing data to stay under the 10 GB primary file size limit. I figured out that nvarchar columns store 2 bytes per character in order to handle Unicode characters, although they do use only the space required and don't pad the column out with spaces. So this accounts for some of the space I couldn't find.
I tried to convert my char(500) columns to varchar(500) by adding new columns with the correct type, copying the data into them, and then removing the old columns. This worked, but the table actually got bigger, because removing a column is only a metadata change and does not actually remove the data. To recover the space I would need to create a new table, copy the data across, and then remove the old table; of course, I don't have enough space in the primary file to do that.
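This is roughly the column swap I described, using a hypothetical Subject column as the example. The final REBUILD is the step that would actually rewrite the rows and reclaim the dropped char column's space, but like the copy-to-a-new-table approach it needs free working space, which is exactly what I don't have:

ALTER TABLE dbo.PP4_MailBackup ADD Subject_New varchar(500) NULL;

UPDATE dbo.PP4_MailBackup
SET Subject_New = Subject;                             -- copy the data across

ALTER TABLE dbo.PP4_MailBackup DROP COLUMN Subject;    -- metadata-only; the bytes stay in the rows
EXEC sp_rename 'dbo.PP4_MailBackup.Subject_New', 'Subject', 'COLUMN';

ALTER TABLE dbo.PP4_MailBackup REBUILD;                -- rewrites the rows and frees the dead space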
I thought about copying the table to tempdb, removing the original table, and then copying it back, but tempdb doesn't support FILESTREAM columns (at least to my knowledge), so I would need to hold all 170 GB within the tempdb table. This sounded like a dubious solution, and my test server didn't have enough space on the partition where tempdb was stored. I couldn't find anything on the file size limit of tempdb in SQL Server 2008 Express, but at this point it was all getting too hard.
We are trying to insert 1.6M records into a table; however, we are running into the "undo tablespace" error described below. Any suggestions? We looked at other similar errors here but didn't find a solution.
A couple of notes:
We just installed the db application today.
We successfully entered 1.6M records into another table, but didn't commit afterward.
Then we tried adding another 1.6M records into a second table and got the error.
We tried committing the first 1.6M records, but still got the error.
Error:
SQL Error: ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'
30036. 00000 - "unable to extend segment by %s in undo tablespace '%s'"
*Cause: the specified undo tablespace has no more space available.
*Action: Add more space to the undo tablespace before retrying
the operation. An alternative is to wait until active
transactions to commit.
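For reference, a standard way to see how full that undo tablespace actually is (the tablespace name is taken from the error above):

SELECT status, ROUND(SUM(bytes)/1024/1024) AS mb
FROM dba_undo_extents
WHERE tablespace_name = 'UNDOTBS1'
GROUP BY status;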
Found the answer. I came across a YouTube video with enough info to help me piece together the answer (I found the video only OK, but helpful):
https://youtu.be/CSD0JFc9mtw
A couple of notes:
1) First, in order to release the space in the UNDO tablespace, you have to commit the transaction(s).
2) Second, the UNDO tablespace has a size limit.
3) Last, it also has a retention time, so even after you commit a transaction you have to wait a defined time before the space is released for reuse.
The third point was the part that tripped me up, because I couldn't understand why I was still having issues with the UNDO tablespace even after committing the transactions.
To fix the problem, all I had to do was increase the size of the tablespace; then, to avoid the problem next time, I reduced the retention time (be careful with this second part; see the video).
Increase the size with this:
ALTER DATABASE DATAFILE 'C:\MYDBLOCATION\UNDOTBS1.DBF' RESIZE 1024M;
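The retention side looked roughly like this; the 900-second value is only an illustration (it is not from the video), and lowering UNDO_RETENTION too far can cause ORA-01555 "snapshot too old" errors for long-running queries:

SHOW PARAMETER undo_retention;          -- current retention in seconds (SQL*Plus command)
ALTER SYSTEM SET UNDO_RETENTION = 900;  -- illustrative value only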
Suppose that while loading this file
$ cat employee.txt
100,Thomas,Sales,5000
200,Jason,Technology,5500
300,Mayla,Technology,7000
400,Nisha,Marketing,9500
500,Randy,Technology,6000
501,Ritu,Accounting,5400
using the control file (say) sqlldr-add-new.ctl, I come to know that all of today's records are faulty. I want the records previously loaded into that table (those loaded yesterday) to be retained if today's load has any errors. How do I handle this exception?
This is my sample ctl file
$ cat sqlldr-add-new.ctl
load data
infile '/home/ramesh/employee.txt'
into table employee
fields terminated by ","
( id, name, dept, salary )
You can't roll back from SQL*Loader; it commits automatically. This is mentioned in the ERRORS parameter description:
On a single-table load, SQL*Loader terminates the load when errors exceed this error limit. Any data inserted up that point, however, is committed.
And there's a section on interrupted loads.
You could attempt to load the data into a staging table and, if the load is successful, move the data into the real table (with delete/insert into ... select ..., or with a partition swap if you have a large amount of data). Or you could use an external table and do the same thing, but you'd need a way to determine whether the table had any discarded or rejected records.
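A minimal sketch of the staging-table idea, assuming a work table called EMPLOYEE_STAGE with the same columns (the name is hypothetical): point sqlldr-add-new.ctl at EMPLOYEE_STAGE instead of EMPLOYEE, and move the rows across only if the load reports no rejected or discarded records:

-- run only after the SQL*Loader log and bad file show a clean load
INSERT INTO employee (id, name, dept, salary)
SELECT id, name, dept, salary FROM employee_stage;
COMMIT;

TRUNCATE TABLE employee_stage;   -- clear the work table for the next load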
Try with ERRORS=0 (an example invocation follows the quoted documentation below).
You can find the full explanation here:
http://docs.oracle.com/cd/F49540_01/DOC/server.815/a67792/ch06.htm
ERRORS (errors to allow)
ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value of ERRORS parameter, SQL*Loader terminates the load. The default is 50. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.
On a single table load, SQL*Loader terminates the load when errors exceed this error limit. Any data inserted up that point, however, is committed.
SQL*Loader maintains the consistency of records across all tables. Therefore, multi-table loads do not terminate immediately if errors exceed the error limit. When SQL*loader encounters the maximum number of errors for a multi-table load, it continues to load rows to ensure that valid rows previously loaded into tables are loaded into all tables and/or rejected rows filtered out of all tables.
In all cases, SQL*Loader writes erroneous records to the bad file.
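For reference, the invocation would look something like this (the credentials are placeholders; the control file name is the one from the question):

$ sqlldr userid=scott/tiger control=sqlldr-add-new.ctl errors=0

Note that, per the documentation quoted above, rows inserted before the limit is hit are still committed, so ERRORS=0 narrows the window but does not make the load transactional.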
I'm encountering this error multiple times, but it appears to be random.
I perform an INSERT query where I attempt to insert a BLOB file into a designated table.
I do not know if there's a connection between the BLOB and the error.
It's worth mentioning that the table is partitioned.
Here is the complete query:
INSERT INTO COLLECTION_BLOB_T
(OBJINST_ID, COLINF_ID, COLINF_PARTNO, BINARY_FILE_NAME, BINARY_FILE_SIZE, BINARY_FILE)
VALUES (:p1, :p2, :p3, :p4, :p5, EMPTY_BLOB());
This is the only INSERT/UPDATE into this table in the entire application.
So I doubt that any other query is locking it, and the error is not about a locked resource.
What can be the cause?
As I've mentioned, this appears to occur randomly.
Thanks in advance.
The table is partitioned, as I've mentioned, so between midnight and 3:00 AM the partitioning changes, and it is in some of those instances that the error occurs.
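If it helps narrow this down, these standard dictionary views show whether a scheduled job touches the table's partitions in that midnight-to-3:00 AM window, and what the partitions currently look like (the table name is the one from my query above):

SELECT job_name, last_start_date, last_run_duration
FROM dba_scheduler_jobs
WHERE enabled = 'TRUE';

SELECT partition_name, high_value, last_analyzed
FROM user_tab_partitions
WHERE table_name = 'COLLECTION_BLOB_T';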