I get ORA-00054 while loading large data files (~10 GB).
The error occurs when a new file is loaded after a previous one.
Any ideas how I can solve this?
One possible scenario.
Is this a direct-path load? If so, check the v$locked_object view and see whether the table is being locked by someone during your load.
select dbao.object_name
  from v$locked_object vlo,
       dba_objects dbao
 where vlo.object_id = dbao.object_id
   and dbao.object_name = 'Table that you are trying to load...';
From the Oracle Documentation at http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/c21dlins.htm
Locking Considerations with Direct-Path INSERT
During direct-path INSERT, Oracle obtains exclusive locks on the table (or on all partitions of a partitioned table). As a result, users cannot perform any concurrent insert, update, or delete operations on the table, and concurrent index creation and build operations are not permitted. Concurrent queries, however, are supported, but the query will return only the information before the insert operation.
This may also be linked to tablespace datafile sizes or table size, because ORA-00054 usually appears when an ALTER statement is run.
I do not pretend to be right here.
Check these views:
DBA_BLOCKERS - Shows non-waiting sessions holding locks that are being waited on
DBA_DDL_LOCKS - Shows all DDL locks held or being requested
DBA_DML_LOCKS - Shows all DML locks held or being requested
DBA_LOCK_INTERNAL - Displays one row for every lock or latch held or being requested, with the username of the session holding it
DBA_LOCKS - Shows all locks or latches held or being requested
DBA_WAITERS - Shows all sessions waiting on, but not holding, the waited-for locks
http://www.dba-oracle.com/t_ora_00054_locks.htm
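For example, a quick way to see who is blocking whom using the views above (a sketch; it assumes you have SELECT privileges on the DBA views and on v$session):
select w.waiting_session,
       w.holding_session,
       w.lock_type,
       w.mode_held,
       w.mode_requested,
       s.username as holding_user
  from dba_waiters w,
       v$session s
 where s.sid = w.holding_session;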
Your table seems to be locked: ORA-00054
It can be because of the way the Oracle driver handles BLOB types (the driver locks the record, opens a stream to write the binary data, and needs "some help" to release the record).
I would try the following sequence:
Load the first file
COMMIT;
Load the second file
In my PL/SQL process, there is an execution of "alter table exchange partition ..." on some table.
The problem is that during that operation other users may try to access the target table.
One process that executed a select query on that table got this error:
ORA-08103 object no longer exists.
I think this is not the same as 'object or view doesn't exist'.
I think the 'object no longer exists' error comes when the process starts OK, and then the exchange (or another operation) comes in from the side and the process can't be completed.
The chance that this will happen is very low, because the exchange is very, very fast.
But for this case, is there any idea how to solve it? How can I prevent this situation?
Maybe a way to execute the exchange only if no one is touching the table?
Thanks.
This will happen when session #2 alters the table after session #1 has opened its cursor but before it has finished fetching the rows from that cursor.
I don't think there is a foolproof way to prevent the exchange from happening unless you are willing to change the code that other users are using to access the target table.
LOCK TABLE mytable IN EXCLUSIVE MODE will not wait for SELECT statements to complete, nor will it prevent new SELECT statements from starting, so acquiring an exclusive lock before attempting the ALTER TABLE will not work.
If you want to prevent the ALTER TABLE from happening at the same time as SELECTs, you need both to depend on acquiring the same lock. A relatively robust way to do that would be to use the DBMS_LOCK package to allocate a lock for that table. Call that lock "mytablelock".
Then, using DBMS_LOCK, your SELECT sessions would need to acquire "shared" locks on "mytablelock" before progressing. Your alter table sessions would need to acquire an "exclusive" lock on "mytablelock" before progressing.
This scheme would allow multiple SELECT sessions to run without interfering with each other, but it would prevent the ALTER TABLE from running while any SELECT was running.
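A minimal sketch of that scheme (the lock name, table, and partition names are placeholders; DBMS_LOCK.ALLOCATE_UNIQUE, DBMS_LOCK.REQUEST and DBMS_LOCK.RELEASE are the documented calls):
-- Session doing the ALTER TABLE ... EXCHANGE PARTITION
declare
  l_handle varchar2(128);
  l_status integer;
begin
  dbms_lock.allocate_unique('mytablelock', l_handle);
  -- exclusive mode: waits up to 60 seconds for readers to release their shared locks
  l_status := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 60,
                                release_on_commit => false);
  if l_status = 0 then
    execute immediate 'alter table mytable exchange partition p1 with table mytable_stage';
    l_status := dbms_lock.release(l_handle);
  end if;
end;
/
-- The sessions that SELECT from mytable would call dbms_lock.request on the same
-- handle with lockmode => dbms_lock.s_mode before opening their cursors, and
-- dbms_lock.release once they have finished fetching.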
A (much) less robust, but simpler, way to do it would be to change the SELECT statements issued by the other sessions into SELECT ... FOR UPDATE statements. But that's a recipe for lots of unnecessary waits and deadlock errors.
I have a PL/SQL script that clears (via DELETE FROM statements) and populates several dependent tables like this:
delete from table-A;
insert into table-A values (...);
delete from table-B;
insert into table-B values (...);
These operations require ~10 seconds to complete, and I'd like to stop all SQL queries that try to read data from table-A or table-B while the tables are being updated. These queries should pause and resume execution once table-A and table-B are completely updated.
What is the proper way to do this?
As others have pointed out, Oracle's basic concurrency model is that writers do not block readers and readers do not block writers. You can't stop a simple select from running. Your queries will see the data as of the SCN at which they started executing (assuming that you're using the default read committed transaction isolation level), so they will have a consistent view of the data from before your updates started.
You could potentially acquire a custom named lock using dbms_lock.request. You would need to acquire this lock before running your updates and every session that queries the tables would also need to acquire the lock before it starts to query the tables. That will, obviously, decrease the scalability of your application but it will accomplish what you appear to be asking for. Presumably, the sessions doing queries can acquire the lock in shared mode while the session doing the updates would need to acquire it in exclusive mode.
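For example (a sketch; the lock name and timeout are placeholders, and the DML mirrors the pseudo-statements from the question), the updating session could wrap its work like this, while the querying sessions would request the same lock in dbms_lock.s_mode before reading:
declare
  l_handle varchar2(128);
  l_status integer;
begin
  dbms_lock.allocate_unique('table_a_b_refresh', l_handle);
  -- exclusive lock, released automatically when the transaction commits
  l_status := dbms_lock.request(l_handle, dbms_lock.x_mode,
                                timeout => 30, release_on_commit => true);
  if l_status = 0 then
    delete from table_A;
    insert into table_A values (...);
    delete from table_B;
    insert into table_B values (...);
    commit;  -- the commit also releases the lock
  end if;
end;
/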
I need to delete a large amount of data from my database on a regular basis. The process generates a huge volume of archive logs. We had a database crash at one point because there was no storage space available in the archive destination. How can I avoid generating these logs while I delete data?
The data to be deleted is already marked as inactive in the database. Application code ignores inactive data. I do not need the ability to rollback the operation.
I cannot partition the data in such a way that inactive data falls in one partition that can be dropped. I have to delete the data with delete statements.
I can ask DBAs to set certain configuration at table level/schema level/tablespace level/server level if needed.
I am using Oracle 11g.
What proportion of the data in the table would be deleted, and what volume? Are there any referential integrity constraints to manage, or is this table childless?
Depending on the answers, you might consider:
"CREATE TABLE keep_data UNRECOVERABLE AS SELECT * FROM ... WHERE
[keep condition]"
Then drop the original table
Then rename keep_data to the original table name
Rebuild the indexes (again with UNRECOVERABLE to prevent redo), constraints, etc.
The problem with this approach is that it's a multi-step DDL process, which you will have a job making fault tolerant and reversible.
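In SQL terms, the steps above look roughly like this (a sketch; the table, index, and keep-condition names are placeholders, and NOLOGGING is the current spelling of the older UNRECOVERABLE keyword):
create table keep_data nologging as
  select * from big_table where [keep condition];
drop table big_table;
rename keep_data to big_table;
create index big_table_ix1 on big_table (some_column) nologging;
-- then recreate constraints, grants, triggers, and statistics as needed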
A safer option might be to use data-pump to:
Data-pump expdp to extract the "Keep" data
TRUNCATE the table
Data-pump impdp import of data from step 1, with direct-path
At this point I suggest you read the Oracle manual on Data Pump, particularly the section on Direct Path Loads to be sure this will work for you.
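A rough sketch of those steps (the credentials, directory object, table name, and keep condition are placeholders; QUERY is usually easiest to pass via a parameter file to avoid shell quoting problems):
# keep.par -- expdp parameter file
directory=DATA_PUMP_DIR
dumpfile=keep.dmp
tables=BIG_TABLE
query=BIG_TABLE:"WHERE status = 'ACTIVE'"

# 1. Export only the rows to keep
expdp scott/tiger parfile=keep.par

# 2. In SQL*Plus: TRUNCATE TABLE big_table;  (generates almost no redo)

# 3. Re-import the kept rows into the now-empty table
impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=keep.dmp tables=BIG_TABLE table_exists_action=APPEND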
My preferred option would be partitioning.
Of course, the best way would be TenG's solution (CTAS, drop and rename the table), but it seems that's impossible for you.
Your only problem is the amount of archive logs and the resulting database crash. In that case, maybe you could split your delete statement into batches (for example, 10,000 rows at a time).
Something like:
declare
  e number;
  f number;
begin
  select count(*) into e from myTable where [delete condition];
  f := trunc(e / 10000) + 1;
  for i in 1 .. f
  loop
    delete from myTable where [delete condition] and rownum <= 10000;
    commit;
    dbms_lock.sleep(600); -- pause so old archive logs can be purged, if possible
  end loop;
end;
/
After this operation, you should reorganize your table, which will surely be fragmented.
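One way to do that reorganization (a sketch; note that ALTER TABLE ... MOVE leaves existing indexes unusable until they are rebuilt, and the index name here is a placeholder):
alter table myTable move;
alter index myTable_ix1 rebuild;
-- or, if enabling row movement is acceptable for this table:
alter table myTable enable row movement;
alter table myTable shrink space;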
Alter the table to set NOLOGGING, delete the rows, then turn logging back on.
Can anyone please explain the locking modes in Oracle, i.e. share, exclusive, and update locks? I have found many explanations of this, and according to them:
Share lock: nobody can change the data; read-only purpose
Exclusive lock: only one user/connection is allowed to change the data.
Update lock: rows are locked until the user commits or rolls back.
Then I tried a share lock to check how it works:
SQL> lock table emp in share mode;
Table(s) Locked.
SQL> update emp set sal=sal+10;
14 rows updated.
Then I found that a user can still change data after taking a share lock. So what makes it different from an exclusive lock and an update lock?
Another question: how are update locks and exclusive locks different from each other? They seem almost equivalent.
Posting an explanation for future visitors; it also gives the answer.
Shared lock
Before I begin, let me first say that there are five types of table locks: row share, row exclusive, share, share row exclusive, and exclusive. The share lock is one of these. Also, please note that there are row locks, which are different from table locks. Follow the link provided at the end to read about all of this.
A share lock is acquired on the table specified in the following statement: LOCK TABLE table IN SHARE MODE;
This lock prevents other transactions from getting "row exclusive" (the lock used by INSERT, UPDATE and DELETE statements), "share row exclusive" and "exclusive" table locks; otherwise everything is permitted.
So this means that a share lock will block other transactions from executing INSERT, UPDATE and DELETE statements on that table, but it will allow other transactions to lock rows with a "SELECT ... FOR UPDATE" statement, because that statement only needs a "row share" table lock, which is permitted while a "share" lock is held.
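A small two-session demonstration of that behaviour (a sketch, reusing the EMP demo table from the question):
-- Session 1
SQL> lock table emp in share mode;
Table(s) Locked.

-- Session 2: a row share lock is compatible with SHARE, so this succeeds
SQL> select * from emp where empno = 7369 for update nowait;

-- Session 2: INSERT needs a row exclusive lock, which conflicts with SHARE,
-- so this statement blocks until session 1 commits or rolls back
SQL> insert into emp (empno, ename) values (9999, 'TEST');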
(The source linked at the end includes a summary table of the lock modes and what each one permits.)
Since many users will come across this question, I decided to go one step further and include my learning notes; I hope folks will benefit from them.
The source of this information is also excellent reading about Oracle locks.
It's very well explained in the documentation: http://docs.oracle.com/cd/E11882_01/server.112/e41084/ap_locks001.htm#SQLRF55502
In your example you locked the table in share mode. This does not prevent other sessions from locking the same object in share mode, but it does prevent them from locking it in exclusive mode, so you could not drop the table (which requires an exclusive lock) while it is locked in share mode and being updated.
I have tried to find examples, but they are all simple, with a single WHERE clause. Here is the situation. I have a bunch of legacy data transferred from another database. I also have the "good" tables in that same database. I need to transfer (data-conversion) data from the legacy tables to the new tables. Because this is a different set of tables, the data conversion requires complex joins to put the old data into the new tables correctly.
So: old tables, old data.
The new tables must receive the old data, but it takes lots of joins to get that old data into the new tables correctly.
Can I use direct path with lots of joins like this? INSERT ... SELECT (lots of joins)
Does direct path apply to tables that are already in the same database (a transfer between tables)? Or is it only for loading tables from, say, a text file?
Thank you.
The query in your SELECT can be as complex as you'd like with a direct-path insert. The direct-path refers only to the destination table. It has nothing to do with the way that data is read or processed.
If you're doing a direct-path insert, you're asking Oracle to insert the new data above the high water mark of the table so you bypass the normal code that reuses space in existing blocks for new rows to be inserted. It also has to block other inserts since you can't have the high water mark of the table change during a direct-path insert. This probably isn't a big deal if you've got a downtime window in which to do the load but it would be quite problematic if you wanted the existing tables to be available for other applications during the load.
No, on the contrary, it means you need to do a backup after a NOLOGGING load, not that you can't back up the database.
Allow me to elaborate a bit. Normally, when you do DML in Oracle, the before images of the changes you are making get logged in UNDO, and all the changes (including the UNDO changes) are first written to REDO. This is how Oracle manages transactions, instance recovery, and database recovery. If a transaction is aborted or rolled back, Oracle uses the information in UNDO to undo the changes your transaction made. If the instance crashes, then on instance restart, Oracle will use the information in REDO and UNDO to recover up to the last committed transaction. First, Oracle will read the REDO and roll forward, then use UNDO to roll back all the transactions that were not committed at the time of the crash. In this way, Oracle is able to recover up to the last committed transaction.
Now, when you specify an APPEND hint on an insert statement, Oracle will execute the INSERT with direct load. This means that data is loaded into brand new, never before used blocks, above the high water mark. Because the blocks being loaded are brand new, there is no "before image", so Oracle can avoid writing UNDO, which improves performance. If the database is in NOARCHIVELOG mode, then Oracle will also not write REDO. On a database in ARCHIVELOG mode, Oracle will still write REDO, unless, before you do the insert /*+ append */, you set the table to NOLOGGING (i.e. alter table tab_name nologging;). In that case, REDO logging is disabled for the table. However, this is where you could run into backup/recovery implications. If you do a NOLOGGING direct load, and then you suffer a media failure, and the datafile containing the segment with the nologging operation is restored from a backup taken before the nologging load, then the redo log will not contain the changes required to recover that segment. So, what happens? Well, when you do a NOLOGGING load, Oracle writes extent invalidation records to the redo log, instead of the actual changes. Then, if you use that redo in recovery, those data blocks will be marked logically corrupt. Any subsequent queries against that segment will get an ORA-26040 error.
So, how to avoid this? Well, you should always take a backup immediately following any NOLOGGING direct load. If you restore/recover from a backup taken after the nologging load, there is no problem, because the data will be in the data blocks in the file that was restored.
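For example (a sketch; the table names are placeholders), the sequence plus the follow-up backup check might look like this:
alter table target_table nologging;
insert /*+ append */ into target_table select * from source_table;
commit;
alter table target_table logging;

-- Datafiles touched by nologging operations since the last backup need a fresh backup.
-- In RMAN, REPORT UNRECOVERABLE; lists them, and backing up those datafiles (or the
-- whole database) closes the exposure. You can also check from SQL:
select file#, name, unrecoverable_change#, unrecoverable_time
  from v$datafile
 where unrecoverable_time is not null;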
Hope that's clear,
-Mark
Yes, there should not be any arbitrary limits on query complexity.
If you do
insert /*+ APPEND */ into target_table select ... from source1, source2, ..., sourceN where ...
It should work fine. Consider, though, that the performance of the load will be limited by the performance of that query, so be sure it's well tuned if you're expecting good performance.
Finally, consider whether setting NOLOGGING on the target table would improve performance significantly. But also consider the backup/recovery implications if you decide to implement NOLOGGING.
Hope that helps,
-Mark