We are currently running an overnight batch, and due to some issues (still under analysis) we get a table statistics lock error after the batch completes. Right now we manually remove the lock before the batch every day. Is there a way to automate searching the database for locked table statistics and removing any that are found?
I am currently running these 2 lines of code :
select distinct(table_name),owner,STATTYPE_LOCKED from dba_tab_statistics where owner = 'owner' and STATTYPE_LOCKED = 'ALL';
EXEC DBMS_STATS.unlock_table_stats('owner','table1');
EXEC DBMS_STATS.unlock_table_stats('owner','table2');
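One way to automate this would be to fold the lookup and the unlock call into a single PL/SQL block and run it as a pre-batch step. A minimal sketch, assuming 'OWNER' stands in for your schema name as in the query above:
BEGIN
  -- find every table in the schema whose statistics are locked ...
  FOR t IN (SELECT DISTINCT owner, table_name
              FROM dba_tab_statistics
             WHERE owner = 'OWNER'
               AND stattype_locked = 'ALL')
  LOOP
    -- ... and unlock its statistics
    DBMS_STATS.unlock_table_stats(t.owner, t.table_name);
  END LOOP;
END;
/
You could then run this block from a pre-batch SQL*Plus script, or schedule it with DBMS_SCHEDULER so it fires just before the batch window.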
Any help on this would be much appreciated!
TIA
When I run the query select * from table@dblink in PL/SQL Developer, the transaction commit/rollback icons are activated, but if I then use Fetch last page those icons are disabled. Why is this happening?
Querying over a db_link flips the 'we have a transaction' switch in the data dictionary
In most tools, you'll get a prompt for COMMIT or an indicator of an open transaction whenever you query against a DB_LINK.
That's because you're doing 'something' that's not clear to us in a different database. Your 'SELECT' could have side effects which require a COMMIT/ROLLBACK, or as Tom would say
'If you are distributed, you would want to commit to finish off anything that was implicitly started on the remote site.'
I think PL/SQL Developer is trying to remove useless transactions to help avoid session errors. It seems that whenever you press the "Fetch last page" button, PL/SQL Developer runs commit write batch if the statement contains a database link, if there is no transaction currently open in the session, and if the statement does not include FOR UPDATE.
Those are a lot of weird conditions, but they seem to ensure that the program won't commit when it shouldn't. I assume PL/SQL Developer is using commit write batch to use less resources than a normal commit. That guess is based on the number returned by this query increasing when I hit the button. (There's another statistic for user commits, and that number does not increase.)
select value
from v$mystat
join v$statname on v$mystat.statistic# = v$statname.statistic#
where lower(display_name) = 'commit batch performed';
This behavior is a little odd, but it could help prevent some errors in the session. For example, if you later try to run alter session enable parallel dml the session would throw the error ORA-12841: Cannot alter the session parallel DML state within a transaction. By committing the (worthless) transaction, you avoid some of those errors.
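A sketch of that scenario (some_table@some_dblink is a placeholder):
-- any query over a database link starts an implicit distributed transaction
select * from some_table@some_dblink;
-- this now fails with ORA-12841: Cannot alter the session parallel DML
-- state within a transaction
alter session enable parallel dml;
-- ending the (empty) transaction clears the way
commit;
alter session enable parallel dml;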
A VB 6 program is processing records and inserting them into a temporary table; these records are then moved from the temporary table to the actual table with
connection.Execute "INSERT INTO MAIN_TABLE SELECT * FROM TEMP_TABLE"
The temporary table is then truncated when records are moved
connection.Execute "TRUNCATE TABLE TEMP_TABLE"
This works fine until I use the PARALLEL hint for the INSERT query. I then receive this error on TRUNCATE:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
It looks to me as if the parallel query returns before completing the job, and the TRUNCATE command is issued while the insert is still holding its lock.
I checked the number of records inserted, as below, and found that it is far less than the number of records in the temporary table
connection.Execute "INSERT /*+ PARALLEL */ INTO MAIN_TABLE SELECT * FROM TEMP_TABLE", recordsAffected
Is there any way to wait for INSERT to complete?
DELETE may be slower, but TRUNCATE is DDL, which you can't run at the same time as DML. In fact, TRUNCATE requires exclusive access to the table, while DML on a table requests a share-mode lock on it, which means you can't run DDL against the table at the same time.
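A quick two-session sketch of the conflict (SOME_SOURCE is a placeholder):
-- session 1: an uncommitted INSERT holds a DML lock on the table
INSERT INTO TEMP_TABLE SELECT * FROM SOME_SOURCE;
-- session 2: TRUNCATE needs exclusive access and fails immediately
TRUNCATE TABLE TEMP_TABLE;
-- ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired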
A possible alternative solution would be to use synonyms. Say you have your table A and a synonym S pointing to A, and the app references only S. Every time you want to "truncate", do this:
create table B as select * from A where 1=0;
create or replace synonym S for B;
Your app now uses B instead of A, so you can do whatever you want with A.
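For the next "truncate" you would repeat the swap with a fresh table (names hypothetical):
-- point the synonym at a new empty copy, then clean up the old table
create table C as select * from B where 1=0;
create or replace synonym S for C;
drop table B;  -- safe once no session is still writing to it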
This assumes you're using ADO - though I now notice you don't have that tag in your question.
Can you monitor the connection state with a loop, waiting for execution to finish?
Something like
EDIT - Fix Boolean Add to use + instead of "AND"
While Conn.State = (adStateOpen + adStateExecuting)
DoEvents
Sleep 500 ' uses Sleep API command to delay 1/2 second
Wend
Sleep API declare:
Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
Edit - Add Asynch Hint/Option
Also - it might help the ADO connection to give it a hint that it's running asynchronously, by adding adAsyncExecute to the end of your Execute command, i.e. change the Execute SQL command to look like
conn.execute sqlString, recordsaffected, adAsyncExecute
I have to develop an Informatica process that loads data from a flat file into the target (simple truncate & load), but the catch is:
If the number of rejected rows is greater than 100, the process should stop, i.e. the session should fail & the data in the target must be rolled back to what it was originally before load.
I think the Transaction Control (TC) transformation might be useful here, but I am not sure how to use it. It would be great if I could get some help on this.
Thanks!
You can't use truncate in such a scenario - it's irreversible. Try loading the data into a temporary table first (with the Truncate table option enabled). Create a second session that will execute a set of SQL commands like:
truncate table YourTable;
insert into YourTable select * from YourTempTable;
Link the two with a condition like $yourTempTableSession.TgtFailedRows <= 100, so the final load only runs when the reject count stays within the limit.
To meet the second requirement (i.e. to fail the workflow), add a Control task set to Abort top level workflow, and link it from the temp table load session with a condition like $yourTempTableSession.TgtFailedRows > 100.
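Put together, the workflow would look roughly like this (session and task names are hypothetical):
s_load_temp --[$s_load_temp.TgtFailedRows <= 100]--> s_truncate_and_load
s_load_temp --[$s_load_temp.TgtFailedRows > 100]---> ctl_abort (Control task: Abort top level workflow)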
I have a wrapper around the VFP TABLEUPDATE() function which, among other things, logs additions and changes made to other tables. The log table gets thrashed on occasion, due to multiple users saving and editing throughout the app, which results in a 'File is in use' error on the log table. The table is not open when the INSERT is called.
I am reasonably sure no process has the file opened exclusively. Ideally, I want to:
1. Check and see if the file is available to open
2. Write to the file using INSERT INTO
3. Get out as fast as I can
Records are never edited, only INSERTed. Is there a way I can test the table before issuing the INSERT?
If you receive File is in use (Error 3), then according to the Visual FoxPro manual you have attempted a USE, DELETE, or RENAME command on a file that is currently open. You say DELETE or RENAME is out of the question, so it must be the USE IN SELECT("cTableName").
If EXCLUSIVE is OFF, there is no need to check if the file is open.
Do not open the table before INSERT. Just execute the INSERT and there will be no need to close the table afterwards.
And so you can get rid of the UNLOCK IN cTableName and the USE IN SELECT("cTableName").
My first thought is that you're holding the table open for too long and that any preliminary checks that you add will just tie the table up for longer. Do you close the table after your INSERT?
You say that the log table isn't open at the start of the process. This means that Fox will open the table for you silently so that the SQL can run. Are you opening it exclusive and are you explicitly closing it afterwards?
Have you tried locking the table in your insert routine?
IF FLOCK("mytable")
INSERT INTO ......
ELSE
WAIT WINDOW "Unable to lock"
ENDIF
Perhaps put this into a DO WHILE loop?
I have a weird problem right now: if a ref cursor returned from a stored procedure has only 1 record in it, the fetch operation hangs and freezes. The stored procedure execution is really fast; just the fetching process hangs. If the ref cursor has more than 1 record, everything is fine. Has anyone had similar issues before?
The Oracle server is 11g running on Linux. The client is Windows Server 2003. I'm testing this using the generic Oracle SQL*Plus tool on the Windows server.
Any help and comments would be greatly appreciated. Thanks.
When you say it hangs, what do you mean?
If the session is still active in the database (check its STATUS in V$SESSION), then it is probably waiting on some event (e.g. "SQL*Net message from client" means it is waiting for the client to do something).
It may be that the query is taking a long time to find that there aren't any more rows. Consider a table of 10,000,000 rows with no indexes. The query may full scan the table and find the first row matches the criteria. It still has to scan the next 9,999,999 rows to find that they don't. That can take a while.
Since you are saying that the process hangs, is there a chance that your cursor does a SELECT FOR UPDATE instead of a plain SELECT? Then again, since fetching multiple records does not cause this error, that might not be the case.
Can you show us the code (or a small, reproducible test/sample) for your select and the fetch?
Also, you can check v$locked_objects with the following query, filling in your table name(s), to see whether the object in question is locked. Again, unless your query has FOR UPDATE, this fetch should not hang.
select do.*
from v$locked_objects vo,
     dba_objects do
where vo.object_id = do.object_id
and do.object_name = '<your_table_name>';
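If the table does show up there, v$locked_objects also exposes the session holding the lock; a follow-up sketch (assuming you want to identify the holder) joining it to v$session:
select s.sid, s.serial#, s.username, s.program, vo.locked_mode
from v$locked_objects vo,
     v$session s
where vo.session_id = s.sid
and vo.object_id in (select object_id from dba_objects
                      where object_name = '<your_table_name>');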