In SAS we have a library which is actually an ORACLE schema, and today I ran into a strange problem when trying to query a table in this library.
A regular SAS SQL query:
proc sql;
delete from table where id=123;
quit;
It hung for two hours, while it usually takes a few seconds:
NOTE: PROCEDURE SQL used (Total process time):
real time 2:00:33.49
cpu time 0.03 seconds
While this operation was running I tried to delete a nearby row in ORACLE SQL DEVELOPER, but that delete hung too. However, deleting a row that was not near these rows did not cause any problems. How can I find out the possible reason? I guess it was some sort of deadlock.
It sounds like someone has locked a row that your session is trying to delete. You should be able to spot this by querying v$session:
select sid, schemaname, osuser, terminal, program, event
from v$session
where type != 'BACKGROUND';
and checking whether your session has an event of "enq: TX - row lock contention" (or similar). If so, you'll have to work out who holds the blocking lock. If you have access to Toad's session browser this is easy to do, but Google should turn up something that can help; or, if your database is Oracle 11.2, there's a view, v$session_blockers, that ought to pinpoint the blocking session. Then get them to either commit or roll back their transaction.
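For example, on 10g and later v$session itself exposes the blocker, so a minimal sketch (assuming your account can query v$session) would be:
-- The blocking_session column holds the SID of the session that owns the lock
-- the waiting session is stuck on.
select sid, serial#, username, event, blocking_session
from v$session
where event like 'enq: TX%';
The row with a populated blocking_session is the waiter; that value is the SID of the session you need to chase down.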
Related
I have a procedure in Oracle that runs in about 40 minutes when run from Oracle.
I have a pass-through query in MS Access that looks like this:
Begin
MyProcedure;
End;
This exact code runs in Oracle just fine but hangs in MS Access. I don't know if it will ever finish; it's been going for 6 hours already, and I guess it doesn't matter whether it finishes or not, because this is unacceptable.
Can someone explain what the difference is between running it from Oracle and from MS Access, and how I can fix this?
I presume MyProcedure is an Oracle stored procedure.
If that's so, I suggest you add logging to it. How? Create an autonomous transaction procedure (so that it can insert logging information into some log table and commit) and call it from MyProcedure, for example before every statement it contains (the nasty selects, updates, whatever). That way you'd be able to trace MyProcedure's execution and see what takes that much time.
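A minimal sketch of such a logger; the table MY_LOG and procedure LOG_MSG are made-up names, adjust them to your own conventions:
create table my_log (
  logged_at timestamp default systimestamp,
  msg       varchar2(4000)
);

create or replace procedure log_msg(p_msg in varchar2) is
  pragma autonomous_transaction;  -- commits independently of the caller's transaction
begin
  insert into my_log (msg) values (p_msg);
  commit;                         -- only commits this insert, not the caller's work
end;
/
You'd then sprinkle calls like log_msg('before big update') throughout MyProcedure and watch MY_LOG from another session while it runs.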
Apart from that, see whether there are uncommitted (and not yet rolled back) transactions holding certain rows (tables?) locked, so that when you called MyProcedure it had to wait for another session to commit (or roll back) before it could continue its execution.
Question
How can I set a timeout value for nonblocking DDL (ALTER TABLE add column) in Oracle, so that if some DML locks the table for a long time (several hours), my DDL can fail fast instead of waiting for hours? (We expect Oracle to raise an error like ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired to interrupt our DDL.)
P.S.: DDL_LOCK_TIMEOUT does not work here (see 'What I tried' below).
Background
I'm working on a big Oracle database (Oracle Database 19c). A legacy application runs an aggregation job every hour to calculate the data from the past hour, e.g. AVG and SUM of the counters. The production server has 40 CPUs and 200GB+ of memory, and normally the aggregation job runs for around 30 minutes; but in some cases, e.g. when the jobs are delayed by a maintenance break, the next job has more data to handle and runs for a few hours.
Those legacy applications are out of my control. It's not possible to change the aggregation job.
Edition-Based Redefinition is not used.
My work is to update the database tables (because new counters were added). We use ALTER TABLE to add new columns to the existing tables. But in some cases the aggregation job locks a table for hours and makes my script hang there for hours, which makes the customer unhappy. So I want to make my script fail fast.
What I tried
After googling for a long time, DDL_LOCK_TIMEOUT seemed to be the simplest solution.
However, based on testing, we noticed that DDL_LOCK_TIMEOUT does not work in our case. After another long round of googling, we found that the Oracle documentation clearly states:
The DDL_LOCK_TIMEOUT parameter affects blocking DDL statements (but not nonblocking DDL statements)
ALTER TABLE add column is exactly a 'nonblocking DDL' as listed in the List of Nonblocking DDLs.
Expectation
When a DML statement locks the table for 1 hour, e.g. SELECT * FROM MY_TABLE FOR UPDATE with a commit 1 hour later, I want my DDL, e.g. ALTER TABLE MY_TABLE ADD (COL_A number), to time out after 10 minutes instead of waiting for the full hour.
Other Solutions
1
One solution I have in mind is to first issue LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600 to acquire the lock first (a sketch follows below). But before we go with this solution, I want to know whether there is a simpler option, like DDL_LOCK_TIMEOUT, where we only have to set a single parameter.
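What that workaround could look like; MY_TABLE and COL_A are just the example names from above, and the 600-second wait is arbitrary:
-- If the lock cannot be acquired within 600 seconds, Oracle raises
-- ORA-30006 (resource busy; acquire with WAIT timeout expired),
-- so the script fails fast instead of hanging for hours.
lock table MY_TABLE in exclusive mode wait 600;

-- Caveat: the ALTER TABLE issues an implicit commit, which releases the
-- explicit lock just before the DDL takes its own lock, so there is a
-- small window in which another session could sneak in.
alter table MY_TABLE add (COL_A number);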
2
Based on the Oracle docs, enabling Supplemental Logging downgrades the nonblocking DDL to the blocking variety (which DDL_LOCK_TIMEOUT does affect). But Supplemental Logging is a database-level configuration, and I do not have the permission to make such a change.
Is there a way to know, of the say 100 DDL and DML statements which I am executing through JDBC, whether something is stuck?
I need to find out, for a progress bar, that the SQL statements are executing in the DB and are not hung, so that I can inform the user.
Is there a way to find this out?
There are any number of Oracle data dictionary views that you might use given the relatively vague requirements (what, exactly, is your definition of "hung", for example).
I'd probably start, though, with v$session assuming you can identify the sid and serial# of the particular session that you want to monitor (which may be aided by looking at the various columns in v$session). STATUS will tell you whether the session is actively executing a SQL statement at that particular instant. The sql_id will (generally) let you join to v$sqlarea or other views that tell you what statement is currently executing. The event column will tell you what the session is waiting on (i.e. reading from disk, waiting on CPU, waiting to acquire a lock, etc.).
The sql_id from v$session will also let you join to v$sqlstats which periodically updates with things like the amount of logical I/O a particular SQL statement has generated which would let you see that the currently active statement is doing something (whether that something is useful or whether it will terminate in our lifetime would be much more difficult).
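Pulling those views together, a monitoring query might look something like this (a sketch only; it assumes you already know the SID and SERIAL# and that your account can read the v$ views):
select s.sid,
       s.serial#,
       s.status,          -- ACTIVE means a statement is executing right now
       s.event,           -- what the session is currently waiting on
       q.sql_text,        -- text of the currently running statement
       st.buffer_gets     -- logical I/O so far; should keep climbing if work is being done
from   v$session  s
       left join v$sqlarea  q  on q.sql_id = s.sql_id
       left join v$sqlstats st on st.sql_id = s.sql_id
where  s.sid = :sid
and    s.serial# = :serial_no;
Poll it every few seconds: if buffer_gets keeps increasing, the statement is still doing work even if it looks "hung" from the client.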
Depending on what the code is doing, there may be one or more rows in v$session_longops that you can use to track the progress of longer-running operations-- using this effectively, though, will require that the third-party code issues the sort of long-running SQL operations that Oracle can monitor automatically (i.e. table scans of tables that have a reasonable amount of data) or that the code is instrumented to use v$session_longops to track its own progress.
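For those monitorable operations, a sketch against v$session_longops (same assumption about knowing the SID/SERIAL#) would be:
select opname, target, sofar, totalwork,
       round(100 * sofar / totalwork, 1) as pct_done,
       time_remaining
from   v$session_longops
where  sid = :sid
and    serial# = :serial_no
and    totalwork > 0
and    sofar < totalwork;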
Depending on what version of Oracle you're using, you might also be able to use the v$sql_monitor view to monitor SQL in real time.
I have a JDBC-based application which uses XA datasources and transactions which span multiple connections, connected to an Oracle database. The app sometimes needs to make some queries using a join with a table from another (Oracle) server using a shared DbLink. The request works if I don't do it too often, but after 4 or 5 requests in rapid succession I get an error (ORA-02020 - too many links in use).
I did some research, and the suggested remedy is to call "ALTER SESSION CLOSE DATABASE LINK". If I call this request after the query that joins the DbLink table, I get the error ORA-2080 (link is in use). If I call it before the query, I get ORA-2081 (link closed). Does this call do any good at all?
The JDBC connection is closed long before the transaction commit (which is managed either by servlet or by EJB container, depending on the circumstances). I get the impression that when the connection closes, Oracle marks the link as closed, but it takes a minute or two for it to return to the pool of available links. I understand I could enlarge the pool of links (using the open_links property in the config file), but that won't guarantee that I won't have the same problem under a heavier load. Is there something I can do differently to get the dblinks to close more rapidly?
Any distributed SQL, even a select, will open a transaction that must be closed before you can close the database link. You need to either rollback or commit before you call ALTER SESSION CLOSE DATABASE LINK.
But it sounds like you've already got something else handling your transactions. If it's not possible to manually rollback or commit, you should try to increase the number of open links. The OPEN_LINKS parameter is the maximum number of links per session. The number of links you need isn't really dependent on the load; it should be based on the maximum number of distinct remote databases.
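If you go that route, a sketch (assuming you have the privileges to change init parameters; OPEN_LINKS is a static parameter, so the new value only takes effect after a restart):
-- check the current limit
select name, value from v$parameter where name = 'open_links';

-- raise it; requires a database restart to take effect
alter system set open_links = 10 scope = spfile;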
Edit:
The situation you describe in your comment shouldn't happen. I don't understand enough about your system to know what's really happening with the transactions. Anyway, if you can't figure out exactly what the system is doing maybe you can replace "alter session close database link" with a procedure like this:
create or replace procedure rollback_and_close_db_links
  authid current_user
is
begin
  rollback;
  for links in (select db_link from v$dblink) loop
    execute immediate 'alter session close database link ' || links.db_link;
  end loop;
end;
/
You'll probably need this grant:
grant select on v_$dblink to [relevant user];
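Then, instead of issuing the raw ALTER SESSION from your code, you'd call the procedure, e.g.:
begin
  rollback_and_close_db_links;
end;
/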
I have a weird problem right now: if a ref cursor returned from a stored procedure has only 1 record in it, the fetch operation hangs and freezes. The stored procedure execution itself is really fast; it's just the fetching process that hangs. If the ref cursor has more than 1 record, then everything is fine. Does anyone have similar issues?
The Oracle server is 11g running on Linux. The client is Windows Server 2003. I'm testing this using the generic Oracle sqlplus tool on the Windows Server.
Any help and comments would be greatly appreciated. thanks.
When you say it hangs, what do you mean?
If the session is still active in the database (STATUS in V$SESSION), then it is probably waiting on some event (e.g. 'SQL*Net message from client' means it is waiting for the client to do something).
It may be that the query is taking a long time to find that there aren't any more rows. Consider a table of 10,000,000 rows with no indexes. The query may full scan the table and find the first row matches the criteria. It still has to scan the next 9,999,999 rows to find that they don't. That can take a while.
Since you are saying that the process hangs, is there a chance that your cursor does a SELECT ... FOR UPDATE instead of a plain SELECT? Then again, since you are saying that fetching multiple records does not cause this error, that might not be the case.
Can you show us the code (or a small reproducible test/sample) for your select and the fetch?
Also, you can check v$locked_objects using the following query, plugging in your table name(s), to see whether the object in question is being locked. Again, unless your current query has FOR UPDATE, this fetch should not hang.
-- note: object_name is a column of dba_objects (not v$locked_objects),
-- and object names are stored in uppercase
select do.*
from v$locked_objects vo,
     dba_objects do
where vo.object_id = do.object_id
and do.object_name = '<YOUR_TABLE_NAME>';