Application becomes unresponsive because of Oracle lock

The app is connected to an Oracle 11g database using the JDBC driver provided on the official website. When many users (around 50) from different instances connect to the same schema and start using the application, I experience freezes all around the app. When I run a query to get the locking sessions and the locked objects, I find only the "Row Exclusive" lock type, which normally should not lock the whole table and permits multiple sessions to perform DML. So my question is: when can a row exclusive table lock block the whole table, or otherwise provoke these freezes?
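(For reference, a typical query of this kind, sketched against the standard V$SESSION, V$LOCK, V$LOCKED_OBJECT and DBA_OBJECTS views:)

-- Sketch: list blocked sessions, their blocker, and the locked object
SELECT s.sid,
       s.serial#,
       s.username,
       s.blocking_session,
       l.type   AS lock_type,
       l.lmode  AS mode_held,
       o.object_name
  FROM v$session s
  JOIN v$lock l                ON l.sid = s.sid
  LEFT JOIN v$locked_object lo ON lo.session_id = s.sid
  LEFT JOIN dba_objects o      ON o.object_id = lo.object_id
 WHERE s.blocking_session IS NOT NULL;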
Note: I have looked around in forums and seen some MAXTRANS and ITL configurations mentioned. Could these parameters be causing these freezes?
Thank you

I think you have your terms confused. A "Row Exclusive" lock means "I have locked this row; no other session is allowed to update it".
So if you have 50 sessions all trying to update or delete a specific row then yes, you are going to have contention, and that will seriously limit your performance.
So I would guess it's possible your application is missing a commit statement that would free the lock after the row has been modified.
You say you are using sequences. Are you using an actual Oracle sequence (i.e. create sequence my_seq;) or are you doing a custom thing like select max(id)+1 from sequence_table, which would be another bad idea.
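(To make the difference concrete, a minimal sketch; the orders table and its columns are invented for the example.)

-- A real Oracle sequence: concurrent sessions each get their own value
CREATE SEQUENCE my_seq START WITH 1 INCREMENT BY 1 CACHE 100;

INSERT INTO orders (id, created_at)
VALUES (my_seq.NEXTVAL, SYSDATE);

-- The anti-pattern: every inserter scans for the current maximum,
-- serializing sessions and risking duplicate ids under concurrency
SELECT MAX(id) + 1 FROM orders;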

Maybe it's too early to blame Oracle. It could be a servlet container configuration issue, such as not enough exec threads, or it could be internal contention; many things can go wrong. A quick way to identify the bottleneck is to get a thread dump while the application is experiencing "some freezes all around the app" and see where your threads are stuck. You can get a thread dump by sending kill -3 to your Java process. Post it here and I'll be happy to look at it.

Related

Analyzing commit activity retrospectively in Oracle 19c

I'm trying to work out why my Oracle 19c database is "suddenly" experiencing high commit waits. Looking in V$ACTIVE_SESSION_HISTORY and DBA_HIST_ACTIVE_SESS_HISTORY shows me that lots of sessions are waiting on "log file sync" and the blocking session is the LGWR process. Not a sign of a problem in itself, but a couple of months ago (before a recent set of product updates) it wasn't doing that, so I'm trying to understand what has changed. Either some code changes made over the last 2 months have caused this, or potentially the I/O system is experiencing a problem.
Because it's an OLTP system we have many different types of transaction, and I'm finding it difficult to filter out the noise from the performance views. What I'd like to be able to do is identify the sessions which are doing most commits, and also the sessions that are doing the "largest" commits, and then I can trace these back to see which pieces of code are responsible etc.
I would therefore like to be able to create a table such as this:
SESSION_ID   SESSION_SERIAL#   COMMIT_COUNT   COMMIT_SIZE
1            12345             3              132436
For commit size, I guessed I would need to use something like the wait time as an approximation, and was hoping the TM_DELTA_DB_TIME column would help me out here, but I'm not sure how to measure the number of commits. I had hoped the XID column would let me see the transaction boundaries, but it's usually NULL.
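(A rough sketch of that idea, counting "log file sync" samples per session in DBA_HIST_ACTIVE_SESS_HISTORY as a proxy for commit activity; ASH is sampled data, so these counts are approximations at best:)

-- Sketch: sessions with the most sampled commit waits in the last day
SELECT session_id,
       session_serial#,
       COUNT(*)              AS commit_wait_samples,
       SUM(tm_delta_db_time) AS approx_db_time_us
  FROM dba_hist_active_sess_history
 WHERE event = 'log file sync'
   AND sample_time > SYSTIMESTAMP - INTERVAL '1' DAY
 GROUP BY session_id, session_serial#
 ORDER BY commit_wait_samples DESC;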
And now I've stopped to question why there isn't an easier way to do this, and whether I'm going about it the wrong way. Surely I can't be the only person to want more in-depth understanding of the commit activity within their Oracle database. Or am I asking for data that doesn't exist in the views?
If anybody has some tips for where to look I would be very grateful!

Dropping a table partition avoiding the error ORA-00054

I need your opinion on this situation; I'll try to explain the scenario. I have a Windows service that stores data in an Oracle database periodically. The table where this data is stored is partitioned by date (interval date-range partitioning). The database also has a dbms_scheduler job that, among other operations, truncates and drops older partitions.
This approach has been working for some time, but recently I had an ORA-00054 error. After some investigation, the error was reproduced with the following steps:
1. Open one sqlplus session, disable auto-commit, and insert data into the partitioned table without committing the changes.
2. Open another sqlplus session and truncate/drop an old partition (DDL operations are automatically committed, if I'm not mistaken). We will then get the ORA-00054 error.
There are some constraints worth mentioning:
1. I don't have DBA access to the database.
2. This is a legacy application and a complete refactoring isn't feasible.
So, in your opinion, is there any way of dropping these old partitions without risking an ORA-00054 error and without the intervention of the DBA? I could just delete the data, but then the number of empty partitions would grow every day.
Many thanks in advance.
This error means somebody (or something) is working with the data in the partition you are trying to drop. That is, the lock is granted at the partition level. If nobody was using the partition, your job could drop it.
Now you say this is a legacy app and you don't want to, or can't, refactor it. Fair enough. But there is clearly something not right if you have a process which is zapping data that some other process is using. I don't agree with #tbone's suggestion of just looping until the lock is released: you can't just get rid of data which somebody is using without establishing why they are still working with data that they apparently should not be using.
So, the first step is to find out what the locking session is doing. Why are they still amending this data your background job wants to retire? Here's a script which will help you establish which session has the lock.
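(In outline, such a script joins V$LOCKED_OBJECT to DBA_OBJECTS and V$SESSION; a sketch, with the table name as a placeholder:)

-- Sketch: who is holding locks on the partitioned table, and on which partition
SELECT s.sid,
       s.serial#,
       s.username,
       s.osuser,
       o.object_name,
       o.subobject_name AS partition_name
  FROM v$locked_object lo
  JOIN dba_objects     o ON o.object_id = lo.object_id
  JOIN v$session       s ON s.sid = lo.session_id
 WHERE o.object_name = 'YOUR_PARTITIONED_TABLE';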
Except that you "don't have DBA access to the database". Hmmm, that's a curly one. Basically this is not a problem which can be resolved without DBA access.
It seems like you have several issues to deal with. Unfortunately for you, they are political and architectural rather than technical, and there's not much we can do to help you further.
How about wrapping the truncate or drop in PL/SQL that tries the operation in a loop, waiting x seconds between tries, up to a maximum number of tries? Then use dbms_scheduler to call that procedure/function.
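(A minimal sketch of that idea; the table and partition names are placeholders and the retry counts are arbitrary. DBMS_LOCK.SLEEP needs an EXECUTE grant on DBMS_LOCK.)

DECLARE
  resource_busy EXCEPTION;
  PRAGMA EXCEPTION_INIT(resource_busy, -54);   -- ORA-00054
  l_max_tries CONSTANT PLS_INTEGER := 10;
BEGIN
  FOR i IN 1 .. l_max_tries LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER TABLE my_part_table DROP PARTITION p_old';
      EXIT;                        -- success, stop retrying
    EXCEPTION
      WHEN resource_busy THEN
        IF i = l_max_tries THEN
          RAISE;                   -- give up after the last attempt
        END IF;
        DBMS_LOCK.SLEEP(5);        -- wait 5 seconds between tries
    END;
  END LOOP;
END;
/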
Maybe this can help. It seems to be the same issue as the one you describe.
(ignore the comic sans, if you can) :)

Exclusive table (read) lock on Oracle 10g?

Is there a way to exclusively lock a table for reading in Oracle (10g)? I am not very familiar with Oracle, so I asked the DBA, and he said it's impossible to lock a table for reading in Oracle.
I am actually looking for something like the SQL Server (TABLOCKX HOLDLOCK) hints.
EDIT:
In response to some of the answers: the reason I need to lock a table for reading is to implement a queue that can be read by multiple clients, but it should be impossible for 2 clients to read the same record. So what actually happens is:
1. Lock table
2. Read next item in queue
3. Remove item from the queue
4. Remove table lock
Maybe there's another way of doing this (more efficiently)?
If you just want to prevent any other session from modifying the data you can issue
LOCK TABLE whatever IN EXCLUSIVE MODE
/
This blocks other sessions from updating the data, but we cannot block other people from reading it.
Note that in Oracle such table locking is rarely required, because Oracle operates a policy of read consistency. This means that if we run a query that takes fifteen minutes, the last row returned will be consistent with the first row; in other words, if the result set had been sorted in reverse order we would still see exactly the same rows.
edit
If you want to implement a queue (without actually using Oracle's built-in Advanced Queueing functionality) then SELECT ... FOR UPDATE is the way to go. This construct allows one session to select and lock one or more rows. Other sessions can update the unlocked rows. However, implementing a genuine queue is quite cumbersome, unless you are using 11g. It is only in the latest version that Oracle have supported the SKIP LOCKED clause. Find out more.
1. Lock table
2. Read next item in queue
3. Remove item from the queue
4. Remove table lock
Under this model a lot of sessions are going to be doing nothing but waiting for the lock, which seems a waste. Advanced Queuing would be a better solution.
If you want a 'roll-your-own' solution, you can look into SKIP LOCKED. It wasn't documented until 11g, but it is present in 10g. In this algorithm you would do
1. SELECT item FROM queue WHERE ... FOR UPDATE SKIP LOCKED
2. Process item
3. Delete the item from the queue
4. COMMIT
That would allow multiple processes to consume items off the queue.
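(A sketch of that dequeue pattern against a made-up queue table; remember that on 10g SKIP LOCKED is present but undocumented.)

DECLARE
  CURSOR c_next IS
    SELECT id, payload
      FROM my_queue
     ORDER BY id
       FOR UPDATE SKIP LOCKED;
  l_row c_next%ROWTYPE;
BEGIN
  OPEN c_next;
  FETCH c_next INTO l_row;   -- first row not locked by another consumer
  IF c_next%FOUND THEN
    -- process l_row.payload here
    DELETE FROM my_queue WHERE id = l_row.id;
  END IF;
  CLOSE c_next;
  COMMIT;                    -- releases the row lock
END;
/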
The TABLOCKX and HOLDLOCK hints you mentioned appear to be used for writes, not reads (based on http://www.tek-tips.com/faqs.cfm?fid=3141). If that's what you're after, would a SELECT FOR UPDATE fit your need?
UPDATE: Based on your update, SELECT FOR UPDATE should work, assuming all clients use it.
UPDATE 2: You may not be in a position to do anything about it right now, but this sort of problem is actually an ideal fit for something other than a relational database, such as AMQP.
If you mean lock a table so that no other session can read from it, then no, you can't. Why would you want to do that anyway?

How to disable oracle cache for performance tests

I'm trying to test the utility of a new summary table for my data.
So I've created two procedures to fetch the data for a certain interval, each one using a different table source, and in my C# console application I just call one or the other. The problem starts when I want to repeat this several times to get a good pattern of response times.
I got something like this: 1199,84,81,81,81,81,82,80,80,81,81,80,81,91,80,80,81,80
Probably my Oracle 10g is doing some inappropriate caching.
How can I solve this?
EDIT: See this thread on asktom, which describes how and why not to do this.
If you are in a test environment, you can put your tablespace offline and online again:
ALTER TABLESPACE <tablespace_name> OFFLINE;
ALTER TABLESPACE <tablespace_name> ONLINE;
Or you can try
ALTER SYSTEM FLUSH BUFFER_CACHE;
but again only on test environment.
When you test on your "real" system, the times you get after the first call might be more interesting, since in production the data will usually be cached anyway. Call the procedure twice, and only consider the performance results you get in subsequent executions.
"Probably my Oracle 10g is doing some inappropriate caching."
Actually it seems like Oracle is doing some entirely appropriate caching. If these tables are going to be used a lot then you would hope to have them in cache most of the time.
edit
In a comment on Peter's response, Luis said: "flushing before the call I got some interesting results like: 1370,354,391,375,352,511,390,375,326,335,435,334,334,328,337,314,417,377,384,367,393."
These findings are "interesting" because the flush means the calls take a bit longer than when the rows are in the DB cache, but not as long as the first call. This is almost certainly because the operating system has cached the physical blocks. The only way to avoid that, to truly run against an empty cache, is to reboot the server before every test.
Alternatively learn to tune queries properly. Understanding how the database works is a good start. And EXPLAIN PLAN is a better tuning aid than the wall-clock. Find out more.
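(For instance, a minimal sketch with a placeholder query:)

EXPLAIN PLAN FOR
SELECT *
  FROM my_summary_table
 WHERE period_start BETWEEN DATE '2010-01-01' AND DATE '2010-01-31';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);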

Is there a way to peek inside of another Oracle session?

I have a query editor (Toad) looking at the database.
At the same time, I am also debugging an application with its own separate connection.
My application starts a transaction, does some updates, and then makes decisions based on some SELECT statements. Because the update statements (which are many and complex) are not committed yet, the results my application gets from its SELECT are not the same as what I get if I run the same statement in Toad.
Currently I get around this by dumping the query output from the app into a text file, and reading that.
Is there a better way to peek inside another Oracle session, and see what that session sees, before the commit is complete?
Another way to ask this is: Under Oracle, can I enable dirty reads between only two sessions, without affecting anyone else's session?
No, Oracle does not permit dirty reads. Also, since the changes may not have physically been written to disk, you won't find them in the data files.
The log writer will write any pending data changes at least every three seconds, so you may be able to use the Log Miner stuff to pick it out from there.
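(A sketch of the LogMiner approach; it needs the relevant privileges, and the redo log path and table name are placeholders.)

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/app/oracle/oradata/orcl/redo01.log',
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- sql_redo shows the changes as reconstructed SQL statements
SELECT sql_redo
  FROM v$logmnr_contents
 WHERE table_name = 'MY_TABLE';

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/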
But in general, your best bet is to include your own debugging information which you can easily switch on and off as required.
It's not a full answer, I know, but while there are no dirty reads, there are locks that can give you some idea of what is going on.
In session 1 if you insert a row with primary key 7, then you will not see it when you select from session 2. (That would be a dirty read).
However, if you attempt an insert from session 2 using the primary key of 7 then it will block behind session 1 as it has to wait and see if session 1 will commit or rollback. You can use "WAIT 10" to wait 10 seconds for this to happen.
A similar story exists for updates or anything that would cause a unique constraint violation.
Can you not just set the isolation level in the session you want to peek at to 'read uncommitted', with an ALTER SESSION command or a logon trigger (I have not tried this myself), temporarily?
What I prefer to do (in general) is place debug statements in the code that remain there permanently, but are turned off in production - Tom Kyte's debug.f package is a useful place to start - http://asktom.oracle.com/tkyte/debugf
