Find the user/process details of archive log generation on an Oracle server

We have an incident at a customer where Oracle archive logs are filling up the disk.
We want to find out why this is happening. Does anyone have any SQL or scripts to find out which process, user, etc. is generating so much redo?

"Why it's happening is simple. The app is generating more redo (insert, update, delete) than the current archivelog disk space can handle. Either they have a rogue process, or they are not performing necessary housekeeping on the archivelogs.
As to the rogue process, the best means of identifying it is probably with logminer. Logminer if fully documented, here.
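As a quicker first pass than LogMiner, you can see which currently connected sessions have generated the most redo. A sketch, assuming you can query the v$ views:

SELECT s.sid, s.username, s.program, st.value AS redo_bytes
FROM   v$sesstat  st
       JOIN v$statname sn ON sn.statistic# = st.statistic#
       JOIN v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'
ORDER  BY st.value DESC;

Note this only covers sessions that are still connected; for historical analysis you are back to LogMiner.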
As for the housekeeping, they should be taking regular rman backups of the archivelogs, with the DELETE INPUT option. At a minimum they should be doing this daily. For a very active database, it is not out of the question to take that backup more frequently, as needed to stay ahead of the generation.
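For reference, the kind of RMAN command meant here is simply:

BACKUP ARCHIVELOG ALL DELETE INPUT;

which backs up the archived logs and then deletes the originals from the archive destination, freeing the space.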
Also, don't overlook the possibility (probability?) that their archivelog destination is simply too small for their needs.
Also check that, if they are using the FRA (flash recovery area), they are really using it. If their backup destination format specifies the actual name of a directory that happens to be under the FRA, then the FRA mechanism doesn't know anything about that space consumption, and it won't be doing its own housekeeping.
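A few quick checks (assuming access to the relevant parameters and v$ views) show where the FRA points and how full Oracle thinks it is:

SHOW PARAMETER db_recovery_file_dest

SELECT name, space_limit, space_used, space_reclaimable
FROM   v$recovery_file_dest;

SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$flash_recovery_area_usage;

If the disk is full but these views show plenty of free space, something is writing under the FRA directory outside of Oracle's knowledge, which is exactly the situation described above.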

Related

Is it possible to query the un-modified data within an Oracle (9.2) Transaction?

I'm looking at doing a data-fix and need to be able to prove that the data I have intended to change is the only data changed. (For example - that a trigger hasn't modified additional columns that I wasn't expecting)
I've been looking at Oracle's Flashback Query process, which would be great, except that this is not enabled on the database in question.
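For reference, the Flashback Query being referred to looks something like this (the table and predicate are just examples):

SELECT *
FROM   accounts AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE)
WHERE  account_id = 42;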
Since this check would be carried out prior to committing the transaction, Oracle must have the "before" information squirreled away somewhere, and I wondered if there is any way of accessing this undo information?
Otherwise, I would potentially have to make a temporary copy of each table and do a compare between the live table and the backup, which may also result in inconsistencies between the backup query time and the update transaction time.
While I'm expecting the answer "no", I'm hoping someone can point me in a better direction than that which I appear to be headed at present.
Thanks!

Reverting an Oracle database to a previous state

I have some automated tests that insert some data and save it into an Oracle XE database, version 11g.
At the moment, after the tests are done, the data is manually deleted through SQL. But I was wondering: is there any other way to make the rollback easier and more efficient? I have been reading about restore points and am wondering whether that is the feature I am looking for.
How resource-intensive is the restore process, and is it good practice to use it for what I need? Or is there some other way to roll back the inserted data?
Thanks
Oracle flashback is perfect for this scenario. Just be sure you have allocated sufficient disk space for the flash recovery area (the db_recovery_file_dest_size parameter).
Usage is something like
FLASHBACK DATABASE TO RESTORE POINT 'before_upgrade';
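In practice the end-to-end sequence, run as SYSDBA in SQL*Plus, is something like this (a sketch; the restore point name is made up, and it assumes your edition and configuration allow FLASHBACK DATABASE):

-- Before the test run
CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;

-- After the tests
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT before_tests;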
Limitations:
Does not recover operations performed with NOLOGGING, such as direct path inserts
Cannot help you if you are dropping datafiles

Does using NOLOGGING in Oracle break ACID? Specifically during a power outage

When using NOLOGGING in Oracle, say for inserting new records, will my database be able to gracefully recover from a power outage if it randomly went down during the insert?
Am I correct in stating that the UNDO logs will be used for such recoveries, as opposed to REDO log usage, which would be used for recovery if the main datafiles were physically corrupted?
It seems to me, you're muddling some concepts together here.
First, let's talk about instance recovery. Instance recovery is what happens following a database crash, whether it is killed, server goes down, etc. On instance startup, Oracle will read data from the redo logs and roll forward, writing all pending changes to the datafiles. Next, it will read undo, determine which transactions were not committed, and use the data in undo to rollback any changes that had not committed up to the time of the crash. In this way, Oracle guarantees to have recovered up to the last committed transaction.
Now, as to direct loads and NOLOGGING. It's important to note that NOLOGGING is only valid for direct loads. This means that updates and deletes are never NOLOGGING, and that an INSERT is only NOLOGGING if you specify the APPEND hint.
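As a sketch, a NOLOGGING direct-path insert looks something like this (the table names are made up, and database-level FORCE LOGGING would override the NOLOGGING attribute):

ALTER TABLE big_table NOLOGGING;

INSERT /*+ APPEND */ INTO big_table
SELECT * FROM staging_table;

COMMIT;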
It's important to understand that when you do a direct load, you are literally "directly loading" data into the datafiles. So, no need to worry about issues around instance recovery, etc. When you do a NOLOGGING direct load, data is still written directly to the datafiles.
It goes something like this. You do a direct load (for now, let's set aside the issue of NOLOGGING), and data is loaded directly into the datafiles. The way that happens is that Oracle will allocate storage from above the high water mark (HWM), and format and load those brand new blocks directly. When that block allocation is made, the data dictionary updates that describe the space allocation are written to and protected by redo. Then, when your transaction commits, the changes become permanent.
Now, in the event of an instance crash, either the transaction was committed (in which case the data is in the datafiles and the data dictionary reflects those new extents have been allocated), or it was not committed, and the table looks exactly like it did before the direct load began. So, again, data up to and including the last committed transaction is recovered.
Now, NOLOGGING. Whether a direct load is logged or not, is irrelevant for the purposes of instance recovery. It will only come into play in the event of media failure and media recovery.
If you have a media failure, you'll need to recover from backup. So, you'll restore the corrupted datafile and then apply redo, from archived redo logs, to "play back" the transactions that occurred from the time of the backup to the current point in time. As long as all the changes were logged, this is not a problem, as all the data is there in the redo logs. However, what will happen in the event of a media failure subsequent to a NOLOGGING direct load?
Well, when the redo is applied to the segments that were loaded with NOLOGGING, the required data is not in the redo. The data dictionary transactions I mentioned, the ones that created the new extents where data was loaded, are in the redo, but there is nothing to populate those blocks. So, the extents are allocated to the segment, but they are also marked as invalid. If and when you attempt to select from the table and hit those invalid blocks, you'll get ORA-26040 "data block was loaded using the NOLOGGING option". This is Oracle letting you know you have a data corruption caused by recovery through a NOLOGGING operation.
So, what to do? Well, first off, any time you load data with NOLOGGING, make sure you can re-run the load, if necessary. That way, if you suffer an instance failure during the load, you can restart the load, or if you suffer a media failure between the time of the NOLOGGING load and the next backup, you can re-run the load.
Note that, in the event of a NOLOGGING direct load, you're only exposed to data loss until your next backup of the datafiles/tablespaces containing the segments that had the direct load. Once it's protected by backup, you're safe.
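If you want to know whether you are currently exposed, you can check which datafiles have recorded unrecoverable (NOLOGGING) changes, and compare the times against your last backups. A sketch, assuming access to v$datafile (RMAN's REPORT UNRECOVERABLE gives a similar answer):

SELECT file#, name, unrecoverable_change#, unrecoverable_time
FROM   v$datafile
WHERE  unrecoverable_time IS NOT NULL;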
Hope this helps clarify the ideas around direct loads, NOLOGGING, instance recovery, and media recovery.
If you use NOLOGGING, you are effectively saying you don't care about the data. NOLOGGING operations should be recoverable with procedures other than the regular database recovery procedures. Many times the recovery will happen without problems. The problem is when you have a power failure on the storage. In that case you might end up corrupting the online redo that was active, and because of that also have problems with corrupt undo segments.
So, specifically in your case: I would not bet on it.
Yes, much of the recovery would be done by reading undo, but that might get stuck because of exactly the situation you described. That is one of the nastiest problems to recover from.
To be 100% ACID compliant, a DBMS needs to be serializable; this is very rare even amongst major vendors. To be serializable, read, write and range locks need to be released at the end of a transaction. There are no read locks in Oracle, so Oracle is not 100% ACID compliant.

Dropping a table partition while avoiding error ORA-00054

I need your opinion in this situation. I’ll try to explain the scenario. I have a Windows service that stores data in an Oracle database periodically. The table where this data is being stored is partitioned by date (Interval-Date Range Partitioning). The database also has a dbms_scheduler job that, among other operations, truncates and drops older partitions.
This approach has been working for some time, but recently I had an ORA-00054 error. After some investigation, the error was reproduced with the following steps:
Open one sqlplus session, disable auto-commit, and insert data into the partitioned table, without committing the changes;
Open another sqlplus session and truncate/drop an old partition (DDL operations are automatically committed, if I'm not mistaken). We will then get the ORA-00054 error.
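Concretely, the reproduction looks something like this (table and partition names are made up):

-- Session 1: uncommitted insert into the partitioned table
INSERT INTO sales_data (sale_date, amount) VALUES (DATE '2013-01-15', 100);
-- no COMMIT

-- Session 2: DDL against an old partition fails immediately
ALTER TABLE sales_data DROP PARTITION p_2012_12;
-- ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired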
There are some constraints worth mentioning:
I don't have DBA access to the database;
This is a legacy application and a complete refactoring isn't feasible.
So, in your opinion, is there any way of dropping these old partitions without the risk of running into an ORA-00054 error and without the intervention of the DBA? I can just delete the data, but the number of empty partitions will grow every day.
Many thanks in advance.
This error means somebody (or something) is working with the data in the partition you are trying to drop. That is, the lock is granted at the partition level. If nobody was using the partition your job could drop it.
Now you say this is a legacy app and you don't want to, or can't, refactor it. Fair enough. But there is clearly something not right if you have a process which is zapping data that some other process is using. I don't agree with #tbone's suggestion of just looping until the lock is released: you can't just get rid of data which somebody is using without establishing why they are still working with data that they apparently should not be using.
So, the first step is to find out what the locking session is doing. Why are they still amending this data your background job wants to retire? Here's a script which will help you establish which session has the lock.
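The kind of query such a script runs looks something like this (a sketch; it needs access to the v$ and dba_ views, and the table name is just an example):

SELECT s.sid, s.serial#, s.username, s.program, o.object_name, l.locked_mode
FROM   v$locked_object l
       JOIN dba_objects o ON o.object_id = l.object_id
       JOIN v$session   s ON s.sid = l.session_id
WHERE  o.object_name = 'MY_PARTITIONED_TABLE';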
Except that you "don't have DBA access to the database". Hmmm, that's a curly one. Basically this is not a problem which can be resolved without DBA access.
It seems like you have several issues to deal with. Unfortunately for you, they are political and architectural rather than technical, and there's not much we can do to help you further.
How about wrapping the truncate or drop in pl/sql that tries the operation in a loop, waiting x seconds between tries, for a max num of tries. Then use dbms_scheduler to call that procedure/function.
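A rough sketch of that idea (the names, wait time and retry count are all made up; DBMS_LOCK.SLEEP needs EXECUTE on DBMS_LOCK, which may itself require a grant):

CREATE OR REPLACE PROCEDURE drop_partition_with_retry (
  p_table     IN VARCHAR2,
  p_partition IN VARCHAR2
) IS
  resource_busy EXCEPTION;
  PRAGMA EXCEPTION_INIT(resource_busy, -54);  -- ORA-00054
BEGIN
  FOR attempt IN 1 .. 10 LOOP                 -- max number of tries
    BEGIN
      EXECUTE IMMEDIATE 'ALTER TABLE ' || p_table ||
                        ' DROP PARTITION ' || p_partition;
      RETURN;                                 -- success
    EXCEPTION
      WHEN resource_busy THEN
        DBMS_LOCK.SLEEP(5);                   -- wait a few seconds, then retry
    END;
  END LOOP;
  RAISE_APPLICATION_ERROR(-20001,
    'Partition ' || p_partition || ' still locked after 10 attempts');
END drop_partition_with_retry;
/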
Maybe this can help. It seems to be the same issue as the one that you describe.
(ignore the comic sans, if you can) :)

Oracle transaction read-consistency?

I have a problem understanding read consistency in a database (Oracle).
Suppose I am the manager of a bank. A customer has got a lock (which I don't know about) and is doing some updating. Now, after he has got the lock, I am viewing their account information and trying to do something with it. But because of read consistency I will see the data as it existed before the customer got the lock. So won't that affect the inputs I am getting and the decisions I am going to make during that period?
The point about read consistency is this: suppose the customer rolls back their changes? Or suppose those changes fail because of a constraint violation or some system failure?
Until the customer has successfully committed their changes those changes do not exist. Any decision you might make on the basis of a phantom read or a dirty read would have no more validity than the scenario you describe. Indeed they have less validity, because the changes are incomplete and hence inconsistent. Concrete example: if the customer's changes include making a deposit and making a withdrawal, how valid would your decision be if you had looked at the account when they had made the deposit but not yet made the withdrawal?
Another example: a long running batch process updates the salary of every employee in the organisation. If you run a query against employees' salaries do you really want a report which shows you half the employees with updated salaries and half with their old salaries?
edit
Read consistency is achieved by using the information in the UNDO tablespace (rollback segments in the older implementation). When a session reads data from a table which is being changed by another session, Oracle retrieves the UNDO information which has been generated by that second session and substitutes it for the changed data in the result set presented to the first session.
If the reading session is a long-running query, it might fail due to the notorious ORA-01555: snapshot too old. This means the UNDO extent which contained the information necessary to assemble a read-consistent view has been overwritten.
Locks have nothing to do with read consistency. In Oracle writes don't block reads. The purpose of locks is to prevent other processes from attempting to change rows we are interested in.
For systems that have a large number of users, where users may "hold" the lock for a long time, the Optimistic Offline Lock pattern is usually used, i.e. include a version check in the UPDATE ... WHERE statement.
You can use a date, version id or something else as the row version. Also the pseudocolumn ORA_ROWSCN may be used, but you need to read up on it first.
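A minimal sketch of the optimistic check (table and column names are made up):

UPDATE accounts
SET    balance     = :new_balance,
       row_version = row_version + 1
WHERE  account_id  = :account_id
AND    row_version = :version_read_earlier;

-- 0 rows updated means someone else changed the row since you read it,
-- so re-read and retry instead of overwriting their change.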
When a record is locked due to changes or an explicit lock statement, an entry is made into the header of that block. This is called an ITL (interested transaction list). When you come along to read that block, your session sees this and knows where to go to get the read consistent copy from the rollback segment.
