Oracle SCN clarification

I would like to know why I am getting different SCN numbers from the two queries below.
SELECT TIMESTAMP_TO_SCN(SYSDATE) FROM DUAL - I use this for point-in-time recovery of a tablespace.
SELECT CURRENT_SCN FROM V$DATABASE - I use this for database recovery (RMAN).
Why am I getting two different SCNs?
I know the basics of SCNs, but I am still confused.
Can anyone clarify what exactly each query means?

timestamp_to_scn gives an approximate result. In any given second, a database is likely to go through thousands of SCNs so the result cannot be exact. And it would be terribly expensive to maintain a table that associated a timestamp with every SCN that the system had ever encountered. Under the covers, Oracle maintains a table that stores the current SCN every few seconds and keeps that data for a few days. In recent versions, the granularity of that table is 1 SCN every 3 seconds though that may change over time.
When you call timestamp_to_scn, therefore, you get an SCN that was created within a few seconds of the date you're interested in, but it's never going to be exact and it's not going to work forever. That's generally close enough for a point-in-time recovery: you know that you want to restore to May 20, 2015 at 12:05:00 am but you don't really care if you restore to a state a second or two earlier or later. If you're identifying a particular bad transaction that you want to restore the system to (or to just before), you wouldn't want to use timestamp_to_scn.
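For example, running both in one statement usually shows a small gap between the two values; the exact difference depends on how busy the database is:

-- CURRENT_SCN is the exact current SCN; TIMESTAMP_TO_SCN(SYSDATE) is looked up
-- from the coarse time-to-SCN mapping, so the two values rarely match exactly.
SELECT TIMESTAMP_TO_SCN(SYSDATE)            AS approx_scn_from_time,
       (SELECT current_scn FROM v$database) AS exact_current_scn
  FROM dual;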

It's not about the two different queries: even if you run just one of them repeatedly, the SCN is different every time, because SCNs are generated continuously as the database does work. Even running your query can force the database to generate a new SCN. Hence, every time you run the query you get a different SCN for the database.

Related

ORA-01555 Snapshot Too Old: rollback segment number with name "" too small

An Oracle stored procedure suddenly throws ORA-01555 while executing.
SELECT a, b
  INTO a_var, b_var
  FROM table1 s
 WHERE s.abc = SYSDATE
   AND requiedate BETWEEN ADD_MONTHS(SYSDATE, -2) AND SYSDATE
   AND currency = NVL(currency_code, 'USD')
 GROUP BY s.actcount;

table1_invoice(1) := a_var;
table1_invoice(2) := b_var;

FORALL indx IN 1 .. test.COUNT SAVE EXCEPTIONS
  INSERT INTO table2 VALUES table1_invoice(indx);
When the procedure was running and using table A, I executed an index re-build in parallel on the same table.
Once that completed, I executed gather stats on table A.
Could these things cause the ORA-01555 error? Does an index rebuild consume rollback segments so that the old snapshot of the data is removed?
I have pasted dummy code.
"I execute index re-build in parallel on the same table."
This is your likely cause. ORA-1555 pertains to being able to give you a consistent view of the data. For example, using your dummy code as a template:
You open your cursor at 9am.
You start fetching from that cursor at 9am, and let's say the total execution of the query takes 60 seconds.
So let's say you are at the 40-second mark of that fetch. Because (you) reading data does not block others from changing it, you might come across some data that has been recently changed (say 3 seconds ago) by someone else.
We can't give you THAT data, because we have to show you the data as it was at 9am (when your query started).
So we find the transaction(s) that changed that data 3 seconds ago, and use the undo information those transactions wrote to reverse out the changes. We'll continue to do that until the data looks like it did at 9am.
Now we can use that (undone) data because it is consistent with the time you opened the cursor.
So where does ORA-1555 fit in? What if our query ran for (say) an hour? Now we might need to be undoing other transactions that ran nearly an hour ago. There is only so much space we reserve for the undo of (completed) transactions, because we need to free it up for new transactions as they come in. We throw away the old stuff because those transactions have committed. So, revisiting the processing above, the following might happen:
You open your cursor at 9am.
You start fetching from that cursor at 9am, and let's say the total execution of the query takes 60 seconds.
So let's say you are at the 40-second mark of that fetch. Because (you) reading data does not block others from changing it, you might come across some data that has been recently changed (say 3 seconds ago) by someone else.
We can't give you THAT data, because we have to show you the data as it was at 9am (when your query started).
So we find the transaction(s) that changed that data 3 seconds ago and go to use the undo information those transactions wrote to reverse out the changes.
OH NO! That undo information has been discarded!!!
Now we're stuck, because we cannot give you the data as it was at 9am anymore because we can't take some changed data all the way back to 9am. The snapshot in time of the data you want is too old.
Hence "ORA-1555: Snapshot too old"
This is why the most common solution is just to retry your operation because now you are starting your query at a more recent time.
So you can see - the more activity going on against the database from OTHER sessions at the time of your query, the greater the risk of hitting an ORA-1555, because undo space is being consumed quickly and thus we might throw away the older stuff more rapidly.
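If you want to gauge how exposed a long-running query is, a rough check (assuming you can query the v$ views) is to compare its elapsed time with the undo retention the instance is actually achieving:

-- The configured undo retention target, in seconds.
SELECT value AS undo_retention_seconds
  FROM v$parameter
 WHERE name = 'undo_retention';

-- What the instance has actually been able to retain recently (auto-tuned), in seconds.
SELECT MAX(tuned_undoretention) AS tuned_retention_seconds
  FROM v$undostat;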

What will happen when inserting a row during a long running query

I am writing some data loading code that pulls data from a large, slow table in an Oracle database. I have read-only access to the data, and do not have the ability to change indexes or affect the speed of the query in any way.
My select statement takes 5 minutes to execute and returns around 300,000 rows. The system is inserting large batches of new records constantly, and I need to make sure I get every last one, so I need to save a timestamp for the last time I downloaded the data.
My question is: If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?
My gut tells me that the answer is 'no', especially since a large portion of those 5 minutes is just the time spent on the data transfer from the database to the local environment, but I can't find any direct documentation on the scenario.
"If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?"
No. Oracle enforces strict isolation levels and does not permit dirty reads.
The default isolation level is Read Committed. This means the result set you get after five minutes will be identical to the one you would have got if Oracle could have delivered all the records to you in 0.0000001 seconds. Anything committed after your query started running will not be included in the results. That includes updates to existing records as well as inserts.
Oracle does this by tracking changes to the table in the UNDO tablespace. Provided it can reconstruct the original image of that data, your query will run to completion; if for any reason the undo information has been overwritten, your query will fail with the dreaded ORA-1555: Snapshot too old. That's right: Oracle would rather hurl an exception than provide us with an inconsistent result set.
Note that this consistency applies at the statement level. If we run the same query twice within the one transaction we may see two different result sets. If that is a problem (I think not in your case) we need to switch from Read Committed to Serializable isolation.
The Concepts Manual covers Concurrency and Consistency in great depth. Find out more.
So to answer your question, take the timestamp from the time you start the select. Specifically, take the max(created_ts) from the table before you kick off the query; this should protect you from the gap Alex mentions (if records are not committed the moment they are inserted, there is the potential to lose records if you base the select on a comparison with the system timestamp). Although doing this means you're issuing two queries in the same transaction, which means you do need Serializable isolation after all! A sketch follows.
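A minimal sketch of that approach; big_table and created_ts are stand-ins for the real table and its creation-timestamp column, and :old_high_water is whatever the previous run saved:

-- Serializable isolation so both statements see the same snapshot of the data.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Capture the new high-water mark first and save it on the client ...
SELECT MAX(created_ts) AS new_high_water FROM big_table;

-- ... then pull every row newer than the high-water mark from the previous run.
SELECT *
  FROM big_table
 WHERE created_ts > :old_high_water;

COMMIT;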

Oracle slows down unexpectedly and rapidly when using SQL "update" continuously

The situation is simple: there is a table in Oracle used as a "shared table" for data exchange. The table structure and number of records remain unchanged. In the normal case, I continuously update data in this table and other processes read it for current data.
The strange thing is that when my process starts, each update statement takes approximately 2 ms to execute. After a certain period of time (like 8 hours), the time consumption increases to 10~20 ms per statement, which makes the procedure quite slow.
[image: the structure of the table]
and the update statement is like:
// One UPDATE per record; each qry.exec() is committed immediately because no transaction is started.
anaNum = anaList.size();
qry.prepare(tr("update YC set MEAVAL=:MEAVAL, QUALITY=:QUALITY, LASTUPDATE=:LASTUPDATE where YCID=:YCID"));
foreach (STbl_ANA ana, anaList)
{
    qry.bindValue(":MEAVAL", ana.meaVal);
    qry.bindValue(":QUALITY", ana.quality);
    qry.bindValue(":LASTUPDATE", QDateTime::fromTime_t(ana.lastUpdate));
    qry.bindValue(":YCID", ana.ycId);
    if (!qry.exec())
    {
        qWarning() << QObject::tr("update yc failed, ")
                   << qry.lastError().databaseText() << qry.lastError().driverText();
        failedAnaList.append(ana);   // remember rows whose update failed
    }
}
(the update statement, issued through the Qt SQL interface)
There are many reasons why an Oracle operation can slow down, but I cannot find a clue to explain this one.
I never start a transaction manually in the Qt code, which means a commit is executed after every update statement.
The update frequency is about 200 records per second, but the number changes dynamically over time. It may increase to 1000 at one moment and drop to 10 the next.
Once the time consumption goes up to 10~20 ms per statement, it never drops back down. It can only be restored to 2 ms by restarting the Oracle service (shutting down or restarting the client processes that access Oracle has no effect).
Please tell me how to solve this, or at least what should be examined.
A good starting point is to check the AWR and ASH reports.
By comparing reports from the "good" and "bad" periods you can spot the cause of the change. This could be, for example, a change of execution plan or an increase in wait events. One possible outcome is that the only change you see is that the database is waiting more on the client (i.e. the problem is not in the DB).
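For reference, a sketch of how those reports are usually generated; the scripts ship with the database under $ORACLE_HOME/rdbms/admin, and AWR/ASH require the Diagnostics Pack license:

-- From SQL*Plus as a suitably privileged user; each script prompts for the
-- snapshot range or time window and an output file name.
@?/rdbms/admin/awrrpt.sql
@?/rdbms/admin/ashrpt.sql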
Anyway, as diagnosed in the other answer, the root cause of the problem seems to be the update in a loop. If your update lists are long (say more than 10-100 entries) you can benefit from updating the whole list in a single statement using MERGE:
build a collection from your list
cast the collection as TABLE
use this table in a MERGE statement to update the rows.
See the sketch below for details.
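A minimal sketch of that approach, assuming the YC table from the question; the type names are hypothetical and the collection would really be filled from your application data:

-- SQL object/collection types so the PL/SQL collection can be queried with TABLE().
CREATE OR REPLACE TYPE t_yc_row AS OBJECT (
  ycid       NUMBER,
  meaval     NUMBER,
  quality    NUMBER,
  lastupdate DATE
);
/
CREATE OR REPLACE TYPE t_yc_tab AS TABLE OF t_yc_row;
/

DECLARE
  l_rows t_yc_tab := t_yc_tab();
BEGIN
  -- Build the collection from your list (one dummy row here for illustration).
  l_rows.EXTEND;
  l_rows(1) := t_yc_row(42, 3.14, 1, SYSDATE);

  -- One statement and one commit instead of one UPDATE per row.
  MERGE INTO yc t
  USING (SELECT ycid, meaval, quality, lastupdate FROM TABLE(l_rows)) s
     ON (t.ycid = s.ycid)
   WHEN MATCHED THEN UPDATE
        SET t.meaval     = s.meaval,
            t.quality    = s.quality,
            t.lastupdate = s.lastupdate;

  COMMIT;
END;
/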
You can trace the session while it is running quickly and again later when it is running slowly. Use the SQL trace functionality and tkprof to get a breakdown of where the update is spending its time in each case and see what has changed.
https://docs.oracle.com/cd/E25178_01/server.1111/e16638/sqltrace.htm#i4640
If you need help interpreting the results you can update your question or ask a new one.
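A sketch of tracing your own session with the standard DBMS_MONITOR package; run it once while the updates are fast and once while they are slow, then compare the tkprof output:

-- Tag the trace file so it is easy to find, then enable tracing with waits and binds.
ALTER SESSION SET tracefile_identifier = 'yc_update';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);

-- ... run the update workload here ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- Then, on the database server:  tkprof <tracefile>.trc yc_update.txt sys=no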
Secondly, as a rule, single-record updates are not the best way to do updates in Oracle. Since you already have many records to update before you prepare the query, look at execBatch.
https://doc.qt.io/qt-4.8/qsqlquery.html#execBatch
This will both execute the update faster and only issue a single commit.

Why does Oracle get stuck after a few deletes?

I have an application that does something like:
delete from tableA where columnA='somevalue' and rownum<1000
in a loop like:
while (deletedRows > 0) {
    begin tran
    deletedRows = session ... "delete from tableA where columnA='somevalue'
                               and rownum<1000" ....
    commit tran
}
It runs a few times (each delete takes nearly 20 seconds) and then hangs for a long time.
Why? Is it possible to fix this?
Thanks.
The reason why the deletes are run in a loop rather than as a single SQL statement is lack of rollback space. See this question for more info.
Every time through the loop, the query scans the table from the beginning. So it keeps re-scanning the zones where there are no longer any rows to delete (columnA='somevalue'); the rows still to be deleted are farther and farther away from the first block of the table.
If the table is big and there are no more rows with columnA='somevalue', the query still spends the time checking every row against your condition.
What you can do is create an index on columnA. Then the engine can find the rows that match the condition much faster (an index range scan instead of a full table scan), as shown below.
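For example (the index name is just an illustration):

-- With an index on columnA the delete can use an index range scan instead of
-- repeatedly full-scanning tableA.
CREATE INDEX tablea_columna_ix ON tableA (columnA);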
Another possibility, if you are in a concurrent system, is that someone updated a row that you are trying to delete but has not committed the transaction, so the row is locked.
You are probably running into several different issues. Since you say the database hangs, the main suspect is that your database is hitting ORA-00257: archiver error.
Every delete generates redo, and the redo logs are eventually written to the archive log destination. When archive log space is exhausted, your session hangs and remains stuck until someone frees the space.
Usually your DBA has a job that runs an archive log backup every hour (it might be every couple of hours, or every 5 minutes, depending on the database workload), and once the backup has completed, all sessions continue normally.
Depending on the database configuration, from the client's point of view you might not see the error at all, just the behaviour described, where your session waits until the space is freed.
In terms of design, I agree with the other users that a DELETE in a loop is not a good idea. It would be interesting to know why you are doing this loop instead of a single DELETE statement; a sketch of both alternatives follows.
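For reference, a rough sketch of the two alternatives discussed above, using the table and column names from the question:

-- Option 1: a single statement, preferable when undo/redo space allows it.
DELETE FROM tableA WHERE columnA = 'somevalue';
COMMIT;

-- Option 2: the batched loop written in PL/SQL, committing after each batch.
BEGIN
  LOOP
    DELETE FROM tableA
     WHERE columnA = 'somevalue'
       AND ROWNUM < 1000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/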

Oracle Transaction - Count table

I have a table where I need to filter by category and then find all dates that overlap some date range. This takes about 2 seconds, which is unacceptable to do on every transaction when transactions occur at roughly 50/s. The alternative is to create some tally table, but I don't know how good an idea this is because things can get out of sync.
Date on Rent    # Rented    Category
9/5/2011        5           CATEGORY1
In Oracle (PL/SQL, if it matters), how can I maintain this performance, but ensure that concurrent transactions don't screw up the increment / decrement by making it one less or one more than it really is?
I have two types of transactions, kind of like a search and a rent. Only rents will be updating this tally table (and searches just reading from it). I don't mind if rents slow down, but do not want search performance impacted. Rents can occur as frequently as 5-10 / second.
Oracle uses locks (for DML) and multi-version read consistency (for queries) to keep data consistent. The details of how they work can get complex, but the effect is a guarantee that an update/insert to your tally table will only use the data in the main table as it was when the driving query began. If there is another update or insert to your main table while you are doing an update/insert to the tally table, it won't affect it.
You'll have to experiment with your data to see if keeping a summary/tally table helps or hurts you. It really depends on how quickly the main table is getting updated, how much time you spend updating your tally table vs how much time you save by being able to select on it, and how up-to-date you need selects to be.
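A minimal sketch of keeping the tally correct under concurrency; rental_tally and rentals are assumed names, not the poster's schema. Doing the increment and the rent in the same transaction means the row lock taken by the UPDATE serializes concurrent rents for the same date and category, so the counter cannot drift:

-- The UPDATE locks the tally row until COMMIT; a concurrent rent for the same
-- date/category waits here, so increments are applied one at a time.
UPDATE rental_tally
   SET rented_count = rented_count + 1
 WHERE rent_date = TRUNC(:rent_start)
   AND category  = :category;

INSERT INTO rentals (category, rent_start, rent_end)
VALUES (:category, :rent_start, :rent_end);

COMMIT;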
