I am currently working on a 9.2.0.8 Oracle database. I have some questions related to database performance, specifically redo log latches and contention. Answers from real practice will be highly appreciated. Please help.
My database currently has 25 redo log groups with 2 members each; each member is 100MB.
Is it worth keeping 25 redo log groups, each with 2 members of 100MB?
My database runs 24x7 with a minimum of 275 users and a maximum of 650. The workload is mostly SELECTs, with comparatively few INSERTs/UPDATEs/DELETEs.
For about a month now I have been observing that my database is generating archives averaging 17GB min, up to 28GB at max.
But a log switch is taking place every 5-10 minutes on average, sometimes more frequently, and sometimes even 3 times in a minute.
Yet my SPFILE says log_checkpoint_timeout=1800 (30 minutes).
Regarding redo log latches and contention, when I issue:
SELECT name, value
FROM v$sysstat
WHERE name = 'redo log space requests';
Output:
NAME                          VALUE
------------------------- ----------
redo log space requests        20422
(This value is increasing day by day.)
Whereas Oracle recommends keeping redo log space requests close to zero.
So I want to know why my database is switching logs so frequently. Is it because of the data volume, or because of something else?
My thought was that increasing the redo log buffer might resolve the problem, so I increased it from 8MB to 11MB, but I didn't find much difference.
If I increase the size of the redo log files from 100MB to 200MB, will it help? Will it reduce the log switch frequency and bring the value of redo log space requests close to zero?
Something about the information you supplied doesn't add up - if you were really generating around 20G/min of archive logs, then you would be switching your 100M log files at least 200 times per minute - not the 3 times/minute worst case that you mentioned. This also isn't consistent with your description of "... mostly SELECT's".
In the real world, I wouldn't worry about log switches every 5-10 minutes on average. With this much redo, none of the init parameters are coming into play for switching - it is happening because of the online redo logs filling up. In this case, the only way to control the switching rate is to resize the logs, e.g. doubling the log size will reduce the switching frequency by half.
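On 9i you can't resize an existing online log; the usual approach is to add new, larger groups and then drop the old 100MB groups once they are inactive. A rough sketch only - the group number and file paths below are examples, not your actual layout:
ALTER DATABASE ADD LOGFILE GROUP 26
  ('/u01/oradata/MYDB/redo26a.log', '/u02/oradata/MYDB/redo26b.log') SIZE 200M;
-- repeat for as many new groups as you want to end up with ...
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;  -- only when V$LOG shows the group as INACTIVE
Remember to remove the old member files at the OS level afterwards; DROP LOGFILE does not delete them.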
17GB of logfiles per minute seems pretty high to me. Perhaps one of the tablespaces in your database is still in online backup mode.
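You can check for datafiles left in hot backup mode with something like:
SELECT b.file#, d.name, b.status, b.time
FROM   v$backup b, v$datafile d
WHERE  d.file# = b.file#
AND    b.status = 'ACTIVE';
Any rows returned mean a BEGIN BACKUP was issued without a matching END BACKUP, which inflates redo generation.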
It would probably help to look at which sessions are generating lots of redo, and which sessions are waiting on the redo log space the most.
select name, sid, value
from v$sesstat s, v$statname n
where name in ('redo size','redo log space requests')
and n.statistic# = s.statistic#
and value > 0
order by 1,2;
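To see who those sessions actually are, you can extend that with a join to v$session (a sketch; pick whichever columns you find useful):
select sess.username, sess.program, s.sid, n.name, s.value
from   v$sesstat s, v$statname n, v$session sess
where  n.name in ('redo size', 'redo log space requests')
and    n.statistic# = s.statistic#
and    sess.sid = s.sid
and    s.value > 0
order  by s.value desc;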
Users were able to run reports before 10 am. After that, the same reports became very slow; sometimes users just didn't have the patience to wait. After some troubleshooting I found the column that was causing the delay: it was a computed column that uses a function to produce its result.
At approximately the same time I got another complaint about a slow-running report that had always worked fine. After some troubleshooting I found the condition that was causing the delay:
where (Amount - PTD) <> 0
And again, the Amount column is a computed column.
So my questions are:
Why did computed columns that were always part of the reports suddenly start to slow performance down so significantly, even when nobody is using the database?
What could really have happened at around 10 am?
And what is the disadvantage if I make those columns persisted?
Thank you
You don't provide a lot of detail here - so I can only answer in generalities.
So, in general: database performance tends to be determined by bottlenecks. A query might run fine on a table with 1 record, 10 records, 1,000 records, 100,000 records - and then at 100,001 records it suddenly gets slow. This is because you've exceeded some boundary in the system - for instance, the data doesn't fit in memory anymore.
It's really hard to identify those bottlenecks, and even harder to predict - but keep an eye on perfmon, and see what your CPU, disk i/o and memory stats are doing.
Computed columns are unlikely to be a problem in their own right - but using them in a WHERE clause (especially with another calculation) is likely to be slow if you don't have an index on that column. In your example, you might create another computed column for (Amount - PTD) and create an index on that column too.
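As a rough sketch (dbo.Reports is just a placeholder for whatever table actually holds Amount and PTD):
ALTER TABLE dbo.Reports
    ADD AmountLessPTD AS (Amount - PTD) PERSISTED;
CREATE NONCLUSTERED INDEX IX_Reports_AmountLessPTD
    ON dbo.Reports (AmountLessPTD);
The main cost of PERSISTED is the extra storage plus the work to maintain the stored value (and the index) every time Amount or PTD changes - that is the trade-off to weigh for your question about making the columns persisted.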
I have a situation where I have to read a file containing at least 1 million accounts, store them in a vector, update the table (table_name) for all of the accounts read from the file, and finally commit. Converting the file into the vector completes in a few minutes, but the later step (updating the table) is taking 2 to 3 hours. How can I optimise the process? Any suggestions to reduce the time? I am working with Oracle 12c. I have searched many blogs but didn't find any proper help.
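One common pattern for this kind of job is to avoid a million single-row round trips and bulk-bind the update instead, for example with PL/SQL FORALL. A sketch only - table_name comes from the post, but the staging table and column names below are invented:
DECLARE
  TYPE t_acct_tab IS TABLE OF table_name.account_no%TYPE;
  l_accts t_acct_tab;
BEGIN
  -- assumes the file has been loaded into a staging table first,
  -- e.g. via SQL*Loader or an external table
  SELECT account_no BULK COLLECT INTO l_accts
  FROM   staging_accounts;

  FORALL i IN 1 .. l_accts.COUNT
    UPDATE table_name
    SET    processed_flag = 'Y'      -- whatever columns you really update
    WHERE  account_no = l_accts(i);

  COMMIT;                            -- single commit at the end, as in the post
END;
/
If the update values also come from the file, a single set-based UPDATE or MERGE joining table_name to the staging table is usually even faster than the loop.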
Assume the regular workload of the database generates 1GB of redo log data every hour.
What number and size of redo log files might be appropriate for good performance?
It's much more interesting how much redo data will be generated during the peak hours! So plan for the peaks, not for the regular workload!
Some DBAs say: size your online redo logs so that they are not switched more than 3-6 times per hour during peak times. (At 1GB of redo per hour, for example, that works out to roughly 170-340MB per log, and correspondingly more at peak.)
And it's better to make them a bit bigger (add some buffer for future, heavier peaks), but make sure you archive and back them up often enough that you won't lose too many changes when you have to do a restore and recovery.
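To see how often you are actually switching, a query against v$log_history helps (a sketch; adjust the time window so it covers your peak period):
SELECT TRUNC(first_time, 'HH24') AS hr,
       COUNT(*)                  AS log_switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 7
GROUP BY TRUNC(first_time, 'HH24')
ORDER BY 1;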
I ran a query against the V$SEGMENT_STATISTICS view today and got some possibly disturbing numbers. Can someone let me know if they are bad, or if I am just reading too much into them?
The DB has been up since 01-JAN-2011, so they represent the stats since then. The DB size is 3TB.
OBJECT_NAME        OBJECT_TYPE  STATISTIC_NAME    VALUE
XXPK0EMIANCE       INDEX        space allocated   27,246,198,784
ITEMINTANCE        TABLE        space allocated   22,228,762,624
LITEMINSTANCE      TABLE        space used        19,497,901,889
XXPK0TEMINSTANCE   INDEX        space used        17,431,957,592
On the XXPK0EMIANCE index the initial extent is 64K.
Also these:
OBJECT_NAME      OBJECT_TYPE  STATISTIC_NAME  VALUE
XXPK0MINSTANCE   INDEX        ITL waits       1,123
XXIEKILSTANCE    INDEX        ITL waits       467
If these are bad, do they impact performance? My understanding is that, being wait states, things stop until they are resolved. Is that true?
Also, these looked high - are they?
LATION_PK        INDEX  logical reads      242,212,503,104
XXAK1STSCORE     INDEX  logical reads      117,542,351,984
XXPK0TSTANCE     INDEX  logical reads      113,532,240,160
TCORE            TABLE  db block changes   1,913,902,176
SDENT            TABLE  physical reads     72,161,312
XXPK0PDUCT       INDEX  segment scans      35,268,027
ESTSORE          TABLE  buffer busy waits  2,604,947
XXPK0SUCORE      INDEX  buffer busy waits  119,007
XXPK0INSTANCE    INDEX  row lock waits     63,810
XXPK0EMINSTANCE  INDEX  row lock waits     58,129
These figures are for the best part of 6 months. I don't think you can really draw anything meaningful from them.
I think you would be better off spending your time looking at the reports from AWR (or statspack if you don't have the diagnostics and tuning pack licenses). Look at the performance over a 1-hour snapshot during your busy periods and see if anything stands out there.
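If you have not generated one before, both reports are produced from SQL*Plus with scripts shipped under $ORACLE_HOME/rdbms/admin, roughly like this:
-- AWR report (needs the Diagnostics Pack license)
@?/rdbms/admin/awrrpt.sql

-- Statspack report (no extra license, but you must already be taking
-- snapshots, e.g. with: exec statspack.snap)
@?/rdbms/admin/spreport.sql
Each script prompts for the begin and end snapshot IDs, so pick two snapshots that bracket your busy hour.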
From a performance perspective, if nobody is complaining, there is probably nothing wrong.
Yes. When an object needs more space it is an overhead. The question is, how often does it need more space, and do the users notice a significant issue when this happens? As I suggested earlier, if the users do not perceive a problem, then there probably isn't a problem. I know that sounds a bit reactive, rather than proactive, but there is little point wasting time tuning something that is not causing a problem. :)
As for the stats: yes, Oracle tracks them and yes, they are useful. My problem is that you are looking at the stats over a 6-month period. I'm not sure this gives you anything useful to work with. For example, what if most of those figures were accumulated in the first month and the database has done nothing in the subsequent 5 months, or vice versa? These figures don't allow you to draw any conclusions in themselves.
Reports such as AWR and statspack use the same database statistics, but report a change over time. For example, the change in the stats over the last hour. If I look at a snapshot spanning my busy periods and see that the database is being hammered, I might want to take a look at what is using all the resources. If I check the AWR/statspack report for my busy period and the database is quiet, what is the point in trying to tune it. It's doing nothing.
So the stats are useful, but you have to understand how the context in which they are used affects their value.
Given: SQL Server 2008 R2. Quite speedy data discs; log discs lagging.
Required: LOTS LOTS LOTS of inserts - around 10,000 to 30,000 rows per second into a simple table with two indices. Inserts have an intrinsic order and will not repeat, but the order of inserts need not be maintained in the short term (i.e. multiple parallel inserts are OK).
So far: accumulating data into a queue, and regularly (on an async threadpool) emptying up to 1,024 entries into a work item that gets queued. The threadpool (custom class) has 32 possible threads and opens 32 connections.
Problem: performance is off by a factor of 300... only about 100 to 150 rows are inserted per second. Log wait time is up to 40-45% of processing time (ms per second) in SQL Server. Server CPU load is low (4% to 5% or so).
Not usable: bulk insert. The data must be written to disc as close to real time as possible. This is pretty much an archival process for data running through the system, but there are queries which need regular access to the data. I could try dumping the rows to disc and doing a bulk upload 1-2 times per second... I will give this a try.
Anyone have a smart idea? My next step is moving the log to a fast disc set (a 128GB modern SSD) to see what happens. The significant performance boost will probably change things quite a bit, but even then... the question is whether / what is feasible.
So, please fire away with the smart ideas.
OK, answering myself: going to give SqlBulkCopy a try, batching up to 65,536 entries and flushing them out every second in an async fashion. Will report on the gains.
I'm going through the exact same issue here, so I'll list the steps I'm taking to improve my performance:
Separate the log and the dbf file onto different spindle sets
Use the simple recovery model
You didn't mention any indexing requirements other than that the order of inserts isn't important; in this case a clustered index on anything other than an identity column shouldn't be used (see the table sketch after this list).
Start your scaling of concurrency again from 1 and stop when performance flattens out; anything over this will likely hurt performance.
Rather than dropping to disk to bcp, and since you are using SQL Server 2008, consider inserting multiple rows at a time; this statement inserts three rows in a single SQL call:
INSERT INTO MyTable VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9);
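As a sketch of the kind of table layout the clustered-index advice above points at (all names invented; the two nonclustered indices stand in for the "two indices" mentioned in the question):
CREATE TABLE dbo.EventArchive
(
    EventId    BIGINT IDENTITY(1,1) NOT NULL,
    SourceId   INT           NOT NULL,
    OccurredAt DATETIME2     NOT NULL,
    Payload    VARCHAR(400)  NOT NULL,
    CONSTRAINT PK_EventArchive PRIMARY KEY CLUSTERED (EventId)  -- monotonic key, so inserts append rather than split pages
);
CREATE NONCLUSTERED INDEX IX_EventArchive_SourceId   ON dbo.EventArchive (SourceId);
CREATE NONCLUSTERED INDEX IX_EventArchive_OccurredAt ON dbo.EventArchive (OccurredAt);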
I was topping out at ~500 distinct inserts per second from a single thread. After ruling out the network and CPU (0 on both client and server), I assumed that disk I/O on the server was to blame; however, inserting in batches of three got me 1,500 inserts per second, which rules out disk I/O.
It's clear that the MS client library has an upper limit baked into it (and a dive into Reflector shows some hairy async completion code).
Batching in this way, waiting for x events to be received before calling insert, has me now inserting at ~2,700 inserts per second from a single thread, which appears to be the upper limit for my configuration.
Note: if you don't have a constant stream of events arriving at all times, you might consider adding a timer that flushes your inserts after a certain period (so that you see the last event of the day!)
Some suggestions for increasing insert performance:
Increase ADO.NET BatchSize
Choose the target table's clustered index wisely, so that inserts won't lead to clustered index node splits (e.g. autoinc column)
Insert into a temporary heap table first, then issue one big "insert-by-select" statement to push all that staging table data into the actual target table (see the sketch after this list)
Apply SqlBulkCopy
Choose the "Bulk Logged" recovery model instead of the "Full" recovery model
Place a table lock before inserting (if your business scenario allows for it)
Taken from Tips For Lightning-Fast Insert Performance On SqlServer
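A sketch combining the staging-table and table-lock suggestions above (all table and column names are invented; dbo.Events stands in for your real target table):
-- heap staging table: no clustered index, so inserts are cheap
CREATE TABLE dbo.EventStaging
(
    EventId    BIGINT        NOT NULL,
    OccurredAt DATETIME2     NOT NULL,
    Payload    VARCHAR(400)  NOT NULL
);

-- fast inserts land in dbo.EventStaging; periodically flush them
-- into the real table in one set-based statement
INSERT INTO dbo.Events WITH (TABLOCK) (EventId, OccurredAt, Payload)
SELECT EventId, OccurredAt, Payload
FROM   dbo.EventStaging;

TRUNCATE TABLE dbo.EventStaging;
With the bulk-logged (or simple) recovery model and the table lock, SQL Server can use minimal logging for the INSERT ... SELECT in many cases, which is where most of the gain comes from.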