I have an AWR report covering an issue of decreasing performance on Oracle 19c. In the Instance Efficiency Percentages (Target 100%) section I notice a negative value for Flash Cache Hit %; comparing with other clients, the normal value is 100 or 0.
As shown in the screenshot below, taken while the issue was occurring:
Can you help me understand why this negative value appears?
In JDBC the default fetch size is 10, but I guess that's not the best fetch size when I have a million rows. I understand that a fetch size too low reduces performance, but also if the fetch size is too high.
How can I find the optimal size? And does this have an impact on the DB side, does it chew up a lot of memory?
If your rows are large then keep in mind that all the rows you fetch at once will have to be stored in the Java heap in the driver's internal buffers. In 12c, Oracle has VARCHAR(32k) columns, if you have 50 of those and they're full, that's 1,600,000 characters per row. Each character is 2 bytes in Java. So each row can take up to 3.2MB. If you're fetching rows 100 by 100 then you'll need 320MB of heap to store the data and that's just for one Statement. So you should only increase the row prefetch size for queries that fetch reasonably small rows (small in data size).
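For illustration, a minimal sketch of that arithmetic (the 50 fully populated 32k columns are the hypothetical worst case described above, not measured values):

long charsPerColumn = 32_000;                      // one fully populated VARCHAR2(32k) column
long columns = 50;
long bytesPerRow = charsPerColumn * columns * 2;   // 2 bytes per char in Java, ~3.2 MB per row
int fetchSize = 100;
long bufferBytes = bytesPerRow * fetchSize;        // ~320 MB of heap for a single statement
System.out.printf("~%d MB of heap for fetchSize=%d%n", bufferBytes / 1_000_000, fetchSize);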
As with (almost) anything, the way to find the optimal size for a particular parameter is to benchmark the workload you're trying to optimize with different values of the parameter. In this case, you'd need to run your code with different fetch size settings, evaluate the results, and pick the optimal setting.
In the vast majority of cases, people pick a fetch size of 100 or 1000 and that turns out to be a reasonably optimal setting. The performance difference among values at that point is generally pretty minimal; you would expect that most of the performance difference between runs was the result of normal random variation rather than being caused by changes in the fetch size. If you're trying to get the last iota of performance for a particular workload in a particular configuration, you can certainly do that analysis. For most folks, though, 100 or 1000 is good enough.
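A minimal benchmark sketch along those lines, assuming the Oracle thin driver is on the classpath; the URL, credentials, and query are placeholders to adapt to your environment:

import java.sql.*;

public class FetchSizeBenchmark {
    static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";   // placeholder
    static final String SQL = "select * from big_table";                    // placeholder query

    public static void main(String[] args) throws SQLException {
        for (int fetchSize : new int[] {10, 100, 1000, 5000}) {
            long start = System.nanoTime();
            long rows = 0;
            try (Connection con = DriverManager.getConnection(URL, "user", "password");
                 Statement stmt = con.createStatement()) {
                stmt.setFetchSize(fetchSize);                // rows fetched per round trip
                try (ResultSet rs = stmt.executeQuery(SQL)) {
                    while (rs.next()) {
                        rows++;                              // drain the result set
                    }
                }
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("fetchSize=%d: %d rows in %d ms%n", fetchSize, rows, ms);
        }
    }
}

Run it a few times per setting and compare the timings rather than trusting a single run.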
The default value of the JDBC fetch size property is driver-specific, and for the Oracle driver it is indeed 10.
For some queries the fetch size should be larger, for others smaller.
I think a good idea is to set a global fetch size for the whole project and override it for individual queries where it should be bigger.
Look at this article:
http://makejavafaster.blogspot.com/2015/06/jdbc-fetch-size-performance.html
It describes how to set the fetch size globally and override it for carefully selected queries using different approaches: Hibernate, JPA, Spring JDBC templates, or the core JDBC API, along with a simple benchmark for an Oracle database.
As a rule of thumb you can (a core-JDBC sketch follows the list):
set the fetch size to 50 - 100 as a global setting
set the fetch size to 100 - 500 (or even more) for individual queries
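With the core JDBC API that combination might look like the sketch below; oracle.jdbc.defaultRowPrefetch is the Oracle driver's connection property acting as the global setting, setFetchSize overrides it per statement, and the URL, credentials, and query are placeholders (assumes java.sql.* and java.util.Properties imports):

Properties props = new Properties();
props.setProperty("user", "scott");                          // placeholder credentials
props.setProperty("password", "tiger");
props.setProperty("oracle.jdbc.defaultRowPrefetch", "100");  // global default for this connection

try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", props);
     PreparedStatement ps = con.prepareStatement("select * from orders")) {   // placeholder query
    ps.setFetchSize(500);                                     // override for this individual query
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process the row
        }
    }
}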
JDBC does have a default prefetch size of 10. Check out OracleConnection.getDefaultRowPrefetch in the Oracle JDBC Javadoc.
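For example, a small sketch assuming the Oracle JDBC driver and an open java.sql.Connection named connection:

oracle.jdbc.OracleConnection oraCon = connection.unwrap(oracle.jdbc.OracleConnection.class);
System.out.println("default row prefetch: " + oraCon.getDefaultRowPrefetch());   // 10 unless changed
oraCon.setDefaultRowPrefetch(100);   // applies to statements created after this call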
tl;dr
How to figure out the optimal fetch size for the select query
Evaluate some maximal amount of memory (bytesInMemory); 4 MB, 8 MB or 16 MB are good starts.
Evaluate the maximal size of each column in the query and sum up those sizes (bytesPerRow)
...
Use this formula: fetch_size = bytesInMemory / bytesPerRow
You may adjust the formula result to have predictable values.
Finally, test with different bytesInMemory values and/or different queries to evaluate the results in your application.
The above response was inspired by the Apache MetaModel project (retired to the Apache Attic as of this writing). They found an answer for this exact question. To do so, they built a class for calculating a fetch size given a maximal memory amount. This class is based on an Oracle whitepaper explaining how Oracle JDBC drivers manage memory.
Basically, the class is constructed with a maximal memory amount (bytesInMemory). Later, it is asked for a fetch size for a Query (an Apache MetaModel class). The Query class helps find the number of bytes (bytesPerRow) a typical query result row would have. The fetch size is then calculated with the formula below:
fetch_size = bytesInMemory / bytesPerRow
The fetch size is also adjusted to stay in the range [1, 25000]. Other adjustments are made during the calculation of bytesPerRow, but that is too much detail for here.
This class is named FetchSizeCalculator. The link leads to the full source code.
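A much-simplified sketch of the same idea (this is not the MetaModel class itself; the clamp range mirrors the [1, 25000] range mentioned above):

public class SimpleFetchSizeCalculator {
    private static final int MIN_FETCH_SIZE = 1;
    private static final int MAX_FETCH_SIZE = 25_000;

    private final long bytesInMemory;

    public SimpleFetchSizeCalculator(long bytesInMemory) {   // e.g. 8 * 1024 * 1024
        this.bytesInMemory = bytesInMemory;
    }

    // fetch_size = bytesInMemory / bytesPerRow, clamped to [MIN_FETCH_SIZE, MAX_FETCH_SIZE]
    public int getFetchSize(long bytesPerRow) {
        long fetchSize = bytesInMemory / Math.max(1, bytesPerRow);
        return (int) Math.min(MAX_FETCH_SIZE, Math.max(MIN_FETCH_SIZE, fetchSize));
    }
}

With bytesInMemory of 8 MB (8 * 1024 * 1024) and a bytesPerRow of 2048, getFetchSize returns 4096.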
I've found a gap of 18 numbers in my Oracle sequence that appeared during the holidays, and found no traces of this crime in the log files. Can I somehow see the time when the sequence was updated, or is there any other way to trace the disappearance of sequence numbers?
As the other answers/comments state, sequences are never guaranteed to be gap free. Gaps can happen in many ways.
One of the things about sequences is that they default to CACHE 20. That means that when NEXTVAL is called, the sequence in the database is actually increased by 20, the value is returned, and the next 19 values are just returned from memory (the shared pool).
If the sequence is aged out of the shared pool, or the shared pool is flushed, or the database is restarted, the values "left over" in the sequence memory cache are lost, as the sequence next time picks up the value from the database, which was set 20 higher last time.
As you state it happened during the holidays, I guess a likely cause is that your sequence used 2 values from memory, then the DBA did maintenance over the holidays that caused the shared pool to flush, and the remaining 18 values in the sequence memory cache are gone.
That is normal behaviour and the reason why sequences work fast. You can get "closer" to gap-free with NOCACHE NOORDER, but it will cost performance and never be quite gap-free anyway.
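You cannot see when the cached values were lost, but you can at least confirm how the sequence is defined; a small JDBC sketch (the sequence name is a placeholder, and connection is assumed to be an open java.sql.Connection):

String sql = "select cache_size, order_flag, last_number from user_sequences where sequence_name = ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, "MY_SEQ");                               // placeholder sequence name
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            // last_number reflects the value recorded in the data dictionary, not the last NEXTVAL handed out
            System.out.printf("cache=%d order=%s last_number=%d%n",
                    rs.getInt("cache_size"), rs.getString("order_flag"), rs.getLong("last_number"));
        }
    }
}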
A sequence is not guaranteed to produce a gap free sequence of numbers. It is used to produce a series of (unique) numbers that is highly scalable in a multi-user environment.
If your application requires that some numeric identifier on a table is sequential without gaps then you will need another way of generating those identifiers.
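One common alternative is a counter table locked with SELECT ... FOR UPDATE; a sketch under assumed table and column names, which is gap-free but serializes callers and therefore does not scale the way a sequence does:

// Assumed table: create table id_counter (name varchar2(30) primary key, last_value number)
long nextGapFreeId(Connection con, String counterName) throws SQLException {
    con.setAutoCommit(false);
    try (PreparedStatement lock = con.prepareStatement(
             "select last_value from id_counter where name = ? for update");
         PreparedStatement bump = con.prepareStatement(
             "update id_counter set last_value = ? where name = ?")) {
        lock.setString(1, counterName);
        try (ResultSet rs = lock.executeQuery()) {
            rs.next();
            long next = rs.getLong(1) + 1;
            bump.setLong(1, next);
            bump.setString(2, counterName);
            bump.executeUpdate();
            return next;   // commit together with the row that uses this id
        }
    }
}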
Oracle sequences are NOORDER by default; please refer to the Oracle documentation: https://docs.oracle.com/cd/B13789_01/server.101/b10759/statements_6014.htm. Hence, with NOORDER, ordered behaviour cannot be expected. You can check whether your Oracle sequence is defined with ORDER; if it is not, this is expected behaviour, otherwise it is some other problem.
I have seen the DBA team advise setting the sequence cache to a higher value at the time of performance optimization, increasing the value from 20 to 1000 or 5000. The Oracle docs say of the cache value:
Specify how many values of the sequence the database preallocates and keeps in memory for faster access.
Somewhere in the AWR report I can see,
select SEQ_MY_SEQU_EMP_ID.nextval from dual
Can any performance improvement be seen if I increase the cache value of SEQ_MY_SEQU_EMP_ID?
My question is:
Does the sequence cache play any significant role in performance? If so, how do I know what cache value is sufficient for a sequence?
Sequence values are served from the Oracle cache until they are used up. When all of them have been used, Oracle allocates a new batch of values and updates the data dictionary.
If you have 100,000 records to insert and the cache size is 20, Oracle will update the data dictionary 5,000 times, but only 20 times if you set the cache size to 5,000.
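As a sketch, resizing the cache is a one-line DDL change, run here through JDBC (the sequence name comes from the question above, the cache value is only an example, and connection is an open java.sql.Connection):

try (Statement stmt = connection.createStatement()) {
    // 100,000 inserts with CACHE 20 -> ~5,000 dictionary updates; with CACHE 5000 -> ~20
    stmt.execute("alter sequence seq_my_sequ_emp_id cache 5000");
}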
More information that may help you: http://support.esri.com/en/knowledgebase/techarticles/detail/20498
If you omit both CACHE and NOCACHE, then the database caches 20 sequence numbers by default. Oracle recommends using the CACHE setting to enhance performance if you are using sequences in an Oracle Real Application Clusters environment.
Using the CACHE and NOORDER options together results in the best performance for a sequence. When the CACHE option is used without the ORDER option, each instance caches a separate range of numbers and sequence numbers may be assigned out of order by the different instances. So the higher the value of CACHE, the fewer writes to the dictionary, but the more sequence numbers that might be lost. But there is no point in worrying about losing the numbers, since a rollback or shutdown will definitely "lose" a number anyway.
CACHE option causes each instance to cache its own range of numbers, thus reducing I/O to the Oracle Data Dictionary, and the NOORDER option eliminates message traffic over the interconnect to coordinate the sequential allocation of numbers across all instances of the database. NOCACHE will be SLOW...
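As a sketch, such a sequence definition might look like this (the sequence name and cache value are only illustrative; connection is an open java.sql.Connection):

try (Statement stmt = connection.createStatement()) {
    // CACHE: each RAC instance caches its own range; NOORDER: no interconnect coordination of ordering
    stmt.execute("create sequence emp_id_seq start with 1 increment by 1 cache 1000 noorder");
}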
Read this
By default the cache of an Oracle sequence contains 20 values. We can redefine it by giving a CACHE clause in the sequence definition. The CACHE clause is beneficial when we want to generate large numbers of values, in which case it takes less time than normal; otherwise there is no drastic performance increase from declaring a CACHE clause in the sequence definition.
I have done some research and found some relevant information in this regard:
We need to check the database for sequences which are high-usage but defined with the default cache size of 20 - the performance benefits of altering the cache size of such a sequence can be noticeable.
Increasing the cache size of a sequence does not waste space; the cache is still defined by just two numbers, the last used and the high water mark; it is just that the high water mark is jumped by a much larger value every time it is reached.
A cached sequence will return values exactly the same as a non-cached one. However, a sequence cache is kept in the shared pool just as other cached information is. This means it can age out of the shared pool in the same way as a procedure if it is not accessed frequently enough. Everything in the cache is also lost when the instance is shut down.
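A quick way to find such candidates is to list the sequences still on the default cache of 20; a sketch assuming access to the USER_SEQUENCES view and an open java.sql.Connection named connection:

try (Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery(
         "select sequence_name, cache_size, last_number from user_sequences where cache_size = 20")) {
    while (rs.next()) {
        System.out.printf("%s cache=%d last_number=%d%n",
                rs.getString(1), rs.getInt(2), rs.getLong(3));
    }
}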
Besides spending more time updating the Oracle data dictionary, small sequence caches can have other negative effects if you work with a clustered Oracle installation.
In Oracle 10g RAC Grid, Services and Clustering 1st Edition by Murali Vallath it is stated that if you happen to have
an Oracle Cluster (RAC)
a non-partitioned index on a column populated with an increasing sequence value
concurrent multi-instance inserts
you can incur high contention on the rightmost index block and experience a lot of Cluster Waits (up to 90% of total insert time).
If you increase the size of the relevant sequence cache you can reduce the impact of Cluster Waits on your index.
Gathering statistics on some Oracle tables takes a long time. These tables have record counts ranging from 2 million to 9 million records. The tables have around 5-6 indexes each.
The Oracle version is
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
PL/SQL Release 10.2.0.1.0 - Production
"CORE 10.2.0.1.0 Production"
TNS for IBM/AIX RISC System/6000: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
The gather stats syntax is
dbms_stats.gather_table_stats('OWNER', 'TABLE_NAME', estimate_percent => 100, method_opt => 'for all columns size auto', cascade => true);
We cannot change the parameters of the above gather stats command, as the application vendor insists that these parameters be used.
Please let me know if there is anything we could do to reduce the time taken by the gather statistics job. I noticed that when the job runs it reduces application performance a little, and this is not acceptable.
I also noticed that some of the tables occupy a lot of space on disk, but the real data (estimated as record count multiplied by average row length) is a lot less. It looks like the tables need to be compacted/shrunk/have their high water mark reset, etc.
Some tables for example occupy 9 GB on Disk but Real Data is shown to be 1.2 GB . . . Almost 70% Space Wasted in Fragmentation.
Will a ALTER TABLE Shrink reduce the overall time taken to collect statistics on the table? Is it recommended?
Yes, shrinking space will help. If you can afford to take the application down for a while, and the tables aren't going to just bounce right back to their previous size, then shrinking space is always a good idea.
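If you do shrink, the statements themselves are short; a sketch with a placeholder table name, assuming the segment lives in an ASSM tablespace and connection is an open java.sql.Connection:

try (Statement stmt = connection.createStatement()) {
    stmt.execute("alter table big_table enable row movement");   // required before shrinking
    stmt.execute("alter table big_table shrink space");          // compacts rows and lowers the high water mark
}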
Other than that, if the parameters can't change there's not much you can do to improve things. Setting the DEGREE parameter can significantly improve performance in some cases. I know you said you can't change any parameters, but I don't see how they could complain about that one. Although it may make the job run faster, it would probably impact system performance even more (but for a shorter period of time).
The best solution might be to upgrade to 11g, where any sane application would use estimate_percent => dbms_stats.auto_sample_size. Statistics collection in 11g is much better than 10g. With features like improved auto sample algorithms, incremental statistics, setting statistics preferences, and concurrent statistics, gathering statistics is often much faster and more accurate.
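For comparison, an 11g-style call might look like the sketch below, executed here as an anonymous PL/SQL block through JDBC; the owner, table name, and parallel degree are only examples, and connection is an open java.sql.Connection:

try (CallableStatement cs = connection.prepareCall(
        "begin dbms_stats.gather_table_stats(" +
        " ownname => 'OWNER', tabname => 'TABLE_NAME'," +
        " estimate_percent => dbms_stats.auto_sample_size," +
        " method_opt => 'for all columns size auto'," +
        " degree => 4," +
        " cascade => true); end;")) {
    cs.execute();
}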
CREATE SEQUENCE S1
START WITH 100
INCREMENT BY 10
CACHE 10000000000000000000000000000000000000000000000000000000000000000000000000
Will firing a statement with such a big cache size even create the sequence S1?
What is the maximum cache size that I can provide?
http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_6015.htm#SQLRF01314
Quote from 11g docs ...
Specify how many values of the sequence the database preallocates and keeps in memory for faster access. This integer value can have 28 or fewer digits. The minimum value for this parameter is 2. For sequences that cycle, this value must be less than the number of values in the cycle. You cannot cache more values than will fit in a given cycle of sequence numbers. Therefore, the maximum value allowed for CACHE must be less than the value determined by the following formula:
(CEIL (MAXVALUE - MINVALUE)) / ABS (INCREMENT)
If a system failure occurs, then all cached sequence values that have not been used in committed DML statements are lost. The potential number of lost values is equal to the value of the CACHE parameter.
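For example, applying the quoted formula to an illustrative cycling sequence with MINVALUE 1, MAXVALUE 1000 and INCREMENT BY 10:

long maxValue = 1000, minValue = 1, increment = 10;                     // illustrative values
double limit = Math.ceil(maxValue - minValue) / Math.abs(increment);    // 99.9
System.out.println("CACHE must be less than " + limit);                 // so 99 at most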
Determining the optimal value is a matter of determining the rate at which you will generate new values, and thus the frequency with which recursive SQL will have to be executed to update the sequence record in the data dictionary. Typically it's higher for RAC systems to avoid contention, but then they are also generally busier as well. Performance problems relating to an insufficient sequence cache are generally easy to spot through AWR/Statspack and other diagnostic tools.
Looking in the Oracle API, I don't see a maximum cache size specified (Reference).
Here are some guidelines on setting an optimal cache size.