Caching of SQL in Oracle

How does Oracle cache a query (SQL)? Query execution consists of the following steps:
1. Parse
2. Execute
3. Fetch
In the first step, Oracle checks whether the query already exists in the cache (Shared Pool); it will be there if the query text is identical and it has not been aged out by the LRU algorithm. If it exists, parsing is skipped and execution starts directly.
So to make queries perform well we must use bind variables and keep the SQL text identical.
But during parsing Oracle also verifies authorization (user access). If multiple users are running the same query, how does Oracle skip/reuse the parse?

The parsing of a query is not tied to a user; it depends on the query itself. Note that an exact character-for-character match is required: differences in blanks and comments will cause a query to miss the Shared Pool match.
The parse tree is then used to generate an execution plan. If the new query resolves to the same schema objects as the matched query, the existing execution plan is reused.
You can test this by creating multiple schemas, one with a small amount of data and one with a large amount, and then analyzing all the tables. Take a look at the execution plans for the same query against vastly different amounts of data; you will see different execution plans for the same query text.
Now run the query a large number of times and compare the time taken by the first and subsequent executions. Use Oracle Trace and look in the left-hand pane for the "Re-Parse" frequency. This can also be gleaned from some of the dictionary views.
Take a look at the Oracle documentation on using Oracle Trace.
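If you want to try the plan comparison yourself, here is a minimal sketch (table, column, and value are hypothetical; run it once in each schema after gathering statistics):

explain plan for
  select * from mytable where mycol = 42;

-- dbms_xplan.display formats the plan just produced by explain plan
select * from table(dbms_xplan.display);

Comparing the output from the small and the large schema shows the optimizer choosing different plans for the same query text.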

This is the actual process when you execute a query on Oracle:
1. Parsing steps
  1.1 Syntax check
  1.2 Semantic analysis
  1.3 Has the query been executed in some other session?
2. Hard parse
  2.1 Parse
  2.2 Optimize
  2.3 Generate the plan for the query.
If the answer to 1.3 is yes, Oracle skips the hard parse portion and uses the existing query plan.
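A rough way to see the effect of bind variables from SQL*Plus (emp/empno are just the classic demo table, and the exact V$SQL columns may vary by version):

-- Each distinct literal is a new SQL text, so each one gets its own hard parse:
select ename from emp where empno = 7369;
select ename from emp where empno = 7499;

-- With a bind variable the text stays identical and the cursor is shared:
variable n number
exec :n := 7369
select ename from emp where empno = :n;

-- loads counts hard parses; parse_calls counts total (mostly soft) parses
select sql_text, loads, parse_calls, executions
  from v$sql
 where sql_text like 'select ename from emp where empno%';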
For more info:
* AskTom: Difference between soft parse and hard parse
* Bind variables - The key to application performance

The usual practice in Oracle is to create stored procedures with definer rights, which means the queries are executed with the privileges of their definer, regardless of who calls them. That's why the cache works well.
If you create a procedure or package with invoker rights (AUTHID CURRENT_USER), then the queries will be parsed for each invoker separately.
See Invoker Rights Versus Definer Rights for details.
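A minimal sketch of the difference (procedure and table names are made up):

-- Definer rights (the default): the embedded query always runs as the
-- owner, so every caller shares one parsed cursor.
create or replace procedure get_row_count(p_count out number) authid definer as
begin
  select count(*) into p_count from mytable;
end;
/

-- Invoker rights: the same text is resolved per caller, so it can map to
-- different objects and is parsed separately for each invoker.
create or replace procedure get_row_count_ir(p_count out number) authid current_user as
begin
  select count(*) into p_count from mytable;
end;
/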

Related

Oracle does not use the best DB plan

I'm struggling with performance in Oracle. The situation: subsystem B has a dblink to master DB A. On system B, a query over the dblink completes in 15 seconds, and the plan uses the appropriate indexes.
But if the same query is used to fill a table inside a stored procedure, Oracle switches to another plan with full scans. Whatever I try (hints), I can't get rid of these full scans. It's horrible.
What can I do?
The Oracle query optimizer tries up to about 2,000 different possibilities and normally chooses the best one. But if you think it has chosen the wrong plan, check the following:
1. The histograms on the queried tables are stale.
2. Your indexes cannot be used because of how the query is written.
3. You can use index hints to force the indexes to be used (see the sketch below).
4. You can use SQL Tuning Advisor or run TKPROF for performance analysis and work out what caused the bad performance. Check network and disk I/O values, etc.
If you share your query we can give you more information.
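For point 3, an index hint looks like this (table, alias, and index names are hypothetical):

-- Ask the optimizer to use index my_table_ix1 when accessing alias t
select /*+ index(t my_table_ix1) */ t.*
  from my_table t
 where t.some_col = :val;

Note that a hint is only a request; if stale statistics make the index look expensive, fixing the statistics (point 1) is usually the better long-term fix.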
It looks like we are not talking about the same query under the two conditions.
The first case is a simple SELECT over the dblink; the second case is an "INSERT ... AS SELECT" over the dblink.
Can you please share the two queries and their execution plans here, as you may have them handy? If it's not possible to paste the queries due to security limitations, please paste the execution plans.
-Abhi
After many tries, I was able to create a new SQL plan with Enterprise Manager. Now it runs perfectly.

Why is the same Select parsed again on every execution if it contains a function from a different schema?

In our application we have several queries that take a long time to parse (> 500 ms, sometimes much more) but are fast to execute (< 50 ms). This is due to complicated views and generally not a problem, as long as the parsed query is cached by Oracle.
We now run into the problem that some queries are parsed every time they are executed: these queries select from a view in one schema (SCHEMA1) and use a function from a package in a different schema (SCHEMA2) in the SELECT clause.
When we execute such a query, it is parsed on every execution. In V$SQLAREA the VERSION_COUNT is equal to the number of executions, and every execution takes the long parse time.
If we wrap the call to the function from SCHEMA2 in a local function in SCHEMA1 and use the new function in the query, only the first execution leads to a parse. All subsequent executions are much faster. In V$SQLAREA we see a VERSION_COUNT of 1 (or some number much lower than the number of executions).
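The wrapper is nothing more than a pass-through; a simplified sketch (names are made up):

-- Local function in SCHEMA1 that simply delegates to the package in SCHEMA2
create or replace function local_wrapper(p_in in number) return number as
begin
  return schema2.mypackage.myfunction(p_in);
end;
/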
Unfortunately, wrapping the functions in local functions is highly impractical in our use case, because there are many functions in SCHEMA2 and they are used with views from several other schemas.
The query doesn't contain parameters, and the circumstances of execution are exactly the same every time.
The effect does not depend on the code in the function: if we replace the actual function with a test function that just returns a constant value, we get the same effect.
It doesn't make a difference whether we execute the query from SCHEMA1, SCHEMA2, or any third schema, except when we execute it as SYSDBA. In that case, subsequent executions don't lead to new parses.
We use Oracle 12c Release 12.1.0.2.0.
Update: V$SQL_SHARED_CURSOR displays Y at AUTH_CHECK_MISMATCH for these queries. I am not sure yet what this means.
This can be caused by various reasons:
* A different SQL execution environment; for example, the caller uses different datatypes for bind variables. Check the view V$SQL_SHARED_CURSOR (see the query below).
* A different optimizer goal.
* Bind variable peeking. Check V$SQL.IS_BIND_AWARE.
* A bug in the database related to the CURSOR_SHARING parameter.
Maybe you should generate an AWR report and check wait events, and also the size of the library cache. Oracle might have problems finding free space in the library cache for your query (if it's really complex).
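To see which mismatch is actually preventing cursor sharing, you can join V$SQL to V$SQL_SHARED_CURSOR for the statement in question (the &sql_id substitution is a placeholder for your own SQL_ID):

select s.child_number,
       c.auth_check_mismatch,
       c.bind_mismatch,
       c.optimizer_mode_mismatch,
       c.translation_mismatch
  from v$sql s
  join v$sql_shared_cursor c
    on c.sql_id = s.sql_id
   and c.child_number = s.child_number
 where s.sql_id = '&sql_id';

A column showing Y tells you why Oracle created a new child cursor instead of sharing the existing one.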

Oracle Bind Query is very slow

I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes in my C# program but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic SQL, it runs just like it does in SQL Developer.
Any suggestion?
BTW, I'm using ODP.
If you are replacing the bind variables with literal values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's still slow you're just getting bitten by a bad cached execution plan. Updating the stats on that table should resolve it (a sketch follows).
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the parameters ODP.NET runs under sometimes cause a slightly more pessimistic approach. Start with updating the stats, but have your DBA capture the execution plan under both scenarios and compare to confirm.
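Refreshing the statistics is a one-liner (schema and table names are placeholders for your own):

-- Gather fresh optimizer statistics, including the indexes (cascade)
exec dbms_stats.gather_table_stats(ownname => 'MYSCHEMA', tabname => 'THETABLE', cascade => true);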
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate but your title is a little more concise since it identifies the query does run fast in sql developer. I'll welcome advice on handling in another manner.
Enabling ODP.NET tracing in your config will send tracing info to a log file. This will probably only be helpful if you can find a large gap in time; chances are the rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution since it effecitively disables distributed transactions but it should help you isolate the issue. You can get a little bit more information from an oracle forumns post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
I'd guess what you're not seeing is that the execution plan is actually different (meaning the performance hit is actually occurring on the server) between the ODP.NET call and the SQL Developer call. Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring), then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating extra delays resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Are the parameters bound to the correct data types in C#? For example, are the columns key1 and key2 VARCHAR2, but the parameters :key1 and :key2 bound as numbers? If so, the query may return the correct results but will require an implicit conversion. That implicit conversion acts like a function wrapped around the column, to_number(key1), which prevents an index on key1 from being used.
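You can confirm this from the plan. As a sketch, suppose key1 is a VARCHAR2 column and the value compared against it is numeric:

explain plan for
  select t.Field1, t.Field2 from theTable t where t.key1 = 123;

-- In the Predicate Information section, a line like
-- filter(TO_NUMBER("T"."KEY1")=123) reveals the implicit conversion
-- that disables the index on key1.
select * from table(dbms_xplan.display);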
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The above query forces the maximum number of database blocks to be read.
A nice tool in such cases is the TKPROF utility, which shows the SQL execution plan; the plans may differ between the cases above (though they should not).
It is also possible that you have accidentally connected to different databases. In such cases it is useful to compare the query results.
Since you are reporting "bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion on the key1 or key2 column (e.g. t.key1 is a string and :key1 is bound as a number).

Why "recursive sql" is used in "Dictionary-Managed Tablespaces" in oracle?

The Oracle documentation points out that locally managed tablespaces are better than dictionary-managed tablespaces in several respects. One point is that recursive SQL is used when the database allocates free blocks in a dictionary-managed tablespace.
The table fet$ has columns (TS#, FILE#, BLOCK#, LENGTH).
Could anyone explain why recursive SQL is used for allocation with fet$?
You seem to be interpreting 'recursive' in the usual programming sense, but it can have slightly different meanings:
1. Drawing upon itself, referring back. "The recursive nature of stories which borrow from each other."
2. (mathematics, not comparable) Of an expression, each term of which is determined by applying a formula to preceding terms.
3. (computing, not comparable) Of a program or function that calls itself.
...
If you interpret it as a recursive function (meaning 3) then it doesn't quite make sense: fet$ isn't updated repeatedly, and a SQL statement doesn't re-execute itself. Here 'recursive' is used more generally (meaning 1, sort of), in the sense that the SQL you run generates another layer of SQL 'under the hood'. Not the same SQL, nor the same function called by itself, but 'SQL drawing upon SQL', or 'SQL referring back to SQL', if you like.
The concepts guide - which is where I think you got your question from - says:
Avoids using the data dictionary to manage extents
Recursive operations can occur in dictionary-managed tablespaces if
consuming or releasing space in an extent results in another operation
that consumes or releases space in a data dictionary table or undo
segment.
With a table in a dictionary managed tablespace (DMT), when you insert data Oracle has to run SQL statements against the dictionary tables to identify and allocate blocks. You don't normally notice that, but you can see it in trace files and other performance views. SQL statements will be run against fet$ etc. to manage the space.
The 'recursive' part is that one SQL statement has to execute another (different) SQL statement; and that may in turn have to execute yet another (different again) SQL statement.
With a locally managed tablespace (LMT), block information is held in a bitmap within the tablespace itself. There is no dependence on the dictionary (for this, anyway). That extra layer of SQL is not needed, which saves time - both from the dictionary query itself and from potential concurrency delays, as multiple queries (across the database, for all tablespaces) access the dictionary at the same time. Managing that local bitmap is much simpler and faster.
The concepts guide also says:
Note: Oracle strongly recommends the use of locally managed tablespaces with Automatic Segment Space Management.
As David says, there's not really any benefit to using dictionary-managed tablespaces any more. Unless you've inherited an old database that still uses them (in which case migrating to LMT should be considered) or are just learning for the sake of it, you can pretty much forget about them; anything new should be using LMTs, and references to DMTs are hopefully only of historic significance.
I wanted to demonstrate the difference by running a trace on the same insert statement against an LMT and a DMT, and showing the extra SQL statements in the trace file for the DMT version; but I can't find a DMT on any database I have access to, going back to 9i, which rather backs up David's point, I suppose. Instead I'll point you to yet more documentation:
Sometimes, to execute a SQL statement issued by a user, Oracle
Database must issue additional statements. Such statements are called
recursive calls or recursive SQL statements. For example, if you
insert a row into a table that does not have enough space to hold that
row, then Oracle Database makes recursive calls to allocate the space
dynamically. Recursive calls are also generated when data dictionary
information is not available in the data dictionary cache and must be
retrieved from disk.
You can use the tracing tools described in that document to compare for yourself, if you have access to a DMT; or you can search for examples.
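If you do find a DMT to play with, the comparison is straightforward (assumes you can execute DBMS_MONITOR; the table name is hypothetical):

-- Trace the current session, run an insert, then stop tracing
alter session set tracefile_identifier = 'dmt_test';
exec dbms_monitor.session_trace_enable(waits => true, binds => false);
insert into dmt_table values (1, 'x');
commit;
exec dbms_monitor.session_trace_disable;

Running tkprof on the resulting trace file with sys=yes will then show any recursive statements against fet$, uet$, and friends that Oracle issued on your behalf.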
You can see recursive SQL referred to elsewhere, usually in errors: the error isn't directly in the SQL you are executing, but in the extra SQL Oracle issues internally to fulfil your request. LMTs just remove one instance where recursive SQL used to be necessary, and in the process remove a significant bottleneck.

How can I utilize Oracle bind variables with Delphi's SimpleDataSet?

I have an Oracle 9 database from which my Delphi 2006 application reads data into a TSimpleDataSet using a SQL statement like this one (in reality it is more complex, of course):
select * from myschema.mytable where ID in (1, 2, 4)
My application starts up and executes this query quite often during the course of the day, each time with different values in the in clause.
My DBAs have notified me that this is creating excessive load on the database server, as the query is re-parsed on every run. They suggested using bind variables instead of building the SQL statement on the client.
I am familiar with using parameterized queries in Delphi, but from the article linked to above I get the feeling that is not exactly what bind variables are. Also, I would need these prepared statements to work across different runs of the application.
Is there a way to prepare a statement containing an in clause once in the database and then have it executed with different parameters passed in from a TSimpleDataSet, so it won't need to be re-parsed every time my application runs?
My answer is not directly related to Delphi, but to this problem in general. Your problem is that of the variable-sized in-list. Tom Kyte of Oracle has some recommendations which you can use. Essentially, you are creating too many unique queries, causing the database to do a lot of hard parsing. This spikes CPU consumption (and DBA blood pressure) unnecessarily.
By making your query static, it can get by with a soft parse, or perhaps no parse at all. The DB can then cache the execution plan, the DBAs can deal with more "stable" SQL, and overall performance should improve.
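One common shape for this, following Tom Kyte's str2tbl idea, is to bind the whole list as a single delimited string so the SQL text never changes. A sketch using REGEXP_SUBSTR (10g and later; on 9i a small PL/SQL splitter function does the same job; the :csv bind name is made up):

-- The application binds :csv = '1,2,4' (or any other list); the
-- statement is parsed once and reused for every list.
select *
  from myschema.mytable
 where id in (
         select to_number(regexp_substr(:csv, '[^,]+', 1, level))
           from dual
        connect by regexp_substr(:csv, '[^,]+', 1, level) is not null
       );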
