Query result in Oracle memory or Perl memory?

I am using the DBI module to run a SELECT query against Oracle. The query is prepared with DBI's prepare method and then run with the execute method.
My question: once the query has been executed, the result is held in memory until we call one of the fetchrow methods to retrieve it. Until then, is the result stored in Oracle's memory or in Perl's memory?
My understanding is that it should be in Oracle's memory, but I still wanted to confirm.

It is held in Oracle until you issue your first fetch. However, you should be aware that once you make your first fetch call, DBD::Oracle (which I presume you are using) will likely fetch multiple rows back in one go even if you asked for only one (you can see how many with RowsInCache). You can alter this behaviour with ora_prefetch_rows, ora_prefetch_memory and ora_row_cache_off.
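As a minimal sketch of tuning that prefetch behaviour with DBD::Oracle (the DSN, credentials, table and column names below are placeholders, not from the question):

use strict;
use warnings;
use DBI;

# Placeholder connection details; adjust the DSN and credentials to your site.
my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

# Ask DBD::Oracle to prefetch up to 100 rows, capped at 64 KB, per round trip.
my $sth = $dbh->prepare('SELECT id, name FROM mytable',
                        { ora_prefetch_rows => 100, ora_prefetch_memory => 65536 });
$sth->execute;

my @row = $sth->fetchrow_array;    # the first fetch pulls a whole batch across
print "rows already cached on the Perl side: $sth->{RowsInCache}\n";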

In Oracle's memory. First hint: you don't have access to that data from Perl yet.
You could test the amount of memory used by your Perl script before and after the execute statement to confirm.
See http://docstore.mik.ua/orelly/linux/dbi/ch05_01.htm

Related

Oracle Bind Query is very slow

I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes in my C# program but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic SQL, it runs just like it does in SQL Developer.
Any suggestions?
BTW, I'm using ODP.NET.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's still slow then you're just getting bitten by a bad cached execution plan; updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the settings ODP.NET runs under sometimes cause the optimizer to take a slightly more pessimistic approach. Start with updating the stats, but have your DBA capture the execution plan under both scenarios and compare them to confirm.
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate, but your title is a little more concise since it identifies that the query runs fast in SQL Developer. I'll welcome advice on handling this in another manner.
Adding ODP.NET's trace settings (TraceFileName and TraceLevel in the oracle.dataaccess.client section of your config) will send ODP.NET tracing info to a log file:
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution since it effecitively disables distributed transactions but it should help you isolate the issue. You can get a little bit more information from an oracle forumns post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
I'd guess what you're not seeing is that the execution plan is actually different (meaning the actual performance hit is occurring on the server) between the ODP.NET call and the SQL Developer call. Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables (a sketch follows below). In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
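If you want to try the stats refresh, it can be issued from any client session, since DBMS_STATS runs inside the database. A minimal sketch, wrapped here in Perl DBI (the client language of the first question on this page); the table name is taken from the query in the question and the schema is assumed to be the current user:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

# Re-gather optimizer statistics so plans based on stale stats get re-costed.
$dbh->do(q{
    BEGIN
        dbms_stats.gather_table_stats(ownname => USER, tabname => 'THETABLE');
    END;
});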
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring), then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating delays while resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think this is the issue, but I thought it should be mentioned.
Are the parameters bound to the correct data type in C#? Are the columns key1 and key2 numbers while the parameters :key1 and :key2 are strings? If so, the query may return the correct results but will require an implicit conversion. That implicit conversion is like wrapping the column in a function, to_char(key1), which prevents the index from being used.
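The question is about C#, but binding with an explicit, matching data type is easy to sketch in Perl DBI; the bind values here are made up:

use strict;
use warnings;
use DBI qw(:sql_types);

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

my $sth = $dbh->prepare(q{
    select t.Field1, t.Field2
    from theTable t
    where t.key1=:key1
    and t.key2=:key2
});

# Bind as numbers, matching the columns' data type, so Oracle does not have to
# apply an implicit conversion to the indexed columns.
$sth->bind_param(':key1', 42, SQL_INTEGER);
$sth->bind_param(':key2', 7,  SQL_INTEGER);
$sth->execute;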
Please also check the number of rows returned by the query. If the number is big, then C# is possibly fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads in that case, which is slower. To check this, try running in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The above query forces all rows to be read, so it touches the maximum number of database blocks.
A nice tool in such cases is the tkprof utility, which shows the SQL execution plan; the plans may differ between the two cases above (although they should not).
It is also possible that you have accidentally connected to different databases. In that case it is worth comparing the results of the queries.
Since you are raising "bind is slow", I assume you have checked the SQL without binds and that it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion of the key1 or key2 column (e.g. t.key1 is a number and :key1 is a string).

How can I utilize Oracle bind variables with Delphi's SimpleDataSet?

I have an Oracle 9 database from which my Delphi 2006 application reads data into a TSimpleDataSet using a SQL statement like this one (in reality it is more complex, of course):
select * from myschema.mytable where ID in (1, 2, 4)
My application starts up and executes this query quite often during the course of the day, each time with different values in the IN clause.
My DBAs have notified me that this is creating excessive load on the database server, as the query is re-parsed on every run. They suggested using bind variables instead of building the SQL statement on the client.
I am familiar with using parameterized queries in Delphi, but from the article linked to above I get the feeling that this is not exactly what bind variables are. Also, I would need these prepared statements to work across different runs of the application.
Is there a way to prepare a statement containing an IN clause once in the database and then have it executed with different parameters passed in from a TSimpleDataSet, so it won't need to be re-parsed every time my application runs?
My answer is not directly related to Delphi, but this problem in general. Your problem is that of the variable-sized in-list. Tom Kyte of Oracle has some recommendations which you can use. Essentially, you are creating too many unique queries, causing the database to do a bunch of hard-parsing. This will spike the CPU consumption (and DBA blood pressures) unnecessarily.
By making your query static, it can get by with a soft-parse or perhaps no parse at all! The DB can then cache the execution plan, the DBAs can deal with a more "stable" SQL, and overall performance should be improved.
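One common workaround is to give the in-list a fixed size: always bind the same number of slots and pad the unused ones with NULL, which never matches a row. The question is about Delphi, but the trick is language-independent; here it is sketched in Perl DBI with an arbitrarily chosen slot count of 10:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

my @ids   = (1, 2, 4);
my $slots = 10;                          # fixed slot count, chosen for illustration
die "too many ids\n" if @ids > $slots;

# The SQL text never changes, so Oracle parses it once and reuses the cursor.
my $sql = 'select * from myschema.mytable where ID in ('
        . join(',', ('?') x $slots) . ')';
my $sth = $dbh->prepare($sql);
$sth->execute(@ids, (undef) x ($slots - @ids));    # undef binds as NULL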

Is there a way to fix Oracle query in shared pool

I have a report engine that runs PreparedStatements against Oracle 11; it is a highly prioritized task.
What I see is that the first invocation of a query usually takes much, much longer than the same query afterwards (the query has different parameters and returns different data each time).
I suppose this is due to the hard parsing Oracle does on the first invocation of the query.
I wonder, is there a way of hinting to Oracle that this is a highly prioritized query which will be performed often and whose performance is critical, so that it should remain in the shared pool no matter what?
I know that I can fix the execution plan in Oracle 11, but I don't want to fix it; I want Oracle to still be able to change it as the system changes. All I want is to avoid the hard parse.
Perhaps you should change your "I suppose..." into an "I tested and have determined..." :)
The query performance may be affected by more than just parsing; when it executes it has to fetch blocks from disk into the buffer cache - subsequent executions quite possibly are taking advantage of the blocks being found in memory and so are faster.
EDIT: to answer your immediate question - a workaround may be to have a job run periodically that parses the query but doesn't execute it. You might even be able to use this to determine whether parsing or fetching is the locus of the problem.
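A minimal sketch of that workaround: since DBMS_SQL runs inside the database, the client language doesn't matter, so it is wrapped here in Perl DBI; the query text is a stand-in for the real report query:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

# Parse, but do not execute, the statement, keeping its cursor warm in the
# shared pool. Substitute the real report query for the stand-in text below.
$dbh->do(q{
    DECLARE
        c INTEGER := dbms_sql.open_cursor;
    BEGIN
        dbms_sql.parse(c, 'SELECT owner, object_name FROM all_objects', dbms_sql.native);
        dbms_sql.close_cursor(c);
    END;
});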
You can try pinning the query in the shared pool using dbms_shared_pool.keep.
But I would first make sure that you actually have an aging-out problem.
Anton,
if your query is using bind variables it will be re-used. The cursor will be cached and as long as it is re-used, it will remain in the cursor cache. Make sure that it uses bind variables. This increases re-usability and scalability.
If you don't trust the rdbms you can pin it using dbms_shared_pool.keep.
See http://psoug.org/reference/dbms_shared_pool.html
You need to find your cursor in order to do so.
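A minimal sketch of finding the cursor and pinning it; this assumes EXECUTE privilege on SYS.DBMS_SHARED_POOL and access to V$SQLAREA, and the comment marker used to locate the SQL text is just an invented convention:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

# Locate the cursor in the shared pool by its SQL text (q{} keeps Perl from
# interpolating the $ in v$sqlarea).
my ($addr, $hash) = $dbh->selectrow_array(q{
    SELECT address, hash_value
      FROM v$sqlarea
     WHERE sql_text LIKE 'SELECT /* report_q1 */%'
});

# Pin it: the name is 'address,hash_value' and the 'C' flag means cursor.
$dbh->do(q{BEGIN sys.dbms_shared_pool.keep(?, 'C'); END;}, undef, "$addr,$hash");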
Normally there is another problem that should be fixed.
Ronald.
http://ronr.blogspot.com

Oracle xmltype extract function never deallocate/reclaim memory until session down

I'm using Oracle 9.2x to do some xmltype data manipulation.
The table is as simple as tabls(xml sys.xmltype), with about 10,000 rows stored. Now I use a cursor to loop over every row, then do something like
table.xml.extract('//.../text()','...').getStringVal();
I notice the Oracle instance's UGA/PGA keeps allocating memory on each execution of the xmltype.extract() function, until the machine runs out of available memory, even when dbms_session.free_unused_user_memory() is executed after each call to extract().
If the session is closed, the memory used by the Oracle instance immediately returns to what it was before the execution.
I'm wondering: how can I free/deallocate the memory allocated by the extract function within the same session?
Thanks.
--
John
PL/SQL variables and instantiated objects are stored in session memory, which is why your program is hitting the PGA rather than the SGA. Without knowing some context it is difficult for us to give you specific advice. The general advice would be to consider how you could reduce the footprint of the variables in your PL/SQL.
For instance, you could include the extract() in the SQL statement rather than doing it in PL/SQL; retrieving just the data you want is always an efficient thing to do. Another possibility would be to use BULK COLLECT with the LIMIT clause to reduce the amount of data you're handling at any one point. A third approach might be to do away with the PL/SQL altogether and just use pure SQL. Pure SQL is way more efficient than switching between SQL and PL/SQL, because sets are better than RBAR (row by agonizing row). But as I said, because you haven't told us more about what you're trying to achieve, we cannot tell whether your CURSOR LOOP is appropriate.
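A minimal sketch of the first suggestion, pushing extract() into the SQL statement so only the extracted strings travel out of the SQL engine; the XPath is a stand-in and the table name comes from the question:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

# The extraction happens inside the SQL engine instead of accumulating
# per-call allocations in a PL/SQL loop.
my $sth = $dbh->prepare(q{
    SELECT t.xml.extract('//some/node/text()').getStringVal()
      FROM tabls t
});
$sth->execute;
while ( my ($val) = $sth->fetchrow_array ) {
    # process $val here
}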

CACHE of SQL in Oracle

How does Oracle cache a query (SQL)? Query execution contains the following steps:
1. Parse
2. Execute
3. Fetch
In the first step, Oracle checks whether the query exists in the cache (shared pool) or not. It will exist if the query text is identical and the entry has not been aged out (the pool is managed by LRU). If it exists, parsing is skipped and execution starts.
So to make queries perform well, we must use bind variables and identical SQL text.
But during parsing Oracle also verifies authorization (user access). If multiple users run the same query, how does Oracle skip or reuse the parse?
The parsing of a query is not tied to a user; it depends on the query text. Note that an exact character-for-character match is required: differences in blanks and comments will cause a query to miss the shared pool match.
The parse tree is then used in the generation of an execution plan. If the new query uses the same schema as the matched query, the existing execution plan is used.
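To make the character-for-character point concrete, a sketch in Perl DBI; the EMP/ENAME/EMPNO names are the classic SCOTT demo schema, assumed for illustration:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'scott', 'tiger', { RaiseError => 1 });

# BAD: every distinct literal yields a new SQL text, so each one is parsed anew.
for my $id (1 .. 100) {
    $dbh->selectrow_array("SELECT ename FROM emp WHERE empno = $id");
}

# GOOD: one SQL text, one shared cursor; only the bind value changes.
my $sth = $dbh->prepare('SELECT ename FROM emp WHERE empno = ?');
for my $id (1 .. 100) {
    $sth->execute($id);
    my ($name) = $sth->fetchrow_array;
}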
You can test this by creating multiple schemas, one with a small amount of data and one with a large amount, and then analyzing all the tables. Take a look at the execution plans for the same query with vastly different amounts of data. This will show the different execution plans for the same query.
Now run the query a large number of times and check the time taken by the first and then by subsequent executions. Use Oracle Trace and look in the left-hand pane for the "Re-Parse" frequency. This can also be gleaned from some of the dictionary tables.
Take a look at the Oracle documentation on using Oracle Trace.
This is the actual process when you execute a query on Oracle:
1. Parsing steps
   1. Syntax check
   2. Semantic analysis
   3. Has the query been executed in some other session?
2. Hard parse
   1. Parse
   2. Optimize
   3. Generate the plan for the query.
If the answer to step 1.3 is yes, Oracle skips the hard parse portion and uses the existing query plan.
For more info:
* AskTom: Difference between soft parse and hard parse
* Bind variables - The key to application performance
The usual practice in Oracle is to create stored procedures with definer rights, which means that the queries are executed with the privileges of their definer, regardless of who calls them. That's why the cache works well.
If you create a procedure or package with invoker rights (authid current_user), then the queries will be parsed for each invoker separately.
See Invoker Rights Versus Definer Rights for details.
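A minimal sketch of the contrast; the procedure and table names are invented, and the DDL is wrapped in Perl DBI only because that is the client language used elsewhere on this page:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password', { RaiseError => 1 });

# Definer rights (the default): always runs as the owner, so mytable resolves
# to the owner's table and one shared cursor serves every caller.
$dbh->do(q{
    CREATE OR REPLACE PROCEDURE count_rows_definer (p_cnt OUT NUMBER) AS
    BEGIN
        SELECT COUNT(*) INTO p_cnt FROM mytable;
    END;
});

# Invoker rights: mytable is re-resolved for each caller's schema, so the
# query is parsed separately per invoker.
$dbh->do(q{
    CREATE OR REPLACE PROCEDURE count_rows_invoker (p_cnt OUT NUMBER)
        AUTHID CURRENT_USER AS
    BEGIN
        SELECT COUNT(*) INTO p_cnt FROM mytable;
    END;
});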
