How does the PL/SQL context work? (Oracle)

I have some doubts about the PL/SQL context:
Is the PL/SQL context static?
Is the PL/SQL context synchronized?
If a procedure is called twice at the same time and the first call takes 20 seconds to complete, will the second call wait those 20 seconds before it starts executing?
Thanks.

Each database session that references a package has an independent instance of the package. All package state (i.e. global package variables) is distinct for each session.
There is no synchronization between multiple sessions invoking the same package procedures or functions, except what might occur as a natural side effect of the database operations that they perform and the locking required to achieve them.
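A minimal sketch of that per-session behaviour (all names are illustrative): compile the package below, then call state_demo.bump from two different sessions. Each session's counter starts at 1, because each session gets its own copy of g_counter.

create or replace package state_demo as
  g_counter pls_integer := 0;  -- one copy per session, never shared
  procedure bump;
end state_demo;
/
create or replace package body state_demo as
  procedure bump is
  begin
    g_counter := g_counter + 1;
    dbms_output.put_line('This session''s counter: ' || g_counter);
  end bump;
end state_demo;
/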

Your questions are a bit difficult to understand. I'll try to answer them anyway.
PL/SQL is a procedural language, so I wouldn't talk about instances. The code only exists once, the package variables exist once per session, and the local variables exist once per call of the procedure or function. You cannot access another session's variables or memory.
All calls to PL/SQL code are synchronous. There are no concepts like multi-threading or shared memory (in PL/SQL). Note, however, that Oracle is a multi-user system, so other sessions might be changing data in the database at the same time, and many of these changes are temporarily hidden from you due to transaction isolation. But that doesn't influence any variables in memory.
Procedures never block unless they try to change the same database row as another session. This isn't related to any particular piece of PL/SQL code and can also be experienced with an SQL command run from a different tool.
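A hedged illustration of that blocking rule, using the classic demo table emp (table and values are illustrative):

-- Session 1 (holds a row lock until it commits or rolls back):
update emp set sal = sal + 100 where empno = 7369;
-- Session 2 (blocks on the same row until session 1 ends its transaction):
update emp set sal = sal + 200 where empno = 7369;
-- Session 2 updating a different row would not block at all.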

Related

Running a stored procedure in a multi-threaded way in Oracle

I have a job which picks a record from a cursor and then calls a stored procedure which processes the record picked up from the cursor.
The stored procedure has multiple queries to process the records. In all, the procedure takes about 0.3 seconds to process a single record picked up by the cursor, but since the cursor contains more than 100k records, it takes hours to complete the job.
The queries in the stored procedure are all optimized.
I was thinking of making the procedure run in a multi-threaded way, as in Java and other programming languages.
Can it be done in Oracle? Or is there any other way I can reduce the run time of my job?
I agree with the comments regarding processing cursors in a loop. As Tom Kyte often said, "Row at a time [processing] is slow at a time"; Oracle performs best with set-based operations, and row-at-a-time operations usually have scalability issues (i.e. they are very susceptible to poor performance when things change on the DB, such as CPU capacity, workload, number of records that need processing, changes in the size of underlying tables, ...).
You probably already know that Oracle since 8i has a Java VM built into the DB engine, so you might be able to have Java code wrapped as PL/SQL, but this is not for the faint of heart [not saying that you are, just sayin'].
Before going to the trouble of re-writing your application, I would recommend the following tuning approach, as it may yield some actionable tunings [this assumes Diagnostics and Tuning Pack licenses; it won't remove the scalability issues but may lessen their impact]:
In versions of Oracle 11g and above:
Find the top-level SQL id recorded in gv$active_session_history and dba_hist_active_sess_history for the call to the PL/SQL procedure.
Examine the wait events for the sql_ids under that top_level_sql_id; they tell you what the SQL is waiting on (a query sketch follows this list).
Run the tuning advisor on those sql_ids and check for any tuning recommendations. Even when SQL is already sub-second, getting it from hundredths of a second to thousandths of a second can have a big impact when it is called many times.
Run the ADDM report for the period when the procedure is running. Often you will find that heavy PL/SQL processes require an increase in PGA. Further, ADDM may advise other relevant actions (e.g. increase SGA, session cached cursors, DB writer processes, log buffer, run the segment tuning advisor, ...).
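A hedged sketch of the wait-event step, assuming the top-level SQL id of the PL/SQL call has already been found (the bind name is illustrative):

select sql_id, event, count(*) as ash_samples
  from gv$active_session_history
 where top_level_sql_id = :plsql_call_sql_id
 group by sql_id, event
 order by ash_samples desc;
-- Rows with a NULL event are samples spent on CPU rather than waiting.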

Why is the same Select parsed again on every execution if it contains a function from a different schema?

In our application we have several queries that take a long time to parse (> 500 ms, sometimes much more) but are fast to execute (< 50 ms). This is due to complicated views and is generally not a problem, as long as the parsed query is cached by Oracle.
We now run into the problem that some queries are parsed every time they are executed: these queries select from a view in one schema (SCHEMA1) and use a function from a package in a different schema (SCHEMA2) in the select clause.
When we execute such a query, it is parsed on every execution. In V$SQLAREA the VERSION_COUNT is equal to the number of executions. Every execution takes the long parse time.
If we wrap the call to the function from SCHEMA2 in a local function in SCHEMA1 and use the new function in the query, only the first execution leads to a parse. All subsequent executions are much faster. In V$SQLAREA we see a VERSION_COUNT of 1 (or some number much lower than the number of executions).
Unfortunately, wrapping the functions in local functions is highly impractical in our use case, because there are many functions in SCHEMA2 and they are used with views from several other schemas.
The query doesn't contain parameters, and the circumstances of execution are exactly the same every time.
The effect does not depend on the code in the function: if we replace the actual function with a test function that just returns a constant value, we get the same effect.
It doesn't make a difference whether we execute the query from SCHEMA1, SCHEMA2 or any third schema, except when we execute it as SYSDBA. In that case, subsequent executions don't lead to new parses.
We use Oracle 12c Release 12.1.0.2.0.
Update: V$SQL_SHARED_CURSOR displays Y at AUTH_CHECK_MISMATCH for these queries. I am not sure yet what this means.
This can be caused by various reasons:
Different SQL execution environment. For example, the caller uses different datatypes for bind variables. Check the view V$SQL_SHARED_CURSOR.
Different optimizer goal.
Bind variable peeking. Check v$sql.is_bind_aware.
Some bug in the database triggered by the cursor_sharing parameter.
Maybe you should generate an AWR report and check the wait events, and also the size of the library cache. Oracle might have problems finding free space in the library cache for your query (if it's really complex).
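Since the question's update mentions AUTH_CHECK_MISMATCH, here is a hedged sketch for checking which sharing criterion failed (only a few of the many reason columns are shown; the bind name is illustrative):

select child_number, auth_check_mismatch, bind_mismatch,
       optimizer_mode_mismatch, translation_mismatch
  from v$sql_shared_cursor
 where sql_id = :sql_id;
-- Each 'Y' names a criterion that forced a new child cursor instead of
-- sharing an existing one. AUTH_CHECK_MISMATCH means the authorization
-- checks differed between executions, which fits the cross-schema symptom.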

How can I utilize Oracle bind variables with Delphi's SimpleDataSet?

I have an Oracle 9 database from which my Delphi 2006 application reads data into a TSimpleDataSet using a SQL statement like this one (in reality it is more complex, of course):
select * from myschema.mytable where ID in (1, 2, 4)
My application starts up and executes this query quite often during the course of the day, each time with different values in the in clause.
My DBAs have notified me that this is creating excessive load on the database server, as the query is re-parsed on every run. They suggested using bind variables instead of building the SQL statement on the client.
I am familiar with using parameterized queries in Delphi, but from the article linked to above I get the feeling that is not exactly what bind variables are. Also, I would need these prepared statements to work across different runs of the application.
Is there a way to prepare a statement containing an in clause once in the database and then have it executed with different parameters passed in from a TSimpleDataSet, so it won't need to be re-parsed every time my application runs?
My answer is not directly related to Delphi, but to this problem in general. Your problem is that of the variable-sized in-list. Tom Kyte of Oracle has some recommendations which you can use. Essentially, you are creating too many unique queries, causing the database to do a lot of hard parsing. This spikes CPU consumption (and DBA blood pressure) unnecessarily.
By making your query static, it can get by with a soft parse or perhaps no parse at all. The DB can then cache the execution plan, the DBAs can deal with a more "stable" SQL statement, and overall performance should improve.
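A hedged sketch of the variable in-list technique (the type and function names are illustrative, not from the original answer; on 9i the CAST is required). The statement text stays constant no matter how many ids you pass, so it is parsed once and reused:

create or replace type num_list as table of number;
/
create or replace function csv_to_numbers(p_csv in varchar2) return num_list as
  l_result num_list := num_list();
  l_rest   varchar2(32767) := p_csv || ',';
  l_pos    pls_integer;
begin
  -- split '1,2,4' into a collection of numbers
  while l_rest is not null loop
    l_pos := instr(l_rest, ',');
    l_result.extend;
    l_result(l_result.count) := to_number(trim(substr(l_rest, 1, l_pos - 1)));
    l_rest := substr(l_rest, l_pos + 1);
  end loop;
  return l_result;
end;
/
-- One static statement, one bind variable holding e.g. '1,2,4':
select * from myschema.mytable
 where id in (select column_value
                from table(cast(csv_to_numbers(:id_csv) as num_list)));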

Oracle xmltype extract function never deallocates/reclaims memory until the session goes down

I'm using Oracle 9.2x to do some xmltype data manipulation.
The table is as simple as tabls(xml sys.xmltype), with around 10,000 rows stored. Now I use a cursor to loop over every row, then do something like
table.xml.extract('//.../text()','...').getStringVal();
I notice the Oracle instance keeps allocating UGA/PGA memory per execution of the xmltype.extract() function, until the machine runs out of available memory, even when dbms_session.free_unused_user_memory() is executed after each call to extract().
If the session is closed, the memory used by the Oracle instance is returned right away, back to what it was before the execution.
I'm wondering: how can I free/deallocate the memory allocated by the extract function within the same session?
Thanks.
--
John
PL/SQL variables and instantiated objects are stored in session memory, which is why your program is hitting the PGA rather than the SGA. Without knowing some context it is difficult for us to give you specific advice. The general advice would be to consider how you can reduce the footprint of the variables in your PL/SQL.
For instance, you could include the extract() in the SQL statement rather than doing it in PL/SQL; retrieving just the data you want is always an efficient thing to do. Another possibility would be to use BULK COLLECT with the LIMIT clause to reduce the amount of data you're handling at any one point. A third approach might be to do away with the PL/SQL altogether and just use pure SQL. Pure SQL is far more efficient than switching between SQL and PL/SQL, because sets are better than RBAR (row by agonizing row). But as I said, because you haven't told us more about what you're trying to achieve, we cannot tell whether your CURSOR LOOP is appropriate.
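Two hedged sketches of those suggestions, assuming a table tabls(xml sys.xmltype) as in the question (the XPath is illustrative). The first pushes the extraction into SQL so no XMLType instance lands in a PL/SQL variable; the second caps how many rows are held in memory at once:

-- 1) Extract in SQL, not in PL/SQL:
select extractvalue(t.xml, '/doc/name') as name from tabls t;

-- 2) Process in bounded batches with BULK COLLECT ... LIMIT:
declare
  type t_names is table of varchar2(4000);
  l_names t_names;
  cursor c is select extractvalue(xml, '/doc/name') from tabls;
begin
  open c;
  loop
    fetch c bulk collect into l_names limit 500;
    for i in 1 .. l_names.count loop
      null;  -- process l_names(i) here
    end loop;
    exit when c%notfound;
  end loop;
  close c;
end;
/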

Implementing user-defined db parameters/properties in Oracle

OK, the question title probably isn't the best, but I'm looking for a good way to implement an extensible set of parameters for Oracle database applications that "stay with" the host/instance. By "stay with", I mean that I'd like to rule out just having an Oracle table of name/value pairs that would have to be modified if I create a test/QA instance by cloning the production instance. (For example, imagine a parameter called email_error_address that should be set to prod_support@abc.com in production and qa_support@abc.com in testing.)
These parameters need to be accessed from both PL/SQL code running in the database and client-side code. I started out doing this by overloading the plsql_cc_flags init parameter (not a solution I'm proud of), but this is getting messy to maintain and parse.
[Edit]
Ideally, the implementation would allow changes to the list without restarting the instance, similar to the dynamically-modifiable init parameters.
You want to have a separate set of values for each environment, and you want these values to be independent of the data, so that they don't get overridden if you import data from another instance.
The solution is to use an external table (provided you are on 9i or higher). Because external tables hold their data in an OS file, they are independent of the database. To apply changed values, all you need to do is overwrite the OS file.
All you need to do is ensure that the files for each environment are kept separate. This is easy enough if Test, QA, Production, etc. are on their own servers. If they are on the same server, then you will need to distinguish them by file name or directory path; in either case you may need to issue a bit of DDL to correct the location in the event of a database refresh.
The drawback to using external tables is that they can be a bit of a performance overhead: they are really intended for bulk loading. If this is likely to be a problem, you could use caching with a user-defined namespace, or CONTEXT. Load the values into memory using DBMS_SESSION.SET_CONTEXT(), either on demand or with an ON LOGON trigger. Retrieve the values with wrapper calls to SYS_CONTEXT(). Because the namespace is in session memory, retrieval is quite fast. René Nyffenegger has a simple example of working with CONTEXT: check it out.
While I've been writing this up, I see you have added a requirement to change things on the fly. As I have said already, this is easy with an OS file, but the use of caching makes things slightly more difficult. The solution would be to use a globally accessible CONTEXT. Have a routine which loads all the values at startup, which you can also call whenever you refresh the OS file. A sketch of the external table plus context approach follows.
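A hedged sketch of that combination, assuming a config file of name=value lines (all object names, the directory path, and the file name are illustrative):

create directory app_config_dir as '/etc/myapp';

create table app_parameters (
  name  varchar2(64),
  value varchar2(256)
)
organization external (
  type oracle_loader
  default directory app_config_dir
  access parameters (
    records delimited by newline
    fields terminated by '='
    (name char(64), value char(256))
  )
  location ('app_parameters.conf')
);

create or replace context app_ctx using app_params_pkg;

create or replace package app_params_pkg as
  procedure load_all;  -- call from an ON LOGON trigger or after a file refresh
  function get(p_name in varchar2) return varchar2;
end app_params_pkg;
/
create or replace package body app_params_pkg as
  procedure load_all is
  begin
    -- cache every name/value pair from the OS file into the context
    for r in (select name, value from app_parameters) loop
      dbms_session.set_context('app_ctx', r.name, r.value);
    end loop;
  end load_all;

  function get(p_name in varchar2) return varchar2 is
  begin
    return sys_context('app_ctx', p_name);  -- fast in-memory lookup
  end get;
end app_params_pkg;
/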
You could use environment variables that can be set per Oracle user (the account that starts up the Oracle database) or per server. The environment variables can be read with the DBMS_SYSTEM.GET_ENV procedure.
I tend to use a system_parameters table, as sketched below. If you're concerned about it being overwritten, put it in its own schema and create a public synonym.
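A minimal sketch of that layout (schema and object names are illustrative):

create table param_owner.system_parameters (
  name  varchar2(64) primary key,
  value varchar2(256) not null
);
create public synonym system_parameters for param_owner.system_parameters;
grant select on param_owner.system_parameters to public;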
@APC's answer is clever.
You could solve the performance overhead by adding a materialized view on top of the external table(s). You would refresh it after RMAN cloning and after each update of the config files; a sketch follows.
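A hedged sketch of that refinement, building on the illustrative app_parameters external table above (external tables cannot be fast-refreshed, so a complete refresh on demand is assumed):

create materialized view app_parameters_mv
  build immediate
  refresh complete on demand
  as select name, value from app_parameters;

-- After cloning, or after each update of the config file:
begin
  dbms_mview.refresh('APP_PARAMETERS_MV', method => 'C');
end;
/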
