AWR Report for execution of PL/SQL with multiple queries - oracle

How does a PL/SQL script execute?
I am currently confused by the AWR report analysis of a PL/SQL script.
I see two different SQL_IDs: one for the PL/SQL script itself and another for a query executed by the PL/SQL.
Is it expected to have two different SQL_IDs, or is some other process running the other SQL?

To take a small example: for the top-running session by CPU, when we click the same SQL_ID we are able to see the data for it.
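Seeing two SQL_IDs is normal: the PL/SQL call itself (the anonymous block or procedure call) is hashed to one SQL_ID, and every SQL statement it issues recursively gets its own. One way to confirm the relationship (a sketch; replace the substitution variable with the PL/SQL call's SQL_ID from the AWR report) is Active Session History, which records both the SQL_ID that was running and the TOP_LEVEL_SQL_ID of the call that issued it:

SELECT sql_id, top_level_sql_id, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  top_level_sql_id = '&plsql_sql_id'   -- placeholder: SQL_ID of the PL/SQL call
GROUP  BY sql_id, top_level_sql_id;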

Related

What is the difference between Parse, Execute and Fetch?

The tkprof utility formats a trace file that shows three types of information: Parse, Execute and Fetch. Could you please explain the difference between these three? What counts as a Parse, an Execute and a Fetch?
Thanks in advance for your help.
When you issue a SQL statement, Oracle:
Parses your SQL statement. That means Oracle analyzes the correctness of the syntax, checks the access rights, and creates the execution plan (or takes it from the cache).
Actually executes your SQL statement.
For SELECT statements, Oracle fetches the rows returned by your query. (For INSERT, DELETE, and UPDATE Oracle fetches nothing).
The number of each of these operations is written in the trace.
When it comes to performance tuning, the idea is to parse SQL statements once and then keep them in the cache, execute them whenever needed, and avoid closing cursors you will need again, so you reduce the number of parses.
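As a small illustration of how those counts line up in a tkprof report (a sketch only; the employees table and its columns are hypothetical), an explicit cursor opened in a loop is parsed on first use but executed and fetched many times:

DECLARE
  -- the SELECT text is parsed on first use; PL/SQL caches the cursor afterwards
  CURSOR c_emp (p_dept NUMBER) IS
    SELECT employee_id
    FROM   employees
    WHERE  department_id = p_dept;
  v_id employees.employee_id%TYPE;
BEGIN
  FOR dept IN 10 .. 30 LOOP
    OPEN c_emp(dept);                -- each OPEN counts as an Execute
    LOOP
      FETCH c_emp INTO v_id;         -- each FETCH call adds to the Fetch total
      EXIT WHEN c_emp%NOTFOUND;
    END LOOP;
    CLOSE c_emp;
  END LOOP;
END;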

Oracle: Return Large Dataset with Cursor in Procedure

I've seen lots of posts regarding the use of cursors in PL/SQL to return data to a calling application, but none of them touch on the issue I believe I'm having with this technique. I am fairly new to Oracle, but have extensive experience with MSSQL Server. In SQL Server, when building queries to be called by an application for returning data, I usually put the SELECT statement inside a stored proc with/without parameters, and let the stored proc execute the statement(s) and return the data automatically. I've learned that with PL/SQL, you must store the resulting dataset in a cursor and then consume the cursor.
We have a query that doesn't necessarily return a huge number of rows (~5K-10K), but the dataset is very wide, as it's composed of 1400+ columns. Running the SQL query itself in SQL Developer returns results instantaneously. However, calling a procedure that opens a cursor for the same query takes 5+ minutes to finish.
CREATE OR REPLACE PROCEDURE PROCNAME (RESULTS OUT SYS_REFCURSOR)
AS
BEGIN
  OPEN RESULTS FOR
    <SELECT_query_with_1400+_columns>
    ...
END;
After doing some debugging to try to get to the root cause of the slowness, I'm leaning towards the cursor returning one row at a time very slowly. I can actually see this in real time by converting the proc code into a PL/SQL block and using DBMS_SQL.RETURN_RESULT(RESULTS) after the SELECT query. When running this, I can see each row show up in the Script Output window in SQL Developer one at a time. If this is exactly how the cursor returns the data to the calling application, then I can definitely see how this is the bottleneck, as it could take 5-10 minutes to finish returning all 5K-10K rows. If I remove columns from the SELECT query, the cursor displays all the rows much faster, so it does seem like the large number of columns is an issue when using a cursor.
Knowing that running the SQL query by itself returns instant results, how could I get this same performance out of a cursor? It doesn't seem like it's possible. Is the answer putting the embedded SQL in the application code and not using a procedure/cursor to return data in this scenario? We are using Oracle 12c in our environment.
Edit: Just want to address how I am testing performance using the regular SELECT query vs the PL/SQL block with cursor method:
SELECT (takes ~27 seconds to return ~6K rows):
SELECT <1400+_columns>
FROM <table_name>;
PL/SQL with cursor (takes ~5-10 minutes to return ~6K rows):
DECLARE
  RESULTS SYS_REFCURSOR;
BEGIN
  OPEN RESULTS FOR
    SELECT <1400+_columns>
    FROM   <table_name>;
  DBMS_SQL.RETURN_RESULT(RESULTS);
END;
Some of the comments refer to what happens in the console application once all the data has been returned, but I am speaking only about the performance of the two methods described above within Oracle SQL Developer. I hope that helps clarify the point I'm trying to convey.
You can run a SQL Monitor report for the two executions of the SQL; that will show you exactly where the time is being spent. I would also consider running the two approaches in separate snapshot intervals and comparing the output of an AWR Differences report and an ADDM Compare report; you'd probably be surprised at the detail these comparison reports provide.
Also, even though having more than 255 columns in a table is a "no-no" according to Oracle, because it fragments each record across more than one database block and so increases the I/O time needed to retrieve the results, I suspect the difference you are seeing between the two approaches is not an I/O problem, since you report that fetching all the rows with straight SQL is fast. I therefore suspect a memory problem. As you probably know, PL/SQL code uses the Program Global Area (PGA), so I would check the pga_aggregate_target parameter and bump it up to, say, 5 GB (just a guess). An ADDM report run for the interval when the code ran will tell you whether the advisor recommends changing that parameter.
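For reference, a text SQL Monitor report for a single execution can be pulled with DBMS_SQLTUNE (a sketch; the SQL_ID is a placeholder you would take from V$SQL_MONITOR), the AWR Differences report is generated by the awrddrpt.sql script, and the current PGA target can be checked before changing anything:

-- SQL Monitor report for one monitored execution
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => '&sql_id',      -- placeholder SQL_ID
         type         => 'TEXT',
         report_level => 'ALL')
FROM   dual;

-- AWR Compare Periods (Differences) report, run from SQL*Plus
-- @?/rdbms/admin/awrddrpt.sql

-- check the current PGA target before bumping it (SQL*Plus)
SHOW PARAMETER pga_aggregate_target
-- ALTER SYSTEM SET pga_aggregate_target = 5G SCOPE = BOTH;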

Schedule "string too long for attribute" query in Oracle

I have several long Oracle queries (many lines of SQL) that currently run daily as an SSIS package. The queries are all CREATE TABLE AS SELECT ... with a bunch of joins, WHERE clauses and selects.
The SSIS package runs fine, but slowly, due to the nature of the linked server. It runs fine in SQL Developer. I am trying to move all the queries into Oracle as scheduled jobs, but I am running into the ORA-16612 "string value too long for attribute job_action" error. How should I proceed? I tried to make a procedure of it using "CREATE PROCEDURE ... BEGIN ... EXECUTE IMMEDIATE", but I get the same error on length.
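For what it's worth, the usual pattern for getting around the job_action length limit (a sketch only; the procedure, job, and table names are made up, and the asker reports still hitting a length error with a procedure, so treat this as the general shape rather than a confirmed fix) is to compile the long CTAS into a stored procedure and have the scheduler job reference only the procedure name:

CREATE OR REPLACE PROCEDURE refresh_daily_tables AS
BEGIN
  -- the long CREATE TABLE AS SELECT lives here instead of in job_action
  EXECUTE IMMEDIATE 'CREATE TABLE daily_summary AS SELECT * FROM source_view';
END;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_DAILY_TABLES_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REFRESH_DAILY_TABLES',  -- short string, well under the attribute limit
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/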

Evisions Argos - Execute a procedure before the report

We are migrating some reports from Oracle Reports to Evisions Argos. In Oracle Reports there was a "Before Report" trigger that fired before the actual report query ran. This allowed us to fill some tables before the query and keep the whole business logic in the report itself. Is it possible to do something like this in Argos? Where could you execute PL/SQL code before running the query for the report? Either the report level or the DataBlock level would work for us.
You can add a dataset to the report and call the procedure there, enclosed in a BEGIN/END block.
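In other words, the extra dataset's query is just a small anonymous block that runs the procedure before the report query does; something along these lines, where the procedure name is hypothetical:

BEGIN
  report_owner.prepare_report_tables;  -- hypothetical procedure that fills the staging tables
END;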

Same stored procedure acts differently on two/(three) different IDEs

I just created a stored procedure in an MS SQL database using TOAD.
What it does is accept an ID that some records are associated with, and insert those records into a table.
The next part of the stored procedure uses the input ID to search the table the items were inserted into and returns them as a result set to the user, just to confirm that the information was inserted.
In TOAD, it does what is expected: it inserts the data and returns the information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and ends there. It does not seem to execute the second part of the stored procedure, which is a SELECT statement.
I have a feeling this is because of the JDBC adapter. The reason I'm asking is that I'm using the reporting tool Pentaho Report Designer, and it would make things much easier if I could do both things at once. Pentaho Report Designer also uses JDBC adapters; maybe that's not a coincidence?
If there are other things I can tweak, I'd really appreciate any suggestions.
This is a guess, but worth considering...
There are things called "batches", which are sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements within a single server-side session. Sending a set of SQL statements to the server as a batch will often produce different results than sending them one at a time, where each statement is executed in its own session.
I haven't used TOAD (or Oracle) in a while, but as I recall, it dealt with batches differently than the other IDE I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it ends up in a separate session, that might explain what is happening.
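If the database behind this is actually Oracle (the question mentions SQL Developer and JDBC), another way to sidestep tool-specific handling of a trailing SELECT is to return the confirmation rows explicitly through an OUT ref cursor, as in the PROCNAME example earlier. A minimal sketch with hypothetical table and column names:

CREATE OR REPLACE PROCEDURE insert_and_confirm (
  p_id      IN  NUMBER,
  p_results OUT SYS_REFCURSOR
) AS
BEGIN
  -- copy the records associated with the ID into the target table
  INSERT INTO target_table (id, payload)
    SELECT id, payload FROM staging_table WHERE id = p_id;

  -- hand the inserted rows back explicitly instead of relying on a trailing SELECT
  OPEN p_results FOR
    SELECT id, payload FROM target_table WHERE id = p_id;
END;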
