We are experiencing sporadically long query execution times in our application. The database is Oracle 12.1 on RDS. I can see in AppDynamics that a query ran for 13 s, but when I execute it myself in Oracle SQL Developer it never takes longer than 0.1 s. I can't post the query here because there are three of them that sporadically take longer than 10 s, and I can't reproduce the slowness for any of them in SQL Developer.
We've started logging the execution plan for long-running queries using /*+ gather_plan_statistics */, and it is the same as the plan for a 0.1 s execution, except that it lacks the note "1 SQL Plan Directive used for this statement".
I'm looking for any ideas that could help to identify the root cause of this behavior.
One possibility is that you've got a cached execution plan which works fine for most parameter values, or combinations of parameter values, but which fails badly for certain values/combinations. You can try adding a non-filtering predicate such as 1 = 1 to your WHERE clause. I've read, but haven't tested, that this can be used to force a hard parse, though it may be that you need to change the value (e.g. 1 = 1, 2 = 2, 3 = 3, etc.) for each execution of your query.
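For example, a minimal sketch of the idea (the table, column, and bind names here are placeholders, not your actual query):

-- Varying a no-op predicate changes the statement text, so Oracle cannot
-- reuse the cached cursor and must hard parse a fresh plan.
SELECT o.order_id, o.total
FROM   orders o
WHERE  o.customer_id = :cust_id
  AND  1 = 1;  -- change to 2 = 2, 3 = 3, ... on subsequent executions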
Related
What is a plan hash value in Oracle? Does it imply anything about the execution time of a query? How do I find the execution time of a query in Oracle?
There are 3 views that show SQL statements that ran in your SGA.
V$SQL shows stats and is updated every 5 seconds.
V$SQLAREA shows parsed statements in memory, ready to execute.
V$SQLSTATS has greater retention than V$SQL.
If you look in V$SQL you will see that every statement has a unique SQL ID. When a statement is parsed, Oracle generates an execution plan for it and associates that plan with a hash value, which is unique to that plan. Certain factors can cause the plan to change, making the statement execute better or worse; you will then get a new plan and a new hash value for that plan.
To see the history of this, look at the DBA_HIST_SQL_PLAN view.
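For example, a rough sketch of both lookups (replace the SQL_ID placeholder with your own):

-- Cursors currently in the shared pool, one row per child cursor
select sql_id, plan_hash_value, executions, elapsed_time / 1e6 as elapsed_sec
from   v$sql
where  sql_id = 'abcd1234efgh5';  -- placeholder SQL_ID

-- Historical plans captured in AWR for the same statement
-- (DBA_HIST_% views require the Diagnostics Pack license)
select distinct plan_hash_value, timestamp
from   dba_hist_sql_plan
where  sql_id = 'abcd1234efgh5';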
There is a lot more theory around explain plans and how to optimize SQL statements, and how to give them profiles and baselines, but I hope this gives you an idea of the basics.
I'm running queries against a Vertica table with close to 500 columns and only 100 000 rows.
A simple query (like select avg(col1) from mytable) takes 10 seconds, as reported by the Vertica vsql client with the \timing command.
But when checking the query_requests.request_duration_ms column for this query, there's no trace of the 10 seconds; it reports less than 100 milliseconds.
The query_requests.start_timestamp column indicates that the beginning of the processing started 10 seconds after I actually executed the command.
The resource_acquisitions table shows no delay in resource acquisition, but its queue_entry_timestamp column also shows the queue entry occurred 10 seconds after I actually executed the command.
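For reference, the checks were along these lines (a rough sketch; the exact filter used to locate the statement is an assumption):

-- Duration and start time as recorded by Vertica
select start_timestamp, request_duration_ms, request
from   query_requests
where  request ilike 'select avg(col1) from mytable%'
order  by start_timestamp desc;

-- Queueing information around the same time
select queue_entry_timestamp, acquisition_timestamp
from   resource_acquisitions
order  by queue_entry_timestamp desc;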
The same query run on the same data but on a table with only one column returns immediately. And since I'm running the queries directly on a Vertica node, I'm excluding any network latency issue.
It feels like Vertica is doing something before executing the query. This is taking most of the time, and it is related to the number of columns in the table. Any idea what it could be, and what I could try to fix it?
I'm using Vertica 8, in a test environment with no load.
I was running Vertica 8.1.0-1; it seems the issue was caused by a Vertica bug in the query planning phase that degraded performance. It was fixed in versions >= 8.1.1:
https://my.vertica.com/docs/ReleaseNotes/8.1./Vertica_8.1.x_Release_Notes.htm
VER-53602 - Optimizer - This fix improves complex query performance during the query planning phase.
We have a stored procedure that returns all of the records that fall within a geospatial region ("geography"). It uses a CTE (with), some unions, some inner joins and returns the data as XML; nothing controversial or cutting edge but also not trivial.
This stored procedure has served us well for many years on SQL Server 2008. It has been running within 1 sec on a relatively slow server. We have just migrated to SQL Server 2016 on a super fast server with lots of memory and super fast SSDs.
The entire database and associated application is going really fast on this new server and we are very happy with it. However this one stored procedure is running in 16 sec rather than 1 sec - against exactly the same parameters and exactly the same dataset.
We have updated the indexes and statistics on this database. We have also changed the compatibility level of the database from 100 to 130.
Interestingly, I have rewritten the stored procedure to use a temporary table and INSERT rather than the CTE. This has brought the time down from 16 sec to 4 sec.
The execution plan does not provide any obvious insights into where a bottleneck may be.
We are a bit stuck for ideas. What should we do next? Thanks in advance.
--
I have now spent more time on this problem than I care to admit. I have boiled down the stored procedure to the following query to demonstrate the problem.
drop table #T
declare @viewport sys.geography=convert(sys.geography,0xE610000001041700000000CE08C22D7740C002370B7670F4624000CE08C22D7740C002378B5976F4624000CE08C22D7740C003370B3D7CF4624000CE08C22D7740C003378B2082F4624000CE08C22D7740C003370B0488F4624000CE08C22D7740C004378BE78DF4624000CE08C22D7740C004370BCB93F4624000CE08C22D7740C004378BAE99F4624000CE08C22D7740C005370B929FF4624000CE08C22D7740C005378B75A5F4624000CE08C22D7740C005370B59ABF462406F22B7698E7640C005370B59ABF462406F22B7698E7640C005378B75A5F462406F22B7698E7640C005370B929FF462406F22B7698E7640C004378BAE99F462406F22B7698E7640C004370BCB93F462406F22B7698E7640C004378BE78DF462406F22B7698E7640C003370B0488F462406F22B7698E7640C003378B2082F462406F22B7698E7640C003370B3D7CF462406F22B7698E7640C002378B5976F462406F22B7698E7640C002370B7670F4624000CE08C22D7740C002370B7670F4624001000000020000000001000000FFFFFFFF0000000003)
declare @outputControlParameter nvarchar(max) = 'a value passed in through a parameter to the stored procedure that controls the nature of the data to return. This is not the solution you are looking for'
create table #T
(value int)
insert into #T
select 136561 union
select 16482 -- These values are sourced from parameters into the stored proc
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
(
(len(@outputControlParameter) > 0 and GeoServices_Location.GeographicServicesGatewayId in (select value from #T))
or (len(@outputControlParameter) = 0 and GeoServices_Location.Coordinate.STIntersects(@viewport) = 1)
)
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
GO
With the stored procedure boiled down to this, it runs in 0 sec on SQL Server 2008 and 5 sec on SQL Server 2016.
http://www.filedropper.com/newserver-slowexecutionplan
http://www.filedropper.com/oldserver-fastexecutionplan
SQL Server 2016 is choking on the geospatial STIntersects call, with 94% of the time spent there. SQL Server 2008 spends its time on a bunch of other steps, including hash matching, parallelism, and other standard operations.
Remember this is the same database. One has just been copied to a SQL Server 2016 machine and had its compatibility level increased.
To get around the problem I have actually rewritten the stored procedure so that SQL Server 2016 does not choke. I now have it running in 250 msec. However, this should not have happened in the first place, and I am concerned that there are other previously finely tuned queries or stored procedures that are now not running efficiently.
Thanks in advance.
--
Furthermore, following a suggestion, I added the trace flag -T6534 to the startup parameters of the service. It made no difference to the query time. I also tried adding OPTION (QUERYTRACEON 6534) to the end of the query, but again it made no difference.
From the query plans you provided I see that the spatial index is not used on the newer server version.
Use a spatial index hint to make sure the query optimizer chooses the plan with the spatial index:
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location with (index ([spatial_index_name]))...
I see that the problem with the hint is the OR operation in the query predicate, so my suggestion with the hint actually won’t help in this case.
However, I see that the predicate depends on @outputControlParameter, so rewriting the query to separate these two cases might help (see my proposal below).
Also, from your query plans I see that the query plan on SQL 2008 is parallel while on SQL 2016 it is serial. Use option (recompile, querytraceon 8649) to force a parallel plan (this should help if your new superfast server has more cores than the old one).
if (len(@outputControlParameter) > 0)
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
GeoServices_Location.GeographicServicesGatewayId in (select value from #T)
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
option (recompile, querytraceon 8649)
else
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location with (index ([SPATIAL_GeoServices_Location]))
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
GeoServices_Location.Coordinate.STIntersects(@viewport) = 1
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
option (recompile, querytraceon 8649)
Check the growth settings of the data/log files on the new server vs. the old server, for both the DB the query is running on and tempdb (a small sketch for a couple of these checks follows after this list).
Check the log for I/O buffer errors.
Check the recovery model of the DBs - simple vs. full/bulk-logged.
Is this consistent behavior? Maybe another process is running during the execution?
Regarding statistics/indexes: are you sure the optimizer is working with a correct data sample? (Look at the plan.)
Many more things can be checked/done, but there is not enough info in this question.
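A minimal T-SQL sketch for a couple of these checks (the database name is a placeholder):

-- Recovery model of the user database and tempdb
select name, recovery_model_desc
from   sys.databases
where  name in ('YourDatabase', 'tempdb');  -- placeholder name

-- File sizes and growth settings (size is in 8 KB pages)
select db_name(database_id) as database_name, name, size * 8 / 1024 as size_mb,
       growth, is_percent_growth
from   sys.master_files
where  database_id in (db_id('YourDatabase'), db_id('tempdb'));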
In a DB trace, there is a query taking a long time. Can someone explain what it means? It seems to be a very generic Oracle query, not involving my custom tables.
select condition from cdef$ where rowid=:1;
I found the same query in multiple places in the .trc files (DB trace), and one of them has a huge elapsed time. So, what would be the solution to avoid it taking such a long time? I am using Oracle 11g.
You're right, that is an example of Oracle's recursive SQL, the statements it runs against the data dictionary to support our application SQL. That particular statement is the query Oracle runs to get the Search Condition of a CHECK constraint. If you are inserting or updating rows in tables with check constraints you will see it a lot.
The actual statement shouldn't take too long to run, so it is unlikely to be the source of a performance problem. Unless you are running lots of insert statements with hard-coded values. Oracle will run that query every time it parses a fresh insert or update statement. That will get expensive if you're not using bind variables.
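As a rough illustration (the table and columns here are made up), compare:

-- Hard-coded literals: every distinct statement is a fresh hard parse,
-- and parsing is where the recursive cdef$ lookup happens
insert into orders (id, qty) values (101, 5);
insert into orders (id, qty) values (102, 7);

-- Bind variables: one shared statement, parsed once and then reused
insert into orders (id, qty) values (:id, :qty);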
I'm learning about database indexes right now, and I'm trying to understand the efficiency of using them.
I'd like to see whether a specific query uses an index.
I want to actually see the difference between executing the query using an index and without using the index (so I want to see the execution plan for my query).
I am using SQL*Plus.
How do I see the execution plan and where can I found in it the information telling me whether my index was used or not?
Try using this code to first explain and then see the plan:
Explain the plan:
explain plan
for
select * from table_name where ...;
See the plan:
select * from table(dbms_xplan.display);
The estimated SQL execution plan
The estimated execution plan is generated by the Optimizer without executing the SQL query. You can generate the estimated execution plan from any SQL client using EXPLAIN PLAN FOR or you can use Oracle SQL Developer for this task.
EXPLAIN PLAN FOR
When using Oracle, if you prepend the EXPLAIN PLAN FOR command to a given SQL query, the database will store the estimated execution plan in the associated PLAN_TABLE:
EXPLAIN PLAN FOR
SELECT p.id
FROM post p
WHERE EXISTS (
SELECT 1
FROM post_comment pc
WHERE
pc.post_id = p.id AND
pc.review = 'Bingo'
)
ORDER BY p.title
OFFSET 20 ROWS
FETCH NEXT 10 ROWS ONLY
To view the estimated execution plan, you need to use DBMS_XPLAN.DISPLAY, as illustrated in the following example:
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY (FORMAT=>'ALL +OUTLINE'))
The ALL +OUTLINE formatting option allows you to get more details about the estimated execution plan than using the default formatting option.
Oracle SQL Developer
If you have installed SQL Developer, you can easily get the estimated execution plan for any SQL query without having to prepend the EXPLAIN PLAN FOR command.
The actual SQL execution plan
The actual SQL execution plan is generated by the Optimizer when running the SQL query. So, unlike the estimated Execution Plan, you need to execute the SQL query in order to get its actual execution plan.
The actual plan should not differ significantly from the estimated one, as long as the table statistics have been properly collected by the underlying relational database.
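If in doubt, statistics can be refreshed with DBMS_STATS; for example (the schema name is a placeholder, and post is the table used in the examples here):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_SCHEMA',  -- placeholder schema
    tabname => 'POST');
END;
/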
GATHER_PLAN_STATISTICS query hint
To instruct Oracle to store the actual execution plan for a given SQL query, you can use the GATHER_PLAN_STATISTICS query hint:
SELECT /*+ GATHER_PLAN_STATISTICS */
p.id
FROM post p
WHERE EXISTS (
SELECT 1
FROM post_comment pc
WHERE
pc.post_id = p.id AND
pc.review = 'Bingo'
)
ORDER BY p.title
OFFSET 20 ROWS
FETCH NEXT 10 ROWS ONLY
To visualize the actual execution plan, you can use DBMS_XPLAN.DISPLAY_CURSOR:
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST ALL +OUTLINE'))
Enable STATISTICS for all queries
If you want to get the execution plans for all queries generated within a given session, you can set the STATISTICS_LEVEL session configuration to ALL:
ALTER SESSION SET STATISTICS_LEVEL='ALL'
This will have the same effect as setting the GATHER_PLAN_STATISTICS query hint on every executed query. So, just like with the GATHER_PLAN_STATISTICS query hint, you can use DBMS_XPLAN.DISPLAY_CURSOR to view the actual execution plan.
You should reset the STATISTICS_LEVEL setting to the default mode once you are done collecting the execution plans you were interested in. This is very important, especially if you are using connection pooling, and database connections get reused.
ALTER SESSION SET STATISTICS_LEVEL='TYPICAL'
Take a look at Explain Plan. EXPLAIN works across many db types.
For SQL*Plus specifically, see its AUTOTRACE facility.
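For example, in a SQL*Plus session (the query is just a placeholder):

SET AUTOTRACE ON EXPLAIN
SELECT * FROM employees WHERE employee_id = 100;
SET AUTOTRACE OFF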
Try this:
http://www.dba-oracle.com/t_explain_plan.htm
The execution plan will mention the index whenever it is used. Just read through the execution plan.