The first execution of a query in our Oracle 18c data warehouse takes two minutes. But if I cancel the query after a few seconds on that first execution and re-run it, it completes in only 3 seconds. What happens when I cancel the first execution that makes the second execution so fast?
The execution plans of the first and second executions are exactly the same, all statistics are up to date, and none are stale; below is the execution plan.
And below are the optimizer parameters:
memoptimize_pool_size 0
inmemory_optimized_arithmetic DISABLE
plsql_optimize_level 2
optimizer_features_enable 12.1.0.2
optimizer_mode ALL_ROWS
optimizer_index_cost_adj 100
optimizer_index_caching 0
optimizer_dynamic_sampling 0
optimizer_ignore_hints FALSE
optimizer_secure_view_merging TRUE
optimizer_use_pending_statistics FALSE
optimizer_capture_sql_plan_baselines FALSE
optimizer_use_sql_plan_baselines TRUE
optimizer_use_invisible_indexes FALSE
optimizer_adaptive_reporting_only FALSE
optimizer_adaptive_plans TRUE
optimizer_inmemory_aware TRUE
optimizer_adaptive_statistics FALSE
Should I change the value of any parameter in the above list to fix the slow first execution, e.g. increase optimizer_dynamic_sampling?
Oracle Database uses cache memory to store query results; cancelling a query's execution has nothing to do with it.
Oracle Database uses the buffer cache to store data blocks read from the disk.
On the other hand, the Result Cache, introduced in Oracle 11g, does exactly what its name implies: it caches query results in a slice of the shared pool.
When a query is executed for the first time and the data is not yet cached, Oracle fetches the blocks from disk and caches them in the buffer cache; the next time you execute the same query, the blocks are simply fetched from the buffer cache.
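One way to check whether the buffer cache explains the difference is to compare physical and logical reads for the statement, and (on a test system only) flush the cache between runs. This is a sketch; the LIKE pattern is a placeholder for your actual SQL text:

```sql
-- Test systems only: flush the buffer cache to reproduce "cold" timings.
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- After re-running the query, compare disk reads vs. buffer gets.
-- A fast second run with high buffer_gets and near-zero disk_reads
-- points at the buffer cache.
SELECT sql_id, executions, buffer_gets, disk_reads,
       elapsed_time / 1e6 AS elapsed_s
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* your query */%';
```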
It could be that data is cached in the buffer cache, as noted in the previous answer. It could also be that the SQL is taking a long time to parse, and the second execution is reusing the shared cursor. One way to test this would be to see how long it takes to EXPLAIN the statement.
It's definitely not the Result Cache; you would see this in the execution plan.
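To run that test, time EXPLAIN PLAN on its own: it parses and optimizes the statement without fetching data, so if it is slow, the cost is in parsing. A sketch (paste your real statement in place of the placeholder):

```sql
SET TIMING ON                         -- SQL*Plus / SQL Developer timing
EXPLAIN PLAN FOR
SELECT /* paste the slow query here */ * FROM dual;

-- Session-level parse statistics (centiseconds for the time counters):
SELECT sn.name, ss.value
FROM   v$mystat   ss
JOIN   v$statname sn ON sn.statistic# = ss.statistic#
WHERE  sn.name IN ('parse time cpu', 'parse time elapsed', 'parse count (hard)');
```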
Related
What is a plan hash value in Oracle? Does it imply anything about the execution time of a query? How do I find the execution time of a query in Oracle?
There are 3 views that show SQL statements that ran in your SGA.
V$SQL shows stats and is updated every 5 seconds.
V$SQLAREA shows parsed statements in memory, ready to execute.
V$SQLSTATS has greater retention than V$SQL.
So if you look in V$SQL, you will see that every statement has a unique SQL ID. When the statement is parsed, Oracle generates an execution plan for the SQL and associates that plan with a hash value, which uniquely identifies that plan. Certain factors can cause the plan to change, making it execute better or worse; you then get a new plan and a new hash value for that plan.
To see the history of this, look at view DBA_HIST_SQL_PLAN.
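Putting those views together, a sketch of how you might look up a statement's plan history and per-plan timings (the SQL_ID here is a placeholder):

```sql
-- Current cursors in the SGA: one row per child cursor, with its plan hash
-- and cumulative elapsed time (microseconds in elapsed_time).
SELECT sql_id, child_number, plan_hash_value, executions,
       elapsed_time / 1e6 AS elapsed_s
FROM   v$sql
WHERE  sql_id = 'abcd1234efgh5';      -- placeholder SQL_ID

-- Historical plans retained by AWR (licensing permitting):
SELECT DISTINCT sql_id, plan_hash_value, timestamp
FROM   dba_hist_sql_plan
WHERE  sql_id = 'abcd1234efgh5';
```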
There is a lot more theory around execution plans, how to optimize SQL statements, and how to give them profiles and baselines, but I hope this gives you an idea of the basics.
We are experiencing sporadically long query executions in our application. The database is Oracle 12.1 on RDS. I can see in AppDynamics that a query ran for 13 s, but when I execute it myself in Oracle SQL Developer it never takes longer than 0.1 s. I can't post the query here, as there are 3 of them that sporadically take longer than 10 s, and I can't reproduce the slowness for any of them in SQL Developer.
We've started logging execution plans for long-running queries using /*+ gather_plan_statistics */, and the plan is the same as for the 0.1 s executions, except that it lacks the note "1 SQL Plan Directive used for this statement".
I'm looking for any ideas that could help to identify the root cause of this behavior.
One possibility is that you've got a cached execution plan which works fine for most parameter values, or combination of parameter values, but which fails badly for certain values/combinations. You can try adding a non-filtering predicate such as 1 = 1 to your WHERE clause. I've read but haven't tested that this can be used to force a hard parse, but it may be that you need to change the value (e.g. 1 = 1, 2 = 2, 3 = 3, etc) for each execution of your query.
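A sketch of that trick, assuming a hypothetical table and bind variable (the point is only that changing the redundant predicate changes the SQL text, so Oracle treats it as a new statement and hard-parses it instead of reusing the cached cursor):

```sql
-- Hypothetical query; 'orders' and :b1 are placeholders.
SELECT *
FROM   orders
WHERE  status = :b1
AND    1 = 1;    -- vary this (2 = 2, 3 = 3, ...) on each run to force a new cursor
```

Note this only helps diagnose the problem; as a fix, you would normally look at bind-variable peeking, adaptive cursor sharing, or a SQL plan baseline instead.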
I'm running queries against a Vertica table with close to 500 columns and only 100 000 rows.
A simple query (like select avg(col1) from mytable) takes 10 seconds, as reported by the Vertica vsql client with the \timing command.
But when I check the query_requests.request_duration_ms column for this query, there is no trace of the 10 seconds: it reports less than 100 milliseconds.
The query_requests.start_timestamp column indicates that processing began 10 seconds after I actually executed the command.
The resource_acquisitions table shows no delay in resource acquisition, but its queue_entry_timestamp column also shows that the queue entry occurred 10 seconds after I actually executed the command.
The same query run on the same data but on a table with only one column returns immediately. And since I'm running the queries directly on a Vertica node, I'm excluding any network latency issue.
It feels like Vertica is doing something before executing the query; that something takes most of the time and is related to the number of columns in the table. Any idea what it could be, and what I could try to fix it?
I'm using Vertica 8, in a test environment with no load.
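For anyone debugging a similar gap, the timestamps described above can be compared directly in Vertica's monitoring tables. A sketch (query_requests is a standard v_monitor table; the ILIKE pattern is a placeholder for the actual statement):

```sql
-- Compare when the engine says processing started with the reported
-- duration; a large gap before start_timestamp points at pre-execution
-- work such as query planning.
SELECT request, start_timestamp, request_duration_ms
FROM   query_requests
WHERE  request ILIKE 'SELECT AVG(col1)%'
ORDER  BY start_timestamp DESC
LIMIT  5;
```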
I was running Vertica 8.1.0-1; it turns out the issue was caused by a Vertica bug in the query planning phase that degraded performance. It was fixed in versions >= 8.1.1:
https://my.vertica.com/docs/ReleaseNotes/8.1./Vertica_8.1.x_Release_Notes.htm
VER-53602 - Optimizer - This fix improves complex query performance during the query planning phase.
I've got an Oracle INSERT query that has been running for almost 24 hours.
The SELECT part of the statement had a cost of 211M.
Now I've updated the statistics on the source tables, and the cost has come down significantly, to 2M.
Should I stop and restart my INSERT statement, or will the new updated statistics automatically have an effect and start speeding up the performance?
I'm using Oracle 11g.
Should I stop and restart my INSERT statement, or will the new updated statistics automatically have an effect and start speeding up the performance?
New statistics will be used the next time Oracle parses the statement.
So the optimizer cannot update the execution plan based on statistics gathered at run time: the query has already been parsed and its execution plan has already been chosen.
What you can expect from the 12c optimizer is adaptive execution: it has the ability to adapt the plan at run time based on actual execution statistics. You can read more about it here: http://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.htm
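On 12c you can check whether a given cursor actually used an adaptive plan. A sketch (the SQL_ID is a placeholder):

```sql
-- 'Y' means the final plan was resolved adaptively at run time.
SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM   v$sql
WHERE  sql_id = 'abcd1234efgh5';      -- placeholder SQL_ID

-- Show the final plan, with the inactive adaptive branches marked:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', 0, 'ADAPTIVE'));
```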
I am trying to execute a rather "big" query on a SQL Server database :
SELECT *, (SELECT MAX(data) FROM another_sample_table) as max_data
FROM sample_test_1 st1
LEFT JOIN sample_table_2 st2 ON (st2.date = st1.date)
LEFT JOIN sample_table_3 st3 ON (st3.id = st2.id)
LEFT JOIN sample_table_4 st4 ON (st4.code = st1.code)
-- Two other LEFT JOINs
WHERE st1.date = '2000-01-01'
AND st4.code IN ('EX1') -- and a list of code
EXPECTED BEHAVIOR :
The query takes about 1 minute when executed for the first time. I think it is a matter of indexes. The expected behavior is that every execution of the query should take more or less 1 minute.
ACTUAL RESULTS:
The execution time becomes 1 second when the query is executed for the 2nd, 3rd, 4th etc. time.
QUESTION:
Which technical aspect of SQL Server 2008 could explain such behavior? Does the database save the results in some kind of cache for a certain amount of time and then delete them? Or is it the SELECT MAX(data) FROM another_sample_table subquery that is causing the trouble?
You should probably have a look at Execution Plan Caching and Reuse
SQL Server has a pool of memory that is used to store both execution plans and data buffers. The percentage of the pool allocated to either execution plans or data buffers fluctuates dynamically, depending on the state of the system. The part of the memory pool that is used to store execution plans is referred to as the procedure cache.
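You can see both effects for yourself. A sketch using SQL Server's standard DMVs (the LIKE pattern is a placeholder for your statement text):

```sql
-- Cached plans for your statement: usecounts climbing across runs
-- shows plan reuse.
SELECT cp.usecounts, cp.objtype, st.text
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  st.text LIKE '%sample_test_1%';

-- Test systems only: clear the caches to reproduce the cold 1-minute run.
DBCC FREEPROCCACHE;      -- drop cached execution plans
DBCC DROPCLEANBUFFERS;   -- drop cached data pages from the buffer pool
```

If the query is fast again only after the data pages are re-read, the 1-minute first run is dominated by physical I/O rather than plan compilation.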