I am trying to tune an existing view. I'm sorry for not posting an example, but I failed to replicate the problem, and I can't understand the behavior.
My view (view A) is based on another view (view B) and a table (table C). The select list uses some fields of these plus some package calls.
The runtime of a specific SELECT is nearly 32 seconds.
I analyzed the statement and started to optimize view B. I dropped all the columns I don't need in view A and reduced the overhead of view B.
After this, the SELECT on view A was 5 seconds faster. To be sure, I executed the SELECT multiple times to get a valid average execution time.
A few minutes later I executed the statement again and got 32 seconds. I executed it multiple times, but it didn't get any faster.
There is no traffic on this database, and the amount of data didn't change. So far this is the first statement where I have problems getting reproducible results while analyzing it.
The explain plan, which I checked first, looks fine: no full table scan (I know an FTS is not always bad). I have no idea why the execution time is so unstable; this makes it hard to optimize the view and compare results.
This may sound like a dumb question, but I can't see the problem or come up with an idea of what to try next.
Thanks, and sorry for my bad English.
Unstable execution time
1) Did you clean up the buffer cache between the SELECT statements?
2) In some situations you can improve your functions and views with result caching, but you have to figure out whether it is applicable to your problem (see the sketch below, and [a long story about the result cache](http://www.oracle.com/technetwork/articles/datawarehouse/vallath-resultcache-rac-284280.html)).
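For illustration, a rough sketch of both points; all object names here are invented, so treat it as a template rather than a drop-in fix:
-- 1) Flush the caches between test runs so every execution is measured
--    "cold" (test systems only; needs the ALTER SYSTEM privilege):
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;

-- 2) Result caching, either per query via a hint ...
SELECT /*+ RESULT_CACHE */ *
FROM   view_a
WHERE  some_col = :some_value;

-- ... or per function, if the package functions in the select list
--     always return the same output for the same input:
CREATE OR REPLACE FUNCTION get_label(p_id IN NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE
IS
BEGIN
  RETURN 'label ' || p_id;  -- placeholder body
END;
/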
I have a question about how to handle errors in holistic SQL queries. We are using Oracle PL/SQL. Most of our codebase is row-by-row processing, which ends up in extremely poor performance. As far as I understand, the biggest problem with that is the context switching between the PL/SQL and SQL engines.
The problem with that is that the user doesn't know what went wrong. The old style would be something like:
Open a cursor over some data
Fire a SELECT (count) against another table to check whether the data exists; if not, show an error message
SELECT that data
Fire a second SELECT (count) against another table to check whether the data exists; if not, show an error message
SELECT that data
Modify some other table
And that could go on for 10-20 tables. It's basically pretty much like a C program. It's possible to remodel that into something like:
UPDATE (
    SELECT TAB1.Status,
           10 AS New_Status
    FROM TAB1
    INNER JOIN TAB2 ON TAB1.FieldX = TAB2.FieldX
    INNER ..
    INNER ..
    INNER ..
    INNER ..
    LEFT ..
    LEFT ..
    WHERE TAB1.FieldY = 2
      AND TAB3.FieldA = 'ABC'
      AND ..
      AND ..
      AND ..
      AND ..
) TAB
SET TAB.Status = TAB.New_Status
WHERE TAB.Status = 5;
A holistic SELECT like that speeds things up enormously. I changed some queries that way and the runtime went down from 5 hours to 3 minutes, but that was fairly easy because it was a service without human interaction.
The question is how you would handle things like that where someone fills out a form and waits for a response. If something goes wrong, they need an error message. The only solution that came to my mind is to check whether any rows were updated and, if not, jump into another code section that still does all the single SELECTs to determine the error (roughly like the sketch below). But after every change we would have to update both the holistic SELECT and all the single SELECTs; I guess after some time they would drift apart and lead to more problems.
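Roughly what I mean, with simplified table names (and assuming TAB2.FieldX is unique, so the join view stays updatable):
DECLARE
  l_msg VARCHAR2(200);
BEGIN
  UPDATE ( SELECT TAB1.Status, 10 AS New_Status
           FROM   TAB1
           INNER JOIN TAB2 ON TAB1.FieldX = TAB2.FieldX
           WHERE  TAB1.FieldY = 2 ) TAB
  SET    TAB.Status = TAB.New_Status
  WHERE  TAB.Status = 5;

  IF SQL%ROWCOUNT = 0 THEN
    -- Error path only: run the cheap single checks just to build a
    -- useful message; the set-based UPDATE stays the normal path.
    SELECT CASE
             WHEN NOT EXISTS (SELECT 1 FROM TAB1 WHERE FieldY = 2)
               THEN 'No TAB1 row with FieldY = 2'
             WHEN NOT EXISTS (SELECT 1 FROM TAB2)
               THEN 'No matching TAB2 row'
             ELSE 'Rows matched, but none were in status 5'
           END
    INTO   l_msg
    FROM   dual;
    RAISE_APPLICATION_ERROR(-20001, l_msg);
  END IF;
END;
/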
Another solution would be a generic error message, which would lead to hundreds of calls a day, with us substituting 50 variables into the query and killing some of the WHERE conditions/joins to find out which condition filtered away the needed rows.
So what is the right approach here to get performance and still stay reasonably user-friendly? At the moment our system feels unusably slow: if you press a button, you often have to wait a long time (typically 3-10 seconds, on some more complex tasks 5 minutes).
Set-based operations are faster than row-based operations for large amounts of data, but set-based operations mostly apply to batch tasks. UI tasks usually deal with small amounts of data in a row-by-row fashion.
So it seems your real aim should be to understand why your individual statements take so long.
" If you press a button you often have to wait a long time (typically 3-10 seconds on some complexer tasks 5 minutes"
That's clearly unacceptable. Equally clearly, it's not possible for us to explain it: we don't have the access or the domain knowledge to diagnose systemic performance issues. You probably need to persuade your boss to spring for a couple of days of on-site consultancy.
But here is one avenue to explore: locking.
"many other people working with the same data, so state is important"
Maybe your problems aren't due to slow queries, but to UPDATE statements waiting on shared resources? If so, a better (i.e. pessimistic) locking strategy could help.
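If that is the case, a pessimistic pattern looks roughly like this (table and column names invented):
-- Lock the rows the user is about to change as soon as editing starts;
-- fail fast instead of silently queueing behind another session.
SELECT id, status
FROM   orders
WHERE  id = :id
FOR UPDATE NOWAIT;  -- raises ORA-00054 immediately if another session holds the row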
"That's why I say people don't need to know more"
Data structures determine algorithms. The particular nature of your business domain and the way its data is stored are key to writing performant code. Why are there twenty tables involved in a search? Why does it take so long to run queries on these tables? Is STORAGE_BIN_ID not a primary key on all those tables?
Alternatively, why are users scanning barcodes on individual bins until they find one they want? It seems like it would be more efficient for them to specify criteria for a bin, then a set-based query could allocate the match nearest to their location.
Or perhaps you are trying to write one query to solve multiple use cases?
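To sketch that bin-allocation idea (guessing at table and column names):
-- Let a set-based query pick the nearest free bin that meets the
-- criteria, instead of scanning bins one by one:
SELECT bin_id
FROM   ( SELECT b.bin_id
         FROM   storage_bins b
         WHERE  b.status = 'FREE'
         AND    b.size_class = :wanted_size
         ORDER  BY ABS(b.location_seq - :user_location_seq) )
WHERE  ROWNUM = 1;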
I've recently been working with an Oracle database to generate some reports. What I need is to get result sets of specific records (SELECT statements only), sometimes large ones, to be used for generating the reports as Excel files.
At first the reports were queried through views, but some of them are slow (they contain some complex subqueries). I was asked to improve the performance and also to fix some field mappings. I also want to tidy things up, because when I query against a view I must explicitly name the right columns. I want to move the data work into the database, so the web app just passes parameters and calls the right result set.
I'm new to Oracle, so which is better for this kind of task: a stored procedure or a function? And under what conditions would a view be the better choice?
It makes no difference whether you compile your SQL in a view, a stored procedure or a function: it is the SQL itself that matters.
As long as you are able to meet your requirements with views, they are a good option. If you intend to break your queries up into multiple ones to achieve better performance, you should go for stored procedures. If you do go for stored procedures, it is advisable to bundle them together in a package (see the sketch below). If your problem is performance, there may not be a silver bullet; you will have to work on your queries and your design.
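For example, a skeleton of the package approach might look like this, where one report = one procedure returning a ref cursor the web app can consume (all names invented):
CREATE OR REPLACE PACKAGE report_pkg AS
  PROCEDURE sales_report(p_from IN DATE,
                         p_to   IN DATE,
                         p_rc   OUT SYS_REFCURSOR);
END report_pkg;
/
CREATE OR REPLACE PACKAGE BODY report_pkg AS
  PROCEDURE sales_report(p_from IN DATE,
                         p_to   IN DATE,
                         p_rc   OUT SYS_REFCURSOR) IS
  BEGIN
    OPEN p_rc FOR
      SELECT order_id, customer_name, total   -- fixed column list the app can rely on
      FROM   sales_view
      WHERE  order_date BETWEEN p_from AND p_to;
  END sales_report;
END report_pkg;
/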
If the problem is performance due to complex SELECT queries, you can consider tuning the queries. You will often find queries written 15-20 years ago that do not use functionality and techniques introduced in more recent Oracle versions (even if the organization spent the big bucks to buy those versions, turning them into a waste of money). Honestly, that may be too much of a task if you are new to Oracle; also, some slow queries may have been written by people just like you, many years ago, before they had a chance to learn a lot about Oracle and gain experience with it.
Another thing: if the reports don't need the absolute current state of the underlying tables (for example, if "what was in the tables at the end of the business day yesterday" is acceptable), you can create a materialized view. It will not run any faster than a regular view, but it can be refreshed overnight (say), or every six hours, or whatever, so that the further reporting processing will not have to wait for the queries to complete. This is one of the main uses of materialized views.
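For example (names invented, and the six-hour interval is just illustrative):
CREATE MATERIALIZED VIEW report_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT SYSDATE + 6/24          -- refresh every six hours
AS
SELECT d.region, SUM(f.amount) AS total_amount
FROM   fact_table f
JOIN   dim_table d ON d.id = f.dim_id
GROUP  BY d.region;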
Good luck!
I'm learning about Oracle views. I got the concept of views, but I'm a little confused about performance.
I was watching a video where I heard that an Oracle view can increase performance. Suppose I have created a view like the one below.
CREATE VIEW SALES_MAN
AS
SELECT * FROM EMP WHERE JOB='SALESMAN';
OK, now I have executed a query to get the SALES_MAN details.
SELECT * FROM SALES_MAN
OK, now the confusion starts.
I heard in the video that once the above query SELECT * FROM SALES_MAN is executed, the data/records will be placed into cache memory after hitting the Oracle DB, and if I execute the same query (in the current session/login), the Oracle engine will not hit the database and will return the records from the cache. Is that right?
But I have read on many websites that a view adds nothing to SQL performance, and I found another reference saying that views do not help performance.
So do views increase performance or not?
A view is just a kind of virtual table defined by a query. It has no data of its own, and performance depends on the underlying table(s) and the query definition.
But you also have materialized views, which store the results of the view's defining query. The data is synchronized automatically, and you can add indexes to the view. In that case you can achieve better performance (see the sketch after this list). The costs to pay are:
more space (the view contains duplicated data),
no access to up-to-date data from the underlying tables.
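Sticking with the EMP example from the question, a sketch:
-- A materialized view is a real segment, so unlike a plain view it can
-- be indexed:
CREATE MATERIALIZED VIEW sales_man_mv
  REFRESH COMPLETE ON DEMAND
AS
SELECT empno, ename, job FROM emp WHERE job = 'SALESMAN';

CREATE INDEX sales_man_mv_ename_ix ON sales_man_mv (ename);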
You (and perhaps the creator of the un-cited video) are confusing two different concepts. The reason the data that was in the cache was used on the second execution was NOT because it was the second execution of the view. As long as data remains in the cache, it is available for ANY query that needs it. The very fact that the first execution of the view had to get the data from disk should be a strong clue. And what if your second use of the view wasn't until hours or days later when the data was no longer in the cache? The answer remains. A view does NOT improve performance.
I run a complex query against an eBS R12 schema on an Oracle 11g database:
On the first run it takes 4 seconds. If I run it again, it takes 9, then 30, etc.
If I add "and 1=1" it takes 4 seconds again, then 9, then 30 and so on.
A quick workaround: we added a randomly generated "and somestring = somestring" predicate, and now the results always come back in 4 seconds.
I have never encountered a query that behaves this way (it should be the opposite, or at least show no significant change between executions). We tested it on two copies of the same DB; same behaviour.
How do I debug this? And what internal mechanics could be getting confused?
UPDATE 1:
EXPLAIN PLAN FOR
(my query);
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
The plan is exactly the same before the first run as it is for subsequent ones; see http://pastebin.com/dMsXmhtG
Check DBMS_XPLAN.DISPLAY_CURSOR. The reason could be cardinality feedback or other adaptive techniques Oracle uses. You should see multiple child cursors related to the SQL_ID of your query, and you can compare their plans.
Does your query use bind variables, and do the columns used for filtering have histograms? That could be another reason.
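To compare the child cursors, something along these lines (the SQL_ID is a placeholder; look yours up in V$SQL first):
-- Find the statement and its child cursors:
SELECT sql_id, child_number, plan_hash_value, executions
FROM   v$sql
WHERE  sql_text LIKE '%your query text%';

-- Show the actual plan of every child cursor for that SQL_ID
-- ('ALLSTATS LAST' shows row-source statistics if the statement ran with
-- the GATHER_PLAN_STATISTICS hint or STATISTICS_LEVEL = ALL):
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));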
Sounds like you might be suffering from adaptive cursor sharing or cardinality feedback. Here is an article showing how to turn them off - perhaps you could do that and see if the issue stops happening, as well as using #OldProgrammer's suggestion of tracing what is happening.
If one of these is found to be the problem, you can then take the necessary steps to ensure that the root cause (e.g. incorrect statistics, unnecessary histograms, etc.) is corrected.
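For 11g, the parameters usually cited for switching the two features off are the hidden ones below. Treat this purely as a diagnostic experiment in a test session, and confirm with your DBA or Oracle Support before touching hidden parameters:
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;             -- cardinality feedback
ALTER SESSION SET "_optimizer_adaptive_cursor_sharing" = FALSE;  -- adaptive cursor sharing
ALTER SESSION SET "_optimizer_extended_cursor_sharing_rel" = NONE;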
I am trying to understand the performance of a query that I've written in Oracle. At this time I only have access to SQLDeveloper and its execution timer. I can run SHOW PLAN but cannot use the auto trace function.
The query that I've written runs in about 1.8 seconds when I press "execute query" (F9) in SQLDeveloper. I know that this is only fetching the first fifty rows by default, but can I at least be certain that the 1.8 seconds encompasses the total execution time plus the time to deliver the first 50 rows to my client?
When I wrap this query in a stored procedure (returning the results via an OUT REF CURSOR) and try to use it from an external application (SQL Server Reporting Services), the query takes over a minute to run. I get similar performance when I press "run script" (F5) in SQLDeveloper. It seems that the difference in these two scenarios is that Oracle has to transmit all of the rows back rather than just the first 50. This leads me to believe that there are network connectivity issues between the client PC and the Oracle instance.
My query only returns about 8000 rows so this performance is surprising. To try to prove my theory above about the latency, I ran some code like this in SQLDeveloper:
declare
tmp sys_refcursor;
begin
my_proc(null, null, null, tmp);
end;
...And this runs in about two seconds. Again, does SQLDeveloper's execution clock accurately indicate the execution time of the query? Or am I missing something, and is it possible that it is in fact my query that needs tuning?
Can anybody please offer me any insight on this based on the limited tools I have available? Or should I try to involve the DBA to do some further analysis?
"I know that this is only fetching the
first fifty rows by default, but can I
at least be certain that the 1.8
seconds encompasses the total
execution time plus the time to
deliver the first 50 rows to my
client?"
No, it is the time to return the first 50 rows. It doesn't necessarily require that the database has determined the entire result set.
Think about the table as an encyclopedia. If you want a list of animals with names beginning with 'A' or 'Z', you'll probably get Aardvarks and Alligators pretty quickly. It will take much longer to get Zebras as you'd have to read the entire book. If your query is doing a full table scan, it won't complete until it has read the entire table (or book), even if there is nothing to be picked up in anything after the first chapter (because it doesn't know there isn't anything important in there until it has read it).
declare
tmp sys_refcursor;
begin
my_proc(null, null, null, tmp);
end;
This piece of code does nothing. More specifically, it will parse the query to determine that the necessary tables, columns and privileges are in place. It will not actually execute the query or determine whether any rows meet the filter criteria.
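To time the full execution you have to actually fetch every row from the cursor, e.g. (the record declaration is a guess; match it to the procedure's select list):
DECLARE
  tmp   SYS_REFCURSOR;
  l_row my_table%ROWTYPE;   -- hypothetical: must match the cursor's columns
  n     PLS_INTEGER := 0;
BEGIN
  my_proc(NULL, NULL, NULL, tmp);
  LOOP
    FETCH tmp INTO l_row;   -- forces the database to produce each row
    EXIT WHEN tmp%NOTFOUND;
    n := n + 1;
  END LOOP;
  CLOSE tmp;
  DBMS_OUTPUT.PUT_LINE(n || ' rows fetched');
END;
/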
If the query only returns 8000 rows it is unlikely that the network is a significant problem (unless they are very big rows).
Ask your DBA for a quick tutorial in performance tuning.