I have some SQL statements running on an Oracle 10g database. Some of them use the Oracle SAMPLE clause.
e.g.
select /*+ use_nl(emp, dept) */ *
from emp sample(10), dept
where emp.deptno = dept.deptno
Now the Oracle 10g database will be upgraded to 11g. We have to prove that the upgrade has no performance impact on the SAMPLE clause; in other words, I should prove that the SAMPLE clause still performs well in Oracle 11g.
But I spent a whole day searching on Google and found no useful answer.
Could you give me an answer or a suggestion? Thank you so much.
Well, the good news is that it's exactly the same clause used internally by Oracle for sampling -- most notably by the DBMS_STATS programs. So I'd say that it's rather unlikely to have done anything other than get faster.
The bad news is that an enormous number of issues might harm or improve performance during an upgrade; where there is a problem, though, you are most likely going to find out pretty quickly, and there will be a fix available. However, someone doesn't want to be held to blame if anything at all changes during the upgrade, and is covering their arse by making you waste your time on efforts that are bound to be unproductive and inconclusive.
The correct way to investigate this is to set up a new system with the new version that you are actually going to be using -- no cut-down data sets, no export-import if what you're going to do is an in-place upgrade (you want the physical data layout to be exactly the same as production), no "small representative set of queries", using the same storage architecture, processor type and memory configuration. Run your actual application on real data and compare performance before and after upgrade in a meaningful way.
It is absolutely the only way to be sure, because the fundamental problem with the task that you've been set is that you have to look for evidence of something (a performance problem) not existing. It's like trying to prove the non-existence of invisible pixies in your washing machine that eat one sock out of every pair -- practically impossible! The burden of proof lies with those who say that something might exist.
http://www.logicallyfallacious.com/index.php/logical-fallacies/146-proving-non-existence
Here are some constructive steps you can take:
Search Metalink -- this is the number one authoritative source for upgrade problems, because problems get reported via Metalink and either a bug is raised or an explanation is published.
Search the Oracle forums -- if there is a general change in behaviour that people are encountering then they'll question it here.
Search the internet generally.
If you've done all of those then that's the limit of the efforts you can be expected to take.
If that's not enough then you or the people behind this migration just have to escalate it, and ask some awkward questions: Was this level of proof required in moving from 9i to 10g? Is it required for upgrades from 10.2.0.2 to 10.2.0.5?
I'd really love to know the politics behind this -- it sounds like a dreadful place to work.
Related
As I come up to speed on Oracle (having been a DB2 guy for the last couple of decades), I see A LOT of existing code that uses optimizer hints in its queries.
From what I've read on various Oracle-focused web-sites, several Oracle "experts" advise AGAINST putting optimizer-hints in production code because:
With every Oracle patch or upgrade, the hint will probably be wrong.
With every DDL, the hint will probably be wrong.
One "expert" says:
The reason to be wary of hinting is that by embedding hints in your SQL, you are overriding the optimizer and saying that you know more than it does – not just now, but every time in the future that your SQL will be run, irrespective of any other changes that may happen to your database. The likely consequence of this is that your SQL will possibly run sub-optimally now and almost certainly in the future.
(See http://allthingsoracle.com/a-beginners-guide-to-optimizer-hints/)
So, if optimizer-hints are commonly known to be "unwise", why are they so frequently used (... at least in the code I've seen)?
To the extent that hints are common, it's generally because someone in the past prioritized fixing an acute issue rather than dealing with the underlying statistics problem. It's entirely possible that this prioritization was reasonable (particularly at the time) but it introduces technical debt that will probably have to be paid back.
When a system becomes unresponsive because a query plan changed and became massively less efficient, fixing the acute production issue generally takes precedence over identifying the root cause. Slapping a hint on a single query is generally both quicker and easier than figuring out the underlying issue or learning how to use the various tools Oracle provides to ensure query plan stability or to evolve plans over time.
Of course, having slapped a band-aid on a single query, if you don't then invest the time to understand why statistics sent the optimizer down the wrong path or why your approach to plan stability didn't work, it's likely that whatever statistics issue you have will cause other queries to perform poorly. It's very rare that misleading statistics would cause just one query in the system to perform poorly. Generally, that either leads to a vicious circle where more queries start performing badly, leading to more hints being added, or a virtuous circle where DBAs take a step back, work through what's going wrong to cause query performance to suffer, fix the underlying issue, and then remove the hints.
All that being said, there are a few hints that may be reasonably used relatively commonly depending on the sort of code you're using. If you've got a function that returns a sys_refcursor that is returned to a client application that you know is going to fetch the first few rows, display them to the user, and only ask for the next set of rows if they don't find what they're looking for, it makes sense to use a FIRST_ROWS hint pretty liberally because you know something the optimizer can't possibly know. You know that users are much more interested in the first few rows rather than the complete set of results. If you have lots of code that is using collections in SQL, you probably want to use a lot of CARDINALITY hints because otherwise Oracle has no idea how many elements the collection is likely to have.
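As a sketch of those two cases (the table name, function name, and row counts below are illustrative, not from any real schema):

```sql
-- FIRST_ROWS(n): favour a plan that returns the first rows quickly,
-- because we know the client paginates (hypothetical orders table)
SELECT /*+ FIRST_ROWS(25) */ *
  FROM orders
 ORDER BY order_date DESC;

-- CARDINALITY: tell the optimizer roughly how many elements a
-- collection will contain (hypothetical pipelined function get_ids)
SELECT /*+ CARDINALITY(t 10) */ *
  FROM TABLE(get_ids()) t;
```

In both cases the hint supplies information the optimizer cannot derive on its own, which is exactly the situation where hinting is defensible.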
In my case, most of the hints I see in our production code are like these:
/*+APPEND*/
/*+ full(MP) parallel(MP, 16) */
/*+ PARALLEL(ME1, 16) FULL(ME1) DRIVING_SITE(ME1) */
Rarely ( but every so often ), I'll see an index-hint:
e.g.
/*+ index(MSL XCL01000) */
Thanks for your valuable input. I certainly understand a bit more the dangers and necessities of hints.
This isn't a question of "which is the fastest ORM", nor is it a question of "how to write good code with ORMs". This is the other side: the code's been written, it's gone live, several thousand users are hitting the application, but there's a perceived overall performance problem. A SQL Profiler trace can only be run for a short amount of time: 5 mins gives several hundred thousand results.
The question is simply this: having used SQL Profiler to narrow down a number of slow queries (duration greater than a given amount of time), what techniques and solutions exist for tracing these SQL queries back to the problematic component? A related question is that if a specific area is slow, how can we identify the SQL that this area is executing so it can be suitably filtered in SQL Profiler?
The background to this is that we have a rather large application with a fairly complex table structure, currently based around data access via stored procedures. If a SQL performance problem arises, it's usually a case of pulling out SQL Profiler, finding out if there's anything slow (filter by duration) or if the area being complained about is slow (filter by stored procedure), and tuning the stored procedures (or the schema, through indexing).
Now there's a push to move our code from a mostly-sproc solution to a mostly-ORM solution; however, the big argument against the move is how performance problems, if they arise, can be traced back to problematic code. I've read around and it seems that, more often than not, it may involve third-party tools (ORM tracing utilities like NHProf, or .NET tracing utilities like dotTrace) that we'd need to install on the server. Whether additional tools can be installed on a live environment is another question, so if things like this can be done without additional tools, that would be a bonus.
I'm mostly interested in solutions for SQL Server 2008, but the question is probably generic enough for any RDBMS. As far as the ORM technology goes, I have no specific focus as nothing's currently in use, so I'd be interested to hear how techniques differ (or are common) twixt NHibernate, Fluent NHibernate and Entity Framework. Other ORMs are welcome though if they offer something else :-)
I've read through How to find and fix performance problems (...), and I think the issue is simply the section on there that says "isolate". A problem that is reproducible only on a live system is going to be difficult to isolate. The figures I quoted in the second paragraph are the kinds of volumes we can get from a profile as well...
If you have real-world experience of ORM tracing on live, so much the better :-)
Update, 2016-10-21: Just for completeness, we eventually solved this for NHibernate by writing code, and overriding NHibernate methods. Full details in this other SO question I asked: NHibernate and Interceptors - measuring SQL round trip times. I expect this will be a similar approach for many different ORMs.
There exist profilers for ORM tools, like UberProf, which find out which of the SQL statements generated by the ORM can be problematic.
The SELECT N+1 problem, for instance. These kinds of tools might give you an indication of which ORM query statements result in poor SQL, and perhaps even how you could improve them.
We had a Java/Hibernate app with issues, so we used SET CONTEXT_INFO with a different value for each module. If we saw, say, 0x14 on the same SPID just before a WTF query, we could narrow it down to module x.
Not being a Java guy, I don't know exactly what they did, and of course it may not apply to .NET. IIRC you have to be careful about when connections are opened/closed.
We could also control the client load at this time so we didn't have too much superfluous traffic.
YMMV of course, but it may be useful
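A minimal sketch of the idea in T-SQL (the 0x14 tag and its mapping to "module x" are just the example values from above):

```sql
-- In module x's data-access code, tag the session before its queries run
SET CONTEXT_INFO 0x14;

-- From a monitoring session (or correlated with a Profiler trace),
-- read each session's tag back
SELECT session_id, context_info
FROM sys.dm_exec_sessions;

-- Within the tagged session itself, the function form also works
SELECT CONTEXT_INFO();
```

With connection pooling, the tag should be set (or cleared) each time a pooled connection is reused, or stale values will be attributed to the wrong module.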
I just found these which could be useful too
Temporary tables, sessions and logging in SQL Server?
Why is my CONTEXT_INFO() empty?
I am a Java guy who is familiar with basic SQL and PL/SQL. I can write simple procedures and functions. Now I am expected to do some Oracle performance tuning. Can anyone help me with some crash-course material? Thanks in advance.
--Siddharth
Along with the earlier suggestions (all excellent in their own rights) there are a few simple things you can do which will make you into an Instant Performance Guru (tm):
Carry a briefcase full of papers and books. Dog-eared books with titles like "Oracle Performance Tuning For Highly Effective People"+ and pieces of paper with boxes and arrows scribbled on them work well. If the books are for outdated versions of Oracle so much the better as it makes it look like you've been doing this for a while - plus, you can buy 'em cheap from the 'clearance' table at your local bookstore. For best effect the briefcase should be well worn - if you're forced to purchase a new briefcase you can get that weathered effect by backing over it with a car and/or tying a rope to the handle and dragging it through the sand/dirt/mud for ten minutes or so. This all helps to impress the natives. A bullet hole or two can be interesting conversation starters. You can also use the briefcase to carry your lunch and other important stuff like a towel.
Add full-key indexes for all queries.
Ensure all foreign keys have full-key indexes backing them.
This may give you the idea that "performance analysis" consists mostly of adding the indexes that the people who wrote the software never bothered to add because in their next-to-empty development database everything ran really fast. This is not correct. A complete fabrication. Utter nonsense. At best it's only about, like, 95% of it. Pay no attention to that man behind the curtain - he is of no importance whatsoever...
You now know the Secrets of the Oracle Performance Masters. (Well, most of it anyways, except for the Secret Handshake, which is tough to explain in a text message (and besides, you need six fingers on each hand), and the Hidden Mysteries, which consist principally of a lot of stuff about frogs which you don't technically need to know but which has been known to make well-informed people giggle a lot - which is not attractive...).
Go ye forth and do ye good works.
+This is actually slightly incorrect. The book you really want to have in your briefcase is "Oracle Performance Tuning For People Who Are Smarter Than 99.9% Of The Inhabitants Of This Planet". Since 99.99999%++ of the inhabitants of this planet are single-celled organisms or managers (and sometimes both) this is not difficult to accomplish.
++This is a real number. You can't just make this stuff up+++.
+++Actually, you can and I did. But it's not "lying" if you use an exact number - it's "creative re-imagining"++++.
++++That's "lying" in consultant-speak.
Performance Tuning is all about "It depends". If there existed a check list for how to improve performance, it would already be implemented in the database.
Hang around the OTN forums and look for topics related to tuning. Try to really understand what the experts write. Why was there a problem? What clues were there to aid in discovering a solution? What is the difference between the solutions provided? What tradeoffs had to be made in order to select one solution over another? Re-create the problems in your database and experiment with them yourself.
Don't be afraid to ask if you don't understand why some advice was posted!
Links to the forums
SQL and PLSQL- Forums
General - Forums
Oh, and please don't go down the usual path which is throwing hints at queries until something happens. Without a solid understanding of access statistics, selectivity, costing, join mechanisms, access paths or just basic knowledge of the architecture of the database, it makes no sense whatsoever to override the optimizer with hints.
If I could travel back in time and give myself three books, I would bring:
Expert Oracle Database Architecture (Tom Kyte)
Cost-based Oracle Fundamentals (Jonathan Lewis)
SQL Tuning (Dan Tow)
If you want to know more about SQL performance tuning, then you need to know the Oracle hints and how to check the explain plan of a query:
http://www.adp-gmbh.ch/ora/sql/hints/index.html
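A minimal example of checking a plan with EXPLAIN PLAN and DBMS_XPLAN (the table, index name, and hint shown are illustrative):

```sql
-- Explain a hinted query without executing it
EXPLAIN PLAN FOR
SELECT /*+ INDEX(e emp_pk) */ *
FROM emp e
WHERE empno = 7839;

-- Show the plan that was just explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```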
Other important performance related things are the analytical functions
http://www.adp-gmbh.ch/ora/sql/analytical/index.html
This can be useful if you are interested mainly in Oracle-specific options. The other answers might be more useful if you are interested in performance tuning generally.
Two Tom Kyte books:
"Expert Oracle Database Architecture" - A crash course in Oracle internals from the perspective of a developer.
"Effective Oracle by Design" - How to write high-performance applications. Demystification of tools like EXPLAIN PLAN, AUTOTRACE, TKPROF, Statspack, and DBMS_PROFILER; understanding the optimizer; schema design, and many more.
Any other materials written by Mr. Kyte are worth reading as well.
You could start with the Introduction to Performance Tuning section of the Oracle docs. I have linked to the Oracle 10G version.
Learning to generate and interpret trace files might be a good way to begin.
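One common way to produce a trace file for your own session is extended SQL trace (event 10046); this is a sketch, where the identifier is just an arbitrary label you choose:

```sql
-- Make the resulting trace file easy to find
ALTER SESSION SET tracefile_identifier = 'mytest';

-- Level 12 includes bind variable values and wait events
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the statements you want traced ...

ALTER SESSION SET events '10046 trace name context off';
```

The raw trace file can then be formatted with the tkprof command-line utility into a readable report for interpretation.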
Try statspack. Check details here
Performance is usually a black magic and it requires a lot of patience :)
We are developing a client-server desktop application (WinForms with SQL Server 2008, using LINQ to SQL). We are now finding many issues related to performance: querying too much data with LINQ, bad database design, not much caching, etc. What do you suggest we should do -- how should we go about solving these performance issues? One thing I am doing is SQL profiling and trying to fix some queries. As far as caching is concerned, we have static lists. But how do we keep them updated? We don't have any server-side implementation, so these lists can become stale if someone changes the data.
regards
Performance analysis without tools is fruitless, with the wrong tools frustrating. SQL Profiler is the wrong tool to rely on for what you are looking at. I think it is at best giving you a hint of what is wrong.
You need to use a code profiler to determine why and when these queries are being executed. You should be able to find one by Googling and run it on an x-day trial.
The key questions are:
Are queries being run multiple times when there is no reason to at all? Is the data already in memory (even if not stored statically)? This happens a lot where data has already been retrieved but, because of some action in the code, it is loaded again. Class properties are a big culprit here.
Should certain data be stored statically across the application? How volatile is that data? Can you afford to show stale data?
The only way to decide on #2 is to have hard data on the cost of a particular transaction. For example, if I know it takes 1983 ms to create a new invoice, what will it be after I start caching data? After adding the cache, is that savings significant? But recognize you can't answer that question until you know it takes 1983 ms to create an invoice.
When I profile an application transaction I focus on the big contributor and try to determine why it is so big. I look for individual methods that are slow and for any code that is executed frequently. It is often the latter, the death of a thousand cuts, that gets you.
And I wanted to add this, it is also very important to know when to stop working on a performance issue.
I found Jeff Atwood's articles on this quite interesting:
Compiled Or Bust
All Abstractions Are Failed Abstractions
For updating, you can create a table. I called it ListVersions.
Just store the list id, name, and version.
When you make changes to a list, just increment its version. In your application, you'll just need to compare versions and refresh only when a version has changed. Update the lists whose version was incremented, not all of them.
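A sketch of such a table and the version check in T-SQL (the column sizes and the example ListId are arbitrary):

```sql
CREATE TABLE ListVersions (
    ListId  INT           NOT NULL PRIMARY KEY,
    Name    NVARCHAR(100) NOT NULL,
    Version INT           NOT NULL DEFAULT 0
);

-- Whenever a list's data changes, bump its version
UPDATE ListVersions SET Version = Version + 1 WHERE ListId = 3;

-- Client side: fetch the current versions and refresh only the lists
-- whose version differs from the one cached locally
SELECT ListId, Version FROM ListVersions;
```

This keeps the periodic polling query tiny (one row per list) regardless of how large the lists themselves are.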
I've described it in my answer to this question
What is the preferred method of refreshing a combo box when the data changes?
Good Luck!
A general recipe for performance issues:
Measure (wall clock time, CPU time, memory consumption etc.)
Design & implement an algorithm that you think could be faster than current code.
Measure again to assess the impact of your fix.
Many times the biggest bottlenecks aren't exactly where you thought they were. So base your actions on measured data.
Try to keep the number of SQL queries small. You're more likely to get performance improvements by lowering the number of queries than by restructuring the SQL syntax of an individual query.
I recommend adding some server-side logic instead of firing the SQL queries directly from the client. You could implement caching shared by all clients on the server side.
Are there any good scripts that I could run against my Oracle database to test for SQL defects or maybe common performance issues?
Edit: Everything in an Oracle database can be queried, from the PL/SQL packages to indexes and SQL running stats. The performance books say "look in this place and it will show some absolute values" that the developer then needs to be able to interpret. Has anyone combined their knowledge to include this interpretation within the scripts?
Are you asking for the information in this book?
http://www.amazon.com/Oracle-Database-Performance-Techniques-Osborne/dp/0072263059/ref=sr_1_1?ie=UTF8&s=books&qid=1264619796&sr=1-1
Are you asking about this wiki?
http://wiki.oracle.com/page/Performance+Tuning
Or are you asking for this vendor information?
http://www.oracle.com/technology/deploy/performance/index.html
Edit. There is no magical set of queries that you simply run and set the various tuning options.
Oracle is very complicated. Changing a parameter to make one thing fast can make several other things faster or slower, or make the instance consume more real memory than you have installed. It's hard to generalize this into magical queries. You have tools, but even then, the tools give you tuning options and you may need to run different experiments.
Performance is a balance. You have to strike a balance between physical I/O time and CPU time. It's not possible to generalize this into a magical query. Your system may need faster physical I/O (data warehouses, for instance, often need this) because it can't effectively work from cache. My system may need faster processor time and will have to work in cache to achieve this.
Performance is a function of your application. No magical query of Oracle will reveal a single thing about how your application is designed to work.
Enterprise Manager and its associated performance tools are a good place to start looking for queries that are consuming the most resources. There you can see the plans generated for your SQL, view traces of long-running queries, etc.
If you have a budget, there is Spotlight by Quest. I've only used the trial version, but I found it useful.
I would recommend checking out the book Optimizing Oracle Performance and any of Cary Millsap's other writings. It is a waste of time to think about optimizing every query. You really need an approach to finding out where your performance bottlenecks are. His Method R approach is a very good one to read up on. Also most of Tom Kyte's books go into detail about performance issues.