Sybase 15 performance issue

I am working with Sybase 15 in my application and there is a performance issue related to nested joins. I have a stored procedure which selects 2 columns from 2 tables and compares over 10 columns between these 2 tables for equality. When I run this stored procedure, the result takes 40 minutes. I added a "set merge_join off" statement at the top of my procedure and the result then takes 22 seconds, but I need another solution that does not rely on that. I was using Sybase 12.5 before; there was no issue like this and my procedure took about 3 minutes to return the result.
I have compared the server configurations of 15 and 12.5 with sp_configure, and the Sybase 15 server's configuration (I/O and memory settings) is larger than the Sybase 12.5 server's.
Info: the PC hosting Sybase 15 has really good system resources.

Like the others, I can offer commiseration rather than a real answer! We are seeing a problem where the ASE 15 query planner massively underestimates the cost of a table scan and similarly overestimates the cost of using the clustered index. This results in a merge join being the suggested plan. Disabling merge joins or setting the allrows_oltp optgoal sometimes results in a better query plan. The estimated costs are still way off, but by taking one option off the table the query planner may find a good solution - albeit via the wrong analysis.
ASE 15 documents say that it has a much cleaner set of algorithms whereas the ASE 12 planner had a bunch of special cases. Perhaps a special case that says "if you have the clustered index column in the join it's going to be faster than a table scan" wouldn't be such a bad idea ... :(
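For reference, both workarounds mentioned above can be applied at the session level. A minimal sketch, assuming ASE 15 session-level switches; check the documentation for your exact version:
set merge_join off            -- take merge joins off the table for this session
set plan optgoal allrows_oltp -- bias the optimizer toward OLTP-style plans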

I have just spent 14 hours at work debugging critical performance issues that arose from a Sybase 15 migration on the weekend.
The query optimiser has been making (for us) some very odd decisions.
Take an example:
select a, b, c from table1, table2, table3 where ...
versus
create table #temp (col1 int, col2 int, ... etc)
insert #temp
select a, b, c from table1, table2, table3 where ...
The first form ran in good time, but we could not get the optimiser to make the correct decision in the second case, despite extensive reworking. We even took the query apart into temporary tables, but still got unusual results.
In the end we resorted to SET FORCEPLAN ON for some queries - this after 10 hours of having our DBAs and Sybase on the line. The solution came from the application developers rather than from any advice from the Sybase engineers.
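For anyone unfamiliar with it, FORCEPLAN makes the optimiser join tables in the order they appear in the FROM clause. A minimal sketch using the illustrative query above (not our real code):
set forceplan on  -- join tables in the order listed in the FROM clause
insert #temp
select a, b, c from table1, table2, table3 where ...
set forceplan off -- restore normal optimisation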
So to save yourself some time, that is the route I suggest.

Sybase effectively rewrote the query engine for version 15, which means that queries that ran super-fast on 12.x may run much slower on the newer version, and vice versa. The only way to debug this is to compare the 12.x query plan with the 15 query plan and see what's being done differently.
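A minimal sketch of how to capture a plan for comparison on either version, assuming an isql or similar session; ASE prints the plan instead of executing the query:
set showplan on
set noexec on  -- show the plan without actually running the query
go
-- run the problem query here
set noexec off
set showplan off
go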

Everyone concerned with this issue should read this doc:
http://www.sybase.com/files/White_Papers/ASE15-Optimizer-Best-Practices-v1-051209-wp.pdf
It has a candid warning about migrating from Sybase 12 to Sybase 15.
Quoteth:
... don't treat ASE 15 as "just another release". As much as we would like to say that you could simply upgrade and point your applications at the upgraded servers, the depth and breadth of change in one of the most fundamental areas of a database, query execution, necessitates a more focused testing regimen. This paper is meant to provide you with the clear facts and best practices to reduce this effort as much as practically possible.
It goes on to talk about the new ASE 15 Query Optimizer, vis-a-vis OLTP queries and DSS (Decision Support System) queries.
However, there's good news: in March 2009, Sybase 15.0.3 introduced a compatibility mode. See the following doc:
http://www.sybase.com/detail?id=1063556
With this mode, you need not analyze queries to decide if they fit OLTP or DSS profiles.
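Per that document, compatibility mode can be enabled server-wide. A minimal sketch, assuming the sp_configure option name used by 15.0.3; confirm it against the doc above for your ESD level:
sp_configure 'enable compatibility mode', 1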

Related

Does the number of columns in a Vertica table impact query performance?

We are working with a Vertica 8.1 table containing 500 columns and 100,000 rows.
The following query takes around 1.5 seconds to execute, even when using the vsql client directly on one of the Vertica cluster nodes (to eliminate any network latency issue):
SELECT COUNT(*) FROM MY_TABLE WHERE COL_132 IS NOT NULL and COL_26 = 'anotherValue'
But when checking the query_requests table, the request_duration_ms is only 98 ms, and the resource_acquisitions table doesn't indicate any delay in resource acquisition. I can't understand where the rest of the time is spent.
If I then export only the columns used by the query to a new table, and run the query on this new, smaller table, I get a blazingly fast response, even though the query_requests table still tells me the request_duration_ms is around 98 ms.
So it seems that the number of columns in the table impacts the execution time of queries, even if most of these columns are not referenced. Am I wrong? If so, why is it so?
Thanks in advance
It sounds like your query is running against the (default) superprojection, which contains all of the table's columns. Even though Vertica is a columnar database (with the associated compression and encoding), your query is probably still touching more data than it needs to.
You can create projections to optimize your queries. A projection contains a subset of columns; if one is available that has all the columns your query needs, then the query uses that instead of the superprojection. (It's a little more complicated than that, because physical location is also a factor, but that's the basic idea.) You can use the Database Designer to create some initial projections based on your schema and sample queries, and iteratively improve it over time.
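A minimal sketch of a query-specific projection for the query above; the projection name and sort order are illustrative assumptions, and the Database Designer may propose something better:
-- a projection containing only the two columns the query touches
CREATE PROJECTION my_table_cols_26_132 AS
SELECT col_26, col_132 FROM my_table ORDER BY col_26;
SELECT START_REFRESH(); -- populate the new projection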
I was running Vertica 8.1.0-1; it turned out to be a Vertica bug in the query planning phase that caused the performance degradation. It was fixed in versions >= 8.1.1:
https://my.vertica.com/docs/ReleaseNotes/8.1.x/Vertica_8.1.x_Release_Notes.htm
VER-53602 - Optimizer - This fix improves complex query performance during the query planning phase.

Oracle not using the best plan

I'm struggling with performance in Oracle. The situation: subsystem B has a dblink to master DB A. On system B, a query over the dblink completes in 15 seconds, and the plan uses the appropriate indexes.
But if the same query is used to fill a table from inside a stored procedure, Oracle uses another plan with full scans. Whatever I try (hints), I can't get rid of these full scans. That's horrible.
What can I do?
In normal situations the Oracle query optimizer evaluates on the order of 2000 different possibilities and chooses the best one. But if you think it has chosen the wrong plan, you may suspect the following cases:
1- The histograms/statistics on the queried tables are stale.
2- Your indexes cannot be used because of how the query is written.
3- You can use index hints to force an index to be used (see the sketch below).
4- You can use SQL Tuning Advisor or run TKPROF for performance analysis and decide what's wrong or what caused the bad performance. Also check network, disk I/O values, etc.
If you share your query, we can give you more information.
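A minimal illustration of an index hint as mentioned in point 3; the table, alias, index, and bind variable names are placeholders, not taken from the question:
-- force the optimizer to consider a specific index
SELECT /*+ INDEX(t orders_cust_idx) */ t.order_id, t.total
FROM orders t
WHERE t.customer_id = :cust_id;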
It looks like we are not comparing the same query under the same conditions.
The first case is a simple select over the dblink, and the second case is an "insert as select" over the dblink.
Can you please share the two queries and their execution plans here, as you may have them handy? If it's not possible to paste the queries due to security limitations, please paste the execution plans.
-Abhi
After many tries, I was able to create a new DB plan with Enterprise Manager. Now it's running perfectly.

Why would a SQL query take more time to execute on each subsequent execution?

I run a complex query against an eBS R12 schema on an Oracle 11g database:
On the first run it takes 4 seconds. If I run it again, it takes 9, then 30, etc.
If I add "and 1=1" it takes 4 seconds again, then 9, then 30 and so on.
A quick workaround: we added a randomly generated "and somestring = somestring" predicate, and now the results always come back in 4 seconds.
I have never encountered a query that behaves this way (it should be the opposite, or no significant change between executions). We tested it on 2 copies of the same DB; same behaviour.
How do I debug this? And what internal mechanics could be getting confused?
UPDATE 1:
EXPLAIN PLAN FOR
(my query);
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
The plan is exactly the same before the first run as it is for subsequent ones. See http://pastebin.com/dMsXmhtG
Check DBMS_XPLAN.DISPLAY_CURSOR. The reason could be cardinality feedback or another of the adaptive techniques Oracle uses. You should see multiple child cursors related to the SQL_ID of your query, and you can compare their plans.
Does your query use bind variables, and do the columns used for filtering have histograms? That could be another reason.
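A minimal sketch of the child-cursor check from the first point; the SQL_ID is a placeholder you would look up in V$SQL:
-- list the child cursors for the statement, then show their plans
SELECT child_number, plan_hash_value, executions
FROM v$sql
WHERE sql_id = 'abcd1234efgh5';
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', NULL));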
Sounds like you might be suffering from adaptive cursor sharing or cardinality feedback. There are articles showing how to turn them off - perhaps you could do that and see if the issue stops happening, as well as using #OldProgrammer's suggestion of tracing what is happening.
If one of these is found to be the problem, you can then take the necessary steps to ensure that the root cause (e.g. incorrect statistics, unnecessary histograms, etc.) is corrected.
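For testing only, both features can be switched off at the session level. These are undocumented underscore parameters, so treat this as a sketch and clear it with support before using it anywhere that matters:
-- disable the adaptive features for the current session (11g)
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;
ALTER SESSION SET "_optimizer_adaptive_cursor_sharing" = FALSE;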

How to speed up a simple SQL Server 2000 query

I have a product table which has many columns. The primary key is productid. There are 50,000 rows, but when I issue a select statement like select * from products, it takes 10 minutes to get the full data. Please advise what I can do so that my query runs faster.
Is your primary key also the clustering key on that table?
If you do a SELECT * ... you'll basically always get a full table scan. There's really nothing that can speed that query up - you want all rows and all columns, so you get it all and it takes the time it takes.
If you do more "focused" queries like
SELECT col1, col2 FROM dbo.Products WHERE SomeColumn = 42
then you have a chance of speeding this up by using the appropriate indices.
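For a focused query like that, an index on the filter column is the usual first step. A minimal sketch with an assumed index name (SQL Server 2000, so no included columns):
-- support the WHERE clause above with a nonclustered index
CREATE NONCLUSTERED INDEX IX_Products_SomeColumn ON dbo.Products (SomeColumn)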
Buy a better computer.
Seriously.
SQL Server 2000 was retired years ago, so this is an OLD install. 50,000 products is a joke - any table below 1 million rows is nothing.
But when I issue a select statement like select * from products then it is taking 10 minutes to get the full data.
Assuming this is over a LAN, not over a slow internet connection, there can be 2 reasons for that:
The system is TERRIBLY overloaded. Like SERIOUSLY overloaded. Not that I haven't seen that on old setups. Been there, seen that - hard discs so overloaded (hey, they are SCSI, they are fast) that they took more than 2 seconds to answer a request.
The system is programmed by incompetents. It could be bad transaction-level handling leading to terrible long-duration locks which block you. This is possible, but then you are in for a LOT of rework to get the ridiculous code out of the programming.
A select * from table should not take more than a couple of seconds to transfer all the data over a LAN. Period. Unless the table has tons of binary data (i.e. HUGE amounts of data in some fields).
Ask your local database specialist to do an analysis. Start with hardware load, then move on to locking behaviour. Consider upgrading to a technology that is more modern; by now you are a LOT of generations behind.
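A minimal sketch of where to start looking on a SQL Server 2000 box; both commands exist on that version, but read the output together with your DBA:
-- look for blocking and waits before blaming the query
EXEC sp_who2
DBCC SQLPERF(WAITSTATS)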
Because there's no criterion (WHERE), the time your query takes is not due to the selection (determining which rows to select) but most likely due to the sheer size of the data.
The only solution is:
Do not use SELECT *, but select only the columns you need.

Oracle performance via SQLDeveloper vs application

I am trying to understand the performance of a query that I've written in Oracle. At this time I only have access to SQLDeveloper and its execution timer. I can run SHOW PLAN but cannot use the auto trace function.
The query that I've written runs in about 1.8 seconds when I press "execute query" (F9) in SQLDeveloper. I know that this is only fetching the first fifty rows by default, but can I at least be certain that the 1.8 seconds encompasses the total execution time plus the time to deliver the first 50 rows to my client?
When I wrap this query in a stored procedure (returning the results via an OUT REF CURSOR) and try to use it from an external application (SQL Server Reporting Services), the query takes over one minute to run. I get similar performance when I press "run script" (F5) in SQLDeveloper. It seems that the difference here is that in these two scenarios, Oracle has to transmit all of the rows back rather than the first 50. This leads me to believe that there are some network connectivity issues between the client PC and the Oracle instance.
My query only returns about 8000 rows so this performance is surprising. To try to prove my theory above about the latency, I ran some code like this in SQLDeveloper:
declare
  tmp sys_refcursor;
begin
  my_proc(null, null, null, tmp);
end;
...And this runs in about two seconds. Again, does SQLDeveloper's execution clock accurately indicate the execution time of the query? Or am I missing something and is it possible that it is in fact my query which needs tuning?
Can anybody please offer me any insight on this based on the limited tools I have available? Or should I try to involve the DBA to do some further analysis?
"I know that this is only fetching the
first fifty rows by default, but can I
at least be certain that the 1.8
seconds encompasses the total
execution time plus the time to
deliver the first 50 rows to my
client?"
No, it is the time to return the first 50 rows. It doesn't necessarily require that the database has determined the entire result set.
Think about the table as an encyclopedia. If you want a list of animals with names beginning with 'A' or 'Z', you'll probably get Aardvarks and Alligators pretty quickly. It will take much longer to get Zebras as you'd have to read the entire book. If your query is doing a full table scan, it won't complete until it has read the entire table (or book), even if there is nothing to be picked up in anything after the first chapter (because it doesn't know there isn't anything important in there until it has read it).
declare
  tmp sys_refcursor;
begin
  my_proc(null, null, null, tmp);
end;
This piece of code does nothing. More specifically, it will parse the query to determine that the necessary tables, columns and privileges are in place. It will not actually execute the query or determine whether any rows meet the filter criteria.
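To get a timing that includes the work of actually producing rows, you would have to fetch the entire result set. A minimal sketch, assuming for illustration that the cursor returns a single varchar2 column; your fetch variables must match the procedure's real select list:
-- fetch every row so the clock covers real execution
declare
  tmp   sys_refcursor;
  v_col varchar2(4000); -- placeholder; must match the cursor's columns
  n     pls_integer := 0;
begin
  my_proc(null, null, null, tmp);
  loop
    fetch tmp into v_col;
    exit when tmp%notfound;
    n := n + 1;
  end loop;
  close tmp;
  dbms_output.put_line(n || ' rows fetched');
end;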
If the query only returns 8000 rows it is unlikely that the network is a significant problem (unless they are very big rows).
Ask your DBA for a quick tutorial in performance tuning.
