AnyDAC components execute SQL query slower than FIBPlus components - performance

The following SQL command returns approximately 4,500 records and contains integer, string, and BLOB (text) values. All indexes are set properly. Furthermore, we know that the IN clause is not ideal, but that shouldn't bother us right now. The SQL command is executed on a Firebird 3.0 server:
Select
distinct O.Nr, O.Betreff, O.Text, O.Absenderadresse, O.Gelesen, O.Datum, O.Ordner, O.TextNotiz, M.ID, R.PosNr as RPosNr, R.AddressType, R.Address, R.Name, A.Nr as AttachmentNr, M.Bezeichnung as Mailordner, 0 as Archived
from Table1 O
left join Table2 R on (R.Nr = O.IDNr)
join Table3 M on (M.Nr = O.Ordner)
and (M.PersonalNr=O.PersonalNr)
left join Table4 A on (A.Nr = O.IDNr)
where (O.PersonalNr = 12)
and O.Ordner in (608,7440,7441,7444,6144,7091,5467,617,2751,710,6580,2812,609,7054,7194,7479,614,620,7030,615,3434,4883,619,6465,7613)
We executed the SQL command in an external application (which we know uses the FIBPlus components) and in our very basic sample Delphi 7 application (which uses the original AnyDAC database components, version 8.0.5). If we fetch all records into a grid, we get the following performance:
External application with FIBPlus: ~200–400 milliseconds
Delphi 7 with AnyDAC: ~3,500–4,500 milliseconds
In our Delphi 7 program we have connected a TADQuery to its own TADTransaction.
The default settings are used for both components, except for the ReadOnly property of the TADTransaction, which we changed to True.
Now we wonder why the external application is approximately ten times faster than our Delphi 7 program.
Are there any properties we can modify to speed up our Delphi 7 program?
Any help is appreciated...

Related

ORACLE UPDATE Statement with MERGE INTO takes forever

I have the following Oracle statement, which until a few days ago ran quickly and fine. When I run it now, it takes forever:
MERGE INTO tablef f using
(
select t.colf,t.colup
from tablet t
inner join (select max(created) as maxcreated
from tablet) mt
on t.created = mt.maxcreated) fu
on f.colf = fu.colf
WHEN MATCHED THEN
UPDATE SET f.aus = fu.colup
Is there a way to reformulate the query so that it runs quickly again?
The two tables involved have a primary key. For tablef the primary key is colf and for tablet the primary key is colf and created.
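As a hedged sketch of one possible reformulation (using only the table and column names from the question), the MERGE can be expressed as a correlated UPDATE; whether this actually helps depends on the plan Oracle chooses:

```sql
-- Sketch only: same effect as the MERGE, assuming colf is the primary key
-- of tablef and (colf, created) is the primary key of tablet, so at most
-- one row of tablet carries the latest "created" value per colf.
UPDATE tablef f
SET f.aus = (SELECT t.colup
             FROM tablet t
             WHERE t.colf    = f.colf
               AND t.created = (SELECT MAX(created) FROM tablet))
WHERE EXISTS (SELECT 1
              FROM tablet t
              WHERE t.colf    = f.colf
                AND t.created = (SELECT MAX(created) FROM tablet));
```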
Actually, I also want to run this query from a SQL Server database via an Oracle linked server.
And when I run this in SQL server management studio as follows:
execute ('MERGE INTO tablef f using
(
select
t.colf,t.colup
from tablet t
inner join (select max(created) as maxcreated from tablet) mt on t.created = mt.maxcreated) fu
on f.colf = fu.colf
WHEN MATCHED THEN
UPDATE SET f.aus = fu.colup') AT ORACLELINKEDSERVER
it runs without error and reports "1 row affected". However, if I check the data in the Oracle database, the row is not updated.
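One hedged way to narrow this down is to read the row back through the same linked server right after the EXECUTE, which rules out checking against a different Oracle instance or schema (the key value in the filter below is just an illustrative placeholder):

```sql
-- Diagnostic sketch: verify the update through the same linked server.
-- The literal 123 is a placeholder, not a value from the question.
SELECT *
FROM OPENQUERY(ORACLELINKEDSERVER,
     'select f.colf, f.aus from tablef f where f.colf = 123');
```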

Different output script generated by linq to sql 2016 Express/Standard edition vs developer edition

I'm developing a desktop application (WinForms) using VB.NET, and using LINQ to access the database (SQL Server 2016).
I have two identical database instances (same structure and data):
- SQLEXPRESS2016 (Express edition)
- SQLSERVER2016 (Developer edition)
But why am I getting significantly different execution times, and a different execution plan, for the SQL script generated by LINQ?
Dim myResult = (From i In myDataContext.ItemMaster _
                Where i.IsActive _
                Order By i.ItemNumber).AsQueryable
Dim count = myResult.Count()
I profiled the query generated by LINQ with SQL Server Profiler; myResult.Count() generates the following script (in this case the script is the same between the Developer and Standard/Express editions):
Select Count(1) AS [value]
FROM( Select TOP (1000) NULL AS [EMPTY]
FROM ITEM_MASTER as [t0] WHERE IS_Active = 1
ORDER BY [t0].[ItemNumber]
)AS [t1]
My questions are:
1. Why, in some cases, is the query different between the Express/Standard editions and the Developer edition? (The DB structure and data are the same, just a different edition.) One will generate SELECT TOP (1)..., the other SELECT TOP (2)....
2. Why is the execution time difference so significant? dev = 0.xx seconds, std/express = 8 seconds. It should not be a big deal, since the number of rows is only about 10,000.
3. Why is the execution plan also different? The std/express plan seems more complicated, and some index is missing.
(Screenshot: SQL dev vs express/standard execution plans.)
This was solved by adding the NOEXPAND hint.
See the original answer here:
https://social.msdn.microsoft.com/Forums/en-US/b095ce80-6b19-45a5-9a31-4532fcd8af83/different-output-script-generated-by-linq-to-sql-2016-expressstandard-edition-vs-developer-edition?forum=sqlnetfx
Credit to: Yuvraj Singh Bais
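For context, NOEXPAND forces the optimizer to use an indexed view directly rather than expanding it into its base tables. A minimal T-SQL sketch, assuming a hypothetical indexed view named dbo.vItemMasterActive over ITEM_MASTER, looks like:

```sql
-- Sketch only: dbo.vItemMasterActive is an assumed indexed view name.
-- WITH (NOEXPAND) tells the optimizer to use the view's own clustered
-- index instead of expanding the view definition.
SELECT COUNT(1) AS [value]
FROM dbo.vItemMasterActive WITH (NOEXPAND);
```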

Is this SQL running slowly because of the code? Or the external db calls?

So, this is a pretty simple thing, I should think. It runs without error, but takes forever (about 1.5 hours when limited to 'top 1000' records, and over 5 hours without that limit).
I am relatively new to writing SQL and more accustomed to using GUIs to get my reporting done, but this is needed to make data available for another project so it needs to be setup as an SSIS package to keep stuff up to date.
The intent here is to pull data from the TTS db and park it in a table that lives on another db. TTS is an Oracle db and the working db is MSSQL. I'm running this in SQL Server Management Studio 2008.
My suspicion is that it's dragging because either the MS-to-Oracle thing isn't happy, or because I've built the logic in the Where clause instead of doing table joins in the From clause.
Thoughts?
Insert into
OCC.Workflow_Step (INV_SEQ_ID, DATE_ENTERED, DATE_EXITED, [DESCRIPTION])
Select
INV.INVOICE_SEQ_ID,
Cast(WS.DATE_ENTERED as date) DATE_ENTERED,
Cast(WS.DATE_EXITED as date) DATE_EXITED,
WFS.[DESCRIPTION]
From
TTS..BT51.INVOICE INV,
TTS..BT51.WRK WRK,
TTS..BT51.WRK_STATE WS,
TTS..BT51.WF_STATE WFS
Where
INV.INVOICE_SEQ_ID=WRK.FK_ID
and WRK.WRK_ID=WS.WRK_ID
and WS.WF_STATE_ID=WFS.WF_STATE_ID
and WS.DATE_EXITED is null
;
Thank you for your time and effort. I'm fairly sure this is something I should know, but being new, there are some nuances where I can't tell whether the problem is systematic or just me being me. :/
Updated/troubleshooting code as requested below:
Select count (*)
--INV.INVOICE_SEQ_ID,
--cast(WS.DATE_ENTERED as date) DATE_ENTERED,
--cast(WS.DATE_EXITED as date) DATE_EXITED,
--WFS.[DESCRIPTION]
From
TTS..BT51.INVOICE as INV
Inner join TTS..BT51.WRK as WRK on INV.INVOICE_SEQ_ID=WRK.FK_ID
inner join TTS..BT51.WRK_STATE as WS on WRK.WRK_ID=WS.WRK_ID
inner join TTS..BT51.WF_STATE as WFS on WS.WF_STATE_ID=WFS.WF_STATE_ID
Where
WS.DATE_EXITED is null
;
Okay, I sat down with a guy who knows the Oracle side of this equation better and we came up with the below. Runtime went from over 7 hours to under ten minutes.
select asdf.*
from
openquery([TTS], '
Select
INV.INVOICE_SEQ_ID,
WS.DATE_ENTERED,
WS.DATE_EXITED,
WFS.DESCRIPTION
From
BT51.INVOICE INV
inner join BT51.WRK on INV.INVOICE_SEQ_ID=WRK.FK_ID
inner join BT51.WRK_STATE WS on WRK.WRK_ID=WS.WRK_ID and WS.DATE_EXITED is null
inner join BT51.WF_STATE WFS on WS.WF_STATE_ID=WFS.WF_STATE_ID
') asdf
inner join occ.amo_occ_stg stg on stg.invoice_seq_id=asdf.INVOICE_SEQ_ID
Thank you all for your time and energy on this. It is immensely appreciated. :)

After upgrading from Sql Server 2008 to Sql Server 2016 a stored procedure that was fast is now slow

We have a stored procedure that returns all of the records that fall within a geospatial region ("geography"). It uses a CTE (with), some unions, some inner joins and returns the data as XML; nothing controversial or cutting edge but also not trivial.
This stored procedure has served us well for many years on SQL Server 2008. It has been running within 1 sec on a relatively slow server. We have just migrated to SQL Server 2016 on a super fast server with lots of memory and super fast SSDs.
The entire database and associated application is going really fast on this new server and we are very happy with it. However this one stored procedure is running in 16 sec rather than 1 sec - against exactly the same parameters and exactly the same dataset.
We have updated the indexes and statistics on this database. We have also changed the compatibility level of the database from 100 to 130.
Interestingly, I have re-written the stored procedure to use a temporary table and 'insert' rather than using the CTE. This has brought the time down from 16 sec to 4 sec.
The execution plan does not provide any obvious insights into where a bottleneck may be.
We are a bit stuck for ideas. What should we do next? Thanks in advance.
--
I have now spent more time on this problem than I care to admit. I have boiled the stored procedure down to the following query to demonstrate the problem.
drop table #T
declare @viewport sys.geography=convert(sys.geography,0xE610000001041700000000CE08C22D7740C002370B7670F4624000CE08C22D7740C002378B5976F4624000CE08C22D7740C003370B3D7CF4624000CE08C22D7740C003378B2082F4624000CE08C22D7740C003370B0488F4624000CE08C22D7740C004378BE78DF4624000CE08C22D7740C004370BCB93F4624000CE08C22D7740C004378BAE99F4624000CE08C22D7740C005370B929FF4624000CE08C22D7740C005378B75A5F4624000CE08C22D7740C005370B59ABF462406F22B7698E7640C005370B59ABF462406F22B7698E7640C005378B75A5F462406F22B7698E7640C005370B929FF462406F22B7698E7640C004378BAE99F462406F22B7698E7640C004370BCB93F462406F22B7698E7640C004378BE78DF462406F22B7698E7640C003370B0488F462406F22B7698E7640C003378B2082F462406F22B7698E7640C003370B3D7CF462406F22B7698E7640C002378B5976F462406F22B7698E7640C002370B7670F4624000CE08C22D7740C002370B7670F4624001000000020000000001000000FFFFFFFF0000000003)
declare @outputControlParameter nvarchar(max) = 'a value passed in through a parameter to the stored proc that controls the nature of data to return. This is not the solution you are looking for'
create table #T
(value int)
insert into #T
select 136561 union
select 16482 -- These values are sourced from parameters into the stored proc
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
(
(len(@outputControlParameter) > 0 and GeoServices_Location.GeographicServicesGatewayId in (select value from #T))
or (len(@outputControlParameter) = 0 and GeoServices_Location.Coordinate.STIntersects(@viewport) = 1)
)
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
GO
With the stored procedure boiled down to this, it runs in 0 sec on SQL Server 2008 and 5 sec on SQL Server 2016
http://www.filedropper.com/newserver-slowexecutionplan
http://www.filedropper.com/oldserver-fastexecutionplan
SQL Server 2016 is choking on the geospatial STIntersects call, with 94% of the time spent there. SQL Server 2008 spends its time on a bunch of other steps, including hash matching, parallelism, and other standard stuff.
Remember this is the same database. One has just been copied to a SQL Server 2016 machine and had its compatibility level increased.
To get around the problem I have actually rewritten the stored procedure so that SQL Server 2016 does not choke. It now runs in 250 msec. However, this should not have happened in the first place, and I am concerned that there are other previously finely tuned queries or stored procedures that are now not running efficiently.
Thanks in advance.
--
Furthermore, I had a suggestion to add the trace flag -T6534 as a startup parameter of the service. It made no difference to the query time. I also tried adding option(QUERYTRACEON 6534) to the end of the query, but again it made no difference.
From the query plans you provided I see that the spatial index is not used on the newer server version.
Use a spatial index hint to make sure the query optimizer chooses the plan with the spatial index:
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location with (index ([spatial_index_name]))...
I see that the problem with the hint is the OR operation in the query predicate, so my suggestion with the hint actually won’t help in this case.
However, since the predicate depends on @outputControlParameter, rewriting the query so that these two cases are separated might help (see my proposal below).
Also, from your query plans I see that the plan on SQL 2008 is parallel while on SQL 2016 it is serial. Use option (recompile, querytraceon 8649) to force a parallel plan (this should help if your new super fast server has more cores than the old one).
if (len(@outputControlParameter) > 0)
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
GeoServices_Location.GeographicServicesGatewayId in (select value from #T)
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
option (recompile, querytraceon 8649)
else
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location with (index ([SPATIAL_GeoServices_Location]))
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
GeoServices_Location.Coordinate.STIntersects(@viewport) = 1
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
option (recompile, querytraceon 8649)
- Check the growth of the data/log files on the new server vs the old server configuration: the DB the query is running on, plus tempdb.
- Check the log for I/O buffer errors.
- Check the recovery model of the DBs: simple vs full/bulk-logged.
- Is this consistent behavior? Maybe another process is running during the execution?
- Regarding statistics/indexes: are you sure they were built on a correct data sample? (Look at the plan.)
Many more things can be checked and done, but there is not enough info in this question.

SQL Server - Oracle Linked Server with join

Here is the scenario:
Main DB Server: SQL Server 2008 R2 with a Linked Server to Oracle 11g.
I have a stored procedure that makes a query like:
Select t1.a, t1.b, t2.c, t3.d
From LocalTable t1 inner join LinkedServerName..Schema.Tableb t2 on t1.aNumber = t2.id
inner join LinkedServerName..Schema.Tablec t3 on t2.value = t3.id
inner join LinkedServerName..Schema.Tabled t4 on t1.someOtherNumber = t4.Id
Where t1.WhereValue1 = @Parameter1
and t2.WhereValue2 = @Parameter2
That turns out to be painfully slow. I cannot figure out how to use OPENQUERY to improve the query, since the WHERE clauses use parameters (if that is even possible).
Is there a way to improve the data retrieval? I'm retrieving millions of records from the Oracle DB.
Thank you very much.
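As a hedged sketch (not tested against this schema): one way to push parameter values to Oracle is EXECUTE ... AT, which accepts ? placeholders, so the filtering happens on the Oracle side before rows cross the link. The linked server needs the "RPC Out" option enabled:

```sql
-- Sketch only: pass-through command with a parameter placeholder.
-- The filter runs on Oracle, so only matching rows travel over the link.
EXECUTE ('select b.id, b.value
          from Schema.Tableb b
          where b.WhereValue2 = ?',
         @Parameter2) AT LinkedServerName;
```

The same idea extends to multiple ? placeholders, passed in order after the command string.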
What I suggest you do is at least create a view on the Oracle side that joins tables b, c, and d, and join to that view. How many records are in LocalTable? If there are very few (under 10,000 or so) then you are better off joining the entire thing on the Oracle side.
What is your overall objective? Are you building a report, or trying to identify differing records so you can merge the data? Are you able to make changes on the Oracle side?
