SQL Server 2014 linked server performance issue

When I add a linked server (SQL Server 2012 or an earlier version) to SQL Server 2014 and run a plain select against a linked server table, like (select id, image from [linkedserver].[dbname].[schema].[tablename]), I see a performance impact.
The image column is a blob; with 200 records of roughly 50 KB blob data each (about 10 MB in total), the query takes about a minute to execute.
If I install SQL Server 2012 on the same server, add the same remote server as a linked server, and execute the same query, I get a result about 20 times better (2-3 seconds).
I tried to analyze the query, including statistics, but with no positive result.
Is this a SQL Server 2014-specific bug?
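One comparison worth running (a sketch, not from the original post; the server, database, and table names are placeholders copied from the question) is the four-part name against OPENQUERY, which pushes the whole statement to the remote server and can change how the blob data is fetched:

```sql
-- Four-part name: SQL Server 2014 compiles the query locally and may
-- pull the blob rows back through the linked server provider one by one.
SELECT id, image
FROM [linkedserver].[dbname].[schema].[tablename];

-- OPENQUERY: the pass-through text executes entirely on the remote
-- server, and only the finished result set is shipped back.
SELECT id, image
FROM OPENQUERY([linkedserver],
               'SELECT id, image FROM [dbname].[schema].[tablename]');
```

If the OPENQUERY form runs in seconds on the 2014 instance, the slowdown is in how the local instance drives the provider rather than in the network or the remote server.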

Related

SSIS package stuck in the Pre-Execute stage

We have an SSIS package that occasionally takes a very long time in the Pre-Execute stage.
Background:
We have to pull sales transaction information from a humongous database.
What we are doing in our SSIS package is:
Execute SQL Task: create a global temporary table with a unique clustered index to hold the transaction IDs.
Data Flow Task: a select query that pulls transaction information by joining with the global temporary table created in step 1, and loads it into a target table on another SQL Server.
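Step 1 above presumably looks something like this (a sketch; the table and column names are assumptions, not from the original post):

```sql
-- Global temporary table holding the transaction IDs
-- (##TransactionIds and TransactionId are hypothetical names)
CREATE TABLE ##TransactionIds (
    TransactionId BIGINT NOT NULL
);

-- Unique clustered index so the join in step 2 can seek
CREATE UNIQUE CLUSTERED INDEX IX_TransactionIds
    ON ##TransactionIds (TransactionId);
```

Because ## tables are visible across sessions but are dropped once the creating session ends and no other session references them, both tasks need to run on connections that keep the table alive (e.g. RetainSameConnection=True on the connection manager).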
What is our issue:
Step 2 is stuck in the Pre-Execute stage for a long time. When I run sp_whoisactive on the source server, nothing is running in the humongous database.
When I query the SSIS catalog, it shows the messages below. The last row, Pre-Execute, has been stuck for more than 7 hours; the package generally completes within an hour.
message
Load staging table from HumongousDB:Validation has started.
Load staging table from HumongousDB:Information: Validation phase is beginning.
Load staging table from HumongousDB:Validation is complete.
Load staging table from HumongousDB:Information: Prepare for Execute phase is beginning.
Load staging table from HumongousDB:Information: Pre-Execute phase is beginning.
What we have already done in our package:
Set DelayValidation to True on the Data Flow Task and the connection managers for source and destination.
Set ValidateExternalMetaData to False on the Data Flow Task source and destination.
Our SSIS Catalog SQL Server version is:
Microsoft SQL Server 2016
(SP2-CU15) (KB4577775) - 13.0.5850.14 (X64) Sep 17 2020 22:12:45
Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on
Windows Server 2012 Datacenter 6.2 (Build 9200: ) (Hypervisor)
Can you please guide us on what we can do to avoid this issue?

SQL Server 2019: why is generating the explain plan taking so much time (hanging)?

Situation
The same query over the same data volume on a new server (same hardware specs: processors, RAM, SSD disks, etc.) runs in 8 seconds on SQL Server 2016 and takes more than 3 hours on SQL Server 2019.
Step by step
Installed a new SQL Server 2019 instance on a new server, to be the new production environment. Same number of processors, same memory, SSD disks, data on one disk, logs on another, etc.
Migrated the tables, views, stored procedures, data, and indexes, and rebuilt all the indexes.
Executed the ETL, reading from the production source; all OK, execution times within parameters.
Configured the reporting tool (which generates SQL against the database); all OK.
Then: problems with some reports.
Copied the SQL into Management Studio to debug. Just generating the explain plan of this query takes 8 seconds on SQL Server 2016, but several minutes on SQL Server 2019 (after 5 minutes, I cancelled the request).
Why?
Then I:
checked the memory ("Available physical memory is high")
rebuilt the indexes
confirmed that the disks were SSD
generated the explain plan while checking whether the CPUs were being used (monitor)
updated the statistics (exec sp_updatestats)
installed CU9 and restarted SQL Server 2019 (not the server)
cut the query down to be able to generate the explain plan on both servers
compared the explain plans (2016 vs 2019) and changed "Cost Threshold for Parallelism" and "Max Degree of Parallelism" to 0, because 2016 used parallelism and 2019 did not. Same problem.
used a HINT to force parallelism, but got the same execution times again
then, out of nowhere and without the HINT, it started using parallelism on the short explain plan, but I was still unable to generate the complete explain plan
the query was reading from ## tables, so I created normal tables in the database; same problem
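For reference, the two server-level settings changed in the list above are normally adjusted like this (a sketch; 0 is the value the poster reports using for both):

```sql
-- Server-level parallelism settings (advanced options must be visible first)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cost threshold 0: every plan is eligible for parallelism
EXEC sp_configure 'cost threshold for parallelism', 0;
RECONFIGURE;

-- MAXDOP 0: let SQL Server use all available schedulers
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;
```

Note that these affect plan choice at execution time; they do not directly speed up plan compilation, which is the phase that is hanging here.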
Bottom line
For me, it is strange how much time SQL Server 2019 needs to generate the explain plan, while SQL Server 2016 only needs a couple of seconds.
How can I troubleshoot this?
I have experienced a very similar problem with SQL Server 2019 (RTM-CU16-GDR) on Windows 2019.
The query was a simple one, like "select count(*) from Schema1.Table1 where report_date='2022-01-23' and type = 2 and DueDate='2022-03-18'". I just tried to view the estimated execution plan, but it took 3 minutes. Digging into the details, I realized that a statistic had been created automatically for DueDate. Once the statistic existed, plan generation took just a few seconds. When I removed the statistic, it took 3 minutes again. When I created the statistic on DueDate manually, plan generation took a few seconds, which was very good indeed.
To find a solution I turned AUTO_CREATE_STATISTICS off and then back on, after which it behaved normally and plan generation took a few seconds. Here is the script.
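The manual statistic described above would look something like this (a sketch; the table and column names are from the post, the statistic name is an assumption):

```sql
-- Manually create a single-column statistic on DueDate
-- (st_DueDate is a hypothetical name)
CREATE STATISTICS st_DueDate
    ON Schema1.Table1 (DueDate);

-- To reproduce the slow case, drop it again:
-- DROP STATISTICS Schema1.Table1.st_DueDate;
```

With the statistic in place the optimizer reads it during compilation instead of building one (or stalling) on the fly, which matches the few-seconds behavior reported.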
ALTER DATABASE [DbName] SET AUTO_CREATE_STATISTICS OFF
GO
ALTER DATABASE [DbName] SET AUTO_CREATE_STATISTICS ON
GO
After this simple toggle OFF and ON, even after removing the specific statistic on the column, the estimated plan was generated in seconds instead of minutes.

MS Access database on Office 365 64-bit hangs intermittently with linked Oracle tables

We have an MS Access database hosted on a desktop that pulls data through linked Oracle tables from a database hosted on AWS in a different region.
The MS Access DB with linked tables hangs (Not Responding) intermittently, every 1-2 minutes, in the following situations, even when we are not performing any operation on the table:
keeping any linked table open
clicking on any cell within an open linked table
navigating from one row to another in a linked table using the scroll bar
The MS Access DB size is around 45 MB.
Network latency is 200+ ms from the client desktop to the source database.
Oracle DB version: 12c.
MS Access version: Office 365 64-bit MSO (16.0.12730.20394).
MS Access connects to the Oracle database using an ODBC driver.
Any insight/guidance is most welcome, since we have been struggling with this issue for the last month.

SSIS - data flow stuck at Execution phase when using Attunity Oracle Source

I am using the Attunity Oracle drivers to connect to an Oracle database on a remote server, retrieve data, and dump it into an Excel file.
Everything works fine in Visual Studio (BIDS); from VS I can connect directly to the remote Oracle server and retrieve the data.
But when I deploy this ETL to my production server (64-bit Windows Server 2008 & SQL Server 2012), the ETL always gets stuck at the Execution phase. After running for some time (20-30 minutes), it gives the following warning and keeps running without raising any errors:
[SSIS.Pipeline] Information: The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers. 0 buffers were considered and 0 were locked.
Either not enough memory is available to the pipeline because not enough is installed, other processes are using it, or too many buffers are locked.
Some more info:
I have checked server memory; only 3 GB of the total 12 GB is in use.
I have already set SQL Server to use a maximum of 8 GB.
I am using a SQL Server Agent job to run the ETL periodically, every 15 minutes.
I have tried stopping all other ETLs on the server and running this ETL through the Execute Package Utility, but the result is the same.
I am using a date range in the Oracle query to retrieve the data; when the query for a particular date range returns no data, the ETL execution always succeeds!
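The 8 GB cap mentioned above is normally set like this (a sketch; 8192 MB matches the post):

```sql
-- Cap the SQL Server database engine's memory at 8 GB
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```

Note that this limits only the database engine's buffer pool: the SSIS runtime (DTExec) is a separate process whose data flow buffers come out of the remaining OS memory, which is why the pipeline can still report virtual memory pressure even when the engine is within its cap.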
Progress log (Execute Package Utility) -
Any pointers/suggestions?
I hope I have been able to describe the issue properly.
Update (5 Mar 2014):
I tried reducing the amount of data I am retrieving, and the ETL was successful.
I have also set DefaultBufferSize to 10 MB (max size).
But if the query data exceeds DefaultBufferSize, why does the package succeed on my development machine but not on the server?
Thanks,
Prateek

Informatica: reduced performance for a simple one-to-one mapping when the target changes from Oracle to SQL Server

I have a simple Informatica (9.1) one-to-one mapping that loads data from a flat file into an RDBMS.
It takes 5 minutes to load into an Oracle DB and 20 minutes to load the same file into SQL Server 2008 R2.
Are there any sources/pointers for performance improvement?
A few things I can think of:
For both tests, is the file local to the app server running the mapping?
Is the connection/distance between the app server and the two database servers comparable?
Is the "Target load type" of the target in the Session Properties set to "Bulk"?
Check the thread statistics in the session log to understand whether the bottleneck is writing to the DB or reading from the file.
Is the PowerCenter server installed on the Oracle DB server? Is it the same case with SQL Server, i.e. are SQL Server and the PowerCenter server on the same box?
Is the mapping using an ODBC or a native connection to the DB?
