SSIS - Data flow stuck at Execution Phase while using Attunity Oracle Source

I am using the Attunity Oracle drivers to connect to an Oracle database on a remote server, retrieve data, and dump it into an Excel file.
Everything works fine in Visual Studio BIDS. From VS I can connect directly to the remote Oracle server and retrieve the data.
But when I deploy this ETL to my production server (64-bit Windows Server 2008 & SQL Server 2012), the ETL always gets stuck at the Execution phase. After running for some time (20-30 minutes), it gives the following warning and keeps running without raising any errors:
[SSIS.Pipeline] Information: The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers. 0 buffers were considered and 0 were locked.
Either not enough memory is available to the pipeline because not enough is installed, other processes are using it, or too many buffers are locked.
Some more info:
I have checked server memory; only 3 GB of the total 12 GB is in use.
I have already set SQL Server to use a maximum of 8 GB (see the sketch just after this list).
I am using a SQL Server Agent job to run the ETL periodically, every 15 minutes.
I have tried stopping all other ETLs on the server and running this ETL through the Execute Package Utility, but the result is the same.
I am using a date range in the Oracle query to retrieve the data; when the query for a particular date range returns no data, the ETL execution always succeeds!
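A minimal sketch of how that 8 GB cap is set with sp_configure (the values here are illustrative; the same change can be made from the server properties dialog in SSMS):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- cap SQL Server's memory so the SSIS runtime has room to allocate its own buffers (value is in MB)
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
GO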
Progress log (Execute Package Utility): (screenshot omitted)
Any pointers/suggestions?
I hope I have been able to describe the issue properly.
Update (5/Mar/2014) -
I tried reducing the amount of data I am retrieving, and the ETL was successful.
I have also set the DefaultBufferSize to 10 MB (which is actually its default value, not the maximum).
But if the query data exceeds the DefaultBufferSize, then why does the package succeed on my development machine but not on the server?
Thanks,
Prateek

Related

Best way to transfer data IN BATCHES/BULK from Oracle 19c to SQL Server 2016 using SSIS

We have a legacy process that runs on SSIS 2016 on Windows Server 2016, executes custom queries against databases on remote servers, pulls the results (thousands or sometimes millions of records) and stores them in a local SQL Server database. These other databases are on DB2 and Oracle 19c.
This process has always connected using an OLE DB driver and a data flow with OLE DB source and destination components. It also has always been slow.
Because of an article we read recently claiming that OLE DB transfers only one record at a time, whereas with ADO.NET the network transfer can be done in batches (is this even true?), we decided to try an ADO.NET driver to connect to DB2 and to replace the OLE DB source and destination components with ADO.NET components.
The transfer we were using as a test case, which involved 46 million records, basically flew, and we could see it bring down around 10K records at a time. Something that used to run in 13 hours ran in 24 minutes with no other changes. Some small tweaks to the query brought that time down even further, to 11 minutes.
This is obviously major and we want to be able to replicate it with our Oracle data sources. Network bandwidth seems to have been the main issue, so we want to be able to transfer data from Oracle 19c to our SQL Server 2016 databases using SSIS in batches, but want to ask the experts what the best/fastest way to do this is.
Is the Microsoft Connector for Oracle the way to go as far as the driver goes? Since we're not on SQL Server 2019, this article says we also need to install the Oracle Client and the Microsoft Connector Version 4.0 for Oracle by Attunity. What exactly is the Oracle Client? Is it one of these? If so, which one, based on our setup?
Also, should we use ADO.NET components in the data flow just like we did with DB2? In other words, is the single-record vs. record-batches difference driven by the driver used to connect, by the type of components in the data flow, or do both need to go hand in hand for this to work?
Thanks in advance for your responses!
OLEDB connections are not slow by themselves - it's a matter of what features the driver has available to it. It sounds like the ADO.NET driver for DB2 allows bulk insert and the OLEDB one does not.
Regarding Oracle, the Attunity driver is the way to go. You'll need to install the Oracle driver as well. The links that you have look correct to me, but I don't have access to test them.
Also, please note that data flows batch data by default in increments of the buffer size, 10k rows for example.

SQL Server 2019: Why is the generation of the explain plan taking so much time (hanging)?

Situation
The same query with the same data volume, on a new server with the same hardware specs (processors, RAM, SSD disks, etc.), runs in 8 seconds on SQL Server 2016 and takes more than 3 hours on SQL Server 2019.
Step by step
Installed a new SQL Server 2019 database on a new server, to be the new production environment. Same number of processors, same memory, SSD disks, data on one disk, logs on another, etc.
Migrated the tables, views, stored procedures, the data, and the indexes; rebuilt all the indexes.
Executed the ETL, reading from the production source, and all is OK; execution times are within parameters.
Configured the reporting tool (which generates SQL over the database), all OK.
Problem with some reports.
Copied the SQL to Management Studio to debug: just generating the explain plan of this query takes 8 seconds on SQL Server 2016, but several minutes on SQL Server 2019 (after 5 minutes, I cancelled the request).
Why?
Then I:
checked the memory ("Available physical memory is high")
rebuilt the indexes
confirmed that the disks were SSD
executed the explain plan and checked whether the CPUs were being used (monitor)
updated the statistics (exec sp_updatestats)
installed CU9 and restarted SQL Server 2019 (not the server)
cut the query down to be able to generate the explain plan on both servers
compared explain plans (between 2016 and 2019) and changed "Cost Threshold for Parallelism" and "Max Degree of Parallelism" to 0, because 2016 used parallelism and 2019 did not (see the sketch after this list). Same problem.
used a HINT to force parallelism, but got the same execution times again
then, out of nowhere and without the HINT, it was now using parallelism on the short explain plan, but still unable to generate the complete explain plan
the query was reading from ## (global temporary) tables, so I created normal tables in the database; same problem.
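For reference, a rough sketch of the checks and settings from the list above (the DMV query is where the "Available physical memory is high" message comes from; the sp_configure values shown are the ones described above and are illustrative only):
-- memory state as reported by SQL Server
SELECT total_physical_memory_kb, available_physical_memory_kb, system_memory_state_desc
FROM sys.dm_os_sys_memory;
GO
-- refresh statistics on all user tables
EXEC sp_updatestats;
GO
-- parallelism settings: 0 for "max degree of parallelism" lets SQL Server use all schedulers,
-- 0 for "cost threshold for parallelism" makes every plan eligible for parallelism
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 0;
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;
GO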
Bottom line
To me, the amount of time SQL Server 2019 needs to generate the explain plan is strange, when SQL Server 2016 only needs a couple of seconds.
How can I troubleshoot this?
I have experienced a very similar problem with SQL Server 2019 (RTM-CU16-GDR) on Windows 2019.
The query was a simple one like "select count(*) from Schema1.Table1 where report_date='2022-01-23' and type = 2 and DueDate='2022-03-18'". I just tried to see the estimated execution plan, but it took 3 minutes. When I went into the details, I realized that a statistic had been created automatically for DueDate; once that statistic existed, plan generation took just a few seconds. When I removed the statistic, it took 3 minutes again. When I created the statistics on DueDate manually, plan generation took a few seconds, which was very good indeed.
To find a solution I turned AUTO_CREATE_STATISTICS off and back on, and then it behaved normally; plan generation took a few seconds. Here is the script:
ALTER DATABASE [DbName] SET AUTO_CREATE_STATISTICS OFF
GO
ALTER DATABASE [DbName] SET AUTO_CREATE_STATISTICS ON
GO
After this simple, silly OFF-and-ON toggle, even after removing the specific statistic on the column, the estimated plan was generated in seconds instead of minutes.
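For completeness, a sketch of the manual statistics creation mentioned above, using the table and column names from the example query (the statistic name is arbitrary; adapt everything to your own schema):
-- create the column statistic by hand instead of waiting for auto-create to kick in
CREATE STATISTICS ST_Table1_DueDate
ON Schema1.Table1 (DueDate)
WITH FULLSCAN;
GO
-- list the statistics on the table, including auto-created ones (names starting with _WA_Sys_)
SELECT s.name, s.auto_created, c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
  ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.object_id = OBJECT_ID('Schema1.Table1');
GO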

SSIS not running in parallel with OraOLEDB.Oracle.1 Provider

We had one SSIS package with the Oracle 11 client; our daily query would run in 30 minutes to 1 hour.
We had to upgrade our Oracle clients because one of our other Oracle sources got upgraded.
After the upgrade to Oracle 12c, our daily job run time increased.
The Oracle DBA said it is not running in parallel, as it occupies only one processor.
When we run the same query from SQL Developer or Toad, it runs in parallel, but when we run it from the SSIS OLEDB Source component, it does not run in parallel.
I'm clueless about this behavior; any solution would be helpful.
Ask me for more clarification if required.
Trying to figure out the issue
I tried to search on this topic; I didn't find a lot of information, but I think it comes down to the OLEDB connection string provided in the OLEDB Connection Manager.
Check the following Oracle documentation; it may give you some insights:
Features of OraOLEDB
In the link above, in the Distributed Transactions part, they mentioned that:
The DistribTX attribute specifies whether sessions are enabled to enlist in distributed transactions. Valid values are 0 (disabled) and 1 (enabled). The default is 1 which indicates that sessions are enabled for distributed transaction enlistments.
Sessions enabled for distributed transaction enlistments cannot run statements that use the direct path load and parallel DML capabilities of the Oracle database. Such statements are executed as conventional path serial statements.
I am not sure if this will help, but it is worth a try.
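As a rough illustration only (untested), disabling distributed-transaction enlistment would mean adding DistribTX=0 to the provider connection string in the OLEDB Connection Manager, along the lines of:
Provider=OraOLEDB.Oracle.1;Data Source=MyTnsAlias;User ID=myuser;Password=*****;DistribTX=0;
Here MyTnsAlias and myuser are placeholders for your own connection details.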
Oracle Attunity Connectors
Instead of using an OLEDB Source to read from Oracle, it is better to use the Oracle Attunity Connectors for SSIS, which offer higher performance than the OLEDB Source:
Microsoft Connectors By Attunity
Attunity's high speed connectors for Oracle and Teradata have been selected by Microsoft to be included with SQL Server Integration Services (SSIS).

OCISessionBegin hangs in multithreaded COM+ applications [Delphi + ODAC + Oracle]

We have an application that uses ODAC components inside COM+ DLLs to connect to Oracle Server 11g.
Lately we have been facing a problem for which we cannot find a solution.
For some reason, when the concurrency on the application server at some of our clients is too high, some DLLs start to hang and they have to kill the process to restore the usability of our product. To reproduce the error here at our office, we created a test environment to stress an application server: we start 30-50 programs that make calls to the application, and after some time the problem appears.
Debugging our DLL after the server hangs shows that any subsequent call to OCISessionBegin cannot complete. No error is generated. No other symptoms are visible.
The last line that we try to execute is Check(OCISessionBegin(...)); in OraClasses.pas.
We checked the database: no contention, no locks.
We are using ODAC 6 at our clients, but we upgraded it to the latest version and the problem persists. We have to use Oracle Client 10 to connect to the 11g database because we are using version 6 of ODAC.
Thanks a lot
AFAIK you need to create your OCI environment with both the OCI_EVENTS and OCI_THREADED attributes set, in such a configuration.
For instance, here is how it is initialized in our Open Source direct Oracle access unit:
fEnvironmentInitializationMode := OCI_EVENTS or OCI_THREADED;
...
with OCI do
try
  if fEnv=nil then
    // will use UTF-8 encoding by default, in a multi-threaded context
    // OCI_EVENTS is needed to support Oracle RAC Connection Load Balancing
    EnvNlsCreate(fEnv,Props.EnvironmentInitializationMode,
      nil,nil,nil,nil,0,nil,OCI_UTF8,OCI_UTF8);
I suspect you have to check how your OCI environment is created in ODAC.

Informatica: reduced performance for a simple one-to-one mapping when the target changes from Oracle to SQL Server

I have a simple Informatica (9.1) one-to-one mapping which loads data from a flat file to an RDBMS.
It takes 5 minutes to load into an Oracle DB and 20 minutes to load the same file into SQL Server 2008 R2.
Are there any sources/pointers for performance improvement?
A few things I can think of:
For both tests, is the file local to the app server running the mapping?
Is the connection/distance between the app server and the data servers comparable?
Is "Target load type" of the Target in the Session Properties set to "Bulk"?
Check the thread statistics in the session log to understand if the issue is while writing to db or while reading from file.
Is the PC (PowerCenter) server installed on the Oracle DB server? Is it the same case with SQL Server? Are SQL Server and the PC server on the same box?
Is the mapping using an ODBC or a native connection to the DB?
