I'm trying to load a table from SQL Server using the Microsoft OLE DB Provider into an Oracle table (using the Oracle Provider for OLE DB). The package is a straightforward OLE DB Source (SQL Server) -> OLE DB Destination (Oracle).
I'm using SQL Server 2008 R2 and Oracle 11g.
Every time I run the package, I get a different number of rows in the destination table, and BIDS reports fewer rows read than there are in the source table. I get no errors or kickouts, but the source and destination boxes remain yellow even after BIDS says "Package completed successfully".
Dumping the source table into a flat file instead of the Oracle destination works fine, and I get all the rows that I expect. I can use this flat file to pull the information into the Oracle destination table without problems as well.
Even though I have a workaround, I want to understand why this is happening and what I can do to resolve the problem without having to use flat files.
Edit: It looks like even using the flat file to Oracle doesn't bring over all the rows. Was the first time just luck?
Edit/Update: Running the package out of Integration Services (not BIDS) seems to have eliminated the problem (tested three times). Still don't understand why this is happening though.
Related
We have a legacy process that runs on SSIS 2016 on Windows Server 2016, executes custom queries against databases on remote servers, pulls the results (thousands or sometimes millions of records) and stores them in a local SQL Server database. These other databases are on DB2 and Oracle 19c.
This process has always connected using an OLE DB driver and a data flow with OLE DB source and destination components. It has also always been slow.
We recently read an article claiming that OLE DB transfers only one record at a time over the network, while with ADO.NET the transfer can be done in batches (is this even true?), so we decided to try an ADO.NET driver to connect to DB2 and replace the OLE DB source and destination components with ADO.NET components.
The transfer we used as a test case, which involved 46 million records, basically flew: we could see it bring down around 10K records at a time. Something that used to run in 13 hours ran in 24 minutes with no other changes, and some small tweaks to the query brought that time even lower, to 11 minutes.
This is obviously major and we want to be able to replicate it with our Oracle data sources. Network bandwidth seems to have been the main issue, so we want to be able to transfer data from Oracle 19c to our SQL Server 2016 databases using SSIS in batches, but want to ask the experts what the best/fastest way to do this is.
Is Microsoft Connector for Oracle the way to go as far as driver? Since we're not on SQL Server 2019, this article says we also need to install the Oracle Client and Microsoft Connector Version 4.0 for Oracle by Attunity. What exactly is the Oracle Client? Is it one of these? If so, which one, based on our setup?
Also, should we use ADO.NET components in the data flow just like we did with DB2? In other words, is the single-record vs. batched-records difference driven by the driver used to connect, by the type of components in the data flow, or do both need to go hand in hand for this to work?
Thanks in advance for your responses!
OLE DB connections are not slow by themselves - it's a matter of what features the driver has available. It sounds like the ADO.NET driver for DB2 allows bulk insert and the OLE DB one does not.
Regarding Oracle, the Attunity driver is the way to go. You'll need to install the Oracle client as well. The links you have look correct to me, but I don't have access to test.
Also, please note that data flows batch data by default in increments of the buffer size - 10K rows, for example.
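If you want to experiment with that, the batching is controlled by the DefaultBufferMaxRows and DefaultBufferSize properties of each Data Flow Task. As a rough sketch, you could override one at run time with dtexec (the package and task names here are made up, and I haven't verified this exact property path on your setup):

dtexec /f MyPackage.dtsx /set "\Package\Data Flow Task.Properties[DefaultBufferMaxRows];50000"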
Afternoon Folks,
We are using Visual Studio 2013 and have an SSIS package that we are creating. We have a simple Data Flow Task that essentially takes some data from SQL Server and pushes it through to an Access database. It has three data flow items:
I have an OLE DB Source (obtains the SQL data via a select statement) --> Data Conversion (converts SQL Server data types to Access) --> OLE DB Destination (Access database)
The steps that select the SQL data and convert it work fine.
The issue we have is with the SQL command that we are using to update the Access 2010 database.
We have tried to create and run a simple UPDATE statement to update a couple of fields with hardcoded data, but this doesn't update anything. We have also tried creating a stored procedure and then executing it within the SQL command line in the OLE DB Destination Editor.
We can see from posts on the net that we can create a procedure in Access 2010 and use this. We are also using the Native OLE DB\Microsoft Office 12.0 Access Database Engine OLE DB Provider. This connection tests successfully.
We can write a SELECT statement within the SQL command line and this does pull back data. We just seem to have a problem with the UPDATE and/or CREATE PROCEDURE. In turn, we are unable to populate the Mappings: the destination box is displayed, but no fields appear within it.
We have had a good look around on the internet but we are struggling to find a solution.
Here is a sample of the code, in the form of the UPDATE statement we are trying to get working:
UPDATE ReferenceFields
INNER JOIN Addresses
ON ReferenceFields.ID = Addresses.ID
SET ReferenceFields.Reference2 = CustomerName,
ReferenceFields.Reference3 = telephone
WHERE Addresses.UPRN = 12345678910
If I parse the query it is successful, but when I select Mappings a warning is displayed:
Error at Data Flow Task [OLE DB Destination [136]]: No column
information was returned by the SQL command
https://msdn.microsoft.com/EN-US/library/office/ff845861.aspx
https://msdn.microsoft.com/en-us/library/ms141044.aspx
SSIS OLEDB destination with SQL command (Insert if not exists)
After some time I have managed to answer this question myself.
I added a unique parameter for each field I need to update, using the # character to mark them.
UPDATE ReferenceFields
INNER JOIN Addresses
ON ReferenceFields.ID = Addresses.ID
SET ReferenceFields.Reference2 = #CustomerName,
ReferenceFields.Reference3 = #telephone
WHERE UPRN = #UPRN
This runs through the code and updates the MS Access database.
Trying to restore an Oracle .dmp file in SQL Server throws the below error:
The media family on device is incorrectly formed. (Error 3241)
I tried loading the .dmp file as a .bak file and restoring the database. It did not work.
The only way I know to extract from a dmp file is to use the INDEXFILE parameter of IMP; this will generate a readable SQL script with the DDL and DML.
However, this script is often not 100% usable, as it usually wraps the statements, so some pre-processing may be required: for example, split the file into discrete statements (INSERT, CREATE), join each statement onto a single line, then squirt the result into the target database. Having said that, you would probably need to pre-process anyway to translate the Oracle dialect into SQL Server's.
Also, this might not be so good for BLOB/binary data.
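For reference, a minimal sketch of that IMP invocation (credentials and file names are placeholders):

imp scott/tiger@orcl file=export.dmp indexfile=extract.sql full=y

The resulting extract.sql can then be pre-processed as described above.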
The other, indirect way to do this would be to create a bridge Oracle database, import the file into it, then use the normal extract-and-load tools to push the data into SQL Server.
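As a sketch of the last leg of that bridge approach, assuming a linked server named ORABRIDGE has been configured on the SQL Server side against the bridge Oracle database (all object names here are hypothetical):

-- T-SQL on the SQL Server side: pull a whole Oracle table across the linked server
SELECT *
INTO dbo.my_table
FROM OPENQUERY(ORABRIDGE, 'SELECT * FROM scott.my_table');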
A *.dmp file in Oracle is nothing but a backup file. You mean restoring an Oracle DB backup file in SQL Server.
AFAIK, the answer is no - you can't do that. You could check whether there is any third-party utility with which you can perform a DB migration.
The dmp file comes in an Oracle specific format that cannot be parsed/interpreted by anything other than Oracle's imp tool. So, that means you cannot import the dmp file into SQL Server.
Of course there are ways to transfer data from Oracle to SQL Server, but which one is optimal depends on your needs: the amount of data, number of tables, number of Oracle schemas, data types, etc.
I'm running into a problem while linking some tables and views in an Oracle 11g database to an Access 2007 file.
I'm using the Oracle Client (SQORA32.DLL) version 11.02.00.03.
If the view/table returns a small amount of data, there's no problem. The problem happens when the view or table returns a "large" amount of data. I've tried increasing the buffer size on the driver (the default is 64000) to see if that would help. I've also removed the "Enable query timeout" option - otherwise I would get a "Query cancelled by user" or an "ODBC - Call Failed" error.
In order to link the tables/views, I've used the "native tool" (External Data -> ODBC Database -> Link to data source by creating a linked table).
I was wondering if I could retrieve the data from the tables/views using VBA. Sometimes, I (read "I" as "the users") may need to update data in some tables (control tables).
Please let me know your thoughts.
EDIT: Our goal with this project was to migrate from SQL Server 2005 to Oracle 11gR2. After analyzing the behaviour of the Access files against SQL Server, I've concluded that the results show up like a "cursor" - if you scroll down in the result window, it loads more.
I think this may be the issue because, AFAIK, Oracle (the driver, maybe?) pulls everything from the DB and only then populates MS Access.
It's been a long time since this, so here is the solution: MS Access has a flag on the ODBC connection called "Treat Float as Numeric". This did the trick.
I'm developing an application which runs on an Oracle database. I'm now in the process of creating an installation package which will contain some SQL scripts containing the default data that comes with the program.
Some of the tables have BLOB columns which need to contain MS Word documents. I'm not sure how to get these BLOBs into the SQL scripts. I know I could do it through Data Pump, but it is a requirement that all database stuff is included in plain text SQL files.
Does anyone know how to get these BLOBs into an SQL script which the client can just run?
Thanks!
I solved this problem by creating a PHP script that runs as part of the installation process - it loops through all my Word documents and inserts them into the database. I would still rather have SQL scripts or something similar, but this works for now.
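For what it's worth, a plain-SQL alternative sketch: if the Word documents are shipped alongside the scripts and placed in a directory the database server can read, a PL/SQL block can load them with DBMS_LOB.LOADFROMFILE, no PHP required. The directory path, table and column names here are all hypothetical:

-- Point a directory object at wherever the installer unpacks the .doc files
CREATE OR REPLACE DIRECTORY doc_dir AS '/install/docs';

DECLARE
  src_file  BFILE := BFILENAME('DOC_DIR', 'default_template.doc');
  dest_blob BLOB;
BEGIN
  -- documents(id, doc) is a hypothetical table with a BLOB column
  INSERT INTO documents (id, doc)
  VALUES (1, EMPTY_BLOB())
  RETURNING doc INTO dest_blob;

  DBMS_LOB.OPEN(src_file, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(dest_blob, src_file, DBMS_LOB.GETLENGTH(src_file));
  DBMS_LOB.CLOSE(src_file);
  COMMIT;
END;
/

The catch is that the documents still have to exist as files on the server at install time, so this only half-satisfies the "everything in plain-text SQL" requirement.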