I have a question about Oracle's ODP.NET BulkCopy method.
Does anyone have any idea how this method is implemented?
I want to know if it uses array binding...
Thank you!
Consult the documentation:

The ODP.NET Bulk Copy feature uses a direct path load approach, which is similar to, but not the same as, Oracle SQL*Loader. Using direct path load is faster than conventional loading (using conventional SQL INSERT statements). Direct path load formats Oracle data blocks and writes the data blocks directly to the data files, which is how Bulk Copy eliminates considerable processing overhead.

For more, see the documentation.
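So per the documentation it is direct path load, not client-side array binding. For contrast, a conventional array-style load, which is what the documentation compares direct path load against, looks roughly like this PL/SQL sketch (the table emp_stage and its columns are hypothetical):

-- Conventional array-bound load: one round trip to the server, but
-- every row still goes through normal INSERT processing.
DECLARE
  TYPE t_ids   IS TABLE OF emp_stage.id%TYPE;
  TYPE t_names IS TABLE OF emp_stage.name%TYPE;
  l_ids   t_ids   := t_ids(1, 2, 3);
  l_names t_names := t_names('A', 'B', 'C');
BEGIN
  FORALL i IN 1 .. l_ids.COUNT
    INSERT INTO emp_stage (id, name) VALUES (l_ids(i), l_names(i));
  COMMIT;
END;
/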
I may have missed it, but it looks like Snowflake only lets the user define JavaScript UDFs. I don't dislike JavaScript per se, but I have a package containing PL/SQL stored procedures and a couple of functions. I'd like to run these on Snowflake, but would rather not need to convert everything to JavaScript.
Especially because I can't do something like
INSERT INTO...
but now need to do something like
var sql = 'INSERT INTO...';
snowflake.execute({sqlText: sql});
Most of the PL/SQL inserts into one table based on a select from another. Some functions do bulk fetches. Is there an easier way?
Though Snowflake SQL does not support PL/SQL or native SQL cursors, there are options that can be leveraged for your scenario. Please take a look at the links below. Also be aware that Snowflake's real processing power, in terms of performance, comes from processing data in bulk instead of row by row.
https://community.snowflake.com/s/question/0D50Z00009f7StWSAU/i-have-written-below-cursor-in-sql-and-working-file-but-i-am-not-able-to-run-the-same-cursor-on-snowflake-please-help
https://docs.snowflake.com/en/user-guide/python-connector-example.html
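As a rough sketch of how a set-based PL/SQL INSERT ... SELECT translates, the statement can be wrapped in a JavaScript stored procedure; the procedure and table names here (copy_src_to_tgt, src, tgt) are made up:

CREATE OR REPLACE PROCEDURE copy_src_to_tgt()
  RETURNS STRING
  LANGUAGE JAVASCRIPT
AS
$$
  // One set-based statement moves all rows at once, which plays to
  // Snowflake's bulk-processing strengths.
  var stmt = snowflake.createStatement({
    sqlText: "INSERT INTO tgt SELECT * FROM src"
  });
  stmt.execute();
  return "done";
$$;

CALL copy_src_to_tgt();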
Snowflake does not support PL/SQL, which is proprietary to Oracle. Looks like the recommended approach is to rewrite the procedures in Python and use Snowflake's Python API.
https://redpillanalytics.com/so-you-want-to-migrate-to-snowflake-part-2/
https://support.snowflake.net/s/question/0D50Z00008nRRhdSAG/i-am-migrating-oracle-plsql-code-into-snowflake-which-is-the-best-way-to-implement-this-using-java-api-or-python-api
As an organization we are moving towards the purchase of ODI as an ELT tool.
We have plenty of PL/SQL resources, but I have heard ODI is powerful enough at data manipulation to replace much of what was previously done in PL/SQL.
What are its strengths? And weaknesses?
And can it completely do away with coding the data transformation in PLSQL?
No, it can't completely do away with PL/SQL, though you might be 99% correct here.
It's actually a tricky question, as PL/SQL might be submitted by ODI too.
I would reserve PL/SQL for defining functions/procedures (if you REALLY need to) that are later called by ODI.
These should NEVER be anything immediately related to ETL, like INSERT INTO … SELECT … FROM … - that's where ODI fits the bill perfectly.
The only justified cases I came across during my ODI experience (9 yrs) were:
- creating a PL/SQL function to authenticate (and later authorize through OBIEE) an LDAP/AD user
- creating helper functions to be called later by ODI DQ (CKM) modules, like is_number or is_date (a sketch follows below)
- creating XML files directly in the DB (even with the newer ODI XML driver, you might still find it's best to use the native DB XML API/functionality to produce XML) - for performance reasons. Other direct file operations (load/unload) can be done the same way.
- creating my own (optimized) hierarchy traversal query for performance reasons (it beat the standard Oracle SQL 'Recursive Subquery Factoring' feature by about 1000:1)
It's up to you whether you want to make a reusable piece of logic in PL/SQL and call it from ODI, or code it in ODI directly (in PL/SQL form).
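For illustration, a helper like the is_number check mentioned above might look like this minimal PL/SQL sketch (the name and signature are assumptions, not anything ODI ships):

CREATE OR REPLACE FUNCTION is_number(p_value IN VARCHAR2)
  RETURN NUMBER
IS
  l_num NUMBER;
BEGIN
  -- If the conversion succeeds, the input is numeric.
  l_num := TO_NUMBER(p_value);
  RETURN 1;
EXCEPTION
  WHEN VALUE_ERROR OR INVALID_NUMBER THEN
    RETURN 0;
END is_number;
/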
I am using the Derby in-memory DB. I need to load data from csv files at startup. For now, it takes about 25 seconds to load all the csv files into their tables. I hope this time can be reduced, since the data files are not actually very large.
What I have done is use the built-in import procedure from Derby:
{CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (?,?,?,',','"','UTF-8',1 )} or
{CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (?,?,?,',','"','UTF-8',0 )}
The only special thing is that sometimes the data for one table is split across many small csv files, so I have to load them one by one. I have tested that if I combine them into one file, loading takes only 16 seconds. However, I cannot remove this feature because users need it.
Is there anything I can do to reduce the loading time? Should I disable logging, write a user-defined function/procedure, or is there some other tuning that can be done? Any advice is welcome.
Thanks!
Use H2 instead of Derby, and use the CSVREAD feature. If that's still too slow, see the fast import optimization, or use the CSV tool directly (without using a database). Disclaimer: I wrote the CSV support for H2.
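For reference, the CSVREAD approach looks roughly like this in H2 (the table name and file paths are made up):

-- Create and populate a table straight from a CSV file.
CREATE TABLE orders AS SELECT * FROM CSVREAD('data/orders.csv');

-- Or append additional files into the existing table.
INSERT INTO orders SELECT * FROM CSVREAD('data/orders-part2.csv');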
I have to develop a program that performs data manipulation (retrieving, updating, and inserting data) across multiple tables. Which approach will be more suitable and faster in performance: a DataSet object, or a stored procedure with a CURSOR object? Please point me in the right direction. Thank you all!
Data manipulation is typically faster when done in the DB in the stored procedure.
Unless there is a reason you have to do the manipulation within the application, do it on the DB itself.
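To illustrate, a single set-based statement inside a stored procedure avoids pulling rows into the application and pushing changes back one by one. A minimal sketch in PL/SQL, with hypothetical products and repriced_items tables:

CREATE OR REPLACE PROCEDURE apply_price_update IS
BEGIN
  -- One set-based UPDATE instead of a row-by-row cursor loop:
  -- the join and the writes all happen inside the database.
  UPDATE products p
     SET p.price = p.price * 1.10
   WHERE EXISTS (SELECT 1
                   FROM repriced_items r
                  WHERE r.product_id = p.product_id);
  COMMIT;
END apply_price_update;
/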
I suggest going for an ORM approach like Entity Framework, LINQ to SQL, or NHibernate; you get both better performance and greater development speed.
I want to load the data that is returned by executing views in Oracle into a text file. How can I achieve this in Oracle using UNIX? I want to do this in Oracle on a unix box. Please help me out, as it already consumes a lot of time.
Your early response is highly appreciated!!
As Thomas asked, we need to know what you are doing with the "flat file". For example, if you're loading it into a spreadsheet or doing some other processing that expects a defined format, then you need to use SQL*Plus and spool to a file. If you're looking to save a table (data + table definition) to move it to another Oracle database, then EXP/IMP is the tool to use.
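A minimal spool script, assuming a view named my_view and an output path of your choosing:

-- Run from SQL*Plus; the SET commands keep the output clean.
SET PAGESIZE 0
SET LINESIZE 1000
SET FEEDBACK OFF
SET HEADING OFF
SET TRIMSPOOL ON
SPOOL /tmp/my_view.txt
SELECT * FROM my_view;
SPOOL OFF
EXIT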
We generally describe the data retrieval process as "selecting" from a table/view, not "executing" a table/view.
If you have access to directories on the database server, and authority to create "Directory" objects in Oracle, then you have lots of options.
For example, you can use the UTL_FILE package (part of the PL/SQL built-ins) to read or write files at the operating system level.
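A sketch of writing a view's rows with UTL_FILE, assuming a directory object named DATA_DIR already exists and a hypothetical view my_view with two columns:

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- DATA_DIR must be an Oracle directory object you can write to.
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'my_view.txt', 'w');
  FOR rec IN (SELECT col1, col2 FROM my_view) LOOP
    UTL_FILE.PUT_LINE(l_file, rec.col1 || ',' || rec.col2);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/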
Or use the "external table" functionality to define objects that look like single tables to Oracle but are actually flat files at the OS level. Well documented in the Oracle docs.
Also, for one-time tasks, most of the tools for working with SQL and PL/SQL provide facilities for moving data to and from the database. In the Windows environment, Toad's good at that. So is Oracle's free SQL Developer, which runs on many platforms. You wouldn't want to use those for a process that runs every day, but they're fine for one-off moves. I've generally found these easier to use than SQL*Plus spooling, which is a more primitive version of the same functionality.
As stated by others, we need to know a bit more about what you're trying to do.