Is there an Oracle equivalent of mysqldump?

Is there a way to dump the contents of an Oracle table into a file formatted as INSERT statements? I can't use oradump as it is under the GPL. I will be running this from a Perl CGI script. I am looking for something that dumps data directly from the Oracle server using a single command. Running a SELECT and creating the INSERT statements in Perl is too slow, as there will be a lot of data.
I know I can probably do this using the SPOOL command and a PL/SQL block on the server side. But is there a built-in command to do this instead of formatting the INSERT statements myself?

Generating large numbers of INSERT statements will likely be slow no matter how you do it, and it will be slow to execute all the inserts as well. Why are you doing this? A more efficient solution, if you can't use a tool like Data Pump, would be to generate a text file you could later import with SQL*Loader.
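For instance, a SQL*Plus script along these lines (a sketch, assuming a hypothetical table MY_TABLE with columns ID, NAME and CREATED) spools a pipe-delimited file that SQL*Loader or an external table can load back in:

set heading off feedback off pagesize 0 linesize 32767 trimspool on
spool /tmp/my_table.dat
select id || '|' || name || '|' || to_char(created, 'YYYY-MM-DD HH24:MI:SS')
from   my_table
order by id;
spool off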

The short answer is: no.
The performance of generating those INSERT statements can be very positively influenced by using bulk fetches. There is a good chance that DBI supports bulk fetches; check it out and experiment with it. I also wrote a little program called fun that generates SQL*Loader files in Pro*C. It is not the best code, but you can fetch it from a recent blog post I wrote: http://ronr.blogspot.com/2010/11/proc-and-xcode-32-how-to-get-it-working.html In the article I explained how to get Pro*C working on a Mac using Xcode, and the program, by coincidence, was fun (Fast Un Load). It almost does what you want; you can adjust it a little...
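If you stay inside the database instead, the same bulk-fetch idea can be sketched in PL/SQL with BULK COLLECT ... LIMIT. This hypothetical block (a table MY_TABLE with ID and NAME is assumed, and DBMS_OUTPUT stands in for a spooled file) pulls 1,000 rows per fetch while emitting INSERT statements:

set serveroutput on size unlimited
declare
  cursor c is select id, name from my_table;
  type t_rows is table of c%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 1000;  -- 1,000 rows per fetch
    exit when l_rows.count = 0;
    for i in 1 .. l_rows.count loop
      dbms_output.put_line(
        'insert into my_table (id, name) values (' ||
        l_rows(i).id || ', ''' || replace(l_rows(i).name, '''', '''''') || ''');');
    end loop;
  end loop;
  close c;
end;
/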
I hope it helps.

Related

Porting Oracle PL/SQL to Snowflake without JavaScript

I may have missed it, but it looks like Snowflake only lets the user define JavaScript UDFs. I don't dislike JavaScript per se, but I have a package containing PL/SQL stored procedures and a couple of functions. I'd like to run these on Snowflake, but would rather not need to convert everything to JavaScript.
Especially because I can't do something like
INSERT INTO...
but now need to do something like
var sql='INSERT INTO...'
Snowflake.execute (sql);
Most of the PL/SQL inserts into one table based on a select from another query. Some functions do bulk fetches. Is there an easier way?
Though Snowflake SQL does not support PL/SQL or native SQL cursors, there are options which can be leveraged for your scenario. Please take a look at the links below. Also be aware that Snowflake's real processing power, in terms of performance, comes when data is processed in bulk instead of row by row.
https://community.snowflake.com/s/question/0D50Z00009f7StWSAU/i-have-written-below-cursor-in-sql-and-working-file-but-i-am-not-able-to-run-the-same-cursor-on-snowflake-please-help
https://docs.snowflake.com/en/user-guide/python-connector-example.html
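For example, a row-by-row PL/SQL insert loop usually collapses into one set-based statement that Snowflake SQL runs directly, with no JavaScript wrapper; a sketch with hypothetical SRC and TGT tables:

insert into tgt (id, name, total_amount)
select id, name, sum(amount)
from   src
where  status = 'ACTIVE'
group by id, name;

MERGE INTO ... USING ... is also available in Snowflake if the logic is an upsert rather than a plain insert.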
Snowflake does not support PL/SQL, which is proprietary to Oracle. Looks like the recommended approach is to rewrite the procedures in Python and use Snowflake's Python API.
https://redpillanalytics.com/so-you-want-to-migrate-to-snowflake-part-2/
https://support.snowflake.net/s/question/0D50Z00008nRRhdSAG/i-am-migrating-oracle-plsql-code-into-snowflake-which-is-the-best-way-to-implement-this-using-java-api-or-python-api

Sql huge insert script

I took a backup of a table in the form of an insert script using Toad for Oracle. I could not use that script in Toad to perform the inserts because of its huge size. Is there a way that I can run the huge script using Toad?
1. Reduce network time by running the script on the server. Chances are the vast majority of the time is spent waiting for the network. Normally each INSERT statement is a separate round-trip.
2. Reduce network time by batching the inserts. Wrap a begin and end; around a large number of inserts (see the sketch below). A PL/SQL block only requires one round-trip. Note that you probably cannot put the entire script in a single anonymous block, as there are parsing limits; you will get DIANA errors with anonymous blocks larger than roughly a few megabytes.
3. Run the code indirectly. Maybe just loading the file in Toad is the problem? Run a script that simply calls that script, perhaps something like @my_script.sql?
Without knowing more about Toad or what the script looks like, I cannot say for sure whether these will work. But I've used these approaches with similar issues; there is usually a way to make simplistic install scripts run more than 10 times faster.
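To illustrate point 2, the generated script can be chopped into chunks of, say, a thousand INSERTs, each wrapped like this (hypothetical table and values; the whole block travels to the server as a single call):

begin
  insert into my_table (id, name) values (1, 'Alpha');
  insert into my_table (id, name) values (2, 'Beta');
  -- ... roughly a thousand inserts per block ...
  insert into my_table (id, name) values (1000, 'Omega');
  commit;
end;
/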
Try running the script in SQL*Plus using '@'.
1. From the View menu, show the Project Manager.
2. Add SQL files to the project.
3. Select the files, right-click and choose Execute.

JDBC query to Oracle

We are planning to migrate our DB to Oracle. We need to manually check that each piece of embedded SQL works in Oracle, as a few statements may follow different SQL rules. Now my need is very simple.
I need to browse through a file which may contain queries like this.
String sql = "select * from test where name="+test+"and age="+age;
There are nearly 1000 files, and each file has different kinds of queries like this, from which I have to pluck out the query alone, which I have done with a Unix script. But I need to convert these Java-based queries to Oracle-compatible queries.
i.e.
select * from test where name="name" and age="age"
Basically I need to check the syntax of the queries this way. I have seen something like this in TOAD, but I have more than 1000 files and can't manually change each one. Is there a way?
I will explain more if the question is not clear.
For performance and security reasons you should use PreparedStatement bind parameters (the setXxx methods) rather than string concatenation to build your SQL strings.
I don't know of a way to tackle this problem other than fixing the code that needs to be fixed. If you can find common patterns then you can automate some of the editing using find/replace or sed or some other tool, as long as you diff the result before checking it in.
If there are thousands of files I guess that there is a reasonable sized team that built the code this way. It seems fair to share the workload out amongst the people that built the system, rather than dump it all on one person. Otherwise you will end up as the "SQL fixing guy" and nobody else on the team will have any incentive to write SQL code in a more portable way.
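In JDBC the placeholders are ? marks filled in with setString/setInt; the equivalent of the example query with bind variables, which you can also try from SQL*Plus against the (hypothetical) TEST table, looks like this:

variable b_name varchar2(30)
variable b_age  number
exec :b_name := 'John'
exec :b_age  := 30

select * from test where name = :b_name and age = :b_age;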
Does your current application execute SQL through a common class? Could you add some logging to print out the raw SQL in this common class? From that output you could write a small script to run each statement against Oracle.
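Once you have the raw SQL, you can also ask Oracle to parse each statement without executing it, which is enough to flag anything Oracle rejects. A minimal sketch using DBMS_SQL (the statement shown is just a stand-in for one of your captured queries; filter out DDL first, since DBMS_SQL.PARSE executes DDL immediately):

set serveroutput on
declare
  c     integer;
  l_sql varchar2(4000) := 'select * from test where name = :b_name and age = :b_age';
begin
  c := dbms_sql.open_cursor;
  dbms_sql.parse(c, l_sql, dbms_sql.native);  -- raises an error if Oracle rejects the statement
  dbms_sql.close_cursor(c);
  dbms_output.put_line('OK: ' || l_sql);
exception
  when others then
    dbms_output.put_line('FAILED (' || sqlerrm || '): ' || l_sql);
    if dbms_sql.is_open(c) then
      dbms_sql.close_cursor(c);
    end if;
end;
/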

load data into text file from oracle database views

I want to load data into a text file that is generated after executing "views" in Oracle. How can I achieve this in Oracle using UNIX? For example:
I want the same in Oracle on a UNIX box. Please help me out, as it already consumes lots of time.
Your early response is highly appreciated!
As Thomas asked, we need to know what you are doing with the "flat file". For example, if you're loading it into a spreadsheet or doing some other processing that expects a defined format, then you need to use SQL*Plus and spool to a file. If you're looking to save a table (data + table definition) for moving it to another Oracle database, then EXP/IMP is the tool to use.
We generally describe the data retrieval process as "selecting" from a table/view, not "executing" a table/view.
If you have access to directories on the database server, and authority to create "Directory" objects in Oracle, then you have lots of options.
For example, you can use the UTL_FILE package (part of the PL/SQL built-ins) to read or write files at the operating system level.
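For instance, a sketch along these lines (a hypothetical directory object DATA_DIR pointing at a server-side directory, and a hypothetical view MY_VIEW) writes the view's rows out as a delimited text file:

create or replace directory data_dir as '/u01/app/exports';  -- needs CREATE ANY DIRECTORY

declare
  f utl_file.file_type;
begin
  f := utl_file.fopen('DATA_DIR', 'my_view.txt', 'w');
  for r in (select col1, col2 from my_view) loop
    utl_file.put_line(f, r.col1 || '|' || r.col2);
  end loop;
  utl_file.fclose(f);
end;
/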
Or use the "external table" functionality to define objects that look like single tables to Oracle but are actually flat files at the OS level. Well documented in the Oracle docs.
Also, for one-time tasks, most of the tools for working with SQL and PL/SQL provide facilities for moving data to and from the database. In the Windows environment, Toad is good at that. So is Oracle's free SQL Developer, which runs on many platforms. You wouldn't want to use those for a process that runs every day, but they're fine for single moves. I've generally found these easier to use than SQL*Plus spooling, but that's a primitive version of the same functionality.
As stated by others, we need to know a bit more about what you're trying to do.

External Tables vs SQLLoader

So, I often have to load data into holding tables to run some data validation checks and then return the results.
Normally, I create the holding table, then a sqlldr control file and load the data into the table, then I run my queries.
Is there any reason I should be using external tables for this instead?
In what way will they make my life easier?
The big advantage of external tables is that we can query them from inside the database using SQL. So we can just run the validation checks as SELECT statements without the need for a holding table. Similarly if we need to do some manipulation of the loaded data it is almost always easier to do this with SQL rather than SQLLDR commands. We can also manage data loads with DBMS_JOB/DBMS_SCHEDULER routines, which further cuts down the need for shell scripts and cron jobs.
However, if you already have a mature and stable process using SQLLDR then I concede it is unlikely you would realise tremendous benefits from porting to external tables.
There are also some cases - especially if you are loading millions of rows - where the SQLLDR approach may be considerably faster. However, the difference will not be as marked with more recent versions of the database. I fully expect that SQLLDR will eventually be deprecated in favour of external tables.
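For comparison, the control file effectively becomes a one-off piece of DDL plus ordinary SQL; a sketch assuming a hypothetical directory object DATA_DIR and a pipe-delimited file staging.dat sitting in it:

create table staging_ext (
  id     number,
  name   varchar2(100),
  amount number
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by '|'
    missing field values are null
  )
  location ('staging.dat')
)
reject limit unlimited;

-- the validation checks then run straight against the file
select count(*) from staging_ext where name is null;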
If you look at the External Table syntax, it looks suspiciously like SQL*Loader control file syntax :-)
If your external table is going to be repeatedly used in multiple queries, it might be faster to load a table (as you're doing now) rather than rescan your external table for each query. As @APC notes, Oracle is making improvements in them, so depending on your DB version, YMMV.
I would use external tables for their flexibility.
It's easier to modify the data source on them to be a different file: alter table ... location ('my_file.txt1','myfile.txt2') (see the sketch below).
You can do multitable inserts, merges, run it through a pipelined function etc...
Parallel query is easier ...
It also establishes dependencies better ...
The code is stored in the database so it's automatically backed up ...
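For example (hypothetical names, reusing the staging_ext sketch above and assuming ORDERS_CLEAN and ORDERS_REJECT tables with matching columns):

alter table staging_ext location ('orders_2024_01.dat', 'orders_2024_02.dat');

insert all
  when amount >= 0 then into orders_clean  (id, name, amount) values (id, name, amount)
  when amount <  0 then into orders_reject (id, name, amount) values (id, name, amount)
select id, name, amount from staging_ext;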
Another thing that you can do with external tables is read compressed files. If your files are gzip compressed for example, then you can use the PREPROCESSOR directive within your external table definition, to decompress the files as they are read.
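A sketch of that directive (assuming a recent database version, a hypothetical executable directory EXEC_DIR containing zcat, and a gzipped data file; the preprocessor is handed each LOCATION file name and whatever it writes to standard output is what Oracle reads):

create table staging_gz_ext (
  id     number,
  name   varchar2(100),
  amount number
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    preprocessor exec_dir:'zcat'
    fields terminated by '|'
    missing field values are null
  )
  location ('staging.dat.gz')
)
reject limit unlimited;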
