SQL huge insert script - Oracle

I took a backup of a table in the form of an INSERT script using Toad for Oracle. I could not use that script in Toad to perform the inserts because of its huge size. Is there a way that I can run the huge script using Toad?

1. Reduce network time by running the script on the server. Chances are the vast majority of the time is spent waiting for the network. Normally each INSERT statement is a separate round-trip.
2. Reduce network time by batching the inserts. Wrap a begin and end; around a large number of inserts, as sketched below. A PL/SQL block only requires one round-trip. Note that you probably cannot put the entire script in a single anonymous block, as there are parsing limits: you will get DIANA errors with anonymous blocks larger than roughly a few megabytes.
3. Run the code indirectly. Maybe just loading the file in Toad is the problem? Run a script that simply calls that script, perhaps something like @my_script.sql?
Without knowing more about Toad or what the script looks like, I cannot say for sure whether these will work. But I've used these approaches for similar issues; there is usually a way to make simplistic install scripts run more than 10 times faster.
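For example, a minimal sketch of the batching idea in point 2 (the table name and columns are made up for illustration); each anonymous block is a single round-trip instead of one per INSERT:

begin
  insert into my_table (id, name) values (1, 'first row');
  insert into my_table (id, name) values (2, 'second row');
  -- ... a few thousand more inserts, keeping each block well below the parser limit
end;
/
begin
  insert into my_table (id, name) values (5001, 'next batch');
  -- ... the next batch, and so on
end;
/
commit;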

Try running the script in SQL*Plus using '@'.
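For instance, from SQL*Plus on (or near) the database server (the script name my_inserts.sql is a placeholder):

SQL> set define off
SQL> @my_inserts.sql

SET DEFINE OFF is worth including so that any '&' characters in the inserted data are not treated as substitution variables.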

1. From the View menu, show the Project Manager.
2. Add the SQL files to the project.
3. Select the files, right-click, and choose Execute.

Related

How to profile pgSQL functions?

I have a script written in pgSQL that moves records from one table to others. The logic is quite sophisticated: it involves lots of SELECT queries. For readability, the script is divided into many functions calling one another.
I would like to find a bottleneck in the script.
I tried to use pgbadger to find the most time-consuming queries. I enabled logging according to the instructions from https://github.com/dalibo/pgbadger/blob/master/README (POSTGRESQL CONFIGURATION section). But the problem is that pgbadger relies on logs, and the logs are at the level of pgSQL functions, not individual SELECT statements. So the information I get is that running a given function took X seconds. I would like to see a chart that shows how long each SELECT query ran. Is there a way to do that without reorganizing the script?

How can I easily analyze an Oracle package's execution for performance issues?

I have a PL/SQL package in an 11g R2 DB that has a fair number of related procedures & functions. I execute a top-level function and it does a whole lot of stuff. It is currently processing < 10 rows per second. I would like to improve the performance, but I am not sure which of the routines to focus on.
This seems like a good job for Oracle's "PL/SQL hierarchical profiler (DBMS_HPROF)". Being the lazy person that I am, I was hoping that either SQLDeveloper v3.2.20.09 or Enterprise Manager would be able to do what I want with a point & click interface, but I cannot find this.
Is there an "easy" way to analyze the actual PL/SQL code in a procedure/package?
I have optimized the queries in the package using the "Top Activity" section of Enterprise Manager, looking at all of the queries that were taking a long time. Now all I have is a single "Execute PL/SQL" showing up, and I need to break that down into at least the procedures & functions that are called, and what percentage of the time they are taking.
The PL/SQL Hierarchical Profiler, documented here, is actually very easy to run, assuming that you have the necessary privileges.
Basically you just need to run a PL/SQL block like this:
begin
  dbms_hprof.start_profiling('PLSHPROF_DIR', 'test.trc');
  your_top_level_procedure;
  dbms_hprof.stop_profiling;
end;
The plshprof utility will generate HTML reports from the raw profiler output file (test.trc in the example above).
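If you would rather stay inside the database, a hedged alternative to plshprof is DBMS_HPROF.ANALYZE, which loads the raw trace into the DBMSHP_* profiler tables (created by $ORACLE_HOME/rdbms/admin/dbmshptab.sql) so you can query the results with plain SQL; the directory and file names below simply reuse the example above:

declare
  l_runid number;
begin
  -- parse the trace file and load it into the DBMSHP_* tables
  l_runid := dbms_hprof.analyze('PLSHPROF_DIR', 'test.trc');
  dbms_output.put_line('runid = ' || l_runid);
end;
/

-- then, for that run id, list the most expensive subprograms first
select function, calls, function_elapsed_time, subtree_elapsed_time
from   dbmshp_function_info
where  runid = :runid
order by subtree_elapsed_time desc;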

Is there an Oracle equivalent of mysqldump

Is there a way to dump the contents of an Oracle table into a file formatted as INSERT statements? I can't use oradump as it is on the GPL. I will be running it from a Perl CGI script. I am looking for something to dump data directly from the Oracle server using a single command. Running a SELECT and creating the INSERT statements in Perl is too slow, as there will be a lot of data.
I know I can probably do this using the spool command and a PL/SQL block on the server side. But is there a built-in command to do this instead of formatting the INSERT statements myself?
Generating large numbers of INSERT statements will likely be slow no matter how you do it, and it will be slow to execute all the inserts as well. Why are you doing this? A more efficient solution, if you can't use a tool like Data Pump, would be to generate a text file you could later import with SQL*Loader.
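As a rough illustration of that approach, a SQL*Plus script along these lines (the table and its columns are made up) spools a flat file that SQL*Loader or an external table can load later:

set pagesize 0 heading off feedback off termout off trimspool on linesize 32767
spool /tmp/my_table.dat
select id || ',' || name || ',' || amount from my_table;
spool off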
The short answer is: no.
The performance of generating those INSERT statements can be very positively influenced by using bulk fetches. There is a good chance that DBI supports bulk fetches; check it out and experiment with it. I also wrote a little program in Pro*C called fun (Fast UnLoad) that generates SQL*Loader files. It is not the best code, but you can fetch it from a recent blog post I wrote: http://ronr.blogspot.com/2010/11/proc-and-xcode-32-how-to-get-it-working.html In the article I explained how to get Pro*C working on a Mac using Xcode, and the program used there happened to be fun. It almost does what you want, and you can adjust it a little.
I hope it helps.

How can I limit memory usage when generating a CSV from a large resultset?

I have a web application in Spring that has a functional requirement for generating a CSV/Excel spreadsheet from a result set coming from a large Oracle database. The expected rows are in the 300,000 to 1,000,000 range. Time to process is not as large an issue as keeping the application stable; right now, very large result sets cause it to run out of memory and crash.
In a normal situation like this, I would use pagination and have the UI display a limited number of results at a time. However, in this case I need to be able to produce the entire set in a single file, no matter how big it might be, for offline use.
I have isolated the issue to the ParameterizedRowMapper being used to convert the result set into objects, which is where I'm stuck.
What techniques might I be able to use to get this operation under control? Is pagination still an option?
A simple answer:
Use a JDBC recordset (or something similar, with an appropriate array/fetch size) and write the data back to a LOB, either temporary or back into the database.
Another choice:
Use PL/SQL in the database to write a file in CSV format for your recordset using UTL_FILE. As the file will be on the database server, not on the client, use UTL_SMTP or JavaMail via Java Stored Procedures to mail the file. After all, I'd be surprised if someone was going to watch the hourglass turn over repeatedly while waiting for a 1 million row recordset to be generated.
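A hedged sketch of that UTL_FILE idea, assuming a directory object called CSV_DIR and a made-up table BIG_TABLE(id, name, amount); rows are written one at a time, so memory use stays flat regardless of the result set size:

declare
  l_file utl_file.file_type;
begin
  l_file := utl_file.fopen('CSV_DIR', 'big_table.csv', 'w', 32767);
  for r in (select id, name, amount from big_table) loop
    utl_file.put_line(l_file, r.id || ',' || r.name || ',' || r.amount);
  end loop;
  utl_file.fclose(l_file);
end;
/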
Instead of loading the entire file into memory, you can process each row individually and use an output stream to send the output directly to the web browser. For example, with the Servlet API you can get the output stream from ServletResponse.getOutputStream() and then simply write the resulting CSV lines to that stream.
I would push back on those requirements; they sound pretty artificial.
What happens if your application fails, or the power goes out before the user looks at that data?
From your comment above, it sounds like you know the answer: you need filesystem or Oracle access in order to do your job.
You are being asked to generate some data, something that is not repeatable by SQL?
If it were repeatable, you would just send pages of data back to the user at a time.
Since this report, I'm guessing, has something to do with the current state of your data, you need to store that result somewhere if you can't stream it out to the user. I'd write a stored procedure in Oracle; it's much faster not to send data back and forth across the network. If you have special tools, or it's just easier, it sounds like there's nothing wrong with doing it on the Java side instead.
Can you schedule this report to run once a week?
Have you considered the performance of an Excel spreadsheet with 1,000,000 rows?

Oracle - timed sampling from v$session_longops

I am trying to track performance on some procedures that run too slow (and seem to keep getting slower). I am using v$session_longops to track how much work has been done, and I have a query (sofar/((v$session_longops.LAST_UPDATE_TIME-v$session_longops.start_time)*24*60*60)) that tells me the rate at which work is being done.
What I'd like to be able to do is capture the rate at which work is being done and how it changes over time. Right now, I just re-execute the query manually, and then copy/paste to Excel. Not very optimal, especially when the phone rings or something else happens to interrupt my sampling frequency.
Is there a way to have a script in SQL*Plus run a query every n seconds, spool the results to a file, and then continue doing this until the job ends?
(Oracle 10g)
Tanel Poder's snapper script does a wonderful job of actively monitoring performance.
It has parameters for
<seconds_in_snap> - the number of seconds between taking snapshots
<snapshot_count> - the number of snapshots to take ( maximum value is power(2,31)-1 )
It uses PL/SQL and a call to DBMS_LOCK.SLEEP.
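If you only need something quick and dirty rather than snapper itself, a rough sketch of the same idea is below. The LONGOPS_SAMPLES table is an assumption you would create yourself, and this needs a direct grant on v$session_longops plus execute privilege on DBMS_LOCK:

-- one-time setup: an empty copy of the view plus a sample timestamp
create table longops_samples as
  select sysdate sample_time, l.* from v$session_longops l where 1 = 0;

begin
  for i in 1 .. 60 loop                  -- take 60 samples
    insert into longops_samples
      select sysdate, l.*
      from   v$session_longops l
      where  l.sofar < l.totalwork;      -- only operations still in progress
    commit;
    dbms_lock.sleep(10);                 -- wait 10 seconds between samples
  end loop;
end;
/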
If you can live with running PL/SQL instead of a SQL*Plus script, you could consider using the Oracle Scheduler. See chapters 26, 27, and 28 of the Oracle Database Administrator's Guide.
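As a hedged illustration of the Scheduler route, a job like the following (reusing the hypothetical LONGOPS_SAMPLES table from the sketch above) takes a snapshot every 10 seconds until you disable or drop it:

begin
  dbms_scheduler.create_job(
    job_name        => 'LONGOPS_SNAPSHOT',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          insert into longops_samples
                            select sysdate, l.* from v$session_longops l
                            where  l.sofar < l.totalwork;
                          commit;
                        end;',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    enabled         => true);
end;
/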
