I am writing a data conversion in PL/SQL that processes data and loads it into a table. According to the PL/SQL Profiler, one of the slowest parts of the conversion is the actual insert into the target table. The table has a single index.
To prepare the data for load, I populate a variable using the rowtype of the table, then insert it into the table like this:
insert into mytable values r_myRow;
It seems that I could gain performance by doing the following:
Turn logging off during the insert
Insert multiple records at once
Are these methods advisable? If so, what is the syntax?
It's much better to insert a few hundred rows at a time, using PL/SQL tables and FORALL to bind into the insert statement, as in the sketch below.
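A minimal FORALL sketch, assuming your target table is mytable and a hypothetical source view my_source_view supplies rows matching mytable's shape (both names are illustrative only):

declare
  type t_rows is table of mytable%rowtype;
  l_rows t_rows;
  cursor c_src is select * from my_source_view; -- hypothetical source
begin
  open c_src;
  loop
    fetch c_src bulk collect into l_rows limit 500; -- a few hundred rows per batch
    exit when l_rows.count = 0;
    forall i in 1 .. l_rows.count
      insert into mytable values l_rows(i); -- one bulk-bound insert per batch
  end loop;
  close c_src;
  commit;
end;
/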
Also be careful with how you construct the PL/SQL tables. If at all possible, prefer to do all of your transforms directly in SQL using "INSERT INTO t1 SELECT ...", as doing row-by-row operations in PL/SQL will still be slower than SQL.
In either case, you can also use direct-path inserts with INSERT /*+ APPEND */, which basically bypasses the buffer cache and directly allocates and writes new blocks to the data files. This can also reduce the amount of logging, depending on how you use it. It has other implications too, so please read the fine manual first.
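For example (a sketch: t1, its columns, and staging_table are all placeholders):

insert /*+ append */ into t1 (col1, col2)
select col1, col2 from staging_table; -- direct-path: blocks written above the high-water mark
commit; -- required before the session can read or modify t1 again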
Finally, if you are truncating and rebuilding the table it may be worthwhile to first drop (or mark unusable) and later rebuild indexes.
Regular insert statements are the slowest way to get data in a table and not meant for bulk inserts. The following article references a lot of different techniques for improving performance: http://www.dba-oracle.com/oracle_tips_data_load.htm
Drop the index, then insert the rows, then re-create the index.
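A sketch of that sequence, with my_index and mytable as hypothetical names:

drop index my_index;
-- ... run the inserts ...
create index my_index on mytable (col1) nologging; -- re-create once the load is done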
If dropping the index doesn't speed things up enough, you need Oracle SQL*Loader:
http://www.oracle.com/technology/products/database/utilities/htdocs/sql_loader_overview.html
Suppose your columns are eid, ename, sal, and job. First create the table:
SQL> create table tablename(eid number, ename varchar2(20), sal number, job char(10));
Now insert the data (SQL*Plus will prompt for the substitution variables on each run):
SQL> insert into tablename values(&eid, '&ename', &sal, '&job');
Check this link:
http://www.dba-oracle.com/t_optimize_insert_sql_performance.htm
The main point to consider for your case is to use the APPEND hint, as this will append directly into the table above the high-water mark instead of using the freelists. If you can afford to turn off logging, then combine the APPEND hint with NOLOGGING.
Use a bulk insert instead of iterating row by row in PL/SQL.
Use SQL*Loader to load the data directly into the table if you are getting the data from a file feed; a sketch follows below.
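A hedged SQL*Loader sketch, with every name hypothetical (the feed file, control file, table, and columns):

-- load_data.ctl
LOAD DATA
INFILE 'feed.dat'
APPEND
INTO TABLE mytable
FIELDS TERMINATED BY ','
(eid, ename, sal, job)

Then invoke it with direct path enabled:
sqlldr userid=scott/tiger control=load_data.ctl direct=true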
Here are my recommendations for fast inserts.
Trigger - Disable any triggers associated with the table; enable them again after the inserts are complete.
Index - Drop the index and re-create it after your inserts are complete.
Stale stats - Re-analyze table and index stats afterwards.
Index de-fragmentation - Rebuild the index if needed.
Use No Logging - Insert using INSERT /*+ APPEND */ into a NOLOGGING table (Oracle only). This is a risky approach: minimal redo is generated, so the inserted data cannot be recovered by media recovery. Make a backup of the table before you start and don't try this on live tables. Check whether your database has a similar option.
Parallel Insert - Running the insert in parallel will get the job done faster.
Use Bulk Insert - Bind in bulk rather than inserting row by row.
Constraints - Not much overhead during inserts, but still a good idea to check, if it is still slow even after step 1.
You can learn more at http://www.dbarepublic.com/2014/04/slow-insert.html. A sketch combining several of these steps follows below.
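A hedged sketch combining several of the recommendations above; every object name is hypothetical:

alter trigger mytable_trg disable;
alter index mytable_idx unusable;
alter session set skip_unusable_indexes = true;
alter session enable parallel dml;
insert /*+ append parallel(t, 4) */ into mytable t
select * from staging_table;
commit;
alter index mytable_idx rebuild nologging;
alter trigger mytable_trg enable;
exec dbms_stats.gather_table_stats(user, 'MYTABLE', cascade => true)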
Maybe one of your best options is actually to avoid the Oracle utilities as much as possible.
I've been baffled by this myself, but very often a Java process can outperform many of the Oracle utilities, which either use OCI (read: SQL*Plus) or take up so much of your time to get right (read: SQL*Loader).
This doesn't prevent you from using specific hints either (like /*+ APPEND */).
I've been pleasantly surprised each time I've turned to that kind of solution.
In my business case, I need to insert rows one at a time and can't use batch inserts, so I want to know what throughput Oracle can achieve. I tried these approaches:
Effective:
I used multiple threads, each thread owning one connection to insert data.
I stored the Oracle data files on SSD.
Ineffective:
I used multiple tables in one schema to store the data.
I used table partitioning.
I used multiple schemas to store the data.
I increased the data file block size.
I used the APPEND hint in the insert SQL.
In the end, the best throughput was just over 10,000 inserts per second.
Other details:
Oracle 11g
Single insert data size: 1 KB
CPU: i7, 64 GB memory
Oracle is highly optimized for anything from one row inserts to batches of hundreds of rows. You do not mention whether you are having performance problems with this one row insert nor how long the insert takes. For such a simple operation, you don't need to worry about any of those details. If you have thousands of web-based users inserting one row into a table every minute, no problem. If you are committing your work at the appropriate time, and you don't have a huge number of indexes, a single row insert should not take more than a few milliseconds.
In SQL*Plus try the commands
set autotrace on explain statistics
set timing on
and run your insert statement.
Edit your question to include the results of the explain plan. And be sure to indent the results 4 spaces.
I need to delete a large amount of data from my database on a regular basis. The process generates a huge volume of archive logs. We had a database crash at one point because there was no storage space available at the archive destination. How can I avoid generating archive logs while I delete data?
The data to be deleted is already marked as inactive in the database. Application code ignores inactive data. I do not need the ability to rollback the operation.
I cannot partition the data in such a way that inactive data falls in one partition that can be dropped. I have to delete the data with delete statements.
I can ask DBAs to set certain configuration at table level/schema level/tablespace level/server level if needed.
I am using Oracle 11g.
What proportion of the data in the table would be deleted, and what volume? Are there any referential integrity constraints to manage, or is this table childless?
Depending on the answers, you might consider:
"CREATE TABLE keep_data UNRECOVERABLE AS SELECT * FROM ... WHERE
[keep condition]"
Then drop the original table
Then rename keep_data to the original table name
Rebuild the indexes (again with UNRECOVERABLE to prevent redo), constraints, etc.
The problem with this approach is that it's a multi-step DDL process, which you will have a job to make fault-tolerant and reversible.
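A sketch of those steps, with my_table, keep_data, the keep condition, and the index all as placeholders (NOLOGGING is the modern spelling of UNRECOVERABLE):

create table keep_data nologging as
  select * from my_table where active = 'Y'; -- hypothetical keep condition
drop table my_table;
rename keep_data to my_table;
create index my_table_idx on my_table (id) nologging; -- plus constraints, grants, etc.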
A safer option might be to use data-pump to:
Data-pump expdp to extract the "Keep" data
TRUNCATE the table
Data-pump impdp import of data from step 1, with direct-path
At this point I suggest you read the Oracle manual on Data Pump, particularly the section on Direct Path Loads to be sure this will work for you.
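A hedged sketch of those Data Pump steps, assuming a directory object DP_DIR already exists and active = 'Y' stands in for the keep condition (shell quoting around the QUERY clause varies by platform):

expdp scott/tiger directory=dp_dir dumpfile=keep.dmp tables=my_table query=my_table:"where active = 'Y'"

Then, in SQL*Plus:
truncate table my_table;

Then re-import the kept rows (Data Pump uses direct path where it can):
impdp scott/tiger directory=dp_dir dumpfile=keep.dmp tables=my_table table_exists_action=append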
My preferred option would be partitioning.
Of course, the best way would be TenG's solution (CTAS, drop and rename table), but it seems that's impossible for you.
Your only problems are the amount of archive logs and the risk of a database crash. In this case, maybe you could break your delete statement into chunks (for example, 10,000 rows at a time).
Something like:
declare
  e number;
  f number;
begin
  select count(*) into e from myTable where [delete condition];
  f := trunc(e / 10000) + 1;
  for i in 1 .. f
  loop
    delete from myTable where [delete condition] and rownum <= 10000;
    commit;
    dbms_lock.sleep(600); -- pause so old archive logs can be purged, if possible
  end loop;
end;
/
After this operation, you should reorganize your table, which will surely be fragmented.
Alter the table to NOLOGGING, delete the rows, then turn logging back on. (Be aware, though, that NOLOGGING only affects direct-path operations; a conventional DELETE still generates full redo and undo regardless of the table's logging attribute.)
We are running Oracle 11g and have some partitioned tables. I am trying to write an automated process to script out the indexes on these tables. (Basically when we do bulk loads, we want to drop all the indexes beforehand and recreate them afterward.)
The problem I have is knowing how to script out the partitioned indexes. Some are created with "LOCAL STORE IN (tablespacename)" and others with just "LOCAL" (which stores each index partition in the same tablespace as the corresponding data partition). In either case, dba_indexes.tablespace_name is null, and I am having a heck of a time scripting out the two different cases correctly.
I know I can simply re-run the original DDL to recreate the indexes, but multiple parts of the organization can make changes, and there would be less risk if the loader tool could be self-contained and simply rebuild whatever was there to begin with.
I can query dba_ind_subpartitions, and if the tablespace_name values for every subpartition all match, then I can assume/infer that I should STORE IN that tablespace name. But, if the table is in a small single-partition state (e.g. newly created or just after archival), then the ones created with just LOCAL also match this test, so this is also not a perfect way of telling them apart.
I can compare the names of the index subpartition tablespaces to the data table partition tablespaces, and if they match, then I can assume/infer that those should be created with just LOCAL. But, that drags a bunch of extra tables into my query and makes it really hard to read, so I am worried about maintainability going forward. Plus, it just seems like a kludge.
It seems like there should be someplace in Oracle's data dictionaries where it is simply keeping track of this, and where I can just directly look it up instead of having to do a bunch of math and rely on assumptions. But, I have done a good deal of digging and haven't yet found it. So, any help would be much appreciated.
Although an insert alone is faster without the presence of indexes, have you benchmarked a load into tables with indexes enabled and established that it is slower than disabling (more robust than dropping!) and rebuilding them?
When you direct path insert into a table with indexes, Oracle optimises the index maintenance process by creating temporary segments to hold just the data required for the index builds. This generally allows the index maintenance to scan much smaller segments than otherwise required -- the temp segments plus the existing indexes.
Well, as jonearles describes, the dbms_metadata package is the way to generate DDL for existing objects.
But, it seems to me, this is more work than is required for what you're trying to achieve. If this is all for loading data, I recommend you simply alter the indexes to be unusable, set 'skip_unusable_indexes=true', do the data load, and then rebuild the indexes.
This should achieve what you want, without having to drop and re-create the indexes.
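A minimal sketch of that sequence, assuming a hypothetical index my_idx on the load table:

alter index my_idx unusable;
alter session set skip_unusable_indexes = true; -- the default from 10g onward
-- ... perform the data load ...
alter index my_idx rebuild nologging;

Note that DML that would modify an unusable unique index still raises an error, so this works best with non-unique indexes.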
DBMS_METADATA.GET_DDL is easier than querying the data dictionary:
--Sample table and index.
create table test1(a number);
create index test1_idx on test1(a);
--Store the DDL, drop the index, then re-create it.
declare
  ddl_before clob;
begin
  ddl_before := dbms_metadata.get_ddl('INDEX', 'TEST1_IDX');
  execute immediate 'drop index test1_idx';
  --Do some processing here.
  execute immediate ddl_before;
end;
/
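As a hedged aside, you can also tune what GET_DDL emits via session transform parameters, for example to drop the storage clauses from the generated DDL:

begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
end;
/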
In Oracle I came across two types of insert statement
1) Insert All: multiple rows can be inserted using a single SQL statement.
2) Insert: one row is inserted per statement.
Now I want to insert around 100,000 records at a time. (The table has 10 fields, which include a primary key.) I am not concerned about any return value.
I am using oracle 11g.
Can you please help me with respect to performance: which is better, "Insert" or "Insert All"?
I know this is kind of a necro, but it's pretty high in the Google search results, so I think this is a point worth making.
Insert All can give dramatic performance benefits if you are building a web application, because it is a single SQL statement that requires only one round trip to your database. In most cases, although far from all, the majority of the cost of a query is actually latency. Depending on what framework you are using, this syntax can help you avoid unnecessary round trips.
This might seem incredibly obvious but I have seen many, many production web applications in large companies that have forgotten this simple fact.
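For illustration, a multi-row INSERT ALL sketch (the employees table and its columns are hypothetical):

insert all
  into employees (eid, ename) values (1, 'Alice')
  into employees (eid, ename) values (2, 'Bob')
  into employees (eid, ename) values (3, 'Carol')
select * from dual;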
An insert statement and an insert all statement are practically the same conventional insert. Insert all, which was introduced in version 9i, simply allows you to insert into multiple tables using one statement. Another type of insert that you could use to speed up the process is the direct-path insert, using the /*+ append */ or /*+ append_values */ (Oracle 11g) hints:
insert /*+ append */ into some_table(<<columns>>)
select <<columns or literals>>
from <<somewhere>>
or (Oracle 11g)
insert /*+ append_values */ into some_table(<<columns>>)
values(<<values>>)
to tell Oracle that you want to perform a direct-path insert. But 100K rows is not that many, and a conventional insert statement will do just fine; you won't get a significant performance advantage from a direct-path insert with that amount of data. Moreover, a direct-path insert won't reuse free space; it adds new data above the HWM (high-water mark) and hence requires more space. You also won't be able to run a select or other DML statement against the table in the same session until you have issued a commit.
To use FORALL you need PL/SQL tables (see the FORALL sketch near the top of this page).
This process is quite fast.
You can also set the table to NOLOGGING, which can speed things up during direct-path inserts.
We're trying to figure out the best way to handle BULK INSERTs using Oracle (10gR2), and I'm finding that it can be a pretty complicated subject. One method that I've found involves using the Append optimizer hint:
INSERT /*+ Append*/
INTO some_table (a, b)
VALUES (1, 2)
My understanding is that this will tell Oracle to ignore indexes and just put the results at the end of the table. Then, all I should have to do is rebuild the indexes:
ALTER INDEX some_index REBUILD
This would be easier than trying to launch SQL*Loader as an external process or writing some PL/SQL. This almost seems too easy. Is there something I'm missing? Anything that could come back to bite me if I take this approach?
A few notes ...
A single row cannot be appended, therefore APPEND is only valid with INSERT INTO ... SELECT FROM syntax.
An append is the addition of data above the high-water mark of the table, in which the data is formatted into complete blocks that are then written to the table, bypassing the SQL engine.
An append in parallel mode requires that each parallel query thread allocate at least one new extent to the table, into which the new blocks are written. This can be wasteful of space.
The indexes are not ignored, but maintenance of them is deferred until the blocks have been written into the table.
See the docs for more important information: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tables.htm#ADMIN01509
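To illustrate the first note above: the APPEND hint is ignored with single-row VALUES syntax (before 11g's APPEND_VALUES), so rewrite the insert with a SELECT, for example:

insert /*+ append */ into some_table (a, b)
select 1, 2 from dual; -- direct-path works with the INSERT ... SELECT form
commit;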