Why is a select query faster than a delete query in Oracle?

I am concerned about the below queries:
select * from TABLEA
 where COLUMNA in (select....)
   and COLUMNA in (select....);

delete from TABLEA
 where COLUMNA in (select....)
   and COLUMNA in (select....);

The two have hugely different running times: the select runs much faster than the otherwise identical delete.

A SELECT is just a projection; a SELECT with a filter predicate is a selection. Both are completely different from a DELETE, which is DML.
Your question is too broad, because a lot of factors are involved:
A select statement doesn't manipulate data; DML does.
A select doesn't update index keys; DML does.
Database writers are not involved in a select, but they are with DML.
The optimizer acts differently for different operations; at the very least, compare the explain plans.
...and many more reasons.
These are two different contexts, so there is little point in comparing them.
GUI ISSUES
A select might look faster when executed in GUI-based tools like SQL Developer, which actually show you only the first batch of fetched rows, not the complete result set. The DELETE, by contrast, has to act on every row before it finishes.
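To compare the two fairly, look at the optimizer's plan for each statement. A minimal sketch, reusing the TABLEA/COLUMNA names from the question (the subquery here is a hypothetical stand-in for the elided ones):

explain plan for
delete from TABLEA
 where COLUMNA in (select COLUMNA from some_other_table);

select * from table(dbms_xplan.display);

Run the same two steps for the select version and compare the row source operations and costs.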

Please check the table structure and indexes. Maybe there are a lot of ON DELETE CASCADE constraints, or a lot of indexes to maintain: indexes speed up selects and slow down deletes.
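You can check both from the data dictionary. A minimal sketch, assuming you own the table and it is named TABLEA:

-- indexes on the table
select index_name, uniqueness
  from user_indexes
 where table_name = 'TABLEA';

-- foreign keys that reference TABLEA and cascade on delete
select constraint_name, table_name
  from user_constraints
 where r_constraint_name in (select constraint_name
                               from user_constraints
                              where table_name = 'TABLEA')
   and delete_rule = 'CASCADE';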

Related

Is there any use in creating an index on all the table columns in Oracle?

In one of our production databases, we have a 4-column table with no PK or UK constraints on it, only one NOT NULL constraint on one column. Inserts into this table are slow, and when I checked the indexes, there is one index built on all four columns.
It is a normal table, not an IOT. I really don't see the need for an all-column index, but I wonder why the developers created it.
I'd appreciate your thoughts.
It might be useful: if you (mainly) query all columns, Oracle doesn't have to access the table at all, because it can get all the data from the index. Inserts take longer, though, because the DBMS has to maintain a larger index on every modification.
One case where it could be useful: say you are trying to check the existence of records in this table, and to do that you have to join on all four columns. If you have written a correlated query like the one below, the all-column index can answer it without visiting the table:
SELECT <something>
FROM table_1 t1
WHERE EXISTS
(SELECT 1 FROM table_t2 t2
  WHERE t1.c1 = t2.c1 AND t1.c2 = t2.c2
    AND t1.c3 = t2.c3 AND t1.c4 = t2.c4)
Apart from a case like that, it looks like a mistake on the developers' part to me.
Indexes are good for query optimization, but they slow down updates and inserts, because each index needs to be maintained on every modification.
If this table is mostly used for querying and inserts happen only in specific periods, such as a batch at the beginning or end of the day, you can drop the indexes before loading and recreate them afterwards, as sketched below.
In addition, all the queries against this table should be analysed to see which indexes are useful and which are not.
In any case, ask the developers before removing any indexes.
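A minimal sketch of the drop-and-recreate pattern (the index and column names here are hypothetical):

drop index big_table_all_cols_idx;

-- ... run the batch load ...

create index big_table_all_cols_idx
    on big_table (c1, c2, c3, c4);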

Oracle: Does having a join or a simple from/where clause have any effect on performance?

My manager just told me that writing a query with joins versus a where clause makes no difference to performance in Oracle, even when you have millions of records in each table. I am just not satisfied with this and want to confirm it.
Which of the following queries performs better, in Oracle and also in PostgreSQL?
1- select a.name,b.salary,c.address
from a,b,c
where a.id=b.id and a.id=c.id;
2- select a.name,b.salary,c.address
from a
JOIN b on a.id=b.id
JOIN c on a.id=c.id;
I have tried EXPLAIN in PostgreSQL on a small data set and the query times were the same (maybe because I have just a few rows), and right now I have no access to Oracle and the actual database to analyse the plans in a real environment.
Using JOINs makes the code easier to read, since it's self-explanatory.
In speed there is no difference (I have just tested it), and the execution plan is the same.
If the query optimizer is doing its job right, there should be no difference between those queries.
They are just two ways to specify the same desired result.
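You can confirm this yourself by comparing plans. A minimal sketch, using the tables from the question:

-- PostgreSQL
EXPLAIN ANALYZE
SELECT a.name, b.salary, c.address
FROM a, b, c
WHERE a.id = b.id AND a.id = c.id;

-- Oracle
EXPLAIN PLAN FOR
SELECT a.name, b.salary, c.address
FROM a
JOIN b ON a.id = b.id
JOIN c ON a.id = c.id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

If both forms produce the same plan, they will perform identically.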

Oracle Insert All vs Insert

In Oracle I came across two types of insert statement:
1) Insert All: multiple rows can be inserted with a single SQL statement.
2) Insert: one row is inserted per statement.
Now I want to insert around 100,000 records at a time. (The table has 10 columns, including a primary key.) I am not concerned about any return value.
I am using Oracle 11g.
Can you please tell me which performs better, "Insert" or "Insert All"?
I know this is kind of a necro, but it's pretty high in the Google search results, so I think there is a point worth making.
Insert All can give dramatic performance benefits if you are building a web application, because it is a single SQL statement that requires only one round trip to your database. In most cases (although far from all), the majority of the cost of a query is actually latency. Depending on what framework you are using, this syntax can help you avoid unnecessary round trips.
This might seem incredibly obvious but I have seen many, many production web applications in large companies that have forgotten this simple fact.
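For reference, a minimal multi-row INSERT ALL sketch (the table and column names are hypothetical); each INTO clause is a separate insert, but the whole statement is one round trip:

insert all
  into t (id, name) values (1, 'one')
  into t (id, name) values (2, 'two')
  into t (id, name) values (3, 'three')
select 1 from dual;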
An insert statement and an insert all statement are practically the same conventional insert. insert all, introduced in version 9i, simply allows you to insert into multiple tables with one statement. Another type of insert that you can use to speed up the process is the direct-path insert, using the /*+ append */ or (in Oracle 11g) /*+ append_values */ hint:
insert /*+ append */ into some_table(<<columns>>)
select <<columns or literals>>
from <<somewhere>>
or (Oracle 11g)
insert /*+ append_values */ into some_table(<<columns>>)
values(<<values>>)
to tell Oracle that you want to perform a direct-path insert. But 100K rows is not that many, and a conventional insert statement will do just fine; you won't get a significant performance advantage from a direct-path insert with that amount of data. Moreover, a direct-path insert won't reuse free space: it adds new data above the HWM (high water mark) and hence requires more space. Also, you won't be able to run a select statement or other DML against the table until you have issued a commit.
To use FORALL you need PL/SQL collections.
This process is quite fast.
You can also put the table in NOLOGGING mode, which can speed up the load (mainly for direct-path inserts).
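A minimal sketch of the NOLOGGING variant, combined with the append hint shown above (note that a NOLOGGING load is not recoverable from the redo log):

alter table some_table nologging;

insert /*+ append */ into some_table(<<columns>>)
select <<columns or literals>>
from <<somewhere>>;

commit;

alter table some_table logging;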

Will inserting half a million entries with the same date value be slowed by a non-unique index on that date column?

I have a cursor that selects all rows in a table, a little over 500,000 rows. Read a row from the cursor, INSERT it into another table (which has two indexes, neither unique: one numeric, one DATE), COMMIT, read the next row from the cursor, INSERT... until the cursor is empty.
All my DATE column's values are the same, from a timestamp initialized at the start of the script.
This thing's been running for 24 hours and has only posted 464K rows, roughly 19K rows an hour.
Oracle 11g, 10 processors(!?)
Something has to be wrong. I think it's that DATE index trying to process all these entries with exactly the same value for that column.
Why don't you just do the following?
insert into target (columns....)
select columns and computed values
from source;
commit;
This slow-by-slow processing is doing far more damage to performance than an index that may not even make sense.
Indexes slow down inserts but speed up queries. This is normal.
If it is a problem you can remove the index, insert the rows, then add the index again. This can be faster if you are doing many inserts at once.
The way you are copying the data using cursors seems to be inefficient. You could try a set-based approach instead:
INSERT INTO table1 (x, y, z)
SELECT x, y, z FROM table2 WHERE ...
Committing after every inserted row doesn't make much sense. If you're worried about exceeding undo capacity, for example, you can keep a count of the inserts and issue a commit after every thousand rows, as sketched below.
Updating the indexes will have some impact, but that's unavoidable if you can't drop (or disable) them while the inserts are performed; that's just how it goes. I'd expect the commits to have a bigger impact, though I suspect that's a topic with varied opinions.
This assumes you have a good reason for inserting from a cursor rather than using a direct insert into ... select from model.
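A minimal sketch of the counted-commit approach (the table names are hypothetical, and it assumes the source rows match the destination table's shape):

declare
  l_count pls_integer := 0;
begin
  for r in (select * from source_table) loop
    insert into dest_table values r;
    l_count := l_count + 1;
    if mod(l_count, 1000) = 0 then
      commit;  -- commit every 1,000 rows instead of every row
    end if;
  end loop;
  commit;      -- pick up the remainder
end;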
In general, it's often a good idea to drop the indexes before doing a massive insert and then add them back afterwards, so that the DB doesn't have to update the indexes with each insert. It's been a long while since I've used Oracle, but have you tried putting more than one insert statement in a transaction? That should also speed it up.
For operations like this you should look at Oracle bulk operations, using FORALL and BULK COLLECT. They considerably reduce the number of context switches between the PL/SQL and SQL engines.
create or replace procedure fast_proc is
  type t_rows is table of source_table%rowtype;
  l_rows t_rows;
begin
  -- fetch the entire source table into a PL/SQL collection...
  select * bulk collect into l_rows
    from source_table;
  -- ...then insert it with a single bulk bind
  forall i in l_rows.first .. l_rows.last
    insert into dest_table values l_rows(i);
end;
Agreed with the comment that what is killing your time is the slow-by-slow processing. Copying 500,000 rows should be a matter of minutes.
The single INSERT ... SELECT FROM ... approach would be the best one, provided you have big enough rollback (undo) segments. The database may even automatically apply parallel techniques to a plain SQL statement that it will not apply to PL/SQL.
In addition, you could look at using the /*+ APPEND */ hint: read up on it and see if it applies to the situation with your target table.
To use all 10 cores you will need to either use plain parallel SQL or run 10 copies of your PL/SQL block, splitting the source table across the 10 copies.
In Oracle 10 this is a manual task (roll your own parallelism), but Oracle 11.2 introduces DBMS_PARALLEL_EXECUTE.
Failing that, bulking up your fetch/insert using BULK COLLECT and a bulk insert would be the next best option: process in chunks of 1,000 or so rows (or larger), as sketched below. Again, take a look at whether DBMS_PARALLEL_EXECUTE may help you, or whether you could submit the job in chunks via DBMS_JOB.
(Caveat: I don't have access to anything later than Oracle 10.)
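A minimal sketch of the chunked BULK COLLECT / FORALL approach (the table names are hypothetical, and it assumes the two tables have the same row shape):

declare
  cursor c is select * from source_table;
  type t_rows is table of source_table%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 1000;  -- fetch one chunk
    exit when l_rows.count = 0;
    forall i in 1 .. l_rows.count                 -- one bulk bind per chunk
      insert into dest_table values l_rows(i);
    commit;                                       -- commit per chunk, not per row
  end loop;
  close c;
end;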

How do you create an index on a subquery factored temporary table?

I've got a query which has a WITH statement for a subquery at the top, and I'm then running a couple of CONNECT BYs on the subquery. The subquery can contain tens of thousands of rows, and there's no limit to the depth of the CONNECT BY hierarchy. Currently, this query takes upwards of 30 seconds; is it possible to specify indexes to put on the temporary table created for the factored subquery to speed up the CONNECT BYs, or speed it up another way?
There is no way to do it directly in the query: Oracle does not support eager spool.
You can, however, temporarily store your result set in an indexed global temporary table and issue the CONNECT BY query against that, as sketched below.
That said, for the unsargable equality conditions in the query, the CONNECT BY usually builds a hash table, which in most cases is even better than an index.
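A minimal sketch of the indexed temporary-table approach (the table, column, and index names are hypothetical):

create global temporary table tmp_hier (
  id        number,
  parent_id number,
  payload   varchar2(100)
) on commit preserve rows;

create index tmp_hier_parent_idx on tmp_hier (parent_id);

insert into tmp_hier (id, parent_id, payload)
select id, parent_id, payload
from <<your WITH subquery>>;

select id, parent_id, payload
  from tmp_hier
 start with parent_id is null
connect by prior id = parent_id;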
Could you please post your query here?
You might be able to use the MATERIALIZE hint with subquery factoring so that the subquery isn't rerun iteratively. While it's undocumented, it seems to reliably flush the results of a WITH clause into a temporary table.
Jonathan Lewis' blog has several examples of how it can be used. There is some risk, however, due to the hint's undocumented nature.
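A minimal sketch of the hint in place (the names are hypothetical):

with subq as (
  select /*+ materialize */ id, parent_id, payload
    from some_big_table
   where <<your filters>>
)
select id, parent_id, payload
  from subq
 start with parent_id is null
connect by prior id = parent_id;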
