Slow query execution on an empty table (after deleting a large number of rows) - Oracle

I have a table in an Oracle database with 15 fields.
This table had 3,500,000 rows in it. I deleted them all:
delete
from table
After that, whenever I execute a select statement
I get a very slow response (7 seconds) even though the table is empty.
I get a normal response only when I search
on an indexed field.
Why?

As Gritem says, you need to understand high water marks, etc.
If you do not want to truncate the table now (because fresh data has been inserted), use alter table xyz shrink space, documented here for 10g.
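For reference, a minimal sketch of the shrink approach (xyz is a placeholder name, and this assumes the table lives in an ASSM tablespace, which shrink space requires):
alter table xyz enable row movement;  -- shrink space requires row movement to be enabled
alter table xyz shrink space;         -- releases free space and lowers the high water mark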

Tom Kyte has a good explanation of this issue:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:492636200346818072
It should help you understand deletes, truncates, and high water marks.

In SQL, when you want to completely clear out a table, you should use truncate instead of delete. Say your table has 3.5 million rows in it and there is a unique identifier column (e.g. a bigint) that auto-increments for each row. Truncating the table will completely clear out the table and reset that identity counter to 0; delete will not reset the counter, and the next record inserted will continue at 3,500,001. Truncate is also much faster than delete. Read the articles below to understand the differences.
Read this article, which explains the difference between truncate and delete. There are times to use each one. Here is another article from an Oracle point of view.
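To make the contrast concrete, a minimal sketch of the two statements (big_table is a placeholder name):
truncate table big_table;  -- DDL: deallocates the space and resets the high water mark, cannot be rolled back
delete from big_table;     -- DML: removes rows one at a time, generates undo/redo, keeps the high water mark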

Related

Deletion is slow in oracle DB

We have a table that doesn't have much data. The table has 3 partitions, and we are deleting data from one partition only.
delete from AB partition (A) where id = value;
Here id has an index as well, but the delete is still slow.
The datatype of id is VARCHAR2, and the value is a number.
Please help me understand why the delete statement is slow.
I don't think the index has much use in this case. Oracle has to evaluate every single row in the partition to see whether it matches id=value. Typically this will be a full table scan and no index will be used. How long it takes depends entirely on the number of rows in the partition. But maybe I did not understand the question properly; I presumed "value" is a column in the same table, like id.
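One detail from the question worth calling out: when a VARCHAR2 column is compared to a numeric value, Oracle implicitly wraps the column in TO_NUMBER(), which prevents an ordinary index on that column from being used. A hedged sketch, reusing the names from the question (123 is a made-up value):
delete from AB partition (A) where id = 123;    -- becomes TO_NUMBER(id) = 123, so the index on id is ignored
delete from AB partition (A) where id = '123';  -- datatypes match, so the index on id can be considered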

Gather Stats Required After Truncate and Insert?

I needed to truncate and reload a table.
I learned that a truncate should be followed by gathering stats on the table, so that the database has accurate statistics; the previous stats are not cleared by the truncate statement itself.
After doing these two operations (truncate and stats gathering on the empty table), I ran the insert... but I don't see new statistics for my table in the all_tab_statistics view. sample_size is still 0.
Why is that? Shouldn't Oracle have gathered the stats automatically after the insert?
Do I need to rerun the stats, or is it fine as it is for the performance around this table (note that it is going to be truncated and reloaded each time)?
Consider the following approach. It has the advantage of the table always being present.
Create an empty new table like the old one.
Load the data into the new table. This is the slowest step.
Do whatever cleanup you might need, such as refreshing the statistics.
RENAME tables to swap the new table into place. This step is fast enough that you won't notice it.
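A minimal sketch of that swap, assuming a table named my_table (all names are placeholders, the select ... in the load step stands for the actual source query, and indexes, grants, and constraints would still have to be carried over to the new table):
create table my_table_new as
  select * from my_table where 1 = 0;                     -- empty copy with the same columns
insert /*+ append */ into my_table_new select ...;        -- the slow load step
exec dbms_stats.gather_table_stats(user, 'MY_TABLE_NEW')  -- refresh the statistics
rename my_table to my_table_old;                          -- the fast swap
rename my_table_new to my_table;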
I know it's a long time since I posted my question above, but recently we faced a similar situation again, and this time the steps below gave much better performance on a table with 800 million rows.
Take a backup of the original table.
Truncate the original table.
Gather stats on the truncated table, so that the statistics show 0 in the DB. Use CASCADE=>TRUE in the command to include the indexes in the process as well.
Drop the indexes on the truncated table and insert the required data from the backup table.
Recreate the indexes and gather stats again (of course, with CASCADE=>TRUE; although recreating the indexes should ideally have computed the appropriate index stats already).
Drop the backup table if not needed.
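For the stats-gathering steps above, a sketch of the DBMS_STATS call with CASCADE (MY_TABLE is a placeholder name):
begin
  dbms_stats.gather_table_stats(
    ownname => user,
    tabname => 'MY_TABLE',
    cascade => true);  -- gathers index statistics along with the table statistics
end;
/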

Is there any use to create index on all the table columns in oracle?

In one of our production databases, we have a four-column table with no PK or UK constraints on it, only a NOT NULL constraint on one column. Inserts are slow on this table, and when I checked the indexes, there is one index built on all four columns.
It is a normal table, not an IOT. I really don't see the need for an all-column index, but I wonder why the developers created it.
I'd appreciate your thoughts.
It might be useful: e.g. if you (mainly) query all columns, Oracle doesn't have to access the table at all, but can get all the data from the index. Inserts take longer, though, because a larger index has to be maintained by the DBMS on every modification.
One case where it could be useful:
Say, for example, you are checking for the existence of records in this table, and to do that you have to join on all four columns. In such a case, if you have written a correlated query like the one below, the all-column index can answer the subquery on its own:
SELECT <something>
FROM table_1 t1
WHERE EXISTS
  (SELECT 1
   FROM table_2 t2
   WHERE t1.c1 = t2.c1
     AND t1.c2 = t2.c2
     AND t1.c3 = t2.c3
     AND t1.c4 = t2.c4)
Apart from the above case, it looks like an error on the developer's part to me.
Indexes help query optimization but slow down updates/inserts, because the indexes need to be updated on every modification.
If this table is used mainly for querying, and inserts happen only during specific periods, such as a batch at the beginning or the end of the day, then you can drop the indexes before loading the table and recreate them afterwards (see the sketch below).
In addition, all the queries against these tables should be analysed to see which indexes are useful and which are not.
Anyway, you need to ask the developers before removing these indexes.
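If you do go the batch-window route described above, a minimal sketch might look like this (the index and column names are made up for illustration):
drop index my_table_all_cols_idx;    -- drop before the batch load so inserts run faster
-- ... run the batch inserts here ...
create index my_table_all_cols_idx
  on my_table (c1, c2, c3, c4);      -- rebuild once the load is finished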

Sybase ASE remote row insert locking

I'm working on an application which accesses a Sybase ASE 15.0.2 server, where the current code accesses a remote database
(CIS) to insert a row using a proxy table definition (the destination table is a DOL - DRL table; the PK
is defined as identity, and is always growing). The current code performs a select to check whether the row
already exists, to avoid inserting duplicate data.
Since the remote table also has a PK definition, I understand that the PK verification will
be done again prior to committing the row.
I'm planning to remove the select check, since it is effectively done again by the PK verification,
but I'm concerned that, when receiving a file with many duplicates, the table may suffer
some unnecessary contention while the data is being committed.
It's not clear to me whether Sybase ASE tries to hold the last row and writes the data prior to checking for the
duplicate. Also, if the table is very big, I'm concerned about the time it will spend searching the
whole index for duplicates.
I've found some documentation for SQL Anywhere, but not for ASE, at the following link:
http://dcx.sybase.com/1200/en/dbusage/insert-how-transact.html
The best I could find is the following explanation:
https://groups.google.com/forum/?fromgroups#!topic/comp.databases.sybase/tHnOqptD7X8
But it doesn't explain in detail how the row is locked (and whether there is any kind of
optimization to write it ahead of, or at the same time as, the PK check),
and also whether it will waste a full PK lookup when I'm inserting a row whose PK is
definitely greater than that of the last row committed.
Thanks
Alex
Unlike SQL Anywhere, ASE has no wait_for_commit option. The primary key constraint is checked during the insert, not at commit time. For the problem as I understand it from your post (a mass insert from a file that may contain duplicates), the approach is to load the file into a temp table, check for duplicates, remove the duplicates, and then insert only the unique rows. Mass inserts are a lot faster, even though the primary key is still checked for violations, and there is no cost from rolling back. The insert statement is always all or nothing: even if one row is a duplicate, the entire insert statement will fail. Checking before inserting is the more error-free approach, as opposed to relying on the constraint for verification, because the constrained insert is going to fail and the rollback is, again, going to be costly.
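A rough sketch of that temp-table approach in ASE T-SQL (my_load_table, target_table, and pk_col are all hypothetical names for illustration):
-- stage the incoming file in a temp table first
select * into #incoming from my_load_table
-- insert only the rows whose key is not already in the target
insert into target_table
select i.*
from #incoming i
where not exists
  (select 1 from target_table t where t.pk_col = i.pk_col)
Duplicates within the file itself would still have to be removed from #incoming (for example with a group by) before the final insert.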
Thanks, Mike.
The link does have a very quick explanation of the insert from the CIS perspective. It's something to keep an eye on, given that CIS may become a significant time consumer
if it performs data and syntax checking that is done again when CIS forwards the insert statement to the target server. I was afraid that CIS could have some influence beyond the network traffic/time on the locking/PK checking.
Raju
I do agree that avoiding the PK duplication by checking whether the row already exists with a select, done in a batch, is the right approach, but I'm currently looking for a stopgap solution, and that may be to perform the insert command in batches of about 50 rows and leave the
duplicate key check to the PK.
Hopefully the PK check will be done over a join of the 50 newly inserted rows, and thus
avoid traversing the index for each single row...
I'll try to test this and comment back.
Alex

How is the TRUNCATE command in Oracle able to retrieve the structure of a table after dropping it?

The SQL command TRUNCATE in Oracle is faster than DELETE FROM table; in that the TRUNCATE command first drops the specified table in its entirety and then creates a new table with the same structure (clarification may be required in case I am wrong). Since TRUNCATE is part of DDL, it implicitly issues a COMMIT before it executes and after its execution completes. If that is the case, then the table dropped by the TRUNCATE command is lost permanently, together with its entire structure in the data dictionary. In such a scenario, how is the TRUNCATE command able to first drop the table and then recreate it with the same structure?
(Note that I work for Sybase in SQL Anywhere engineering and my answer comes from my knowledge of how truncate is implemented there, but I imagine it's similar in Oracle as well.)
I don't believe the table is actually dropped and re-created; the contents are simply thrown away. This is much faster than delete from <table> because no triggers need to be executed, and rather than deleting a row at a time (both from the table and the indexes), the server can simply throw away all pages that contain rows for that table and any indexes.
I thought a truncate (amongst other things) simply reset the high water mark.
See: http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_10007.htm#SQLRF01707
However, in
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2816964500346433991
it is clear that the data segment changes after a truncate.
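You can observe that yourself: a truncate keeps the table's OBJECT_ID but assigns a new DATA_OBJECT_ID, i.e. a new data segment. A quick sketch (T is a placeholder table name):
select object_id, data_object_id from user_objects where object_name = 'T';  -- note both values
truncate table t;
select object_id, data_object_id from user_objects where object_name = 'T';  -- object_id is unchanged, data_object_id is new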
