Continuous values in Oracle primary key

Is there a way in Oracle to create an auto-increment column such that, when a row is deleted, the deleted value is reused for the next row inserted?

The behavior you are describing (having "holes" in the sequence after deletes) will always happen with a SEQUENCE. For most applications that is fine, because most of the time the id of the table is artificial and meaningless; its only use is to connect tables through relations, and for that, holes are unimportant.
In your case, if you want to create a continuous sequence and fill gaps as they appear, you need to create a trigger on insert that updates your ID with the value of the first "hole" found in your sequence, using a SELECT like this:
SELECT MIN(tb.id) + 1 "first_seq_hole"
  FROM yourTable tb
 WHERE NOT EXISTS
       (SELECT NULL FROM yourTable tb2 WHERE tb2.id = tb.id + 1)
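A minimal sketch of such a trigger, assuming a table called yourTable with a numeric id column (all names here are illustrative, and the gap search is not concurrency-safe without extra serialization):

CREATE OR REPLACE TRIGGER trg_yourtable_fill_gap
BEFORE INSERT ON yourTable
FOR EACH ROW
DECLARE
  v_next yourTable.id%TYPE;
BEGIN
  -- First free value: the lowest id whose successor is missing, or 1 for an empty table.
  SELECT NVL(MIN(tb.id) + 1, 1)
    INTO v_next
    FROM yourTable tb
   WHERE NOT EXISTS (SELECT NULL FROM yourTable tb2 WHERE tb2.id = tb.id + 1);
  :NEW.id := v_next;
END;
/

Note that querying the same table from a row-level trigger is generally only safe for single-row INSERT ... VALUES statements; multi-row inserts can raise ORA-04091 (mutating table).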
I am not sure exactly what your requirement is here, but at some point you might still have holes in your sequence (say you delete 10 random rows and never insert anything to fill them). That is unavoidable unless you work out a way to change existing IDs to fill gaps as soon as rows are deleted, which would be complicated and risky if you have child tables using that ID.

Related

Delete records in an efficient way

I have two tables, say STOCK and ITEM. We have a query to delete some records from the ITEM table:
delete from ITEM where item_id not in(select itemId from STOCK)
Now I have more than 1,500,000 records to delete, and the query takes a long time to complete.
When I searched, I found some more efficient ways to do this.
One way:
CREATE TABLE ITEM_TEMP AS
SELECT * FROM ITEM WHERE item_id in(select itemId from STOCK) ;
TRUNCATE TABLE ITEM;
INSERT /*+ APPEND */ INTO ITEM SELECT * FROM ITEM_TEMP;
DROP TABLE ITEM_TEMP;
Alternatively, instead of truncating, just drop ITEM and then rename ITEM_TEMP to ITEM. But in that case I have to re-create all the indexes.
Can anyone suggest which of the above is more efficient? I could not test this in production.
I think the correct approach depends on your environment.
If there are privileges on the table that must not be affected, or at least would have to be restored if you drop the table, then the truncate plus INSERT /*+ APPEND */ may simply be more reliable. The same goes for triggers, foreign keys, or any other objects that would be dropped automatically along with the base table (and foreign keys complicate the truncate as well, of course).
I would usually go for the truncate-and-insert method on that basis. Don't worry about the presence of indexes on the table -- a direct-path insert is very efficient at building them.
However, if you have a simple table without dependent objects then there's nothing wrong with the drop-and-rename approach.
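For that simple case, the drop-and-rename variant would look roughly like this (a sketch only; it assumes ITEM_TEMP has already been built with the CTAS above, and the index shown is just an illustration):

DROP TABLE ITEM;
ALTER TABLE ITEM_TEMP RENAME TO ITEM;
-- Re-create indexes, constraints and grants that were lost with the old table, e.g.:
CREATE UNIQUE INDEX item_pk_idx ON ITEM (item_id);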
I also would not rule out just running multiple deletes of a limited number of rows, especially if this is in a production environment.
The best approach in terms of used space (and the high-water mark) and performance is to drop the table and then rename ITEM_TEMP. But, as you mentioned, after that you need to re-create the indexes (and also grants, triggers and constraints), and all dependent objects will be invalidated.
Sometimes I delete in batches instead:
begin
  loop
    delete from ITEM
     where item_id not in (select itemId from STOCK)
       and rownum < 10000;
    exit when SQL%ROWCOUNT = 0;
    commit;
  end loop;
end;
Since you have a very high number of rows, it is better to use a partitioned table, perhaps list-partitioned on itemId; then you can simply drop a partition. Your application could run faster as well. This needs a design change, but it will pay off in the long run.

Optimizing a delete... where query with rownum

I'm working with an application that has a large amount of outdated data clogging up a table in my database. Ideally, I'd want to delete all entries in the table whose reference date is too old:
delete outdatedTable where referenceDate < :deletionCutoffDate
If this statement were to be run, it would take ages to complete, so I'd rather break it up into chunks with the following:
delete outdatedTable where referenceDate < :deletionCutoffDate and rownum <= 10000
In testing, this works surprisingly slowly. The following query, however, runs dramatically faster:
delete outdatedTable where rownum <= 10000
I've been reading through multiple blogs and similar questions on StackOverflow, but I haven't yet found a straightforward description of how (or whether) using rownum affects the Oracle optimizer when there are other WHERE clauses in the query. In my case, it seems as if Oracle checks
referenceDate < :deletionCutoffDate
on every single row, executes a massive Select on all matching rows, and only then filters out the top 10000 rows to return. Is this in fact the case? If so, is there any clever way to make Oracle stop checking the Where clause as soon as it's found enough matching rows?
How about a different approach with less DML on the table? As a permanent solution for the future, you could go for table partitioning:
Create a new table with required partition(s).
Move ONLY the required rows from your existing table to the new partitioned table.
Once the new table is populated, add the required constraints and indexes.
Drop the old table.
In future, you would just need to DROP the old partitions.
CTAS (create table as select) is another way; however, if you want the new table to be partitioned, you have to go with the exchange-partition approach.
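A rough sketch of that exchange-partition idea, using the table and column from the question (the partitioned-table name, partition name and helper table are assumptions for illustration):

CREATE TABLE outdatedTable_part
PARTITION BY RANGE (referenceDate)
( PARTITION p_current VALUES LESS THAN (MAXVALUE) )
AS SELECT * FROM outdatedTable WHERE 1 = 0;

-- Keep only the rows that are still current.
CREATE TABLE rows_to_keep AS
  SELECT * FROM outdatedTable WHERE referenceDate >= :deletionCutoffDate;

-- Swap the staging table into the partition: a metadata operation, no row movement.
ALTER TABLE outdatedTable_part
  EXCHANGE PARTITION p_current WITH TABLE rows_to_keep;

After that, add the required indexes and constraints, drop the old table, and rename outdatedTable_part to outdatedTable.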
First of all, you should read about SQL statement execution plans and learn how to obtain and interpret them; it will help you answer questions like this.
Generally, a single delete is more efficient than several chunked ones. Its main disadvantage is heavy use of the undo tablespace.
If you want to delete most of the rows in a table, a much faster way is usually this trick:
create table new_table as select * from old_table where date_col >= :date_limit;
drop table old_table;
rename new_table to old_table;
-- recreate indexes, grants and other dependent objects
If you want to do this more than once, partitioning is a much better way. If the table is partitioned by date, you can query the current data quickly and drop a partition with outdated data in milliseconds.
Finally, partitioning is a way to avoid 'deleting outdated records' altogether. Sometimes we need old data, and it is a pity to delete it with our own hands. With partitioning you can archive outdated partitions outside of the database, and connect them again when you need to access the old data.
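For example, a table range-partitioned by date lets old data be aged out as a near-instant dictionary operation (the table, columns and year boundaries below are purely illustrative):

CREATE TABLE outdated_by_year (
  id            NUMBER,
  referenceDate DATE
)
PARTITION BY RANGE (referenceDate) (
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION p_max VALUES LESS THAN (MAXVALUE)
);

-- Getting rid of an old year touches only the data dictionary:
ALTER TABLE outdated_by_year DROP PARTITION p2011;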
This is an old request, but I'd like to show another approach (also using partitions).
Depending on what you consider old, you could create corresponding partitions (optimally exactly two; one current, one old; but you could just as well make more), e.g.:
PARTITION BY LIST ( mod(extract(year from referenceDate), 2) )
(
  PARTITION year_odd  VALUES (1),
  PARTITION year_even VALUES (0)
);
This could as well be months (Jan, Feb, ... Dec), decades (XX0X, XX1X, ... XX9X), half years (first_half, second_half), etc. Anything circular.
Then whenever you want to get rid of old data, truncate:
ALTER TABLE mytable TRUNCATE PARTITION year_even;
delete from your_table
 where PK not in
       (select PK from your_table where rownum <= ...) -- these are the records you want to keep

Oracle sequence generator within interval

I'm using Oracle 11gR2, and for the product table, when a new product is inserted I need to assign an auto-increment id from 1 to 65535. Products can later be deleted.
When I reach 65535, I need to scan the table to find a free hole to use as the new ID.
Because of this requirement an Oracle sequence cannot be used, so I am using a function (I also tried an insert trigger) to generate a free id...
The problem is that I cannot handle batch inserts, for example, and I have concurrency problems...
How could I solve this? By using some sort of external ID generator?
Sounds like an arbitrary design. Is there a good reason for having a 16-bit max product id or for reusing IDs? Both constraints are bad practice.
I doubt any external generator is going to provide anything that Oracle doesn't already provide. I recommend using sequences for batch inserts. The problem you have is how to recycle the IDs: Oracle sequences don't track the primary key, so you need a solution that finds recycled keys first and then falls back to the sequence.
Product ID Recycling
Batch Inserts - Use sequence for keys the first time you load them. For this small range, set NOCACHE on the sequence to eliminate gaps.
Deletes - When a product is deleted, instead of actually deleting the row, set a DELETED = 'Y' flag on the row.
Inserts - Update the first available record with the DELETED flag set, i.e. select the minimum ID from the product table where DELETED = 'Y', then update that record with the new product info (keeping the same ID) and set DELETED = 'N'.
This ensures you always recycle before you consume new sequence IDs.
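A possible setup for that scheme (a sketch only; the product table, its product_id and name columns, and the sequence name are assumptions):

-- Bounded sequence for first-time keys; NOCACHE keeps gaps to a minimum.
CREATE SEQUENCE product_seq START WITH 1 MAXVALUE 65535 NOCACHE NOCYCLE;

-- Soft-delete flag instead of physically deleting rows.
ALTER TABLE product ADD (deleted CHAR(1) DEFAULT 'N' NOT NULL);

-- "Deleting" a product then becomes:
UPDATE product SET deleted = 'Y' WHERE product_id = :id;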
If you want to implement the logic in the database, you can create a view (VIEW$PRODUCTS) where DELETED = 'N' and an INSTEAD OF INSERT trigger to do the insert.
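A rough sketch of that view-plus-trigger idea, continuing with the assumed product table and product_seq sequence from above (untested, and concurrent inserts would still need serialization, e.g. a SELECT ... FOR UPDATE on the recycled row):

CREATE OR REPLACE VIEW view$products AS
  SELECT product_id, name, deleted FROM product WHERE deleted = 'N';

CREATE OR REPLACE TRIGGER trg_products_recycle
INSTEAD OF INSERT ON view$products
FOR EACH ROW
DECLARE
  v_id product.product_id%TYPE;
BEGIN
  -- Recycle the lowest deleted id if one exists.
  SELECT MIN(product_id) INTO v_id FROM product WHERE deleted = 'Y';
  IF v_id IS NOT NULL THEN
    UPDATE product
       SET name = :NEW.name, deleted = 'N'
     WHERE product_id = v_id;
  ELSE
    -- No hole to fill: fall back to the sequence.
    INSERT INTO product (product_id, name, deleted)
    VALUES (product_seq.NEXTVAL, :NEW.name, 'N');
  END IF;
END;
/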
In any scenario, when you run out of sequence values (or the sequence wraps), you are out of luck for batch inserts. I'd reconsider that part of the design if I were you.

Deleting duplicate rows in Oracle

Shouldn't the following query work for deleting duplicate rows in Oracle?
delete from sessions o
 where 1 <= (select count(*)
               from sessions i
              where i.id != o.id
                and o.data = i.data);
It seems to delete all the duplicate rows! (I wish to keep one, though.)
Your statement doesn't do what you want because, for every row whose DATA also appears on some other row with a different ID, the inner COUNT(*) is at least 1, so that row is deleted.
In other words, the statement deletes every copy of each duplicated DATA value. There is nothing in it to single out one row per group to keep, as there is in the solution Ollie has linked to, for example.
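For reference, a common way to keep exactly one row per DATA value (sketched here against the sessions table from the question) is to delete by ROWID, keeping the lowest ROWID in each group:

-- Keep the row with the smallest ROWID for each DATA value, delete the rest.
DELETE FROM sessions o
 WHERE o.rowid NOT IN (SELECT MIN(i.rowid)
                         FROM sessions i
                        GROUP BY i.data);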

T-SQL Performance dilemma when looping through a Big table (details inside)

Let's say I have a Big and a Bigger table.
I need to cycle through the Big table, that is indexed but not sequential (since it is a filter of a sequentially indexed Bigger table).
For this example, let's say I needed to cycle through about 20000 rows.
Should I do 20000 of these
set @currentID = (select min(ID) from myData where ID > @currentID)
or
create a (big) temporary, sequentially indexed table (a copy of the Big table) and do 20000 of
@Row = @Row + 1
?
I imagine that doing 20000 filters of the Bigger table just to fetch the next ID is heavy, but so must be filling a big (Big sized) temporary table just to add a dummy identity column.
Is the solution somewhere else?
For example, if I could loop through the results of the select statement (the filter of the Bigger table that originates "table" (actually a resultset) Big) without needing to create temporary tables, it would be ideal, but I seem to be unable to add something like an IDENTITY(1,1) dummy column to the results.
Thanks!
You may want to consider finding out how to do your work set-based instead of RBAR (row by agonizing row). That said, for very big tables you may want to avoid a temp table so that you are sure you are working with live data if you suspect the proc may run for a while in production; if the proc fails, you'll be able to pick up where you left off. If you use a temp table and the proc crashes, you could lose track of the work that hasn't been completed yet.
You need to provide more information on what your end result is. It is only very rarely necessary to do row-by-row processing, and it is almost always the worst possible choice from a performance perspective. This article will get you started on doing many tasks in a set-based manner:
http://wiki.lessthandot.com/index.php/Cursors_and_How_to_Avoid_Them
If you just want a temp table with an identity, here are two methods:
-- Method 1: create the temp table, then populate it.
create table #temp (test varchar(10), id int identity)
insert #temp (test)
select test from mytable

-- Method 2: create and populate in one step (use a different name if #temp already exists).
select test, identity(int) as id into #temp from mytable
I think a join will serve your purposes better.
SELECT BIG.*, BIGGER.* -- Add additional calcs here involving BIG and BIGGER.
FROM TableBig BIG (NOLOCK)
JOIN TableBigger BIGGER (NOLOCK)
  ON BIG.ID = BIGGER.ID
This will limit the set you are working with to the rows that exist in both tables. But again, it comes down to the specifics of your solution.
Remember, you can do bulk inserts and bulk updates in this manner too.
