How to add an auto-increment primary key to a table already loaded with data in Oracle?

I've tried this several times; I just want to know if there is a workaround.

You can take advantage of an identity column in this case, as follows:
alter table test
add col1 number generated always as identity (start with 1 increment by 1);
Db<>fiddle demo
It automatically assigns sequence numbers to the already existing rows and continues the sequence for new inserts as well.
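If you also want that column to be the primary key, as the question title asks, a minimal follow-up would be (the constraint name is an assumption):
alter table test
add constraint test_pk primary key (col1);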

Related

How to find the position of a row in Oracle by its primary key, which is a varchar GUID generated by the application

My use case: I have to find the position of a given value in the primary key column, so that I can write a query like select * from my_table where ID <= '00000536-37ee-471c-a8e0-3d233b8102f5'.
My table's primary key is of varchar type and its values are GUIDs generated by an application.
Here are some example primary key values:
000000bd-104e-4fd6-a791-c5422f29e1b5
0000016e-7e68-4453-b360-7ffd1627dc22
00000196-2dba-4532-8cba-1e853c466697
0000025a-cfae-41b4-b8e7-ef854d49e54a
00000260-8bdb-4b30-acdb-5a67efd4dbfe
00000366-552d-48a0-b8a1-20190ccd087c
000003f2-d6d8-4a51-96cc-407063bc568b
000003ff-3d16-4e88-9cf3-bcdf01c39a2b
00000487-1e6c-4d6d-a683-6f11d517962c
000004cc-6359-4a9a-aa2a-70a6b73a06b1
00000536-37ee-471c-a8e0-3d233b8102f5
Now I need to use this table with AWS DMS, which only accepts queries of the form select * from table where column =, <=, or >= some value.
My use case is to find the exact position of each of millions of GUIDs, so that I can split the table into multiple queries and select by GUID.
For example, if the 100th GUID is 00000536-37ee-471c-a8e0-3d233b8102f5, then I can write the query for the first 100 rows as select * from my_table where GUID <= '00000536-37ee-471c-a8e0-3d233b8102f5'.
The limitation is that I cannot add any new columns to the existing table, because the application impact would be huge.
How can I do this?
One option I thought of, but wanted to confirm, is below:
Create a temp table.
The temp table will have an auto-generated sequence number and the ID column.
Insert into the temp table a select of only the GUIDs from the main table, ordered by GUID.
That way the values are stored in order, so I can first look up the GUID at position 100 and then pass that GUID into my original query.
But I am not sure whether this will work or not.
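Something like this, I imagine (table and column names are just placeholders):
CREATE TABLE guid_positions AS
  SELECT ROWNUM AS seq_id, id AS guid
  FROM (SELECT id FROM my_table ORDER BY id);

-- GUID at position 100, to use as a range boundary:
SELECT guid FROM guid_positions WHERE seq_id = 100;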
Can someone advise on this, or suggest another option?
Let me explain what I want.
I want DMS to read my main table in parallel and migrate it.
So let's say one DMS task reads and migrates rows 1 to 100, another 100 to 200, another > 200, and so on.
Currently I cannot do this because we don't know the position of each primary key value, so we can't write the queries.
If you want to divide your table into chunks of equal sizes, I would take advantage of the hexadecimal nature of the GUIDs. It will be 256 instead of 100 chunks, but this might be acceptable.
CREATE TABLE t (pk VARCHAR2(36) PRIMARY KEY);
INSERT INTO t VALUES ('000000bd-104e-4fd6-a791-c5422f29e1b5');
The easiest option (though note that a LIKE predicate may not fit the =, <=, >= restriction you mention) would be
SELECT * FROM t WHERE pk LIKE '%b5';
A bit more advanced:
SELECT pk, to_number(substr(pk, -2),'xx') FROM t;
If you have millions of rows, this is probably faster:
ALTER TABLE t ADD (mycol GENERATED ALWAYS AS (to_number(substr(pk, -2),'xx')));
CREATE INDEX i ON t(mycol);
SELECT * FROM t WHERE mycol=181;
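Here 181 is just the decimal value of the hex suffix b5, which you can verify with:
SELECT to_number('b5', 'xx') FROM dual; -- returns 181
Each parallel DMS task would then filter on one of the 256 chunk values: WHERE mycol = 0, WHERE mycol = 1, and so on.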
Once your migration is done, you can undo the additional virtual column:
DROP INDEX i;
ALTER TABLE t DROP (mycol);

ORACLE APEX / SQL DEVELOPER: Cannot get PK to autoincrement

I am trying to implement my SQL Developer DB in Oracle APEX. I cannot figure out how to get the PKs in my table to auto-increment starting from a certain value (i.e. 400001). I have tried making triggers and sequences, but when I add a row using a form in APEX, my PK increments from 40 for some reason.
Here is my APEX form outcome:
(screenshot)
Here is how it inserts into SQL Developer:
(screenshot)
Basically, can someone describe how I can edit the existing trigger, or create a sequence, so that the application_id of a new entry auto-increments by 1?
Thanks!
Find max application_id:
select max(application_id) from your_table;
Suppose it is 400010 (as the screenshot suggests). Now recreate the sequence (presuming its name is seq_app):
drop sequence seq_app;
create sequence seq_app start with 400011 increment by 1 nocache;
Trigger is most probably OK, as you see values being inserted into the table.
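For reference, such a trigger typically looks something like this (the trigger, table, and column names here are assumptions):
create or replace trigger trg_app_bi
  before insert on your_table
  for each row
begin
  :new.application_id := seq_app.nextval;
end;
/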
Side note: sequence values will be unique, but not necessarily gapless. CACHE (or NOCACHE) affects that, but for performance's sake you'd rather let Oracle cache sequence numbers (the default cache size is 20), which means that if you don't use some of those cached numbers, they are lost. I wouldn't worry about it, if I were you.

Why is my Spring Boot entity ID generation failing?

I'm trying to auto-generate IDs for my entity, but it isn't working. Instead, IDs start from 1 even though an entry with id "1" already exists in my DB. Why is it not generating id "9" for my new entity?
Typically, when creating a table with GenerationType.IDENTITY on postgres, Hibernate will set up the id column plus a database sequence to manage this id.
By convention the sequence name will be "tablename_id_seq". E.g., for the table ad_group_action there will be a corresponding sequence ad_group_action_id_seq. You can connect to the database to double-check the actual sequence name created.
The sequence just starts from 1 and increments each time a row is inserted by Hibernate.
But if there are pre-existing rows -- or if rows with existing IDs are inserted "manually" into the table -- those rows can conflict with the sequence.
One solution is to simply reset the sequence (from pgAdmin or another database client) to start at a higher number (say 100), using something like:
ALTER SEQUENCE ad_group_action_id_seq RESTART WITH 100;
Now Hibernate will not conflict with the existing rows (assuming their max id is < 100).
Alternatively, when inserting rows manually, omit the id column and let postgres automatically set them. This way the table and the sequence will always be in sync.
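A minimal illustration of both fixes, sticking with the ad_group_action example above (the action_name column is hypothetical):
-- Check where the sequence currently stands:
SELECT last_value FROM ad_group_action_id_seq;

-- Reset it past the existing rows:
ALTER SEQUENCE ad_group_action_id_seq RESTART WITH 100;

-- When inserting manually, omit the id so postgres assigns it:
INSERT INTO ad_group_action (action_name) VALUES ('example');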

Oracle 12c - refreshing the data in my tables based on the data from warehouse tables

I need to update some tables in my application from other warehouse tables, which are updated weekly or biweekly, and my tables should be updated based on those. My tables are referenced by foreign keys from other tables, so I cannot just truncate them and reinsert the whole data set every time. Instead, I have to take the delta and update accordingly, based on a few primary key columns which don't change. I need some input on how to implement this approach.
My approach:
Check the last updated time of those tables/views.
If it is more recent, compare each row in my table and the warehouse table based on the primary key.
Update each column if it is different.
Do nothing if there is no change in the columns.
Insert if there is a new record.
My Question:
How do I implement this? Is writing PL/SQL code a good and efficient way, given that the expected number of records is around 800K?
Please provide any sample code or links.
I would go for PL/SQL with the BULK COLLECT and FORALL method. You can use MINUS in your cursor to reduce the data size and calculate the difference.
You can check this site for more information about bulk collect, forall and engines: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
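A minimal sketch of that approach, assuming a target my_table and a warehouse table wh_table that share a stable key pk_col and one data column col1 (all names hypothetical):
DECLARE
  -- Rows that are new or changed in the warehouse relative to the target:
  CURSOR c_delta IS
    SELECT pk_col, col1 FROM wh_table
    MINUS
    SELECT pk_col, col1 FROM my_table;

  TYPE t_pks  IS TABLE OF wh_table.pk_col%TYPE;
  TYPE t_vals IS TABLE OF wh_table.col1%TYPE;
  l_pks  t_pks;
  l_vals t_vals;
BEGIN
  OPEN c_delta;
  LOOP
    FETCH c_delta BULK COLLECT INTO l_pks, l_vals LIMIT 1000;
    EXIT WHEN l_pks.COUNT = 0;

    -- MERGE covers both "update if different" and "insert if new":
    FORALL i IN 1 .. l_pks.COUNT
      MERGE INTO my_table t
      USING (SELECT l_pks(i) AS pk_col, l_vals(i) AS col1 FROM dual) s
      ON (t.pk_col = s.pk_col)
      WHEN MATCHED THEN UPDATE SET t.col1 = s.col1
      WHEN NOT MATCHED THEN INSERT (pk_col, col1) VALUES (s.pk_col, s.col1);
  END LOOP;
  CLOSE c_delta;
  COMMIT;
END;
/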
There are many parts to your question and I will answer each as best I can:
While it is possible to disable the referencing foreign keys, truncate the table, repopulate it with the updated data, and then re-enable the foreign keys, given your requirements I don't believe truncating the table each time is optimal.
Yes, in principle PL/SQL is a good way to achieve this: it is too complex to handle in plain SQL, and PL/SQL is an efficient alternative.
Conceptually, the approach I would take is as follows (see the sketch after this list):
Initial set-up:
create a sequence called activity_seq
add an "activity_id" column of type number, with a unique constraint, to your source table/s
add a trigger to the source table/s setting activity_id = activity_seq.nextval for each insert/update of a row
create some kind of master table to hold the "last processed activity id" value
Then bi/weekly:
retrieve the value of "last processed activity id" from the master table
select all rows in the source table/s having an activity_id value greater than the "last processed activity id" value
iterate through the selected source rows and update the target if a match is found based on whatever your match criterion is, or insert a new row into the target if no match is found (I assume there is no delete, as you do not mention it)
on completion, update the master table's "last processed activity id" to the greatest activity_id among the source rows processed in the previous step
(Please note that, depending on your environment and the number of rows processed, the above process may need to be split and repeated over a number of transactions.)
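A sketch of the initial set-up (all names are hypothetical):
CREATE SEQUENCE activity_seq;

ALTER TABLE source_table ADD (activity_id NUMBER UNIQUE);

CREATE OR REPLACE TRIGGER trg_source_activity
  BEFORE INSERT OR UPDATE ON source_table
  FOR EACH ROW
BEGIN
  :new.activity_id := activity_seq.nextval;
END;
/

CREATE TABLE etl_control (last_processed_activity_id NUMBER);
INSERT INTO etl_control VALUES (0);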
I hope this proves helpful

Unique constraint violation during insert: why? (Oracle)

I'm trying to create a new row in a table. There are two constraints on the table: one is on the key field (DB_ID); the other constrains the ENV field to be one of several values. When I do an insert, I do not include the key field as one of the fields I'm trying to insert, yet I'm getting this error:
unique constraint (N390.PK_DB_ID) violated
Here's the SQL that causes the error:
insert into cmdb_db
(narrative_name, db_name, db_type, schema, node, env, server_id, state, path)
values
('Test Database', 'DB', 'TYPE', 'SCH', '', 'SB01', 381, 'TEST', '')
The only thing I've been able to turn up is the possibility that Oracle might be trying to assign an already in-use DB_ID if rows were inserted manually. The data in this database was somehow restored/moved from a production database, but I don't have the details as to how that was done.
Any thoughts?
Presumably, since you're not providing a value for the DB_ID column, that value is being populated by a row-level before insert trigger defined on the table. That trigger, presumably, is selecting the value from a sequence.
Since the data was moved (presumably recently) from the production database, my wager would be that when the data was copied, the sequence was not modified as well. I would guess that the sequence is generating values that are much lower than the largest DB_ID that is currently in the table leading to the error.
You could confirm this suspicion by looking at the trigger to determine which sequence is being used and doing a
SELECT <<sequence name>>.nextval
FROM dual
and comparing that to
SELECT MAX(db_id)
FROM cmdb_db
If, as I suspect, the sequence is generating values that already exist in the database, you could increment the sequence until it was generating unused values or you could alter it to set the INCREMENT to something very large, get the nextval once, and set the INCREMENT back to 1.
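For example (the sequence name db_id_seq is an assumption):
-- Jump the sequence well past the existing keys in one shot:
ALTER SEQUENCE db_id_seq INCREMENT BY 100000;
SELECT db_id_seq.nextval FROM dual;
ALTER SEQUENCE db_id_seq INCREMENT BY 1;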
Your error looks like you are duplicating an already existing primary key value in your DB. You could modify your table so the database generates its own primary key, e.g. with an identity column (Oracle 12c syntax):
CREATE TABLE db (
  db_id NUMBER GENERATED ALWAYS AS IDENTITY,
  ...
  CONSTRAINT db_pk PRIMARY KEY (db_id)
);
It looks like you are not providing a value for the primary key field DB_ID. If that is a primary key, you must provide a unique value for that column. The only way not to provide it would be to create a database trigger that, on insert, would provide a value, most likely derived from a sequence.
If this is a restoration from another database and there is a sequence on this new instance, it might be trying to reuse a value. If the old data had unique keys from 1 - 1000 and your current sequence is at 500, it would be generating values that already exist. If a sequence does exist for this table and it is trying to use it, you would need to reconcile the values in your table with the current value of the sequence.
You can use SEQUENCE_NAME.CURRVAL to see the current value of the sequence (if it exists, of course; note that CURRVAL is only defined after NEXTVAL has been referenced in your session).