Oracle Auto-Increment Primary Key without a trigger

There are lots of posts that indicate the accepted way to do an auto-increment primary key (like MySQL's auto_increment property) in Oracle is a trigger.
However, what if I don't want a trigger? I've found a number of approaches to this, and I'm wondering what the merits/demerits are of these approaches.
1st Option
I think I know why this approach isn't recommended. It looks obvious from a human perspective, but it's dangerous from a database perspective: two concurrent sessions can read the same MAX(PK) and try to insert the same key.
INSERT INTO MY_TABLE (PK, NAME, PASSWORD) VALUES
(((SELECT MAX(PK) FROM MY_TABLE)+1), :bound_name, :bound_password)
2nd Option
Assuming MY_TABLE_PK is a sequence we've created beforehand:
VARIABLE id NUMBER;
BEGIN
:id := MY_TABLE_PK.NEXTVAL;
INSERT INTO MY_TABLE (PK, NAME, PASSWORD) VALUES
(:id, :bound_name, :bound_password);
END;
3rd Option
Again assuming MY_TABLE_PK is a sequence we've created beforehand:
INSERT INTO MY_TABLE (PK, NAME, PASSWORD)
SELECT MY_TABLE_PK.NEXTVAL, 'literal name', 'literal password'
FROM DUAL
In my experiments, all of these work in certain contexts, though not 100% of the time.

My approach is always this:
INSERT INTO MY_TABLE (PK, NAME, PASSWORD)
VALUES (MY_TABLE_PK.NEXTVAL, 'literal name', 'literal password');
It's the simplest, so why go for the complicated ones?
Option 2 is not needed at all; the other options are fine, but always take the simplest approach: fewer bugs and easier maintenance.

Normally @Lokesh's answer is best. If you're using 12c then definitely look into @kordirko's comment about identity columns.
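For reference, a minimal sketch of what a 12c identity column looks like (the column widths here are made up for illustration):
-- Oracle 12c and later: no trigger and no separately managed sequence needed
CREATE TABLE MY_TABLE (
  PK       NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  NAME     VARCHAR2(50),
  PASSWORD VARCHAR2(50)
);
-- The PK is assigned automatically; the insert never mentions it
INSERT INTO MY_TABLE (NAME, PASSWORD) VALUES ('literal name', 'literal password');
Behind the scenes Oracle still creates a sequence, but it is bound to the column and is dropped with the table.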
Another option is to use SYS_GUID to automatically generate primary keys. The primary key will use more space than a number but has the added advantage of being globally unique.
create table test1(id raw(16) default sys_guid(), a number);
insert into test1(a) values(1);
select * from test1;
ID A
-------------------------------- -
BFFE63BD3ADE4209AC906CECE750C3AE 1

Related

"Sequence generated" Is Not Shown to be in Order? Oracle forms

EXECUTE_QUERY is applied to show all records; the sequence works fine, but the rows are not shown in descending order.
Please help if there's any way to order this data block by the sequence column (NO).
create table Citizens_lic
(
No NUMBER(10) ,
ID NUMBER(10)
constraint Citizens_ID_pk_1 PRIMARY KEY,
F_Name VARCHAR2(32) ,
M_Name VARCHAR2(32) ,
L_Name VARCHAR2(32) ,
DOB DATE ,
POB VARCHAR2(32) ,
GENDER VARCHAR2(32) ,
WORK_STATUS VARCHAR2(32) ,
Soc_status VARCHAR2(32) ,
ISS_DATE date ,
EXP_Date date
)
This is the table; here is the sequence:
CREATE SEQUENCE CITIZENS_LIC_NO_SEQ_1
START WITH 1
INCREMENT BY 1
here is the trigger:
CREATE OR REPLACE TRIGGER CITIZENS_LIC_NO_TRIGG_1
BEFORE INSERT ON CITIZENS_LIC
FOR EACH ROW
BEGIN
SELECT CITIZENS_LIC_NO_SEQ_1.NEXTVAL
INTO :new.NO
FROM DUAL;
END;
Try to add any value and it will work fine, but when you add values from Forms Builder, the order changes according to your mouse clicks.
Another matter: when I try to delete anything from the table,
the sequence's order is ruined and the deleted value disappears with its sequence number forever!
A Forms trigger might help, but I don't know which one is good to use.
If you want to sort rows in a data block, open its Property Palette, find the ORDER BY property and put whatever you want in there. In your case, it seems that it would be
order by no desc
When you execute a query in that data block, the result will be sorted by the no column in descending order.
As for deleting rows: of course, the number will be lost. What did you expect? Sequences guarantee unique numbers, but not a gapless list. Mind the sequence cache as well; you don't even have to delete any rows, as two consecutive sessions might produce gaps. That's how sequences work; I'd just accept it, if I were you.
If you want gapless numbers, you'll have to write your own code, and it won't be as simple as you think: you'll have to pay attention to inserts (which is the simple part), updates and deletes. Once again: stick to sequences.
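For a sense of what "your own code" means, here is a rough sketch of one common gapless approach, using a counter table serialized with SELECT ... FOR UPDATE (the counter table and names are invented for illustration, the sequence-based trigger above would have to be dropped first, and this still ignores the update and delete cases):
-- One counter row per numbered entity
CREATE TABLE GAPLESS_COUNTER (
  NAME    VARCHAR2(30) PRIMARY KEY,
  LAST_NO NUMBER NOT NULL
);
INSERT INTO GAPLESS_COUNTER VALUES ('CITIZENS_LIC', 0);

DECLARE
  v_no GAPLESS_COUNTER.LAST_NO%TYPE;
BEGIN
  -- Lock the counter row so concurrent sessions queue up instead of reusing a number
  SELECT LAST_NO + 1
    INTO v_no
    FROM GAPLESS_COUNTER
   WHERE NAME = 'CITIZENS_LIC'
     FOR UPDATE;
  UPDATE GAPLESS_COUNTER SET LAST_NO = v_no WHERE NAME = 'CITIZENS_LIC';
  INSERT INTO CITIZENS_LIC (NO, ID) VALUES (v_no, 1);  -- other columns omitted
  COMMIT;  -- the row lock is held until commit, so inserts are serialized
END;
/
Every insert now waits for the previous transaction to commit or roll back, which is exactly the throughput price you pay for gapless numbering.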

How can I alter a table in Oracle while adding a column, which is of the same type as a column from a different table

I have two tables: table1 and table2.
There is a customerId column in table1 of data type VARCHAR2(30). How can I alter table2 to add a customerId column of the same data type as table1's, using %type?
I tried the code below, but no luck.
alter table table2
add customer_id table1.CUSTOMER_ID%type;
Is it possible to alter using %type? Will this work? Please advise.
If it does not work, shall I do it manually by stating:
alter table table2
add customer_id varchar2(30);
alter table table2
add customer_id table1.CUSTOMER_ID%type;
%type is a PL/SQL construct. We use it to define local variables in a program which are based on table columns. It does not work in SQL.
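For contrast, a small sketch of where %type does work, in a PL/SQL declaration (this assumes the table1.customer_id column from the question exists):
DECLARE
  -- %TYPE anchors the variable's datatype to the column's current definition
  v_customer_id table1.customer_id%TYPE;
BEGIN
  SELECT customer_id
    INTO v_customer_id
    FROM table1
   WHERE ROWNUM = 1;
END;
/
In plain SQL statements such as ALTER TABLE, the datatype has to be spelled out explicitly.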
"if the data type of customerId changes, we have to manually change everywhere, instead if it is a copy, we just need to change at one place, "
This is not how Oracle (and most, if not all, other) databases work. They are engines for storing and retrieving data. They make this easy by enforcing strong data-typing and by making it hard to lose data carelessly. The rigour of the data dictionary is there to protect us from our lazy selves.
As a thought experiment, consider the impact on table2.customer_id if we did any one of the following:
alter table table1 modify customer_id not null;
alter table table1 modify customer_id number(6,0); -- from number(9,0)
alter table table1 modify customer_id number(6,0); -- from varchar2(6)
alter table table1 drop column customer_id;
All of these are possible real-life cases. For any of them, the state of the data in table2.customer_id could cause the statement to fail on table2 (even though it would succeed on table1). Is that desirable? Almost certainly yes. But it now means we cannot change table1, which greatly reduces the utility of having a template column.
"i thought is more of a good practice."
The best practice is to get it right the first time. Obviously that's not always possible, because circumstances evolve over time. We need to accept there will be change, and the good practice for handling change is to run an impact assessment: if we change table1.customer_id, what else might be affected? What else will need to change after that? What about all the program code which uses these columns?
Data management is hard, but it's hard for a good reason. Unlike source code, databases have state. Changing state is expensive, and reverting to a previous state even more so. Changing the datatype of a column means changing the state of all the data in that column. This is not something which should be done lightly.
So. Do proper analysis. Have a decent data model. Understand your data structures. These are good practices.
This is not an answer (mathguy told it to you already), but a comment is a little bit "short" for what I'd like to say.
While attending HrOUG conference, I saw a man wearing a T-shirt saying
Thank you for spending months in coding & saving us days in planning
In other words, carefully choose the CUSTOMERID data type. If you are selling products to 13 customers today, don't set it to NUMBER(2) because (if your company develops and becomes prosperous) you'll soon be selling products to thousands of customers. Will you first alter it (and all its dependent column data types, as well as all its appearances in your application(s)) to NUMBER(3), and then to NUMBER(4), etc.? Think about the future!
Similarly, at the same conference, there were guys who said that they have tables with 570 columns. Gosh! 5-7-0! What are they doing with such tables? Their answer was: "We pay Oracle a lot of money. It allows us to create tables with 1000 columns, and we are going to use every single one of them." The audience was kind of puzzled (hint: normalization?), but hey - it's their choice.
Yes, I noticed that you chose a VARCHAR2 data type for that ID column. (I'm not saying that it is wrong, but I, somehow, prefer numbers over strings for such purposes.) So, what do you think? Will 30 characters be enough? How much would it cost if you set it to 50 characters? Or 100? They won't take any additional space on disk: if there is 'A234' in your VARCHAR2(100 BYTE) column, it'll take only 4 bytes. Memory is a different story, as Oracle will pre-allocate space when you use such a variable in your PL/SQL code, so you might end up wasting space unnecessarily. Adding more RAM? Sure, it is an option, but it costs money.
Therefore, once again - design your data model carefully and you should be OK, following the supported ALTER TABLE syntax.
Note: use this with caution.
Use this only after you have read all the other answers and still think you want it that way.
You could use a DDL trigger. The code below is just a sample which treats customer_id as a NUMBER type. For VARCHAR2, DATE etc., you need a generic way to construct the DDL. Refer to Issue in dynamic table creation.
CREATE OR REPLACE TRIGGER trg_alter_table1
   AFTER ALTER ON SCHEMA
   WHEN (ORA_DICT_OBJ_TYPE = 'TABLE' AND ORA_DICT_OBJ_NAME = 'TABLE1')
DECLARE
   v_ddl   VARCHAR2 (200);
BEGIN
   -- Build the matching ALTER for TABLE2 from TABLE1's current column definition
   SELECT    'ALTER TABLE TABLE2 MODIFY '
          || column_name
          || ' '
          || data_type
          || '('
          || data_precision
          || ')'
     INTO v_ddl
     FROM user_tab_columns
    WHERE table_name = 'TABLE1' AND column_name = 'CUSTOMER_ID';

   -- Apply the same change to TABLE2
   EXECUTE IMMEDIATE v_ddl;
END;
/

How to remove a default value in oracle [duplicate]

A column in a table has a default value of sysdate and I want to change it so it gets no default value, how do I do this?
ALTER TABLE YourTable MODIFY YourColumn DEFAULT NULL;
Joe's answer is correct in the sense that a column with DEFAULT NULL is functionally equivalent to having never defined a default value for that column in the first place: if a column has no default value, inserting a new row without specifying a value for that column results in NULL.
However, Oracle internally represents the two cases distinctly, as can be seen by looking at the ALL_TAB_COLUMNS system view. (This applies to Oracle 10.x, 11.x, and 12.x, and probably to older versions as well.)
The case where a column has been created, or ALTERed, with DEFAULT NULL:
create table foo (bar varchar2(3) default null);
select default_length, data_default from all_tab_columns where table_name='FOO';
=> default_length data_default
-------------- ------------
4 NULL
select dbms_metadata.get_ddl('TABLE','FOO') from dual;
=> CREATE TABLE "FOO"
( "BAR" VARCHAR(3) DEFAULT NULL
…
)
No default ever specified:
create table foo (bar varchar2(3));
select default_length, data_default from all_tab_columns where table_name='FOO';
=> default_length data_default
-------------- ------------
(null) (null)
select dbms_metadata.get_ddl('TABLE','FOO') from dual;
=> CREATE TABLE "FOO"
( "BAR" VARCHAR(3)
…
)
As shown above, there is an important case where this otherwise-meaningless distinction makes a difference in Oracle's output: when using DBMS_METADATA.GET_DDL() to extract the table definition.
If you are using GET_DDL() to introspect your database, then you will get slightly different DDL output for functionally-identical tables.
This is really quite annoying when using GET_DDL() for version control and comparison among multiple instances of a database, and there is no way to avoid it, other than to manually modify the output of GET_DDL(), or to completely recreate the table with no default value.
The only way to do what you want is to recreate the table.
It is pretty easy to do in Toad: just right-click on the table and select "Rebuild Table". Toad will create a script that renames the table and creates a new one. The script recreates the indexes, constraints, foreign keys, comments, etc., and repopulates the table with the data.
Just modify the script to remove "default null" after the column in question.
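If you don't have Toad, a rough manual sketch of the same rebuild looks like this (the table name is a placeholder, and the primary key, indexes, foreign keys, grants and comments would all have to be recreated by hand):
ALTER TABLE my_table RENAME TO my_table_old;

-- CTAS copies the data and NOT NULL constraints, but it does not copy DEFAULT clauses
CREATE TABLE my_table AS SELECT * FROM my_table_old;

-- recreate the primary key, indexes, foreign keys, grants and comments here, then:
DROP TABLE my_table_old;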

Executing triggers in Oracle for copying the old values to a Mirror table

We are trying to copy the current row of a table to a mirror table by using a trigger before delete/update. Below is the working trigger body:
BEFORE UPDATE OR DELETE
ON CurrentTable FOR EACH ROW
BEGIN
INSERT INTO MirrorTable
( EMPFIRSTNAME,
EMPLASTNAME,
CELLNO,
SALARY
)
VALUES
( :old.EMPFIRSTNAME,
:old.EMPLASTNAME,
:old.CELLNO,
:old.SALARY
);
END;
But the problem is we have more than 50 columns in the current table and don't want to mention all those column names. Is there a way to select all columns, like
:old.*
SELECT * INTO MirrorTable FROM CurrentTable
Any suggestions would be helpful.
Thanks,
Realistically, no. You'll need to list all the columns.
You could, of course, dynamically generate the trigger code pulling the column names from DBA_TAB_COLUMNS. But that is going to be dramatically more work than simply typing in 50 column names.
If your table happens to be an object table, :new would be an instance of that object so you could insert that. But it would be rather rare to have an object table.
If your 'current' and 'mirror' tables have EXACTLY the same structure you may be able to use something like
INSERT INTO MirrorTable
SELECT *
FROM CurrentTable
WHERE CurrentTable.primary_key_column = :old.primary_key_column
Honestly, I think that this is a poor choice and wouldn't do it, but it's a more-or-less free world and you're free (more or less :-) to make your own choices.
Share and enjoy.
For what it's worth, I've been writing the same stuff and used this to generate the code:
SQL> set pagesize 0
SQL> select ':old.'||COLUMN_NAME||',' from all_tab_columns where table_name='BIGTABLE' and owner='BOB';
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
...
If you supply values for all the columns, there's no need to list the column names as well (and you may use NULL for the empty ones):
INSERT INTO bigtable VALUES (
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
NULL,
NULL);
people writing tables with that many columns should have no desserts ;-)

Inserting into Oracle the wrong way - how to deal with it?

I've just found the following code:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
I can create a sequence for the PK, but there's a bunch of existing code (classic asp, existing asp.net apps that aren't part of this project) that's not going to use it.
Should I just ignore it, or is there a way to fix it without going into the existing code?
I'm thinking that the best option is just to do:
insert into TABLE_NAME (id, ... )
VALUES (select max(id) + 1, ...)
Options?
You can create a trigger on the table that overwrites the value for ID with a value that you fetch from a sequence.
That way you can still use the other existing code and have no problems with concurrent inserts.
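A minimal sketch of such a trigger (the sequence name, trigger name and starting value are placeholders; start the sequence above the current MAX(id)):
CREATE SEQUENCE table_name_seq START WITH 1000000 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER table_name_id_trg
BEFORE INSERT ON TABLE_NAME
FOR EACH ROW
BEGIN
  -- Ignore whatever id the application supplied and always take the sequence value
  :new.id := table_name_seq.NEXTVAL;  -- on pre-11g, use SELECT table_name_seq.NEXTVAL INTO :new.id FROM dual
END;
/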
If you cannot change the other software and it still does the SELECT MAX(id)+1 insert, that is most unfortunate. What you can then do is:
For your own inserts, use a sequence and populate the ID field with -1 * (sequence value).
This way your inserts will not interfere with the existing programs' MAX(id)+1 logic, and will not conflict with the IDs they generate.
(Or do the insert without a value for ID and use a trigger to populate the ID with the negative of a sequence value.)
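A minimal sketch of the negative-ID idea (the sequence and column names are invented for illustration):
CREATE SEQUENCE my_app_seq START WITH 1 INCREMENT BY 1;

-- New code inserts negative ids; the legacy SELECT MAX(id)+1 logic keeps returning
-- the largest positive value (assuming the legacy rows stay positive), so the two
-- id ranges never collide
INSERT INTO TABLE_NAME (id, some_col)
VALUES (-1 * my_app_seq.NEXTVAL, 'row from the new application');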
As others have said, you can override the max value in a database trigger using a sequence. However, that could cause problems if any of the application code uses that value like this:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
insert into CHILD_TABLE (parent_id, ...)
VALUES (max(id) + 1, ...)
Use a sequence in a BEFORE INSERT row trigger. SELECT MAX(id) + 1 doesn't work in a concurrent environment.
This quickly turns in to a discussion of application architecture, especially when the question boils down to "what should I do?"
Primary keys in Oracle really need to come from sequences and since you're dealing with complex insert logic (parent/child inserts, at least) in your application code, you should go into the existing code, as you say (since triggers probably won't help you).
On one extreme you could take away direct SQL access from applications and make them call services so the insert/update/delete code can be centralized. Or you could rewrite your code using some sort of MVC architecture. I'm assuming both are overkill for your situation.
Is the id column at least set to be a true primary key so there's a constraint that will keep duplicates from occurring? If not, start there.
Once the primary key is in place, or if it already is, it's only a matter of time until inserts start to fail; you'll know when they start to fail, right? If not, get on the error-logging.
Now fix the application code. While you're in there, you should at least write and call helper code so your database interactions are in as few places as possible. Then provide some leadership to the other developers and make sure they use the helper code too.
Big question: does anybody rely on the value of the PK? If not, I would recommend using a trigger that fetches the id from a sequence and sets it. The inserts wouldn't specify an id at all.
I am not sure, but the
insert into TABLE_NAME (id, ... )
VALUES (select max(id) + 1, ...)
might cause problems when two sessions reach that code. It might be that Oracle reads the table (calculating MAX(id)) and then tries to get the lock on the PK for the insert. If that is the case, two concurrent sessions might try to use the same id, causing an exception in the second session.
You could add some logging to the trigger to check whether inserts that already have an ID set are still being processed, so you know you still have to hunt down the places where the old code is used.
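A hedged sketch of that logging idea (the audit table, trigger and column names are invented, and the sequence is assumed to exist as in the earlier trigger sketch):
CREATE TABLE legacy_id_audit (
  table_name  VARCHAR2(30),
  supplied_id NUMBER,
  logged_at   DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER table_name_id_trg
BEFORE INSERT ON TABLE_NAME
FOR EACH ROW
BEGIN
  IF :new.id IS NOT NULL THEN
    -- Some old code still supplies an explicit id: record it so it can be hunted down
    INSERT INTO legacy_id_audit (table_name, supplied_id)
    VALUES ('TABLE_NAME', :new.id);
  END IF;
  :new.id := table_name_seq.NEXTVAL;  -- always override with the sequence value
END;
/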
It can be done by fetching the max value into a variable and then using it in the insert, like:
DECLARE
  v_max INT;
BEGIN
  SELECT MAX(id) INTO v_max FROM my_table;
  INSERT INTO my_table VALUES ((v_max + ROWNUM), val1, val2, ...., valn);
  COMMIT;
END;
This will produce sequential values for single as well as bulk inserts.
