Altering table property to default when NULL only for new inserts - Oracle

Is there any way in Oracle to set a default column value (used when the value would otherwise be NULL) only for new inserts? I don't want to change existing records if they have NULL.
I want to do this at the table level, not with NVL logic in the insert.

As far as I know, if you alter the table and set a default value for a column, it only affects new records coming in via an INSERT, not existing records.
ALTER TABLE yourTable MODIFY (col VARCHAR2(100) DEFAULT 'some value');
With this approach, col values which are already NULL should remain NULL, and newly inserted records which do not specify a value for col will receive the default 'some value'.

Here's a demonstration which shows what's going on.
First, a test table and some inserts:
SQL> create table test (id number, col varchar2(10));
Table created.
SQL> insert into test (id, col) values (1, null);
1 row created.
SQL> insert into test (id, col) values (2, 'Littlefoot');
1 row created.
SQL> select * from test;
ID COL
---------- ----------
1
2 Littlefoot
Alter the table so that newly added rows contain 'some value' for the COL column:
SQL> alter table test modify col default 'some value';
Table altered.
OK; and now, the important part of the story: pay attention to the following:
SQL> -- this won't work as you wanted, because I explicitly inserted NULL into COL
SQL> insert into test (id, col) values (3, null);
1 row created.
SQL> -- this will work, because COL is omitted from the INSERT statement
SQL> insert into test (id) values (4);
1 row created.
SQL> select * From test;
ID COL
---------- ----------
1
2 Littlefoot
3
4 some value
SQL>
See? If you explicitly put NULL into a column, it won't get the default value.
However, if you were on 12c (I know, you aren't - just saying, for future reference), there's yet another option: DEFAULT ON NULL. It goes like this:
SQL> alter table test modify col default on null 'some value';
alter table test modify col default on null 'some value'
*
ERROR at line 1:
ORA-02296: cannot enable (SCOTT.) - null values found
Oops! It won't work if there are NULLs in the column. I know (again) that you don't want to modify existing rows, but for this demonstration I'll do just that:
SQL> update test set col = 'x' where col is null;
2 rows updated.
SQL> alter table test modify col default on null 'some value';
Table altered.
OK; let's see how it behaves: I'm explicitly inserting NULL into the column. In the previous example, it didn't put 'some value' in there, but left it NULL. How about now?
SQL> insert into test (id, col) values (5, null);
1 row created.
SQL> select * From test;
ID COL
---------- ----------
1 x
2 Littlefoot
3 x
4 some value
5 some value
Nice; we have 'some value' in the column.
Now you have some more info about the issue; see if it helps.

As Littlefoot mentioned, if you explicitly put NULL into a column, it won't get the default value.
If no value is mentioned for the column in the INSERT statement, the DEFAULT is used; but an explicit NULL overrides the default expression.
For 12c and above, you can use the DEFAULT ON NULL option.
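To illustrate (a minimal sketch with made-up table and column names, assuming 12c+), the clause can also be declared straight in CREATE TABLE, and the default is applied even though the INSERT explicitly supplies NULL:
CREATE TABLE yourtable_12c ( yourcolumn VARCHAR2(100) DEFAULT ON NULL 'SOME DEFAULT VALUE' );
INSERT INTO yourtable_12c (yourcolumn) VALUES (NULL);
-- yourcolumn now contains 'SOME DEFAULT VALUE', not NULL
SELECT yourcolumn FROM yourtable_12c;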
For prior versions, the only way, as far as I can tell, is to replicate that functionality with a TRIGGER:
CREATE TABLE YOURTABLE ( yourcolumn VARCHAR2(100) );
CREATE OR REPLACE TRIGGER trg_mod_yourtabcol
    BEFORE INSERT ON yourtable
    FOR EACH ROW
    WHEN ( new.yourcolumn IS NULL )
BEGIN
    :new.yourcolumn := 'SOME DEFAULT VALUE';
END;
/
INSERT INTO YOURTABLE(yourcolumn) VALUES(NULL);
select * from YOURTABLE;
Table YOURTABLE created.
Trigger TRG_MOD_YOURTABCOL compiled
1 row inserted.
YOURCOLUMN
----------------------------------------------------------------------------------------------------
SOME DEFAULT VALUE
1 row selected.

Related

Insert statement with blank values without defining all columns

I have a need to insert 100+ rows of data into a table that has 25 text columns.
I only want to insert data into some of those columns, and want the rest to be represented by a white space.
(Note: Text fields on PeopleSoft tables are defined as NOT NULLABLE, with a single white space character used to indicate no data instead of null.)
Is there a way to write an insert statement that does not define all the columns along with the blank space? As an example:
INSERT INTO CUST.RECORD(BUSINESS_UNIT, PROJECT_ID, EFF_STATUS, TMPL, DESCR) VALUES('TOO1','PROJ1','A','USA00','USA00 CONTRACT');
For every other column in CUST.RECORD I'd like to insert ' ' without defining the column or the space in the insert.
One way is to set a default value in the table definition, like this:
CREATE TABLE CUST.RECORD(
    id          NUMBER DEFAULT detail_seq.NEXTVAL,
    master_id   VARCHAR2(10) DEFAULT ' ',
    description VARCHAR2(30)
);
Edit: for your table you can use:
alter table CUST.RECORD modify( col2 varchar2(10) default ' ' );
You do not have to supply a value for a specific column if either condition is true:
the column is defined as nullable (that is, it was NOT defined with the NOT NULL clause), or
the column is defined with a default value.
SQL> create table my_test (my_id number not null,
2 fname varchar2(10), -- nullable
3 dob date default sysdate -- default value
4 )
5 ;
Table created.
SQL> --
SQL> -- only supplying value for my_id
SQL> insert into my_test(my_id) values (1);
1 row created.
SQL> --
SQL> -- and see the results
SQL> select *
2 from my_test;
MY_ID FNAME DOB
1 12-MAR-21
1 row selected.
SQL> --
SQL> select my_id,
2 nvl(fname,'NULL'),
3 dob
4 from my_test;
MY_ID NVL(FNAME, DOB
1 NULL 12-MAR-21
1 row selected.

How to change column identity from sequence to GENERATED ALWAYS with data

As the title says, I want to change my identity column from being populated by a sequence to GENERATED ALWAYS.
For example, I have a table like this:
CREATE SEQUENCE DPT.Deposit_SEQ
START WITH 1
INCREMENT BY 10
NOCACHE
NOCYCLE;
CREATE TABLE DPT.TEST(
    Id   NUMBER(10) DEFAULT DPT.Deposit_SEQ.nextval NOT NULL,
    Code VARCHAR2(20),
    CONSTRAINT PK_TEST PRIMARY KEY (ID)
);
Insert into DPT.TEST (ID, CODE) values (1,'ABC');
COMMIT;
Now, I want to change from sequence to GENERATED ALWAYS like this:
Id NUMBER(10) GENERATED ALWAYS AS IDENTITY START WITH 6
INCREMENT BY 10
NOCACHE
NOCYCLE;
I tried creating one more column and dropping the old column, but it failed. How can I do that?
Thanks!
"But failed" is not an Oracle error and is difficult to debug.
Anyway, it works for me:
Create table and a sequence, insert some rows:
SQL> CREATE SEQUENCE Deposit_SEQ START WITH 1 INCREMENT BY 10 NOCACHE NOCYCLE;
Sequence created.
SQL> CREATE TABLE TEST
2 (
3 Id NUMBER (10) DEFAULT Deposit_SEQ.NEXTVAL NOT NULL,
4 Code VARCHAR2 (20),
5 CONSTRAINT PK_TEST PRIMARY KEY (ID)
6 );
Table created.
SQL>
SQL> INSERT INTO TEST (ID, CODE)
2 VALUES (1, 'ABC');
1 row created.
SQL> INSERT INTO TEST (ID, CODE)
2 VALUES (3, 'DEF');
1 row created.
SQL> SELECT * FROM test;
ID CODE
---------- --------------------
1 ABC
3 DEF
Drop current primary key column (ID) and add a new, identity column:
SQL> ALTER TABLE test
2 DROP COLUMN id;
Table altered.
SQL> ALTER TABLE test
2 ADD id NUMBER GENERATED ALWAYS AS IDENTITY START WITH 6;
Table altered.
SQL> SELECT * FROM test;
CODE ID
-------------------- ----------
ABC 6
DEF 7
SQL> ALTER TABLE test ADD CONSTRAINT pk_test PRIMARY KEY (id);
Table altered.
SQL>
As you can see, no problem.

How to add autoincrement to an existing table in Oracle

Is there any way to add autoincrement to the primary key in an already existing table in Oracle 12c? Maybe with ALTER TABLE or something like that - I mean without triggers and sequences.
As far as I can tell, you cannot "modify" an existing primary key column into a "real" identity column.
If you want to do that, you'll have to drop the current primary key column and then alter the table to add a new identity column.
A workaround is to use a sequence (or a trigger), but you said you don't want to do that. Anyway, if you decide to use one:
SQL> create table test
2 (id number constraint pk_test primary key,
3 name varchar2(10));
Table created.
SQL> insert into test values (1, 'LF');
1 row created.
SQL> create sequence seq_test start with 2;
Sequence created.
SQL> alter table test modify id default seq_test.nextval;
Table altered.
SQL> insert into test (name) values ('BF');
1 row created.
SQL> select * from test;
ID NAME
---------- ----------
1 LF
2 BF
SQL>
Or, by dropping the current primary key column (note that this won't be easy if there are foreign keys involved):
SQL> alter table test drop column id;
Table altered.
SQL> alter table test add id number generated always as identity;
Table altered.
SQL> select * From test;
NAME ID
---------- ----------
LF 1
BF 2
SQL> insert into test (name) values ('test');
1 row created.
SQL> select * From test;
NAME ID
---------- ----------
LF 1
BF 2
test 3
SQL>

Dropping a NOT NULL constraint results in a full table scan

I have a table that contains about 25 million records and has NOT NULL constraints on several fields.
When I drop one of these NOT NULL constraints, a full table scan is executed (which takes quite a lot of time). I can see that in the session browser of a second instance of TOAD (I use TOAD to drop the constraint).
Is there a way to avoid this full table scan when a constraint gets dropped?
This suggests the column causing the full table scan has a default value, possibly from adding the column to the table while data already existed, with a NOT NULL constraint (which has only been possible since 11gR1).
As a demo, without a default value:
create table t42 (id number);
alter table t42 add (some_col number not null);
select data_default, default_length from user_tab_columns where column_name = 'SOME_COL';
DATA_DEFAULT DEFAULT_LENGTH
------------------------------------------------------------ --------------
insert into t42 (id, some_col)
select level, 0 from dual
connect by level <= 100000;
insert into t42 (id, some_col)
select 100000 + level, 1 from dual
connect by level <= 10000;
select some_col, count(*) from t42 group by some_col;
SOME_COL COUNT(*)
---------- ----------
1 10000
0 100000
set timing on
alter table t42 modify (some_col null);
Table T42 altered.
Elapsed: 00:00:00.056
But with a default value:
create table t42 (id number);
insert into t42 (id)
select level from dual
connect by level <= 100000;
alter table t42 add (some_col number default 0 not null);
select data_default, default_length from user_tab_columns where column_name = 'SOME_COL';
DATA_DEFAULT DEFAULT_LENGTH
------------------------------------------------------------ --------------
0 2
insert into t42 (id, some_col)
select 100000 + level, 1 from dual
connect by level <= 10000;
select some_col, count(*) from t42 group by some_col;
SOME_COL COUNT(*)
---------- ----------
1 10000
0 100000
set timing on
alter table t42 modify (some_col null);
Table T42 altered.
Elapsed: 00:00:04.734
Now the alter takes much longer, because it has to actually update all the pre-constraint rows to physically store the value zero. After the alter you see the same data, even if you change the default value (before or after the original alter; though if you do it before, you potentially have a small window where a constraint violation could occur):
alter table t42 modify (some_col default null);
select data_default, default_length from user_tab_columns where column_name = 'SOME_COL';
DATA_DEFAULT DEFAULT_LENGTH
------------------------------------------------------------ --------------
null 4
select some_col, count(*) from t42 group by some_col;
SOME_COL COUNT(*)
---------- ----------
1 10000
0 100000
There isn't really any way around this, other than adding a new column without the default (which will take at least as long, and probably cause other side-effects).
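If you did decide to go that route anyway, the outline would be roughly the following (just a sketch reusing the T42 demo table, with a made-up replacement column name; the UPDATE still rewrites every row, so it is no faster, and the column order changes):
-- add a replacement column with no default, copy the data across,
-- then drop the old column and rename the new one into its place
alter table t42 add (some_col_new number);
update t42 set some_col_new = some_col;
alter table t42 drop column some_col;
alter table t42 rename column some_col_new to some_col;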
Notice that the default value has changed from not being set at all to explicitly being null. When you insert a row there isn't any practical difference - the column value ends up null either way - but you can't completely remove the default once it's been set.
It's also interesting that if you change the column default without dropping the constraint, that has no effect on the reported value for the rows that were using it - they would still show as zero. Oracle seems to be storing that constraint default somewhere else, which makes sense.
Any rows inserted after the default/not-null column was added will have an actual value stored in the table anyway, and changing the default will affect subsequent insertions - but the rows that already existed before the constraint was added behave as if they had actually been updated to whatever default value was specified when the constraint was added.
This change in 11g was mainly to speed the column addition up, and to stop you having to do separate steps: add the column without a constraint, then update all existing rows (which was the slow bit), and then alter the table again to add the constraint. This mechanism lets you do it (almost) instantly with just a metadata change. But that update cost still has to be paid if the constraint is later removed.
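For reference, those separate steps looked roughly like this before 11g (a sketch only, using the same T42 table and a made-up column name):
alter table t42 add (extra_col number);
update t42 set extra_col = 0;                        -- the slow part: touches every existing row
alter table t42 modify (extra_col default 0 not null);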

Not updating existing records with the default value of a new column

I have a table with some records.
I want to add a TIMESTAMP type of column (LAST_MODIFIED) to this table. I want to set the DEFAULT value of this new column to SYSDATE.
But I want to make sure that when this column is added, existing records do not get SYSDATE as the value of this column. How can I achieve this?
You should do it as two separate actions: adding the column, then setting the default value.
SQL> create table some_data (id integer);
Table created
SQL> insert into some_data select rownum from dual connect by level <= 5;
5 rows inserted
SQL> alter table some_data add date_modified date;
Table altered
SQL> alter table some_data modify date_modified default sysdate;
Table altered
SQL> insert into some_data (id) values (6);
1 row inserted
SQL> select * from some_data;
ID DATE_MODIFIED
--------------------------------------- -------------
1
2
3
4
5
6 17.03.2015
6 rows selected
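For contrast (a sketch only, with a made-up second column name): if you add the column and its default in a single statement, existing rows end up showing the default too - they are either physically updated or, on newer versions, covered by a metadata-only default - which is exactly what the question wants to avoid:
alter table some_data add date_modified_2 date default sysdate;
-- selecting date_modified_2 now returns a date for every row, including the pre-existing ones
select id, date_modified_2 from some_data;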
