How to remove a default value in oracle [duplicate]

A column in a table has a default value of SYSDATE, and I want to change it so that it has no default value. How do I do this?

ALTER TABLE YourTable MODIFY YourColumn DEFAULT NULL;

Joe's answer is correct in the sense that a column with DEFAULT NULL is functionally equivalent to having never defined a default value for that column in the first place: if a column has no default value, inserting a new row without specifying a value for that column results in NULL.
However, Oracle internally represents the two cases distinctly, as can be seen by looking at the ALL_TAB_COLUMNS system view. (This applies to Oracle 10.x, 11.x, and 12.x, and probably to older versions as well.)
The case where a column has been created, or ALTERed, with DEFAULT NULL:
create table foo (bar varchar2(3) default null);
select default_length, data_default from all_tab_columns where table_name='FOO';
=> default_length data_default
   -------------- ------------
                4 NULL
select dbms_metadata.get_ddl('TABLE','FOO') from dual;
=> CREATE TABLE "FOO"
( "BAR" VARCHAR(3) DEFAULT NULL
…
)
No default ever specified:
create table foo (bar varchar2(3));
select default_length, data_default from all_tab_columns where table_name='FOO';
=> default_length data_default
   -------------- ------------
           (null) (null)
select dbms_metadata.get_ddl('TABLE','FOO') from dual;
=> CREATE TABLE "FOO"
( "BAR" VARCHAR(3)
…
)
As shown above, there is an important case where this otherwise-meaningless distinction makes a difference in Oracle's output: when using DBMS_METADATA.GET_DDL() to extract the table definition.
If you are using GET_DDL() to introspect your database, then you will get slightly different DDL output for functionally-identical tables.
This is really quite annoying when using GET_DDL() for version control or for comparison across multiple instances of a database, and there is no way to avoid it other than manually modifying the output of GET_DDL() or completely recreating the table with no default value.
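If all you need is consistent GET_DDL() output for comparison, one crude workaround is to post-process the text. A sketch (it assumes the string ' DEFAULT NULL' never appears legitimately elsewhere in the generated DDL):
select replace(dbms_metadata.get_ddl('TABLE','FOO'), ' DEFAULT NULL', '') from dual;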

The only way to do what you want is to recreate the table.
It is pretty easy to do in Toad: just right-click on the table and select "Rebuild Table". Toad will create a script that renames the table and recreates a new one. The script recreates indexes, constraints, foreign keys, comments, etc., and repopulates the table with data.
Just modify the script to remove "default null" after the column in question.
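If you don't have Toad, here is a minimal manual sketch of the same rebuild, using the FOO/BAR example from above (note that CTAS carries over data and NOT NULL constraints, but not indexes, grants, or other constraints, so those would need recreating by hand):
alter table foo rename to foo_old;
create table foo as select * from foo_old; -- CTAS does not copy the DEFAULT clause
drop table foo_old;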

Related

How can I alter a table in Oracle while adding a column, which is of the same type as a column from a different table

I have two tables: table1 and table2.
There is a customerId field in table1 of data type Varchar2(30); how can I alter table2 to add a customerId field of the same data type as table1's, using %type?
I tried the below code but no luck.
alter table table2
add customer_id table1.CUSTOMER_ID%type;
Is it possible to alter using %type? Will this work? Please advise.
If it does not work, shall I do it manually by stating
alter table table2
add customer_id varchar2(30);
%type is a PL/SQL construct: we use it to declare local variables in a program based on table columns. It does not work in SQL.
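For contrast, here is a minimal sketch of where %type does work: a PL/SQL declaration (assuming table1.customer_id exists, as in the question):
declare
  v_customer_id table1.customer_id%type; -- inherits the column's current datatype
begin
  select customer_id into v_customer_id from table1 where rownum = 1;
end;
/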
"if the data type of customerId changes, we have to manually change everywhere, instead if it is a copy, we just need to change at one place, "
This is not how Oracle (and most other if not all) databases work. They are engines for storing and retrieving data. They make this easy by enforcing strong data-typing and by making it hard to lose data carelessly. The rigour of the data dictionary is there to protect us from our lazy selves.
As a thought experiment, consider the impact on table2.customer_id if we did any one of the following:
alter table table1 modify customer_id not null;
alter table table1 modify customer_id number(6,0); -- from number(9,0)
alter table table1 modify customer_id number(6,0); -- from varchar2(6)
alter table table1 drop column customer_id;
All of these are possible real-life cases. For any of them, the state of the data in table2.customer_id could cause the statement to fail on table2 (even though it would succeed on table1). Is that desirable? Almost certainly yes. But it now means we cannot change table1, which greatly reduces the utility of having a template column.
"i thought is more of a good practice."
The best practice is to get it right first time. Obviously that's not always possible, because circumstances evolve over time. We need to accept there will be change, and the good practice for handling change is to run an impact assessment: if we change table1.customer_id what else might be affected? What else will need to change after that? What about all the program code which uses these columns?
Data management is hard, but it's hard for a good reason. Unlike source code, databases have state. Changing state is expensive, and reverting to a previous state even more so. Changing the datatype of a column means changing the state of all the data in that column. This is not something which should be done lightly.
So. Do proper analysis. Have a decent data model. Understand your data structures. These are good practices.
This is not an answer (mathguy already gave you one), but a comment is a little bit "short" for what I'd like to say.
While attending HrOUG conference, I saw a man wearing a T-shirt saying
Thank you for spending months in coding & saving us days in planning
In other words, choose the CUSTOMERID data type carefully. If you are selling products to 13 customers today, don't set it to NUMBER(2), because (if your company develops and becomes prosperous) you'll soon be selling products to thousands of customers. Will you first alter it (and all its dependent column data types, as well as all its appearances in your application(s)) to NUMBER(3), and then to NUMBER(4), etc.? Think about the future!
Similarly, at the same conference, there were guys who said that they have tables with 570 columns. Gosh! 5-7-0! What are they doing with such tables? Their answer was: "We pay Oracle a lot of money. It allows us to create tables with 1000 columns, and we are going to use every single one of them." The audience was kind of puzzled (hint: normalization?), but hey, it's their choice.
Yes, I noticed that you chose a VARCHAR2 data type for that ID column. (I'm not saying that it is wrong, but I somehow prefer numbers over strings for such purposes.) So, what do you think: will 30 characters be enough? How much would it cost if you set it to 50 characters? Or 100? They won't take any additional space on disk: if there is 'A234' in your VARCHAR2(100 BYTE) column, it'll take only 4 bytes. Memory is a different story, as Oracle will pre-allocate space when you use such a variable in your PL/SQL code, so you might end up wasting space unnecessarily. Adding more RAM? Sure, it is an option, but it costs money.
Therefore, once again - design your data model carefully and you should be OK, following the supported ALTER TABLE syntax.
Note: use this with caution.
Use it only after you have read all the other answers and still think you want it that way.
The approach is a DDL trigger. The below is just a sample which handles customer_id as a NUMBER type; for VARCHAR2, DATE etc. you need a generic way to construct the DDL. Refer to "Issue in dynamic table creation".
CREATE OR REPLACE TRIGGER trg_alter_table1
  AFTER ALTER ON SCHEMA
  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE' AND ORA_DICT_OBJ_NAME = 'TABLE1')
DECLARE
  v_ddl VARCHAR2 (200);
BEGIN
  -- rebuild the ALTER statement for TABLE2 from TABLE1's current column definition
  SELECT 'ALTER TABLE TABLE2 MODIFY '
         || column_name
         || ' '
         || data_type
         || '('
         || data_precision
         || ')'
    INTO v_ddl
    FROM user_tab_columns
   WHERE table_name = 'TABLE1' AND column_name = 'CUSTOMER_ID';
  EXECUTE IMMEDIATE v_ddl;
END;
/

Batch insert: is there a way to just skip to the next record when a constraint is violated?

I am using MyBatis to perform a massive batch insert on an Oracle DB.
My process is very simple: I am taking records from a list of files and inserting them into a specific table after performing some checks on the data.
- Each file contains an average of 180,000 records, and I can have more than one file.
- Some records can be present in more than one file.
- A record is identical to another one if EVERY column matches; in other words, I cannot simply perform a check on a specific field. I have defined a constraint in my DB which makes sure this condition is satisfied.
To put it simply, I want to just ignore the constraint exception Oracle will give me in case that constraint is violated.
Record is not present? --> insert
Record is already present? --> go ahead
Is this possible with MyBatis? Or can I accomplish something at the DB level?
I have control over both the application server and the DB, so please tell me what's the most efficient way to accomplish this task (even though I'd like to avoid being too DB-dependent...).
Of course, I'd like to avoid performing a SELECT * before each insertion; given the number of records I am dealing with, it would ruin my application's performance.
Use the IGNORE_ROW_ON_DUPKEY_INDEX hint:
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table_name index_name) */
into table_name
select * ...
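For instance, a sketch with illustrative names (records is the target table, records_uk the unique index backing the all-columns constraint, records_staging the loaded file contents); rows that would violate the index are silently skipped. The hint is available from 11g onwards:
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(records records_uk) */
into records
select * from records_staging;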
I'm not sure about JDBC, but at least in OCI it is possible. With batch operations you pass vectors as bind variables, and you also get back vector(s) of returned IDs as well as a vector of error codes.
You can also use MERGE on the database server side together with custom collection types. Something like:
merge into t
using ( select * from TABLE(:var) ) v
on ( v.id = t.id )
when not matched then insert ...
where :var is a bind variable of the SQL type TABLE OF <recordname>.
The TABLE keyword is the construct that casts the bound collection into a queryable table.
Another option is to use the SQL error logging clause:
DBMS_ERRLOG.create_error_log (dml_table_name => 't');
insert into t(...) values(...) log errors reject limit unlimited;
Then, after the load, you will have to truncate the error logging table err$_t.
Another option would be to use external tables.
It looks like any of these solutions is quite a lot of work compared to using sqlldr.
Ignore errors with an error log table:
insert
into table_name
select *
from selected_table
LOG ERRORS INTO SANJI.ERROR_LOG ('some comment')
REJECT LIMIT UNLIMITED;
The error log table schema is:
CREATE GLOBAL TEMPORARY TABLE SANJI.ERROR_LOG (
ora_err_number$ number,
ora_err_mesg$ varchar2(2000),
ora_err_rowid$ rowid,
ora_err_optyp$ varchar2(2),
ora_err_tag$ varchar2(2000),
n1 varchar2(128)
)
ON COMMIT PRESERVE ROWS;

Oracle Auto-Increment Primary Key without a trigger

There are lots of posts that indicate the accepted way to do an auto-increment primary key (like MySQL's auto_increment property) in Oracle is a trigger.
However, what if I don't want a trigger? I've found a number of approaches to this, and I'm wondering what the merits/demerits are of these approaches.
1st Option
I think I know why this approach isn't recommended: it looks obvious from a human perspective, but it is dangerous from a database perspective, since two concurrent sessions can read the same MAX(PK) and collide.
INSERT INTO MY_TABLE (PK, NAME, PASSWORD) VALUES
(((SELECT MAX(PK) FROM MY_TABLE)+1), :bound_name, :bound_password)
2nd Option
Assuming MY_TABLE_PK is a sequence we've created beforehand:
VARIABLE id NUMBER;
BEGIN
:id := MY_TABLE_PK.NEXTVAL;
INSERT INTO MY_TABLE (PK, NAME, PASSWORD) VALUES
(:id,:bound_name,:bound_value);
END;
3rd Option
Again assuming MY_TABLE_PK is a sequence we've created beforehand:
INSERT INTO MY_TABLE (PK, NAME, PASSWORD)
SELECT MY_TABLE_PK.NEXTVAL, 'literal name', 'literal password'
FROM DUAL
In my experiments, all of these work in certain contexts, though not 100% of the time.
My approach is always this:
INSERT INTO MY_TABLE (PK, NAME, PASSWORD)
VALUES (MY_TABLE_PK.NEXTVAL, 'literal name', 'literal password');
It's the simplest, so why go for the complicated ones?
Option 2 is not needed at all; the other options are fine, but always take the simplest approach to keep bugs down and maintenance easy.
Normally @Lokesh's answer is best. If you're using 12c then definitely look into @kordirko's comment about identity columns.
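For reference, a minimal sketch of such an identity column (requires 12c or later); it removes the need for both the sequence and the trigger:
create table my_table (
  pk       number generated always as identity primary key,
  name     varchar2(100),
  password varchar2(100)
);
insert into my_table (name, password) values ('literal name', 'literal password');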
Another option is to use SYS_GUID to automatically generate primary keys. The primary key will use more space than a number but has the added advantage of being globally unique.
create table test1(id raw(16) default sys_guid(), a number);
insert into test1(a) values(1);
select * from test1;
ID A
-------------------------------- -
BFFE63BD3ADE4209AC906CECE750C3AE 1

Executing triggers in Oracle for copying the old values to a Mirror table

We are trying to copy the current row of a table to a mirror table using a trigger before delete/update. Below is the working trigger:
CREATE OR REPLACE TRIGGER mirror_table_trg -- trigger name assumed; the original snippet omitted the CREATE line
BEFORE UPDATE OR DELETE
ON CurrentTable FOR EACH ROW
BEGIN
INSERT INTO MirrorTable
( EMPFIRSTNAME,
EMPLASTNAME,
CELLNO,
SALARY
)
VALUES
( :old.EMPFIRSTNAME,
:old.EMPLASTNAME,
:old.CELLNO,
:old.SALARY
);
END;
But the problem is that we have more than 50 columns in the current table and we don't want to mention all those column names. Is there a way to select all columns, like
:old.*
SELECT * INTO MirrorTable FROM CurrentTable
Any suggestions would be helpful.
Thanks,
Realistically, no. You'll need to list all the columns.
You could, of course, dynamically generate the trigger code pulling the column names from DBA_TAB_COLUMNS. But that is going to be dramatically more work than simply typing in 50 column names.
If your table happens to be an object table, :old would be an instance of that object, so you could insert that. But it would be rather rare to have an object table.
If your 'current' and 'mirror' tables have EXACTLY the same structure you may be able to use something like
INSERT INTO MirrorTable
SELECT *
FROM CurrentTable
WHERE CurrentTable.primary_key_column = :old.primary_key_column
Honestly, I think that this is a poor choice and wouldn't do it (querying CurrentTable from its own row-level trigger will generally raise the ORA-04091 mutating-table error anyway), but it's a more-or-less free world and you're free (more or less :-) to make your own choices.
Share and enjoy.
For what it's worth, I've been writing the same stuff and used this to generate the code:
SQL> set pagesize 0
SQL> select ':old.'||COLUMN_NAME||',' from all_tab_columns where table_name='BIGTABLE' and owner='BOB';
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
...
If you supply values for all of the columns, there is no need to mention them twice (and you may use NULL for the columns you want left empty):
INSERT INTO bigtable VALUES (
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
NULL,
NULL);
People creating tables with that many columns should have no desserts ;-)

Inserting data into Oracle

I am trying to migrate a DB from Informix to Oracle. Informix had an option whereby, when inserting into a table, if the size of a value exceeds the column length, Informix automatically trims the data. Oracle does not support this and always throws an exception. Is there a way to allow this trimming, or do we have to respect the column length religiously?
There is no automatic trimming of data in Oracle; you have to trim it explicitly yourself, e.g.
insert into mytable (id, text) values (123, substr(var,1,4000));
Oracle does support a variety of SQL functions which trim values; I suspect the one you'll want is SUBSTR(). The problem is that you will need to specify the desired length explicitly. In this example T23.WHATEVER is presumed to be VARCHAR2(30) and T42.TOO_LONG_COL is, er, longer:
insert into t23
(id
, whatever)
select pk_col
, substr(too_long_col, 1, 30)
from t42
/
As well as Tony's suggestion, you can use a CAST, which silently truncates to the target length:
select cast ('1234' as varchar2(3)) a
from dual -- returns '123'
If you are doing a data migration, look into DML error logging.
Having all your non-conformant data put into a corresponding table along with the failure reason is positively dreamy.
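A sketch of how that could look for the T23/T42 example above (err$_t23 is the table name DBMS_ERRLOG generates by default; the tag string is arbitrary):
exec DBMS_ERRLOG.create_error_log(dml_table_name => 'T23');
insert into t23 (id, whatever)
select pk_col, too_long_col from t42
log errors into err$_t23 ('informix migration') reject limit unlimited;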
