Trigger - inserting default values into primary key when missing data - Oracle

I have a problem creating a trigger on a table.
create table dwarfs (
  name     varchar2(20),
  nickname varchar2(20),
  note     varchar2(20),
  primary key (name, nickname)
);
Idea: when someone wants to insert data without entering a name, the trigger should add a default name, for example "Dwarf1".
I created a trigger, but I get this message:
SQL Error: ORA-01400: cannot insert NULL into
01400. 00000 - "cannot insert NULL into (%s)"
create or replace trigger t_d
before insert or update on dwarfs
for each row
when (new.name = null or new.name = '')
declare
begin
  :new.name := 'Dwarf1';
end;

As @kodirko noted in his comment, the comparison (new.name = null or new.name = '') will never succeed, because a comparison with NULL always yields NULL, never TRUE or FALSE (and in Oracle the empty string '' is itself treated as NULL). To determine whether a column is NULL you need to use the special comparison construct IS NULL. Also note that because nickname is part of the primary key it must never be NULL either - so, taking all of this together, you might try rewriting your trigger as:
create or replace trigger t_d
before insert or update on dwarfs
for each row
when (new.name is null or new.nickname is null)
begin
  if :new.name is null then
    :new.name := 'Dwarf1';
  end if;
  if :new.nickname is null then
    :new.nickname := :new.name;
  end if;
end;
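A quick sanity check (hypothetical data) shows the defaults being filled in:
insert into dwarfs (note) values ('no name or nickname given');
-- the trigger supplies name = 'Dwarf1' and nickname = 'Dwarf1'
select name, nickname, note from dwarfs;
-- note that a second such insert would then raise ORA-00001, because
-- the primary key ('Dwarf1', 'Dwarf1') already exists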
Share and enjoy.

This is a typical case of not seeing the forest for the trees. Yes, there is the technical detail of using = in a test for null. The main point, however, is...
NEVER, EVER ASSIGN DEFAULT VALUES TO KEY FIELDS!!!
If a field is a key field, the tuple is explicitly unusable without it. If that data is missing, there is something very, very wrong, and the row should be prevented at all costs from being inserted into the database.
This does not, of course, apply to an Identity or auto generating value that is defined as the surrogate key. Surrogate keys are, by definition, completely independent of the entity data. (This points out a disadvantage of surrogate keys, but that is a different discussion.) This applies only to attribute fields that have been further identified as key fields.
If the value is missing and a default value is not supplied, any attempt to insert the row will generate an error. Which is exactly what you want to happen. Don't make it easy for the users to destroy the integrity of the database.
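For instance (schema name hypothetical), with no trigger and no default in place:
insert into dwarfs (nickname, note) values ('Sleepy', 'no name supplied');
-- ORA-01400: cannot insert NULL into ("SCOTT"."DWARFS"."NAME")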


Adding a unique key to the oracle database table

Trying to implement a friendship table.
To explain what I have done till now, here is my DDL:
-- WORKING: "relationship" - this table is used to store the relationship between users
create table relationship (
  relation_id  number(8),
  FromUserName varchar2(30),
  ToUserName   varchar2(30),
  StatusId     number,
  SentTime     timestamp,
  constraint relationship_pk primary key (relation_id),
  foreign key (FromUserName) references users(username),
  foreign key (ToUserName) references users(username)
);
-- WORKING: add a unique key to the 'relationship' table so that a user can send a request to another user only once
ALTER TABLE relationship
ADD CONSTRAINT relation_unique UNIQUE (FromUserName, ToUserName);
Here is an image to explain the problem (screenshot of the table data, not reproduced here).
My problem: have a look at the last two rows. The user kamlesh1 sends a request to jitu1, and jitu1 also sends a request to kamlesh1; when kamlesh1 accepts the request, the StatusId changes to 1, and similarly in the other direction when jitu1 accepts.
I want to prevent this kind of duplication, i.e. once a user has sent you a request, you cannot send a request back to him - you can only accept or reject his request.
I just couldn't think of a proper question title, so if you could help with that too...
Please help.
You could create a unique function-based index for this:
CREATE UNIQUE INDEX relation_unique ON relationship
  ( LEAST(FromUserName, ToUserName), GREATEST(FromUserName, ToUserName) );
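With that index in place (after dropping the original relation_unique constraint), a reversed pair collides with the original row. For example, with hypothetical data:
insert into relationship (relation_id, FromUserName, ToUserName, StatusId, SentTime)
values (1, 'kamlesh1', 'jitu1', 0, systimestamp);   -- accepted
insert into relationship (relation_id, FromUserName, ToUserName, StatusId, SentTime)
values (2, 'jitu1', 'kamlesh1', 0, systimestamp);   -- rejected: ORA-00001, since LEAST/GREATEST normalise both rows to the same pair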
A couple of side notes: You don't need a NUMBER (38 digits of precision) to store a value that is either 0 or 1. NUMBER(1) should suffice. Also, you probably don't need the granularity of TIMESTAMP for SentTime - a DATE should do the trick, and might make arithmetic a bit easier (DATE arithmetic is expressed in days, TIMESTAMP arithmetic in intervals). Last, using CamelCase for column names in Oracle isn't a good idea since Oracle object names aren't case-sensitive unless you enclose them in double quotes. If you were to inspect the data dictionary you would see your columns like this: FROMUSERNAME, TOUSERNAME. Much better to use column names like FROM_USERNAME and TO_USERNAME (or USERNAME_FROM and USERNAME_TO).
You could instead enforce an ordering on the two usernames. Say, add
alter table relationship
add constraint relation_order_chk
check (fromusername < tousername);
Then, when inserting, do something like
create or replace procedure AddRelationship(p_from varchar2, p_to varchar2 ...) is
begin
insert into relationship (fromusername, tousername, ...)
values(least(p_from, p_to), greatest(p_from, p_to), ...);
end;
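A complete version of the procedure, filling in the remaining columns from the question's DDL (the initial StatusId of 0 is an assumption), might look like:
create or replace procedure AddRelationship(
  p_id   number,
  p_from varchar2,
  p_to   varchar2
) is
begin
  -- store the pair in canonical order so the check constraint is satisfied
  insert into relationship (relation_id, fromusername, tousername, statusid, senttime)
  values (p_id, least(p_from, p_to), greatest(p_from, p_to), 0, systimestamp);
end AddRelationship;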

DB2 duplicate key error when inserting, BUT working after select count(*)

I have an issue that is new to me, and I don't know the logic/cause behind it. When I try to insert a record in a table I get a DB2 error saying:
[SQL0803] Duplicate key value specified: A unique index or unique constraint *N in *N
exists over one or more columns of table TABLEXXX in SCHEMAYYY. The operation cannot
be performed because one or more values would have produced a duplicate key in
the unique index or constraint.
Which is a quite clear message to me. But judging by the records already in the table, there should be no duplicate key if I inserted my new record. When I do a SELECT COUNT(*) from SCHEMAYYY.TABLEXXX and then try to insert the record, it works flawlessly.
How can it be that when performing the SELECT COUNT(*) I can suddenly insert the records? Is there some sort of index associated with it which might give issues because it is out of sync? I didn't design the data model, so I don't have deep knowledge of the system yet.
The original DB2 SQL is:
-- Generate SQL
-- Version: V6R1M0 080215
-- Generated on: 19/12/12 10:28:39
-- Relational Database: S656C89D
-- Standards Option: DB2 for i
CREATE TABLE TZVDB.PRODUCTCOSTS (
ID INTEGER GENERATED BY DEFAULT AS IDENTITY (
START WITH 1 INCREMENT BY 1
MINVALUE 1 MAXVALUE 2147483647
NO CYCLE NO ORDER
CACHE 20 )
,
PRODUCT_ID INTEGER DEFAULT NULL ,
STARTPRICE DECIMAL(7, 2) DEFAULT NULL ,
FROMDATE TIMESTAMP DEFAULT NULL ,
TILLDATE TIMESTAMP DEFAULT NULL ,
CONSTRAINT TZVDB.PRODUCTCOSTS_PK PRIMARY KEY( ID ) ) ;
ALTER TABLE TZVDB.PRODUCTCOSTS
ADD CONSTRAINT TZVDB.PRODCSTS_PRDCT_FK
FOREIGN KEY( PRODUCT_ID )
REFERENCES TZVDB.PRODUCT ( ID )
ON DELETE RESTRICT
ON UPDATE NO ACTION;
I'd like to see the statements...but since this question is a year old...I won't hold my breath.
I'm thinking the problem may be the
GENERATED BY DEFAULT
And instead of passing NULL for the identity column, you're accidentally passing zero or some other duplicate value the first time around.
Either always pass NULL, pass a non-duplicate value, or switch to GENERATED ALWAYS.
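For example (hypothetical values), either omit the identity column entirely or pass DEFAULT for it:
INSERT INTO TZVDB.PRODUCTCOSTS (PRODUCT_ID, STARTPRICE, FROMDATE)
VALUES (42, 19.99, CURRENT TIMESTAMP);
-- or:
INSERT INTO TZVDB.PRODUCTCOSTS (ID, PRODUCT_ID, STARTPRICE)
VALUES (DEFAULT, 42, 19.99);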
Look at preceding messages in the joblog for specifics as to what caused this. I don't understand how the INSERT can suddenly work after the COUNT(*). Please let us know what you find.
Since it shows *N (i.e. n/a) as the name of the index or constraint, this suggests to me that it is not a standard DB2 object, and therefore may be a "logical file" [LF] defined with DDS rather than SQL, with a key structure different from the one you were doing your COUNT(*) on.
Your shop may have better tools do view keys on dependent files, but the method below will work anywhere.
If your table might not be the actual "physical file", check this using Display File Description, DSPFD TZVDB.PRODUCTCOSTS, in a 5250 ("green screen") session.
Use the Display Database Relations command, DSPDBR TZVDB.PRODUCTCOSTS, to find what files are defined over your table. You can then DSPFD on each of these files to see the definition of the index key. Also check there that each of these indexes is maintained *IMMED, rather than *REBUILD or *DELAY. (A wild longshot guess as to a remotely possible cause of your strange anomaly.)
You will find the DB2 for i message finder here in the IBM i 7.1 Information Center or other releases
Is it a paging issue? We seem to get -0803 on inserts occasionally when a row is being held for update and it locks a page that probably contains the index entry needed for the insert. This is only a guess, but it appears to me that is what is happening.
I know it is an old topic, but this is the first thing Google showed me.
I had the same issue yesterday, and it caused me a lot of headache. I did the same as above: checked the table definitions, keys, existing items...
Then I found out the problem was with my INSERT statement. It was trying to insert two identical records at once, but as the constraint prevented the commit, I could not find anything in the database.
Advice: review your INSERT statement carefully! :)
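For example (hypothetical values), a multi-row INSERT can collide with itself:
INSERT INTO TZVDB.PRODUCTCOSTS (ID, PRODUCT_ID, STARTPRICE)
VALUES (7, 42, 19.99),
       (7, 43, 24.99);   -- both rows claim ID 7, so the statement fails with SQL0803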

While creating an audit trigger, throwing a compilation error

I am trying to create an audit trigger, but it is throwing a compilation error.
Could you please help me create the trigger?
DROP TRIGGER DB.DAT_CAMPLE_REQ_Test;
CREATE OR REPLACE TRIGGER DB."DAT_CAMPLE_REQ_Test"
AFTER insert or update or delete on DAT_CAMPLE_REQ
FOR EACH ROW
declare
  dmltype varchar2(6);
BEGIN
  if deleting then
    INSERT INTO h_dat_cample_req VALUES (
      :Old.REQUEST_ID,
      :Old.SAMPLE_ID,
      :Old.CASSAY_ID,
      :Old.CASCADE_ID,
      :Old.STATUS_ID,
      :Old.AUTHOR,
      :Old.CRT_SAE,
      :Old.SCREEN_SAE
    );
  else
    if inserting then
      dmltype := 'insert';
    elsif updating then
      dmltype := 'update';
    end if;
    INSERT INTO h_dat_cample_req VALUES (
      :New.REQUEST_ID,
      :New.SAMPLE_ID,
      :New.CASSAY_ID,
      :New.CASCADE_ID,
      :New.STATUS_ID,
      :New.AUTHOR,
      :New.CRT_SAE,
      :New.SCREEN_SAE
    );
  end if;
END;
You haven't provided the exact error message nor the structure of the table h_dat_cample_req, so I'm afraid I'm going to have to guess.
I suspect the column names in your h_dat_cample_req are not in the order you expect, or there are other columns in the table that you haven't specified a value for in your INSERT statements.
You are using INSERT statements without listing the columns that each value should go in to. The problem with using this form of INSERT statement is that if the columns in the table aren't in exactly the order you think they are, or there are columns that have been added or removed, you'll get an error and it'll be difficult to track it down. Furthermore, if you don't get a compilation error there's still the chance that data will be inserted into the wrong columns. Naming the columns makes it clear which value goes in which column, makes it easier to identify columns that have been removed, and also means that you don't have to specify values for all of the columns in the table - any column not listed gets a NULL value.
I would strongly recommend always naming columns in INSERT statements. In other words, instead of writing
INSERT INTO some_table VALUES (value_1, value_2, ...);
write
INSERT INTO some_table (column_1, column_2, ...) VALUES (value_1, value_2, ...);
Incidentally, you're assigning a value to your variable dmltype but you're not using its value anywhere. This won't cause a compilation error, but it is a sign that your trigger might not be doing quite what you would expect it to. Perhaps your h_dat_cample_req table is a history table and has a column for the type of operation performed?
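If that guess is right, a rewrite along these lines might work - a sketch assuming h_dat_cample_req has the same eight columns as the base table plus a dml_type column for the operation (adjust the column list to match the real table):
CREATE OR REPLACE TRIGGER DB."DAT_CAMPLE_REQ_Test"
AFTER insert or update or delete on DAT_CAMPLE_REQ
FOR EACH ROW
declare
  dmltype varchar2(6);
BEGIN
  if deleting then
    dmltype := 'delete';
    -- for deletes, archive the :old values
    INSERT INTO h_dat_cample_req
      (request_id, sample_id, cassay_id, cascade_id,
       status_id, author, crt_sae, screen_sae, dml_type)
    VALUES
      (:old.request_id, :old.sample_id, :old.cassay_id, :old.cascade_id,
       :old.status_id, :old.author, :old.crt_sae, :old.screen_sae, dmltype);
  else
    dmltype := case when inserting then 'insert' else 'update' end;
    -- for inserts and updates, archive the :new values
    INSERT INTO h_dat_cample_req
      (request_id, sample_id, cassay_id, cascade_id,
       status_id, author, crt_sae, screen_sae, dml_type)
    VALUES
      (:new.request_id, :new.sample_id, :new.cassay_id, :new.cascade_id,
       :new.status_id, :new.author, :new.crt_sae, :new.screen_sae, dmltype);
  end if;
END;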

How can I constrain multiple columns to prevent duplicates, but ignore null values?

Here's a little experiment I ran in an Oracle database (10g). Aside from (Oracle's) implementation convenience, I can't figure out why some insertions are accepted and others rejected.
create table sandbox(a number(10,0), b number(10,0));
create unique index sandbox_idx on sandbox(a,b);
insert into sandbox values (1,1); -- accepted
insert into sandbox values (1,2); -- accepted
insert into sandbox values (1,1); -- rejected
insert into sandbox values (1,null); -- accepted
insert into sandbox values (2,null); -- accepted
insert into sandbox values (1,null); -- rejected
insert into sandbox values (null,1); -- accepted
insert into sandbox values (null,2); -- accepted
insert into sandbox values (null,1); -- rejected
insert into sandbox values (null,null); -- accepted
insert into sandbox values (null,null); -- accepted
Assuming that it makes sense to occasionally have some rows with some column values unknown, I can think of two possible use cases involving preventing duplicates:
1. I want to reject duplicates, but accept when any constrained column's value is unknown.
2. I want to reject duplicates, even in cases when a constrained column's value is unknown.
Apparently Oracle implements something different though:
3. Reject duplicates, but accept (only) when all constrained column values are unknown.
I can think of ways to make use of Oracle's implementation to get to use case (2) -- for example, have a special value for "unknown", and make the columns non-nullable. But I can't figure out how to get to use case (1).
In other words, how can I get Oracle to act like this?
create table sandbox(a number(10,0), b number(10,0));
create unique index sandbox_idx on sandbox(a,b);
insert into sandbox values (1,1); -- accepted
insert into sandbox values (1,2); -- accepted
insert into sandbox values (1,1); -- rejected
insert into sandbox values (1,null); -- accepted
insert into sandbox values (2,null); -- accepted
insert into sandbox values (1,null); -- accepted
insert into sandbox values (null,1); -- accepted
insert into sandbox values (null,2); -- accepted
insert into sandbox values (null,1); -- accepted
insert into sandbox values (null,null); -- accepted
insert into sandbox values (null,null); -- accepted
Try a function-based index:
create unique index sandbox_idx on sandbox(CASE WHEN a IS NULL THEN NULL WHEN b IS NULL THEN NULL ELSE a||','||b END);
There are other ways to skin this cat, but this is one of them.
create unique index sandbox_idx on sandbox
(case when a is null or b is null then null else a end,
case when a is null or b is null then null else b end);
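With either index in place, the experiment behaves as in use case (1):
insert into sandbox values (1, null);   -- accepted
insert into sandbox values (1, null);   -- now also accepted: the whole index key is NULL
insert into sandbox values (1, 1);      -- accepted
insert into sandbox values (1, 1);      -- still rejected: ORA-00001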
A functional index! Basically I just needed to make sure all the tuples I want to ignore (ie - accept) get translated to all nulls. Ugly, but not butt ugly. Works as desired.
Figured it out with the help of a solution to another question: How to constrain a database table so only one row can have a particular value in a column?
So go there and give Tony Andrews points too. :)
I'm not an Oracle guy, but here's an idea that should work, if you can include a computed column in an index in Oracle.
Add an additional column to your table (and your UNIQUE index) that is computed as follows: it's NULL if both a and b are non-NULL, and it's the table's primary key otherwise. I call this additional column "nullbuster" for obvious reasons.
alter table sandbox add nullbuster as
  case when a is null or b is null then pk else null end;
create unique index sandbox_idx on sandbox(a, b, nullbuster);
I gave this example a number of times around 2002 or so in the Usenet group microsoft.public.sqlserver.programming. You can find the discussions if you search groups.google.com for the word "nullbuster". The fact that you're using Oracle shouldn't matter much.
P.S. In SQL Server, this solution is pretty much superseded by filtered indexes:
create unique index sandbox_idx on sandbox(a,b)
where a is not null and b is not null;
The thread you referenced suggests that Oracle doesn't give you this option. Does it also not have the possibility of an indexed view, which is another alternative?
create view sandbox_for_unique as
select a, b from sandbox
where a is not null and b is not null;
create unique index sandbox_for_unique_idx on sandbox_for_unique(a,b);
I guess you can then.
Just for the record though, I leave my paragraph to explain why Oracle behaves like that if you have a simple unique index on two columns:
Oracle will never accept two (1, null) pairs if the columns are uniquely indexed.
A pair of 1 and a null is considered an "indexable" pair. A pair of two nulls cannot be indexed, and that's why it lets you insert as many (null, null) pairs as you like.
(1, null) gets indexed because 1 can be indexed. Next time you try to insert (1, null) again, 1 is picked up by the index and the unique constraint is violated.
(null,null) isn't indexed because there is no value to be indexed. That's why it doesn't violate the unique constraint.

ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
v_reassign_count number(20);
BEGIN
select count(t1_pk) INTO v_reassign_count from TBL1
where t1_appnt_evnt_id=:new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
IF (v_reassign_count > 0) THEN
RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
END IF;
END;
The table has a primary key t1_pk, an "appointment event id" t1_appnt_evnt_id, and another column t1_prnt_t1_pk which may or may not contain another row's t1_pk.
It appears the trigger is trying to make sure that, if this row refers to another row, no other row with the same t1_appnt_evnt_id is also referring to another row.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code-generation framework layered on top of Hibernate, so I can't even figure out where the check would actually get written out. I'm hoping that there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to do. It looks to me like it is meant to enforce this business rule: for a given value of t1_appnt_evnt_id, only one row can have a non-NULL value of t1_prnt_t1_pk at a time. (It doesn't matter if they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_evnt_id but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:
The table has a numeric primary key
The primary key and the t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
create or replace function f( a number, b number ) return number deterministic as
begin
  if a is null then return 0-b; else return a; end if;
end;
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
  ( t1_appnt_evnt_id, f( t1_prnt_t1_pk, primary_key_column ) );
So rows where t1_prnt_t1_pk is NULL would appear in the index with the inverse of the primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
Update from Paul Tomblin: I went with the update to the original idea that igor put in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_evnt_id,
         CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way to do it is to create a package containing a package-scoped collection that identifies the changed rows (I think ROWID is possible, otherwise you have to use the PK; I don't use Oracle currently so I can't test it). The FOR EACH ROW trigger fills this collection with every row modified by the statement, and an AFTER statement trigger then reads the rows back and validates them.
Something like (syntax is probably wrong, I haven't worked with Oracle for a few years)
CREATE OR REPLACE PACKAGE trigger_pkg AS
  PROCEDURE before_stmt_trigger;
  PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
  PROCEDURE after_stmt_trigger;
END trigger_pkg;

CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
  TYPE rowid_tbl IS TABLE OF ROWID;
  modified_rows rowid_tbl := rowid_tbl();

  PROCEDURE before_stmt_trigger IS
  BEGIN
    modified_rows := rowid_tbl();  -- reset the list at the start of each statement
  END before_stmt_trigger;

  PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
  BEGIN
    modified_rows.EXTEND;
    modified_rows(modified_rows.COUNT) := p_rowid;
  END for_each_row_trigger;

  PROCEDURE after_stmt_trigger IS
  BEGIN
    FOR i IN 1 .. modified_rows.COUNT LOOP
      SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
      -- do whatever validation you want to
    END LOOP;
  END after_stmt_trigger;
END trigger_pkg;

CREATE OR REPLACE TRIGGER before_stmt_trigger BEFORE INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.before_stmt_trigger;
END;

CREATE OR REPLACE TRIGGER after_stmt_trigger AFTER INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.after_stmt_trigger;
END;

CREATE OR REPLACE TRIGGER for_each_row_trigger
AFTER INSERT OR UPDATE ON mytable  -- AFTER, so :new.rowid is populated for inserts too
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
  trigger_pkg.for_each_row_trigger(:new.rowid);
END;
With any trigger-based (or application-code-based) solution you need to put in locking to prevent data corruption in a multi-user environment.
Even if your trigger worked, or was re-written to avoid the mutating table issue, it would not prevent 2 users from simultaneously updating t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not null. Assume there are currently no rows where t1_appnt_evnt_id=123 and t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and works inasmuch as the mutating table issue goes away - but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate, and flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved the problem for me. (I'm not posting my code block, as I was sure that everything was written properly and should work - but it did not until I added the flush() call.) Maybe this can help someone.
