I have a table that has an id column but no primary key. All the existing ids were added manually. Now I want to insert some rows into that table, but they require an id. How can I supply values for the id column automatically while inserting?
-- sequence that will supply the id values
CREATE SEQUENCE seq$table1
START WITH 10
INCREMENT BY 1
NOCACHE
NOCYCLE;

-- trigger that fills id from the sequence on every insert
CREATE OR REPLACE TRIGGER trg$table1
BEFORE INSERT
ON table1
REFERENCING NEW AS n OLD AS o
FOR EACH ROW
BEGIN
  SELECT seq$table1.nextval
    INTO :n.id
    FROM dual;
END;
/

-- backfill: reassign the existing, manually entered ids from the same sequence
UPDATE table1
   SET id = seq$table1.nextval;
COMMIT;
I can suggest one approach; the code is something you can write yourself with a little help from Google, or let me know if you need it and I will edit the answer:
1. Create a sequence.
2. Write a procedure for the insert which checks whether the id generated by the sequence already exists in the table. Use a loop: if the id exists, call sequence.NEXTVAL again until you get an id that does not exist in the table.
Note: You could also use a trigger, but that will give a mutating-table error, so if you can handle that error you will have the perfect solution.
Let me know if you need code for the above steps, but I would suggest you try it first; a rough sketch follows below.
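For reference, a minimal sketch of steps 1 and 2, assuming the table is table1 with columns id and name (the sequence name seq_table1_id and the procedure name are made up; adjust them to your schema):

CREATE SEQUENCE seq_table1_id START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE PROCEDURE insert_table1 (p_name IN VARCHAR2) AS
  v_id     table1.id%TYPE;
  v_exists NUMBER;
BEGIN
  -- keep pulling sequence values until one is found that is not already used
  LOOP
    SELECT seq_table1_id.NEXTVAL INTO v_id FROM dual;
    SELECT COUNT(*) INTO v_exists FROM table1 WHERE id = v_id;
    EXIT WHEN v_exists = 0;
  END LOOP;

  INSERT INTO table1 (id, name) VALUES (v_id, p_name);
END insert_table1;
/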
Thanks
I am trying to use a trigger to check whether the plate number the user is inserting already exists. If it is found, a message should be displayed and the insertion blocked. There are some errors but I can't solve them. Below is the script:
CREATE OR REPLACE TRIGGER trig_check_plate_no
BEFORE INSERT OR UPDATE OF plate_number ON cars
FOR EACH ROW
WHEN(NEW.plate_number IS NOT NULL)
DECLARE
vn_count NUMBER(10);
BEGIN
SELECT COUNT(*)
INTO vn_count
FROM cars
WHERE plate_number = NEW.plate_number
IF(:vn_count>0)
THEN DBMS_OUTPUT.PUT_LINE('Insertion blocked.');
ELSE DBMS_OUTPUT.PUT_LINE('Data inserted.');
END IF;
END trig_check_plate_no;
/
How can I solve it?
As written, the trigger would not block anything even if it compiled: DBMS_OUTPUT.PUT_LINE only prints a message. To abort the triggering statement, a trigger has to raise an exception (for example with RAISE_APPLICATION_ERROR), and querying cars from a row-level trigger on cars can also run into the mutating-table error. You do not need a trigger for this at all.
To achieve what you want, the easiest way is to declare a unique constraint on that column (Oracle enforces it with a unique index):
ALTER TABLE cars ADD CONSTRAINT c_plate_no UNIQUE ( plate_number );
Oracle will then raise an error (ORA-00001: unique constraint violated) when you try to insert a plate number that already exists.
After doing so, you can omit your trigger entirely.
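For example (the plate value here is just a placeholder, and it assumes plate_number is the only mandatory column), a duplicate insert now fails on its own, and you can still turn that into a friendly message if you want one:

BEGIN
  INSERT INTO cars (plate_number) VALUES ('ABC123');
  DBMS_OUTPUT.PUT_LINE('Data inserted.');
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    DBMS_OUTPUT.PUT_LINE('Insertion blocked.');
END;
/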
I am developing an online registration system using JSP and Oracle, where I need to give every successful registrant a unique registration number, assigned sequentially. For this I want to use Oracle's NEXTVAL facility, but I am in a dilemma about which policy to adopt.
Policy 1: First I will fetch the NEXTVAL of a sequence in the following way:
select seq_form.nextval slno from dual;
Then I will store the slno in a variable, say
int slno;
and use that slno in the insert query when the user finally submits the form, like
insert into members(registration_no, name,...) values(slno, name, ...);
Here the registration_no is primary key.
Policy 2: In my second policy, I will run the insert query first:
insert into members(registration_no, name,...) values(seq_form.nextval, name, ...);
fetch the last inserted ID like
select seq_form.currval slno from dual;
and then store the same in some variable, say
int slno;
and use it to show to the registrant. Now I can't come to a conclusion about which policy is better in terms of safety and efficiency. I should make it clear that, in both cases, my intention is to give the user a unique sequential number after successful submission of the form, and by safety I mean that the user should get the ID belonging to his/her own web session. Please help me.
I suggest you do it slightly differently:
Create a BEFORE INSERT trigger on your MEMBERS table. Set REGISTRATION_NO column to SEQ_FORM.NEXTVAL in the trigger:
CREATE OR REPLACE TRIGGER MEMBERS_BI
BEFORE INSERT ON MEMBERS
FOR EACH ROW
BEGIN
:NEW.REGISTRATION_NO := SEQ_FORM.NEXTVAL;
END MEMBERS_BI;
Do NOT put REGISTRATION_NO into the column list in your INSERT statement; it will be set by the trigger, so there is no need to supply any value for it.
Use the RETURNING clause as part of the INSERT statement to get back the value put into REGISTRATION_NO by the trigger:
INSERT INTO MEMBERS (NAME, ...)
VALUES ('Fred', ...)
RETURNING REGISTRATION_NO INTO some_parameter
If you are using Oracle 12c, you can use an identity column, and then use the RETURNING clause to get the auto-generated value back.
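A minimal sketch of that approach (the column definitions other than registration_no are assumptions based on the question):

CREATE TABLE members (
  registration_no NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name            VARCHAR2(100)
  -- ... other columns
);

DECLARE
  slno members.registration_no%TYPE;
BEGIN
  INSERT INTO members (name) VALUES ('Fred')
  RETURNING registration_no INTO slno;
  DBMS_OUTPUT.PUT_LINE('Registration number: ' || slno);
END;
/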
Go with policy 2, because you can't always be sure that the insert query will be successful. If the insert fails, your Oracle sequence has already been rolled forward and you lose a number.
It is a better idea to insert first and then fetch the value into a variable.
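For what it's worth, CURRVAL is also session-safe: it returns the last NEXTVAL obtained by your own session, so concurrent users cannot see each other's values, as long as the INSERT and the CURRVAL lookup run on the same connection. A minimal sketch of policy 2 in one PL/SQL block (the column list is an assumption):

DECLARE
  slno NUMBER;
BEGIN
  INSERT INTO members (registration_no, name)
  VALUES (seq_form.NEXTVAL, 'Fred');

  -- CURRVAL is per session, so this returns the value used by the INSERT above
  SELECT seq_form.CURRVAL INTO slno FROM dual;
  DBMS_OUTPUT.PUT_LINE('Registration number: ' || slno);
END;
/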
I have a table participants with the structure shown below:
pid     number
name    varchar2(20)
version number
Whenever I insert a record into the participants table, it gets version = 1.
For example, if I insert pid = 1, name = 'Gaurav', then a record with version = 1 is created in the participants table.
Now my issue is with updates on the participants table.
Suppose I update name = 'Niharika' for pid = 1; then a new record with pid = 1, name = 'Niharika' and version = 2 needs to be created in the same table.
If I again update name = 'Rohan' for pid = 1, another new record with pid = 1, name = 'Rohan' and version = 3 needs to be created.
How can I achieve this? Put simply, I need max(version) + 1 for the pid that is being updated.
I can achieve this using a view and an INSTEAD OF trigger to insert through the view, but I am not satisfied with that solution.
I have also created a compound trigger, but even that is not working for me, because inside the trigger I need an insert statement on the same table, and that gives me a recursive error.
You should really have two tables. Make one with the structure you described and treat it as a "logging" table; it will keep the history of all the records. Have another table which is considered "current", with the same columns but without the version column. Then, when inserts/updates occur on the "current" table's records, have a mechanism (a trigger, for example) lock that pid's rows in the logging table with SELECT ... FOR UPDATE, take MAX(version) + 1, and insert the new row into the logging table. This way you won't run into mutating-table errors or anything weird like that. There is a bit of serialization this way, but it's the closest to what you're trying to do.
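A minimal sketch of that idea, assuming the current table is called participants and the history table participants_log (both names and the primary keys are assumptions); the logging table is a different table from the one the trigger is defined on, so querying it inside the trigger is safe:

CREATE TABLE participants (
  pid  NUMBER PRIMARY KEY,
  name VARCHAR2(20)
);

CREATE TABLE participants_log (
  pid     NUMBER,
  name    VARCHAR2(20),
  version NUMBER,
  -- the (pid, version) key guards against two sessions generating the same version
  CONSTRAINT participants_log_pk PRIMARY KEY (pid, version)
);

CREATE OR REPLACE TRIGGER participants_hist
AFTER INSERT OR UPDATE OF name ON participants
FOR EACH ROW
DECLARE
  v_version NUMBER;
BEGIN
  -- next version number for this pid, taken from the logging table
  SELECT NVL(MAX(version), 0) + 1
    INTO v_version
    FROM participants_log
   WHERE pid = :new.pid;

  INSERT INTO participants_log (pid, name, version)
  VALUES (:new.pid, :new.name, v_version);
END;
/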
Not usually recommended, but here's how you can do it anyway, with no extra logging table:
CREATE or REPLACE
TRIGGER part_upd
AFTER UPDATE of name
ON participants
FOR EACH ROW
DECLARE
retval BOOLEAN;
BEGIN
retval := insert_row(:old.pid,:new.name);
END part_upd;
The function-
CREATE or REPLACE
FUNCTION insert_row (pid1 number, name1 varchar2)
RETURN boolean
IS
PRAGMA autonomous_transaction;
BEGIN
INSERT INTO participants (pid, name, version)
SELECT pid1, name1, NVL(MAX(version), 0) + 1
FROM participants
WHERE pid = pid1;
COMMIT;
RETURN true;
END;
You'll have to fine tune the Trigger and Function properly by adding logging and exception handling. Read more about autonomous_transaction.
The code below is giving a mutating-table error.
Can anyone please help me solve this?
CREATE OR REPLACE TRIGGER aso_quote_cuhk_trigger
BEFORE INSERT
ON aso.aso_quote_headers_all
FOR EACH ROW
BEGIN
UPDATE aso.aso_quote_headers_all
SET quote_expiration_date=sysdate+90
where quote_header_id=:new.quote_header_id;
END;
/
In Oracle there are two levels of triggers: row level and statement (table) level.
Row-level triggers are executed once for each affected row. Statement-level triggers are executed once per statement, even if the statement changes more than one row.
In a row-level trigger you cannot select from or update the table the trigger is defined on: you will get a mutating-table error.
In this case, there is no need for an UPDATE statement. Just try this:
CREATE OR REPLACE TRIGGER aso_quote_cuhk_trigger
BEFORE INSERT
ON aso.aso_quote_headers_all
FOR EACH ROW
BEGIN
:new.quote_expiration_date := sysdate+90;
END;
/
EDIT: Rajesh mentioned that, before inserting a new row, the OP may want to update all the other records in the aso_quote_headers_all table.
Well, this is feasible, but it's a little tricky. To do it properly you will need the following (a rough sketch appears after this list):
A PL/SQL package with a variable in the package specification that is modified by the triggers. This variable could be a list holding the IDs of newly inserted records. The content of this package variable is separate for each session, so let's call it session_variable.
A row-level AFTER INSERT trigger that adds each new ID to session_variable.
A statement-level AFTER INSERT trigger that takes the IDs from session_variable, processes them and then removes them. This trigger can execute the necessary selects/updates on aso_quote_headers_all, because at statement level the table is no longer mutating. After a newly inserted ID has been processed, this trigger should make sure it is removed from session_variable.
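A rough sketch of that pattern (the package and trigger names are made up, and the UPDATE inside the statement-level trigger is only a placeholder for whatever processing is actually needed):

CREATE OR REPLACE PACKAGE quote_trg_pkg AS
  TYPE id_tab IS TABLE OF aso.aso_quote_headers_all.quote_header_id%TYPE;
  session_variable id_tab := id_tab();
END quote_trg_pkg;
/

CREATE OR REPLACE TRIGGER aso_quote_air
AFTER INSERT ON aso.aso_quote_headers_all
FOR EACH ROW
BEGIN
  -- only remember the new ID here; the table itself must not be touched at row level
  quote_trg_pkg.session_variable.EXTEND;
  quote_trg_pkg.session_variable(quote_trg_pkg.session_variable.COUNT) := :new.quote_header_id;
END;
/

CREATE OR REPLACE TRIGGER aso_quote_ais
AFTER INSERT ON aso.aso_quote_headers_all
BEGIN
  -- at statement level the table is no longer mutating, so it can be read/updated here
  FOR i IN 1 .. quote_trg_pkg.session_variable.COUNT LOOP
    UPDATE aso.aso_quote_headers_all
       SET quote_expiration_date = sysdate + 90      -- placeholder processing
     WHERE quote_header_id <> quote_trg_pkg.session_variable(i);
  END LOOP;
  -- clear the list so the next statement in this session starts fresh
  quote_trg_pkg.session_variable.DELETE;
END;
/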
I realise you must have resolved your issue by now; however, I am adding this answer to help anyone else facing the same problem you and I faced.
I recently encountered the mutating-table issue (ORA-04091: table XXXX is mutating, trigger/function may not see it) and, after searching around, found the compound trigger feature available in 11g. If you are on 11g, the following compound trigger would have solved your issue.
CREATE OR REPLACE TRIGGER aso_quote_cuhk_trigger
FOR INSERT ON aso.aso_quote_headers_all
COMPOUND TRIGGER
  -- collect the rowids of every row inserted by the statement
  TYPE rowid_tab IS TABLE OF ROWID;
  row_ids rowid_tab := rowid_tab();

  AFTER EACH ROW IS
  BEGIN
    row_ids.EXTEND;
    row_ids(row_ids.COUNT) := :new.rowid;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FORALL i IN 1 .. row_ids.COUNT
      UPDATE aso.aso_quote_headers_all
         SET quote_expiration_date = sysdate + 90
       WHERE rowid = row_ids(i);
  END AFTER STATEMENT;
END aso_quote_cuhk_trigger;
/
A word about how it works. This compound trigger has two timing-point sections:
First is AFTER EACH ROW, where we capture the rowids of the newly inserted rows.
Next is AFTER STATEMENT, where we update the table using those rowids (captured in the first section) in the WHERE clause.
The Oracle documentation is a useful place to read more about compound triggers.
I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
v_reassign_count number(20);
BEGIN
select count(t1_pk) INTO v_reassign_count from TBL1
where t1_appnt_evnt_id=:new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
IF (v_reassign_count > 0) THEN
RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
END IF;
END;
The table has a primary key "t1_pk", an "appointment event id"
t1_appnt_evnt_id and another column "t1_prnt_t1_pk" which may or may
not contain another row's t1_pk.
It appears the trigger is trying to make sure that, if this row refers to another row, nobody else with the same t1_appnt_evnt_id is also referring to another row.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code generation framework layered on top of Hibernate, so I can't even figure out where it actually gets written out, so I'm hoping that there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to do. It looks to me like it is meant to enforce this business rule: for a given value of t1_appnt_evnt_id, only one row can have a non-NULL value of t1_prnt_t1_pk at a time. (It doesn't matter whether they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_evnt_id but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:
The table has a numeric primary key
The primary key and the t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
create or replace function f( a number, b number ) return number deterministic as
begin
  if a is null then return 0-b; else return a; end if;
end;
/
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
( t1_appnt_evnt_id, f( t1_prnt_t1_pk, primary_key_column ) );
So rows where t1_prnt_t1_pk is NULL would appear in the index with the negation of the primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
Update from Paul Tomblin: I went with a variation of the original idea that igor put in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_evnt_id,
CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating-table error, the usual way is to create a package containing a package-scoped variable: a table of something that can be used to identify the changed rows (I think ROWID is possible, otherwise you have to use the PK; I don't use Oracle currently so I can't test it). The FOR EACH ROW trigger then fills this variable with all rows modified by the statement, and an AFTER statement trigger reads the rows and validates them.
Something like this (check the syntax; I haven't worked with Oracle for a few years):
CREATE OR REPLACE PACKAGE trigger_pkg AS
  PROCEDURE before_stmt_trigger;
  PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
  PROCEDURE after_stmt_trigger;
END trigger_pkg;
/

CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
  TYPE rowid_tbl IS TABLE OF ROWID;
  modified_rows rowid_tbl := rowid_tbl();

  PROCEDURE before_stmt_trigger IS
  BEGIN
    -- reset the list at the start of every triggering statement
    modified_rows := rowid_tbl();
  END before_stmt_trigger;

  PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
  BEGIN
    modified_rows.EXTEND;
    modified_rows(modified_rows.COUNT) := p_rowid;
  END for_each_row_trigger;

  PROCEDURE after_stmt_trigger IS
  BEGIN
    FOR i IN 1 .. modified_rows.COUNT LOOP
      SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
      -- do whatever validation you want to
    END LOOP;
  END after_stmt_trigger;
END trigger_pkg;
/

CREATE OR REPLACE TRIGGER before_stmt_trigger
BEFORE INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.before_stmt_trigger;
END;
/

CREATE OR REPLACE TRIGGER after_stmt_trigger
AFTER INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.after_stmt_trigger;
END;
/

CREATE OR REPLACE TRIGGER for_each_row_trigger
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
  trigger_pkg.for_each_row_trigger(:new.rowid);
END;
/
With any trigger-based (or application-code-based) solution you need to add locking to prevent data corruption in a multi-user environment.
Even if your trigger worked, or was re-written to avoid the mutating-table issue, it would not prevent 2 users from simultaneously updating t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not null. Assume there are currently no rows where t1_appnt_evnt_id=123 and t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and works inasmuch as the mutating-table issue goes away, but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate, and flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved the problem for me. (I'm not posting my code block because I was sure everything was written properly and should work, but it did not until I added the flush() call.) Maybe this can help someone.