TRIGGER in Oracle to prevent inserting duplicate data

I'm learning Oracle and I had a problem. I have my "Chat" table:
CREATE TABLE chat (
id_chat NUMBER,
id_user NUMBER,
start_chat DATE,
end_chat DATE
);
Now, I created a trigger so that it does not allow me to enter a "Chat" if an old one was already registered with the same id_user. This is my trigger:
create or replace trigger distChat
before insert on Chat
for each row
begin
    if :new.id_user = :old.id_user then
        Raise_Application_Error(-20099, 'YOU CAN''T INSERT DUPLICATED DATA.');
    end if;
end distChat;
But it still allows me to insert a Chat with the same user code. Any help or recommendation to improve my trigger, which does not work, would be appreciated.
Thank you.

Don't use a TRIGGER.
Two reasons:
Oracle provides a mechanism for preventing duplicates
Triggers are expensive, and another database object to maintain
Do this
ALTER table CHAT ADD CONSTRAINT xpk_chat PRIMARY KEY ( ID_CHAT );
I don't know your data model, but I think you want ID_CHAT to distinguish a chat. If you do this for ID_USER, then a user couldn't ever have more than one chat...and who would want to use that system? If I'm wrong, just change the column referenced in the ALTER command above.
Now your table will have a constraint that prevents duplicate values in the ID_CHAT column. This is called a PRIMARY KEY.
Additionally, you will have an INDEX, so querying your CHATs by their ID value could be quicker.
P.S. Your trigger isn't doing what you want it to do. If you were to do it with a trigger, you would need to raise the exception if :new.id_user in (select distinct id_user from chat)...so basically, if the ID being INSERTed was already found in the table, there would be an exception. The beauty of the PK constraint is that the database does this FOR YOU.
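If one chat per user really is the rule you want, the same mechanism works with a UNIQUE constraint instead of (or in addition to) the primary key. A minimal sketch, with a made-up constraint name and sample values:
-- Only if a user really may have at most one chat
ALTER TABLE chat ADD CONSTRAINT uq_chat_user UNIQUE (id_user);

-- The first insert succeeds; the second now fails with ORA-00001 (unique constraint violated)
INSERT INTO chat (id_chat, id_user, start_chat) VALUES (1, 100, SYSDATE);
INSERT INTO chat (id_chat, id_user, start_chat) VALUES (2, 100, SYSDATE);
No trigger code is needed, and the check also holds under concurrent sessions, which a hand-written trigger would struggle with.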

Related

ORA-22816 while updating Joined View with Instead of trigger

I have read a lot about it but didn't find any help on that.
My Situation:
I have two database tables which belong together. I want to query these tables with EntityFramework. Because Table B contains the discriminator for EntityFramework (for choosing the correct class for Table A), I've created a View which joins Table A and Table B.
This join is quite simple. But I also want to store data through that View. The issue is that EntityFramework also wants to store the discriminator, which isn't possible because it would mean inserting/updating two tables.
So I've tried to create an "Instead of" trigger to just update/insert Table A (because Table B doesn't matter and will never be updated).
When I created the trigger, everything was fine. If I insert something with a plain SQL statement, everything is fine. But if I insert directly into the View (using Oracle SQL Developer), it throws the exception below:
ORA-22816 (unsupported feature with RETURNING clause).
If I do the same with EntityFramework I get the same error. Can someone help me?
Below my Code:
Table A and Table B:
CREATE Table "TableA"
(
"ID" Number NOT NULL,
"OTHER_VALUESA" varchar2(255),
"TableB_ID" number not null,
CONSTRAINT PK_TableA PRIMARY KEY (ID)
);
CREATE Table "TableB"
(
"ID" Number NOT NULL,
"NAME" varchar2(255),
"DISCRIMINATOR" varchar2(255),
CONSTRAINT PK_TableB PRIMARY KEY (ID)
);
The Joined View:
Create or Replace View "JoinTableAandB"
(
"ID",
"OTHER_VALUESA",
"TableB_ID",
"DISCRIMINATOR"
) AS
select tableA.ID, tableA.OTHER_VALUESA, tableA.TableB_ID, tableB.DISCRIMINATOR
from TABLEA tableA
inner join TABLEB tableB on tableA.TableB_ID = tableB.ID;
And finally the Trigger:
create or replace TRIGGER "JoinTableAandB_TRG"
INSTEAD OF INSERT ON "JoinTableAandB"
FOR EACH ROW
BEGIN
insert into TABLEA(OTHER_VALUESA, TABLEB_ID)
values (:NEW.OTHER_VALUESA, :NEW.TABLEB_ID);
END;
I've also tried (to verify whether the insert itself is the problem) putting just NULL; into the trigger body instead of the INSERT, but I got the same error message.
Does anybody know how to solve this? Or does anybody have a good alternative (better Idea)?
Thanks!
Note: I've also defined a sequence for TableA's ID so that it will be generated automatically.
// Edit:
I found a possible Solution for MS SQL:
https://stackoverflow.com/a/26897952/3598980
But I don't know how to translate this to Oracle... How can I return something from a trigger?
Regarding the note that a sequence generates TableA's ID automatically: in EF, store-generated keys in Oracle are incompatible with INSTEAD OF triggers. EF uses a RETURNING clause to read back the store-generated keys, which doesn't work with INSTEAD OF triggers.
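For illustration, the difference can be reproduced outside of EF in SQL*Plus or SQL Developer (values are made up; adjust identifier quoting to whatever your actual schema uses):
-- A plain insert goes through the INSTEAD OF trigger without problems
INSERT INTO "JoinTableAandB" ("OTHER_VALUESA", "TableB_ID", "DISCRIMINATOR")
VALUES ('some value', 1, 'SomeType');

-- The same insert with a RETURNING clause (which is what EF generates to
-- read back the store-generated key) fails with ORA-22816
VARIABLE new_id NUMBER
INSERT INTO "JoinTableAandB" ("OTHER_VALUESA", "TableB_ID", "DISCRIMINATOR")
VALUES ('some value', 1, 'SomeType')
RETURNING "ID" INTO :new_id;
One way around it is to insert into TableA directly instead of through the view, so that the RETURNING clause runs against a real table.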

Make an Oracle foreign key constraint referencing USER_SEQUENCES(SEQUENCE_NAME)?

I want to create a table with a column that references the name of a sequence I've also created. Ideally, I'd like to have a foreign key constraint that enforces this. I've tried
create table testtable (
sequence_name varchar2(128),
constraint testtableconstr
foreign key (sequence_name)
references user_sequences (sequence_name)
on delete set null
);
but I'm getting a SQL Error: ORA-01031: insufficient privileges. I suspect either this just isn't possible, or I need to add something like on update cascade. What, if anything, can I do to enforce this constraint when I insert rows into this table?
I assume you're trying to build some sort of deployment management system to keep track of your schema objects including sequences.
To do what you ask, you might explore one of the following options:
Run a report after each deployment that compares the values in your table vs. the data dictionary view, and lists any discrepancies.
Create a DDL trigger which does the insert automatically whenever a sequence is created (see the sketch after this list).
Add a trigger to the table which does a query on the sequences view and raises an exception if not found.
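A rough sketch of the DDL-trigger option (all object names here are made up, and a matching AFTER DROP trigger would be needed to keep the list in sync when sequences are dropped):
-- An ordinary table in your schema that mirrors the sequences you create
CREATE TABLE my_sequences (
    sequence_name VARCHAR2(128) PRIMARY KEY
);

-- Schema-level DDL trigger: record every newly created sequence
CREATE OR REPLACE TRIGGER trg_record_sequence
AFTER CREATE ON SCHEMA
BEGIN
    IF ora_dict_obj_type = 'SEQUENCE' THEN
        INSERT INTO my_sequences (sequence_name) VALUES (ora_dict_obj_name);
    END IF;
END;
/
testtable's foreign key could then reference my_sequences(sequence_name), which is an ordinary table you own, rather than the USER_SEQUENCES dictionary view.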
I'm somewhat confused at what you are trying to achieve here - a sequence (effectively) only has a single value, the next number to be allocated, not all the values that have been previously allocated.
If you simply want to ensure that an attribute in the relation is populated from the sequence, then a trigger would be the right approach.

ODBC with Oracle Trigger Key Column

I'm trying to update some existing code that is supposed to write data to a variety of Databases (SQL, Access, Oracle) via ODBC, but I'm having a few problems with Oracle and am looking for any suggestions.
I've set my Oracle database up using a Trigger (basic tutorial online, which I'd like to support).
CREATE TABLE TABLE1 (
    RECORDID NUMBER NOT NULL PRIMARY KEY,
    ID       VARCHAR(40) NULL,
    COUNT    NUMBER NULL
);

CREATE SEQUENCE TABLE1_SEQ;

CREATE OR REPLACE TRIGGER TABLE1_TRG
BEFORE INSERT ON TABLE1
FOR EACH ROW
WHEN (new.RECORDID IS NULL)
BEGIN
    SELECT TABLE1_SEQ.nextval
    INTO :new.RECORDID
    FROM dual;
END;
/
I then populate a DataTable using SELECT * FROM TABLE1. The first problem is that this DataTable doesn't know that the RecordId column is auto-generated. If I have data in my table then I can't alter it, because I get an error:
Cannot change AutoIncrement of a DataColumn with type 'Double' once it
has data.
If I continue, ignoring this, then I quickly get stuck. If I create a new DataRow and try to insert it, I can't set RecordID to DBNull.Value because it complains that the column has to be non-null (NoNullAllowedException). However, I can't generate a value myself either, because I don't know what value I should really be using, and I don't want to interfere with the trigger by using the next available value.
Any suggestions on how I should insert data without ODBC complaining?
It does not appear that your first problem is with an Oracle database. There is no such thing as an "Autoincrement" column in Oracle. Are you sure that message is coming from an Oracle database?
With Oracle, you should be able to provide a dummy value for the primary key on insert. Note, though, that your trigger only fires WHEN (new.RECORDID IS NULL), so as written it will not overwrite a real value that you supply; either drop the WHEN clause so the sequence always wins, or make sure a NULL reaches the database.
There is also nothing in your provided description that would prevent you from updating this value in Oracle (since your trigger is on insert only) unless you have foreign key references to the key.
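If changing the trigger is an option, one sketch is to drop the WHEN clause so the sequence always supplies the key and the client can pass any placeholder (the direct assignment below needs Oracle 11g or later; on older versions keep the SELECT ... INTO ... FROM dual form):
CREATE OR REPLACE TRIGGER TABLE1_TRG
BEFORE INSERT ON TABLE1
FOR EACH ROW
BEGIN
    -- Always overwrite whatever the client supplied
    :new.RECORDID := TABLE1_SEQ.nextval;
END;
/

-- The -1 placeholder is discarded; the sequence provides the real key
INSERT INTO TABLE1 (RECORDID, ID) VALUES (-1, 'some-value');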

Make column unique in two tables in our database

I have run into a bump at my current company where they have an account and a member. For some reason or another the two are stored in separate tables.
Right now both a member and an account can be registered. That's fine, except that a member and an account can have the same username. This is of course, as you all know, just wrong, especially since they use the username to log in to the same system, just with different functionality levels.
Right now we are doing a check at the application level, and we're just wondering if it's possible to get the database to enforce two columns to be unique, say like a union of the two tables.
Can't set them up as primary or foreign key at the moment but that's for future anyway. Right now looking for a quick fix. In the future I will probably merge databases and get all members added on as new rows in the account table and add a boolean for IsMember.
In general, I agree with the consensus opinion that it's better to fix the design than to kluge a fix using triggers. However, a properly implemented trigger-based solution is still probably better than your current situation.
If you're going to use triggers, the right way to do it is to:
Create a new table that will contain nothing but usernames, with a primary key enforcing uniqueness (this may, in fact, be a good candidate for an index-organized table).
Create before-insert triggers on both existing tables that add the new username to the new table. If the new username already exists, an error will be thrown, preventing the insert of both rows. Of course, the application will need to be able to handle this error gracefully (presumably it already can, for scenarios in which the new username already exists in the table it's being added to).
The wrong way to do this would be to make the trigger select from the other table, in order to verify uniqueness.
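A minimal sketch of that approach, assuming both tables have a username column (all object names below are made up):
-- 1. One table that owns every username; its primary key enforces uniqueness
CREATE TABLE all_usernames (
    username VARCHAR2(100) CONSTRAINT pk_all_usernames PRIMARY KEY
) ORGANIZATION INDEX;

-- 2. Before-insert triggers that push each new username into that table;
--    a duplicate fails with ORA-00001 and blocks the original insert
CREATE OR REPLACE TRIGGER trg_member_username
BEFORE INSERT ON member
FOR EACH ROW
BEGIN
    INSERT INTO all_usernames (username) VALUES (:new.username);
END;
/

CREATE OR REPLACE TRIGGER trg_account_username
BEFORE INSERT ON account
FOR EACH ROW
BEGIN
    INSERT INTO all_usernames (username) VALUES (:new.username);
END;
/
Existing usernames would need to be backfilled into all_usernames first, and updates or deletes of usernames would need similar handling.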
You can add a trigger that enforces your requirement.
The recommended triggers tend to be really brittle with concurrent transactions.
What you can do (AFAIK) is to create a materialized view containing the union of the column in question and put a unique constraint on that column.
Make sure you do some performance tests though.
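Very roughly, and untested here, the materialized-view idea looks like the sketch below (table and column names are assumptions, and fast refresh on commit comes with its own restrictions):
-- Materialized view logs so the view can be fast-refreshed at commit
CREATE MATERIALIZED VIEW LOG ON members WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON accounts WITH ROWID;

-- Union of both username columns, refreshed at commit time
CREATE MATERIALIZED VIEW mv_all_usernames
REFRESH FAST ON COMMIT
AS
SELECT 'M' AS src, ROWID AS rid, username FROM members
UNION ALL
SELECT 'A' AS src, ROWID AS rid, username FROM accounts;

-- A cross-table duplicate now makes the COMMIT fail
CREATE UNIQUE INDEX ux_mv_all_usernames ON mv_all_usernames (username);
Note that the violation only surfaces at COMMIT time, not at the INSERT itself, so the application has to be prepared to handle errors on commit.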
Since you use a soft-delete pattern, a trigger could be used (on each table) as a temporary measure.
By inserting a disabled record into the other table, you will get a failure if a record with that username already exists there.
Remember this will not enforce the rule on existing data; only records that are inserted will be checked.
Something like this:
-- Insert into the accounts table too
CREATE OR REPLACE TRIGGER tr_member_chk
BEFORE INSERT ON members
FOR EACH ROW
BEGIN
    INSERT INTO accounts (name, id, etc, isenabled)
    VALUES (:new.name, :new.id, :new.etc, 0);
END;
/

-- Insert into the members table too
CREATE OR REPLACE TRIGGER tr_account_chk
BEFORE INSERT ON accounts
FOR EACH ROW
BEGIN
    INSERT INTO members (name, id, etc, isenabled)
    VALUES (:new.name, :new.id, :new.etc, 0);
END;
/

Is it possible to compare other tables within a trigger?

I have a database with tables that are chained together with foreign keys, and the last one in the chain also has a foreign key to itself. I want to delete them with cascade on, except for the last one in the chain. That one should be set to null, unless its parent record has a certain value. I figured I would do that with a trigger: whenever the last table is updated, if the foreign key to itself has been set to null, check the field in the parent record, and if it is the value "default", delete the record in the last table.
However, I haven't found anything online indicating how (or whether) a trigger can compare against a parent record in another table.
Is this possible?
In general, a row-level trigger on table A cannot query table A. Doing so would generally raise a mutating table exception (ORA-04091). So a trigger is generally not the right solution.
Presumably, you have some sort of API (i.e. a stored procedure) to delete records from the parent table. That API should query this last table before issuing the DELETE against the parent table. It should take care of updating the last table in the chain as well as deleting the data from the parent table.
If you really wanted a trigger-based solution, life would get substantially more complicated. You could work around the mutating table exception by
Creating a package with a collection of primary keys from the parent table
Creating a before statement trigger that initializes this collection
Creating a row-level trigger that populates the collection with the primary keys that were modified by the SQL statement
Creating an after statement trigger that iterates over the collection and issues whatever DML is necessary (unlike row-level triggers, statement-level triggers on table A can query or modify table A).
If you're using 11g, you can simplify this a bit with a compound trigger with before statement, after row, and after statement sections. But you've still got a number of moving pieces to try to coordinate.
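If you go down that road on 11g, a compound-trigger skeleton could look roughly like this (table, column, and key names are placeholders in the spirit of the answer below: last_table with primary key pk and self-reference self_ref_fk, and parent_table with parent_fk/parent_field):
CREATE OR REPLACE TRIGGER last_table_compound_trg
FOR UPDATE ON last_table
COMPOUND TRIGGER

    -- Primary keys touched by the triggering statement
    TYPE t_key_tab IS TABLE OF last_table.pk%TYPE;
    g_keys t_key_tab := t_key_tab();

    BEFORE STATEMENT IS
    BEGIN
        g_keys := t_key_tab();   -- reset for each statement
    END BEFORE STATEMENT;

    AFTER EACH ROW IS
    BEGIN
        IF :new.self_ref_fk IS NULL THEN
            g_keys.EXTEND;
            g_keys(g_keys.COUNT) := :new.pk;
        END IF;
    END AFTER EACH ROW;

    AFTER STATEMENT IS
    BEGIN
        -- Statement-level sections may query and modify last_table
        -- without hitting ORA-04091
        IF g_keys.COUNT > 0 THEN
            FORALL i IN 1 .. g_keys.COUNT
                DELETE FROM last_table t
                 WHERE t.pk = g_keys(i)
                   AND EXISTS (SELECT 1
                                 FROM parent_table p
                                WHERE p.pk = t.parent_fk
                                  AND p.parent_field = 'default');
        END IF;
    END AFTER STATEMENT;

END last_table_compound_trg;
/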
AFAIK you won't be able to really delete the record in the last table (mutating table problem), but you could update a status field indicating the record has been logically deleted (untested):
create or replace trigger last_table_trig
before update on last_table
for each row
declare
    l_parentField varchar2(100);
begin
    if :new.self_ref_fk is null then
        select p.parent_field
          into l_parentField
          from parent_table p
         where p.pk = :new.parent_fk;

        if l_parentField = 'default' then
            :new.status := 'DELETED';
        end if;
    end if;
end;
