ORA-00001 in UPDATE statement without duplicate - oracle

I fail to understand the logic of the unique constraint when it's based on 2 fields.
I have a table named DESCRIPTIONS with three columns: ID_DESCRIPTION, NAME, ID_DESCRIPTION_TYPE.
ID_DESCRIPTION is the primary key, and there is a unique constraint UK_DESCRIPTION on the pair (ID_DESCRIPTION, NAME).
If I try to run the following query:
UPDATE DESCRIPTIONS SET NAME = 'USA' WHERE ID_DESCRIPTION = 9255813
I get an ORA-00001 exception saying that unique constraint UK_DESCRIPTION is violated.
That would mean the pair (9255813, 'USA') already exists, right?
However, I don't see how that is possible, since ID_DESCRIPTION is the primary key and therefore unique, and the query
SELECT * FROM DESCRIPTIONS WHERE ID_DESCRIPTION = 9255813
returns only one row, the one I want to update.
What am I failing to understand here?

I am going to guess that UK_DESCRIPTION is in fact a unique key on the single column NAME.
"It is unfortunately not."
Okay, the other explanation is that it is a multi-column key based on a different set of columns from the one you think: (NAME, ID_DESCRIPTION_TYPE) would also fit the described behaviour.
To be fair, a unique key on (NAME, ID_DESCRIPTION_TYPE) makes more sense. For example, this is the key you'd want when the table is a single reference-data look-up (which is a horrible model but common enough), whereas a compound key of (ID_DESCRIPTION, NAME) would do nothing but undermine the primary key.
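One way to settle it is to ask the data dictionary which columns the constraint actually covers. Something along these lines should work (this queries USER_CONS_COLUMNS and assumes the constraint is in your own schema; otherwise use ALL_CONS_COLUMNS with an OWNER filter):
SELECT constraint_name, column_name, position
FROM user_cons_columns
WHERE constraint_name = 'UK_DESCRIPTION'
ORDER BY position;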

Related

Postgres (AWS Aurora) is not enforcing unique index/constraint

We are using Postgres for our production database; technically it's an Amazon AWS Aurora database running engine version 10.11. It doesn't seem to be under any unreasonable load (100-150 concurrent connections, CPU always under 10%, about 50% of memory used, spikes to 300 write / 1,500 read IOPS).
We like to ensure really good data consistency, so we make extensive use of foreign keys, triggers to validate data as it's being inserted/updated and also lots of unique constraints.
Most of the writes originate from simple REST API requests, which result in very standard insert and update queries. However, in some cases we also use triggers and functions to handle more complicated logic. For example, an update to one table will result in some fairly complicated cascading updates to other tables.
All queries are always wrapped in transactions, and for the most part we do not make use of explicit locking.
So what's wrong?
We have many instances (dozens of rows, across dozens of tables) of data in the database that does not conform to our unique constraints.
Sometimes the created_at and updated_at timestamps for the offending rows are identical, other times they are very similar (within half a second). This leads me to believe that this is being caused by a race condition.
We're not certain, but we're fairly confident that what these records have in common is that the write either triggered a function (the record was written by a simple insert or update and caused several other tables to be updated) or came from a function (a different record was written by a simple insert or update, which triggered a function that wrote the offending data).
From what I have been able to research, unique constraints/indexes are incredibly reliable and "just work". Is this true? If so, then why might this be happening?
Here is an example of some offending data. I've had to black out some of it, but I promise you the values in the user_id field are identical. As you will see below, there is a unique index across user_id, position, and undeleted, so the presence of this data should be impossible.
Here is an export of table structure:
-- Table Definition ----------------------------------------------
CREATE TABLE guides.preferences (
    id uuid DEFAULT gen_random_uuid() PRIMARY KEY,
    user_id uuid NOT NULL REFERENCES users.users(id),
    guide_id uuid NOT NULL REFERENCES users.users(id),
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL,
    undeleted boolean DEFAULT true,
    deleted_at timestamp without time zone,
    position integer NOT NULL CHECK ("position" >= 0),
    completed_meetings_count integer NOT NULL DEFAULT 0,
    CONSTRAINT must_concurrently_set_deleted_at_and_undeleted CHECK (undeleted IS TRUE AND deleted_at IS NULL OR undeleted IS NULL AND deleted_at IS NOT NULL),
    CONSTRAINT preferences_guide_id_user_id_undeleted_unique UNIQUE (guide_id, user_id, undeleted),
    CONSTRAINT preferences_user_id_position_undeleted_unique UNIQUE (user_id, position, undeleted) DEFERRABLE INITIALLY DEFERRED
);
COMMENT ON COLUMN guides.preferences.undeleted IS 'Set simultaneously with deleted_at to flag this as deleted or undeleted';
COMMENT ON COLUMN guides.preferences.deleted_at IS 'Set simultaneously with undeleted to flag this as deleted or undeleted';
-- Indices -------------------------------------------------------
CREATE UNIQUE INDEX preferences_pkey ON guides.preferences(id uuid_ops);
CREATE UNIQUE INDEX preferences_user_id_position_undeleted_unique ON guides.preferences(user_id uuid_ops, position int4_ops, undeleted bool_ops);
CREATE INDEX index_preferences_on_user_id_and_guide_id ON guides.preferences(user_id uuid_ops, guide_id uuid_ops);
CREATE UNIQUE INDEX preferences_guide_id_user_id_undeleted_unique ON guides.preferences(guide_id uuid_ops, user_id uuid_ops, undeleted bool_ops);
We're really stumped by this, and hope that someone might be able to help us. Thank you!
I found the reason! We have been building a lot of new functionality over the last few months and running lots of migrations to change the schema and update data. Because of all the triggers and functions in our database, it often makes sense to temporarily disable triggers. We do this with set session_replication_role = 'replica';
It turns out this also disables all deferrable constraints, because deferrable constraints and foreign keys are trigger-based. As you can see from the schema in my question, the unique constraint in question is declared DEFERRABLE.
Mystery solved!
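A minimal sketch of the failure mode, per the finding above, against the table from the question (the UUIDs are placeholders):
-- Typical migration session: silence the triggers...
set session_replication_role = 'replica';
-- ...but this also stops enforcement of the DEFERRABLE unique constraint,
-- so two rows with the same (user_id, position, undeleted) slip through:
INSERT INTO guides.preferences (user_id, guide_id, created_at, updated_at, position)
VALUES ('00000000-0000-0000-0000-000000000001', '00000000-0000-0000-0000-00000000000a', now(), now(), 0);
INSERT INTO guides.preferences (user_id, guide_id, created_at, updated_at, position)
VALUES ('00000000-0000-0000-0000-000000000001', '00000000-0000-0000-0000-00000000000b', now(), now(), 0);
set session_replication_role = 'origin';  -- back to normal enforcement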

Do I need a primary key with an Oracle identity column?

I am using Oracle 12c and I have an IDENTITY column set as GENERATED ALWAYS.
CREATE TABLE Customers
(
    id NUMBER GENERATED ALWAYS AS IDENTITY,
    customerName VARCHAR2(30) NULL,
    CONSTRAINT "CUSTOMER_ID_PK" PRIMARY KEY ("ID")
);
Since the ID comes automatically from a sequence, it will always be unique.
Do I need a PK on the ID column, and if so, will it impact performance?
Would an index produce the same result with a better performance on INSERT?
No, you don't necessarily need a primary key, but you should always give the optimiser as much information about your data as possible, including a unique constraint wherever one applies.
In the case of a surrogate key (like your ID), it's almost always appropriate to declare it as a primary key, since it's the most likely candidate for referential constraints.
You could use an ordinary index on ID, and lookup performance would be comparable depending on data volume. But there is virtually no good reason here to use a non-unique index instead of a unique one, and no good reason to avoid the constraint, which requires an index anyway.
Yes, with rare exceptions you should always have a uniqueness constraint on an ID column if it is (and should be) unique, regardless of how it is populated, whether the value is provided by your application code or by an IDENTITY column.
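To illustrate the referential-constraint point, here is a hypothetical child table (Orders is made up for this example); the primary key on Customers is what the foreign key declares itself against:
-- The PRIMARY KEY on Customers(id) is what allows this
-- referential constraint to be declared at all.
CREATE TABLE Orders
(
    id NUMBER GENERATED ALWAYS AS IDENTITY,
    customer_id NUMBER NOT NULL REFERENCES Customers(id),
    CONSTRAINT "ORDER_ID_PK" PRIMARY KEY ("ID")
);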

Oracle SQL Data Modeler missing a PRIMARY KEY on DDL script export

The diagram has over 40 tables, most of them have a primary key defined.
For some reason there is this one table, which has a primary key defined, but that's being ignored when I export the model to a DDL script.
This is the "offending" key (even though it's checked it is nowhere to be found on the generated DDL script):
Has anybody had the same problem? Any ideas on how to solve it?
[EDIT] This is where the key is defined.
And this is the DDL preview (yes, the primary key shows up there).
This is what happens if I try to generate the DDL for just that table (the primary key is still not generated).
I was finally able to identify and reproduce the problem.
It was a simple conflict of constraints.
Table MIEMBROS had a mandatory 1-to-n relationship (a foreign key) from another table on its primary key column, and vice versa (there was a foreign key on MIEMBROS against the other table's primary key).
This kind of relationship between two tables makes it impossible to add a record to either of them: the insert will fail with an error complaining about the foreign key constraint pointing to the other table.
Anyway, I realized that one of the relationships was 0-to-n, so I simply unchecked the "mandatory" checkbox on the foreign key definition and everything went fine.
So, in a nutshell: the Data Modeler "fails" silently, by not generating the primary key of one of the tables, if you define a mutual relationship (two foreign keys, one on each table against the other) on non-nullable unique columns.
Such an odd behavior, if you ask me!
"This kind of relationship between two tables makes it impossible to add a record to any of them: The insert operation will return an error complaining about the foreign key restriction pointing the other table."
Actually, if you have deferred constraints, this is not impossible. The constraints can be enforced, for example, at commit time rather than immediately at insert time.
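For example (a sketch with made-up table and constraint names), deferring both foreign keys lets you insert mutually dependent rows, with the checks postponed to commit:
-- Two tables that reference each other.
CREATE TABLE a (
    id NUMBER PRIMARY KEY,
    b_id NUMBER NOT NULL
);
CREATE TABLE b (
    id NUMBER PRIMARY KEY,
    a_id NUMBER NOT NULL
);
ALTER TABLE a ADD CONSTRAINT fk_a_b FOREIGN KEY (b_id) REFERENCES b (id) DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE b ADD CONSTRAINT fk_b_a FOREIGN KEY (a_id) REFERENCES a (id) DEFERRABLE INITIALLY DEFERRED;
-- Neither insert fails on its own; both FKs are checked at COMMIT,
-- by which point the rows satisfy each other.
INSERT INTO a (id, b_id) VALUES (1, 1);
INSERT INTO b (id, a_id) VALUES (1, 1);
COMMIT;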
From the Data Modeler menu under File, I used Export -> DDL File. The keys appeared in the DDL, then when I went back to the diagram and did DDL Preview, it showed all the missing stuff.

Can a check constraint relate to another table? Oracle

I have searched for a solution to my problem, and this question describes it perfectly.
Let's say I have one table called ProjectTimeSpan (which I haven't, it's just an example!) containing the columns StartDate and EndDate.
And I have another table called SubProjectTimeSpan, also containing columns called StartDate and EndDate, where I would like to set a check constraint that makes it impossible to set StartDate and EndDate to values outside the ProjectTimeSpan.StartDate to ProjectTimeSpan.EndDate range.
Kind of a check constraint that knows about another table's values...
Is this possible?
But I have a hard time implementing the solution in Oracle, and I got even more puzzled when other articles stated that a check constraint cannot relate to other tables.
No, it can't.
A FOREIGN KEY constraint can (and must) relate to another table, but it can only perform equality checks.
That is, you can test that a column (or set of columns) matches the columns in the other table, but not more complex conditions (such as lying inside a span).
You'll have to implement a trigger for that.
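A sketch of such a trigger, assuming SubProjectTimeSpan has a ProjectId column linking it to ProjectTimeSpan (that column is made up here; adapt it to your real keys). Bear in mind that, unlike a declarative constraint, a trigger does not guard against concurrent sessions changing the parent row under you:
CREATE OR REPLACE TRIGGER trg_subproject_span_check
    BEFORE INSERT OR UPDATE OF StartDate, EndDate ON SubProjectTimeSpan
    FOR EACH ROW
DECLARE
    v_start ProjectTimeSpan.StartDate%TYPE;
    v_end   ProjectTimeSpan.EndDate%TYPE;
BEGIN
    -- Fetch the enclosing project's span.
    SELECT StartDate, EndDate
      INTO v_start, v_end
      FROM ProjectTimeSpan
     WHERE ProjectId = :NEW.ProjectId;
    -- Reject sub-project spans that fall outside it.
    IF :NEW.StartDate < v_start OR :NEW.EndDate > v_end THEN
        RAISE_APPLICATION_ERROR(-20001,
            'Sub-project span must lie within the project span');
    END IF;
END;
/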

Unique constraint violation during insert: why? (Oracle)

I'm trying to create a new row in a table. There are two constraints on the table: one is on the key field (DB_ID), the other constrains the field ENV to be one of several values. When I do an insert, I do not include the key field as one of the fields I'm trying to insert, yet I'm getting this error:
unique constraint (N390.PK_DB_ID) violated
Here's the SQL that causes the error:
insert into cmdb_db
(narrative_name, db_name, db_type, schema, node, env, server_id, state, path)
values
('Test Database', 'DB', 'TYPE', 'SCH', '', 'SB01', 381, 'TEST', '')
The only thing I've been able to turn up is the possibility that Oracle might be trying to assign an already in-use DB_ID if rows were inserted manually. The data in this database was somehow restored/moved from a production database, but I don't have the details as to how that was done.
Any thoughts?
Presumably, since you're not providing a value for the DB_ID column, that value is being populated by a row-level before insert trigger defined on the table. That trigger, presumably, is selecting the value from a sequence.
Since the data was moved (presumably recently) from the production database, my wager would be that when the data was copied, the sequence was not modified as well. I would guess that the sequence is generating values that are much lower than the largest DB_ID that is currently in the table leading to the error.
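For reference, such a trigger usually looks something like this (a sketch; the actual trigger and sequence names on your system will differ):
CREATE OR REPLACE TRIGGER cmdb_db_bi
    BEFORE INSERT ON cmdb_db
    FOR EACH ROW
BEGIN
    -- Populate the primary key from a sequence when no value is supplied.
    -- (Sequence name is a guess; check the trigger source for the real one.)
    IF :NEW.db_id IS NULL THEN
        SELECT cmdb_db_seq.NEXTVAL INTO :NEW.db_id FROM dual;
    END IF;
END;
/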
You could confirm this suspicion by looking at the trigger to determine which sequence is being used and doing a
SELECT <<sequence name>>.nextval
FROM dual
and comparing that to
SELECT MAX(db_id)
FROM cmdb_db
If, as I suspect, the sequence is generating values that already exist in the database, you could increment the sequence until it was generating unused values or you could alter it to set the INCREMENT to something very large, get the nextval once, and set the INCREMENT back to 1.
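The latter trick looks something like this (sequence name assumed, as above; pick a jump size larger than the gap you need to clear):
-- Jump the sequence past the existing keys in one step,
-- then restore the normal increment.
ALTER SEQUENCE cmdb_db_seq INCREMENT BY 100000;
SELECT cmdb_db_seq.NEXTVAL FROM dual;
ALTER SEQUENCE cmdb_db_seq INCREMENT BY 1;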
Your error looks like you are duplicating an already existing primary key in your DB. You should modify your SQL to let the database generate the primary key itself, using something like the IDENTITY keyword (note this example is SQL Server syntax; the Oracle equivalent would be an identity column or a sequence plus trigger):
CREATE TABLE [DB] (
    [DBId] bigint NOT NULL IDENTITY,
    ...
    CONSTRAINT [DB_PK] PRIMARY KEY ([DBId] ASC)
);
It looks like you are not providing a value for the primary key field DB_ID. If that is a primary key, you must provide a unique value for that column. The only way not to provide it would be to create a database trigger that, on insert, would provide a value, most likely derived from a sequence.
If this is a restoration from another database and there is a sequence on this new instance, it might be trying to reuse a value. If the old data had unique keys from 1 to 1000 and your current sequence is at 500, it would be generating values that already exist. If a sequence does exist for this table and it is being used, you would need to reconcile the values in your table with the current value of the sequence.
You can use SEQUENCE_NAME.CURRVAL to see the current value of the sequence (if it exists, of course; note that CURRVAL is only defined after NEXTVAL has been referenced in your session).
