I have a nullable column that already contains null values, and I want to add a default that applies only to new inserts into the table.
My alter code:
alter table customer_02
modify reference default on null
'No References';
I keep getting error ORA-02296, meaning there are pre-existing null values in the table, so I can't enable my new default. How can I apply the default only to new inserts without affecting the existing data?
First, run:
update customer_02 set reference = 'No References' where reference is null;
The entire column has to comply with the constraint; you can't have the old data remain null and then install a "not null" rule. You're saying it's OK for the reference column to contain 'No References' where it once contained null (meaning "no reference"), so there's no harm in updating those old null values to be consistent with the new rule. Then you can implement the new rule.
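Putting the two steps together (same table and literal as in the question; the text you backfill just needs to match the new default):

update customer_02
   set reference = 'No References'
 where reference is null;

-- succeeds once no NULLs remain; note that DEFAULT ON NULL also makes the column NOT NULL
alter table customer_02
  modify reference default on null 'No References';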
If you desperately want old rows to remain null while new rows cannot be, you'll need a before-insert trigger (sketched below) that throws an error if the :new.reference column is null, and leave the column as nullable. I would avoid this for two reasons: first, it uses triggers, which are usually a poor way to get things done; second, it establishes a seemingly needless inconsistency that would puzzle future developers. As mentioned before, if null has fallen out of favour as the way to indicate there are no references, the old data should be adjusted. Keeping it null might also lead to errors elsewhere if the front end expects a value - you might end up with your users experiencing crashes when they call up old records.
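For completeness, a minimal sketch of that trigger approach (the trigger name is made up; the column itself stays nullable):

create or replace trigger customer_02_ref_not_null
  before insert on customer_02
  for each row
begin
  -- reject only new rows that arrive without a reference; old rows are untouched
  if :new.reference is null then
    raise_application_error(-20001, 'reference must not be null for new rows');
  end if;
end;
/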
I'd recommend always striving for consistency in data modelling, even if it means adjusting old data.
I am using an Oracle database and have an existing table named MyTitle. I need to add a boolean-like column named IsChecked whose default value should be false. I have tried the statement below; please advise whether it is correct.
alter table MyTitle add IsChecked number(1) default 0 not null;
It looks reasonable. Do you have a problem with it? Different people and systems have different conventions for pseudo-boolean columns: some use a number with 0 and 1, some use a char(1) with 'Y' and 'N'. Be consistent with whatever convention already exists in your system.
I'd normally include a check constraint that limits the values in the column to the values you want, i.e.
check( isChecked in (0,1) )
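So the full statement might look something like this (the constraint name is just an example):

alter table MyTitle add (
  IsChecked number(1) default 0 not null
    constraint MyTitle_IsChecked_chk check (IsChecked in (0, 1))
);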
If you're building a data warehouse, though, there is a school of thought that check constraints like this are unnecessary overhead, since there is (or should be) only a very small number of paths (ideally one) by which data is loaded via the ETL process, so you merely need to ensure that the ETL process isn't inserting invalid values.
We are using Postgres for our production database; technically it's an Amazon AWS Aurora database running engine version 10.11. It doesn't seem to be under any unreasonable load (100-150 concurrent connections, CPU always under 10%, about 50% of the memory used, spikes to 300 write / 1,500 read IOPS).
We like to ensure really good data consistency, so we make extensive use of foreign keys, triggers to validate data as it's being inserted/updated and also lots of unique constraints.
Most of the writes originate from simple REST API requests, which result in very standard insert and update queries. However, in some cases we also use triggers and functions to handle more complicated logic. For example, an update to one table will result in some fairly complicated cascading updates to other tables.
All queries are always wrapped in transactions, and for the most part we do not make use of explicit locking.
So what's wrong?
We have many instances (dozens of rows, across dozens of tables) of data in the database that does not conform to our unique constraints.
Sometimes the created_at and updated_at timestamps for the offending rows are identical, other times they are very similar (within half a second). This leads me to believe that this is being caused by a race condition.
We're not certain, but are fairly confident that what these records have in common is that the write either triggered a function (the record was written by a simple insert or update and caused several other tables to be updated) or came from a function (a different record was written by a simple insert or update, which triggered a function that wrote the offending data).
From what I have been able to research, unique constraints/indexes are incredibly reliable and "just work". Is this true? If so, then why might this be happening?
Here is an example of some offending data (I've had to black out some of it, but I promise you the values in the user_id field are identical). As you will see below, there is a unique index across user_id, position, and undeleted, so the presence of this data should be impossible.
Here is an export of table structure:
-- Table Definition ----------------------------------------------
CREATE TABLE guides.preferences (
  id uuid DEFAULT gen_random_uuid() PRIMARY KEY,
  user_id uuid NOT NULL REFERENCES users.users(id),
  guide_id uuid NOT NULL REFERENCES users.users(id),
  created_at timestamp without time zone NOT NULL,
  updated_at timestamp without time zone NOT NULL,
  undeleted boolean DEFAULT true,
  deleted_at timestamp without time zone,
  position integer NOT NULL CHECK ("position" >= 0),
  completed_meetings_count integer NOT NULL DEFAULT 0,
  CONSTRAINT must_concurrently_set_deleted_at_and_undeleted CHECK (undeleted IS TRUE AND deleted_at IS NULL OR undeleted IS NULL AND deleted_at IS NOT NULL),
  CONSTRAINT preferences_guide_id_user_id_undeleted_unique UNIQUE (guide_id, user_id, undeleted),
  CONSTRAINT preferences_user_id_position_undeleted_unique UNIQUE (user_id, position, undeleted) DEFERRABLE INITIALLY DEFERRED
);
COMMENT ON COLUMN guides.preferences.undeleted IS 'Set simultaneously with deleted_at to flag this as deleted or undeleted';
COMMENT ON COLUMN guides.preferences.deleted_at IS 'Set simultaneously with deleted_at to flag this as deleted or undeleted';
-- Indices -------------------------------------------------------
CREATE UNIQUE INDEX preferences_pkey ON guides.preferences(id uuid_ops);
CREATE UNIQUE INDEX preferences_user_id_position_undeleted_unique ON guides.preferences(user_id uuid_ops, position int4_ops, undeleted bool_ops);
CREATE INDEX index_preferences_on_user_id_and_guide_id ON guides.preferences(user_id uuid_ops, guide_id uuid_ops);
CREATE UNIQUE INDEX preferences_guide_id_user_id_undeleted_unique ON guides.preferences(guide_id uuid_ops, user_id uuid_ops, undeleted bool_ops);
We're really stumped by this, and hope that someone might be able to help us. Thank you!
I found the reason! We have been building a lot of new functionality over the last few months, and have been running lots of migrations to change the schema and update data. Because of all the triggers and functions in our database, it often makes sense to temporarily disable triggers. We do this with set session_replication_role = 'replica';.
It turns out that this also disables all deferrable constraints, because deferrable constraints and foreign keys are trigger-based. As you can see from the schema in my question, the unique constraint in question is set as deferrable.
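A self-contained sketch of the gotcha (toy table, not our real schema; changing session_replication_role requires appropriate privileges):

create table demo (val int, constraint demo_val_unique unique (val) deferrable initially deferred);

set session_replication_role = 'replica';  -- what our migrations did to silence triggers

insert into demo values (1);
insert into demo values (1);  -- no error: the deferred uniqueness recheck is trigger-based and never fires

set session_replication_role = 'origin';   -- back to normal behaviour

select val, count(*) from demo group by val having count(*) > 1;  -- the duplicate is now persisted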
Mystery solved!
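For anyone doing a similar cleanup afterwards, a query along these lines surfaces the rows that collide on the deferrable constraint from my schema (rows with undeleted = NULL are soft-deleted and never conflict, so they're excluded):

select user_id, position, count(*)
from   guides.preferences
where  undeleted
group  by user_id, position
having count(*) > 1;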
I'm trying to understand the Room database library. I am stuck on a scenario where two tables are linked via a @ForeignKey constraint. What I need is: when the parent row is deleted, the foreign key column of the child rows should be set to NULL or to some default value. But when I tried to use
onDelete = SET_NULL or onDelete = SET_DEFAULT with @ForeignKey
I get the following error:
android.database.sqlite.SQLiteConstraintException: NOT NULL constraint
failed: Log.tagId
From the error I can see that the child column has been defined as NOT NULL during table creation. Can someone say how to make it NULLABLE, given that Room creates the tables for us? Alternatively, it would be fine to set a standard default value on the column; if so, how do I set a column's default value? I assume there must be some way, otherwise the constants SET_NULL and SET_DEFAULT would have no meaning or purpose.
Thanks in advance!
I fail to understand the logic of a unique constraint when it's based on two fields.
I have the following table named DESCRIPTIONS with three columns: ID_DESCRIPTION, NAME, and ID_DESCRIPTION_TYPE.
ID_DESCRIPTION is the primary key, and there is a unique constraint UK_DESCRIPTION on the pair (ID_DESCRIPTION, NAME).
If I try to run the following query:
UPDATE DESCRIPTIONS SET NAME = 'USA' WHERE ID_DESCRIPTION = 9255813
I'm getting an ORA-00001 exception, saying that unique constraint UK_DESCRIPTION is violated.
Now this would mean that the pair (9255813, 'USA') already exists, right?
However, I don't see how this is possible, since ID_DESCRIPTION is the primary key and therefore unique, AND the query
SELECT * FROM DESCRIPTIONS WHERE ID_DESCRIPTION = 9255813
returns only one row, the one I want to update.
What am I failing to understand here ?
I am going to guess that uk_description is in fact a unique key based on the single column of NAME.
"It is unfortunately not."
Okay, the other explanation is that it is a multi-column key based on a different set of columns from what you think. (NAME, ID_DESCRIPTION_TYPE) would also fit the described behaviour.
To be fair, a unique key on (NAME, ID_DESCRIPTION_TYPE) makes more sense. For example, this is the key you'd want when the table is a single reference-data look-up table (which is a horrible model but common enough). Whereas a compound key of (ID_DESCRIPTION, NAME) would do nothing but undermine the primary key.
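One way to settle it is to ask the data dictionary which columns UK_DESCRIPTION actually covers (this assumes the table is in your own schema; otherwise query ALL_CONS_COLUMNS with an OWNER filter):

select column_name, position
from   user_cons_columns
where  constraint_name = 'UK_DESCRIPTION'
order  by position;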
I'm trying to create a new row in a table. There are two constraints on the table: one is on the key field (DB_ID), the other constrains the value of the ENV field to one of several allowed values. When I do an insert, I do not include the key field as one of the fields I'm trying to insert, yet I'm getting this error:
unique constraint (N390.PK_DB_ID) violated
Here's the SQL that causes the error:
insert into cmdb_db
(narrative_name, db_name, db_type, schema, node, env, server_id, state, path)
values
('Test Database', 'DB', 'TYPE', 'SCH', '', 'SB01', 381, 'TEST', '')
The only thing I've been able to turn up is the possibility that Oracle might be trying to assign an already in-use DB_ID if rows were inserted manually. The data in this database was somehow restored/moved from a production database, but I don't have the details as to how that was done.
Any thoughts?
Presumably, since you're not providing a value for the DB_ID column, that value is being populated by a row-level before-insert trigger defined on the table. That trigger most likely selects the value from a sequence.
Since the data was moved (presumably recently) from the production database, my wager would be that when the data was copied, the sequence was not updated as well. I would guess that the sequence is generating values that are much lower than the largest DB_ID currently in the table, leading to the error.
You could confirm this suspicion by looking at the trigger to determine which sequence is being used and doing a
SELECT <<sequence name>>.nextval
FROM dual
and comparing that to
SELECT MAX(db_id)
FROM cmdb_db
If, as I suspect, the sequence is generating values that already exist in the database, you could increment the sequence until it was generating unused values or you could alter it to set the INCREMENT to something very large, get the nextval once, and set the INCREMENT back to 1.
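A sketch of that second option (the sequence name is hypothetical, and the increment should be sized to cover the gap between the sequence's current value and MAX(db_id)):

alter sequence db_id_seq increment by 1000;  -- jump past the existing keys
select db_id_seq.nextval from dual;          -- consume one value at the new offset
alter sequence db_id_seq increment by 1;     -- back to normal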
Your error looks like you are duplicating an already-existing primary key in your DB. You should modify your SQL code so that the table generates its own primary key, using something like the IDENTITY keyword.
CREATE TABLE [DB] (
[DBId] bigint NOT NULL IDENTITY,
...
CONSTRAINT [DB_PK] PRIMARY KEY ([DBId] ASC)
);
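For what it's worth, the bracketed syntax above is SQL Server flavoured; since the error in the question is an Oracle one, the rough equivalent on Oracle 12c or later would be an identity column (table and constraint names taken from the question's error message):

create table cmdb_db (
  db_id number generated always as identity,
  -- ... other columns ...
  constraint pk_db_id primary key (db_id)
);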
It looks like you are not providing a value for the primary key field DB_ID. If that is a primary key, you must provide a unique value for that column. The only way not to provide it would be to create a database trigger that, on insert, would provide a value, most likely derived from a sequence.
If this is a restoration from another database and there is a sequence on this new instance, it might be trying to reuse a value. If the old data had unique keys from 1 - 1000 and your current sequence is at 500, it would be generating values that already exist. If a sequence does exist for this table and it is trying to use it, you would need to reconcile the values in your table with the current value of the sequence.
You can use SEQUENCE_NAME.CURRVAL to see the current value of the sequence (if it exists, of course); note that CURRVAL is only defined after NEXTVAL has been called at least once in your session.