I'm trying to create a new row in a table. There are two constraints on the table -- one is on the key field (DB_ID), the other restricts the ENV field to one of several values. When I do an insert, I do not include the key field as one of the fields I'm trying to insert, yet I'm getting this error:
unique constraint (N390.PK_DB_ID) violated
Here's the SQL that causes the error:
insert into cmdb_db
(narrative_name, db_name, db_type, schema, node, env, server_id, state, path)
values
('Test Database', 'DB', 'TYPE', 'SCH', '', 'SB01', 381, 'TEST', '')
The only thing I've been able to turn up is the possibility that Oracle might be trying to assign an already in-use DB_ID if rows were inserted manually. The data in this database was somehow restored/moved from a production database, but I don't have the details as to how that was done.
Any thoughts?
Presumably, since you're not providing a value for the DB_ID column, that value is being populated by a row-level before insert trigger defined on the table. That trigger, presumably, is selecting the value from a sequence.
Since the data was moved (presumably recently) from the production database, my wager would be that when the data was copied, the sequence was not modified as well. I would guess that the sequence is generating values that are much lower than the largest DB_ID currently in the table, leading to the error.
You could confirm this suspicion by looking at the trigger to determine which sequence is being used and doing a
SELECT <<sequence name>>.nextval
FROM dual
and comparing that to
SELECT MAX(db_id)
FROM cmdb_db
If, as I suspect, the sequence is generating values that already exist in the database, you could increment the sequence until it was generating unused values or you could alter it to set the INCREMENT to something very large, get the nextval once, and set the INCREMENT back to 1.
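For example, a minimal sketch of that second approach, assuming the trigger reads from a sequence named DB_ID_SEQ (a hypothetical name; check the trigger body for the real one):
ALTER SEQUENCE db_id_seq INCREMENT BY 10000; -- something larger than the gap
SELECT db_id_seq.NEXTVAL FROM dual;          -- consume one value to jump past MAX(db_id)
ALTER SEQUENCE db_id_seq INCREMENT BY 1;     -- restore normal increments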
Your error looks like you are duplicating an already existing primary key in your DB. You should modify your table so that it generates its own primary key, using something like the IDENTITY keyword (this example is SQL Server syntax):
CREATE TABLE [DB] (
[DBId] bigint NOT NULL IDENTITY,
...
CONSTRAINT [DB_PK] PRIMARY KEY ([DBId] ASC)
);
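Note that the format of the error message (N390.PK_DB_ID) looks like Oracle rather than SQL Server; if so, the closest equivalent on Oracle 12c and later is an identity column (a sketch with hypothetical names; 11g and earlier need a sequence plus a before-insert trigger instead):
CREATE TABLE cmdb_db (
  db_id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
  -- ... remaining columns ...
);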
It looks like you are not providing a value for the primary key field DB_ID. If that is a primary key, you must provide a unique value for that column. The only way not to provide it would be to create a database trigger that, on insert, would provide a value, most likely derived from a sequence.
If this is a restoration from another database and there is a sequence on this new instance, it might be trying to reuse a value. If the old data had unique keys from 1 - 1000 and your current sequence is at 500, it would be generating values that already exist. If a sequence does exist for this table and it is trying to use it, you would need to reconcile the values in your table with the current value of the sequence.
You can use SEQUENCE_NAME.CURRVAL to see the current value of the sequence, if it exists (note that CURRVAL only works after NEXTVAL has been called at least once in your session).
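If the sequence does turn out to be behind the table, one way to reconcile it (a sketch; DB_ID_SEQ is a hypothetical name) is to recreate it above the current maximum key:
-- find the target first: SELECT MAX(db_id) + 1 FROM cmdb_db;
DROP SEQUENCE db_id_seq;
CREATE SEQUENCE db_id_seq START WITH 1001; -- substitute MAX(db_id) + 1 here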
I want to create a table with a column that references the name of a sequence I've also created. Ideally, I'd like to have a foreign key constraint that enforces this. I've tried
create table testtable (
sequence_name varchar2(128),
constraint testtableconstr
foreign key (sequence_name)
references user_sequences (sequence_name)
on delete set null
);
but I'm getting a SQL Error: ORA-01031: insufficient privileges. I suspect either this just isn't possible, or I need to add something like on update cascade. What, if anything, can I do to enforce this constraint when I insert rows into this table?
I assume you're trying to build some sort of deployment management system to keep track of your schema objects including sequences.
To do what you ask, you might explore one of the following options:
Run a report after each deployment that compares the values in your table vs. the data dictionary view, and lists any discrepancies.
Create a DDL trigger which does the insert automatically whenever a sequence is created.
Add a trigger to the table which queries the sequences view and raises an exception if the name is not found, as sketched below.
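A minimal sketch of that last option, assuming the table from the question (the trigger name is hypothetical; dictionary names are stored in upper case, hence the UPPER):
CREATE OR REPLACE TRIGGER testtable_seq_check
BEFORE INSERT OR UPDATE OF sequence_name ON testtable
FOR EACH ROW
DECLARE
  l_count INTEGER;
BEGIN
  SELECT COUNT(*) INTO l_count
  FROM user_sequences
  WHERE sequence_name = UPPER(:NEW.sequence_name);
  IF l_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'No such sequence: ' || :NEW.sequence_name);
  END IF;
END;
/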
I'm somewhat confused at what you are trying to achieve here - a sequence (effectively) only has a single value, the next number to be allocated, not all the values that have been previously allocated.
If you simply want to ensure that an attribute in the relation is populated from the sequence, then a trigger would be the right approach.
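For instance, a before-insert trigger along these lines (a sketch; the table, column, and sequence names are hypothetical):
CREATE OR REPLACE TRIGGER testtable_bi
BEFORE INSERT ON testtable
FOR EACH ROW
BEGIN
  -- direct assignment works in 11g and later; older versions need SELECT ... INTO
  :NEW.id := test_seq.NEXTVAL;
END;
/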
I need to insert new rows to Cassandra, to a table that has only primary key columns, e.g.:
CREATE TABLE users (
user_id bigint,
website_id bigint,
PRIMARY KEY (user_id, website_id)
)
The obvious way to do it would be by INSERT:
INSERT INTO users(user_id, website_id) VALUES(1,2);
But I want to do it with use of Hadoop CqlOutputFormat, and CqlRecordWriter only supports UPDATE statements. That's usually not a problem, as UPDATE is in theory semantically the same as INSERT (it will create the row if the given primary key does not exist).
But here... I don't know how to construct the UPDATE statement - it seems that CQL just does not support my case, where there are no non-primary-key columns to set. See what I tried:
> update users set where user_id=3 and website_id=2 ;
Bad Request: line 1:18 no viable alternative at input 'where'
> update users set website_id=2 where user_id=3;
Bad Request: PRIMARY KEY part website_id found in SET part
> update users set website_id=2 where user_id=3 and website_id=2;
Bad Request: PRIMARY KEY part website_id found in SET part
> update users set website_id=2,user_id=1;
Bad Request: line 1:40 mismatched input ';' expecting K_WHERE
Any ideas on how to resolve it?
Many thanks.
Not sure if you can do this with update like that. But why not just create a new dummy column that you never use for anything else? Then you could do
update users set dummy=1 where user_id=3 and website_id=2;
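If the dummy column doesn't exist yet, you would add it to the schema first (a sketch; the column name and type are arbitrary):
ALTER TABLE users ADD dummy int;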
You can't update primary key values in Cassandra, as you have found. As an alternative you could delete the row and insert a new one with the correct values in it; that's a bit cleaner than keeping two rows, one of them incorrect.
I have a table A (3 columns) in production which has around 10 million records. I want to add one more column to that table with a default value of 1. Will adding a column with a default value impact production DB performance? What would be the best approach to avoid any performance impact on the DB? Your thoughts are much appreciated!
In Oracle 11g the process of adding a new column with a default value has been considerably optimized. If a newly added column is specified as NOT NULL, its default value is maintained in the data dictionary instead of being stored in every record, so it is no longer necessary to update each record with the default value. This optimization considerably reduces the amount of time the table is exclusively locked during the operation.
alter table <tab_name> add(<col_name> <data_type> default <def_val> not null)
Moreover, a column added that way will not consume space until you deliberately start to update it or insert records with non-default values for it. So the operation of adding a new column with a default value and a NOT NULL constraint completes pretty quickly.
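Applied to the question's case, that would look something like this (a sketch; the table and column names are hypothetical):
alter table table_a add (new_col number default 1 not null)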
I think it is better to create a backup table first, with this syntax:
create table BackUpTable as SELECT * FROM YourTable;
alter table BackUpTable add (newColumn number(5,0) default 1);
I'm trying to create a constraint on the OE.PRODUCT_INFORMATION table which is delivered with Oracle 11g R2.
The constraint should make the PRODUCT_NAME unique.
I've tried it with the following statement:
ALTER TABLE PRODUCT_INFORMATION
ADD CONSTRAINT PRINF_NAME_UNIQUE UNIQUE (PRODUCT_NAME);
The problem is that OE.PRODUCT_INFORMATION already contains product names which occur more than once.
Executing the code above throws the following error:
an alter table validating constraint failed because the table has
duplicate key values.
Is there a possibility that a newly created constraint won't be applied to existing table data?
I've already tried the DISABLE keyword, but when I enable the constraint I receive the same error message.
You can certainly create a constraint which will validate any newly inserted or updated records, but which will not be validated against old existing data, using the NOVALIDATE keyword, e.g.:
ALTER TABLE PRODUCT_INFORMATION
ADD CONSTRAINT PRINF_NAME_UNIQUE UNIQUE (PRODUCT_NAME)
NOVALIDATE;
If there is no index on the column, this command will create a non-unique index on the column.
If you are looking to enforce some sort of uniqueness for all future entries whilst keeping your current duplicates you cannot use a UNIQUE constraint.
You could use a trigger on the table to check the value to be inserted against the current table values and if it already exists, prevent the insert.
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm
or you could just remove the duplicate values and then enforce your UNIQUE constraint.
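A common way to remove the duplicates (a sketch; it keeps the row with the lowest ROWID per name, so verify that's the row you want before running it):
DELETE FROM product_information p
WHERE ROWID > (SELECT MIN(ROWID)
               FROM product_information q
               WHERE q.product_name = p.product_name);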
EDIT: After Jonearles and Jeffrey Kemp's comments, I'll add that you can actually enable a unique constraint on a table with duplicate values present using the NOVALIDATE clause, but you'd not be able to have a unique index on that constrained column.
See Tom Kyte's explanation here.
However, I would still worry about how obvious the intent was to future people who have to support the database. From a support perspective, it'd be more obvious to either remove the duplicates or use the trigger to make your intent clear.
YMMV
You can use a deferrable constraint. A deferrable unique constraint is policed by a non-unique index, which is why it can be created NOVALIDATE while duplicates are present:
ALTER TABLE PRODUCT_INFORMATION
ADD CONSTRAINT PRINF_NAME_UNIQUE UNIQUE (PRODUCT_NAME)
deferrable initially deferred NOVALIDATE;
I have a table in which a constraint has been set on a field called LoginID. While inserting a new row I am getting an error on the constraint associated with this field (LoginID), stating the error below.
The insert command is below:
Type 1, with a sequence:
insert into TemplateModule
(LoginID, MTtype, Startdate, TypeId, TypeCase, MsgType, MsgLog, FileName, UserName, CrID, RegionaltypeId)
values
(MODS_SEQ.NEXTVAL,3434,2843,2453,2392,435,2390,'pension.txt','rereee',454545,3434);
This failed with the error below.
Type 2, without a sequence (a hardcoded value):
insert into TemplateModule
(LoginID, MTtype, Startdate, TypeId, TypeCase, MsgType, MsgLog, FileName, UserName, CrID, RegionaltypeId)
values
(3453,3434,2843,2453,2392,435,2390,'pension.txt','rereee',454545,3434)
Both attempts failed with:
ORA-00001: unique constraint (LGN_INDEX) violated
I cross-checked many times for duplicates but found nothing. What could be the root cause?
First, check the definition of LGN_INDEX to make absolutely certain you are looking at the right column. Is LGN_INDEX a constraint plus index, or just an index? Try rebuilding the index to make sure it isn't corrupt, and make sure you don't have any other constraints that might be interfering.
Second, perform a SELECT MAX(LOGINID) FROM TEMPLATEMODULE and compare that to the next sequence value to make sure your sequence isn't set lower than the maximum ID you are working with.
Third, check if you have any triggers on that table.
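One way to list the triggers on the table is the USER_TRIGGERS data dictionary view:
SELECT trigger_name, trigger_type, triggering_event
FROM user_triggers
WHERE table_name = 'TEMPLATEMODULE';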
If none of these things work, try re-creating the table using just the schema, cross-load the data, and try again; there might be a configuration setting on the table that is causing the issue. For example:
CREATE TABLE MY_TEMP AS SELECT * FROM TEMPLATEMODULE;
I encountered the same problem.
An INSERT statement was populating the primary key column with an integer value not already in the table.
The problem was a before-insert trigger tied to a sequence; the sequence's NEXTVAL was already present in the table.
The trigger fired, grabbed the sequence number, and failed with a primary key violation.
I encountered this same issue while importing from an Excel file. I thought the file was free of duplicates until I tried removing duplicates in Excel.
To find and remove duplicates in Excel:
Select the data. Ctrl + a should work.
Click Data -> Remove Duplicates
Select the fields that have the constraints in your database and click OK
Excel should remove any duplicate records based on the fields selected at step 3 above.
You should now be able to import records from the file into your db.