no duplicates but still constraint violation error - oracle

I have a table in which a constraint has been set on a field called LoginID. While inserting a new row I am getting an error on the constraint associated with this field (LoginID), stating the error below.
The insert command is below:
Type 1, with a sequence:
insert into TemplateModule
(LoginID, MTtype, Startdate, TypeId, TypeCase, MsgType, MsgLog, FileName, UserName, CrID, RegionaltypeId)
values
(MODS_SEQ.NEXTVAL, 3434, 2843, 2453, 2392, 435, 2390, 'pension.txt', 'rereee', 454545, 3434);
This failed with the error below.
Type 2, without a sequence, using a hardcoded value:
insert into TemplateModule
(LoginID, MTtype, Startdate, TypeId, TypeCase, MsgType, MsgLog, FileName, UserName, CrID, RegionaltypeId)
values
(3453, 3434, 2843, 2453, 2392, 435, 2390, 'pension.txt', 'rereee', 454545, 3434);
I cross-checked many times for duplicates but found nothing. What could be the root cause?
ORA-00001: unique constraint (LGN_INDEX) violated

First, look up the definition of LGN_INDEX to make absolutely certain you are looking at the right column. Is LGN_INDEX a constraint plus its supporting index, or just an index? Try rebuilding the index to make sure it isn't corrupt, and make sure you don't have any other constraints that might be interfering.
Second, run SELECT MAX(LoginID) FROM TemplateModule and compare the result to the next sequence value, to make sure your sequence isn't set lower than the maximum ID you are working with.
Third, check if you have any triggers on that table.
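Hedged sketches of those three checks against the standard data dictionary views (LGN_INDEX, MODS_SEQ and TEMPLATEMODULE are the names from the question; adjust to your schema):
-- 1. Is LGN_INDEX a constraint, an index, or both, and which column(s) does it cover?
SELECT constraint_name, constraint_type FROM user_constraints WHERE constraint_name = 'LGN_INDEX';
SELECT index_name, uniqueness FROM user_indexes WHERE index_name = 'LGN_INDEX';
SELECT column_name, column_position FROM user_ind_columns WHERE index_name = 'LGN_INDEX';
-- 2. Is the sequence behind the maximum key already in the table?
SELECT MAX(LoginID) FROM TemplateModule;
SELECT MODS_SEQ.NEXTVAL FROM dual;  -- note: this consumes one sequence value
-- 3. Are there any triggers on the table?
SELECT trigger_name, trigger_type, triggering_event FROM user_triggers WHERE table_name = 'TEMPLATEMODULE';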
If none of these things work, try re-creating the table using just the schema, cross-load the data, and try again; there might be a configuration setting on that table that is causing the issue:
CREATE TABLE MY_TEMP AS SELECT * FROM TEMPLATEMODULE;

I encountered the same problem: an INSERT statement populating an integer value (not already in the table) into the primary key column.
The problem was a BEFORE INSERT trigger tied to a sequence. The next value of the sequence was already present in the table.
The trigger fires, grabs the sequence number, and the insert fails with a primary key violation.
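The trigger in question typically looks something like this sketch (names reuse those from the question above; the point is that it silently overwrites whatever key value the INSERT supplied):
CREATE OR REPLACE TRIGGER templatemodule_bi
BEFORE INSERT ON templatemodule
FOR EACH ROW
BEGIN
  -- replaces any LoginID supplied by the INSERT with the next sequence value
  SELECT mods_seq.NEXTVAL INTO :NEW.loginid FROM dual;
END;
/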

I encountered this same issue while importing from an Excel file. I thought the file was free of duplicates until I tried removing duplicates in Excel.
To find and remove duplicates in Excel:
1. Select the data (Ctrl + A should work).
2. Click Data -> Remove Duplicates.
3. Select the fields that have the constraints in your database and click OK.
Excel should remove any duplicate records based on the fields selected in step 3. You should now be able to import the records from the file into your db.

Related

session item does not change when using lov in primary key

I am implementing an Interactive Grid to perform DML operations on a table.
The table has a composite primary key of two columns.
One primary key column is display-only and refers to the master table; for the other primary key column I want a LOV to select the value. It is a dynamic LOV whose display and return values are picked from another table.
Inserts work fine, but the session state item value is set from one row, and all operations are performed on that same row irrespective of which row is selected.
You can see a sample here:
https://apex.oracle.com/pls/apex/f?p=128616:2:1964277347439::NO:::
master table name: sample
detail table name: sample_child
primary key in sample_child: ID and NAME
Pop-up LOV is implemented on NAME
LOV values are picked from table: Sample_uncle
LOV display: ID || '-' || NAME
LOV return: ID
You can try to update the blabla column of the sample_child table to see the issue.
I am not sure how I can give you access to look at the implementation; I have already tried all the options I can think of.
This is to do with your primary keys: the detail table does not appear to have proper ones, which is why it always tries to update the first entry, and I think this is also why every row is marked when you load the table.
Primary keys also do the annoying thing of refusing to be empty; as you can see, if you insert a new row, the middle column (which is a PK) is filled with 't1001'.
Since you are dealing with simple tables (and not a whole bunch of joined tables), I always consider it best to use ROWID as the PK. So set ROWID as the PK for the master table, and ROWID for the detail table. Then set your master table as the detail table's master, click on the first column in the detail table, and set the master column for it. I also personally always hide the linked column.
I would advise you to use ROWID whenever possible, as it is just so much easier to work with. It does mean you might need to set up a validation to prevent someone adding duplicate values for your actual PK; since the real PK lives in the underlying table they cannot enter duplicates anyway, but with a validation the error message will be much prettier, whereas if the column is set as a PK in APEX, APEX will prevent duplicates by default. A sketch of such a validation follows.
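A hedged sketch of that validation as a PL/SQL Function Body (Returning Boolean) on the grid (table and column names come from the question; :ID and :NAME assume the grid columns are bound under those names):
DECLARE
  l_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO l_count
    FROM sample_child
   WHERE id = :ID
     AND name = :NAME;
  RETURN l_count = 0;  -- valid only if the key pair is not already in the table
END;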
I hope this helps

Make an Oracle foreign key constraint referencing USER_SEQUENCES(SEQUENCE_NAME)?

I want to create a table with a column that references the name of a sequence I've also created. Ideally, I'd like to have a foreign key constraint that enforces this. I've tried
create table testtable (
  sequence_name varchar2(128),
  constraint testtableconstr
    foreign key (sequence_name)
    references user_sequences (sequence_name)
    on delete set null
);
but I'm getting a SQL Error: ORA-01031: insufficient privileges. I suspect either this just isn't possible, or I need to add something like on update cascade. What, if anything, can I do to enforce this constraint when I insert rows into this table?
I assume you're trying to build some sort of deployment management system to keep track of your schema objects including sequences.
To do what you ask, you might explore one of the following options:
1. Run a report after each deployment that compares the values in your table vs. the data dictionary view, and lists any discrepancies.
2. Create a DDL trigger which does the insert automatically whenever a sequence is created (sketched after this list).
3. Add a trigger to the table which does a query on the sequences view and raises an exception if the name is not found.
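A hedged sketch of option 2, assuming a hypothetical tracking table called seq_registry; ora_dict_obj_type and ora_dict_obj_name are the standard DDL event attribute functions:
CREATE OR REPLACE TRIGGER trg_track_sequences
AFTER CREATE ON SCHEMA
BEGIN
  IF ora_dict_obj_type = 'SEQUENCE' THEN
    -- seq_registry is a hypothetical table: (sequence_name VARCHAR2(128), created_at DATE)
    INSERT INTO seq_registry (sequence_name, created_at)
    VALUES (ora_dict_obj_name, SYSDATE);
  END IF;
END;
/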
I'm somewhat confused about what you are trying to achieve here: a sequence (effectively) only has a single value, the next number to be allocated, not all the values that have previously been allocated.
If you simply want to ensure that an attribute in the relation is populated from the sequence, then a trigger would be the right approach.

Oracle - Delete One Row in Dimension Table is Slow

I have a data mart with 5 dimension tables and a fact table.
I'm trying to clean a dimension table that has only a few rows (4,000), but the fact table has millions of rows (25 GB, with indexes and partitions).
When I try to delete a row from the dimension table, the process becomes very slow. It is just as slow even when the row being deleted has no related row in the fact table (cascade delete).
Is there any way to optimize this? Thanks in advance.
Presumably, there is a cascading delete of some sort between the dimension table and the fact table.
Adding an index on the key column in the fact table may be sufficient. Then Oracle can immediately tell if/where any given value is.
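For example, a minimal sketch (the index name is illustrative; fact_table and dim5_fk_col reuse the names from the second answer below):
CREATE INDEX fact_dim5_fk_ix ON fact_table (dim5_fk_col);
Without such an index, Oracle must scan the whole fact table for every dimension row you delete just to check whether any child rows exist.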
If that doesn't work, drop the foreign key constraint altogether. Delete the unused values and add the constraint back in.
You could try these strategies as well:
Create another copy of the fact table, but without the foreign key column of the dimension to be cleaned:
create table fact_table_new as
select dim1_k, dim2_k, dim3_k, dim4_k,  -- dim5_k deliberately left out
       fact_1, fact_2                   -- ...and the remaining fact columns
from fact_table;
or
update fact_table
set dim5_fk_col = null
where dim5_fk_col in (select k_col from dim5_table);

Unique constraint violation during insert: why? (Oracle)

I'm trying to create a new row in a table. There are two constraints on the table: one is on the key field (DB_ID), the other constrains the field ENV to be one of several allowed values. When I do an insert, I do not include the key field as one of the fields I'm trying to insert, yet I'm getting this error:
unique constraint (N390.PK_DB_ID) violated
Here's the SQL that causes the error:
insert into cmdb_db
(narrative_name, db_name, db_type, schema, node, env, server_id, state, path)
values
('Test Database', 'DB', 'TYPE', 'SCH', '', 'SB01', 381, 'TEST', '')
The only thing I've been able to turn up is the possibility that Oracle might be trying to assign an already in-use DB_ID if rows were inserted manually. The data in this database was somehow restored/moved from a production database, but I don't have the details as to how that was done.
Any thoughts?
Presumably, since you're not providing a value for the DB_ID column, that value is being populated by a row-level before insert trigger defined on the table. That trigger, presumably, is selecting the value from a sequence.
Since the data was moved (presumably recently) from the production database, my wager would be that when the data was copied, the sequence was not modified as well. I would guess that the sequence is generating values that are much lower than the largest DB_ID that is currently in the table leading to the error.
You could confirm this suspicion by looking at the trigger to determine which sequence is being used and doing a
SELECT <<sequence name>>.nextval
FROM dual
and comparing that to
SELECT MAX(db_id)
FROM cmdb_db
If, as I suspect, the sequence is generating values that already exist in the database, you could increment the sequence until it was generating unused values or you could alter it to set the INCREMENT to something very large, get the nextval once, and set the INCREMENT back to 1.
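A hedged sketch of that second approach (the sequence name is illustrative; size the increment to at least the gap between the sequence and MAX(db_id)):
ALTER SEQUENCE db_id_seq INCREMENT BY 10000;  -- at least the size of the gap
SELECT db_id_seq.NEXTVAL FROM dual;           -- consume one value to apply the jump
ALTER SEQUENCE db_id_seq INCREMENT BY 1;      -- restore the normal increment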
Your error looks like you are duplicating an already existing primary key in your DB. You should modify your SQL code so the table generates its own primary key, for example with an identity column (available since Oracle 12c):
CREATE TABLE db (
  db_id NUMBER GENERATED BY DEFAULT AS IDENTITY,
  ...
  CONSTRAINT db_pk PRIMARY KEY (db_id)
);
It looks like you are not providing a value for the primary key field DB_ID. If that is a primary key, you must provide a unique value for that column. The only way not to provide it would be to create a database trigger that, on insert, would provide a value, most likely derived from a sequence.
If this is a restoration from another database and there is a sequence on this new instance, it might be trying to reuse a value. If the old data had unique keys from 1 - 1000 and your current sequence is at 500, it would be generating values that already exist. If a sequence does exist for this table and it is trying to use it, you would need to reconcile the values in your table with the current value of the sequence.
You can use SEQUENCE_NAME.CURRVAL to see the current value of the sequence (if it exists, of course; note that CURRVAL is only defined after NEXTVAL has been referenced at least once in your session).

How do I prevent the loading of duplicate rows in to an Oracle table?

I have some large tables (millions of rows). I constantly receive files containing new rows to add in to those tables - up to 50 million rows per day. Around 0.1% of the rows I receive are duplicates of rows I have already loaded (or are duplicates within the files). I would like to prevent those rows being loaded in to the table.
I currently use SQL*Loader in order to have sufficient performance to cope with my large data volume. If I take the obvious step and add a unique index on the columns which govern whether or not a row is a duplicate, SQL*Loader will start to fail the entire file which contains the duplicate row, whereas I only want to prevent the duplicate row itself from being loaded.
I know that in SQL Server and Sybase I can create a unique index with the 'Ignore Duplicates' property and that if I then use BCP the duplicate rows (as defined by that index) will simply not be loaded.
Is there some way to achieve the same effect in Oracle?
I do not want to remove the duplicate rows once they have been loaded - it's important to me that they should never be loaded in the first place.
What do you mean by "duplicate"? If you have columns which define a unique row, you should set up a unique constraint against those columns. Oracle will automatically create a unique index to enforce the constraint.
EDIT:
Yes, as commented below, you should set up a "bad" file for SQL*Loader to capture invalid rows. But I think that establishing the unique constraint is probably a good idea from a data-integrity standpoint.
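For example, a sketch with illustrative table and column names:
ALTER TABLE target_table
  ADD CONSTRAINT target_table_uk UNIQUE (key_col1, key_col2);
During a conventional-path load, each row that violates this constraint is written to the bad file and counts against the ERRORS limit (see the last answer below).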
Use the Oracle MERGE statement. Some explanations here.
You didn't say which release of Oracle you have. Have a look at the documentation for the MERGE command.
Basically it works like this:
-- merge rows from the staging table temp_emp_rec into hr.employees
MERGE INTO hr.employees e
USING temp_emp_rec t
ON (e.emp_id = t.emp_id)
WHEN MATCHED THEN
  -- existing rows can be updated
  UPDATE SET first_name = t.first_name,
             last_name  = t.last_name
WHEN NOT MATCHED THEN
  -- new rows are inserted into the table
  INSERT (emp_id, first_name, last_name)
  VALUES (t.emp_id, t.first_name, t.last_name);
I would use integrity constraints defined on the appropriate table columns.
This page from the Oracle concepts manual gives an overview; if you scroll down, you will also see which types of constraints are available.
Use the options below. With ERRORS set this high, SQL*Loader will keep going until it has rejected 9,999,999 rows rather than terminating after the default 50 errors, and DIRECT=FALSE forces a conventional-path load so the unique constraint is checked row by row:
OPTIONS (ERRORS=9999999, DIRECT=FALSE)
LOAD DATA
The duplicate records will be written to the bad file:
sqlldr user/password@db CONTROL=file.ctl LOG=file.log BAD=file.bad
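For completeness, a minimal control file sketch under the same assumptions (file, table and column names are illustrative):
OPTIONS (ERRORS=9999999, DIRECT=FALSE)
LOAD DATA
INFILE 'new_rows.csv'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ','
(key_col1, key_col2, payload_col)
Rows that violate the unique constraint are rejected into the bad file while the rest of the file loads normally.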
