Unique constraint violated on empty table - Oracle

I recently received a case in which my client came across the ORA-00001: unique constraint violated error. This happened when a program tried to truncate two tables and then insert data into them.
From the error-log file, the truncate step was completed:
delete from INTERNET_GROUP
delete from INTERNET_ITEM
BUT right after this, the insertion into the INTERNET_GROUP table triggered the ORA-00001 error. I am wondering whether there are any database settings related to this error? I have never used Oracle and am wondering whether Oracle puts a lock on a row with a SELECT statement, in which case the row might be locked and somehow not deleted? Any help is appreciated.

Please know that there is a difference between truncate and delete. You say you truncated the table, but you mention "delete from". That is entirely different.
If you're sure you want to empty the tables, try replacing the deletes with
truncate table internet_group reuse storage;
Mind you that a commit is not necessary with the truncate statement, as this is considered a DDL (data definition language) statement and not a DML (data manipulation language) statement like updates and deletes.
Also, there is no row locking on selects. But changes are only applied and visible to other sessions in the database once they are committed.
I guess that is what happened: you deleted the records but did not execute a commit (yet) and subsequently inserted new records.
edit:
I now realize you're probably inserting multiple records....
The other option might be that the data itself causes a violation. Can you please provide the constraints on the table? There must be a primary key or unique constraint. You might want to check your dataset against that.
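A query along these lines against the standard Oracle dictionary views (the table name is taken from the question) would show which primary key or unique constraints, and which columns, could be raising ORA-00001:
-- list PK/unique constraints and their columns for the table in question
SELECT c.constraint_name, c.constraint_type, cc.column_name, cc.position
FROM user_constraints c
JOIN user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE c.table_name = 'INTERNET_GROUP'
AND c.constraint_type IN ('P', 'U')
ORDER BY c.constraint_name, cc.position;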

Related

ORA-02291: parent key not found when inserting multiple rows

I am having a problem executing a stored procedure that does multiple inserts.
I am copying 30 tables from one instance of a server to another over a DBLINK:
INSERT INTO table@dblink (column1)
SELECT column1
FROM table;
But it results in:
ORA-02291: integrity constraint (string.string) violated - parent key not found
There is only one commit at the end of the procedure.
The 4th table that I'm inserting has an FK to the first one, and it is not recognizing the inserts into the first one (I have tried with deferred constraints and got the same problem: ORA-02291).
The problem here is that you are modifying data (DML) through a db-link. This might be ill-managed by Oracle and cause unexpected behavior. You should do it the other way around: instead of pushing data, pull data through this db-link and do the inserts locally. Of course, you may not technically be able to do what you want on the destination database...
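Pulling instead of pushing would look roughly like this, run on the destination database (the link name source_dblink and the table names are placeholders, not from the question):
-- run on the destination side, reading from the source over the link
INSERT INTO local_table (column1)
SELECT column1
FROM remote_table@source_dblink;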
The solution you have is to disable the FK before your inserts, then re-enable the FK afterwards.
However, I am not sure this DDL is possible directly through a db-link... You may need to create a procedure that disables the FKs on the destination database, and call it via the db-link.
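As a sketch of that idea (the procedure, table, constraint and link names are all placeholders; DBMS_UTILITY.EXEC_DDL_STATEMENT is used because plain DDL cannot be issued across a db-link, and you should verify the behavior on your own setup):
-- created on the destination database
CREATE OR REPLACE PROCEDURE set_child_fk (p_enable IN VARCHAR2) AS
BEGIN
  -- run the DDL locally on the destination database
  DBMS_UTILITY.EXEC_DDL_STATEMENT(
    'ALTER TABLE child_table ' ||
    CASE WHEN p_enable = 'Y' THEN 'ENABLE' ELSE 'DISABLE' END ||
    ' CONSTRAINT child_fk');
END;
/
-- called from the source database around the inserts
BEGIN set_child_fk@dblink('N'); END;  -- disable before inserting
/
BEGIN set_child_fk@dblink('Y'); END;  -- re-enable afterwards
/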

Operations on certain tables won't finish

We have a table TRANSMISSIONS(ID, NAME) which behaves funny in the following ways:
The statement to add a foreign key in another table referencing TRANSMISSIONS.ID won't finish
The statement to add a column to TRANSMISSIONS won't finish
The statement to disable/drop a unique constraint won't finish
The statement to disable/drop a trigger won't finish
TRANSMISSIONS' primary key is ID, and there is also a unique constraint on NAME; therefore there are indexes on ID and NAME. We also have a trigger which creates values for column ID using a sequence, so that INSERT statements do not need to provide a value for ID.
Besides TRANSMISSIONS, there are two more tables behaving like this. For other tables, the above-mentioned statements work fine.
The database is used in an application with Hibernate, and due to an incorrect JPA configuration we produced high values for ID for a time. Note that we use the trigger only for "manual" INSERT statements and that Hibernate produces ID values itself, also using the sequence.
The first thought was that the problems were due to the high IDs but we have the problems also with tables that never had such high IDs.
Anyway, we suspected that the indexes might somehow be fragmented and ran ALTER INDEX TRANSMISSIONS_PK SHRINK SPACE COMPACT, which completed but had no effect.
We also wanted to run ALTER TABLE TRANSMISSIONS SHRINK SPACE COMPACT, which didn't work because we first needed to run ALTER TABLE TRANSMISSIONS ENABLE ROW MOVEMENT, which never finished.
We have another instance of the database which does not behave in such a funny way. So we think it might be that in the course of running the application the database got somehow into an inconsistent state.
Does someone have any suggestions as to what might have gone out of control/into an inconsistent state?
More hints:
There are no locks present on any objects in the database (according to the information in v$lock and v$locked_object; see the query sketch after these hints)
We tried all these statements in SQL Developer and also using SQL*Plus (the command-line tool).
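For completeness, a blocker/lock check along these lines (standard v$ and dictionary views; a sketch rather than the exact statement used) is what the hint above refers to:
-- sessions holding locks on objects, and who (if anyone) blocks them
SELECT s.sid, s.serial#, s.blocking_session, o.object_name, lo.locked_mode
FROM v$locked_object lo
JOIN v$session s ON s.sid = lo.session_id
JOIN dba_objects o ON o.object_id = lo.object_id;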

Creating a Trigger

How do I write a trigger so that nobody is able to rent a movie if their unpaid balance exceeds 50 dollars?
What you have here is a cross-row table constraint - i.e. you can't just put a single Oracle CONSTRAINT on a column, as these can only look at data within a single row at a time.
Oracle has support for only two cross-row constraint types - uniqueness (e.g. primary keys and unique constraints) and referential integrity (foreign keys).
In your case, you'll have to hand-code the constraint yourself - and with that comes the responsibility to ensure that the constraint is not violated in the presence of multiple sessions, each of which cannot see data inserted/updated by other concurrent sessions (at least, until they commit).
A simplistic approach is to add a trigger that issues a query to count how many records conflict with the new record; but this won't work because the trigger cannot see rows that have been inserted/updated by other sessions but not committed yet; so the trigger will sometimes allow members to rent 6 videos, as long as (for example) they get two cashiers to enter the data in separate terminals.
One way to get around this problem is to put some element of serialization in - e.g. the trigger would first request a lock on the member record (e.g. with a SELECT FOR UPDATE) before it's allowed to check the rentals; that way, if a 2nd session tries to insert rentals, it will wait until the first session does a commit or rollback.
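A rough sketch of that serialization idea, assuming a MEMBER table that carries an UNPAID_BALANCE column and a RENTAL table with a MEMBER_ID column (all names are illustrative, not taken from the question):
CREATE OR REPLACE TRIGGER trg_rental_balance_check
BEFORE INSERT ON rental
FOR EACH ROW
DECLARE
  l_balance member.unpaid_balance%TYPE;
BEGIN
  -- lock the member row first: a second session inserting a rental for the
  -- same member will block here until this transaction commits or rolls back
  SELECT unpaid_balance
    INTO l_balance
    FROM member
   WHERE member_id = :NEW.member_id
     FOR UPDATE;
  IF l_balance > 50 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Unpaid balance exceeds 50; rental not allowed');
  END IF;
END;
/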
Another way around this problem is to use an aggregating Materialized View, which would be based on a query that is designed to find any rows that fail the test; the expectation is that the MV will be empty, and you put a table constraint on the MV such that if a row was ever to appear in the MV, the constraint would be violated. The effect of this is that any statement that tries to insert rows that violate the constraint will cause a constraint violation when the MV is refreshed.
Writing the query for this based on your design is left as an exercise for the reader :)
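That said, the general shape of the materialized-view approach is roughly the following, using placeholder names RENTAL(member_id, amount_due) rather than your actual design; fast-refresh-on-commit MVs come with a number of restrictions, so treat this as a starting point only:
CREATE MATERIALIZED VIEW LOG ON rental
  WITH SEQUENCE, ROWID (member_id, amount_due) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW member_balance_mv
  BUILD IMMEDIATE
  REFRESH FAST ON COMMIT
  AS
  SELECT member_id,
         COUNT(*)          AS cnt_all,
         COUNT(amount_due) AS cnt_due,
         SUM(amount_due)   AS total_due
    FROM rental
   GROUP BY member_id;
-- any commit that would push a member's total over 50 now fails when the MV refreshes
ALTER TABLE member_balance_mv
  ADD CONSTRAINT member_balance_chk CHECK (total_due <= 50);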
If you want to restrict something about your table data, then you should have a look at constraints, not triggers.
Constraints ensure that certain conditions hold for your table data, as in your example.
Triggers fire when some action (i.e. INSERT, UPDATE, DELETE) takes place, and you can then do some work as a reaction to that action.

Sybase ASE remote row insert locking

I'm working on an application that accesses a Sybase ASE 15.0.2 server, where the current code accesses a remote database (CIS) to insert a row using a proxy table definition (the destination table is a DOL - DRL table; the PK column is defined as an identity and is always growing). The current code performs a select to check whether the row already exists, to avoid inserting duplicate data.
Since the remote table also has a PK defined on it, I do understand that the PK verification will be done again prior to committing the row.
I'm planning to remove the select check since it is effectively done again by the PK verification, but I'm concerned that, when receiving a file with many duplicates, the table may suffer some unnecessary contention when the data is committed.
It's not clear to me whether Sybase ASE tries to hold the last row and writes the data prior to checking for the duplicate. Also, if the table is very big, I'm concerned about the time it will spend scanning the whole index to find duplicates.
I've found some documentation for SQL Anywhere, but not ASE, at the following link:
http://dcx.sybase.com/1200/en/dbusage/insert-how-transact.html
The best I could find is the following explanation:
https://groups.google.com/forum/?fromgroups#!topic/comp.databases.sybase/tHnOqptD7X8
But it doesn't explain in detail how the row is locked (and whether there is any kind of optimization to write it ahead of, or at the same time as, the PK check), and also whether it will waste a full PK lookup if I am inserting a row whose PK is definitely greater than the last committed row.
Thanks
Alex
Unlike SQL Anywhere, there is no option in ASE to set wait_for_commit. The primary key constraint is checked during the insert, not at commit time. As I understand it from your post, the approach for a mass insert from a file that may contain duplicates is to load the file into a temp table, check for duplicates, remove them, and then insert the unique rows. Mass inserts are a lot faster, though ASE still checks for primary key violations; however, there is no rollback cost. An insert statement is always all or nothing: even if one row is a duplicate, the entire insert statement will fail. Checking before the insert is the more error-free approach, as opposed to relying on the constraint for verification, because the constraint will fail and the rollback will again be costly.
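A rough T-SQL sketch of that load/dedupe/insert idea (the #stage, target_table, pk_col and payload names are placeholders, not taken from the question):
/* staging table for the incoming file */
CREATE TABLE #stage (pk_col INT NOT NULL, payload VARCHAR(100) NULL)
/* ... bulk-load the file into #stage here, e.g. with bcp ... */
/* keep one row per key and skip keys that already exist in the target */
INSERT INTO target_table (pk_col, payload)
SELECT s.pk_col, MIN(s.payload)
FROM #stage s
WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.pk_col = s.pk_col)
GROUP BY s.pk_col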
Thanks Mike
The link does have a very quick explanation of the insert from the CIS perspective. It's a variable to keep an eye on, given that CIS may become a significant time consumer if it performs data and syntax checking that will be done again when CIS forwards the insert statement to the target server. I was afraid that CIS could have some influence, beyond the network traffic/time, on the locking/PK checking.
Raju
I do agree that avoiding the PK duplication by checking whether the row already exists with a select, done in a batch, is the way to go, but I'm currently looking for a stop-gap solution, and that may be to perform the insert command in batches of about 50 rows and leave the duplicate-key check to the PK.
Hopefully the PK check will then be done over a join of the 50 newly inserted rows, and thus avoid traversing the index for each single row...
I'll try to test this and comment back.
Alex

How is the TRUNCATE command in Oracle able to retrieve the structure of a table after dropping it?

The SQL command TRUNCATE in Oracle is faster than DELETE FROM table; in that the TRUNCATE command first drops the specified table in its entirety and then creates a new table with the same structure (clarification may be required in case I am wrong). Since TRUNCATE is part of DDL, it implicitly issues a COMMIT before being executed and after the completion of execution. If that is the case, then the table dropped by the TRUNCATE command is lost permanently, along with its entire structure in the data dictionary. In such a scenario, how is the TRUNCATE command able to first drop the table and then recreate it with the same structure?
(Note that I work for Sybase in SQL Anywhere engineering and my answer comes from my knowledge of how truncate is implemented there, but I imagine it's similar in Oracle as well.)
I don't believe the table is actually dropped and re-created; the contents are simply thrown away. This is much faster than delete from <table> because no triggers need to be executed, and rather than deleting a row at a time (both from the table and the indexes), the server can simply throw away all pages that contain rows for that table and any indexes.
I thought a truncate (amongst other things) simply reset the high water mark.
see: http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_10007.htm#SQLRF01707
However, in
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2816964500346433991
It is clear that the data segment changes after a truncate.
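One way to see this for yourself is a quick experiment along these lines (a sketch with a throwaway table T; OBJECT_ID identifies the dictionary entry, DATA_OBJECT_ID the underlying segment):
CREATE TABLE t (x NUMBER);
INSERT INTO t VALUES (1);
COMMIT;
-- note both ids
SELECT object_id, data_object_id FROM user_objects WHERE object_name = 'T';
TRUNCATE TABLE t;
-- OBJECT_ID is unchanged (the table definition survives), but DATA_OBJECT_ID
-- typically changes, i.e. the underlying data segment was replaced
SELECT object_id, data_object_id FROM user_objects WHERE object_name = 'T';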
