Oracle 11g
Step 1. Create the table:
create table pur_order(
  po_number number primary key,
  po_date date,
  po_vendor number
);
Step 2. Load data with Enterprise Manager 11g from a user file podata.txt with content like:
1 25-JUN-2011 1001
2 26-JUN-2011 1002
3 27-JUN-2011 1003
1 27-JUN-2011 1001
2 28-JUN-2011 1002
Step 3. The loading process finished successfully.
The table content is exactly as shown above.
The problem: since I have already defined po_number as a primary key, why could duplicate values of po_number still be loaded?
Most likely this method of import uses the SQL*Loader utility with the Direct Path Load method. This disables some integrity constraints as explained in the documentation:
Integrity constraints that depend on other rows or tables, such as referential constraints, are disabled before the direct path load and must be reenabled afterwards. If REENABLE is specified, SQL*Loader can reenable them automatically at the end of the load. When the constraints are reenabled, the entire table is checked. Any rows that fail this check are reported in the specified error log. See Direct Loads, Integrity Constraints, and Triggers.
SQL*Loader tried to reenable the constraint but failed and thus:
The index will be left in an Index Unusable state if a violation of a UNIQUE constraint is detected. See Indexes Left in an Unusable State.
I'm pretty sure you will find that the primary key index is unusable (SELECT index_name, status FROM all_indexes WHERE table_name = 'PUR_ORDER').
Either load the datafile without direct path load or make sure that the constraint is successfully enabled afterwards.
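For example, a rough cleanup sketch (the duplicate-removal rule and the index name pur_order_pk are assumptions, not taken from the question):
-- check whether the direct path load left the PK index unusable
SELECT index_name, status FROM user_indexes WHERE table_name = 'PUR_ORDER';

-- remove the duplicates, keeping one row per po_number
DELETE FROM pur_order a
WHERE  a.rowid > (SELECT MIN(b.rowid)
                  FROM   pur_order b
                  WHERE  b.po_number = a.po_number);

-- then rebuild the unusable index (pur_order_pk is a placeholder name)
ALTER INDEX pur_order_pk REBUILD;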
Related
The below statement consumes a huge amount of time for a table containing 70 million records.
ALTER TABLE <table-name> ENABLE CONSTRAINT <constraint-name>
Does Oracle scan all rows of the table while enabling the constraint?
Even though the constraint got enabled, the process just hung for more than 5 hours.
Any ideas on how this can be optimized?
As others have said, depending on the constraint type it is possible to skip validating the existing data with ALTER TABLE ... ENABLE NOVALIDATE CONSTRAINT ..., and then check that data with an additional procedure or query.
You can find documentation about that here https://docs.oracle.com/cd/B28359_01/server.111/b28310/general005.htm#ADMIN11546
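A rough sketch of that approach (the table and constraint names are made up for illustration):
-- enable without scanning the 70 million existing rows
ALTER TABLE child_table ENABLE NOVALIDATE CONSTRAINT fk_child_parent;

-- then check the pre-existing data separately, e.g. look for orphaned rows:
SELECT c.*
FROM   child_table c
WHERE  NOT EXISTS (SELECT 1 FROM parent_table p WHERE p.id = c.parent_id);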
This problem happens ONLY with the database user that was imported by Data Pump. The problem is NOT present with the original database user.
I am getting a deadlock in Oracle 11.2.0.3, where the current SQL statements of the two transactions participating in the deadlock are as follows:
SELECT /*AApaAA*/ objectid FROM T_DS_0 WHERE objectid = :1 FOR UPDATE
insert /*AApaAA*/ into T_DS_0(OBJECTID) values (:1 )
Both bind variables are 'AApaAA', which is also the primary key. It looks like a deadlock on a single resource.
There are foreign keys (on delete cascade) pointing to that primary key and they are indexed.
The deadlock graph is as follows:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-000c000f-00000322 49 102 X 46 587 X
TX-00070011-00000da4 46 587 X 49 102 S
It is not clear to me how the deadlock could happen on a single resource. It is true that the insert does not lock the row but the constraint, which is probably a different resource, so a deadlock would theoretically be possible if the first transaction locked the constraint and then the row while the other one did so in the reverse order, but I do not see how that could happen. Theoretically, child-table locking is also possible (an insert takes SX locks on child tables), but the select for update should not touch the child tables at all.
The full tracefile from oracle is at : https://gist.github.com/milanro/06f9a76a2607a26ac9ba8c91b88639b3
Did anyone experience a similar behavior?
Additional info: this problem happens only when Data Pump is used to duplicate the db user. The original schema contains SYS indexes created during creation of the primary keys, plus further indexes which start with the primary key column. Data Pump then does not create the SYS index on the primary key column but instead uses the indexes starting with the primary key column.
It looks like when I create the following database objects:
create table tbl(objectid varchar2(30), a integer, primary key (objectid));
create index idx1 on tbl(objectid,a);
one table and two indexes are created: the SYS-generated index on (OBJECTID) and idx1 on (OBJECTID, A). The PRIMARY KEY uses the SYS index.
After the Data Pump import is performed, on the imported side only one table and one index are created; the index is idx1(OBJECTID, A), and that index is used for the PRIMARY KEY.
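One way to verify which index backs the primary key on each side is a dictionary query like this (using the example table above):
-- shows the index that enforces the PK; compare the source and imported schemas
SELECT constraint_name, index_name
FROM   user_constraints
WHERE  table_name = 'TBL'
AND    constraint_type = 'P';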
This happens in my database schema with the table SDMBase. The deadlocks happen when, in different transactions, I use a combination of INSERT INTO SDMBase ... and SELECT ... FROM SDMBase ... FOR UPDATE with the same OBJECTID. The same code is executed in those transactions, and in one transaction it can look as follows:
1.INSERT (objectid1)
2.SELECT FOR UPDATE (objectid1)
3.INSERT (objectid1)
4.SELECT FOR UPDATE (objectid1)
...
The deadlocking situation happens on lines 2 and 3. In my use case, when these transactions run, the row with objectid1 is already in the database but may not have been committed yet.
So I suppose that
step 1 should wait until objectid1 is committed and then fail, locking nothing
step 2 should lock objectid1, or wait if another transaction has already locked it
step 3 should fail immediately and lock nothing
...
Apparently step 1, even when it fails, holds a lock on the PK for some time, but only when the database has been duplicated by Data Pump.
These deadlocks are rare and not reproducible manually; I suppose the lock is not held for the whole transaction but only for a very short time.
So it could be as follows:
TX1: 1.INSERT (objectid1) (fails and does not lock)
TX1: 2.SELECT (objectid1) (locks SDMBase)
TX2: 1.INSERT (objectid1) (fails but locks PK)
TX1: 3.INSERT (objectid1) (waits on PK)
TX2: 2.SELECT (objectid1) (waits on SDMBase)
Even if I create the index in the imported schema as SDMBase(OBJECTID) and let the PRIMARY KEY use it, and even if I recreate the other index (OBJECTID, ...), it still deadlocks. So I suppose there is some problem with the PK constraint check.
The fix for this problem is to create SDMBase(OBJECTID), let the PRIMARY KEY use it, and then perform the Data Pump again. The import must be performed in two steps: the first one excludes indexes, and the second one imports only index metadata:
exclude=INDEX/INDEX,statistics
include=INDEX/INDEX CONTENT=METADATA_ONLY
This problem occurs in both 11.2.0.3 and 12.2.0.1.
I want to create a table with a column that references the name of a sequence I've also created. Ideally, I'd like to have a foreign key constraint that enforces this. I've tried
create table testtable (
  sequence_name varchar2(128),
  constraint testtableconstr
    foreign key (sequence_name)
    references user_sequences (sequence_name)
    on delete set null
);
but I'm getting a SQL Error: ORA-01031: insufficient privileges. I suspect either this just isn't possible, or I need to add something like on update cascade. What, if anything, can I do to enforce this constraint when I insert rows into this table?
I assume you're trying to build some sort of deployment management system to keep track of your schema objects including sequences.
To do what you ask, you might explore one of the following options:
Run a report after each deployment that compares the values in your table vs. the data dictionary view, and lists any discrepancies.
Create a DDL trigger which does the insert automatically whenever a sequence is created.
Add a trigger to the table which queries the sequences view and raises an exception if the sequence is not found, as in the sketch below.
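A minimal sketch of that third option, using the testtable from the question (the trigger name is made up):
CREATE OR REPLACE TRIGGER testtable_seq_check
BEFORE INSERT OR UPDATE OF sequence_name ON testtable
FOR EACH ROW
DECLARE
  l_count PLS_INTEGER;
BEGIN
  -- reject rows whose sequence_name does not exist in the data dictionary
  SELECT COUNT(*) INTO l_count
  FROM   user_sequences
  WHERE  sequence_name = :NEW.sequence_name;
  IF l_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'No such sequence: ' || :NEW.sequence_name);
  END IF;
END;
/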
I'm somewhat confused at what you are trying to achieve here - a sequence (effectively) only has a single value, the next number to be allocated, not all the values that have been previously allocated.
If you simply want to ensure that an attribute in the relation is populated from the sequence, then a trigger would be the right approach.
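A minimal sketch of that trigger approach, assuming a table t with an id column and a sequence t_seq (both invented here):
CREATE OR REPLACE TRIGGER t_id_trg
BEFORE INSERT ON t
FOR EACH ROW
BEGIN
  IF :NEW.id IS NULL THEN
    :NEW.id := t_seq.NEXTVAL;  -- direct NEXTVAL assignment works from 11g on
  END IF;
END;
/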
I am reading about direct-path INSERT in the Oracle documentation, Loading Tables.
It is written that:
During direct-path INSERT operations, the database appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the table is not reused, and referential integrity constraints are ignored. Direct-path INSERT can perform significantly better than conventional insert.
Can anyone explain to me how referential integrity constraints are ignored? According to my understanding, it will load the data into the table ignoring the referential constraint, and after the insert it will check the referential constraint.
If this is so, and I use something like this:
FORALL i IN v_temp.FIRST .. v_temp.LAST SAVE EXCEPTIONS
  INSERT /*+ APPEND_VALUES */ INTO orderdata
  VALUES (v_temp(i).id, v_temp(i).name);
COMMIT;
will this give me the correct index in case of any exceptions, and how?
Sorry to ask so many questions in one, but they are related to each other:
How is the referential constraint ignored?
What is the "free space" in the table mentioned above?
How will it give the correct index in case of any exceptions?
The first question should really be "Do I want/need to use direct path insert?", and the second should be "Did my query use direct path insert?"
If you need referential integrity checks, then you do not use direct path insert.
If you do not want the table to be exclusively locked for modifications, then do not use direct path insert.
If you remove data by deletion and only insert with this code, then do not use direct path insert.
One quick and easy check on whether direct path insert was used is, immediately before committing the insert, to issue a select of one row from the table. If it succeeds then direct path insert was not used; you will receive an error message if it was, because your change has to be committed before your session can read the table.
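A quick sketch of that check, using the orderdata table from the question (the staging source is invented):
INSERT /*+ APPEND */ INTO orderdata
SELECT * FROM orderdata_staging;

-- before committing, try to read the table back:
SELECT * FROM orderdata WHERE ROWNUM = 1;
-- if direct path was used, this raises
-- ORA-12838: cannot read/modify an object after modifying it in parallel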
Referential integrity is not ignored in that statement.
See this AskTom thread for an explanation and an example:
what it seems to neglect to say in that old documentation is that....
insert /*+ append */ will ignore the append hint and use conventional
path loading when the table has referential integrity or a trigger
Free space, as in: it doesn't reuse space freed up in the table by deletes, whereas a standard insert would.
I can't see anywhere where it says it will do a referential integrity check after the operation. I suspect you have to do that yourself.
erm what index?
Edit: the index as in the 3rd row to insert, I believe; not necessarily anything to do with the table's indexes, unless the index in the inserts happens to be the key of the table.
To check whether it is maintaining referential integrity, put a "bad" record in, e.g. an order with a customerid that doesn't exist, as in the sketch below.
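For example (the table and column names are invented):
-- assume orders.customer_id has an enabled foreign key to customers
INSERT /*+ APPEND */ INTO orders (order_id, customer_id)
SELECT 999, -1 FROM dual;  -- customer -1 does not exist
-- with the FK enabled, the hint is silently ignored (conventional path),
-- so this should raise ORA-02291: integrity constraint violated - parent
-- key not found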
Free space.
Let's say you have a table of nchar(2) values with an int primary key,
e.g.
1 AA
2 AB
3 AC
So in your index on the key
1 points to 0
2 points to 4 (unicode one char = two bytes)
3 points to 8
Now you delete the record with key 2, and you have
1 points to 0
3 points to 8
If you do a normal insert which reuses free space you get
1 points to 0
3 points to 8
4 points to 4
This direct insert stuff, however, saves time by not reusing the space, so you get
1 points to 0
3 points to 8
4 points to 12
Very simplified scenario for illustrative purposes by the way...
edit: Look to the end of this question for what caused the error and how I found out.
I have a very strange exception thrown at me from Hibernate when I run an app that does batch inserts of data into an Oracle database. The error comes from the Oracle database, ORA-00001, which
" means that an attempt has been made to
insert a record with a duplicate
(unique) key. This error will also be
generated if an existing record is
updated to generate a duplicate
(unique) key."
The error is weird because I have created the same table (exactly the same definition) on another machine where I do NOT get the same error when using it through my app. AND all the data gets inserted into the database, so nothing is actually rejected.
There has to be something different between the two setups, but the only thing I can see that is different is the banner output that I get when issuing
select * from v$version where banner like 'Oracle%';
The database that gives me trouble:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
The one that works:
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
Table definitions, input, and the app I wrote are the same for both. The table involved is basically a four-column table with a composite id (serviceid, date, value1, value2) - nothing fancy.
Any ideas on what can be wrong? I have started out clean several times, dropping both tables to start on equal grounds, but I still get the error from the database.
Some more of the output:
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (STATISTICS.PRIMARY_KEY_CONSTRAINT) violated
at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:367)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:8728)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
How I found out what caused the problem
Thanks to APC and ik_zelf I was able to pinpoint the root cause of this error. It turns out the Quartz scheduler was wrongly configured for the production database (where the error turned up).
For the job running against the non-failing Oracle server I had <cronTriggerExpression>0/5 * * * * ?</cronTriggerExpression>, which ran the batch job every five seconds. I figured that once a minute was sufficient for the other Oracle server, and set the Quartz scheduler up with * */1 * * * ?. This turns out to be wrong: instead of running every minute, it ran every second!
Each job took approximately 1.5-2 seconds, so two or more jobs were running concurrently, causing simultaneous inserts on the server. So instead of inserting 529 elements, I was getting anywhere from 1000 to 2000 inserts. Changing the cron trigger expression to match the other one, running every five seconds, fixed the problem.
To find out what was wrong I had to set a Hibernate SQL-logging property to true in hibernate.cfg.xml and disable the primary key constraint on the table.
-- To catch exceptions
-- to find the offending rows run the following query
-- SELECT * FROM my_table, exceptions WHERE my_table.rowid = exceptions.row_id;
create table exceptions(row_id rowid,
                        owner varchar2(30),
                        table_name varchar2(30),
                        constraint varchar2(30));
-- This table was set up
CREATE TABLE MY_TABLE
(
LOGDATE DATE NOT NULL,
SERVICEID VARCHAR2(255 CHAR) NOT NULL,
PROP_A NUMBER(10,0),
PROP_B NUMBER(10,0),
CONSTRAINT PK_CONSTRAINT PRIMARY KEY (LOGDATE, SERVICEID)
);
-- Removed the constraint to see what was inserted twice or more
alter table my_table
disable constraint PK_CONSTRAINT;
-- Enable this later on to find rows that offend the constraints
alter table my_table
enable constraint PK_CONSTRAINT
exceptions into exceptions;
You have a unique compound constraint. ORA-00001 means that you have two or more rows which have duplicate values in ServiceID, Date, Value1 and/or Value2. You say the input is the same for both databases. So either:
you are imagining that your program is hurling ORA-00001
you are mistaken that the input is the same in both runs.
The more likely explanation is the second one: one or more of your key columns is populated by an external source or default value (e.g. a code table for ServiceId, or SYSDATE for the date column). In your failing database this automatic population is failing to provide a unique value. There can be any number of reasons why this might be so, depending on what mechanism(s) you're using. Remember that in a unique compound key NULL entries count. That is, you can have any number of records (NULL, NULL, NULL, NULL) but only one for (42, NULL, NULL, NULL).
It is hard for us to guess what the actual problem might be, and almost as hard for you (although you do have the advantage of being the code's author, which ought to grant you some insight). What you need is some trace statements. My preferred solution would be to use Bulk DML Exception Handling but then I am a PL/SQL fan. Hibernate allows you to hook in some logging to your programs: I suggest you switch it on. Code is a heck of a lot easier to debug when it has decent instrumentation.
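A minimal sketch of bulk DML exception handling (staging_table and the row type are assumptions, not the poster's actual code):
DECLARE
  TYPE t_rows IS TABLE OF my_table%ROWTYPE;
  v_rows     t_rows;
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);  -- ORA-24381: error(s) in array DML
BEGIN
  SELECT * BULK COLLECT INTO v_rows FROM staging_table;
  FORALL i IN 1 .. v_rows.COUNT SAVE EXCEPTIONS
    INSERT INTO my_table VALUES v_rows(i);
EXCEPTION
  WHEN dml_errors THEN
    -- report each failed row's position and error code
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.put_line('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
        ' failed with ORA-' ||
        LPAD(SQL%BULK_EXCEPTIONS(j).ERROR_CODE, 5, '0'));
    END LOOP;
END;
/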
As a last resort, disable the constraint before running the batch insert. Afterwards re-enable it like this:
alter table t42
enable constraint t42_uk
exceptions into my_exceptions
/
This will fail if you have duplicate rows, but crucially the MY_EXCEPTIONS table will list all the rows which clash. That at least will give you some clue as to the source of the duplication. If you don't already have an exceptions table, you will have to run a script: $ORACLE_HOME/rdbms/admin/utlexcptn.sql (you may need a DBA to gain access to this directory).
tl;dr
insight requires information: instrument your code.
The one that has problems is an EE database and the other looks like an SE database. I expect that the first is on quicker hardware. If that is the case, and your date column is filled using SYSDATE, it could very well be that the time resolution is not enough and you get duplicate date values. If the other columns of your data are also not unique, you get ORA-00001.
It's a long shot, but at first sight I would look in this direction.
Can you use an exception table to identify the data? See Reporting Constraint Exceptions
My guess would be the service id. Whatever service_id hibernate is using for the 'fresh' insert has already been used.
Possibly the table is empty in one database but populated in another.
I'm betting though that the service_id is sequence generated and the sequence number is out of sync with the data content. So you have the same 1000 rows in the table but doing
SELECT service_id_seq.nextval FROM DUAL
in one database gives a lower number than the other. I see this a lot where the sequence has been created (e.g. out of source control) and data has been imported into the table from another database.
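A rough sketch of how to check and resync (sequence and table names follow the answer above; the trick assumes a normal increment of 1):
-- compare where the sequence is vs. what the table already holds
SELECT MAX(service_id) FROM my_table;
SELECT service_id_seq.NEXTVAL FROM dual;

-- if NEXTVAL is behind the max, jump the sequence past it:
ALTER SEQUENCE service_id_seq INCREMENT BY 1000;  -- gap bigger than the deficit
SELECT service_id_seq.NEXTVAL FROM dual;          -- consume one value to jump
ALTER SEQUENCE service_id_seq INCREMENT BY 1;     -- restore the normal increment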