ORA-08103: object no longer exists - insert query fails - oracle

I'm encountering this error multiple times, but it appears to be random.
I perform an INSERT query where I attempt to insert a BLOB file into a designated table.
I do not know if there's a connection between the BLOB and the error.
Worth mentioning that the table is partitioned.
Here is the complete query:
INSERT INTO COLLECTION_BLOB_T
(OBJINST_ID, COLINF_ID, COLINF_PARTNO, BINARY_FILE_NAME, BINARY_FILE_SIZE, BINARY_FILE)
VALUES (:p1, :p2, :p3, :p4, :p5, EMPTY_BLOB());
This is the only INSERT/UPDATE into this table in the entire application.
So I doubt that any other query is locking it, and the error is not about a locked resource.
What can be the cause?
As I've mentioned, this appears to occur randomly.
Thanks in advance.

The table is partitioned, as I've mentioned; the partitions change between midnight and 3:00 AM, and in some of those instances the error occurs.
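For illustration, a nightly maintenance job along these lines (partition names are hypothetical) would explain it; each of these statements replaces or removes the partition's underlying data object, so a statement that is already executing against the old object can fail with ORA-08103:
-- hypothetical nightly partition maintenance
ALTER TABLE collection_blob_t TRUNCATE PARTITION p_current;
ALTER TABLE collection_blob_t DROP PARTITION p_old;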

Related

Does creating index in Oracle locks the table for reads?

If we specify ONLINE in the CREATE INDEX statement, the table isn't locked while the index is created. Without the ONLINE keyword it isn't possible to perform DML operations on the table. But are SELECT statements possible on the table meanwhile? After reading the description of the CREATE INDEX statement, it still isn't clear to me.
I ask about this, because I wonder if it is similar to PostgreSQL or SQL Server:
In PostgreSQL writes on the table are not possible, but one can still read the table - see the CREATE INDEX doc > CONCURRENTLY parameter.
In SQL Server writes on the table are not possible, and additionally if we create a clustered index reads are also not possible - see the CREATE INDEX doc > ONLINE parameter.
Creating an index does NOT block other users from reading the table. In general, almost no Oracle DDL commands will prevent users from reading tables.
There are some DDL statements that can cause problems for readers. For example, if you TRUNCATE a table, other users who are in the middle of reading that table may get the error ORA-08103: Object No Longer Exists. But that's a very destructive change that we would expect to cause problems. I recently found a specific type of foreign key constraint that blocked reading the table, but that was likely a rare bug. I've caused a lot of production problems while adding objects, but so far I've never seen adding an index prevent users from reading the table.
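For reference, a minimal example (table and column names are illustrative); readers are unaffected either way, but ONLINE additionally allows concurrent DML while the index is built:
-- readers can query EMPLOYEES throughout; DML is also allowed thanks to ONLINE
CREATE INDEX emp_last_name_idx ON employees (last_name) ONLINE;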

Oracle deadlock output when caused by foreign keys

We have a multi-threaded batch job ending up in deadlock. I am getting conflicting answers from our DBAs as to what actually causes the deadlock.
Caused by: java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource
The error output references the SQL for inserting into table A. Every row going into table A should be unique. Table A has foreign keys to two other tables, both of which are indexed and have primary keys made up of two columns. Many rows in table A can point to the same FK in the parent tables. Our code handles FK errors by trying to insert into the parent tables and then retrying the insert into table A.
The SQL in the trace log refers to the table A insert statement (it does not show parameter binding values). Does this definitively mean that two identical SQL statements were trying to insert into table A, in which case our prior logic is not thread-safe somewhere? Or could it really be that there are two inserts, both referencing an unsatisfied FK, and the deadlock occurs from our error handling trying to insert into the parent table? If so, wouldn't the SQL in the trace then reference the parent table SQL?
Or, perversely, does the original insert attempt put a lock on the row, and then, after handling the error, does the second attempt of the insert cause the deadlock? Any further debugging assistance?
There's not much info to work with, but my guess would be that two threads are trying to insert the same rows into one of the 'two other tables' at the same time. An idea for debugging is below.
Use a trigger on table A and on the other two tables (3 triggers in total) that writes the inserted data to logging tables in an autonomous transaction that commits. This way you can see the uncommitted inserts that were rolled back due to the deadlock (the rows that exist in the logging tables but not in the actual tables are the ones that were rolled back).
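A minimal sketch of one such trigger, assuming a logging table TABLE_A_LOG exists (all names here are hypothetical):
CREATE OR REPLACE TRIGGER table_a_log_trg
AFTER INSERT ON table_a
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- log the inserted key in an independent transaction, so the log row
  -- survives even if the triggering insert is later rolled back
  INSERT INTO table_a_log (fk1_col, fk2_col, logged_at)
  VALUES (:NEW.fk1_col, :NEW.fk2_col, SYSTIMESTAMP);
  COMMIT;
END;
/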
HTH, KR

Unique Constraint Violated on empty table

I recently received a case in which my client came across the ORA-00001: unique constraint violated error. This happened when a program tried to truncate two tables and then insert data into them.
From the error-log file, the truncate step was completed:
delete from INTERNET_GROUP
delete from INTERNET_ITEM
BUT right after this, the insertion into the INTERNET_GROUP table triggered the ORA-00001 error. I am wondering if there are any database settings related to this error? I have never used Oracle, and am wondering if Oracle puts a lock on a row with a SELECT statement, in which case the row is locked and somehow not deleted? Any help is appreciated.
Please know that there is a difference between truncate and delete. You say you truncated the table, but you mention "delete from". That is entirely different.
If you're sure you want to empty the tables, try replacing with
truncate table internet_group reuse storage;
Mind you, a commit is not necessary with the truncate statement, as it is considered a DDL (data definition language) statement and not a DML (data manipulation language) statement like updates and deletes.
Also, there is no row locking on selects. But changes are only applied and visible to other sessions in the database when they are committed.
I guess that is what happened: you deleted the records but did not execute a commit (yet), and subsequently inserted new records.
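A sketch of that suspected interleaving (table and column names are illustrative):
-- Session 1:
DELETE FROM internet_group;   -- rows deleted, but no COMMIT yet
-- Session 2 (a separate connection):
INSERT INTO internet_group (id, name) VALUES (1, 'x');
-- Session 2 blocks on session 1's uncommitted delete of the same key;
-- if session 1 rolls back, session 2 gets ORA-00001.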
edit:
I now realize you're probably inserting multiple records....
The other option might be that the data itself causes a violation. Can you please provide the constraints on the table? There must be a primary key or unique constraint. You might want to check your data set against it.

Identical Oracle db setups: exception on just one of them

edit: Look to the end of this question for what caused the error and how I found out.
I have a very strange exception thrown at me from Hibernate when I run an app that does batch inserts of data into an Oracle database. The error comes from the Oracle database, ORA-00001, which "means that an attempt has been made to insert a record with a duplicate (unique) key. This error will also be generated if an existing record is updated to generate a duplicate (unique) key."
The error is weird because I have created the same table (exactly the same definition) on another machine, where I do NOT get the same error when I use it through my app. AND all the data gets inserted into the database, so nothing is really rejected.
There has to be something different between the two setups, but the only thing I can see that is different is the banner output that I get when issuing
select * from v$version where banner like 'Oracle%';
The database that gives me trouble:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
The one that works:
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
Table definitions, input, and the app I wrote are the same for both. The table involved is basically a four-column table with a composite id (serviceid, date, value1, value2) - nothing fancy.
Any ideas on what can be wrong? I have started out clean several times, dropping both tables to start on equal grounds, but I still get the error from the database.
Some more of the output:
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (STATISTICS.PRIMARY_KEY_CONSTRAINT) violated
at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:367)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:8728)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
How I found out what caused the problem
Thanks to APC and ik_zelf I was able to pinpoint the root cause of this error. It turns out the Quartz scheduler was wrongly configured for the production database (where the error turned up).
For the job running against the non-failing Oracle server I had <cronTriggerExpression>0/5 * * * * ?</cronTriggerExpression>, which ran the batch job every five seconds. I figured that once a minute was sufficient for the other Oracle server, and set the Quartz scheduler up with * */1 * * * ?. This turned out to be wrong: instead of running every minute, it ran every second!
Each job took approximately 1.5-2 seconds, and thus two or more jobs were running concurrently, thus causing simultaneous inserts on the server. So instead of inserting 529 elements, I was getting anywhere from 1000 to 2000 inserts. Changing the crontrigger expression to the same as the other one, running every five seconds, fixed the problem.
To find out what was wrong I had to set show_sql to true in hibernate.cfg.xml and disable the primary key constraint on the table.
-- To catch exceptions, create an exceptions table;
-- to find the offending rows, run the following query:
-- SELECT * FROM my_table, exceptions WHERE my_table.rowid = exceptions.row_id;
create table exceptions(
  row_id rowid,
  owner varchar2(30),
  table_name varchar2(30),
  constraint varchar2(30)
);
-- This table was set up
CREATE TABLE MY_TABLE
(
LOGDATE DATE NOT NULL,
SERVICEID VARCHAR2(255 CHAR) NOT NULL,
PROP_A NUMBER(10,0),
PROP_B NUMBER(10,0),
CONSTRAINT PK_CONSTRAINT PRIMARY KEY (LOGDATE, SERVICEID)
);
-- Removed the constraint to see what was inserted twice or more
alter table my_table
disable constraint PK_CONSTRAINT;
-- Enable this later on to find rows that offend the constraints
alter table my_table
enable constraint PK_CONSTRAINT
exceptions into exceptions;
You have a unique compound constraint. ORA-00001 means that you have two or more rows which have duplicate values in ServiceID, Date, Value1 and/or Value2. You say the input is the same for both databases. So either:
you are imagining that your program is hurling ORA-00001
you are mistaken that the input is the same in both runs.
The more likely explanation is the second one: one or more of your key columns is populated by an external source or default value (e.g. a code table for ServiceId or SYSDATE for the date column). In your failing database this automatic population is failing to provide a unique value. There can be any number of reasons why this might be so, depending on what mechanism(s) you're using. Remember that in a unique compound key NULL entries count. That is, you can have any number of records (NULL, NULL, NULL, NULL) but only one for (42, NULL, NULL, NULL).
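A small illustration of that NULL behaviour (names hypothetical):
CREATE TABLE t_demo (a NUMBER, b NUMBER, CONSTRAINT t_demo_uk UNIQUE (a, b));
INSERT INTO t_demo VALUES (NULL, NULL);  -- ok
INSERT INTO t_demo VALUES (NULL, NULL);  -- also ok: all-NULL keys are not indexed
INSERT INTO t_demo VALUES (42, NULL);    -- ok
INSERT INTO t_demo VALUES (42, NULL);    -- ORA-00001: unique constraint violated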
It is hard for us to guess what the actual problem might be, and almost as hard for you (although you do have the advantage of being the code's author, which ought to grant you some insight). What you need are some trace statements. My preferred solution would be to use Bulk DML Exception Handling, but then I am a PL/SQL fan. Hibernate allows you to hook some logging into your programs: I suggest you switch it on. Code is a heck of a lot easier to debug when it has decent instrumentation.
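For context, a minimal sketch of bulk DML exception handling in PL/SQL, assuming the MY_TABLE definition above (the collection names are made up):
DECLARE
  TYPE t_rows IS TABLE OF my_table%ROWTYPE;
  l_rows     t_rows := t_rows();
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);
BEGIN
  -- ... populate l_rows here ...
  FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
    INSERT INTO my_table VALUES l_rows(i);
EXCEPTION
  WHEN dml_errors THEN
    -- report each failing row instead of aborting the whole batch
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX || ': '
        || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
    END LOOP;
END;
/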
As a last resort, disable the constraint before running the batch insert. Afterwards re-enable it like this:
alter table t42
enable constraint t42_uk
exceptions into my_exceptions
/
This will fail if you have duplicate rows, but crucially the MY_EXCEPTIONS table will list all the rows which clash. That at least will give you some clue as to the source of the duplication. If you don't already have an exceptions table, you will have to run a script: $ORACLE_HOME/rdbms/admin/utlexcptn.sql (you may need a DBA to gain access to that directory).
tl;dr
insight requires information: instrument your code.
The one that has problems is an EE database and the other looks like an SE database. I expect that the first is on quicker hardware. If that is the case, and your date column is filled using SYSDATE, it could very well be that the time resolution is not enough and you get duplicate date values. If the other columns of your data are also not unique, you get ORA-00001.
It's a long shot but at first sight I would look into this direction.
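To illustrate: Oracle's DATE type has one-second resolution, so with the MY_TABLE definition above, two inserts for the same service within the same second collide on the (LOGDATE, SERVICEID) key:
INSERT INTO my_table (logdate, serviceid) VALUES (SYSDATE, 'svc1');
INSERT INTO my_table (logdate, serviceid) VALUES (SYSDATE, 'svc1');
-- if both statements run within the same second, the second one raises ORA-00001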
Can you use an exception table to identify the data? See Reporting Constraint Exceptions
My guess would be the service id. Whatever service_id hibernate is using for the 'fresh' insert has already been used.
Possibly the table is empty in one database but populated in another
I'm betting though that the service_id is sequence generated and the sequence number is out of sync with the data content. So you have the same 1000 rows in the table but doing
SELECT service_id_seq.nextval FROM DUAL
in one database gives a lower number than the other. I see this a lot where the sequence has been created (e.g. out of source control) and data has been imported into the table from another database.
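A quick way to check for this, assuming the hypothetical sequence name above:
-- compare the sequence with the highest key already in the table
SELECT service_id_seq.NEXTVAL FROM dual;
SELECT MAX(service_id) FROM my_table;
-- if NEXTVAL is not above the existing maximum, the sequence will
-- hand out keys that are already in use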

Oracle IN Condition very slow

I am trying to execute the following statement on a table containing 10,000 rows but the query is executing forever.
delete from Table_A where col1 in ('A','B','C') and col2 in ('K','L','M') and col3 in ('H','R','D')
Please can anyone assist!
Thanks
A
It looks as if another session has locked one of the rows you'd like to delete.
Is somebody else working on the same table (with transactions that last more than a few seconds)? Or do you have another tool or session open where you haven't committed your changes?
Update:
Another possible problem is foreign keys that aren't properly indexed: if other tables have a foreign key to the table where you want to delete the rows, and the foreign key columns in those tables aren't indexed, then Oracle will try to lock those tables. This could be the cause. If this is the case, index those columns.
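For example, if a hypothetical CHILD_TABLE references Table_A through a column TABLE_A_ID, the fix would be:
-- an unindexed foreign key column forces Oracle to lock the whole child
-- table during deletes on the parent; an index avoids that
CREATE INDEX child_table_a_fk_idx ON child_table (table_a_id);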
Another possible reason for a database to hang is if the archive log destination is full.
Query the V$SESSION_WAIT and V$SESSION_EVENT views to see what your session is waiting for.
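For instance, something along these lines (substitute the SID of the stuck session):
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :sid;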
