kdtigetrow-2 error during integration - oracle

I am facing this error, and because of it the whole system seems to be going down. After checking the logs, I suspect that one of the destination tables might be the problem.
This is the error:
MERGE INTO vacations vac
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kdtigetrow-2], [25], [40], [39],
[], [], [], [], [], [], [], []
This is the source table:
create table TEMP_VACATIONS
(
idd VARCHAR2(10),
start_date VARCHAR2(10),
end_date VARCHAR2(10),
day_count VARCHAR2(10),
vac_type VARCHAR2(10),
arrival_date DATE
)
This is the destination table:
create table VACATIONS
(
user_id NUMBER(10) not null,
start_date DATE not null,
end_date DATE not null,
days_count NUMBER(3) not null,
vacation_type INTEGER,
arrival_date VARCHAR2(20),
idd NUMBER(10)
)
alter table VACATIONS
add constraint VACATIONS$PK primary key (USER_ID, START_DATE)
using index
tablespace ARCV25
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 320K
next 1M
minextents 1
maxextents unlimited
);
and this is the script:
MERGE INTO vacations vac
USING temp_vacations tmpvac
ON (vac.user_id = TO_NUMBER(tmpvac.idd) AND vac.start_date = TO_DATE(tmpvac.start_date, 'dd.mm.yyyy') AND vac.end_date = TO_DATE(tmpvac.end_date, 'dd.mm.yyyy'))
WHEN NOT MATCHED THEN
INSERT (vac.user_id, vac.start_date, vac.end_date, vac.days_count, vac.vacation_type, vac.arrival_date)
VALUES (TO_NUMBER(tmpvac.idd), TO_DATE(tmpvac.start_date, 'dd.mm.yyyy'), TO_DATE(tmpvac.end_date, 'dd.mm.yyyy'), tmpvac.day_count, tmpvac.vac_type, TO_CHAR(tmpvac.arrival_date, 'dd.mm.yyyy'))
LOG ERRORS INTO stara.migration_err('File: STARA_EHR.SPOLO.TXT => merge operation => annual_vacations') REJECT LIMIT UNLIMITED;
COMMIT;
The oracle version is:
Oracle Database 11g Release 11.2.0.2.0
Is it possible that this error occurs during type conversion (char => date or char => number)?
How can I fix this internal error? Do I need to roll the database back to a previous backup?
Thanks in advance.

ORA-00600 is Oracle's generic code for signalling unexpected internal behaviour (i.e. bugs). The standard advice is to contact Oracle Support because, by the nature of these things, they tend to be very specific to database version, platform and a whole host of other variables. There is a distinct possibility that you need a patch to fix this problem, or perhaps just an upgrade to the latest version.
Of course, if you don't have a Support contract that advice is not very useful. Unfortunately it's hard for us to be more helpful. There should be further information in the Alert Log, and there may be a trace file too. You'll probably have to ask a DBA to help you with it.
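If you need to locate the alert log and trace files yourself, the V$DIAG_INFO view reports the diagnostic paths on 11g. A minimal sketch (standard view, nothing assumed beyond the Oracle version in the question):
-- Where the ADR keeps the alert log and trace files (Oracle 11g).
SELECT name, value
FROM v$diag_info
WHERE name IN ('Diag Alert', 'Diag Trace', 'Default Trace File');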
Otherwise you can try searching the internet. The first argument identifies the specific internal event, i.e. [kdtigetrow-2]. I did find an article on a blog, but the writer's scenario appears to be very different from yours.

We have upgraded the Oracle DB to 11.2.0.3 with the latest patch (21). Now everything is working well :-)

This sounds like index corruption. Disable the constraint, drop the index, recreate the index, and enable the constraint again.
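A minimal sketch of that sequence, assuming the primary key constraint from the question (adjust the names to your own schema). Disabling a primary key drops the index it owns, and enabling it rebuilds the index:
-- Disabling the PK drops the (possibly corrupt) index it owns.
ALTER TABLE vacations DISABLE CONSTRAINT vacations$pk;
-- Enabling it recreates the index from scratch.
ALTER TABLE vacations ENABLE CONSTRAINT vacations$pk
USING INDEX TABLESPACE arcv25;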

Related

How to create a check constraint in MariaDB for checking multiple char values?

I am working on a college project, a question-and-answer site like StackOverflow, but I have run into a problem creating tables in MariaDB.
I use the latest version of XAMPP, which by default uses MariaDB instead of MySQL.
I want to create a post table which contains a post type (p_type) such as question, answer, or comment.
I use the following code:
create table post
(
p_type char(1) check(p_type = 'q' OR p_type = 'a' OR p_type = 'c')
);
I use the InnoDB storage engine, and the MariaDB version is 10.1.30.
But when I insert another character such as s, z or x, it is stored in the database, which means the check constraint is not applied.
I also visited the MariaDB manual for check constraints, but there is no example related to the char type.
Any answer would be appreciated. Thanks in advance.
Your syntax is fine, the version is wrong. MariaDB versions before 10.2 (and all available versions of MySQL) parse the CHECK clause, but ignore it completely. MariaDB 10.2 performs the actual check:
MariaDB [test]> create table post ( p_type char(1) check(p_type = 'q' OR p_type = 'a' OR p_type = 'c') );
Query OK, 0 rows affected (0.18 sec)
MariaDB [test]> insert into post values ('s');
ERROR 4025 (23000): CONSTRAINT `p_type` failed for `test`.`post`
MariaDB [test]> insert into post values ('q');
Query OK, 1 row affected (0.04 sec)

Not able to perform UPDATE query with sysdate - ORACLE

I am trying to run the following, fairly simple, update statement in ORACLE.
UPDATE PROJECT_BUG_SNAPSHOTS
SET SNAPSHOT_DATESTAMP = sysdate,
SNAPSHOT_TYPE = P_SNAPSHOT_TYPE
WHERE PROJECT_ID = P_PROJECT_ID
AND BUG_NO = P_BUG_NO
AND BUG_STATUS = P_BUG_STATUS;
It complains of a unique constraint violation.
The PK comprises PROJECT_ID, BUG_NO, SNAPSHOT_DATESTAMP, and SNAPSHOT_TYPE.
The table structure is
PROJECT_ID NUMBER
SNAPSHOT_DATESTAMP DATE
SNAPSHOT_TYPE VARCHAR2(20 BYTE)
BUG_NO NUMBER
BUG_STATUS VARCHAR2(100 BYTE)
This is quite weird, as sysdate should be different on each run, so it should never hit the "unique constraint violation" error.
The primary key is a combination of PROJECT_ID, BUG_NO, SNAPSHOT_DATESTAMP, and SNAPSHOT_TYPE. This means you allow (and probably have!) several rows with the same project id, bug number and snapshot type, but from different dates. Your update statement will attempt to set all the snapshot dates of a given project, bug number and status to the same date (the current date), thus breaking the uniqueness and failing with a constraint violation.
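You can see which groups would collide with a quick sketch against the table from the question. Any group returning more than one row will end up with identical primary key values after the update:
-- Rows sharing PROJECT_ID, BUG_NO and BUG_STATUS would all get the same
-- SNAPSHOT_DATESTAMP (sysdate) and SNAPSHOT_TYPE, so COUNT(*) > 1 means duplicates.
SELECT project_id, bug_no, bug_status, COUNT(*)
FROM project_bug_snapshots
GROUP BY project_id, bug_no, bug_status
HAVING COUNT(*) > 1;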

Is it possible to have a "deferred check constraint" in Oracle?

I was thinking that I'd like to have a "deferred check constraint" in Postgres, but that is apparently not supported at this time (Postgres 9.3).
Then I saw that Oracle seems to broadly support "deferred" for its constraints, documented here. Is it therefore true that Oracle 10g+ supports a "deferred check constraint"?
I might have missed documentation to the contrary, so I figured I'd ask here as a double-check, trusting that people who actively use Oracle would know the answer, and so avoid trial and error and wasted hours messing around with Oracle servers.
Yes, though I'm not sure why you'd want to:
create table t42 (id number,
constraint check_id check (id > 0) initially deferred deferrable);
table T42 created.
insert into t42 (id) values (-1);
1 rows inserted.
commit;
Error report -
SQL Error: ORA-02091: transaction rolled back
ORA-02290: check constraint (STACKOVERFLOW.CHECK_ID) violated
02091. 00000 - "transaction rolled back"
*Cause: Also see error 2092. If the transaction is aborted at a remote
site then you will only see 2091; if aborted at host then you will
see 2092 and 2091.
*Action: Add rollback segment and retry the transaction.
You can update it before committing of course:
insert into t42 (id) values (-1);
1 rows inserted.
update t42 set id = 1 where id = -1;
1 rows updated.
commit;
committed.
... but I'm not sure why you would put the invalid value in the table in the first place if you planned to update it. Presumably there is some scenario where this is useful.
More on constraint deferral in the documentation.
Yes, you can define a constraint as
"DEFERRABLE" or "NOT DEFERRABLE"
and then
"INITIALLY DEFERRED" or "INITIALLY IMMEDIATE"
For example:
ALTER TABLE T
ADD CONSTRAINT ck_t CHECK (COL_1 > 0)
DEFERRABLE INITIALLY DEFERRED;
Check Oracle documentation for details...
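With a deferrable constraint you can also switch checking on and off within a transaction via SET CONSTRAINT, rather than waiting for the commit. A small sketch, reusing the ck_t constraint on table T from the example above:
SET CONSTRAINT ck_t DEFERRED;
UPDATE t SET col_1 = -1;       -- temporarily violates the check
UPDATE t SET col_1 = 1;        -- fixed before the check runs
SET CONSTRAINT ck_t IMMEDIATE; -- validates now; raises ORA-02290 on failure
COMMIT;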

Why is Oracle losing data during commit?

I have a fairly standard SQL Query as follows:
TRUNCATE TABLE TABLE_NAME;
INSERT INTO TABLE_NAME
(
UPRN,
SAO_START_NUMBER,
SAO_START_SUFFIX,
SAO_END_NUMBER,
SAO_END_SUFFIX,
SAO_TEXT,
PAO_START_NUMBER,
PAO_START_SUFFIX,
PAO_END_NUMBER,
PAO_END_SUFFIX,
PAO_TEXT,
STREET_DESCRIPTOR,
TOWN_NAME,
POSTCODE,
XY_COORD,
EASTING,
NORTHING,
ADDRESS
)
SELECT
BASIC_LAND_AND_PROPERTY_UNIT.UPRN,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_NUMBER AS SAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_SUFFIX AS SAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_NUMBER AS SAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_SUFFIX AS SAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_TEXT AS SAO_TEXT,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_NUMBER AS PAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_SUFFIX AS PAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_NUMBER AS PAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_SUFFIX AS PAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_TEXT AS PAO_TEXT,
STREET_DESCRIPTOR.STREET_DESCRIPTOR AS STREET_DESCRIPTOR,
STREET_DESCRIPTOR.TOWN_NAME AS TOWN_NAME,
LAND_AND_PROPERTY_IDENTIFIER.POSTCODE AS POSTCODE,
BASIC_LAND_AND_PROPERTY_UNIT.GEOMETRY AS XY_COORD,
BASIC_LAND_AND_PROPERTY_UNIT.X_COORDINATE AS EASTING,
BASIC_LAND_AND_PROPERTY_UNIT.Y_COORDINATE AS NORTHING,
decode(SAO_START_NUMBER,null,null,SAO_START_NUMBER||SAO_START_SUFFIX||' ')
||decode(SAO_END_NUMBER,null,null,SAO_END_NUMBER||SAO_END_SUFFIX||' ')
||decode(SAO_TEXT,null,null,SAO_TEXT||' ')
||decode(PAO_START_NUMBER,null,null,PAO_START_NUMBER||PAO_START_SUFFIX||' ')
||decode(PAO_END_NUMBER,null,null,PAO_END_NUMBER||PAO_END_SUFFIX||' ')
||decode(PAO_TEXT,null,null,'STREET RECORD',null,PAO_TEXT||' ')
||decode(STREET_DESCRIPTOR,null,null,STREET_DESCRIPTOR||' ')
||decode(POST_TOWN,null,null,POST_TOWN||' ')
||Decode(Postcode,Null,Null,Postcode) As Address
From (Land_And_Property_Identifier
Inner Join Basic_Land_And_Property_Unit
On Land_And_Property_Identifier.Uprn = Basic_Land_And_Property_Unit.Uprn)
Inner Join Street_Descriptor
On Land_And_Property_Identifier.Usrn = Street_Descriptor.Usrn
Where Land_And_Property_Identifier.Postally_Addressable='Y';
If I run this query in SQL Developer, it runs fine, with 1.8 million features inserted (select count(*) from TABLE_NAME within the session confirms this).
But when I run the commit, the data disappears! select count(*) from TABLE_NAME now returns 0 results.
We've done a number of things to try and see what's going on:
During the truncate, tablespace is freed up, and during the insert it's filled again. There is no change during the commit. This implies the data is in the database.
If I run the exact same query but with "and rownum < 100" appended to the end, the commit works. Same with 1000.
I found this question: oracle commit kills and had our DBA try the "SQL Trace". This produced a >4GB file which, when parsed with TKPROF, produced a 120-page report, but we don't know how to read it and nothing in it is obviously wrong.
Our error logs have nothing in them. And obviously no error during the commit itself.
There's a trigger/sequence which does increment by 1.8 million during the process.
I've repeated this about 4 times now, but the result is always the same.
So my question is simple - what's happening to the data during the commit? How can we find out? Thanks.
Note: This has run fine in the past, so I don't believe there's anything wrong with the SQL per se.
Edit: Issue resolved by recreating the table from scratch. Now the insert only takes 500 seconds compared to the previous 2000, and committing is instantaneous; when it was broken, the commit took 4000 seconds!
I still have no idea why it happened though.
For those asking, the Create Table syntax:
CREATE TABLE TABLE_NAME
(
ADDRESS VARCHAR2(4000),
UPRN NUMBER(12),
SAO_START_NUMBER NUMBER(4),
SAO_START_SUFFIX VARCHAR2(1),
SAO_END_NUMBER NUMBER(4),
SAO_END_SUFFIX VARCHAR2(1),
SAO_TEXT VARCHAR2(90),
PAO_START_NUMBER NUMBER(4),
PAO_START_SUFFIX VARCHAR2(1),
PAO_END_NUMBER NUMBER(4),
PAO_END_SUFFIX VARCHAR2(1),
PAO_TEXT VARCHAR2(90),
STREET_DESCRIPTOR VARCHAR2(100),
TOWN_NAME VARCHAR2(30),
POSTCODE VARCHAR2(8),
XY_COORD MDSYS.SDO_GEOMETRY,
EASTING NUMBER(7),
NORTHING NUMBER(7)
)
CREATE INDEX TABLE_NAME_ADD_IDX ON TABLE_NAME (ADDRESS);
Do you still lose the data if you wrap the transaction in an anonymous block?
My guess is that you are opening two SQL windows in SQL Developer, which means two separate sessions, i.e. running SQL code in window 1 and issuing commit; in window 2 will not commit the changes made in window 1.
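One way to verify this, as a sketch: run the query below in each window and compare the results. Different values mean two separate sessions (and therefore separate transactions):
-- Each Oracle session has its own audit session id.
SELECT sys_context('userenv', 'sessionid') AS session_id FROM dual;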
Truncate table does an implicit commit. So the table will be empty until insert + commit finishes.
begin
execute immediate 'truncate table table_name reuse storage'; --use "reuse" if you know the data will be of similar size
-- implicit commit has occured and the table is empty for all sessions
insert into table_name (lots)
select lots from table2;
commit;
end;
You should use truncate with reuse storage, so that the database doesn't go and free all the blocks just to acquire the same number of blocks again during the insert.
If you want/need to have the data available at all times, a better (but longer) method is
begin
savepoint letsgo;
delete from table_name;
insert into table_name (lots)
select lots from table2;
commit;
exception
when others then
rollback to letsgo;
end;
Perhaps you had a trigger which you didn't notice. Can you check Oracle's recycle bin, which might be storing the history of your dropped table and trigger?
Select * from recyclebin;
Reference: http://www.oraclebin.com/2012/12/recyclebinflashback.html

Why is oracle spewing bad table metadata?

I'm using DBVisualizer to extract DDL from an Oracle 10.2 DB. I'm getting odd instances of repeated columns in constraints, or repeated constraints in the generated DDL. At first I chalked it up to a bug in DBVisualizer, but I tried using Apache DDLUtils against the DB and it started throwing errors which investigation revealed to be caused by the same problem. The table metadata being returned by Oracle appears to have multiple entries for some FK constraints.
I can find no reference to this sort of thing from my google searches and I was wondering if anyone else had seen the same thing. Is this a bug in the Oracle driver, or does the metadata contain extra information which is being dropped when my tools access it, resulting in confusion on the part of the tools...
Here is an example of the (truncated) DDL output:
CREATE TABLE ARTIST
(
ID INTEGER NOT NULL,
FIRST_NAME VARCHAR2( 128 ),
LAST_NAME VARCHAR2( 128 ),
CONSTRAINT ARTIST_ID_PK PRIMARY KEY( ID ),
CONSTRAINT ARTIST_CONTENT_ID_FK FOREIGN KEY( ID, ID, ID ) REFERENCES CMS_CONTENT( CONTENT_ID, CONTENT_ID, CONTENT_ID )
-- note the multiple instances of ID and CONTENT_ID in the above line
-- rest assured there is nothing bizarre about the foreign table CMS_CONTENT
)
I'm attempting to find a Java example which can show the behaviour, and will update the question when I have a concrete example.
You can try the built-in Oracle DBMS_METADATA.GET_DDL('TABLE','ARTIST') and see whether the problem persists (i.e. whether it is a bug in the tools or in the DB).
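For instance, from SQL*Plus or SQL Developer (SET LONG just ensures the whole returned CLOB is displayed):
-- DBMS_METADATA generates the DDL directly from the dictionary,
-- bypassing the tool's own DDL-generation code.
SET LONG 100000
SELECT dbms_metadata.get_ddl('TABLE', 'ARTIST') FROM dual;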
You can look at the data dictionary views too. In this case, ALL_CONSTRAINTS and ALL_CONS_COLUMNS.
select ac.owner, ac.constraint_name, ac.table_name, ac.r_owner, ac.r_constraint_name,
acc.column_name, acc.position
from all_constraints ac join all_cons_columns acc on
(ac.owner = acc.owner and ac.constraint_name = acc.constraint_name)
where ac.table_name = 'ARTIST'
and ac.constraint_type = 'R'
I'd suspect that it is a bug in the tools: they've missed a join on the owning schema, so you are picking up the same table/constraint again from another user's schema.
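You can check that hypothesis directly, as a sketch: if the same constraint name appears under more than one owner, a tool that joins the dictionary views without the OWNER column would see exactly this kind of duplication:
-- Constraint names that exist in more than one schema; these would look
-- duplicated to any tool that omits OWNER from its dictionary joins.
SELECT constraint_name, COUNT(DISTINCT owner) AS owners
FROM all_constraints
WHERE table_name = 'ARTIST'
GROUP BY constraint_name
HAVING COUNT(DISTINCT owner) > 1;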
As far as I can see, dbvis (6.5.7) uses its own code for the 'DDL' tab and uses dbms_metadata for the 'DDL with Storage' tab.
Does this make a difference for you?
Ronald
