Duplicate Indexes error using Informatica 10.4 - Oracle

While running an Informatica mapping in v10.4 I'm getting the following error.
The mapping essentially calls a complex stored procedure in Oracle to "swap" a temporary table into a partitioned fact table.
CMN_1022 [
ORA-20014: FINISH_SP: ORA-20010: Duplicate Indexes: ORA-12801: error signaled in parallel query server P00I
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1650
ORA-20010: Duplicate Indexes: ORA-12801: error signaled in parallel query server P00I
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1292
ORA-12801: error signaled in parallel query server P00I
ORA-28604: table too fragmented to build bitmap index (172073921,57,56)
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1277
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1277
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1593
I do not know what this error means to Informatica.
Can anyone help me decipher it, specifically as it relates to Informatica?

The problem is specific to Oracle, so I'm not sure how to make the answer specific to Informatica, especially without being able to see the details of what the workflow is trying to do.
The ORA-20014: FINISH_SP: ORA-20010: Duplicate Indexes: error is a custom message from the application code. The real key appears to be here: "ORA-28604: table too fragmented to build bitmap index (172073921,57,56)"
It looks like Informatica is attempting to build an index - indirectly through the DIMDW.FACT_EXCHANGE_PARTITION_PKG package - and the process is throwing an error. A simple Google search on ORA-28604 yields the following:
ORA-28604: table too fragmented to build bitmap index (%s,%s,%s)
*Cause: The table has one or more blocks that exceed the maximum number
of rows expected when creating a bitmap index. This is probably
due to deleted rows. The values in the message are:
(data block address, slot number found, maximum slot allowed)
*Action: Defragment the table or block(s). Use the values in the message
to determine the FIRST block affected. (There may be others).
Since this involves the physical fragmentation of the data in the Oracle database, you will almost certainly need to get the DBA involved to troubleshoot this further. Your Informatica workflow likely isn't going anywhere until this is corrected in the database.
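As a rough illustration of the Action text above (a sketch only; the partition and index names are hypothetical, so the DBA should adapt them), the block address reported by ORA-28604 can be decoded with DBMS_UTILITY, and the affected partition can then be defragmented with a MOVE followed by an index rebuild:
-- Decode the data block address from the error message (172073921)
SELECT DBMS_UTILITY.data_block_address_file(172073921)  AS file_no,
       DBMS_UTILITY.data_block_address_block(172073921) AS block_no
FROM   dual;
-- Defragment the affected partition, then rebuild any index partitions
-- that the MOVE leaves UNUSABLE (object names below are placeholders)
ALTER TABLE dimdw.fact_table MOVE PARTITION p_example;
ALTER INDEX dimdw.fact_table_bix REBUILD PARTITION p_example;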

Related

ODI-1228: Task Load data-LKM SQL to Oracle- fails on the target connection

I'm working with Oracle Data Integrator inserting information from original source to temp table (BI_DSA.TMP_TABLE)
ODI-1228: Task Load data-LKM SQL to Oracle- fails on the target
connection BI_DSA. Caused By: java.sql.BatchUpdateException:
ORA-12899: value too large for column
"BI_DSA"."C$_0DELTA_TABLE"."FIELD" (actual: 11, maximum: 10)
I tried changing the length of 'FIELD' to more than 10 and reverse engineering, but it didn't work.
Is this error coming from the original source? I'm doing a replica, so I only have view privileges on it, and I believe so because the error comes from the C$ table.
Thanks for the help!
Solution: I tried the length option before, as the answers suggested, but it didn't work. Then I noticed the original source had modified their field length, so I reverse engineered the source table and the problem was solved.
Greetings!
As Bobby mentioned in the comments, it might come from the byte/char semantics.
The C$ tables created by the LKMs usually copy the structure of the source data. So a workaround would be to go into the model and manually increase the size of the FIELD column in the source datastore (even if it doesn't match what is in the database). The C$ table will be created with that size on the next run.
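If byte/char semantics are indeed the culprit, a quick way to confirm it is to compare the byte and character lengths of the work-table column (a hedged sketch; the real fix remains the datastore length in the ODI model, since C$ tables are recreated on every run):
-- CHAR_USED = 'B' with DATA_LENGTH = 10, while the source stores multibyte
-- characters, would explain "actual: 11, maximum: 10"
SELECT column_name, data_type, data_length, char_length, char_used
FROM   all_tab_columns
WHERE  table_name = 'C$_0DELTA_TABLE'
AND    column_name = 'FIELD';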

Access BFILE from another Oracle database

I have two Oracle databases using Oracle Standard Edition Two 12.1, DB1 and DB2.
DB1 contains the following table, FILE_TABLE:
ColumnName DataType
File_Id Number
File_Ref Bfile
I am storing a BFILE in the FILE_REF column of the FILE_TABLE table.
I want to access the Bfile from FILE_TABLE table and show it in DB2.
I tried to do this using a database link:
SELECT FILE_REF FROM FILE_TABLE@dblink;
but it's giving an error: ORA-22992: cannot use LOB locators selected from remote tables
I also tried DBMS_FILE_TRANSFER, but when I researched it I read that it can only transfer system data files, not BFILEs.
Is that true?
This is the syntax I used to transfer the file:
BEGIN
  DBMS_FILE_TRANSFER.put_file(
    source_directory_object      => 'db1_dir',
    source_file_name             => 'db1test.html',
    destination_directory_object => 'db2_dir',
    destination_file_name        => 'db2test.html',
    destination_database         => 'db1_to_db2');
END;
/
But I was getting an error:
ORA-19505: failed to identify file "/db1_dir/db1test.html"
ORA-27046: file size is not a multiple of logical block size
Additional information: 1
ORA-02063: preceding 3 lines from db1_to_db2
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 37
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 132
ORA-06512: at line 2
00000 - "failed to identify file \"%s\""
*Cause: call to identify the file returned an error
*Action: check additional messages, and check if the file exists
Is there any way to do it using a database link? Are there any other solutions to move a BFILE from one database to another?
ORA-27046 is generally a sign that the file was changed during the transfer. This can be caused by firewalls. Have you verified that the file is the same?
In addition, when you get ORA-27046 it is not the only error; you get a stack of several errors. It would be good to post the complete error stack. Also, you don't mention which Oracle release you are using.
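One possible workaround, sketched below under assumptions (the staging table FILE_STAGE and remote table FILE_STORE are hypothetical, and LOB support over database links varies by release), is to load the BFILE contents into a local BLOB on DB1 and then push the rows to DB2 with INSERT ... SELECT over the link, which avoids selecting LOB locators from the remote side:
-- On DB1: hypothetical local staging table
CREATE TABLE file_stage (file_id NUMBER, file_data BLOB);

DECLARE
  v_bfile BFILE;
  v_blob  BLOB;
  v_dest  INTEGER := 1;
  v_src   INTEGER := 1;
BEGIN
  SELECT file_ref INTO v_bfile FROM file_table WHERE file_id = 1;

  -- Create an empty BLOB row and load the BFILE contents into it
  INSERT INTO file_stage (file_id, file_data)
  VALUES (1, EMPTY_BLOB())
  RETURNING file_data INTO v_blob;

  DBMS_LOB.fileopen(v_bfile, DBMS_LOB.file_readonly);
  DBMS_LOB.loadblobfromfile(v_blob, v_bfile, DBMS_LOB.lobmaxsize, v_dest, v_src);
  DBMS_LOB.fileclose(v_bfile);
  COMMIT;
END;
/

-- Push the staged rows to DB2: inserting LOB data into a remote table is
-- allowed in recent releases, unlike selecting LOB locators from one
INSERT INTO file_store@db1_to_db2 (file_id, file_data)
SELECT file_id, file_data FROM file_stage;
COMMIT;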

How to solve Oracle 9i to 11g migration KOREAN_LEXER problems?

I am currently working on migrating an Oracle 9i database (*.dmp file) to an Oracle 11g one.
To achieve this I use the exp and imp Oracle command-line utilities:
exp USERID=<user>/<password>@<database> FILE=<path> OWNER=<owner>
Then I create a skeleton.sql file which will create tables, index tables and finally indexes.
imp <user>/<password>@<database> FILE=<path> INDEXFILE="<path>\skeleton.sql" FROMUSER=<fromuser> TOUSER=<touser>
During this migration I am able to import most of the data correctly, and of course tablespaces are kept the same from one database to the other to avoid any conflicts.
But here comes the problem. In Oracle 11g, KOREAN_LEXER is no longer supported; instead you have to use KOREAN_MORPH_LEXER. To do so I execute the following SQL commands:
call ctx_ddl.create_preference('korean_lexer','korean_morph_lexer');
call ctx_ddl.add_sub_lexer('global_lexer','korean','korean_lexer',null);
Then I run the skeleton.sql file in order to create the objects needed before the import:
sqlplus <user>/<password>@<database> @<path>\skeleton.sql
The creation of tables and index tables go smoothly until I get the following error for each of the 150+ indexes I created:
CREATE INDEX "<schema>"."WORKORDER_NDX16" ON "WORKORDER"
ERROR at line 1 :
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-10502: WORKORDER_NDX16 index does not exist
DRG-13201: KOREAN_LEXER is no longer supported
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 366
The indexes are still created; this message is just a warning.
I tried to rebuild one of the broken indexes:
ALTER INDEX WORKORDER_NDX16 REBUILD;
which again gives me an error:
SQL Error : ORA-29874: warning in the execution of ODCIINDEXCREATE routine
ORA-29960: ligne 1,
DRG-10595: failure on ALTER INDEX WORKORDER_NDX16
DRG-50857: oracle error in drixmd.PurgeKGL
ORA-20000: Oracle Text error:
DRG-13201: KOREAN_LEXER is no longer supported
ORA-30576: ConText Option dictionary loading error
DRG-50610: internal error: kglpurge []
29874. 00000 - "warning in the execution of ODCIINDEXALTER routine"
*Cause: A warning was returned from the ODCIIndexAlter routine.
*Action: Check to see if the routine has been coded correctly
Check the user defined warning log tables for greater details.
In my skeleton.sql file, under the creation of each index, I have the following lines for each language:
ctxsys.driimp.set_object('LEXER','MULTI_LEXER',12);
...
ctxsys.driimp.set_sub_value('SUB_LEXER','8', NULL, NULL,'KO:KOREAN_LEXER:');
...
So far I am lost on what to do; this seems like a simple issue to solve, but my DBA skills are too limited to handle it on my own.
If anyone could help me with this I would greatly appreciate it!
Thank you.
It looks like your database is using Oracle Text indexes, which are not the same as ordinary indexes. Have you completely followed Oracle Note 300172.1 (Obsolescence of KOREAN_LEXER Lexer Type)? It mentions the code below, which could help.
ALTER INDEX <[schema.]index> REBUILD
PARAMETERS('REPLACE LEXER ko_morph_lexer [MEMORY <size>]');
If all else fails, maybe consider migrating your data into an Oracle 10g database and completing the KOREAN_MORPH_LEXER conversion in 10g. If that works, it would be an easy Data Pump task to move it from 10g to 11g (or 12c).
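For reference, here is a concrete, hedged instance of the Note's rebuild syntax using the KOREAN_MORPH_LEXER preference created at the start of the question; the MEMORY size is illustrative and the index name is just one of the 150+ affected:
-- Skip the preference creation if 'korean_lexer' already exists from the earlier step
BEGIN
  ctx_ddl.create_preference('korean_lexer', 'KOREAN_MORPH_LEXER');
END;
/

ALTER INDEX "<schema>"."WORKORDER_NDX16" REBUILD
  PARAMETERS('REPLACE LEXER korean_lexer MEMORY 64M');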

Oracle Vendor Code 17002 Large Select Columns

When executing a select that returns a large number of columns over several tables, the error "Vendor code 17002" is received. The query only returns one row. When the number of columns returned is less than 635 the query works; when another column is added, the error is seen.
The following was seen in a dump file:
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x45] [PC:0x35797B4, _kkqstcrf()+1342]
DDE: Problem Key 'ORA 7445 [kkqstcrf()+1342]' was flood controlled (0x6) (incident: 10825)
ORA-07445: exception encountered: core dump [kkqstcrf()+1342] [ACCESS_VIOLATION] [ADDR:0x45] [PC:0x35797B4] [UNABLE_TO_READ] []
Dump file c:\app\7609179\diag\rdbms\orcl\orcl\trace\orcl_s001_9928.trc
Thu Feb 07 15:10:56 2013
ORACLE V11.2.0.1.0 - Production vsnsta=0
vsnsql=16 vsnxtr=3
Dumping diagnostics for abrupt exit from ksedmp
Windows 7, Oracle 11.2.0.1.0 Enterprise Edition, SQL Developer; the same result from a Java application.
ORA-07445 is a generic error which Oracle uses to signal unexpected behaviour at the OS level, i.e. a bug.
There should be some additional information in that trace file:
c:\app\7609179\diag\rdbms\orcl\orcl\trace\orcl_s001_9928.trc
Have you looked in it?
Unfortunately, the nature of ORA-07445 means that the underlying problem is usually due to the specific combination of platform, OS and database versions. Oracle have published some advice on diagnosis, but most routes lead to calling Oracle Support.
At least you know the immediate cause. So if you don't have a Support contract there is a workaround: change your application so you don't have to select that 635th column. That is an awful lot of columns to have in a single query.
There isn't an actual limit to the number of columns permitted in a query's projection, but it's possible that the total length of the statement exceeds a limit. This limit varies according to several factors and isn't specified in the docs. How long (how many characters) is the statement with and without that pesky additional column? Perhaps shortening some column names will do the trick.
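To test the statement-length theory, one option is to compare the length of the cached statement text with and without the extra column (a sketch; the LIKE filter is hypothetical, so adapt it to something distinctive in your query):
SELECT sql_id, LENGTH(sql_fulltext) AS stmt_length
FROM   v$sql
WHERE  sql_text LIKE 'SELECT%SOME_DISTINCTIVE_COLUMN%';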

Can someone explain the ORA-29861 error in plain English and its possible cause?

I have an application implemented in the Grails framework using Hibernate underneath. After it had been running for a while, I got an Oracle DB error and resolved it by rebuilding the offending index. I wonder if anyone can suggest the possible cause(s) and ways to prevent it from happening.
Caused by:
org.springframework.jdbc.UncategorizedSQLException:
Hibernate operation: Could not execute JDBC batch update;
uncategorized SQLException for SQL [update RSS_ITEM set guid=?,
pubdate=?, link=?, rss_source_id=?, title=?, description=?,
rating_raw=?, rating_tuned=?, date_created=?, date_locked=? where
RSS_ITEM_ID=?]; SQL state [99999]; error code [29861]; ORA-29861:
domain index is marked LOADING/FAILED/UNUSABLE
; nested exception is java.sql.BatchUpdateException:
ORA-29861:
domain index is marked LOADING/FAILED/UNUSABLE
To locate the broken index, use:
select index_name,index_type,status,domidx_status,domidx_opstatus from user_indexes where index_type like '%DOMAIN%' and (domidx_status <> 'VALID' or domidx_opstatus <> 'VALID');
To rebuild the index, use:
alter index INDEX_NAME rebuild;
Domain indexes are a special type of index. It is possible to build your own using OCI, but the chances are you're using one of the index types offered by Oracle Text. I say this because your table seems to include free-text columns.
The most commonly used Text index is the CTXSYS.CONTEXT index type. The point about this index type is that it is not maintained transactionally, so as to minimize the effort involved in indexing large documents. This means that when you insert or update a document in your table, it is not indexed immediately. Instead, a background process, such as a database job, kicks off index synchronization on a regular basis. The index is unusable while it is being synchronized. If the resync fails for any reason, you will need to drop and recreate the index.
Is this a regular occurrence for you? If so, you may need to re-appraise your application. Perhaps a different sort of index (such as CTXSYS.CTXCAT) might be more appropriate. One thing that strikes me about your error message is that your UPDATE statement touches a lot of columns, including what looks like the primary key. This makes me think you have a single generic update statement which sets every column regardless of whether it has actually changed. This is bad practice with normal indexes; it will kill your application if you are using text indexes.
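If it is a CTXSYS.CONTEXT index, that background synchronization is typically driven by a scheduled job calling CTX_DDL.SYNC_INDEX. A minimal sketch of such a job (the index name, interval and memory size are hypothetical):
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'SYNC_RSS_ITEM_TEXT_IDX',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN ctx_ddl.sync_index(''RSS_ITEM_TEXT_IDX'', ''2M''); END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
    enabled         => TRUE);
END;
/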
http://ora-29861.ora-code.com/
Cause: An attempt has been made to access a domain index that is being built, or is marked FAILED by an unsuccessful DDL, or is marked UNUSABLE by a DDL operation.
Action: Wait if the specified index is marked LOADING. Drop the specified index if it is marked FAILED. Drop or rebuild the specified index if it is marked UNUSABLE.
That should hopefully be enough context. Can you figure out the problem from that?
