Oracle Vendor Code 17002 Large Select Columns

When executing a select that returns a large number of columns across several tables, the error "Vendor code 17002" is received. The query returns only one row. When the number of columns returned is less than 635 the query works; when another column is added, the error appears.
The following was seen in a dump file:
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x45] [PC:0x35797B4, _kkqstcrf()+1342]
DDE: Problem Key 'ORA 7445 [kkqstcrf()+1342]' was flood controlled (0x6) (incident: 10825)
ORA-07445: exception encountered: core dump [kkqstcrf()+1342] [ACCESS_VIOLATION] [ADDR:0x45] [PC:0x35797B4] [UNABLE_TO_READ] []
Dump file c:\app\7609179\diag\rdbms\orcl\orcl\trace\orcl_s001_9928.trc
Thu Feb 07 15:10:56 2013
ORACLE V11.2.0.1.0 - Production vsnsta=0
vsnsql=16 vsnxtr=3
Dumping diagnostics for abrupt exit from ksedmp
Windows 7, Oracle 11.2.0.1.0 Enterprise Edition, SQL Developer; same result from a Java application.

ORA-07445 is a generic error which Oracle uses to signal unexpected behaviour at the OS level, i.e. a bug.
There should be some additional information in that trace file:
c:\app\7609179\diag\rdbms\orcl\orcl\trace\orcl_s001_9928.trc
Have you looked in it?
Unfortunately the nature of ORA-07445 means the underlying problem is usually specific to the particular combination of platform, OS and database versions. Oracle have published some advice on diagnosis, but most routes lead to calling Oracle Support.
At least you know the immediate cause. So if you don't have a Support contract, there is a workaround: change your application so you don't have to select that 635th column. That is an awful lot of columns to have in a single query.
There isn't a documented limit on the number of columns permitted in a query's projection, but it's possible that the total length of the statement exceeds a limit. This limit varies according to several factors and isn't specified in the docs. How long (how many characters) is the statement with and without that pesky additional column? Perhaps shortening some column names will do the trick.
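If shortening names isn't enough, here is a minimal sketch of the workaround, under assumed placeholder names (wide_table, pk and the col_* columns are mine, not from the question): fetch the row in two narrower selects keyed on the same primary key, instead of one very wide projection.
-- Hypothetical workaround: split the 635-column projection into two
-- narrower selects and recombine the halves in the application.
SELECT pk, col_001, col_002 /* ... first half of the columns ... */ FROM wide_table WHERE pk = :id;
SELECT pk, col_301, col_302 /* ... remaining columns ... */ FROM wide_table WHERE pk = :id;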

Related

Duplicate Indexes error using Informatica 10.4

While running an Informatica mapping in v10.4 I'm getting the following error.
The mapping essentially calls a complex stored procedure in Oracle to "swap out" a temporary file to a partitioned fact table.
CMN_1022 [
ORA-20014: FINISH_SP: ORA-20010: Duplicate Indexes: ORA-12801: error signaled in parallel query server P00I
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1650
ORA-20010: Duplicate Indexes: ORA-12801: error signaled in parallel query server P00I
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1292
ORA-12801: error signaled in parallel query server P00I
ORA-28604: table too fragmented to build bitmap index (172073921,57,56)
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1277
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1277
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1593
I do not know what this error means to Informatica.
Can anyone help me decipher it, SPECIFIC TO INFORMATICA?
The problem is specific to Oracle, so I'm not sure how to make the answer specific to Informatica, especially without being able to see the details of what the workflow is trying to do.
The ORA-20014: FINISH_SP: ORA-20010: Duplicate Indexes: error is a custom message from the application code. The real key appears to be here: "ORA-28604: table too fragmented to build bitmap index (172073921,57,56)"
It looks like Informatica is attempting to build an index - indirectly through the DIMDW.FACT_EXCHANGE_PARTITION_PKG package - and the process is throwing an error. A simple Google search on ORA-28604 yields the following:
ORA-28604: table too fragmented to build bitmap index (%s,%s,%s)
*Cause: The table has one or more blocks that exceed the maximum number
of rows expected when creating a bitmap index. This is probably
due to deleted rows. The values in the message are:
(data block address, slot number found, maximum slot allowed)
*Action: Defragment the table or block(s). Use the values in the message
to determine the FIRST block affected. (There may be others).
Since this involves the physical fragmentation of the data in the Oracle database, you will almost certainly need to get the DBA involved to troubleshoot this further. Your Informatica workflow likely isn't going anywhere until this is corrected in the database.
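For reference, a minimal sketch of the kind of reorganization the DBA might run; the table and index names here are placeholders, not taken from the error stack.
-- Compact the table's blocks, then rebuild its indexes, which
-- ALTER TABLE ... MOVE leaves in an UNUSABLE state.
ALTER TABLE dimdw.some_fact_table MOVE;
ALTER INDEX dimdw.some_fact_bix REBUILD;
-- Refresh optimizer statistics after the reorganization.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'DIMDW', tabname => 'SOME_FACT_TABLE', cascade => TRUE);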

Oracle database performance issue

We are trying to pull data from an Oracle database but seem to be getting very low performance.
We have a table of around 10M rows and we have an index via which we are pulling around 1.3k rows {select * from tab where indexed_field = 'value'} (in a simplified form).
SQuirreL reports the query taking "execution: 0.182s, building output: 28.921s". The returned data occupies something like 340kB (eg, when copied/pasted into a text file).
Sometimes the building-output phase takes much longer (>5 minutes), particularly the first time a query is run. Repeating it seems to run much faster - e.g. the 29s value above. Is this likely to be just the result of a transient overload on the database, or might it be due to buffering of the repeated data?
Is a second per 50 rows (13kB) a reasonable figure, or is this unexpectedly large? (This is unlikely to be a network issue.)
Is it possible that the DBMS is failing to leverage the fact that the data could be grouped physically (by having the physical order match the index order) and is doing a separate disk read per row? If so, how can it be persuaded to be more efficient?
There isn't much odd about the data - 22 columns per row, mostly defined as varchar2(250) though usually containing a few tens of chars. I'm not sure how big the ironware running Oracle is, but it lives in a datacentre so probably not too puny.
Any thoughts gratefully received.
kfinity> Have you tried setting your fetch size larger, like 500 or so?
That's the one! Speeds it up by an order of magnitude. 1.3k rows in 2.5s, 9.5k rows in 19s. Thanks for that suggestion.
BTW, doing select 1 (rather than select *) only provides a speedup of about 10%, which I guess suggests that disk access wasn't the bottleneck.
others>
The fetch plan is:
Id  Operation         Options                 Object   Mode      Cost  Bytes  Cardinality
0   SELECT STATEMENT                                   ALL_ROWS  6     17544  86
1   TABLE ACCESS      BY INDEX ROWID BATCHED  TAB      ANALYZED  6     17544  86
2   INDEX             RANGE SCAN              TAB_IDX  ANALYZED  3            86
which, with my limited understanding, looks OK.
The "sho parameter" things didn't work (SQL errors), apart from the select which gave:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production
I guess the only outstanding question is "what's the downside of setting the fetch size to a large value?". Given that we will always end up reading the entire result set (unless there is an exception) my guess would be "not much". Is that right?
Anyway, many thanks to those who responded and a big thanks for the solution.
1.3k rows out of a table of 10M rows is not too big for Oracle.
The reason the second run is faster than the first is that Oracle loads the data into RAM (the buffer cache) on the first query and just reads it from RAM on the second.
Are you sure that the index is being used well? Maybe you can do an explain plan and show us the result?
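For example, using the simplified query from the question, the plan can be generated and displayed like this:
-- Produce the optimizer's plan for the simplified query, then display it.
EXPLAIN PLAN FOR
select * from tab where indexed_field = 'value';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);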
A few immediate actions to take:
Rebuild the index on the table.
Gather the stats on the table.
Execute the following before rerunning the query, to capture the execution plan:
sql> set autotrace traceonly ;
Turn this off afterwards with:
sql> set autotrace off ;
Also, provide the result of the following:
sql> sho parameter SGA
sql> sho parameter cursor
sql> select banner from v$version;
Abhi

After upgrading from SQL Server 2008 to SQL Server 2016, a stored procedure that was fast is now slow

We have a stored procedure that returns all of the records that fall within a geospatial region ("geography"). It uses a CTE (WITH), some unions and some inner joins, and returns the data as XML; nothing controversial or cutting edge, but also not trivial.
This stored procedure has served us well for many years on SQL Server 2008, running within 1 sec on a relatively slow server. We have just migrated to SQL Server 2016 on a super fast server with lots of memory and super fast SSDs.
The entire database and associated application are going really fast on the new server and we are very happy with it. However, this one stored procedure runs in 16 sec rather than 1 sec - against exactly the same parameters and exactly the same dataset.
We have updated the indexes and statistics on this database. We have also changed the compatibility level of the database from 100 to 130.
Interestingly, I have re-written the stored procedure to use a temporary table and 'insert' rather than the CTE. This brought the time down from 16 sec to 4 sec.
The execution plan does not provide any obvious insights into where a bottleneck may be.
We are a bit stuck for ideas. What should we do next? Thanks in advance.
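For reference, a sketch of the compatibility-level change mentioned above; "MyDatabase" is a placeholder for the real database name.
-- Assumption: MyDatabase stands in for the actual database.
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 130; -- 130 = SQL Server 2016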
--
I have now spent more time on this problem than I care to admit. I have boiled the stored procedure down to the following query to demonstrate the problem.
drop table #T
declare @viewport sys.geography=convert(sys.geography,0xE610000001041700000000CE08C22D7740C002370B7670F4624000CE08C22D7740C002378B5976F4624000CE08C22D7740C003370B3D7CF4624000CE08C22D7740C003378B2082F4624000CE08C22D7740C003370B0488F4624000CE08C22D7740C004378BE78DF4624000CE08C22D7740C004370BCB93F4624000CE08C22D7740C004378BAE99F4624000CE08C22D7740C005370B929FF4624000CE08C22D7740C005378B75A5F4624000CE08C22D7740C005370B59ABF462406F22B7698E7640C005370B59ABF462406F22B7698E7640C005378B75A5F462406F22B7698E7640C005370B929FF462406F22B7698E7640C004378BAE99F462406F22B7698E7640C004370BCB93F462406F22B7698E7640C004378BE78DF462406F22B7698E7640C003370B0488F462406F22B7698E7640C003378B2082F462406F22B7698E7640C003370B3D7CF462406F22B7698E7640C002378B5976F462406F22B7698E7640C002370B7670F4624000CE08C22D7740C002370B7670F4624001000000020000000001000000FFFFFFFF0000000003)
declare @outputControlParameter nvarchar(max) = 'a value passed in through a parameter to the stored proc that controls the nature of the data to return. This is not the solution you are looking for'
create table #T
(value int)
insert into #T
select 136561 union
select 16482 -- These values are sourced from parameters into the stored proc
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
(
(len(@outputControlParameter) > 0 and GeoServices_Location.GeographicServicesGatewayId in (select value from #T))
or (len(@outputControlParameter) = 0 and GeoServices_Location.Coordinate.STIntersects(@viewport) = 1)
)
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
GO
With the stored procedure boiled down to this, it runs in 0 sec on SQL Server 2008 and 5 sec on SQL Server 2016.
http://www.filedropper.com/newserver-slowexecutionplan
http://www.filedropper.com/oldserver-fastexecutionplan
SQL Server 2016 is choking on the geospatial STIntersects call, with 94% of the time spent there. SQL Server 2008 spends its time on a bunch of other steps, including hash matching, parallelism and other standard stuff.
Remember, this is the same database. One copy has simply been moved to a SQL Server 2016 machine and had its compatibility level increased.
To get around the problem I have now rewritten the stored procedure so that SQL Server 2016 does not choke; I have it running in 250 msec. However, this should not have happened in the first place, and I am concerned that other previously finely tuned queries or stored procedures are now not running efficiently.
Thanks in advance.
--
Furthermore, I had a suggestion to add the trace flag -T6534 to the startup parameters of the service. It made no difference to the query time. I also tried adding option(QUERYTRACEON 6534) to the end of the query, but again it made no difference.
From the query plans you provided I see that the spatial index is not used on the newer server version.
Use a spatial index hint to make sure the query optimizer chooses the plan with the spatial index:
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location with (index ([spatial_index_name]))...
I see that the problem with the hint is the OR operation in the query predicate, so my suggestion with the hint actually won't help in this case.
However, I see that the predicate depends on @outputControlParameter, so rewriting the query to separate the two cases might help (see my proposal below).
Also, from your query plans I see that the plan on SQL 2008 is parallel while on SQL 2016 it is serial. Use option (recompile, querytraceon 8649) to force a parallel plan (this should help if your new super fast server has more cores than the old one).
if (len(@outputControlParameter) > 0)
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
GeoServices_Location.GeographicServicesGatewayId in (select value from #T)
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN(3,8,9,5)
option (recompile, querytraceon 8649)
else
select
[GeoServices_Location].[GeographicServicesGatewayId],
[GeoServices_Location].[Coordinate].Lat,
[GeoServices_Location].[Coordinate].Long
from GeoServices_Location with (index ([SPATIAL_GeoServices_Location]))
inner join GeoServices_GeographicServicesGateway
on GeoServices_Location.GeographicServicesGatewayId = GeoServices_GeographicServicesGateway.GeographicServicesGatewayId
where
GeoServices_Location.Coordinate.STIntersects(@viewport) = 1
and GeoServices_GeographicServicesGateway.PrimarilyFoundOnLayerId IN (3,8,9,5)
option (recompile, querytraceon 8649)
Check the growth settings of the data/log files on the new server vs the old server, for both the DB the query runs against and tempdb.
Check the log for I/O buffer errors.
Check the recovery model of the DBs - simple vs full/bulk-logged.
Is this consistent behavior? Maybe another process is running during the execution?
Regarding statistics/indexes - are you sure they were gathered on a correct data sample? (Look at the plan.)
Many more things can be checked/done, but there is not enough info in this question. A sketch for a couple of these checks follows.
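A sketch of how a couple of those checks might be run; "MyDatabase" is a placeholder.
-- Recovery model of the database in question.
SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'MyDatabase';
-- File size and autogrowth settings for the DB and tempdb.
SELECT DB_NAME(database_id) AS db, name, size, growth, max_size
FROM sys.master_files
WHERE database_id IN (DB_ID(N'MyDatabase'), DB_ID(N'tempdb'));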

BizTalk WCF-Custom" raised an error ORA-29275: partial multibyte character

I have an interface which is a simple receive port - map - send port. The receive port is the result of an "add generated items" query. The query just fetches some address data from the database. This data does contain 'foreign' letters, but when I run the query in Oracle SQL Developer it works fine (gives me 12800 rows).
When BizTalk runs the query, it gives an ORA error, which I assumed was an error the database returns to BizTalk - am I wrong?
Where do I actually have to fix this problem, and how? Do I need to find out which character set is used on the database and use a convert in the query?
This is an error coming from Oracle - it's very unlikely that it's due to BizTalk or the WCF adapter. It indicates you have some corrupt data in your Oracle DB. You may not be getting the error in SQL Developer because SQL Developer is only returning the first ~50 rows by default (until you actually scroll down past them).
I'd use a strategy like this: http://vibhork.blogspot.com/2011/02/fix-of-ora-29275-partial-multibyte.html to try to find the bad data (e.g. page through the rows using ROWNUM until you find the row that's in error) - you could simulate that in SQL Developer by just scrolling down until you get the error (I think). If you can fix the data, fix it. If the data was put there by another source, you'll either have to get that source to stop putting invalid characters in there, or you'll have to convert/concat the column(s) that is (are) causing problems, like:
SELECT problem_column || '' FROM table
or
SELECT CONVERT(column_name, 'dest_charset', 'source_charset') FROM table
You might try SELECT CONVERT(column_name, 'UTF8', 'US7ASCII'), for example.
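A sketch of the ROWNUM-paging idea described above; my_table and problem_column are placeholders.
-- Fetch a window of rows; if the fetch raises ORA-29275 the corrupt row
-- is inside the window - halve the range, binary-search style, to pin it down.
SELECT rn, problem_column
FROM (SELECT t.problem_column, ROWNUM rn FROM my_table t)
WHERE rn BETWEEN 1 AND 6400;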

ORA-01438 Issue while performing dup search

I am a developer with SQL Server experience. We have one legacy application which uses SQR and Oracle to perform a weekly duplicate-record search. After 14 years, we got an error while performing this search: 'ORA-01438: value larger than specified precision allows for this column'. When I googled that error, I found that it relates to a numeric field being passed a value larger than the column can hold. I can increase the size, but don't know for which column. Since no one supports Oracle here, I am trying to troubleshoot this error and found people using
alter system set events='1438 trace name Errorstack forever,level 10';
I would like to know if this is the right way to find out which SQL is failing.
Also, what does it alter, and what is level 10? Is there anything I should consider before running this in production? Is there something I need to roll back after performing it? I was told that if I do SQL> insert into test values (100000000000000000,'test','test'); where 100000000000000000 is invalid, then it will throw the generic Oracle message ORA-01438, but the trace file would show ORA-01438: value larger than specified precision allowed for this column. So, where would the trace file be generated?
Current SQL statement for this session:
insert into test values (100000000000000000,'test','test').
Please let me know if I am not on the right path.
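On the roll-back question: as far as I know, the event stays set until the instance restarts or it is explicitly switched off, so the usual cleanup is the matching "off" command:
-- Reverses the errorstack event set above.
alter system set events '1438 trace name errorstack off';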
Use DBMS_MONITOR to enable tracing for the affected session. The trace will contain all SQL statements and errors, and bind variables too if you enable them.
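A minimal sketch; the SID and SERIAL# values are placeholders you would look up in V$SESSION for the session running the SQR job.
-- Enable SQL trace, with waits and bind values, for one session.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
-- ... reproduce the ORA-01438 from the application ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
-- The trace file is written to the diagnostic trace directory (11g and later):
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';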
