ORA-01426: numeric overflow exception when executing stored procedure

I ported a Delphi 6 application to Delphi 2007; it uses the BDE to connect to
an Oracle 9i database. I am getting an
ORA-01426: numeric overflow
exception when I execute a stored procedure. This happens randomly, and if I
re-run the stored procedure through the application with the same parameters,
the exception does not occur.
The old Delphi 6 application works just fine.
Ideas anybody?

Showing a code example could make this easier, but here are a couple of hunches:
Are the data coming from another source (like Excel) that does not have explicit data types? Mixed or ambiguous data may be causing the BDE to assign the wrong data type to a field, which is then incompatible with the database field.
It could be a numeric formatting issue (some U.S.-centric components do not handle localization properly). Is your localization set to something other than English (U.S.)? If so, does changing it to English (U.S.) fix the problem?
If these completely miss, more details might help.
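For reference, ORA-01426 fires when an arithmetic result exceeds what the receiving numeric type can hold. A minimal PL/SQL reproduction (purely illustrative, not the poster's procedure):

    DECLARE
      n PLS_INTEGER := 2147483647;  -- the PLS_INTEGER maximum
    BEGIN
      n := n + 1;                   -- raises ORA-01426: numeric overflow
    END;
    /

If the BDE intermittently binds a parameter with a narrower numeric type than the procedure's arithmetic needs, an intermediate result could overflow the same way, which would fit the random, data-dependent behavior described.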

Does the D6 version of the app use the same version of BDE, Oracle, and the database? If so, then it's probably something about the data being passed (either content or mechanism).
Not knowing what those data are, nor how they are passed, makes it pretty hard to diagnose.

How to manually corrupt the Oracle CLOB data

I'm wondering if there's any way to manually corrupt CLOB data for testing purposes.
I can find steps for intentional block corruption, but can't find anything for individual data in a table. Can anyone help me with this?
Below is what I'm trying to do; I need help with step 1:
1. Prepare the corrupted CLOB data
2. Run expdp and get the ORA-01555 error
3. Test whether my troubleshooting procedure works
Some background:
DB: Oracle 12.2.0.1 SE2
OS: Windows Server 2016
The app we're using (from a third party) seems to occasionally corrupt the CLOB data when a certain type of data gets inserted into a table. We don't know what triggers it. The corruption doesn't affect the app's function, but leaving it unfixed produces the following error when running expdp for the daily backup:
ORA-01555: snapshot too old: rollback segment number
The CLOB consists of a mix of alphanumeric characters and line breaks. It is inserted by the app; no manual inserts take place.
Fixing or replacing the app isn't an option, so we have a fixing procedure to repair the data.
I took over this from another engineer (who has since left), but since then the app has been working happily and no problem has occurred. I want to test-run the fixing procedure in the DEV environment, but the app doesn't reproduce the problem for me.
So I thought I'd ask whether I can manually prepare a "broken" CLOB for testing purposes.
So this looks like it is caused by a known bug:
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364607910994084&parent=DOCUMENT&sourceId=833635.1&id=787004.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_200
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364470181721910&id=846079.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_53
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=364481844925661&id=833635.1&_afrWindowMode=0&_adf.ctrl-state=3xr6s00uo_102
The main point here is that the corruption isn't caused by anything inherent in the data, but is more likely caused by something like concurrent access to the LOB by multiple updates (application or end-user behavior), or just by apparently random chance. As such, I doubt that there's any way for you to easily force this condition in order to validate your test for it.
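That said, if the goal is only to exercise the fixing procedure against an ORA-01555 during a LOB read, you may be able to provoke the symptom in DEV without reproducing the app's bug. A hedged sketch (table and column names are hypothetical): with BASICFILE storage, PCTVERSION 0 lets old LOB versions be overwritten almost immediately, so a reader holding a stale locator while another session churns updates can hit ORA-01555.

    -- Hypothetical test table; old LOB versions are recycled aggressively.
    CREATE TABLE lob_test (id NUMBER PRIMARY KEY, doc CLOB)
      LOB (doc) STORE AS BASICFILE (PCTVERSION 0 NOCACHE);

    INSERT INTO lob_test VALUES (1, RPAD('x', 4000, 'x'));
    COMMIT;

    -- Session A: select the row and hold on to the LOB locator.
    -- Session B: cycle the LOB's version space with repeated updates.
    BEGIN
      FOR i IN 1 .. 1000 LOOP
        UPDATE lob_test SET doc = RPAD(TO_CHAR(i), 4000, 'y') WHERE id = 1;
        COMMIT;
      END LOOP;
    END;
    /
    -- When Session A finally reads through its stale locator, the read
    -- may fail with ORA-01555, the same symptom expdp reports.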

WSO2 Oracle DDL script uses Varchar and not Varchar2

We are looking to use the WSO2 API Manager at our current client, and are required to use the provided Oracle DDL to set up the necessary tables for Carbon, API Manager, and the message broker. The client's DBAs are coming back asking why the script uses VARCHAR instead of VARCHAR2 for the relevant fields, which is a good question, as the standard guidance for Oracle is "VARCHAR2 is the industry standard, don't use VARCHAR."
Is there a really good reason why the Oracle scripts use VARCHAR instead of VARCHAR2? When implemented on top of Oracle, what is WSO2 doing that requires the ability to differentiate null from empty?
This was probably a simple typo that doesn't matter.
It looks like only the column AM_API.INDEXER uses VARCHAR instead of VARCHAR2. The program also contains scripts for H2, MS SQL, MySQL, and PostgreSQL. All the other databases use VARCHAR. This is a pretty common mistake for products that support multiple databases.
Keep in mind that there is no meaningful difference between VARCHAR and VARCHAR2 in Oracle. The documentation claims that someday there will be a difference but I highly doubt it. This issue has existed for a long time and there's a ton of legacy code that depends on the old behavior. I would bet good money that Oracle will never make VARCHAR use different NULL comparison semantics.
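You can see this for yourself in the data dictionary; Oracle silently rewrites VARCHAR to VARCHAR2 at DDL time (the table name here is illustrative):

    CREATE TABLE vc_demo (a VARCHAR(10), b VARCHAR2(10));

    SELECT column_name, data_type
      FROM user_tab_columns
     WHERE table_name = 'VC_DEMO';

    -- COLUMN_NAME  DATA_TYPE
    -- A            VARCHAR2
    -- B            VARCHAR2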
I tried to change the script and created a pull request. I don't understand this project and almost certainly did something wrong, so don't be surprised if the request is rejected. But perhaps it will help nudge things toward a fix.

Oracle Text will not work with NVARCHAR2. What else might be unavailable?

We are going to migrate an application to support Unicode and have to choose between a Unicode character set for the whole database, or Unicode columns stored in N[VAR]CHAR2.
We know that we will no longer be able to index column contents with Oracle Text if we choose NVARCHAR2, because Oracle Text can only index columns based on the CHAR type.
Apart from that, are other major differences likely to arise as we draw on the rest of Oracle's feature set?
Also, is it likely that new features added in future versions of Oracle will support either CHAR columns or NCHAR columns, but not both?
Thank you for your answers.
Note following Justin's answer:
Thank you for your answer. I will discuss your points, applied to our case:
Our application is usually alone on the Oracle database and takes care of the data itself. Other software connecting to the database is limited to Toad, Tora, or SQL Developer.
We also use SQL*Loader and SQL*Plus to communicate with the database for basic statements or to upgrade between versions of the product. We have not heard of any specific problems with any of that software regarding NVARCHAR2.
We are also not aware of any database administrators among our customers who want to use other tools on the database that cannot handle NVARCHAR2 data, and we are not really concerned about whether their tools might break; after all, they are skilled in their jobs and can find other tools if necessary.
Your last two points are more relevant to our case. We do not use many of Oracle's built-in packages, but it does happen. We will explore that problem.
Could we also expect a performance hit if our application (compiled under Visual C++, using wchar_t to store UTF-16) has to perform encoding conversions on all processed data?
If you have anything close to a choice, use a Unicode character set for the entire database. Life in general is just blindingly easier that way.
There are plenty of third-party utilities and libraries that simply don't support NCHAR/NVARCHAR2 columns, or that don't make working with NCHAR/NVARCHAR2 columns pleasant. It's extremely annoying, for example, when your shiny new reporting tool can't report on your NVARCHAR2 data.
For custom applications, working with NCHAR/NVARCHAR2 columns requires jumping through some hoops that working with CHAR/VARCHAR2 Unicode-encoded columns does not. In JDBC code, for example, you'd constantly be calling the Statement.setFormOfUse method. Other languages and frameworks will have other gotchas; some will be relatively well documented and minor, others will be relatively obscure.
Many built-in packages will only accept (or return) a VARCHAR2 rather than an NVARCHAR2. You'll still be able to call them because of implicit conversion, but you may end up with character set conversion issues.
In general, being able to avoid character set conversion issues within the database, and relegating those issues to the edge where the database is actually sending or receiving data from a client, makes the job of developing an application much easier. It's enough work to debug character set conversion issues that result from network transmission; figuring out that some data got corrupted when a stored procedure concatenated data from a VARCHAR2 and an NVARCHAR2 and stored the result in a VARCHAR2 before it was sent over the network can be excruciating.
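A sketch of that lossy round trip, assuming a non-Unicode database character set (say, WE8ISO8859P1) with AL16UTF16 as the national character set; names and values are illustrative:

    -- Run with SET SERVEROUTPUT ON in SQL*Plus to see the output.
    DECLARE
      v_nat  NVARCHAR2(20) := UNISTR('\65E5\672C\8A9E');  -- Japanese text
      v_char VARCHAR2(60);
    BEGIN
      -- The implicit NVARCHAR2 -> VARCHAR2 conversion degrades characters
      -- with no representation in the database character set to
      -- replacement characters, silently corrupting the stored value.
      v_char := 'prefix: ' || v_nat;
      DBMS_OUTPUT.PUT_LINE(v_char);  -- prints something like: prefix: ???
    END;
    /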
Oracle designed the NCHAR/NVARCHAR2 data types for cases where you are trying to support legacy applications that don't support Unicode in the same database as new applications that are using Unicode, and for cases where it is beneficial to store some Unicode data with a different encoding (e.g. you have a large amount of Japanese data that you would prefer to store using the UTF-16 encoding in an NVARCHAR2 rather than the UTF-8 encoding). If you are not in one of those two situations, and it doesn't sound like you are, I would avoid NCHAR/NVARCHAR2 at all costs.
Responding to your followups
Our application is usually alone on the Oracle database and takes care of the data itself. Other software connecting to the database is limited to Toad, Tora, or SQL Developer.
What do you mean "takes care of the data itself"? I'm hoping you're not saying that you've configured your application to bypass Oracle's character set conversion routines and that you do all the character set conversion yourself.
I'm also assuming that you are using some sort of API/library to access the database, even if that is OCI. Have you looked into what changes you'll need to make to your application to support NCHAR/NVARCHAR2 and whether the API you're using supports NCHAR/NVARCHAR2? The fact that you're getting Unicode data in C++ doesn't actually indicate that you won't need to make (potentially significant) changes to support NCHAR/NVARCHAR2 columns.
We also use SQL*Loader and SQL*Plus to communicate with the database for basic statements or to upgrade between versions of the product. We have not heard of any specific problems with any of that software regarding NVARCHAR2.
Those applications all work with NCHAR/NVARCHAR2. NCHAR/NVARCHAR2 introduce some additional complexities into scripts, particularly if you are trying to encode string constants that are not representable in the database character set. You can certainly work around the issues, though.
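For example, a string constant in a plain script still travels through the database character set on its way to the server, so an N'' literal can silently lose characters; encoding the constant with UNISTR (or setting ORA_NCHAR_LITERAL_REPLACE=TRUE in the client environment) works around it. The table name here is hypothetical:

    INSERT INTO t_nat (name) VALUES (N'日本語');                   -- may arrive as '???'
    INSERT INTO t_nat (name) VALUES (UNISTR('\65E5\672C\8A9E'));  -- safe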
We are also not aware of any database administrators among our customers who want to use other tools on the database that cannot handle NVARCHAR2 data, and we are not really concerned about whether their tools might break; after all, they are skilled in their jobs and can find other tools if necessary.
While I'm sure that your customers can find alternate ways of working with your data, if your application doesn't play nicely with their enterprise reporting tool or their enterprise ETL tool or whatever desktop tools they happen to be experienced with, it's very likely that the customer will blame your application rather than their tools. It probably won't be a show stopper, but there is also no benefit to causing customers grief unnecessarily. That may not drive them to use a competitor's product, but it won't make them eager to embrace your product.
Could we also expect a performance hit if our application (compiled under Visual C++, using wchar_t to store UTF-16) has to perform encoding conversions on all processed data?
I'm not sure what "conversions" you're talking about. This may get back to my initial question about whether you're stating that you are bypassing Oracle's NLS layer to do character set conversion on your own.
My bottom line, though, is that I don't see any advantages to using NCHAR/NVARCHAR2 given what you're describing. There are plenty of potential downsides to using them. Even if you can eliminate 99% of the downsides as irrelevant to your particular needs, however, you're still facing a situation where at best it's a wash between the two approaches. Given that, I'd much rather go with the approach that maximizes flexibility going forward, and that's converting the entire database to Unicode (AL32UTF8 presumably) and just using that.
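If you do go that route, the current settings are easy to check before planning the migration:

    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');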

IllegalArgumentException with Date value in jdbc; Openbase sql

I have a WebObjects app and an Openbase DB, and I'm getting a never-before-seen exception when doing a raw-rows (non-ORM) query during a batch operation. It looks like the JDBC adaptor is throwing on a date value in the DB and is unable to coerce the raw data into the proper type. It literally kills the app and ends the export process. Here are the top two relevant lines from the trace:
java.lang.IllegalArgumentException
at java.sql.Date.valueOf(Date.java:138)
at com.openbase.jdbc.f.getDate(Unknown Source)
I've tried changing the column type from date to datetime to timestamp, and adjusting the EO model accordingly, but the exception remains. I'm wondering what I can do to resolve this; specifically, does anybody know a more sophisticated query mechanism I can employ to identify the possibly bad rows? Openbase's documentation is pretty sparse, and I'm hoping somebody knows how to use patterns to identify possible bad values using Openbase SQL, or some other means of identifying the issue. Thanks.
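One blunt way to hunt for the offending rows, sketched in generic SQL since I can't vouch for Openbase's exact dialect (table and column names are placeholders): cast the date column to text and keep the rows that don't match the yyyy-mm-dd shape java.sql.Date.valueOf expects.

    SELECT pk, CAST(created_at AS CHAR(32)) AS raw_value
      FROM export_table
     WHERE CAST(created_at AS CHAR(32)) NOT LIKE '____-__-__%';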
Turns out the problem was due to a version mismatch between the Openbase version and the Java version. Unfortunately, I had no choice but to rewrite the dump routine to use Openbase's bulk-save function and then parse out the resulting CSV. Interestingly, the same dates that were causing problems printed just fine, which enabled saving a lot more rows. Summary: stick with the open-source DBs; unless you're going high end, there's no advantage to solutions like Openbase anymore.

String or Binary Data Would Be Truncated Error

I'm using LINQ to SQL with SQL Server 2005. I'm parsing a large fixed-width file and importing the data into SQL via custom entities that I have mapped to the database using property attributes.
The program runs for about 240 records before throwing this error. I've checked the columns (all four of them) and the data it's trying to insert, and it shouldn't be throwing this error. I've even gone so far as to change the columns from varchar to text, and it still throws the error. When I manually insert the same values, they insert fine.
Is there a known bug or anything in LINQ to SQL? I'm calling context.submitall() on every loop to insert. I've read that .NET 3.5 SP1 gives better error messages from SQL, but I'm still not seeing anything.
Thanks for any help.
Is it possible that you've changed your schema since you built the LINQ to SQL classes? The designer entities won't be updated when you change your SQL schema unless you delete and recreate the class in the designer or hand-edit the properties of the designer-generated class. I know that it keeps track of the column width for string (varchar) columns in the class, though I don't know if it actually checks it before submitting or just keeps it as a reference for any validation that you would do. I have seen similar problems with things like autogenerated IDs, though, that were solved by updating the class in the designer.
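A quick server-side sanity check can also rule the data in or out (SQL Server; table and column names are placeholders): compare the longest value being imported against the declared width, which surfaces schema drift too.

    SELECT MAX(LEN(imported_column)) AS max_len
      FROM dbo.ImportTable;

    SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
      FROM INFORMATION_SCHEMA.COLUMNS
     WHERE TABLE_NAME = 'ImportTable';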
Well, I found the bug, and it was definitely not a problem with LINQ or with SQL. I was chopping up and changing properties on another object, not realizing that it was now attached to the database via LINQ, and it was that object throwing the errors.
Lesson learned: do not alter properties directly on an object unless you really want to change them, especially if it's attached to the data context.
