Cannot store a large double into Oracle (ORA-01426: numeric overflow)

In my Oracle (Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production), this query fails:
select 9.1E+136 from dual;
It tells me something like: ORA-01426: numeric overflow (I've tried 9.1E136 and 9E136 as well). This is really strange, since numbers up to about 2E+308 should be supported (http://docs.oracle.com/javadb/10.10.1.2/ref/rrefsqljdoubleprecision.html).
I've bumped into this problem from a Hibernate application, which maps a double field to FLOAT with the default precision of 126, which should be more than enough (http://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj27281.html).
Does anyone have any idea? Does it depend on some configuration parameter? Thank you in advance.

OK, I've found a solution: there is the BINARY_DOUBLE type, and numbers like this are treated as it when a d is appended to their value:
select 9.1E+136d from dual; -- works
select 9.1E+136 from dual;  -- doesn't work
create table test ( no binary_double primary key );
insert into test values ( 9.2E136d ); -- OK
insert into test values ( 9.3E136 );  -- fails
So needlessly stupid...

The Oracle documentation states that:
Positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 with up to 38 significant digits
You are overflowing the number data type.
Numeric Types
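To see that boundary in practice, here is a quick sketch (the exact error text may vary slightly by version):
select 9.9E125 from dual;  -- works: still inside the NUMBER range
select 1E126 from dual;    -- fails with ORA-01426: numeric overflow
select 1E126d from dual;   -- works: the d suffix makes it a BINARY_DOUBLE literal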

Related

Insertion of characters into number column

I have a table with several number columns that are inserted into through an ASP.NET application using bind variables.
After upgrading the Oracle client to 19c and changing the server, instead of giving an error on insert of invalid data, the code inserts garbage and the application crashes afterwards.
Any help is appreciated in finding the root cause.
SELECT trial1,
DUMP (trial1, 17),
DUMP (trial1, 1016),
trial3,
DUMP (trial3,17),
DUMP (trial3, 1016)
Result in SQL Navigator: [screenshot of query results]
Oracle 12c
Oracle client 19
My DBA found this on Oracle Support and that lead to us find the error in the application side:
NaN is a specific IEEE754 value. However, Oracle NUMBER is not IEEE754
compliant. Therefore, if you force data representing NaN into a
NUMBER column, the results are unpredictable.
SOLUTION: If you can put a value in a C float, double, int etc. you can
load this into the database, as no checks are undertaken - just as with
the Oracle NUMBER datatype it's up to the application to ensure the data
is valid. If you use the proper IEEE754 compliant type, e.g.
BINARY_FLOAT, then NaN is recognised and handled correctly.
You have bad data: you have tried to store a double-precision NaN value in a NUMBER column rather than a BINARY_DOUBLE column.
We can duplicate the bad data with the following function (never use this in a production environment):
CREATE FUNCTION createNumber(
  hex VARCHAR2
) RETURN NUMBER DETERMINISTIC
IS
  n NUMBER;
BEGIN
  DBMS_STATS.CONVERT_RAW_VALUE( HEXTORAW( hex ), n );
  RETURN n;
END;
/
Then, we can duplicate your bad values using the hexadecimal values from your DUMP output:
CREATE TABLE table_name (trial1 NUMBER, trial3 NUMBER);
INSERT INTO table_name (trial1, trial3) VALUES (
createNumber('FF65'),
createNumber('FFF8000000000000')
);
Then:
SELECT trial1,
DUMP(trial1, 16) AS t1_hexdump,
trial3,
DUMP(trial3, 16) AS t3_hexdump
FROM table_name;
Replicates your output:
TRIAL1  T1_HEXDUMP          TRIAL3  T3_HEXDUMP
------  ------------------  ------  ------------------------------
~       Typ=2 Len=2: ff,65  null    Typ=2 Len=8: ff,f8,0,0,0,0,0,0
Any help is appreciated in finding the root cause.
You need to go back through your application, work out where the bad data came from, see if you can determine what the original data was, and debug the steps it went through in the application to work out whether it was:
Always bad data, in which case you need to add validation to your application so the bad data does not get propagated; or
Good data originally that a bug in your code corrupted, in which case you need to fix the bug.
As for the existing bad data, you either need to correct it (if you know what it should be) or delete it.
We cannot help with any of that as we do not have visibility of your application nor do we know what the correct data should have been.
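If it helps to locate the existing bad rows first, one rough sketch (assuming the corrupted values all carry the IEEE754 bit patterns shown in the dumps above) is to filter on the DUMP output:
-- Sketch only: find rows whose NUMBER bytes look like the forced-in
-- IEEE754 NaN pattern from the dumps above, so they can be fixed or deleted.
SELECT *
FROM   table_name
WHERE  DUMP(trial3, 16) LIKE 'Typ=2 Len=8: ff,f8%';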
If you want to store that data as a floating point then you need to change from using a NUMBER to using a BINARY_DOUBLE data type:
CREATE TABLE table_name (value BINARY_DOUBLE);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_INFINITY);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_NAN);
Then:
SELECT value,
DUMP(value, 16)
FROM table_name;
Outputs:
VALUE  DUMP(VALUE,16)
-----  --------------------------------
Inf    Typ=101 Len=8: ff,f0,0,0,0,0,0,0
Nan    Typ=101 Len=8: ff,f8,0,0,0,0,0,0
The BINARY_DOUBLE_NAN dump exactly matches the binary value in your column: you have tried to insert a Not-A-Number value into a NUMBER column (which does not support it) in the format expected for a BINARY_DOUBLE column (which would support it).
The issue was a division by zero on the application side that was inserted as infinity into the database, but Oracle's behaviour with these values in a NUMBER column is unpredictable.
Please see original post above for all the details.

Why Does Oracle 10g to_char(datetime) Truncate Strings?

I got a bug report where Oracle 10g was truncating return values from to_char(datetime):
SQL> select to_char(systimestamp, '"day:"DD"hello"') from dual;
TO_CHAR(SYSTIMESTAMP,'"DAY:"DD"HE
---------------------------------
day:27hel
Notably, this does not appear to happen in Oracle 11g. My question is, why does it happen at all? Is there some configuration variable to set to tell to_char(datetime) to allocate a bigger buffer for its return value?
I'm not sure, but it might just be how SQL*Plus displays it. Have you tried running it in Toad? Or assigning the result to a VARCHAR2 in a PL/SQL block and printing it?
Here is what I found in the SQL*Plus Reference for 10g:
The default width and format of unformatted DATE columns in SQL*Plus
is determined by the database NLS_DATE_FORMAT parameter. Otherwise,
the default format width is A9. See the FORMAT clause of the COLUMN
command for more information on formatting DATE columns.
Your value is trimmed to 9 characters, which corresponds to the default A9 format. I don't have the same version, and this behaviour does not reproduce in 11g, so can you please check my theory?
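For what it's worth, here is a minimal PL/SQL check of that theory (assuming SERVEROUTPUT is enabled in SQL*Plus):
-- If the value printed here is complete, the truncation is only a display
-- issue in the client; if it is still cut short, to_char itself is at fault.
SET SERVEROUTPUT ON
DECLARE
  v VARCHAR2(100);
BEGIN
  v := TO_CHAR(SYSTIMESTAMP, '"day:"DD"hello"');
  DBMS_OUTPUT.PUT_LINE(v);
END;
/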
I've got the same problem and I know the solution.
I use Release 11.2.0.4.0, but I believe it is possible to reproduce the situation in other versions. It somehow depends on the client (e.g. I cannot reproduce it using SQL*Plus, only with PL/SQL Developer).
Try this:
select to_char(systimestamp, '"day:"DD"йцукенг OR any other UTF-encoded-something"') from dual
union all
select to_char(systimestamp, '"day:"DD"hello"') from dual;
You'll get the following result:
day:08йцукенг OR any other UTF-encoded-so
day:08hello
You can see that "mething" is lost. That is exactly the 7 bytes exceeded because of the 7 two-byte symbols "йцукенг": Oracle allocates the buffer for the number of characters, not the number of required bytes.
The command
alter session set nls_length_semantics=byte/char
unfortunately does not affect this behavior.
So my solution is to cast the result as varchar2(enough_capacity):
select cast(to_char(systimestamp, '"day:"DD"йцукенг OR any other UTF-encoded-something"') as varchar(1000)) from dual
union all
select to_char(systimestamp, '"day:"DD"hello"') from dual
Explicit typecasting makes the expression independent of the client or configuration.
BTW, the same thing happens in all implicit to_char-conversions. E.g.
case [numeric_expression]
when 1 then '[unicode_containing_string]'
end
The result might be cut.
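The same CAST workaround applies there as well; here is a rough sketch (the numeric expression and the string literal are just placeholders):
-- Placeholder sketch: wrap the whole CASE in an explicit CAST so the result
-- type is a VARCHAR2 wide enough for the unicode-containing string.
select cast(case 1
              when 1 then 'йцукенг or any other unicode-containing string'
            end as varchar2(1000)) as label
from dual;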

ORA-01438 value doesn't fit into defined Number(11,7) data type

I understand the idea of the NUMBER datatype and I am familiar with the information on this page: http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i22289
However, it seems I am still missing something, because I don't really understand why I am getting this ORA-01438 error:
select cast (18000.0 as number(11,7)) from dual;
Results in
ORA-01438: value larger than specified precision allowed for this column
01438. 00000 - "value larger than specified precision allowed for this column"
*Cause: When inserting or updating records, a numeric value was entered
that exceeded the precision defined for the column.
*Action: Enter a value that complies with the numeric column's precision,
or use the MODIFY option with the ALTER TABLE command to expand
the precision.
At the same time, reducing the scale from 7 to 6 works like a charm:
select cast (18000.0 as number(11,6)) from dual;
This happens under 'Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production'.
Can someone enlighten me as to why this is happening?
Thank you, I appreciate any help.
number(11,7) allows numbers with a total of 11 significant digits, 7 of which are fractional digits. That in turn means you have 11 - 7 = 4 non-fractional digits.
18000 has five non-fractional digits, which is one too many.
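A quick way to see the boundary (just a sketch):
select cast(9999.9999999 as number(11,7)) from dual;  -- works: 4 integer digits
select cast(18000.0      as number(11,7)) from dual;  -- ORA-01438: 5 integer digits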

Why does Oracle round up a number with less than 38 significant digits?

We have Oracle Server 10.2.
To test this, I have a very simple table.
CREATE TABLE MYSCHEMA.TESTNUMBER
(
TESTNUMBER NUMBER
)
When I try to insert 0.98692326671601283 the number gets rounded up.
INSERT INTO MYSCHEMA.TESTNUMBER (TESTNUMBER)
VALUES (0.98692326671601283);
The select returns:
select * from TESTNUMBER
0.986923266716013
It rounds the last three digits, "283", up to "3".
Even looking at it with the TOAD UI and trying to enter it with TOAD, I get the same result.
Why? Is it possible to insert this number into an Oracle NUMBER without it getting rounded?
I think you need to look into how your client program displays number values. An Oracle NUMBER should store that value with full precision; but the value may be rounded for display by the client.
For instance, using SQL*Plus:
dev> create table dctest (x number);
Table created.
dev> insert into dctest VALUES (0.98692326671601283);
1 row created.
dev> select * from dctest;
X
----------
.986923267
dev> column x format 0.000000000000000000000000000
dev> /
X
------------------------------
0.986923266716012830000000000
As you can see, the default format shows only the first 9 significant digits. But when I explicitly change the column formatting (a client-side feature of SQL*Plus), the full value inserted is displayed.
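Another way to convince yourself that the stored value is intact, independent of client column formatting, is to format it explicitly with TO_CHAR (a sketch; the mask just carries enough decimal places):
-- 17 decimal places is enough to show the full inserted value.
select to_char(x, 'FM0.00000000000000000') as full_value
from dctest;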

Oracle: Coercing VARCHAR2 and CLOB to the same type without truncation

In an app that supports MS SQL Server, MySQL, and Oracle, there's a table with the following relevant columns (types shown here are for Oracle):
ShortText VARCHAR2(1700) indexed
LongText CLOB
The app stores values 850 characters or less in ShortText, and longer ones in LongText. I need to create a view that returns that data, whichever column it's in. This works for SQL Server and MySQL:
SELECT
CASE
WHEN ShortText IS NOT NULL THEN ShortText
ELSE LongText
END AS TheValue
FROM MyTable
However, on Oracle, it generates this error:
ORA-00932: inconsistent datatypes: expected CHAR got CLOB
...meaning that Oracle won't implicitly convert the two columns to the same type, so the query has to do it explicitly. I don't want data to get truncated, so the type used has to be able to hold as much data as a CLOB, which, as I understand it (I'm not an Oracle expert), means CLOB itself; no other choice is available.
This works on Oracle:
SELECT
CASE
WHEN ShortText IS NOT NULL THEN TO_CLOB(ShortText)
ELSE LongText
END AS TheValue
FROM MyTable
However, performance is amazingly awful. A query that returns LongText directly took 70-80 ms for about 9k rows, but the above construct took between 30 and 60 seconds, which is unacceptable.
So:
Are there any other Oracle types I could coerce both columns to that can hold as much data as a CLOB? Ideally something more text-oriented, like MySQL's LONGTEXT or SQL Server's NTEXT (or even better, NVARCHAR(MAX))?
Any other approaches I should be looking at?
Some specifics, in particular the ones requested by @Guido Leenders:
Oracle version: Oracle Database 11g 11.2.0.1.0 64bit Production
Not certain if I was the only user, but the relative times are still striking.
Stats for the small table where I saw the performance I posted earlier:
rowcount: 9,237
varchar column total length: 148,516
clob column total length: 227,020
The TO_CLOB is pretty expensive, so try to avoid it. But I think it should perform reasonably well for 9K rows. The following test case is based upon one of the applications we develop, which has similar data-model behaviour:
create table bubs_projecten_sample
( id            number
, toelichting   varchar2(1700)
, toelichting_l clob
);

begin
  for i in 1..10000
  loop
    insert into bubs_projecten_sample
    ( id
    , toelichting
    , toelichting_l
    )
    values
    ( i
    , case when mod(i, 2) = 0 then 'short' else null end
    , case when mod(i, 2) = 0 then rpad('long', i, '*') else null end
    );
  end loop;
  commit;
end;
/
Now make sure everything is in the cache and dirty blocks are written out:
select *
from bubs_projecten_sample;
Test performance:
create table bubs_projecten_flat
as
select id
, to_clob(toelichting) toelichting_any
from bubs_projecten_sample
where toelichting is not null
union all
select id
, toelichting_l
from bubs_projecten_sample
where toelichting_l is not null;
The CREATE TABLE takes less than 1 second on a normal entry-level server, including writing out the data: 17K consistent gets and 4K physical reads. Stored on disk (note the rpad) is 25K for toelichting and 16M for toelichting_l.
Can you further elaborate on the problem?
Please check that large CLOBs are not stored inline. Normally large CLOBs are stored in a separate system-maintained table. Storing large CLOBs inside a table can make going through the table with a Full Table Scan expensive.
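One way to check that (a sketch, assuming the table is called MyTable as in the question) is to query USER_LOBS:
-- IN_ROW = 'YES' means small CLOB values are stored inline with the row;
-- large values always go to the separate LOB segment.
select table_name, column_name, in_row, chunk
from user_lobs
where table_name = 'MYTABLE';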
Also, I can imagine populating both columns always. You still get the benefit of indexing on the first so many characters. You just need to record in the table, with an indicator column, whether the CLOB or the ShortText column is leading.
As a side note: I see a difference between 850 and 1700. I would recommend making them equal, but remember to check that you are creating the table using character semantics. That can be done at statement level by using "varchar2(850 char)", as in the sketch below. Please note that Oracle will actually create a column that fits 850 * 4 bytes (in AL32UTF8 at least, where the "32" stands for "at most 4 bytes per character"). Good luck!
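A minimal sketch of such a character-semantics declaration (the table name is invented so it doesn't clash with the real one):
-- 850 characters regardless of byte length; in AL32UTF8 Oracle reserves
-- up to 850 * 4 bytes for this column.
create table MyTable_char_semantics
( ShortText varchar2(850 char)
, LongText  clob
);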
