ORA-01438 value doesn't fit into defined Number(11,7) data type - oracle

I understand the idea of the NUMBER data type and I am familiar with the information on this page: http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i22289
However, it seems I am still missing something, because I don't understand why I am getting this ORA-01438 error:
select cast (18000.0 as number(11,7)) from dual;
Results in
ORA-01438: value larger than specified precision allowed for this column
01438. 00000 - "value larger than specified precision allowed for this column"
*Cause: When inserting or updating records, a numeric value was entered
that exceeded the precision defined for the column.
*Action: Enter a value that complies with the numeric column's precision,
or use the MODIFY option with the ALTER TABLE command to expand
the precision.
At the same time, reducing the scale from 7 to 6 works like a charm:
select cast (18000.0 as number(11,6)) from dual;
This happens under 'Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production'.
Can someone enlighten me on why this is happening?
Thank you, appreciate any help.

NUMBER(11,7) allows numbers with a total of 11 significant digits, 7 of which are fractional digits. That in turn means you have 11 - 7 = 4 integer digits.
18000 has five integer digits, which is one too many.
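To make the boundary concrete (a quick illustration, not from the original answer): four integer digits fit, five do not, and reducing the scale frees up a fifth integer digit:
select cast (9999.9999999 as number(11,7)) from dual;  -- works: 4 integer digits, 7 fractional
select cast (10000.0 as number(11,7)) from dual;       -- ORA-01438: 5 integer digits
select cast (18000.0 as number(11,6)) from dual;       -- works: 11 - 6 = 5 integer digits allowed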

Related

Insertion of characters into number column

I have a table with several number columns that are populated by an ASP.NET application using bind variables.
After an upgrade of the Oracle client to 19c and a server change, instead of raising an error on insert of invalid data, the code inserts garbage, and the application crashes afterwards.
Any help is appreciated in finding the root cause.
SELECT trial1,
DUMP (trial1, 17),
DUMP (trial1, 1016),
trial3,
DUMP (trial3,17),
DUMP (trial3, 1016)
Result in SQL Navigator: [screenshot of the query results]
Oracle 12c
Oracle client 19
My DBA found this on Oracle Support, and it led us to find the error on the application side:
NaN is a specific IEEE 754 value. However, Oracle NUMBER is not IEEE 754
compliant. Therefore, if you force the data representing NaN into a
NUMBER column, results are unpredictable.
SOLUTION: If you can put a value in a C float, double, int etc. you can
load this into the database as no checks are undertaken - just as with
the Oracle NUMBER datatype it's up to the application to ensure the
data is valid. If you use the proper IEEE 754 compliant type, e.g.
BINARY_FLOAT, then NaN is recognised and handled correctly.
You have bad data: you have tried to store a double-precision NaN value in a NUMBER column rather than a BINARY_DOUBLE column.
We can duplicate the bad data with the following function (never use this in a production environment):
CREATE FUNCTION createNumber(
hex VARCHAR2
) RETURN NUMBER DETERMINISTIC
IS
n NUMBER;
BEGIN
DBMS_STATS.CONVERT_RAW_VALUE( HEXTORAW( hex ), n );
RETURN n;
END;
/
Then, we can duplicate your bad values using the hexadecimal values from your DUMP output:
CREATE TABLE table_name (trial1 NUMBER, trial3 NUMBER);
INSERT INTO table_name (trial1, trial3) VALUES (
createNumber('FF65'),
createNumber('FFF8000000000000')
);
Then:
SELECT trial1,
DUMP(trial1, 16) AS t1_hexdump,
trial3,
DUMP(trial3, 16) AS t3_hexdump
FROM table_name;
Replicates your output:
TRIAL1    T1_HEXDUMP            TRIAL3    T3_HEXDUMP
~         Typ=2 Len=2: ff,65    null      Typ=2 Len=8: ff,f8,0,0,0,0,0,0
Any help is appreciated in finding the root cause.
You need to go back through your application, work out where the bad data came from, determine what the original data was, and debug the steps it went through to work out whether it was:
Always bad data, in which case you need to add validation to your application to make sure the bad data does not get propagated; or
Good data that a bug in your code corrupted, in which case you need to fix the bug.
As for the existing bad data, you either need to correct it (if you know what it should be) or delete it.
We cannot help with any of that as we do not have visibility of your application nor do we know what the correct data should have been.
If you want to store that data as a floating point then you need to change from using a NUMBER to using a BINARY_DOUBLE data type:
CREATE TABLE table_name (value BINARY_DOUBLE);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_INFINITY);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_NAN);
Then:
SELECT value,
DUMP(value, 16)
FROM table_name;
Outputs:
VALUE    DUMP(VALUE,16)
Inf      Typ=101 Len=8: ff,f0,0,0,0,0,0,0
Nan      Typ=101 Len=8: ff,f8,0,0,0,0,0,0
The BINARY_DOUBLE_NAN dump exactly matches the binary value in your column: you have tried to insert a Not-a-Number value into a NUMBER column (which does not support it) in the format expected by a BINARY_DOUBLE column (which would support it).
The issue was a division by zero on the application side that was inserted as infinity into the database; Oracle NUMBER has unpredictable behavior with these values.
Please see the original post above for all the details.
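As a side note, the difference in division-by-zero behavior is easy to see directly (a minimal sketch, not from the original post): NUMBER arithmetic raises an error, while BINARY_DOUBLE follows IEEE 754 and yields infinity, which is what the application then tried to store.
select 1/0 from dual;    -- ORA-01476: divisor is equal to zero (NUMBER arithmetic)
select 1d/0d from dual;  -- returns Inf (BINARY_DOUBLE follows IEEE 754)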

Oracle SQL PLSQL large number field strange behavior

There is an existing table called temptable; the column largenumber is a NUMBER field with no precision set:
largenumber NUMBER;
Query:
select largenumber from temptable;
It returns:
-51524845525550100000000000000000000
But If I do
column largenumber format 999999999999999999999999999999999999999
And then
select largenumber from temptable;
It returns:
-51524845525550:100000000000000000000
Why is there a colon?
To test, I took the number, removed the colon, and inserted it into another table, temptable2. With the same column largenumber format, the select returns the number without the colon:
select largenumber from temptable2;
It returns:
-51524845525550100000000000000000000
So the colon is not present here.
So what could possibly be in the original number field to cause that colon?
In the original row, if I do a select and try any TO_CHAR, REPLACE, CAST, or concatenation to text, I get a number conversion error.
For example, trying to generate a csv:
select '"' || largenumber || '",'
FROM temptable;
would result in:
ORA-01722 ("invalid number") error occurs when an attempt is made to convert a character string into a number, and the string cannot be converted into a valid number
In a comment (in response to a question from me), you shared that dump(largenumber) on the offending value returns
Typ=2 Len=8: 45,50,56,53,52,48,46,48
From the outset, that means that the data stored on disk is invalid (it is not a valid representation of a value of number data type). Typ=2 is correct, that is for data type number. The length (8 bytes) is correct (we can all count to eight to see that).
What is wrong is the bytes themselves. And, we only need to inspect the first and the last byte to see that.
The first byte is 45. It encodes the sign and the exponent of your number. The first bit (1 or 0) represents the sign: 1 for positive, 0 for negative. 45 is less than 128, so the first bit in the first byte is 0; so the number is negative. (So far this matches what you know about the intended value.)
But, for negative numbers, the last byte is always the magic value 102. Always. In another comment under your original question, Connor McDonald asks about your platform - but this is platform-independent, it is how Oracle encodes numbers for permanent storage on any platform. So, we already know that the dump value you got tells us the value is invalid.
In fact, Connor, in the same comment, gave the correct representation of that number (according to Oracle's scheme for internal representation of numbers). Indeed, just the last byte is wrong: your dump shows 48, but it should be 102.
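To see the terminator for yourself (a quick check, not part of the original answer), dump a small positive and a small negative value:
select dump(1) from dual;   -- Typ=2 Len=2: 193,2
select dump(-1) from dual;  -- Typ=2 Len=3: 62,100,102  (note the trailing terminator byte 102)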
How can you fix this? If it's a one-off, just use an update statement to replace the value with the correct one and move on. If your table has a primary key, let's call it id, then find the id for this row, and then
update {your_table} set largenumber = -50...... where id = {that_id};
Question is, how many such corrupt values might you have in your table? If it's just one, you can shrug it off; but if it's many (or even "a handful") you may want to figure out how they got there in the first place.
In most cases, the database will reject invalid values; you can't simply insert 'abc' in a number column, for example. But there are ways to get bad data in; even intentionally, and in a repeatable way. So, you would have to investigate how the bad values were inserted (what process was used for insertion).
For a trivial way to insert bad data in a number column, in a repeatable manner, you can see this thread on the Oracle developers forum: https://community.oracle.com/tech/developers/discussion/3903746/detecting-invalid-values-in-the-db
Please be advised that I had just started learning Oracle at that time (I was less than two months in), so I may have said some stupid things in that thread; but the method to insert bad data is described there in full detail, and it was tested. That shows just one possible (and plausible!) way to insert invalid stuff in a table; how it happened in your specific case, you will have to investigate yourself.
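In fact, the createNumber helper from the earlier answer in this document is one such route. As an illustrative sketch (assuming that helper has been created; temptable3 is just a placeholder name), the exact corrupt dump from this question can be replicated, since the decimal byte values 45,50,56,53,52,48,46,48 happen to be the ASCII codes for the string '-28540.0':
CREATE TABLE temptable3 (largenumber NUMBER);
INSERT INTO temptable3 (largenumber) VALUES (
  createNumber('2D32383534302E30')  -- raw bytes of the ASCII string '-28540.0'
);
SELECT dump(largenumber) FROM temptable3;
-- Typ=2 Len=8: 45,50,56,53,52,48,46,48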

Cannot store a large double into Oracle (ORA-01426: numeric overflow)

In my Oracle (Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production), this query fails:
select 9.1E+136 from dual;
It tells me something like ORA-01426: numeric overflow (I've tried 9.1E136 and 9E136 as well). This is really strange, since numbers up to about 2E+308 should be supported (http://docs.oracle.com/javadb/10.10.1.2/ref/rrefsqljdoubleprecision.html).
I bumped into this problem from a Hibernate application, which maps a double field to FLOAT with a default precision of 126, which should be more than enough (http://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj27281.html).
Does anyone have any idea? Does it depend on some configuration parameter? Thank you in advance.
OK, I've found a solution: there is the BINARY_DOUBLE type, and numbers are cast to it when a d is appended to their value:
select 9.1E+136d from dual; -- works
select 9.1E+136 from dual;  -- doesn't work
create table test ( no binary_double primary key );
insert into test values ( 9.2E136d ); -- OK
insert into test values ( 9.3E136 );  -- fails
So needlessly stupid...
The Oracle documentation states that:
Positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 with up to 38 significant digits
You are overflowing the NUMBER data type.
Numeric Types
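To see the documented limit directly (an illustrative check, not from the original answer):
select 9.1E+125 from dual;  -- works: within the NUMBER range
select 9.1E+126 from dual;  -- ORA-01426: numeric overflow
select 9.1E+126d from dual; -- works: BINARY_DOUBLE ranges up to about 1.8E+308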

Truncating Numbers with Oracle 12c

This is not about truncating to decimal places, but about truncating whole numbers with Oracle 12c.
select CAST('123456789' AS NUMBER(4)) from DUAL;
It would be great if this returned '1234' instead of throwing an exception.
As suggested by Justin, it is not sensible to do so. But it can be done like this.
select cast(substr('123456789',1,4) as integer) from dual;
But this will not work in the following scenarios (a workaround covering both is sketched below):
You have a 0 before the number.
You have a decimal point before the 4th digit (e.g., 123.4).
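As an illustrative sketch (not from the original answer), stripping every non-digit character before taking the substring sidesteps both caveats:
select cast(substr(regexp_replace(to_char(123.456789), '[^0-9]', ''), 1, 4) as integer) from dual;
-- returns 1234 rather than failing on the decimal point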

Delphi getting Oracle error ORA-12899 Error value too large for the column

I have a database in Oracle. The client application is written in Delphi. When I enter values into the DBGrid, which is connected to a table in the database, I get "ORA-12899: value too large for column". The data type of the column named in the error message is VARCHAR2(6), and I enter exactly 6 digits. The error also says that the maximum is 6 and the actual is 7, which is wrong. I tried changing the data type to NUMBER, but I get the same error, with the only difference that it says the maximum is 3 and the actual is 4. Is there a bug with Delphi and Oracle? I use ADO for the connection. There is nothing in the BeforePost event.
Not knowing anything at all about Delphi, could it be that your grid data cell is interpreted as a number and a space is being reserved for the sign?
EDIT:
What happens if you type 6 characters but include 1 or more alphas?
