Insertion of characters into a NUMBER column - Oracle

I have a table with several NUMBER columns that are populated through an ASP.NET application using bind variables.
After an upgrade of the Oracle client to 19c and a server change, the code no longer raises an error on insert of invalid data; instead it inserts garbage and the application crashes afterwards.
Any help is appreciated in finding the root cause.
SELECT trial1,
       DUMP(trial1, 17),
       DUMP(trial1, 1016),
       trial3,
       DUMP(trial3, 17),
       DUMP(trial3, 1016)
FROM   your_table; -- table name not shown in the original post
Result in SQL Navigator (the original post shows a screenshot of the query results here):
Oracle server: 12c
Oracle client: 19
My DBA found this on Oracle Support, and it led us to find the error on the application side:
NaN is a specific IEEE754 value. However, Oracle NUMBER is not IEEE754 compliant. Therefore, if you force data representing NaN into a NUMBER column, the results are unpredictable.
SOLUTION: If you can put a value in a C float, double, int etc. you can load it into the database, as no checks are undertaken - just as with the Oracle NUMBER datatype, it's up to the application to ensure the data is valid. If you use the proper IEEE754 compliant type, e.g. BINARY_FLOAT, then NaN is recognised and handled correctly.

You have bad data because you have tried to store a double-precision NaN value in a NUMBER column rather than a BINARY_DOUBLE column.
We can duplicate the bad data with the following function (never use this in a production environment):
CREATE FUNCTION createNumber(
  hex VARCHAR2
) RETURN NUMBER DETERMINISTIC
IS
  n NUMBER;
BEGIN
  DBMS_STATS.CONVERT_RAW_VALUE( HEXTORAW( hex ), n );
  RETURN n;
END;
/
Then, we can duplicate your bad values using the hexadecimal values from your DUMP output:
CREATE TABLE table_name (trial1 NUMBER, trial3 NUMBER);
INSERT INTO table_name (trial1, trial3) VALUES (
  createNumber('FF65'),
  createNumber('FFF8000000000000')
);
Then:
SELECT trial1,
       DUMP(trial1, 16) AS t1_hexdump,
       trial3,
       DUMP(trial3, 16) AS t3_hexdump
FROM   table_name;
Replicates your output:
TRIAL1  T1_HEXDUMP          TRIAL3  T3_HEXDUMP
------  ------------------  ------  ------------------------------
~       Typ=2 Len=2: ff,65  null    Typ=2 Len=8: ff,f8,0,0,0,0,0,0
Any help is appreciated in finding the root cause.
You need to go back through your application, work out where the bad data came from, determine (if you can) what the original data was, and debug the steps it went through in the application to work out whether it was:
Always bad data, in which case you need to add validation to your application to make sure bad data does not get propagated; or
Good data that a bug in your code corrupted, in which case you need to fix the bug.
As for the existing bad data, you either need to correct it (if you know what it should be) or delete it.
We cannot help with any of that as we do not have visibility of your application nor do we know what the correct data should have been.
If you want to store that data as a floating point then you need to change from using a NUMBER to using a BINARY_DOUBLE data type:
CREATE TABLE table_name (value BINARY_DOUBLE);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_INFINITY);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_NAN);
Then:
SELECT value,
       DUMP(value, 16)
FROM   table_name;
Outputs:
VALUE  DUMP(VALUE,16)
-----  --------------------------------
Inf    Typ=101 Len=8: ff,f0,0,0,0,0,0,0
Nan    Typ=101 Len=8: ff,f8,0,0,0,0,0,0
The DUMP of BINARY_DOUBLE_NAN exactly matches the binary value in your column: you have tried to insert a Not-A-Number value into a NUMBER column (which does not support it) in the format expected for a BINARY_DOUBLE column (which would support it).

The issue was a division by zero on the application side that was inserted as infinity into the database; Oracle has unpredictable behaviour with these values in a NUMBER column.
Please see original post above for all the details.
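A follow-up for anyone who hits the same thing: if the column is migrated to BINARY_DOUBLE as suggested in the answer, the database itself can reject non-finite values. A minimal sketch, assuming the single-column table from the answer above:
-- Reject NaN and infinity at insert time instead of storing garbage.
ALTER TABLE table_name ADD CONSTRAINT value_is_finite
  CHECK (value IS NOT NAN AND value IS NOT INFINITE);
With this constraint in place, the division-by-zero result would have failed loudly on insert instead of silently corrupting the data.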

Related

MD5 of entire row in oracle

Below is the query to get the MD5 of an entire row in Snowflake:
SELECT MD5(TO_VARCHAR(ARRAY_CONSTRUCT(*))) FROM T
taken from here
What is the alternative query in Oracle to achieve the same requirement without having to list all the column names manually?
You may use the packaged function dbms_sqlhash.gethash as described below, but remember:
The package was removed from the documentation (I guess in 11g), so in recent releases this is an undocumented feature.
If you calculate the hash code from more than one row, you must define an order (ORDER BY on a unique key). Otherwise the calculated hash is not deterministic. (This was probably the reason for the removal.)
Columns with data types other than VARCHAR2 are converted to strings before the hash calculation, so the result depends on the NLS settings. You must stabilize the NLS settings to get reproducible results, e.g. with alter session set nls_date_format='dd-mm-yyyy hh24:mi:ss';
The columns must be concatenated with some special delimiter (one that does not occur in the data) to avoid collisions: 'A' || null is the same as null || 'A'. These internals are undocumented, so it is rather hard to compare the resulting MD5 hash with a hash calculated on other (non-Oracle) data.
You need an extra grant to execute the package.
Some additional info
Example
select * from tab where x=1;
X Y Z
---------- - -------------------
1 a 13.10.2021 00:00:00
select dbms_sqlhash.gethash(
         sqltext     => 'select * from tab where x=1',
         digest_type => 2 /* dbms_crypto.hash_md5 */
       ) MD5
from   dual;
MD5
--------------------------------
215A9C4642A3691F951DD8060877D191
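Putting the caveats above together: stabilize the NLS settings and impose a deterministic order on a unique key before hashing. A minimal sketch, assuming x is a unique key of tab:
alter session set nls_date_format = 'dd-mm-yyyy hh24:mi:ss';

select dbms_sqlhash.gethash(
         sqltext     => 'select * from tab order by x',
         digest_type => 2 /* dbms_crypto.hash_md5 */
       ) MD5
from   dual;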
Order Independent Hash Code of a Table
Unlike a file (where order matters), in a database table the order of rows is not relevant. It would therefore be meaningful to be able to calculate an order-independent hash code of a table.
Unfortunately this feature is currently not available in Oracle, but it was implemented as a prototype as described here.
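In the meantime one can approximate it: hash each row individually and combine the per-row hashes with an order-insensitive aggregate. A hedged sketch (not the linked prototype), assuming the table tab with columns x, y, z from the example, a delimiter '|' that does not occur in the data, and Oracle 12c+ for STANDARD_HASH; collision resistance is weaker than hashing an ordered stream, and the same NLS caveats apply to the implicit string conversions:
-- Sum the leading 60 bits of each row's MD5; row order no longer matters.
select sum(to_number(
         substr(rawtohex(standard_hash(x || '|' || y || '|' || z, 'MD5')), 1, 15),
         'xxxxxxxxxxxxxxx')) table_hash
from   tab;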

Oracle determinism requirements and idiosyncrasies

I've been troubled by my lack of understanding about an issue that periodically emerges: function determinism.
From the docs, it seems fairly clear:
A DETERMINISTIC function may not have side effects.
A DETERMINISTIC function may not raise an unhandled exception.
As these are important core concepts with robust, central implementations in standard packages, I don't think there is a bug or anything (the fault lies in my assumptions and understanding, not Oracle). That being said, both of these requirements sometimes appear to have some idiosyncratic uses within the standard package and the DBMS_ and UTL_ packages.
I hoped to post a couple of examples of Oracle functions that raise some doubts for me in my use of DETERMINISTIC and the nuances in these restrictions, and see if anyone can explain how things fit together. I apologize this is something of a "why" question and it can be migrated if needed, but the response to this question: (Is it ok to ask a question where you've found a solution but don't know why something was behaving the way it was?) made me think it might be appropriate for SO.
Periodically in my coding, I face uncertainty about whether my own UDFs qualify as pure, and at other times I am greatly surprised to learn that Oracle functions I use are impure. If anyone can take a look and advise, I would be grateful.
As a first example, consider TO_NUMBER. This function seems pure, but it also throws exceptions. In this example I'll use TO_NUMBER in a virtual column (DETERMINISTIC should be required here):
CREATE TABLE TO_NUMBER_IS_PURE_BUT_THROWS (
  SOURCE_TEXT    CHARACTER VARYING(5 CHAR),
  NUMERICIZATION NUMBER(5, 0) GENERATED ALWAYS AS (TO_NUMBER(SOURCE_TEXT, '99999')),
  CONSTRAINT POSITIVE_NUMBER CHECK (NUMERICIZATION >= 0)
);
Table TO_NUMBER_IS_PURE_BUT_THROWS created.
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('0',DEFAULT);
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('88088',DEFAULT);
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('UH-OH',DEFAULT);
1 row inserted.
1 row inserted.
ORA-01722: invalid number
The ORA-01722 would seem to violate the unhandled-exception requirement. Presumably any function I create that casts via TO_NUMBER should handle this possibility to remain pure. But throwing the exception here seems appropriate and reliable. It seems there is some debate about whether exceptions violate referential transparency (Why is the raising of an exception a side effect?).
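For instance, I could design the unhandled exception away by letting my own cast handle the conversion failure. A minimal sketch, assuming that returning NULL for unparseable input is acceptable (SAFE_TO_NUMBER is a hypothetical helper, not an Oracle built-in):
CREATE OR REPLACE FUNCTION SAFE_TO_NUMBER(SOURCE_TEXT IN VARCHAR2)
RETURN NUMBER DETERMINISTIC IS
BEGIN
  RETURN TO_NUMBER(SOURCE_TEXT, '99999');
EXCEPTION
  -- Swallow the conversion error so the function never raises on bad input.
  WHEN VALUE_ERROR OR INVALID_NUMBER THEN
    RETURN NULL;
END SAFE_TO_NUMBER;
/
(On 12.2+ the same effect is available inline via TO_NUMBER(... DEFAULT NULL ON CONVERSION ERROR ...).)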
The second situation I encounter is system functions that seem like they should be DETERMINISTIC but aren't. There must be some reason they are considered impure, but in some cases it seems unfathomable that the internals would generate side effects.
An extreme example of this is DBMS_ASSERT.NOOP, though there are many others. The function returns its input unmodified. How can it be nondeterministic?
CREATE TABLE HOW_IS_NOOP_IMPURE (
  SOURCE_TEXT VARCHAR2(256 BYTE),
  COPY_TEXT   VARCHAR2(256 BYTE) GENERATED ALWAYS AS (DBMS_ASSERT.NOOP(SOURCE_TEXT)),
  CONSTRAINT COPY_IS_NOT_NULL CHECK (COPY_TEXT IS NOT NULL)
);
Yields:
ORA-30553: The function is not deterministic
Presumably it violates the requirements for determinism, but that is hard to imagine. I wonder what I'm missing in my presumption that functions like this would be deterministic.
EDIT In response to Lukasz's comment about session settings:
I can accept it if cross-session repeatability is the root cause of functions like NOOP not being DETERMINISTIC, but TO_CHAR is deterministic/eligible for use in virtual columns et al. and yet appears to be sensitive to session settings in its format masks:
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '._';
Session altered.
CREATE TABLE TO_CHAR_NLS (
  INPUT_NUMBER NUMBER(6, 0),
  OUTPUT_TEXT  CHARACTER VARYING(64 CHAR) GENERATED ALWAYS AS (TO_CHAR(INPUT_NUMBER, '999G999'))
);
Table TO_CHAR_NLS created.
INSERT INTO TO_CHAR_NLS VALUES (123456,DEFAULT);
INSERT INTO TO_CHAR_NLS VALUES (111222,DEFAULT);
1 row inserted.
1 row inserted.
SELECT INPUT_NUMBER, OUTPUT_TEXT FROM TO_CHAR_NLS ORDER BY 1 ASC;

INPUT_NUMBER OUTPUT_TEXT
------------ -----------
      111222 111_222
      123456 123_456
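As an aside, the NLS sensitivity can be pinned down per call by passing the NLS parameters as TO_CHAR's optional third argument, which makes the result independent of the session. A minimal sketch:
SELECT TO_CHAR(123456, '999G999', 'NLS_NUMERIC_CHARACTERS=''.,''') AS OUTPUT_TEXT
FROM DUAL;
-- => ' 123,456' regardless of the session's NLS_NUMERIC_CHARACTERS
Whether that restores determinism in the eyes of the virtual-column machinery is exactly the kind of nuance I'm asking about.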
The ORA-01722 would seem to violate the unhandled-exception requirement. Presumably any function I create that casts via TO_NUMBER should handle this possibility to remain pure.
Firstly, I must appreciate you for asking such a good question. Now, when you say you used TO_NUMBER, it should convert all the text input to the function, but you should know that TO_NUMBER has some restrictions.
As per the TO_NUMBER definition:
The TO_NUMBER function converts a formatted TEXT or NTEXT expression to a number. This function is typically used to convert the formatted numerical output of one application (which includes currency symbols, decimal markers, thousands group markers, and so forth) so that it can be used as input to another application.
It clearly says it is used to cast the formatted numerical output of one application; that means TO_NUMBER itself expects numerical input, and when you write as below:
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('UH-OH',DEFAULT);
You passed completely unexpected input to the TO_NUMBER function, and hence it throws the error ORA-01722: invalid number as the expected behavior.
Read more about TO_NUMBER.
Secondly,
An extreme example of this could be DBMS_ASSERT.NOOP though there are many others. The function returns its input unmodified. How can it be nondeterministic?
The DBMS_ASSERT.NOOP function can be used where someone is passing an actual piece of code through a variable and doesn't want it checked for SQL injection attacks.
This has to be nondeterministic as it just returns whatever we input to the function.
Let me show an example to demonstrate why such a function has to be non-deterministic.
Let's say I create a function years_from_today as deterministic:
CREATE OR REPLACE FUNCTION years_from_today
( p_date IN DATE )
RETURN NUMBER DETERMINISTIC IS
BEGIN
  RETURN ABS(MONTHS_BETWEEN(SYSDATE, p_date) / 12);
END years_from_today;
/
Now I create a table and use this function in a query as below:
CREATE TABLE det_test AS
SELECT TO_DATE('01-JUL-2009', 'DD-MON-YYYY') AS date_value
FROM dual;
SELECT date_value, SYSDATE, years_from_today(date_value)
FROM det_test
WHERE years_from_today(date_value) < 2;
Output
DATE_VALU SYSDATE YEARS_FROM_TODAY(DATE_VALUE)
--------- --------- ----------------------------
01-JUL-09 20-SEP-10 1.21861774
Then I create a function-based index on the new table.
CREATE INDEX det_test_fbi ON det_test (years_from_today(date_value));
Now, to see the implications of our DETERMINISTIC choice, change the date on the server (in a test environment of course) to move ahead a full year. Even though the date has changed, running the query again will still return the same value as before from YEARS_FROM_TODAY, along with the same row, because the index is used instead of executing the function.
SELECT date_value, SYSDATE, years_from_today(date_value)
FROM det_test
WHERE years_from_today(date_value) < 2;
Output:
DATE_VALU SYSDATE YEARS_FROM_TODAY(DATE_VALUE)
--------- --------- ----------------------------
01-JUL-09 20-SEP-11 1.2186201
Without the WHERE clause, the query should return the following:
DATE_VALU SYSDATE YEARS_FROM_TODAY(DATE_VALUE)
--------- --------- ----------------------------
01-JUL-09 20-SEP-11 2.21867063
As is evident from the erroneous output, a function should never be created as deterministic unless it will ALWAYS return the same value given the same parameters.
Hence your assumption that DBMS_ASSERT.NOOP should be deterministic does not hold true in all cases.

Cannot store a large double into Oracle (ORA-01426: numeric overflow)

In my Oracle (Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production), this query fails:
select 9.1E+136 from dual;
It tells me something like ORA-01426: numeric overflow (I've tried 9.1E136 and 9E136 as well). This is really strange, since numbers up to about 2E+308 should be supported (http://docs.oracle.com/javadb/10.10.1.2/ref/rrefsqljdoubleprecision.html).
I bumped into this problem from a Hibernate application, which maps a double field to FLOAT with the default precision of 126, which should be more than enough (http://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj27281.html).
Anyone any idea? Does it depend on some configuration parameter? Thank you in advance.
OK, I've found a solution: there is the BINARY_DOUBLE type, and numeric literals are treated as BINARY_DOUBLE when a d is appended to their value:
select 9.1E+136d from dual; -- works
select 9.1E+136 from dual;  -- doesn't work
create table test ( no binary_double primary key );
insert into test values ( 9.2E136d ); -- OK
insert into test values ( 9.3E136 );  -- fails
So needlessly stupid...
The Oracle documentation states that NUMBER supports:
Positive numbers in the range 1 x 10^-130 to 9.99...9 x 10^125 with up to 38 significant digits
You are overflowing the NUMBER data type.
Numeric Types
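A quick boundary check makes the limit concrete (a hedged sketch; literals without the d suffix are NUMBER, with it BINARY_DOUBLE):
select 9.999999E+125 from dual; -- works: just inside the NUMBER range
select 1E+126 from dual;        -- ORA-01426: numeric overflow
select 1E+126d from dual;       -- works: evaluated as a BINARY_DOUBLE literal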

Oracle CHAR Comparison Not Working in Function

Could someone please explain to me the difference between the two Oracle queries below? I know they look very similar, but the first one returns results and the second one does not. My implementation of the function can be seen below as well.
--Returns results
SELECT *
FROM <TABLE_NAME>
WHERE ID = CAST(<UserID> AS CHAR(2000)); --ID is defined as CHAR(8) in the DB.
--Does not return results
SELECT *
FROM <TABLE_NAME>
WHERE ID = CAST_TO_CHAR(<UserID>); --ID is defined as CHAR(8) in the DB.
--Function definition
CREATE OR REPLACE FUNCTION CAST_TO_CHAR(varToPad IN VARCHAR2)
RETURN CHAR IS
  returnVal CHAR(2000);
BEGIN
  SELECT CAST(varToPad AS CHAR(2000))
  INTO   returnVal
  FROM   DUAL;
  RETURN returnVal;
END;
/
It almost seems to me that the type does not persist when the value is retrieved from the database. From what I understand of CHAR comparisons in Oracle, it will take the smaller of the two fields and truncate the larger one so that the sizes match (that is why I am casting the second variable to length 2000).
The reason I need to achieve something like this is that a vendor tool we are upgrading from DB2 to Oracle defined all of the columns in the Oracle database as CHAR instead of VARCHAR2. They did this to make their legacy code more easily portable to a distributed environment. This is causing big issues in our web applications because comparisons are now done against fixed-length CHAR fields.
I thought about using TRIM(), but these queries will be executed a lot and I do not want them to do a full table scan each time. I also considered RPAD(, ), but I don't really want to hard-code lengths in the application as these may change in the future.
Does anyone have any thoughts about this? Thank you in advance for your help!
I had a similar problem. It turned out that these are the rules of implicit data conversion at work: Oracle Database automatically converts a value from one datatype to another when such a conversion makes sense.
If you change your select:
SELECT *
FROM <TABLE_NAME>
WHERE CAST(ID as CHAR(2000)) = CAST_TO_CHAR(<UserID>);
You will see that it works properly.
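The underlying rule is Oracle's blank-padded versus nonpadded comparison semantics: two CHAR values are compared with trailing blanks ignored, but as soon as one side is treated as a VARCHAR2, trailing blanks become significant. A minimal sketch of the difference:
SELECT 'padded' FROM dual
WHERE CAST('abc' AS CHAR(8)) = 'abc'; -- row returned: CHAR vs CHAR, blanks ignored
SELECT 'nonpadded' FROM dual
WHERE CAST('abc' AS CHAR(8)) = CAST('abc' AS VARCHAR2(8)); -- no rows: blanks count
A plausible reading of the original problem is that SQL treats the PL/SQL function's CHAR return value like a VARCHAR2, so the nonpadded rules apply and the trailing blanks in ID make the comparison fail.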
And here's another test script showing that the function works correctly:
SET SERVEROUTPUT ON -- for DBMS_OUTPUT.PUT_LINE
DECLARE
  test_string_c CHAR(8);
  test_string_v VARCHAR2(8);
BEGIN
  -- Assign the same value to each string.
  test_string_c := 'string';
  test_string_v := 'string';
  -- Test the strings for equality.
  IF test_string_c = CAST_TO_CHAR(test_string_v) THEN
    DBMS_OUTPUT.PUT_LINE('The names are the same');
  ELSE
    DBMS_OUTPUT.PUT_LINE('The names are NOT the same');
  END IF;
END;
/
anonymous block completed
The names are the same
Here are some rules that govern the direction in which Oracle Database makes implicit datatype conversions:
During INSERT and UPDATE operations, Oracle converts the value to the datatype of the affected column.
During SELECT FROM operations, Oracle converts the data from the column to the type of the target variable.
When comparing a character value with a numeric value, Oracle converts the character data to a numeric value.
When comparing a character value with a DATE value, Oracle converts the character data to DATE.
When making assignments, Oracle converts the value on the right side of the equal sign (=) to the datatype of the target of the assignment on the left side.
When you use a SQL function or operator with an argument of a datatype other than the one it accepts, Oracle converts the argument to the accepted datatype.
You can explore the complete list of datatype comparison rules here.
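The character-versus-numeric rule above explains a classic surprise: the character side is converted to a number, not the other way around, so a non-numeric string raises an error instead of comparing as text. A small illustration:
SELECT 1 FROM dual WHERE '42' = 42;  -- row returned: '42' is converted to 42
SELECT 1 FROM dual WHERE 'abc' = 42; -- ORA-01722: invalid number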

Oracle: Coercing VARCHAR2 and CLOB to the same type without truncation

In an app that supports MS SQL Server, MySQL, and Oracle, there's a table with the following relevant columns (types shown here are for Oracle):
ShortText VARCHAR2(1700) indexed
LongText CLOB
The app stores values 850 characters or less in ShortText, and longer ones in LongText. I need to create a view that returns that data, whichever column it's in. This works for SQL Server and MySQL:
SELECT CASE
         WHEN ShortText IS NOT NULL THEN ShortText
         ELSE LongText
       END AS TheValue
FROM   MyTable
However, on Oracle, it generates this error:
ORA-00932: inconsistent datatypes: expected CHAR got CLOB
...meaning that Oracle won't implicitly convert the two columns to the same type, so the query has to do it explicitly. I don't want data to get truncated, so the type used has to be able to hold as much data as a CLOB, which as I understand it (I am not an Oracle expert) means CLOB and nothing else; no other choices are available.
This works on Oracle:
SELECT CASE
         WHEN ShortText IS NOT NULL THEN TO_CLOB(ShortText)
         ELSE LongText
       END AS TheValue
FROM   MyTable
However, performance is amazingly awful. A query that returns LongText directly took 70-80 ms for about 9k rows, but the above construct took between 30 and 60 seconds, unacceptable.
So:
Are there any other Oracle types I could coerce both columns to that can hold as much data as a CLOB? Ideally something more text-oriented, like MySQL's LONGTEXT or SQL Server's NTEXT (or, even better, NVARCHAR(MAX))?
Are there any other approaches I should be looking at?
Some specifics, in particular ones requested by @Guido Leenders:
Oracle version: Oracle Database 11g 11.2.0.1.0 64bit Production
Not certain if I was the only user, but the relative times are still striking.
Stats for the small table where I saw the performance I posted earlier:
rowcount: 9,237
varchar column total length: 148,516
clob column total length: 227,020
TO_CLOB is pretty expensive, so try to avoid it. But I think it should perform reasonably well for 9K rows. The following test case is based upon one of the applications we develop, which has similar data model behaviour:
create table bubs_projecten_sample
( id            number
, toelichting   varchar2(1700)
, toelichting_l clob
);

begin
  for i in 1..10000
  loop
    insert into bubs_projecten_sample
    ( id
    , toelichting
    , toelichting_l
    )
    values
    ( i
    , case when mod(i, 2) = 0 then 'short' else null end
    , case when mod(i, 2) = 0 then rpad('long', i, '*') else null end
    );
  end loop;
  commit;
end;
/
Now make sure everything is in the cache and dirty blocks have been written out:
select *
from   bubs_projecten_sample;
Test performance:
create table bubs_projecten_flat
as
select id
,      to_clob(toelichting) toelichting_any
from   bubs_projecten_sample
where  toelichting is not null
union all
select id
,      toelichting_l
from   bubs_projecten_sample
where  toelichting_l is not null;
The create table takes less than 1 second on a normal entry-level server, including writing out the data: 17K consistent gets, 4K physical reads. Stored on disk (note the rpad) is 25K for toelichting and 16M for toelichting_l.
Can you further elaborate on the problem?
Please check that large CLOBs are not stored inline. Normally large CLOBs are stored in a separate, system-maintained table; storing them inside the table itself can make a full table scan over the table expensive.
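If the CLOBs do turn out to be stored inline, they can be moved out-of-line with the standard LOB storage clause. A hedged sketch using the column names from the question (note a MOVE marks the table's indexes UNUSABLE, so they need a rebuild afterwards):
-- Rebuild the LOB segment out-of-line so full table scans skip the CLOB data.
ALTER TABLE MyTable MOVE LOB (LongText) STORE AS (DISABLE STORAGE IN ROW);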
Also, I can imagine populating both columns always. You still get the benefit of indexing on the first so-many characters. You just need to record in the table, with an indicator column, whether the CLOB or the ShortText column is leading.
As a side note: I see a difference between 850 and 1700. I would recommend making them equal, but remember to check that you are creating the table using character semantics. That can be done at statement level by using "varchar2(850 char)". Please note that Oracle will actually create a column that fits 850 * 4 bytes (in AL32UTF8 at least, where the "32" stands for "at most 4 bytes per character"). Good luck!
