I am trying to insert the value 1234567.1234 into a NUMBER(21,12) column in my Oracle 19c database.
The number inserted is 1234567.123399999920 instead of 1234567.123400000000.
Is it possible to avoid this behavior without updating the number precision and scale?
Thanks!
Well, could you support your claim with some evidence, i.e. the tools you use for the insert, etc.? Most probably it is not a database problem.
Here is a counterexample:
create table test (col NUMBER(21,12) );
insert into test (col) values (1234567.1234);
select to_char(col,'9999999.9999999') col, dump(col) dmp from test;
COL              DMP
---------------- ----------------------------------------
 1234567.1234000 Typ=2 Len=7: 196,2,24,46,68,13,35
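You can decode that dump by hand to confirm the stored value is exact. Below is a rough sketch in Python, assuming the commonly described internal format for positive Oracle NUMBERs (an excess-193 base-100 exponent byte, then base-100 mantissa digits each stored as digit + 1):

```python
from fractions import Fraction

def decode_positive_oracle_number(dump_bytes):
    # Assumed format for positive values: byte 1 is a base-100 exponent
    # in excess-193 notation; each following byte is a base-100 digit
    # stored as digit + 1.
    exponent = dump_bytes[0] - 193
    digits = [b - 1 for b in dump_bytes[1:]]
    # Fraction keeps the arithmetic exact (no binary floating point).
    return sum(Fraction(d) * Fraction(100) ** (exponent - i)
               for i, d in enumerate(digits))

# The bytes from the DUMP output: Typ=2 Len=7: 196,2,24,46,68,13,35
value = decode_positive_oracle_number([196, 2, 24, 46, 68, 13, 35])
print(value == Fraction(12345671234, 10000))  # True: exactly 1234567.1234
```

Since the stored bytes decode to exactly 1234567.1234, the ...399999999 tail almost certainly comes from the client converting the value to a binary double for display, not from the database.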
Related
I have a table with several number columns that are inserted through an ASP.NET application using bind variables.
After upgrading the Oracle client to 19c and changing servers, the code, instead of raising an error on insert of invalid data, inserts garbage and the application crashes afterwards.
Any help is appreciated in finding the root cause.
SELECT trial1,
DUMP (trial1, 17),
DUMP (trial1, 1016),
trial3,
DUMP (trial3,17),
DUMP (trial3, 1016)
Result in SQL Navigator: (screenshot of query results)
Oracle 12c
Oracle client 19
My DBA found this on Oracle Support, which led us to find the error on the application side:
NaN is a specific IEEE754 value. However, Oracle NUMBER is not IEEE754
compliant. Therefore, if you force data representing NaN into a
NUMBER column, the results are unpredictable.
SOLUTION: If you can put a value in a C float, double, int etc. you
can load it into the database, as no checks are undertaken - just as
with the Oracle NUMBER datatype, it's up to the application to ensure
the data is valid. If you use the proper IEEE754-compliant type, e.g.
BINARY_FLOAT, then NaN is recognised and handled correctly.
You have bad data because you have tried to store a double-precision NaN value in a NUMBER column rather than a BINARY_DOUBLE column.
We can duplicate the bad data with the following function (never use this in a production environment):
CREATE FUNCTION createNumber(
  hex VARCHAR2
) RETURN NUMBER DETERMINISTIC
IS
  n NUMBER;
BEGIN
  DBMS_STATS.CONVERT_RAW_VALUE( HEXTORAW( hex ), n );
  RETURN n;
END;
/
Then, we can duplicate your bad values using the hexadecimal values from your DUMP output:
CREATE TABLE table_name (trial1 NUMBER, trial3 NUMBER);
INSERT INTO table_name (trial1, trial3) VALUES (
createNumber('FF65'),
createNumber('FFF8000000000000')
);
Then:
SELECT trial1,
DUMP(trial1, 16) AS t1_hexdump,
trial3,
DUMP(trial3, 16) AS t3_hexdump
FROM table_name;
Replicates your output:
TRIAL1  T1_HEXDUMP          TRIAL3  T3_HEXDUMP
------  ------------------  ------  ------------------------------
~       Typ=2 Len=2: ff,65  null    Typ=2 Len=8: ff,f8,0,0,0,0,0,0
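As a quick cross-check (a sketch, not part of the original answer): the ff,f8,0,0,0,0,0,0 bytes in trial3, read as a big-endian IEEE 754 double, are exactly a quiet NaN:

```python
import math
import struct

# The trial3 bytes from the dump above: ff,f8,0,0,0,0,0,0
raw = bytes.fromhex("fff8000000000000")

# Reinterpret them as a big-endian IEEE 754 double.
(value,) = struct.unpack(">d", raw)
print(math.isnan(value))  # True: the column holds a double-precision NaN
```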
Any help is appreciated in finding the root cause.
You need to go back through your application, work out where the bad data came from, determine what the original data was, and debug the steps it went through, to work out whether it:
Was always bad data, in which case you need to add validation to your application so that the bad data does not get propagated; or
Was good data that a bug in your code changed, in which case you need to fix the bug.
As for the existing bad data, you either need to correct it (if you know what it should be) or delete it.
We cannot help with any of that as we do not have visibility of your application nor do we know what the correct data should have been.
If you want to store that data as a floating point then you need to change from using a NUMBER to using a BINARY_DOUBLE data type:
CREATE TABLE table_name (value BINARY_DOUBLE);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_INFINITY);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_NAN);
Then:
SELECT value,
DUMP(value, 16)
FROM table_name;
Outputs:
VALUE  DUMP(VALUE,16)
-----  --------------------------------
Inf    Typ=101 Len=8: ff,f0,0,0,0,0,0,0
Nan    Typ=101 Len=8: ff,f8,0,0,0,0,0,0
Then BINARY_DOUBLE_NAN exactly matches the binary value in your column: you have tried to insert a Not-a-Number value, in the format expected by a BINARY_DOUBLE column (which would support it), into a NUMBER column (which does not).
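Note the leading ff bytes, even though IEEE +Inf and the usual quiet NaN start with byte 7f. A plausible explanation (my assumption, inferred from the dumps above rather than from Oracle documentation) is that BINARY_DOUBLE is stored in a byte-comparable form: non-negative values get the sign bit flipped, negative values get every bit inverted. A sketch:

```python
import struct

def oracle_binary_double_bytes(x):
    # Assumed sortable encoding: non-negative doubles get the sign bit
    # flipped; negative doubles get all 64 bits inverted. This makes an
    # unsigned byte comparison order the encodings numerically.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    if bits >> 63:                  # negative: invert every bit
        bits ^= (1 << 64) - 1
    else:                           # non-negative: flip the sign bit only
        bits ^= 1 << 63
    return struct.pack(">Q", bits)

print(oracle_binary_double_bytes(float("inf")).hex())  # fff0000000000000
# Byte comparison now matches numeric order:
print(oracle_binary_double_bytes(-1.0) < oracle_binary_double_bytes(1.0))  # True
```

Under that assumption, +Infinity (IEEE 7ff0000000000000) encodes to exactly the ff,f0,0,0,0,0,0,0 shown by DUMP.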
The issue was a division by zero on the application side, which was inserted into the database as infinity; Oracle behaves unpredictably with these values in a NUMBER column.
Please see original post above for all the details.
Is it possible to add a metadata field at column-level (in the Oracle Data Dictionary)?
The purpose would be to hold a flag identifying where individual data items in a table have been anonymised.
I'm an analyst (not a DBA) and I'm using Oracle SQL Developer which surfaces (and enables querying of) the COLUMN_NAME, DATA_TYPE, NULLABLE, DATA_DEFAULT, COLUMN_ID, and COMMENTS metadata fields of our Oracle DB (see pic).
I'd be looking to add another metadata field at this level (essentially, to add a second 'COMMENTS' field) to hold the 'Anonymisation' flag, to support easy querying of our flagged-anonymised data.
If it's possible (and advisable / supportable), I'd be grateful for any advice for describing the steps required to enable this, which I can then discuss with our Developer and DBA.
Short answer: NO.
But where could you keep that information?
In your data model.
Oracle provides a free data modeling solution, Oracle SQL Developer Data Modeler. It provides the ability to mark table/view columns as sensitive or PII.
Those same models can be stored back in your database so they can be accessed via SQL.
Once you've marked up all of your sensitive attributes/columns, and store it back into the database, you can query it back out.
Disclaimer: I work for Oracle, I'm the product manager for Data Modeler.
[TL;DR] Don't do it. Find another way.
If it's advisable
NO
Never modify the data dictionary (unless Oracle Support tells you to); you are likely to invalidate your support contract with Oracle and may break the database, making it unusable.
If it's possible
Don't do this.
If you really want to try it then still don't.
If you really, really want to try it then find a database you don't care about (the don't care about bit is important!) and log on as a SYSDBA user and:
ALTER TABLE whichever_data_dictionary_table ADD anonymisation_flag VARCHAR2(10);
Then you can test whether the database breaks (and it may not break immediately but at some point later), but if it does then you almost certainly will not get any support from Oracle in fixing it.
Did we say, "Don't do it"... we mean it.
As you already know, you shouldn't do that.
But nothing prevents you from creating your own table which will contain such info.
For example:
SQL> CREATE TABLE my_comments
2 (
3 table_name VARCHAR2 (30),
4 column_name VARCHAR2 (30),
5 anonymisation VARCHAR2 (10)
6 );
Table created.
Populate it with some data:
SQL> insert into my_comments (table_name, column_name)
2 select table_name, column_name
3 from user_tab_columns
4 where table_name = 'DEPT';
3 rows created.
Set the anonymisation flag:
SQL> update my_comments set anonymisation = 'F' where column_name = 'DEPTNO';
1 row updated.
When you want to get that info (along with some more data from user_tab_columns), use an (outer) join:
SQL> select u.table_name, u.column_name, u.data_type, u.nullable, m.anonymisation
2 from user_tab_columns u left join my_comments m on m.table_name = u.table_name
3 and m.column_name = u.column_name
4 where u.column_name = 'DEPTNO';
TABLE_NAME COLUMN_NAME DATA_TYPE N ANONYMISATION
---------- --------------- ------------ - ---------------
DEPT DEPTNO NUMBER N F
DSV DEPTNO NUMBER N
DSMV DEPTNO NUMBER Y
EMP DEPTNO NUMBER Y
SQL>
Advantages: you won't break the database & you'll have your additional info.
Drawbacks: you'll have to maintain the table manually.
PostgreSQL supports a RETURNING clause, for instance as in
UPDATE some_table SET x = 'whatever' WHERE conditions RETURNING x, y, z
and MSSQL supports a variant of that syntax with the OUTPUT clause.
However, Oracle's RETURNING INTO seems intended for placing values into variables, within the context of a stored procedure.
Is there a way to have a SQL equivalent to the one above that would work in Oracle, without involving a stored procedure?
Note: I am looking for a pure-SQL solution if there exists one, not one that is language-specific, or would require special handling in the code. The actual SQL is dynamic, the code that makes the call is database-agnostic, with only the SQL being adapted.
Oracle does not directly support using the DML returning clause in a SELECT statement, but you can kind of fake that behavior by using a WITH function. Although the below code uses PL/SQL, the statement is still a pure SQL statement and can run anywhere a regular SELECT statement can run.
SQL> create table some_table(x varchar2(100), y number);
Table created.
SQL> insert into some_table values('something', 1);
1 row created.
SQL> commit;
Commit complete.
SQL> with function update_and_return return number is
2 v_y number;
3 --Necessary to avoid: ORA-14551: cannot perform a DML operation inside a query
4 pragma autonomous_transaction;
5 begin
6 update some_table set x = 'whatever' returning y into v_y;
7 --Necessary to avoid: ORA-06519: active autonomous transaction detected and rolled back
8 commit;
9 return v_y;
10 end;
11 select update_and_return from dual;
12 /
UPDATE_AND_RETURN
-----------------
1
Unfortunately there are major limitations with this approach that may make it impractical for non-trivial cases:
The DML must be committed within the statement.
The WITH function syntax requires both client and server versions of 12.1 and above.
Returning multiple columns or rows will require more advanced features. Multiple rows will require the function to return a collection, and the SELECT portion of the statement will have to use the TABLE function. Multiple columns will require a new type for each different result set. If you're lucky, you can use one of the built-in types, like SYS.ODCIVARCHAR2LIST. Otherwise you may need to create your own custom types.
You can do it in SQL, with no need for PL/SQL, though the details depend on your tool and/or language. Here's an example in SQL*Plus:
SQL> create table t0 as select * from dual;
Table created.
SQL> var a varchar2(2)
SQL> update t0 set dummy='v' returning dummy into :a;
1 row updated.
SQL> print a
A
--------------------------------
v
I am supporting an application whose data storage recently moved from Oracle 9i to 12c. After this migration, a problem emerged in occasional queries that retrieve very large numbers as part of a string concatenation. Assuming a table with the definition
mytable (
    test1 number,
    test2 number
)
with one row with values
test1 = 100000000000000000000 and test2=100
In 9i I could run a query select test1||','||test2 from mytable and get result
100000000000000000000,100
In 12c I get ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.
Here is the relevant portion of the stack trace:
java.sql.SQLSyntaxErrorException: ORA-01722: invalid number
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
Here is a dump of two representative columns, the first one of which has the very large value, from 12c: Typ=2 Len=3: cb,2,1 Typ=2 Len=5: bb,36,48,1a,4a
And the same dump from 9i: Typ=2 Len=3: cb,2,1 Typ=2 Len=5: bc,a,12,52,25
Since the string concatenation is in the database, I assume this behavior is not affected by configuration of sqlplus or some other client, but rather is determined in the server itself. Is there some configuration that I can change in 12c or some data transformation call I should make in the select call to fix this problem?
You seem to have corrupt data in your database, unfortunately; and it sounds like it was already corrupt in 9i, rather than corrupted during migration (presumably via exp/imp).
You can demonstrate the problem by forcing the inserting of invalid data (don't do this on a real table):
SQL> create table mytable(test1 number, test2 number);
Table created.
SQL> declare
       l_test1 number;
       l_test2 number;
     begin
       dbms_stats.convert_raw_value('cb0201', l_test1);
       dbms_stats.convert_raw_value('bc0a125225', l_test2);
       insert into mytable(test1, test2) values (l_test1, l_test2);
     end;
     /
PL/SQL procedure successfully completed.
SQL> select test1||','||test2 from mytable;
select test1||','||test2 from mytable
*
ERROR at line 1:
ORA-01722: invalid number
SQL> select dump(test1, 1016) as d1, dump(test2, 1016) as d2 from mytable;
D1
--------------------------------------------------------------------------------
D2
--------------------------------------------------------------------------------
Typ=2 Len=3: cb,2,1
Typ=2 Len=5: bc,a,12,52,25
Running the same test in 9i does not throw the same error, even though the raw data in the table is invalid:
SQL> select test1||','||test2 from mytable;
TEST1||','||TEST2
--------------------------------------------------------------------------------
100000000000000000000,.0000000009178136
1 row selected.
SQL> select dump(test1, 1016) as d1, dump(test2, 1016) as d2 from mytable;
D1
--------------------------------------------------------------------------------
D2
--------------------------------------------------------------------------------
Typ=2 Len=3: cb,2,1
Typ=2 Len=5: bc,a,12,52,25
The dump value showing as cb,2,1 should not have that last byte (1). If you dump the actual number, in either version, you get:
SQL> select dump(100000000000000000000, 1016) from dual;
DUMP(100000000000
-----------------
Typ=2 Len=2: cb,2
and if you populate the table using that number, instead of forcing in the corrupt value, it works as expected in both versions too.
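The trailing-byte observation can be turned into a tiny validity check. Assuming that a positive NUMBER's mantissa bytes are base-100 digits stored as digit + 1 (my reading of the format, not taken from the thread), a final byte of 1 encodes a stored trailing zero, which a canonically encoded value never carries. A sketch:

```python
def has_trailing_zero_digit(mantissa_bytes):
    # Mantissa bytes of a positive Oracle NUMBER are assumed to be
    # base-100 digits stored as digit + 1; a final byte of 1 is a
    # stored trailing zero digit, which canonical values never carry.
    return bool(mantissa_bytes) and mantissa_bytes[-1] == 1

# cb,2,1 -> mantissa bytes [2, 1]: the corrupt value
print(has_trailing_zero_digit([2, 1]))  # True: non-canonical
# cb,2   -> mantissa bytes [2]: the clean encoding of 1e20
print(has_trailing_zero_digit([2]))     # False
```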
It's possible that a previous export/import in 9i caused the problem, or that an OCI program caused it (there may be other ways too). Without knowing when and how the data was corrupted, I'd have to question how much you can trust it, and how recoverable it is. It may be possible to clean it up, but without knowing what the correct values are, that sounds a bit risky.
You may need to involve Oracle Support to help you analyse the issue further and suggest a way to recover; though as 9i is so old, that in itself may be difficult now.
I am not a programmer, but I have the task of automatically copying one field in a table to another field in the same table (it's a long story... :-) ). This should be done on update and insert, and I really do not know how to go about it.
I should point out that data is entered into the DB through a user interface we do not have the source code for, so we want to make this change at the DB level, using a trigger or the like.
I have tried creating a simple trigger that will copy the values across, but came up with an error message. After Googling the error, I found that I need to create a package which will be used as a variable. Now I am really lost!!!! :-)
I also want to point out that I need a solution that will update this field automatically from now on, but will not override any data that already exists in the column.
Could someone show me the easiest and simplest way of doing this entire procedure? I really need a 'Guide for dummies' approach.
Thanks,
David
A simple trigger will be adequate if both fields are on the same table.
Consider:
SQL> CREATE TABLE t (ID NUMBER, source_col VARCHAR2(10), dest_col VARCHAR2(10));
Table created
SQL> CREATE OR REPLACE TRIGGER trg_t
2 BEFORE INSERT OR UPDATE OF source_col ON t
3 FOR EACH ROW
4 BEGIN
5 IF :old.dest_col IS NULL THEN
6 :NEW.dest_col := :NEW.source_col;
7 END IF;
8 END;
9 /
Trigger created
We check if the trigger works for insert then update (the value we inserted will be preserved):
SQL> INSERT INTO t(ID, source_col) VALUES (1, 'a');
1 row inserted
SQL> SELECT * FROM t;
ID SOURCE_COL DEST_COL
---------- ---------- ----------
1 a a
SQL> UPDATE t SET source_col = 'b';
1 row updated
SQL> SELECT * FROM t;
ID SOURCE_COL DEST_COL
---------- ---------- ----------
1 b a
Edit: I updated the trigger to take into account the requirement that the existing data on dest_col is to be preserved.
If you just need the new column to show the exact same data as the old column, I think (if you're using Oracle 11g) you can create a virtual column.
There's an example here.