I am supporting an application whose database was recently migrated from Oracle 9i to 12c. After this migration, a problem emerged in occasional queries that retrieve very large numbers as part of a string concatenation. Assume a table defined as
create table mytable (
  test1 number,
  test2 number
);
with one row containing the values
test1 = 100000000000000000000 and test2 = 100
In 9i I could run the query select test1||','||test2 from mytable and get the result
100000000000000000000,100
In 12c I get ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.
Here is the relevant portion of the stack trace:
java.sql.SQLSyntaxErrorException: ORA-01722: invalid number
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:195)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:876)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1498)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
Here is a dump of two representative columns from 12c, the first of which has the very large value:
Typ=2 Len=3: cb,2,1
Typ=2 Len=5: bb,36,48,1a,4a
And the same dump from 9i:
Typ=2 Len=3: cb,2,1
Typ=2 Len=5: bc,a,12,52,25
Since the string concatenation happens in the database, I assume this behavior is not affected by the configuration of SQL*Plus or some other client, but is determined by the server itself. Is there some configuration I can change in 12c, or some data transformation I should make in the select, to fix this problem?
You seem to have corrupt data in your database, unfortunately; and it sounds like it was already corrupt in 9i, rather than corrupted during migration (presumably via exp/imp).
You can demonstrate the problem by forcing the insertion of invalid data (don't do this on a real table):
SQL> create table mytable(test1 number, test2 number);
Table created.
SQL> declare
  l_test1 number;
  l_test2 number;
begin
  dbms_stats.convert_raw_value('cb0201', l_test1);
  dbms_stats.convert_raw_value('bc0a125225', l_test2);
  insert into mytable (test1, test2) values (l_test1, l_test2);
end;
/
PL/SQL procedure successfully completed.
SQL> select test1||','||test2 from mytable;
select test1||','||test2 from mytable
*
ERROR at line 1:
ORA-01722: invalid number
SQL> select dump(test1, 1016) as d1, dump(test2, 1016) as d2 from mytable;
D1
--------------------------------------------------------------------------------
D2
--------------------------------------------------------------------------------
Typ=2 Len=3: cb,2,1
Typ=2 Len=5: bc,a,12,52,25
Running the same test in 9i does not throw the same error, even though the raw data in the table is invalid:
SQL> select test1||','||test2 from mytable;
TEST1||','||TEST2
--------------------------------------------------------------------------------
100000000000000000000,.0000000009178136
1 row selected.
SQL> select dump(test1, 1016) as d1, dump(test2, 1016) as d2 from mytable;
D1
--------------------------------------------------------------------------------
D2
--------------------------------------------------------------------------------
Typ=2 Len=3: cb,2,1
Typ=2 Len=5: bc,a,12,52,25
The dump value showing as cb,2,1 should not have that last byte (1). If you dump the actual number, in either version, you get:
SQL> select dump(100000000000000000000, 1016) from dual;
DUMP(100000000000
-----------------
Typ=2 Len=2: cb,2
and if you populate the table using that number, instead of forcing in the corrupt value, it works as expected in both versions too.
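If you do know what a row's correct value should be, one possible cleanup is to locate the bad row via DUMP, which reads the stored bytes without converting them, and overwrite it with the intended number. This is only a sketch for the single known row above; verify it carefully before touching real data:
update mytable
set    test1 = 100000000000000000000
where  dump(test1, 1016) = 'Typ=2 Len=3: cb,2,1';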
It's possible a previous export/import in 9i caused the problem, or an OCI program caused it (there may be other ways too). Without knowing when and how the data was corrupted, I'd have to question how much you can trust it, and how recoverable it is. It may be possible to clean it up, but without knowing what the correct values are, that sounds a bit risky.
You may need to involve Oracle support to help you analyse the issue further and suggest a way to recover; though as 9i is so old, that in itself may be difficult now.
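If you want to gauge how widespread the corruption is first, a helper function that traps the conversion error can flag bad rows. This is a sketch (the function name and YES/NO convention are mine), and it assumes the corrupt values only blow up when actually converted, which matches the behaviour above where DUMP succeeds but the concatenation fails:
create or replace function is_corrupt_number(p_val in number)
  return varchar2
is
  v_char varchar2(64);
begin
  v_char := to_char(p_val);  -- conversion should raise ORA-01722 on a corrupt value
  return 'NO';
exception
  when others then
    return 'YES';
end;
/
select rowid, dump(test1, 1016) as d1, dump(test2, 1016) as d2
from   mytable
where  is_corrupt_number(test1) = 'YES'
or     is_corrupt_number(test2) = 'YES';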
I am trying to insert the value 1234567.1234 into a NUMBER(21,12) column in my Oracle 19c database.
The number stored is 1234567.123399999920 instead of 1234567.123400000000.
Is it possible to avoid this behavior without changing the column's precision and scale?
Thanks!
Well, could you support your claim with some evidence, i.e. which tools you use for the insert, etc.? Most probably it is not a database problem.
Here is a counterexample:
create table test (col NUMBER(21,12) );
insert into test (col) values (1234567.1234);
select to_char(col,'9999999.9999999') col, dump(col) dmp from test;
COL DMP
---------------- ----------------------------------------
1234567.1234000 Typ=2 Len=7: 196,2,24,46,68,13,35
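For what it's worth, one plausible way to end up with a stored value like 1234567.123399999920 is for the client to bind the literal as an IEEE binary double rather than as a NUMBER. That is an assumption about your tooling, but the effect is easy to reproduce in pure SQL:
-- guess at the cause: the value passes through BINARY_DOUBLE on its way in
insert into test (col) values (cast(1234567.1234 as binary_double));
select to_char(col, '9999999.999999999999') from test;
The double nearest to 1234567.1234 is roughly 1234567.12339999992, so after rounding to scale 12 you get trailing noise very much like what you reported.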
PostgreSQL supports a RETURNING clause, for instance as in
UPDATE some_table SET x = 'whatever' WHERE conditions RETURNING x, y, z
and MSSQL supports a variant of that syntax with the OUTPUT clause.
However, Oracle's RETURNING INTO seems intended for placing values into variables, from within the context of a stored procedure.
Is there a way to have an SQL equivalent of the statement above that would work in Oracle, without involving a stored procedure?
Note: I am looking for a pure-SQL solution if there exists one, not one that is language-specific, or would require special handling in the code. The actual SQL is dynamic, the code that makes the call is database-agnostic, with only the SQL being adapted.
Oracle does not directly support using the DML returning clause in a SELECT statement, but you can kind of fake that behavior by using a WITH function. Although the below code uses PL/SQL, the statement is still a pure SQL statement and can run anywhere a regular SELECT statement can run.
SQL> create table some_table(x varchar2(100), y number);
Table created.
SQL> insert into some_table values('something', 1);
1 row created.
SQL> commit;
Commit complete.
SQL> with function update_and_return return number is
2 v_y number;
3 --Necessary to avoid: ORA-14551: cannot perform a DML operation inside a query
4 pragma autonomous_transaction;
5 begin
6 update some_table set x = 'whatever' returning y into v_y;
7 --Necessary to avoid: ORA-06519: active autonomous transaction detected and rolled back
8 commit;
9 return v_y;
10 end;
11 select update_and_return from dual;
12 /
UPDATE_AND_RETURN
-----------------
1
Unfortunately there are major limitations with this approach that may make it impractical for non-trivial cases:
The DML must be committed within the statement.
The WITH function syntax requires both client and server versions of 12.1 and above.
Returning multiple columns or rows will require more advanced features. Multiple rows will require the function to return a collection, and the SELECT portion of the statement will have to use the TABLE function. Multiple columns will require a new type for each different result set. If you're lucky, you can use one of the built-in types, like SYS.ODCIVARCHAR2LIST. Otherwise you may need to create your own custom types.
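For the multi-row case, here is a hedged sketch using the built-in SYS.ODCINUMBERLIST collection (chosen because y is numeric in the example above); the same commit caveat applies:
with function update_and_return_all return sys.odcinumberlist is
  v_ys sys.odcinumberlist;
  pragma autonomous_transaction;
begin
  update some_table
  set    x = 'whatever'
  returning y bulk collect into v_ys;
  commit;
  return v_ys;
end;
select column_value as y
from   table(update_and_return_all)
/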
You can do this in SQL, no need for PL/SQL, but it depends on your tool and/or language. Here's an example in SQL*Plus:
SQL> create table t0 as select * from dual;
Table created.
SQL> var a varchar2(2)
SQL> update t0 set dummy='v' returning dummy into :a;
1 row updated.
SQL> print a
A
--------------------------------
v
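If your tool has no equivalent of SQL*Plus bind variables, the same RETURNING clause can run inside a one-off anonymous block, which is still not a stored procedure. A minimal sketch (DBMS_OUTPUT is just for demonstration):
set serveroutput on
declare
  v_dummy t0.dummy%type;
begin
  update t0 set dummy = 'v' returning dummy into v_dummy;
  dbms_output.put_line(v_dummy);  -- prints the value that RETURNING captured
end;
/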
I am using SQL Developer v18.1 and the database version is Oracle 12c.
SQL> select * from nls_database_parameters where parameter like 'NLS%CHARACTERSET';
PARAMETER VALUE
------------------------------ -------------------------
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_CHARACTERSET AL32UTF8
I would like to enter some non-ASCII characters, such as 'í', but have no clue how to do that, even after some searching here and on Google. Actually, I am even unable to enter this example character here directly; I just copied it from somewhere else and pasted it into this question.
Thank you for the help in advance!
Sam
How about this?
Here's a table which is supposed to contain some data.
SQL> create table test (col varchar2(20));
Table created.
Check the code of the character you want to insert by using the ASCII function:
SQL> select ascii('í') from dual;
ASCII('í')
----------
52103
OK; now we know its code, so insert it, but this time using the CHR function:
SQL> insert into test (col) values (chr(52103));
1 row created.
SQL> select * From test;
COL
--------------------
í
SQL>
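Alternatively, if you know the character's Unicode code point rather than its code in the database character set, UNISTR should work as well; 'í' is U+00ED:
insert into test (col) values (unistr('\00ED'));
UNISTR takes the Unicode escape regardless of the database character set, which makes it a bit more portable than a hard-coded CHR value.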
I have been working with a complex view written by some other company in 2005. I am trying to understand what it is doing, for reasons beyond this post. Given the highly complex nature of this view (over 500 lines of code), I take it that the writers knew what they were doing.
I keep finding things like TO_NUMBER(null), TO_DATE(null) in various places.
This seems to me like a totally unnecessary use of a function.
Is there any technical reason or advantage that justifies why it was designed like this?
By default NULL does not have a data type:
SQL> select dump(null) from dual;
DUMP
----
NULL
SQL>
However, if we force Oracle into making a decision it will default to making it a string:
SQL> create or replace view v1 as
select 1 as id
, null as dt
from dual
/
View created.
SQL> desc v1
Name Null? Type
-------------- -------- ----------------------------
ID NUMBER
DT VARCHAR2
SQL>
But this is not always desirable. We might need to use NULL in a view for a number of reasons (defining an API, filling out a jagged UNION, etc.), and so we cast the NULL to another datatype to get the projection we need.
SQL> create or replace view v1 as
select 1 as id
, to_date(null) as dt
from dual
/
View created.
SQL> desc v1
Name Null? Type
-------------- -------- ----------------------------
ID NUMBER
DT DATE
SQL>
Later versions have got smarter with regard to handling UNION. On my 11gR2 database, even though I use the NULL in the first declared query (and that usually drives things), I still get the correct datatype:
SQL> create or replace view v1 as
select 1 as id
, null as dt
from dual
union all
select 2 as id
, sysdate as something_else
from dual
/
View created.
SQL>
SQL> desc v1
Name Null? Type
-------------- -------- ----------------------------
ID NUMBER
DT DATE
SQL>
Explicitly casting NULL may be left over from 8i, or to workaround a bug, or as ammoQ said, "superstitious".
In some old and rare cases the implicit conversion of NULL in set operations caused errors like ORA-01790: expression must have same datatype as corresponding expression.
I can't find any great references for this old behavior, but Google returns a few results that claim a query like this would fail in 8i:
select 'a' a from dual
union
select null a from dual;
And there is at least one similar bug, "Bug 9456979 Wrong result from push of NVL / DECODE into UNION view with NULL select list item - superceded".
But don't let 16-year-old software and some rare bug dictate how to program. And don't think there's a positive correlation between code size and programming skill. There's a negative correlation: good programmers will create smaller, more readable code, and won't leave as many mysteries for future coders.
Well, I would prefer CAST(NULL AS DATE) or CAST(NULL AS NUMBER) instead of TO_DATE(NULL); it looks more logical to my eyes.
I know two scenarios where such an expression is required.
One is the case of UNION, as already stated in the other answers.
Another scenario is the case of overloaded procedures/functions, for example:
CREATE OR REPLACE PROCEDURE MY_PROC(val IN DATE) AS
BEGIN
DELETE FROM EMP WHERE HIRE_DATE = val;
END;
/
CREATE OR REPLACE PROCEDURE MY_PROC(val IN NUMBER) AS
BEGIN
DELETE FROM EMP WHERE EMP_ID = val;
END;
/
Calling the procedure as MY_PROC(NULL); does not work; Oracle does not know which procedure to execute. You must call it like MY_PROC(CAST(NULL AS DATE)); for example.
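A quick sketch of the difference (the exact error is PLS-00307 as far as I recall; treat that detail as an assumption):
begin
  my_proc(null);                -- ambiguous: fails, e.g. PLS-00307: too many
                                -- declarations of 'MY_PROC' match this call
end;
/
begin
  my_proc(cast(null as date));  -- resolves to the DATE overload
end;
/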
As an Oracle PL/SQL programmer, I really don't find any logical reason for doing the things you have specified. The only logical approach to dealing with NULL in Oracle is to use NVL(); I really don't see any reason to use TO_NUMBER(null) or TO_DATE(null) in a complex view.
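For reference, NVL simply substitutes a fallback value when its first argument is NULL:
-- NVL substitutes the fallback when the first argument is NULL; this returns 'fallback'
select nvl(null, 'fallback') from dual;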
We have three databases: dev, staging, and production. We do all our coding in the dev environment. We then push all our code and database changes to staging so the client can see how it works in a live environment. After they sign off, we do the final deployment to the production environment.
Now, about these CLOB columns: When using desc and/or querying the all_tab_columns view in the dev database, CLOBs show a data length of 4,000. However, in the staging and production databases, data lengths for the equivalent CLOB columns are odd numbers like 86. I've searched everywhere for an explanation of how this could have come about. I've even tried adding a new CLOB(86) column, thinking it would work like it does for VARCHAR2, but Oracle just spits out an error.
Could the DBAs have botched something up? Is this even something to worry about? Nothing has ever seemed to break as a result of this, but I just like the metadata to be the same across all environments.
First of all, as a DBA, I feel sorry to see the lack of cooperation between you and the DBAs. We all need to cooperate to be successful. CLOB data lengths can be less than 4000 bytes.
create table z ( a number, b clob);
Table created.
insert into z values (1, 'boe');
1 row created.
exec dbms_stats.gather_table_stats (ownname => 'ronr', tabname => 'z');
PL/SQL procedure successfully completed.
select owner, avg_row_len from dba_tables where table_name = 'Z'
SQL> /
OWNER AVG_ROW_LEN
------------------------------ -----------
RONR 109
select length(b) from z;
LENGTH(B)
----------
3
Where did you find that a CLOB length cannot be less than 4000?
DATA_LENGTH stores the maximum number of bytes that a column will take up within the row. If the CLOB can be stored in row, then the maximum is 4000; an in-row LOB will never take up more than 4000 bytes. If in-row storage is disabled, then the LOB will only store the pointer information it needs to find the LOB data, which is much less than 4000 bytes.
SQL> create table t (clob_in_table clob
2 , clob_out_of_table clob
3 ) lob (clob_out_of_table) store as (disable storage in row)
4 , lob (clob_in_table) store as (enable storage in row)
5 /
Table created.
SQL> select table_name, column_name, data_length
2 from user_tab_columns
3 where table_name = 'T'
4 /
TABLE_NAME COLUMN_NAME DATA_LENGTH
------------------------------ ------------------------------ -----------
T CLOB_IN_TABLE 4000
T CLOB_OUT_OF_TABLE 86
EDIT, adding info on *_LOBS view
Use the [DBA|ALL|USER]_LOBS views to look at the defined in-row / out-of-row storage settings:
SQL> select table_name
2 , cast(substr(column_name, 1, 30) as varchar2(30))
3 , in_row
4 from user_lobs
5 where table_name = 'T'
6 /
TABLE_NAME CAST(SUBSTR(COLUMN_NAME,1,30)A IN_
------------------------------ ------------------------------ ---
T CLOB_IN_TABLE YES
T CLOB_OUT_OF_TABLE NO
EDIT 2, some references
See LOB Storage in Oracle Database Application Developer's Guide - Large Objects for more information on defining LOB storage, especially the third note that talks about what can be changed:
Note:
Only some storage parameters can be modified. For example, you can use the ALTER TABLE ... MODIFY LOB statement to change RETENTION, PCTVERSION, CACHE or NO CACHE, LOGGING or NO LOGGING, and the STORAGE clause.
You can also change the TABLESPACE using the ALTER TABLE ... MOVE statement.
However, once the table has been created, you cannot change the CHUNK size, or the ENABLE or DISABLE STORAGE IN ROW settings.
Also, LOBs in Index Organized Tables says:
By default, all LOBs in an index organized table created without an overflow segment will be stored out of line. In other words, if an index organized table is created without an overflow segment, then the LOBs in this table have their default storage attributes as DISABLE STORAGE IN ROW. If you forcibly try to specify an ENABLE STORAGE IN ROW clause for such LOBs, then SQL will raise an error.
This explains why jonearles did not see 4,000 in the data_length column when he created the LOB in an index organized table.
CLOBs don't have a specified length. When you query ALL_TAB_COLUMNS, e.g.:
select table_name, column_name, data_length
from all_tab_columns
where data_type = 'CLOB';
You'll notice that data_length is usually 4000, but this should be ignored.
The minimum size of a CLOB is zero (0), and the maximum is anything from 8 TB to 128 TB depending on the database block size.
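That range comes from the documented limit of (4 GB - 1) * block size; assuming that formula, you can estimate the ceiling for your own database:
-- approximate maximum LOB size in terabytes, assuming the (4 GB - 1) * block size formula
select (4 * 1024 * 1024 * 1024 - 1) * to_number(value) / power(1024, 4) as approx_max_lob_tb
from   v$parameter
where  name = 'db_block_size';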
As ik_zelf and Jeffrey Kemp pointed out, CLOBs can store less than 4000 bytes.
But why are CLOB data_lengths not always 4000? The number doesn't actually limit the CLOB, but you're probably right to worry about the metadata being different on your servers. You might want to run DBMS_METADATA.GET_DDL on the objects on all servers and compare the results.
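For example (the owner and table names here are placeholders):
select dbms_metadata.get_ddl('TABLE', 'MYTABLE', 'MYSCHEMA') from dual;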
I was able to create a low data_length by adding a CLOB to an index organized table.
create table test
(
column1 number,
column2 clob,
constraint test_pk primary key (column1)
)
organization index;
select data_length from user_tab_cols
where table_name = 'TEST' and column_name = 'COLUMN2';
On 10.2.0.1.0, the result is 116.
On 11.2.0.1.0, the result is 476.
Those numbers don't make any sense to me and I'd guess it's a bug. But I don't have a good understanding of the different storage options, maybe I'm just missing something.
Does anybody know what's really going on here?