decimal(p,s) or number(p,s)? - Oracle

Recently, while working on a DB2 -> Oracle migration project, we came across this situation.
The developers were inadvertently creating new table structures using DECIMAL(p,s) columns. I didn't remember Oracle supporting this, but some digging showed that it's an ANSI data type and therefore supported by Oracle.
However, a few questions remained for me:
How is this data handled internally?
Is there a cost to using ANSI types instead of Oracle's built-in types?
Will there be an impact during the data migration if the target type is an Oracle built-in type?

In Oracle, they are the same:
SQL statements that create tables and clusters can also use ANSI data
types and data types from the IBM products SQL/DS and DB2. Oracle
recognizes the ANSI or IBM data type name that differs from the Oracle
Database data type name. It converts the data type to the equivalent
Oracle data type, records the Oracle data type as the name of the
column data type, and stores the column data in the Oracle data type
based on the conversions shown in the tables that follow.
The table below this quote shows that DECIMAL(p,s) is treated internally as a NUMBER(p,s):
SQL> create table t (a decimal(*,5), b number (*, 5));
Table created
SQL> desc t;
Name Type        Nullable Default Comments
---- ----------- -------- ------- --------
A    NUMBER(*,5) Y
B    NUMBER(*,5) Y
However, the scale defaults to 0 for DECIMAL, which means that a bare DECIMAL is treated as NUMBER(*,0), i.e. INTEGER:
SQL> create table t (a decimal, b number, c decimal(5), d number(5));
Table created
SQL> desc t;
Name Type      Nullable Default Comments
---- --------- -------- ------- --------
A    INTEGER   Y
B    NUMBER    Y
C    NUMBER(5) Y
D    NUMBER(5) Y

Actually, there is a difference between DECIMAL and NUMBER.
DECIMAL will truncate a value that exceeds its scale, while NUMBER will round the value.
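If you want to check this claim on your own system, here is a minimal sketch (the table name and values are made up for illustration):
-- Insert the same over-scale value into both column types and compare what is stored.
CREATE TABLE scale_test (d DECIMAL(5,2), n NUMBER(5,2));
INSERT INTO scale_test (d, n) VALUES (1.999, 1.999);
SELECT d, n FROM scale_test;
-- If DECIMAL truncates the over-scale value, d shows 1.99; if it rounds like NUMBER, both show 2.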


Insertion of characters into number column

I have a table with several number columns that are inserted into through an ASP.NET application using bind variables.
After an upgrade of the Oracle client to 19c and a server change, instead of raising an error on the insert of invalid data, the code inserts garbage and the application crashes afterwards.
Any help is appreciated in finding the root cause.
SELECT trial1,
       DUMP(trial1, 17),
       DUMP(trial1, 1016),
       trial3,
       DUMP(trial3, 17),
       DUMP(trial3, 1016)
  FROM table_name;
Result in SQL Navigator: (screenshot of the query results omitted)
Oracle server: 12c
Oracle client: 19c
My DBA found this on Oracle Support, and it led us to find the error on the application side:
NaN is a specific IEEE754 value. However Oracle NUMBER is not IEEE754
compliant. Therefore if you force the data representing NaN into a
NUMBER column, results are unpredictable.
SOLUTION: If you can put a value in a C float, double, int etc. you
can load this into the database, as no checks are undertaken - just as
with the Oracle NUMBER datatype it's up to the application to ensure
the data is valid. If you use the proper IEEE754 compliant type, e.g.
BINARY_FLOAT, then NaN is recognised and handled correctly.
You have bad data: you have tried to store a double-precision NaN value in a NUMBER column rather than a BINARY_DOUBLE column.
We can duplicate the bad data with the following function (never use this in a production environment):
CREATE FUNCTION createNumber(
  hex VARCHAR2
) RETURN NUMBER DETERMINISTIC
IS
  n NUMBER;
BEGIN
  DBMS_STATS.CONVERT_RAW_VALUE( HEXTORAW( hex ), n );
  RETURN n;
END;
/
Then, we can duplicate your bad values using the hexadecimal values from your DUMP output:
CREATE TABLE table_name (trial1 NUMBER, trial3 NUMBER);
INSERT INTO table_name (trial1, trial3) VALUES (
  createNumber('FF65'),
  createNumber('FFF8000000000000')
);
Then:
SELECT trial1,
DUMP(trial1, 16) AS t1_hexdump,
trial3,
DUMP(trial3, 16) AS t3_hexdump
FROM table_name;
Replicates your output:
TRIAL1  T1_HEXDUMP          TRIAL3  T3_HEXDUMP
------  ------------------  ------  ------------------------------
~       Typ=2 Len=2: ff,65  null    Typ=2 Len=8: ff,f8,0,0,0,0,0,0
Any help is appreciated in finding the root cause.
You need to go back through your application, work out where the bad data came from, determine what the original data was, and debug the steps it went through, to work out whether it was:
Always bad data, in which case you need to add validation to your application so that the bad data does not get propagated; or
Good data that a bug in your code corrupted, in which case you need to fix the bug.
As for the existing bad data, you either need to correct it (if you know what it should be) or delete it.
We cannot help with any of that as we do not have visibility of your application nor do we know what the correct data should have been.
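If you first need to locate the corrupt rows, a sketch along these lines may help; it simply matches the exact byte patterns seen in the DUMP output above (adjust the table and column names to your schema):
SELECT ROWID, trial1, DUMP(trial1, 16) AS t1, trial3, DUMP(trial3, 16) AS t3
  FROM table_name
 WHERE DUMP(trial1, 16) = 'Typ=2 Len=2: ff,65'
    OR DUMP(trial3, 16) = 'Typ=2 Len=8: ff,f8,0,0,0,0,0,0';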
If you want to store that data as a floating point then you need to change from using a NUMBER to using a BINARY_DOUBLE data type:
CREATE TABLE table_name (value BINARY_DOUBLE);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_INFINITY);
INSERT INTO table_name(value) VALUES (BINARY_DOUBLE_NAN);
Then:
SELECT value,
DUMP(value, 16)
FROM table_name;
Outputs:
VALUE  DUMP(VALUE,16)
-----  --------------------------------
Inf    Typ=101 Len=8: ff,f0,0,0,0,0,0,0
Nan    Typ=101 Len=8: ff,f8,0,0,0,0,0,0
The DUMP of BINARY_DOUBLE_NAN exactly matches the binary value in your TRIAL3 column: you have tried to insert a Not-A-Number value into a NUMBER column (which does not support it) in the format expected by a BINARY_DOUBLE column (which would support it).
The issue was a division by zero on the application side that was inserted as infinity into the database; Oracle has unpredictable behavior with these values.
Please see original post above for all the details.
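For context, a minimal sketch of how the two types differ on division by zero (assuming default session settings; 1d and 0d are BINARY_DOUBLE literals):
SELECT 1/0 FROM dual;     -- NUMBER arithmetic raises ORA-01476: divisor is equal to zero
SELECT 1d/0d FROM dual;   -- BINARY_DOUBLE arithmetic follows IEEE754 and returns Inf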

Unable to set precision for INTEGER data type in SQL CREATE TABLE command

I am trying to create the following table in Oracle.
CREATE TABLE CUSTOMER(CUST_ID INT(10),
CUST_NAME VARCHAR2(50),
CUST_SEX CHAR(2),
CUST_STATE VARCHAR2(50),
CUST_COUNTRY VARCHAR2(50));
I get an error saying that the right parenthesis is missing. In reality, the issue is with the INT data type of the CUST_ID column. Once I remove the precision (10) from the DDL, I am able to execute it successfully.
The Oracle docs don't specify whether this data type can be accompanied by a precision parameter. However, Oracle does mention that INTEGER/INT is per the ANSI standard.
https://docs.oracle.com/cd/B19306_01/olap.102/b14346/dml_datatypes002.htm
Certain other non-official references describe INT/INTEGER as a synonym for NUMBER(38).
Can someone please tell me whether it is indeed impossible to specify a precision for the INT data type?
The Oracle docs state that:
SQL statements that create tables and clusters can also use ANSI data types and data types from the IBM products SQL/DS and DB2. Oracle recognizes the ANSI or IBM data type name that differs from the Oracle Database data type name. It converts the data type to the equivalent Oracle data type
As the table below that sentence states, int, integer, and (surprisingly?) smallint are all synonyms for number(38), so you cannot specify a precision for them. For your use case, if you want an integer number with ten digits, you should use number(10).
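So the DDL from the question would become, for example (a sketch with just that one change applied):
CREATE TABLE CUSTOMER(CUST_ID NUMBER(10),
CUST_NAME VARCHAR2(50),
CUST_SEX CHAR(2),
CUST_STATE VARCHAR2(50),
CUST_COUNTRY VARCHAR2(50));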
Let me try: precision indeed cannot be specified for the INT data type.
How did that sound?
Documentation says:
<snip>
| { NUMERIC | DECIMAL | DEC } [ (precision [, scale ]) ] --> precision + scale
| { INTEGER | INT | SMALLINT } --> no precision for integers
| FLOAT [ (size) ]
<snip>
The INT[EGER] data type (which should be, at least in most implementations, a 4-byte binary integer) exists in Oracle, if at all, only in PL/SQL stored procedures.
Your best bet is to use a NUMBER(5) for a SMALLINT, a NUMBER(9) for an INTEGER, and a NUMBER(18) for a LARGEINT/BIGINT.
If you go:
CREATE TABLE dropme (i INT);
in Oracle, you get a table with a column i NUMBER (with no length specification), which boils down to a pretty inefficient NUMBER(38).
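You can see how the dictionary records such a column with a quick check (a sketch; DROPME is the throwaway table from above):
CREATE TABLE dropme (i INT);
SELECT column_name, data_type, data_precision, data_scale
  FROM user_tab_columns
 WHERE table_name = 'DROPME';
-- Inspect DATA_TYPE, DATA_PRECISION, and DATA_SCALE to see what INT was mapped to.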
The Oracle numeric data types are NUMBER, with an optional overall precision and an optional decimal scale, and FLOAT.
An Oracle NUMBER, at least as I understood it, is a variable-length construct: a length indicator for the whole thing, followed by an exponent byte that also encodes the sign, followed by mantissa bytes, each of which holds a base-100 digit.
As the documentation says, INTEGER is equivalent to NUMBER(38).
You can just use INTEGER where you want to store integers of any size, or you can use NUMBER(n) if you want to constrain the number of digits in the values to n.
Note: the only reason for specifying the precision in Oracle is validation and documentation. There is no space advantage in using smaller values of n: the value 123456 occupies the same number of bytes in NUMBER(6), NUMBER(38), and INTEGER columns, i.e. 4 bytes.
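A sketch to verify the storage claim yourself, using VSIZE (which returns the number of bytes Oracle uses to store a value; the table name here is made up):
CREATE TABLE size_demo (n6 NUMBER(6), n38 NUMBER(38), i INTEGER);
INSERT INTO size_demo VALUES (123456, 123456, 123456);
SELECT VSIZE(n6), VSIZE(n38), VSIZE(i) FROM size_demo;
-- All three columns should report the same size: storage depends on the value, not the declared precision.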

No dimension available for SDO_GEOM.SDO_LENGTH and SDO_GEOM.SDO_AREA, and the workaround of giving a tolerance instead doesn't work

We currently run an ArcGIS Server over an Oracle 12c database. The GIS software manages the database itself, but we would like to take measurements on some values its tables contain.
I would like to list the lengths of some lines contained in the geometry field (SHAPE) of a table, but I haven't succeeded.
select * from TRONCON;
-- ok. Runs fine.
SELECT c.cur, SDO_GEOM.SDO_LENGTH(c.shape, m.diminfo)
FROM TRONCON c, user_sdo_geom_metadata m
WHERE m.table_name = 'TRONCON' AND m.column_name = 'SHAPE';
-- Fails with this error message:
-- ORA-06553: PLS-306: wrong number or types of arguments in call to 'SDO_LENGTH'
-- 06553. 00000 - "PLS-%s: %s"
-- *Cause:
-- *Action:
select * from user_sdo_geom_metadata;
-- Appears empty.
select * from all_sdo_geom_metadata;
-- Appears empty.
Because we aren't the ones filling the database with data (it is the responsibility of the GIS software to manage data and perform structural changes in the schema, if needed), we don't know why user_sdo_geom_metadata and all_sdo_geom_metadata are empty, or how to regenerate them if that can be done. This case is unexpected.
A colleague wants to measure an area and encounters the same kind of problem when attempting to replace the missing dimension with a tolerance:
SELECT sdo_geom.sdo_area(SHAPE, 0.005, 'unit=hectare') FROM ZONE_INTERET;
-- ORA-06553: PLS-306: wrong number or types of arguments in call to 'SDO_AREA'
-- 06553. 00000 - "PLS-%s: %s"
How can we make these functions work?
The fact that the USER_SDO_GEOM_METADATA table is not filled is surprising. ArcGIS is fairly well integrated with Oracle Spatial and does a good job of maintaining that information. Whenever you define a new feature class (using ArcCatalog), you should see a new row in that table. If it is missing, then you are unable to define any spatial index on the data, which would make all queries (including viewing) very slow.
What may be happening is that you are connecting as a different user than the ArcGIS applications do. USER_SDO_GEOM_METADATA (like USER_TABLES) is scoped to the currently connected user; ALL_SDO_GEOM_METADATA (just like ALL_TABLES) covers all the tables accessible by the current user, including those in other schemas.
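If the metadata row really is missing for your schema, you can register it manually: USER_SDO_GEOM_METADATA is an updatable view. A sketch follows; the bounds, tolerances, and SRID below are placeholders you must adapt to your coordinate system:
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('TRONCON', 'SHAPE',
        SDO_DIM_ARRAY(
          SDO_DIM_ELEMENT('X', 0, 1000000, 0.005),   -- placeholder X bounds and tolerance
          SDO_DIM_ELEMENT('Y', 0, 1000000, 0.005)),  -- placeholder Y bounds and tolerance
        NULL);                                       -- placeholder SRID
COMMIT;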
Then again you do not need the info in that table when invoking spatial functions: you can just manually pass a tolerance value. In more recent versions (18.1 and later), the tolerance is actually optional: we pick a default value.
Here are some examples:
SQL> SELECT sdo_geom.sdo_area(geom, 0.005, 'unit=hectare') FROM us_counties where id=1;
SDO_GEOM.SDO_AREA(GEOM,0.005,'UNIT=HECTARE')
--------------------------------------------
156486.874
1 row selected.
SQL> SELECT sdo_geom.sdo_length(geom, 0.005, 'unit=km') FROM us_interstates where id=1;
SDO_GEOM.SDO_LENGTH(GEOM,0.005,'UNIT=KM')
-----------------------------------------
3956.18879
1 row selected.
The syntax error you get (ORA-06553) is very odd. Try describing the package:
describe sdo_geom
This produces a large output. But in there you should see the two signatures for SDO_LENGTH:
FUNCTION SDO_LENGTH RETURNS NUMBER
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
GEOM                           SDO_GEOMETRY            IN
DIM                            SDO_DIM_ARRAY           IN
UNIT                           VARCHAR2                IN     DEFAULT
COUNT_SHARED_EDGES             NUMBER                  IN     DEFAULT
FUNCTION SDO_LENGTH RETURNS NUMBER
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
GEOM                           SDO_GEOMETRY            IN
TOL                            NUMBER                  IN     DEFAULT
UNIT                           VARCHAR2                IN     DEFAULT
COUNT_SHARED_EDGES             NUMBER                  IN     DEFAULT
Is this what you see?

Why does Oracle round up a number with less than 38 significant digits?

We have Oracle Server 10.2.
To test this, I have a very simple table.
CREATE TABLE MYSCHEMA.TESTNUMBER
(
TESTNUMBER NUMBER
)
When I try to insert 0.98692326671601283, the number gets rounded up.
INSERT INTO MYSCHEMA.TESTNUMBER (TESTNUMBER)
VALUES (0.98692326671601283);
The select returns:
select * from TESTNUMBER
0.986923266716013
It rounds the last three digits, "283", up to "3".
Even looking at it with TOAD UI and trying to enter it with TOAD, I get the same result.
Why? Is it possible to insert this number in an Oracle number without it getting rounded up?
I think you need to look into how your client program displays number values. An Oracle NUMBER should store that value with full precision, but the value may be rounded for display by the client.
For instance, using SQL*Plus:
dev> create table dctest (x number);
Table created.
dev> insert into dctest VALUES (0.98692326671601283);
1 row created.
dev> select * from dctest;
X
----------
.986923267
dev> column x format 0.000000000000000000000000000
dev> /
X
------------------------------
0.986923266716012830000000000
As you can see, the default format shows only the first 9 significant digits. But when I explicitly change the column formatting (a client-side feature in SQL*Plus), the full value inserted is displayed.
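A client-independent alternative (a sketch) is to format the value server-side with TO_CHAR, so no client display settings are involved:
SELECT TO_CHAR(x, 'FM0.99999999999999999999') AS full_value FROM dctest;
-- Should show the full stored value, e.g. 0.98692326671601283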

What is the difference between varchar and varchar2 in Oracle?

What is the difference between varchar and varchar2?
As for now, they are synonyms.
VARCHAR is reserved by Oracle to support a distinction between NULL and the empty string in the future, as the ANSI standard prescribes.
VARCHAR2 does not distinguish between a NULL and empty string, and never will.
If you rely on empty string and NULL being the same thing, you should use VARCHAR2.
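A minimal demonstration of the empty-string behaviour (a sketch using a throwaway table):
CREATE TABLE vc_test (v VARCHAR2(10));
INSERT INTO vc_test (v) VALUES ('');
SELECT COUNT(*) FROM vc_test WHERE v IS NULL;
-- Returns 1: Oracle stored the empty string as NULL.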
Currently VARCHAR behaves exactly the same as VARCHAR2. However, the type VARCHAR should not be used as it is reserved for future usage.
Taken from: Difference Between CHAR, VARCHAR, VARCHAR2
Taken from the latest stable Oracle production version 12.2:
Data Types
The major difference is that VARCHAR2 is an internal data type and VARCHAR is an external data type. So we need to understand the difference between an internal and external data type...
Inside a database, values are stored in columns in tables. Internally, Oracle represents data in particular formats known as internal data types.
In general, OCI (Oracle Call Interface) applications do not work with internal data type representations of data, but with host language data types that are predefined by the language in which they are written. When data is transferred between an OCI client application and a database table, the OCI libraries convert the data between internal data types and external data types.
External types provide a convenience for the programmer by making it possible to work with host language types instead of proprietary data formats. OCI can perform a wide range of data type conversions when transferring data between an Oracle database and an OCI application. There are more OCI external data types than Oracle internal data types.
The VARCHAR2 data type is a variable-length string of characters with a maximum length of 4000 bytes. If the init.ora parameter max_string_size is default, the maximum length of a VARCHAR2 can be 4000 bytes. If the init.ora parameter max_string_size = extended, the maximum length of a VARCHAR2 can be 32767 bytes.
The VARCHAR data type stores character strings of varying length. The first 2 bytes contain the length of the character string, and the remaining bytes contain the string. The specified length of the string in a bind or a define call must include the two length bytes, so the largest VARCHAR string that can be received or sent is 65533 bytes long, not 65535.
A quick test in a 12.2 database suggests that, as an internal data type, Oracle still treats a VARCHAR as a pseudotype for VARCHAR2. It is NOT a SYNONYM, which is an actual object type in Oracle.
SQL> select substr(banner,1,80) from v$version where rownum=1;
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> create table test (my_char varchar(20));
Table created.
SQL> desc test
Name     Null?    Type
-------- -------- ------------
MY_CHAR           VARCHAR2(20)
There are also some implications of VARCHAR for ProC/C++ Precompiler options. For programmers who are interested, the link is at: Pro*C/C++ Programmer's Guide
After some experimentation (see below), I can confirm that, as of September 2017, nothing has changed with regard to the functionality described in the accepted answer:
Rextester demo for Oracle 11g: empty strings are inserted as NULLs for both VARCHAR and VARCHAR2.
LiveSQL demo for Oracle 12c: same results.
The historical reason for these two keywords is explained well in an answer to a different question.
VARCHAR can store up to 2000 bytes of characters while VARCHAR2 can store up to 4000 bytes of characters.
If we declare the datatype as VARCHAR, then it will occupy space for NULL values. In the case of the VARCHAR2 datatype, it will not occupy any space for NULL values. E.g.,
name varchar(10)
will reserve 6 bytes of memory even if the name is 'Ravi__', whereas
name varchar2(10)
will reserve space according to the length of the input string, e.g. 4 bytes of memory for 'Ravi__'.
Here, _ represents NULL.
NOTE: varchar will reserve space for null values and varchar2 will not reserve any space for null values.
Currently, they are the same. But previously:
Somewhere on the net, I read that:
VARCHAR is reserved by Oracle to support a distinction between NULL and the empty string in the future, as the ANSI standard prescribes.
VARCHAR2 does not distinguish between a NULL and an empty string, and never will.
Also:
Emp_name varchar(10): if you enter a value shorter than 10 characters, the remaining space cannot be released; a total of 10 spaces is used.
Emp_name varchar2(10): if you enter a value shorter than 10 characters, the remaining space is automatically released.
