Oracle to_number function parameters - oracle

I'm having trouble with the TO_NUMBER function's second and third parameters. Does one of them depend on the other? How does the nls_params parameter work? I can't understand how the result of the query
SELECT TO_NUMBER('17.000,23',
'999G999D99',
'nls_numeric_characters='',.'' ')
REFORMATTED_NUMBER
FROM DUAL;
can be 17000.23. Could somebody please explain the process of the above conversion?
P.S. The above query is taken from an Oracle Database SQL Expert Certificate preparation book.

You are telling the TO_NUMBER function that the two characters ,. in nls_numeric_characters represent the decimal and thousands separators:
G (thousands separator) = .
D (decimal separator) = ,
So it sees the number as seventeen thousand point twenty-three.
see: http://download.oracle.com/docs/cd/B13789_01/olap.101/b10339/x_stddev022.htm#i78653
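As a minimal sketch of the same mapping (the first character of nls_numeric_characters is the decimal separator, the second is the group separator):
SELECT TO_NUMBER('17.000,23',          -- '.' groups the thousands, ',' is the decimal
                 '999G999D99',
                 'nls_numeric_characters='',.''') AS reformatted_number
FROM dual;                             -- returns 17000.23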

Now, I'll answer my own question. While using the TO_NUMBER function I missed the important point that whatever I get from TO_NUMBER is going to be a number. And a number contains nothing other than digits, a decimal point, and the E of scientific notation. So 17,788.99 is not actually a number; it is rather a string representation of 17788.99.
If we try to subtract 500 from 17,788.99 we'll fail. (Well, Oracle implicitly converts numeric strings to numbers and vice versa, but in principle we can't perform arithmetic operations between strings and numbers.) I'm sure the TO_NUMBER function is almost never used to select a column value; it's rather used to make arithmetic operations possible. Instead, we use TO_CHAR to show a column value or any numeric expression in a neat, easy-to-read format. The format models and nls_params are not only for the TO_NUMBER function, but for TO_CHAR as well.
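For example, going the other way round with TO_CHAR (a small sketch; the same format model and nls_params are reused):
SELECT TO_CHAR(17000.23,
               '999G999D99',
               'nls_numeric_characters='',.''') AS formatted
FROM dual;                             -- '17.000,23' (plus a leading space for the sign)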

Related

Oracle SQL PLSQL large number field strange behavior

I have an existing table called temptable; the column largenumber is a NUMBER field with no precision set:
largenumber NUMBER;
Query:
select largenumber from temptable;
It returns:
-51524845525550100000000000000000000
But if I do
column largenumber format 999999999999999999999999999999999999999
And then
select largenumber from temptable;
It returns:
-51524845525550:100000000000000000000
Why is there a colon?
To test, I took the number, removed the colon, and inserted it into another table, temptable2, and did the same column largenumber format; the select returns the number without the colon:
select largenumber from temptable2;
It returns:
-51524845525550100000000000000000000
So the colon is not present here.
So what could possibly be in the original number field to cause that colon?
In the original row, if I do a select and try any TO_CHAR, REPLACE, CAST, or concatenation to text, it gives me a number conversion error.
For example, trying to generate a csv:
select '"' || largenumber || '",'
FROM temptable;
would result in:
an ORA-01722 ("invalid number") error, which occurs when an attempt is made to convert a character string into a number and the string cannot be converted into a valid number.
In a comment (in response to a question from me), you shared that dump(largenumber) on the offending value returns
Typ=2 Len=8: 45,50,56,53,52,48,46,48
From the outset, that means that the data stored on disk is invalid (it is not a valid representation of a value of number data type). Typ=2 is correct, that is for data type number. The length (8 bytes) is correct (we can all count to eight to see that).
What is wrong is the bytes themselves. And, we only need to inspect the first and the last byte to see that.
The first byte is 45. It encodes the sign and the exponent of your number. The first bit (1 or 0) represents the sign: 1 for positive, 0 for negative. 45 is less than 128, so the first bit in the first byte is 0; so the number is negative. (So far this matches what you know about the intended value.)
But, for negative numbers, the last byte is always the magic value 102. Always. In another comment under your original question, Connor McDonald asks about your platform - but this is platform-independent, it is how Oracle encodes numbers for permanent storage on any platform. So, we already know that the dump value you got tells us the value is invalid.
In fact, Connor, in the same comment, gave the correct representation of that number (according to Oracle's scheme for internal representation of numbers). Indeed, just the last byte is wrong: your dump shows 48, but it should be 102.
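To see the expected shape for yourself, you can dump any negative number and check the terminator (a sketch; the leading bytes depend on the value, but the last byte is the point):
SELECT DUMP(-123) AS d FROM dual;
-- something like: Typ=2 Len=4: 61,100,78,102   <-- the last byte is 102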
How can you fix this? If it's a one-off, just use an update statement to replace the value with the correct one and move on. If your table has a primary key, let's call it id, then find the id for this row, and then
update {your_table} set largenumber = -50...... where id = {that_id};
Question is, how many such corrupt values might you have in your table? If it's just one, you can shrug it off; but if it's many (or even "a handful") you may want to figure out how they got there in the first place.
In most cases, the database will reject invalid values; you can't simply insert 'abc' in a number column, for example. But there are ways to get bad data in; even intentionally, and in a repeatable way. So, you would have to investigate how the bad values were inserted (what process was used for insertion).
For a trivial way to insert bad data in a number column, in a repeatable manner, you can see this thread on the Oracle developers forum: https://community.oracle.com/tech/developers/discussion/3903746/detecting-invalid-values-in-the-db
Please be advised that I had just started learning Oracle at that time (I was less than two months in), so I may have said some stupid things in that thread; but the method to insert bad data is described there in full detail, and it was tested. That shows just one possible (and plausible!) way to insert invalid stuff in a table; how it happened in your specific case, you will have to investigate yourself.
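If you do need to hunt for further bad rows, one rough sketch (assuming the table and column names from the question) is to force a character conversion row by row and collect the ROWIDs that fail:
-- Rough sketch: locate rows whose stored NUMBER cannot be converted.
DECLARE
  v_dummy VARCHAR2(100);
BEGIN
  FOR r IN (SELECT rowid AS rid FROM temptable) LOOP
    BEGIN
      SELECT TO_CHAR(largenumber) INTO v_dummy
      FROM temptable
      WHERE rowid = r.rid;
    EXCEPTION
      WHEN OTHERS THEN   -- typically ORA-01722 for a corrupt value
        DBMS_OUTPUT.PUT_LINE('Bad value at rowid ' || ROWIDTOCHAR(r.rid));
    END;
  END LOOP;
END;
/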

Oracle: Why does to_date() accept 2 digit year when I have 4 digits in format string? - and how to enforce 4 digits?

I want to check a date for correctness. (Let's not talk about the fact that the date is stored in a varchar, please ...)
Dates are stored like DDMMYYYY so for instance 24031950 is a correct date. 240319 is not.
So I do this, when the call works, it's a correct date:
select to_date('24031950','DDMMYYYY') from dual;
But unfortunately this also does not return an error (why?):
select to_date('240319','DDMMYYYY') from dual;
But it's interesting that this one does not work:
select to_date('190324','YYYYMMDD') from dual;
So, how do I enforce a check of a 4-digit year with the given format mask?
Thanks!
Quoting the docs:
Oracle Database converts strings to dates with some flexibility. [...]
And I believe that flexibility is what you are seeing. You can turn it off with the fx modifier:
FX: Requires exact matching between the character data and the format model
select to_date('240319','fxDDMMYYYY') from dual;
Gives an ORA-01862 error.
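If you want a reusable check rather than catching the error ad hoc, a small sketch along these lines could work (the function name is made up here):
-- Hypothetical helper: returns 1 if the string is a valid DDMMYYYY date, else 0
CREATE OR REPLACE FUNCTION is_ddmmyyyy (p_str VARCHAR2) RETURN NUMBER IS
  v_dt DATE;
BEGIN
  v_dt := TO_DATE(p_str, 'fxDDMMYYYY');
  RETURN 1;
EXCEPTION
  WHEN OTHERS THEN   -- ORA-01862 and friends
    RETURN 0;
END;
/
SELECT is_ddmmyyyy('24031950') AS ok1,   -- 1
       is_ddmmyyyy('240319')   AS ok2    -- 0
FROM dual;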
Just an additional note to Mat's answer: fx acts like a switch within the string.
For example, TO_DATE('2019-11-5','fxYYYY-MM-DD') gives an ORA-01862 error because exact matching applies to the entire string. If you need an exact match only for parts of the string, then use, for example,
TO_DATE('2019-11-5','fxYYYY-MM-fxDD')
In this case YYYY-MM- has to match exactly, whereas DD uses flexible (or lazy) matching.
To check whether it is in correct format without exceptions you can also use regex functions.
One possible way would be to check whether the string contains 8 digits:
select REGEXP_INSTR ('24031950', '[0-9]{8}') from dual;
1
select REGEXP_INSTR ('240350', '[0-9]{8}') from dual;
0
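Note that '[0-9]{8}' only requires eight digits to appear somewhere in the string, so a longer value such as '2403195099' would also pass. If the value must be exactly eight digits, anchoring the pattern is safer (a sketch):
SELECT CASE WHEN REGEXP_LIKE('24031950', '^[0-9]{8}$') THEN 1 ELSE 0 END AS ok
FROM dual;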

Different matches when using prepared statements on CHAR(3) column

I had to make a CHAR(1 CHAR) column wider and I forgot to change the column type to VARCHAR2:
DUPLICADO CHAR(3 CHAR)
I noticed the error when my PHP app would no longer find exact matches, e.g.:
SELECT *
FROM NUMEROS
WHERE DUPLICADO = :foo
... with :foo being '#4' didn't find the 3-character padded '#4' value. However, I initially hit a red herring while debugging the query in SQL Developer, because injecting raw values into the query would find matches!
SELECT *
FROM NUMEROS
WHERE DUPLICADO = '#4'
Why do I get matches with the second query? Why do prepared statements make a difference?
To expand a little on my comments, I found a bit in the documentation that explains the difference between blankpadded and nonpadded comparison:
http://docs.oracle.com/database/121/SQLRF/sql_elements002.htm#BABJBDGB
If both values in your comparison (the two sides of the equal sign) have datatype CHAR or NCHAR or are literal strings, then Oracle chooses blankpadded comparison. That means that if the lengths are different, then it pads the short one with blanks until they are the same length.
With the column DUPLICADO being a CHAR(3), the value '#4' is stored in the column as three characters '#4 ' (note the blank as third character.) When you do DUPLICADO = '#4' the rule states Oracle will use blankpadded comparison and therefore blankpad the literal '#4' until it has the same length as the column. So it actually becomes DUPLICADO = '#4 '.
But when you do DUPLICADO = :foo, it will depend on the datatype of the bind variable. If the datatype is CHAR, it will also perform blankpadded comparison. But if the datatype is VARCHAR2, then Oracle will use non-padded comparison, and then it will be up to you to do the blankpadding where necessary.
Depending on client or client language you may be able to specify the datatype of the bind variable and thereby get blankpadded or nonpadded comparison as needed.
SQL Developer may be a special case that might not allow you to specify the datatype - it just possibly might default to bind variables always being datatype VARCHAR2. I don't know enough about SQL Developer to be certain about that ;-)
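If changing the column to VARCHAR2 is not an option, one workaround sketch is to make both sides of the comparison agree on padding (note that the TRIM variant may prevent index use on DUPLICADO):
-- pad the bind value out to the column width ...
SELECT * FROM numeros WHERE duplicado = RPAD(:foo, 3);
-- ... or compare trimmed values
SELECT * FROM numeros WHERE TRIM(duplicado) = :foo;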

How to Convert integer to decimal in oracle

How can I convert an integer to a decimal in Oracle? Is there any function like TO_CHAR?
I need to put this function in a WHERE clause. Thanks.
SELECT IDDS,
ID
FROM T_TABLEONE
WHERE CAST(ID as DECIMAL)='1,301131832E19';
If you apply a function to the column then you run the risk of:
being unable to use a regular index on that column
obscuring the expected cardinality of the result set, which can cause query optimisation problems.
So it's generally a bad practice, and you'd do better to cast your literal to the same type as the column, in this case using to_number().
However in this case I'm surprised that it's necessary, as Oracle's type conversion in recent releases should be able to cope with comparing an integer column to a number expressed in scientific notation: http://sqlfiddle.com/#!4/336d3/8
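A sketch of that suggestion, converting the literal rather than the column (the REPLACE assumes the comma in the source string is a decimal separator and that '.' is the session decimal character):
SELECT idds, id
FROM t_tableone
WHERE id = TO_NUMBER(REPLACE('1,301131832E19', ',', '.'));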
Use CAST:
CAST(myInt AS DECIMAL)
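A minimal usage example (DECIMAL is an ANSI alias that Oracle maps to NUMBER, and CAST also accepts an explicit precision and scale):
SELECT CAST(42 AS DECIMAL(10,2)) AS d FROM dual;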

How to efficiently convert text to number in Oracle PL/SQL with non-default NLS_NUMERIC_CHARACTERS?

I'm trying to find an efficient, generic way to convert from a string to a number in PL/SQL, where the local setting for NLS_NUMERIC_CHARACTERS is unpredictable -- and preferably I won't touch it. The input format is the programming standard "123.456789", but with an unknown number of digits on each side of the decimal point.
select to_number('123.456789') from dual;
-- only works if nls_numeric_characters is '.,'
select to_number('123.456789', '99999.9999999999') from dual;
-- only works if the number of digits in the format is large enough
-- but I don't want to guess...
to_number accepts a 3rd parameter, but in that case you have to specify a second parameter too, and there is no format spec for "default"...
select to_number('123.456789', null, 'nls_numeric_characters=''.,''') from dual;
-- returns null
select to_number('123.456789', '99999D9999999999', 'nls_numeric_characters=''.,''') from dual;
-- "works" with the same caveat as (2), so it's rather pointless...
There is another way using PL/SQL:
CREATE OR REPLACE
FUNCTION STRING2NUMBER (p_string varchar2) RETURN NUMBER
IS
v_decimal char;
BEGIN
SELECT substr(VALUE, 1, 1)
INTO v_decimal
FROM NLS_SESSION_PARAMETERS
WHERE PARAMETER = 'NLS_NUMERIC_CHARACTERS';
return to_number(replace(p_string, '.', v_decimal));
END;
/
select string2number('123.456789') from dual;
which does exactly what I want, but it doesn't seem efficient if you do it many, many times in a query. You cannot cache the value of v_decimal (fetch once and store in a package variable), because the function doesn't know whether you have changed your session value for NLS_NUMERIC_CHARACTERS, in which case it would break again.
Am I overlooking something? Or am I worrying too much, and Oracle does this a lot more efficient then I'd give it credit for?
The following should work:
SELECT to_number(:x,
translate(:x, '012345678-+', '999999999SS'),
'nls_numeric_characters=''.,''')
FROM dual;
It will build the correct second argument (999.999999 for this example) with the efficient translate, so you don't have to know how many digits there are beforehand. It will work with all supported Oracle number formats (up to 62 significant digits, apparently, in 10.2.0.3).
Interestingly, if you have a really big string the simple to_number(:x) will work whereas this method will fail.
Edit: support for negative numbers thanks to sOliver.
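To illustrate what the translate builds, here is the same trick applied to a literal (the intermediate format model is shown in the comment):
-- translate('-123.456789', '012345678-+', '999999999SS') -> 'S999.999999'
SELECT to_number('-123.456789',
                 translate('-123.456789', '012345678-+', '999999999SS'),
                 'nls_numeric_characters=''.,''') AS n
FROM dual;   -- -123.456789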
If you are doing a lot of work per session, an option may be to use
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,'
at the beginning of your task.
Of course, if lots of other code is executed in the same session, you may get funky results :-)
However we are able to use this method in our data load procedures, since we have dedicated programs with their own connection pools for loading the data.
Sorry, I noticed later that your question was about the other way round. Nevertheless it's noteworthy that for the opposite direction there is an easy solution:
A bit late, but today I noticed the special format masks 'TM9' and 'TME' which are described as "the text minimum number format model returns (in decimal output) the smallest number of characters possible." on https://docs.oracle.com/cloud/latest/db112/SQLRF/sql_elements004.htm#SQLRF00210.
It seems as if TM9 was invented just to solve this particular problem:
select to_char(1234.5678, 'TM9', 'NLS_NUMERIC_CHARACTERS=''.,''') from dual;
The result is '1234.5678' with no leading or trailing blanks, and a decimal POINT despite my environment containing NLS_LANG=GERMAN_GERMANY.WE8MSWIN1252, which would normally cause a decimal COMMA.
select to_number(replace(:X,'.',to_char(0,'fmd'))) from dual;
By the way:
select to_number(replace('1.2345e-6','.',to_char(0,'fmd'))) from dual;
and if you want to be more strict:
select to_number(translate(:X,to_char(0,'fmd')||'.','.'||to_char(0,'fmd'))) from dual;
Is it realistic that the number of digits is unlimited?
If we assume it is, then isn't that a good reason to look into the requirements more carefully?
If we really do have that situation where the initial string is super long, then the following does the trick:
select to_number(
         '11111111.2222',
         'FM' || lpad('9', 32, '9') || 'D' || lpad('9', 30, '9'),
         'NLS_NUMERIC_CHARACTERS=''.,'''
       )
from dual;
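For reference, the concatenation above just produces a long, plain format mask; you can peek at it directly:
-- 'FM' followed by 32 nines, 'D', and 30 nines
SELECT 'FM' || lpad('9', 32, '9') || 'D' || lpad('9', 30, '9') AS mask
FROM dual;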
