Oracle determinism requirements and idiosyncrasies

I've been troubled by my lack of understanding about an issue that periodically emerges: function determinism.
From the docs, it seems fairly clear:
A DETERMINISTIC function may not have side effects.
A DETERMINISTIC function may not raise an unhandled exception.
As these are important core concepts with robust, central implementations in standard packages, I don't think this is a bug (the fault lies in my assumptions and understanding, not in Oracle). That said, both of these requirements sometimes appear to be applied idiosyncratically within the STANDARD package and the DBMS_ and UTL_ packages.
I hoped to post a couple of examples of Oracle functions that raise doubts for me about my use of DETERMINISTIC and the nuances in these restrictions, and see if anyone can explain how things fit together. I apologize that this is something of a "why" question, and it can be migrated if needed, but the response to this question (Is it ok to ask a question where you've found a solution but don't know why something was behaving the way it was?) made me think it might be appropriate for SO.
Periodically in my coding, I face uncertainty about whether my own UDFs qualify as pure, and at other times I use Oracle functions that I am surprised to learn are impure. If anyone can take a look and advise, I would be grateful.
As a first example, take TO_NUMBER. This function seems pure, but it also throws exceptions. In this example I'll use TO_NUMBER in a virtual column (DETERMINISTIC should be required here):
CREATE TABLE TO_NUMBER_IS_PURE_BUT_THROWS (
SOURCE_TEXT CHARACTER VARYING(5 CHAR) ,
NUMERICIZATION NUMBER(5 , 0) GENERATED ALWAYS AS (TO_NUMBER(SOURCE_TEXT , '99999')) ,
CONSTRAINT POSITIVE_NUMBER CHECK (NUMERICIZATION >= 0)
);
Table TO_NUMBER_IS_PURE_BUT_THROWS created.
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('0',DEFAULT);
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('88088',DEFAULT);
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('UH-OH',DEFAULT);
1 row inserted.
1 row inserted.
ORA-01722: invalid number
The ORA-01722 would seem to violate the unhandled-exception requirement. Presumably any function I create that casts via TO_NUMBER should handle this possibility to remain pure. But throwing the exception here seems appropriate, and reliable. It seems there is some debate about whether exceptions violate referential-transparency (Why is the raising of an exception a side effect?)
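Incidentally, on Oracle 12.2 and later the exception can be sidestepped altogether with TO_NUMBER's DEFAULT ... ON CONVERSION ERROR clause; a minimal sketch:

SELECT TO_NUMBER('UH-OH' DEFAULT NULL ON CONVERSION ERROR, '99999') AS NUMERICIZATION FROM DUAL;
-- returns NULL instead of raising ORA-01722

Whether a conversion that silently returns NULL is more or less "pure" is exactly the kind of nuance I'm asking about.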
The second situation I encounter is system functions that seem like they should be DETERMINISTIC but aren't. There must be some reason they are considered impure; in some cases, it seems unfathomable that the internals would be generating side effects.
An extreme example of this could be DBMS_ASSERT.NOOP though there are many others. The function returns its input unmodified. How can it be nondeterministic?
CREATE TABLE HOW_IS_NOOP_IMPURE (
SOURCE_TEXT VARCHAR2(256 BYTE),
COPY_TEXT VARCHAR2(256 BYTE) GENERATED ALWAYS AS (DBMS_ASSERT.NOOP(SOURCE_TEXT)),
CONSTRAINT COPY_IS_NOT_NULL CHECK(COPY_TEXT IS NOT NULL)
);
Yields:
ORA-30553: The function is not deterministic
Presumably it violates the requirements for determinism, but that is hard to imagine. I wonder what I'm missing in my presumption that functions like this would be deterministic.
EDIT In response to Lukasz's comment about session settings:
I can accept it if cross-session repeatability is the root cause of functions like NOOP not being DETERMINISTIC, but TO_CHAR is deterministic/eligible for use in virtual columns et al., yet appears to be sensitive to session settings in its format masks:
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '._';
Session altered.
CREATE TABLE TO_CHAR_NLS(
INPUT_NUMBER NUMBER(6,0),
OUTPUT_TEXT CHARACTER VARYING(64 CHAR) GENERATED ALWAYS AS (TO_CHAR(INPUT_NUMBER,'999G999'))
);
Table TO_CHAR_NLS created.
INSERT INTO TO_CHAR_NLS VALUES (123456,DEFAULT);
1 row inserted.
INSERT INTO TO_CHAR_NLS VALUES (111222,DEFAULT);
1 row inserted.
SELECT INPUT_NUMBER, OUTPUT_TEXT FROM TO_CHAR_NLS ORDER BY 1 ASC;

INPUT_NUMBER OUTPUT_TEXT
------------ -----------
      111222 111_222
      123456 123_456
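If the goal is to make the format mask immune to session settings, TO_CHAR accepts a third argument that pins the NLS parameters explicitly; a minimal sketch (a plain SELECT, not a virtual column):

SELECT TO_CHAR(123456, '999G999', 'NLS_NUMERIC_CHARACTERS=''.,''') AS OUTPUT_TEXT FROM DUAL;
-- OUTPUT_TEXT
-- -----------
-- 123,456  (regardless of the session's NLS_NUMERIC_CHARACTERS)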

The ORA-01722 would seem to violate the unhandled-exception
requirement. Presumably any function I create that casts via TO_NUMBER
should handle this possibility to remain pure.
Firstly, I must appreciate you for asking such a good question. Now, you might expect TO_NUMBER to convert any text passed to the function, but you should know that TO_NUMBER has some restrictions.
As per TO_NUMBER definition:
The TO_NUMBER function converts a formatted TEXT or NTEXT expression
to a number. This function is typically used to convert the
formatted numerical output of one application (which includes currency symbols, decimal markers, thousands group markers, and so
forth) so that it can be used as input to another application.
It clearly says it is used to cast the formatted numerical output of one application; that means TO_NUMBER itself expects numerical input, and when you write the following:
INSERT INTO TO_NUMBER_IS_PURE_BUT_THROWS VALUES ('UH-OH',DEFAULT);
You passed completely unexpected input to the TO_NUMBER function, and hence it throws the error ORA-01722: invalid number as expected behavior.
Read more about TO_NUMBER.
Secondly,
An extreme example of this could be DBMS_ASSERT.NOOP though there are
many others. The function returns its input unmodified. How can it be
nondeterministic?
The DBMS_ASSERT.NOOP function can be used where someone is passing an actual piece of code through a variable and doesn't want it to be checked for SQL injection attacks.
This has to be nondeterministic, even though it just returns what we input to the function. I'll show an example to demonstrate why a function must not be marked DETERMINISTIC unless it truly is. Let's say I create a function years_from_today as deterministic.
CREATE OR REPLACE FUNCTION years_from_today
( p_date IN DATE )
RETURN NUMBER DETERMINISTIC IS
BEGIN
RETURN ABS(MONTHS_BETWEEN(SYSDATE, p_date) / 12);
END years_from_today;
/
Now I create a table and use this function in a query as below:
CREATE TABLE det_test AS
SELECT TO_DATE('01-JUL-2009', 'DD-MON-YYYY') AS date_value
FROM dual;
SELECT date_value, SYSDATE, years_from_today(date_value)
FROM det_test
WHERE years_from_today(date_value) < 2;
Output
DATE_VALU SYSDATE YEARS_FROM_TODAY(DATE_VALUE)
--------- --------- ----------------------------
01-JUL-09 20-SEP-10 1.21861774
Then i create a function-based index on the new table.
CREATE INDEX det_test_fbi ON det_test (years_from_today(date_value));
Now, to see the implications of our DETERMINISTIC choice, change the date on the server (in a test environment of course) to move ahead a full year. Even though the date has changed, running the query again will still return the same value as before from YEARS_FROM_TODAY, along with the same row, because the index is used instead of executing the function.
SELECT date_value, SYSDATE, years_from_today(date_value)
FROM det_test
WHERE years_from_today(date_value) < 2;
Output:
DATE_VALU SYSDATE YEARS_FROM_TODAY(DATE_VALUE)
--------- --------- ----------------------------
01-JUL-09 20-SEP-11 1.2186201
Without the WHERE clause, the query should return the following:
DATE_VALU SYSDATE YEARS_FROM_TODAY(DATE_VALUE)
--------- --------- ----------------------------
01-JUL-09 20-SEP-11 2.21867063
As is evident from the erroneous output, a function should never be created as deterministic unless it will ALWAYS return the same value given the same parameters.
And hence the assumption that DBMS_ASSERT.NOOP could be marked DETERMINISTIC does not hold true in all cases.
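A hedged aside on the stale-index scenario above: per the documentation, once the function's semantics have effectively changed, dependent function-based indexes must be rebuilt by hand so the stored values are recomputed. Using the index from this example:

ALTER INDEX det_test_fbi REBUILD;

After the rebuild the indexed values reflect the new server date, and the filtered query returns the correct (empty) result.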

Related

Why does Oracle change the index construction function instead of raising an error? ORA-01722: invalid number from an index on a field of type varchar2

Creating a mySomeTable table with 2 fields
create table mySomeTable (
IDRQ VARCHAR2(32 CHAR),
PROCID VARCHAR2(64 CHAR)
);
Creating an index on the table by the PROCID field
create index idx_PROCID on mySomeTable(trunc(PROCID));
Inserting records:
insert into mySomeTable values ('a', '1'); -- OK
insert into mySomeTable values ('b', 'c'); -- FAIL
As you can see, a mistake was made in the index-creation script: it tries to build an index on the field using the trunc() function.
trunc() is a function for working with dates or numbers, but the field has a string type.
Nevertheless, the index-creation script runs successfully and creates the index without displaying any warnings or errors.
An index is created on the table using the TRUNC(TO_NUMBER(PROCID)) function
When trying to insert or change an entry in the table, if PROCID cannot be converted to a number, I get the error ORA-01722: invalid number, which is actually logical.
However, since I am working with a table of strings and adding string values to it, an error about converting to a number was misleading, and I did not understand what was happening...
Question: Why does Oracle change the index construction function, instead of giving an error? And how can this be avoided in the future?
Oracle version 19.14
Naturally, there was only one solution - to create the right index with the right script
create index idx_PROCID on mySomeTable(PROCID);
however, this does not explain, to me, this Oracle behavior.
Oracle doesn't know if the index declaration is wrong or the column data type is wrong. Arguably (though some may well disagree!) Oracle shouldn't try to second-guess your intentions or enforce restrictions beyond those documented in the manual - that's what user-defined constraints are for. And, arguably, this index acts as a form of pseudo-constraint. That's a decision for the developer, not Oracle.
It's legal, if usually ill-advised, to store a number in a string column. If you actually intentionally chose to store numbers as strings - against best practice and possibly just to irritate future maintainers of your code - then the index behaviour is reasonable.
A counter-question is to ask where it should draw the line - if you expect it to error on your index expression, what about something like
create index idx_PROCID on mySomeTable(
case when regexp_like(PROCID, '^\d.?\d*$') then trunc(PROCID) end
);
or
create index idx_PROCID on mySomeTable(
trunc(to_number(PROCID default null on conversion error))
);
You might actually have chosen to store both numeric and non-numeric data in the same string column (again, I'm not advocating that), and an index like that might then be useful - and you wouldn't want Oracle to prevent you from creating it.
Something that obviously doesn't make sense to you, and that you feel shouldn't be allowed, is much harder for software to evaluate.
Interestingly the documentation says:
Oracle recommends that you specify explicit conversions, rather than rely on implicit or automatic conversions, for these reasons:
...
If implicit data type conversion occurs in an index expression, then Oracle Database might not use the index because it is defined for the pre-conversion data type. This can have a negative impact on performance.
which is presumably why it actually chooses here to apply explicit conversion when it creates the index expression (which you can see in user_ind_expressions - fiddle)
But you'd get the same error if the index expression wasn't modified - there would still be an implicit conversion of 'c' to a number, and that would still throw ORA-01722. As would some strings that look like numbers if your NLS settings are incompatible.
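To see the rewritten expression for yourself, query the data dictionary; a minimal sketch:

SELECT index_name, column_expression
FROM user_ind_expressions
WHERE table_name = 'MYSOMETABLE';
-- COLUMN_EXPRESSION shows TRUNC(TO_NUMBER("PROCID")), not TRUNC("PROCID")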

PL/SQL Stored Procedure create tables

I've been tasked with improving old PL/SQL and Oracle SQL legacy code. In all there are around 7000 lines of code! One aspect of the existing code that really surprises me is the previous coder needlessly created hundreds of lines of code by not writing any procedures or functions - instead the coder essentially repeats the same code throughout.
For example, in the existing code there are literally 40 or more repetitions of the following SQL:
CREATE TABLE tmp_clients
AS
SELECT * FROM live.clients;
CREATE TABLE tmp_customers
AS
SELECT * FROM live.customers;
CREATE TABLE tmp_suppliers
AS
SELECT * FROM live.suppliers WHERE type_id = 1;
and many, many more.....
I'm very new to writing in PL/SQL, though I have recently purchased the excellent book "Oracle PL/SQL programming" by Steven Feuerstein. However, as far as I can tell, I should be able to write a callable procedure such as:
procedure create_temp_table (new_table_nme in varchar2,
                             source_table  in varchar2)
IS
  s_query varchar2(200);
BEGIN
  s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
  execute immediate s_query;
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -955 THEN
      NULL;
    ELSE
      RAISE;
    END IF;
END;
I would then simply call the procedure as follows:
create_temp_table('tmp.clients', 'live.clients');
create_temp_table('tmp.customers', 'live.customers');
Is my proposed approach reasonable given the problem as stated?
Are the datatypes in the procedure call reasonable, ie should varchar2(60) be used, or is it possible to force the 'source_table' parameter to be a table name in the schema? What happens if the table name is more than 60 characters?
I want to be able to pass a third, non-required parameter in cases where the data has to be restricted in a trivial way, i.e. to deal with cases like "WHERE type_id = 1". How do I modify the procedure to include a parameter that is only used occasionally, and how would I modify the rest of the code? I would probably add some sort of IF/ELSE statement to check whether the third parameter is NULL and then construct s_query accordingly (see the sketch after this list).
How would I check that the table has actually been created successfully?
I want to trap for two other exceptions, namely
The new table (eg 'tmp.clients') already exists; and
The source table doesn't exist.
Does the EXCEPTION as written handle these cases?
More generally, from where can I obtain the SQL error codes and their meanings?
Any suggested improvements to the code would be gratefully received.
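For point 3, something like this sketch is what I have in mind (p_where_clause being the optional, occasionally-used parameter):

create or replace procedure create_temp_table (new_table_nme  in varchar2,
                                               source_table   in varchar2,
                                               p_where_clause in varchar2 default null)
IS
  s_query varchar2(500);
BEGIN
  s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
  IF p_where_clause IS NOT NULL THEN
    s_query := s_query || ' WHERE ' || p_where_clause;
  END IF;
  execute immediate s_query;
END;
/
-- usage: create_temp_table('tmp_suppliers', 'live.suppliers', 'type_id = 1');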
You could get rid of a lot of code (gradually!) by using GLOBAL temporary tables.
Execute immediate is not a bad practice but if there are other options then they should be used. Global temp tables are common where you want to extract and transform data but once processed you don't need it anymore until the next load. Each user can only see the data they insert and no redo logs are generated. You can index the data for faster querying if required.
Something like this
-- Create table
create global temporary table GT_CLIENTS
(
id NUMBER(10) not null,
Client_id NUMBER(10) not null,
modified_by_id NUMBER(10),
transaction_id NUMBER(10),
local_transaction_id VARCHAR2(30) not null,
last_modified_date_tz TIMESTAMP(6) WITH TIME ZONE not null
)
on commit preserve rows;
I recommend the on commit preserve rows option so that you can debug your procedure and see what went into the table.
Usage would be
INSERT INTO GT_CLIENTS
SELECT * FROM live.clients;
If this is the route you want to take to minimize changes, then the error for "source table does not exist" is -942, which you will want to stop for rather than continue, as your temp table would not have been created. Similarly, just continuing if you get an "object already exists" error will be problematic, as you will not have reloaded it with the new data - the create failed, so the table still has the data from the last run. So I would definitely do some more thinking about your exception handler.
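If you do stay with EXECUTE IMMEDIATE, here is a hedged sketch of a handler that names both errors instead of swallowing them (the exception names are illustrative):

DECLARE
  e_table_exists   EXCEPTION;
  e_missing_source EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_table_exists, -955);
  PRAGMA EXCEPTION_INIT(e_missing_source, -942);
BEGIN
  EXECUTE IMMEDIATE 'CREATE TABLE tmp_clients AS SELECT * FROM live.clients';
EXCEPTION
  WHEN e_table_exists THEN
    RAISE; -- table still holds last run's data; drop and re-create rather than continue
  WHEN e_missing_source THEN
    RAISE; -- nothing was created; continuing silently would hide the real problem
END;
/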
That said, I also concur that this is generally not the best way to do things. Creating and dropping objects in a multi-user environment is a disaster in the making, and seems a silly waste of resources when there are more appropriate options available.

When does a deterministic function use the previous calculated value?

Consider a deterministic function like:
CREATE OR REPLACE FUNCTION SCHEMA.GET_NAME(ss_id nvarchar2)
RETURN nvarchar2 DETERMINISTIC IS
  tmpVar nvarchar2(500);
BEGIN
  select name into tmpVar from logistics.organization_items
  where id = ss_id;
  return tmpVar;
END get_name;
Using Toad I called SCHEMA.GET_NAME(1) and it returned A. I then changed the value in the table from A to B, and calling SCHEMA.GET_NAME(1) again returned B.
That is a good result, but I'm afraid of the value not being updated, because of this page in the documentation, which says:
When Oracle Database encounters a deterministic function in one of these contexts, it attempts to use previously calculated results when possible rather than reexecuting the function. If you subsequently change the semantics of the function, you must manually rebuild all dependent function-based indexes and materialized views.
In what situations would the value of GET_NAME(1) return an old cached value (A instead of B)?
If you select from a table then the results of your function are not deterministic. A deterministic system is one that will always produce the same output, given the same initial conditions.
It is possible to alter the information in a table, therefore a function that selects from a table is not deterministic. To quote from the PL/SQL Language Reference:
Do not specify this clause to define a function that uses package variables or that accesses the database in any way that might affect the return result of the function. The results of doing so are not captured if the database chooses not to reexecute the function.
In other words, Oracle does not guarantee that the results of the function will be accurate (they just might be). If your table is static, and unlikely to ever change, then it should be okay but this is not something I'd ever like to rely on. To answer your question, do not assume that Oracle will return anything other than the cached value within the same transaction/session.
If you need to speed this up there are two ways. Firstly, check that you have an index on ID! Then:
1. Simply JOIN to this table. If your function is only this then there is no need for the function to exist.
2. Use scalar sub-query caching (not necessarily possible but worth the try), e.g.:
select ( select get_name(:id) from dual )
from your_table
Oracle will create an in-memory hash of the results of the function, like a result cache. If you're executing the same function multiple times then Oracle will hit the cache rather than the function.
Ben's answer sums it up nicely, and I would just like to add that the way you used DETERMINISTIC keyword inside your function is not right - keeping in view that you are reading the value from a table and then returning the same to the user.
A deterministic function should be used in cases, where you are evaluating an expression over a fixed input, for example, when you need to return a substring, or upper/lower case for the input string. Programatically, you know that for the same input the lowercase function will always return the same value, and so you would like to cache the result (using deterministic keyword).
When you read a value from a table, Oracle has no way to know that the value in the column has not changed, and so it prefers to re-execute the function rather than depend on the cached result (which makes sense).
Can you add a timestamp parameter to your function? Then pass in sysdate to the function from wherever you're calling it.
This way, you're effectively caching the result and you avoid running the function over and over when it generally returns the same value within a given transaction.
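A hedged sketch of that idea, applied to the GET_NAME function from the question (the extra p_asof parameter is purely illustrative and never used in the body; it only widens the cache key so results are reused within one statement but not across time):

CREATE OR REPLACE FUNCTION GET_NAME(ss_id  nvarchar2,
                                    p_asof date)
RETURN nvarchar2 DETERMINISTIC IS
  tmpVar nvarchar2(500);
BEGIN
  select name into tmpVar from logistics.organization_items
  where id = ss_id;
  return tmpVar;
END get_name;
/
-- call as GET_NAME(id, SYSDATE); within a single statement SYSDATE is fixed,
-- so cached results are at most one statement old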
Erez's remark above is the answer I was looking for. With this solution, before executing the query or PL/SQL unit you can force the function to be executed again by resetting the function's return values (e.g. by changing a package variable).
I use this for:
select ...
from big_table_vw;
where the view is defined as
create view big_table_vw
as
select ... (analytical functions)
from big_table
where last_mutated >= get_date();
In my case the big_table_vw contains window-functions that prevents Oracle to push the predicate into the view.
This is a late follow-up to a long-answered question, but I just wanted to add that Oracle does provide a caching mechanism for functions with mutable dependencies. RESULT_CACHE is an alternative to DETERMINISTIC that allows Oracle to abandon cached function results any time a referenced object is modified.
This way one can cache costly calculations against rarely-updated objects with confidence that cached results will not return incorrect results.
Here's an example using mythological monsters:
CREATE TABLE MONSTER (
MONSTER_NAME VARCHAR2(100) NOT NULL PRIMARY KEY
);
INSERT INTO MONSTER VALUES ('Chthulu');
INSERT INTO MONSTER VALUES ('Grendel');
INSERT INTO MONSTER VALUES ('Scylla');
INSERT INTO MONSTER VALUES ('Nue');
COMMIT;
CREATE OR REPLACE PACKAGE MONSTER_PKG
IS
FUNCTION IS_THIS_A_MONSTER(P_MONSTER_NAME IN VARCHAR2)
RETURN BOOLEAN RESULT_CACHE;
END MONSTER_PKG;
/
CREATE OR REPLACE PACKAGE BODY MONSTER_PKG
IS
FUNCTION IS_THIS_A_MONSTER(P_MONSTER_NAME IN VARCHAR2)
RETURN BOOLEAN
RESULT_CACHE RELIES_ON (MONSTER)
IS
V_MONSTER_COUNT NUMBER(1, 0) := 0;
BEGIN
SELECT COUNT(*)
INTO V_MONSTER_COUNT
FROM MONSTER
WHERE MONSTER_NAME = P_MONSTER_NAME;
RETURN (V_MONSTER_COUNT > 0);
END;
END MONSTER_PKG;
/
When a scenario like the one below occurs, any existing cache is invalidated and a new cache can then be rebuilt.
BEGIN
DBMS_OUTPUT.PUT_LINE('Is Kraken initially a monster?');
IF MONSTER_PKG.IS_THIS_A_MONSTER('Kraken')
THEN
DBMS_OUTPUT.PUT_LINE('Kraken is initially a monster');
ELSE
DBMS_OUTPUT.PUT_LINE('Kraken is not initially a monster');
END IF;
INSERT INTO MONSTER VALUES ('Kraken');
COMMIT;
DBMS_OUTPUT.PUT_LINE('Is Kraken a monster after update?');
IF MONSTER_PKG.IS_THIS_A_MONSTER('Kraken')
THEN
DBMS_OUTPUT.PUT_LINE('Kraken is now a monster');
ELSE
DBMS_OUTPUT.PUT_LINE('Kraken is not now a monster');
END IF;
END;
/
Is Kraken initially a monster?
Kraken is not initially a monster
Is Kraken a monster after update?
Kraken is now a monster

Should I use Oracle's sys_guid() to generate guids?

I have some inherited code that calls SELECT SYS_GUID() FROM DUAL each time an entity is created. This means that for each insertion there are two calls to Oracle, one to get the Guid, and another to insert the data.
I suppose that there may be a good reason for this, for example - Oracle's Guids may be optimized for high-volume insertions by being sequential and thus they maybe are trying to avoid excessive index tree re-balancing.
Is there a reason to use SYS_GUID as opposed to building your own Guid on the client?
Why roll your own if you already have it provided to you? Also, you don't need to grab it first and then insert; you can just insert:
create table my_tab
(
val1 raw(16),
val2 varchar2(100)
);
insert into my_tab(val1, val2) values (sys_guid(), 'Some data');
commit;
You can also use it as a default value for a primary key:
drop table my_tab;
create table my_tab
(
val1 raw(16) default sys_guid(),
val2 varchar2(100),
primary key(val1)
);
Here there's no need to set up a before-insert trigger to use a sequence (or, in most cases, to even care about val1 or how it's populated in the code).
Sequences also mean more maintenance, not to mention the portability issues when moving data between systems.
But sequences are more human-friendly IMO (looking at and using a number is far better than a 32-character hex rendering of a raw value). There may be other benefits to sequences; I haven't done any extensive comparisons, so you may wish to run some performance tests first.
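For comparison, a minimal sketch of the sequence-based equivalent, assuming Oracle 12c or later (where a sequence can be a column default; the names are illustrative):

create sequence my_tab_seq;

create table my_tab_seq_pk
(
  val1 number default my_tab_seq.nextval,
  val2 varchar2(100),
  primary key(val1)
);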
If your concern is two database calls, you should be able to call SYS_GUID() within your INSERT statement. You could even use a RETURNING clause to get the value that Oracle generated, so that you have it in your application for further use.
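A minimal sketch of that single round trip, reusing the my_tab table from the answer above:

DECLARE
  v_guid my_tab.val1%TYPE;
BEGIN
  INSERT INTO my_tab(val1, val2)
  VALUES (SYS_GUID(), 'Some data')
  RETURNING val1 INTO v_guid;
  -- v_guid now holds the server-generated GUID for further use
END;
/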
SYS_GUID can be used as a default value for a primary key column, which is often more convenient than using a sequence, but note that the values will be more or less random and not sequential. On the plus side, that may reduce contention for hot blocks, but on the minus side your index inserts will be all over the place as well. We generally recommend against this practice.
for reference click here
I have found no reason to generate a GUID from Oracle. The round trip between Oracle and the client for every GUID is likely slower than the occasional index rebalancing caused by random-value inserts.

How to efficiently convert text to number in Oracle PL/SQL with non-default NLS_NUMERIC_CHARACTERS?

I'm trying to find an efficient, generic way to convert from a string to a number in PL/SQL, where the local setting for NLS_NUMERIC_CHARACTERS is unpredictable -- and preferably I won't touch it. The input format is the programming standard "123.456789", but with an unknown number of digits on each side of the decimal point.
select to_number('123.456789') from dual;
-- only works if nls_numeric_characters is '.,'
select to_number('123.456789', '99999.9999999999') from dual;
-- only works if the number of digits in the format is large enough
-- but I don't want to guess...
to_number accepts a 3rd parameter, but in that case you have to specify a second parameter too, and there is no format spec for "default"...
select to_number('123.456789', null, 'nls_numeric_characters=''.,''') from dual;
-- returns null
select to_number('123.456789', '99999D9999999999', 'nls_numeric_characters=''.,''') from dual;
-- "works" with the same caveat as (2), so it's rather pointless...
There is another way using PL/SQL:
CREATE OR REPLACE
FUNCTION STRING2NUMBER (p_string varchar2) RETURN NUMBER
IS
v_decimal char;
BEGIN
SELECT substr(VALUE, 1, 1)
INTO v_decimal
FROM NLS_SESSION_PARAMETERS
WHERE PARAMETER = 'NLS_NUMERIC_CHARACTERS';
return to_number(replace(p_string, '.', v_decimal));
END;
/
select string2number('123.456789') from dual;
which does exactly what I want, but it doesn't seem efficient if you do it many, many times in a query. You cannot cache the value of v_decimal (fetch once and store in a package variable) because the function cannot know whether you have since changed your session value for NLS_NUMERIC_CHARACTERS, and then it would break, again.
Am I overlooking something? Or am I worrying too much, and Oracle does this a lot more efficient then I'd give it credit for?
The following should work:
SELECT to_number(:x,
translate(:x, '012345678-+', '999999999SS'),
'nls_numeric_characters=''.,''')
FROM dual;
It will build the correct second argument 999.999999 with the efficient translate so you don't have to know how many digits there are beforehand. It will work with all supported Oracle number format (up to 62 significant digits apparently in 10.2.0.3).
Interestingly, if you have a really big string the simple to_number(:x) will work whereas this method will fail.
Edit: support for negative numbers thanks to sOliver.
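To make the trick concrete, this is the format mask the TRANSLATE call builds for a sample (negative) input:

SELECT translate('-123.456789', '012345678-+', '999999999SS') AS mask FROM dual;
-- MASK
-- -----------
-- S999.999999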
If you are doing a lot of work per session, an option may be to use
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,'
at the beginning of your task.
Of course, if lots of other code is executed in the same session, you may get funky results :-)
However we are able to use this method in our data load procedures, since we have dedicated programs with their own connection pools for loading the data.
Sorry, I noticed later that your question was for the other way round. Nevertheless it's noteworthy that for the opposite direction there is an easy solution:
A bit late, but today I noticed the special format masks 'TM9' and 'TME' which are described as "the text minimum number format model returns (in decimal output) the smallest number of characters possible." on https://docs.oracle.com/cloud/latest/db112/SQLRF/sql_elements004.htm#SQLRF00210.
It seems as if TM9 was invented just to solve this particular problem:
select to_char(1234.5678, 'TM9', 'NLS_NUMERIC_CHARACTERS=''.,''') from dual;
The result is '1234.5678' with no leading or trailing blanks, and a decimal POINT despite my environ containing NLS_LANG=GERMAN_GERMANY.WE8MSWIN1252, which would normally cause a decimal COMMA.
select to_number(replace(:X,'.',to_char(0,'fmd'))) from dual;
btw
select to_number(replace('1.2345e-6','.',to_char(0,'fmd'))) from dual;
and if you want more strict
select to_number(translate(:X,to_char(0,'fmd')||'.','.'||to_char(0,'fmd'))) from dual;
Is it realistic that the number of digits is unlimited?
If we assume it is, then isn't that a good reason to look into the requirements more carefully?
If we do have that fantastic situation where the initial string is super long, then the following does the trick:
select
  to_number(
    '11111111.2222'
  , 'FM' || lpad('9', 32, '9') || 'D' || lpad('9', 30, '9')
  , 'NLS_NUMERIC_CHARACTERS=''.,'''
  )
from
  dual
