Oracle - applying the to_number function to a varchar column

I want to convert a varchar column into a number using the to_number function; however, I have some trouble understanding the order in which Oracle evaluates my SQL.
The statement looks like this:
select * from table where column is not null and to_number(column, '999.9') > 20
When Oracle executes this, it throws an invalid number exception. I understand that Oracle rewrites the SQL statement during optimization using relational algebra, so the predicates may not be evaluated in the order they are written. Can someone tell me how I can safely use the to_number operator to achieve my goal?

can someone tell me how I can safely use the to_number operator to achieve my goal?
Unfortunately, you'll have to filter out the rows with non-numeric data somehow before you apply to_number. The conversion function itself is "not safe", if you will: it will fail the whole query on a single invalid input.
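One common workaround is to wrap the conversion in a CASE expression: CASE guarantees that its WHEN condition is evaluated before its THEN branch, whereas Oracle is free to reorder plain WHERE predicates. A minimal sketch, using hypothetical table and column names and a pattern matched to the '999.9' format model:
select *
from my_table
where my_col is not null
  and case
        -- only attempt the conversion on values that look numeric
        when regexp_like(trim(my_col), '^\d{1,3}(\.\d)?$')
        then to_number(my_col, '999.9')
      end > 20;
On Oracle 12.2 and later you can skip the filter entirely and write to_number(my_col default null on conversion error) > 20.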

Related

Why does Oracle change the index expression instead of raising an error? ORA-01722: invalid number from an index on a varchar2 column

Creating the mySomeTable table with two fields:
create table mySomeTable (
IDRQ VARCHAR2(32 CHAR),
PROCID VARCHAR2(64 CHAR)
);
Creating an index on the table on the PROCID field:
create index idx_PROCID on mySomeTable(trunc(PROCID));
Inserting records:
insert into mySomeTable values ('a', '1'); -- OK
insert into mySomeTable values ('b', 'c'); -- FAIL
As you can see, a mistake was made in the index creation script: it tries to build an index on the field using the trunc() function.
trunc() is a function for working with dates or numbers, but the field has a string type.
Nevertheless, the script runs successfully and creates the index without displaying any warnings or errors.
The index is actually created on the table using the expression TRUNC(TO_NUMBER(PROCID)).
When trying to insert or change a record in the table, if PROCID cannot be converted to a number, I get the error ORA-01722: invalid number, which is logical in itself.
However, since I was working with string columns and inserting string values into the table, an error about converting to a number was misleading, and I could not understand what was happening.
Question: Why does Oracle change the index expression instead of raising an error? And how can this be avoided in the future?
Oracle version 19.14
Naturally, there was only one solution - to create the correct index with the correct script:
create index idx_PROCID on mySomeTable(PROCID);
However, this does not explain this Oracle behaviour to me.
Oracle doesn't know if the index declaration is wrong or the column data type is wrong. Arguably (though some may well disagree!) Oracle shouldn't try to second-guess your intentions or enforce restrictions beyond those documented in the manual - that's what user-defined constraints are for. And, arguably, this index acts as a form of pseudo-constraint. That's a decision for the developer, not Oracle.
It's legal, if usually ill-advised, to store a number in a string column. If you actually intentionally chose to store numbers as strings - against best practice and possibly just to irritate future maintainers of your code - then the index behaviour is reasonable.
A counter-question is to ask where it should draw the line - if you expect it to error on your index expression, what about something like
create index idx_PROCID on mySomeTable(
case when regexp_like(PROCID, '^\d+\.?\d*$') then trunc(PROCID) end
);
or
create index idx_PROCID on mySomeTable(
trunc(to_number(PROCID default null on conversion error))
);
You might actually have chosen to store both numeric and non-numeric data in the same string column (again, I'm not advocating that), and an index like that might then be useful - and you wouldn't want Oracle to prevent you from creating it.
Something that obviously doesn't make sense to you, and that you feel shouldn't be allowed, is much harder for software to evaluate.
Interestingly the documentation says:
Oracle recommends that you specify explicit conversions, rather than rely on implicit or automatic conversions, for these reasons:
...
If implicit data type conversion occurs in an index expression, then Oracle Database might not use the index because it is defined for the pre-conversion data type. This can have a negative impact on performance.
which is presumably why it chooses here to apply an explicit conversion when it creates the index expression (which you can see by querying user_ind_expressions).
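For instance, after creating the index above, the stored expression can be inspected like this:
select index_name, column_expression
from user_ind_expressions
where table_name = 'MYSOMETABLE';
-- COLUMN_EXPRESSION shows: TRUNC(TO_NUMBER("PROCID"))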
But you'd get the same error if the index expression wasn't modified - there would still be an implicit conversion of 'c' to a number, and that would still throw ORA-01722. As would some strings that look like numbers if your NLS settings are incompatible.

How to create Interactive/Classic Report with dynamic SQL?

I'm running Apex 19.2 and I would like to create a classic or interactive report based on a dynamic query.
The query I'm using is not known at design time. It depends on a page item value.
So I have a function that generates the SQL, called as follows:
GetSQLQuery(:P1_MyItem);
This function may return something like
select Field1 from Table1
or
Select field1,field2 from Table1 inner join Table2 on ...
So it's not a SQL query that always has the same number of columns; it's completely variable.
I tried using a PL/SQL Function Body returning a SQL Query, but it seems like Apex needs to parse the query at design time.
Does anyone have an idea how to solve this, please?
Thanks.
Enable the Use Generic Column Names option, as Koen said.
Then set Generic Column Count to the upper bound of the number of columns the query might return.
If you need dynamic column headers too, go to the region attributes and set Type (under Heading) to the appropriate value. PL/SQL Function Body is the most flexible and powerful option, but it's also the most work. Just make sure you return the correct number of headings as per the query.
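For illustration, a minimal sketch of the two function bodies (GetSQLHeadings is a hypothetical companion to the question's GetSQLQuery, and exact attribute names vary slightly between Apex versions):
-- Region source: Type = PL/SQL Function Body returning SQL Query,
-- with Use Generic Column Names enabled
return GetSQLQuery(:P1_MyItem);
-- Heading Type = PL/SQL Function Body: return one heading per
-- generated column, colon-separated, e.g. 'Field1' or 'Field1:Field2'
return GetSQLHeadings(:P1_MyItem);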

Oracle CHAR Comparison Not Working in Function

Could someone please explain to me the difference between the below two Oracle queries? I know they look very similar but the first one returns results and the second one does not. My implementation of the function can be seen below as well.
--Returns results
SELECT *
FROM <TABLE_NAME>
WHERE ID = CAST(<UserID> AS CHAR(2000)); --ID is defined as CHAR(8) in the DB.
--Does not return results
SELECT *
FROM <TABLE_NAME>
WHERE ID = CAST_TO_CHAR(<UserID>); --ID is defined as CHAR(8) in the DB.
--Function definition
CREATE OR REPLACE FUNCTION CAST_TO_CHAR(varToPad IN VARCHAR2)
RETURN CHAR IS returnVal CHAR(2000);
BEGIN
SELECT CAST(varToPad AS CHAR(2000))
INTO returnVal
FROM DUAL;
RETURN returnVal;
END;
/
It almost seems to me that the type is not persisting when the value is retrieved from the database. From what I understand about CHAR comparisons in Oracle, it will take the smaller of the two fields and truncate the larger one so that the sizes match (which is why I am casting the second value to length 2000).
The reason I need to achieve something like this is that a vendor tool we are migrating from DB2 to Oracle defined all of the columns in the Oracle database as CHAR instead of VARCHAR2. They did this to make their legacy code more easily portable to a distributed environment. This is causing big issues in our web applications because comparisons are now being done against fixed-length CHAR fields.
I thought about using TRIM(), but these queries will be run a lot and I do not want them to do a full table scan each time. I also considered RPAD(<value>, <length>), but I don't really want to hard-code lengths in the application as these may change in the future.
Does anyone have any thoughts about this? Thank you in advance for your help!
I had a similar problem. It turned out to be caused by the rules of implicit data type conversion: Oracle Database automatically converts a value from one datatype to another when such a conversion makes sense.
If you change your select:
SELECT *
FROM <TABLE_NAME>
WHERE CAST(ID as CHAR(2000)) = CAST_TO_CHAR(<UserID>);
you will see that it works properly.
And here's another test script showing that the function works correctly:
SET SERVEROUTPUT ON --for DBMS_OUTPUT.PUT_LINE.
DECLARE
test_string_c CHAR(8);
test_string_v VARCHAR2(8);
BEGIN
--Assign the same value to each string.
test_string_c := 'string';
test_string_v := 'string';
--Test the strings for equality.
IF test_string_c = CAST_TO_CHAR(test_string_v) THEN
DBMS_OUTPUT.PUT_LINE('The names are the same');
ELSE
DBMS_OUTPUT.PUT_LINE('The names are NOT the same');
END IF;
END;
/
anonymous block completed
The names are the same
Here are some of the rules that govern the direction in which Oracle Database makes implicit datatype conversions:
During INSERT and UPDATE operations, Oracle converts the value to the datatype of the affected column.
During SELECT FROM operations, Oracle converts the data from the column to the type of the target variable.
When comparing a character value with a numeric value, Oracle converts the character data to a numeric value.
When comparing a character value with a DATE value, Oracle converts the character data to DATE.
When making assignments, Oracle converts the value on the right side of the equal sign (=) to the datatype of the target of the assignment on the left side.
When you use a SQL function or operator with an argument of a datatype other than the one it accepts, Oracle converts the argument to the accepted datatype.
You can explore the complete list of datatype comparison rules in the Oracle documentation.
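The character-to-number comparison rule, for example, is easy to demonstrate:
select * from dual where '10' = 10;  -- succeeds: '10' is converted to the number 10
select * from dual where 'x' = 10;   -- fails with ORA-01722: invalid number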

Datatype difference in procedure parameter and SQL query inside it

In my backend procedure I have a varchar2 parameter, and I am using it in a SQL query to search on a number column. Will this cause any kind of performance issue?
For example:
create or replace procedure proc (a in varchar2) is
begin
  -- deptno is a NUMBER column; a is a VARCHAR2 parameter
  for r in (select * from emp where deptno = a) loop
    null;  -- process each row
  end loop;
end;
Here deptno is a number column in the table and a is a varchar2 parameter.
It might do. The database will resolve the differences in datatype by casting DEPTNO to a VARCHAR2. This will prevent the optimizer from using any (normal) index you have on that column. Depending on the data volumes and distribution, an indexed read may not always be the most efficient access path, in which case the data conversion doesn't matter.
So it does depend. But what are your options if it does matter (you have a highly selective index on that column)?
One solution would be to apply an explicit data conversion in your query:
select * from table
where deptno = to_number(a);
This will cause the query to fail if A contains a value which won't convert to a number.
A better solution would be to change the datatype of A so that the calling program can only pass a numeric value. This throws the responsibility for duff data where it properly belongs.
The least attractive solution is to keep the procedure's signature and the query as is, and build a function-based index on the column:
create index emp_deptchar_fbi on emp(to_char(deptno));
Read the documentation to find out more about function-based indexes.
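With that index in place, a predicate written to match the indexed expression can use it (whether the optimizer actually chooses it still depends on statistics and selectivity):
-- matches the indexed expression, so emp_deptchar_fbi is a candidate
select * from emp where to_char(deptno) = a;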

Good way to deal with comma-separated values in Oracle

I am getting comma-separated values passed to a stored procedure in Oracle. I want to treat these values as a table so that I can use them in a query like:
select * from tabl_a where column_b in (<csv values passed in>)
What is the best way to do this in 11g?
Right now we are looping through them one by one and inserting them into a GTT (global temporary table), which I think is inefficient.
Any pointers?
This Ask Tom thread solves exactly the same problem.
Oracle does not come with a built-in tokenizer, but it is possible to roll your own using SQL types and PL/SQL. I have posted a sample solution in another SO thread.
That would enable a solution like this:
select * from tabl_a
where column_b in ( select *
from table (str_to_number_tokens (<csv values passed in>)))
/
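For reference, here is a minimal sketch of such a tokenizer (names are illustrative, the posted solution differs in its details, and this version simply raises ORA-01722 if a token is not numeric):
create or replace type number_tab as table of number;
/
create or replace function str_to_number_tokens (p_csv in varchar2)
  return number_tab
is
  l_result number_tab := number_tab();
  l_pos    pls_integer := 1;
  l_next   pls_integer;
  l_token  varchar2(4000);
begin
  if p_csv is null then
    return l_result;  -- nothing to tokenize
  end if;
  loop
    -- find the next comma, then cut out the token before it
    l_next  := instr(p_csv, ',', l_pos);
    l_token := case when l_next = 0 then substr(p_csv, l_pos)
                    else substr(p_csv, l_pos, l_next - l_pos)
               end;
    l_result.extend;
    l_result(l_result.count) := to_number(trim(l_token));
    exit when l_next = 0;
    l_pos := l_next + 1;
  end loop;
  return l_result;
end;
/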
In 11g you can use the "occurrence" parameter of REGEXP_SUBSTR to select the values directly in SQL:
select regexp_substr(<csv values passed in>,'[^,]+',1,level) val
from dual
connect by level < regexp_count(<csv values passed in>,',')+2;
But since regexp_substr is somewhat expensive, I am not sure it is the fastest option.
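Tying that back to the original goal, the split can sit in a subquery (shown here with a hypothetical :csv bind in place of the passed-in values):
select *
from tabl_a
where column_b in (
        select regexp_substr(:csv, '[^,]+', 1, level)
        from dual
        connect by level < regexp_count(:csv, ',') + 2
      );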
