Add RAW to RAW (or BINARY to BINARY) in Oracle

Three different functions return the same data type according to dump()
select
dump(utl_raw.cast_to_raw('j')),
dump(utl_raw.cast_from_number(1)),
dump(utl_raw.cast_from_binary_integer(1))
from dual;
DUMP(UTL_RAW.CAST_TO_RAW('J'))
-----------------------------------------------------
DUMP(UTL_RAW.CAST_FROM_NUMBER(1))
-----------------------------------------------------
DUMP(UTL_RAW.CAST_FROM_BINARY_INTEGER(1))
-----------------------------------------------------
Typ=23 Len=1: 106
Typ=23 Len=2: 193,2
Typ=23 Len=4: 0,0,0,1
...however, none of them can be added together.
select
utl_raw.cast_to_raw('j') +
utl_raw.cast_from_number(1),
utl_raw.cast_from_number(1) +
utl_raw.cast_from_binary_integer(1),
utl_raw.cast_from_binary_integer(1) +
utl_raw.cast_to_raw('j')
from dual;
ORA-00932: inconsistent datatypes: expected NUMBER got BINARY
How can I perform arithmetic on RAW, in particular - add RAW/BINARY to RAW/BINARY?
Edit: This question arose from a need to iterate through the alphabet. I noticed that it's not so obvious how to add 1 to an ASCII code and then get back the next letter, e.g. 'a' + 1 = 'b'.

You don't want to add together binary values. If you just want to iterate through the alphabet, use the ASCII and CHR functions. For example
select ascii('a') ascii_code,
ascii('a') + 1 next_ascii_code,
chr( ascii('a') + 1 ) next_char
from dual
will show you the ASCII code of a lower case a (which will be 97, assuming your database character set is a superset of US7ASCII), the ASCII code of the next character (98), and the next character in the character set (a lower case b). If you want to iterate through the alphabet in a loop:
BEGIN
  FOR i IN 1 .. 26
  LOOP
    dbms_output.put_line( chr( ascii('a') + i - 1 ) );
  END LOOP;
END;
/
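That said, if you really do need to stay in RAW, the arithmetic still has to happen in NUMBER/BINARY_INTEGER space. A minimal sketch (not part of the original answer), converting the RAW to an integer, adding, and converting back:
-- 'j' is ASCII 106, so this returns the 4-byte RAW representation of 107.
select
  utl_raw.cast_from_binary_integer(
    utl_raw.cast_to_binary_integer( utl_raw.cast_to_raw('j') ) + 1
  ) as raw_plus_one
from dual;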

Related

Execute immediate use in select * from in oracle

I am trying to get the maximum length of column data in Oracle by executing a dynamic query as part of a SELECT statement, except it seems we can't use EXECUTE IMMEDIATE in a SELECT clause. Could I get some help with the syntax, or with understanding a better way to do this?
SELECT
owner OWNER,
table_name,
column_name,
'select max(length('||column_name||')) from '||table_name||';' max_data_length
FROM
dba_tab_columns
WHERE
( data_type = 'NUMBER'
OR data_type = 'INTEGER' )
The 4th column in the above query spits out a SQL string rather than computing the value and returning it.
Here is some food for thought. Note that I am only looking for numeric columns that don't already have precision specified in the catalog. (If you prefer, you can audit all numeric columns and compare the declared precision against the actual precision used by your data.)
I am also looking only in specific schemas. Instead, you may give a list of schemas to be ignored; I hope you are not seriously considering making any changes to SYS, for example, even if it does (and it does!) have numeric columns without specified precision.
The catalog doesn't store INTEGER as the data type; instead, it stores it as NUMBER(38). So I am not searching for data type INTEGER in DBA_TAB_COLUMNS. But this raises an interesting question - perhaps you should search for all columns where DATA_PRECISION is null (as in my code below), but also for DATA_PRECISION = 38.
In the code below I use DBMS_OUTPUT to display the findings directly to the screen. You will probably want to do something smarter with this; either create a table function, or create a table and insert the findings in it, or perhaps even issue DDL already (note that those also require dynamic SQL).
This still leaves you to deal with scale. Perhaps you can get around that with a specification like NUMBER(prec, *) - not sure if that will meet your needs. But the idea is similar; you will just need to write code carefully, like I did for precision (accounting for the decimal point and the minus sign, for example).
Long story short, here is what I ran on my system, and the output it produced.
declare
prec number;
begin
for rec in (
select owner, table_name, column_name
from all_tab_columns
where owner in ('SCOTT', 'HR')
and data_type = 'NUMBER'
and data_precision is null
)
loop
execute immediate
'select max(length(translate(to_char(' || rec.column_name ||
'), ''0-.'', ''0'')))
from ' || rec.owner || '.' || rec.table_name
into prec;
dbms_output.put_line('owner: ' || lpad(rec.owner, 12, ' ') ||
' table name: ' || lpad(rec.table_name, 12, ' ') ||
' column_name: ' || lpad(rec.column_name, 12, ' ') ||
' precision: ' || prec);
end loop;
end;
/
owner: HR table name: REGIONS column_name: REGION_ID precision: 1
owner: HR table name: COUNTRIES column_name: REGION_ID precision: 1
owner: SCOTT table name: SALGRADE column_name: GRADE precision: 1
owner: SCOTT table name: SALGRADE column_name: LOSAL precision: 4
owner: SCOTT table name: SALGRADE column_name: HISAL precision: 4
PL/SQL procedure successfully completed.
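Regarding the scale question mentioned above, here is a hedged sketch of the analogous dynamic check (SCOTT.EMP.SAL is only an illustrative column; the decimal-separator caveats discussed later apply here too):
declare
  max_scale number;
begin
  -- Count digits after the decimal point for non-integer values only;
  -- NVL covers the case where the column holds integers exclusively.
  execute immediate
    'select nvl(max(length(to_char(abs(SAL) - trunc(abs(SAL)))) - 1), 0)
       from SCOTT.EMP
      where SAL <> trunc(SAL)'
  into max_scale;
  dbms_output.put_line('estimated max scale: ' || max_scale);
end;
/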
EDIT
Here are several additional points (mostly, corrections) based on extended conversations with Sayan Malakshinov in comments to my answer and to his.
Most importantly, even if we can figure out the max precision of numeric columns, that doesn't seem directly related to the ultimate goal of this whole thing, which is to determine the correct Postgres data types for the existing Oracle columns. For example, in Postgres, unlike Oracle, it is important to distinguish between integer and non-integer. Unless scale is explicitly 0 in Oracle, we don't know that the column is "integers only"; we could find that out through a similar dynamic SQL approach, but we would be checking for non-integer values, not precision.
Various corrections: My query is careless with regard to quoted identifiers (schema name, table name, column name). See the proper use of double-quotes in the dynamic query in Sayan's answer; my dynamic query should be modified to use double-quotes in the same way his does.
In my approach I pass numbers through TO_CHAR and then remove the minus sign and the decimal period. Of course, one's system may use a comma, or another symbol, as the decimal separator; the safer approach is to remove everything that is not a digit. That can be done with
translate(col_name, '0123456789' || col_name, '0123456789')
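A quick illustration of that idiom (the input value is made up):
-- Every character of the input that is not a digit maps to nothing and is removed.
select translate('-1.234,56', '0123456789' || '-1.234,56', '0123456789') as digits_only
from dual;
-- result: 123456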
The query also doesn't handle very large or very small numbers, which can be stored in the Oracle database, but can only be represented in scientific notation when passed through TO_CHAR().
In any case, since "max precision" doesn't seem directly related to the ultimate goal of mapping to correct data types in Postgres, I am not changing the code - leaving it in the original form.
Thanks to Sayan for pointing out all these issues.
One more thing - *_TAB_COLUMNS contains information about view columns too; very likely those should be ignored for the task at hand. Very easy to do, as long as we realize it needs to be done.
After reading that AWS article carefully, and since both previous answers (including mine) use rough estimates (LENGTH + TO_CHAR without a format model, and VSIZE, both operate on the decimal length, not on bytes), I decided to write another full answer.
Look at this simple example:
with
function f_bin(x number) return varchar2 as
bi binary_integer;
e_overflow exception;
pragma exception_init(e_overflow, -1426);
begin
bi:=x;
return case when bi=x then 'ok' else 'error' end;
exception when e_overflow then return 'error';
end;
function f_check(x number, f varchar2) return varchar2 as
begin
return case when to_number(to_char(abs(x),f),f) = abs(x) then 'ok' else 'error' end;
exception when VALUE_ERROR then return 'error';
end;
a(a1) as (
select * from table(sys.odcinumberlist(
1,
0.1,
-0.1,
-7,
power(2,15)-1,
power(2,16)-1,
power(2,31)-1,
power(2,32)-1
))
)
select
a1,
f_check(a1,'fm0XXX') byte2,
f_check(a1,'fm0XXXXXXX') byte4,
f_bin(a1) ff_signed_binary_int,
to_char(abs(a1),'fm0XXXXXXXXXXXXXXX') f_byte8,
f_check(a1,'fm0XXXXXXXXXXXXXXX') byte8,
vsize(a1) vs,
dump(a1) dmp
from a;
Result:
A1 BYTE2 BYTE4 FF_SIGNED_ F_BYTE8 BYTE8 VS DMP
---------- ---------- ---------- ---------- ---------------- ---------- ---------- ----------------------------------------
1 ok ok ok 0000000000000001 ok 2 Typ=2 Len=2: 193,2
.1 error error error 0000000000000000 error 2 Typ=2 Len=2: 192,11
-.1 error error error 0000000000000000 error 3 Typ=2 Len=3: 63,91,102
-7 ok ok ok 0000000000000007 ok 3 Typ=2 Len=3: 62,94,102
32767 ok ok ok 0000000000007FFF ok 4 Typ=2 Len=4: 195,4,28,68
65535 ok ok ok 000000000000FFFF ok 4 Typ=2 Len=4: 195,7,56,36
2147483647 error ok ok 000000007FFFFFFF ok 6 Typ=2 Len=6: 197,22,48,49,37,48
4294967295 error ok error 00000000FFFFFFFF ok 6 Typ=2 Len=6: 197,43,95,97,73,96
Here I used PL/SQL functions for readability and to make it clearer.
Function f_bin casts the input number parameter to a PL/SQL binary_integer (signed int4) and compares the result with the input parameter, i.e. it checks whether accuracy is lost. The declared exception shows that this can raise exception ORA-01426 "numeric overflow".
Function f_check does a double conversion, to_number(to_char(...)), of the input value and checks whether it is still equal to the input value. Here I use a hexadecimal format mask (XX = 1 byte), so it checks whether the input number fits in that format mask. The hexadecimal format model works only with non-negative numbers, so we need to use abs() here.
F_BYTE8 shows the value formatted with the same mask the BYTE8 check uses, so you can easily see the loss of accuracy there.
All of the above was just for readability, but we can do the same using pure SQL:
with
a(a1) as (
select * from table(sys.odcinumberlist(
1,
0.1,
-0.1,
-7,
power(2,15)-1,
power(2,16)-1,
power(2,31)-1,
power(2,32)-1
))
)
select
a1,
case when abs(a1) = to_number(to_char(abs(a1),'fmXXXXXXXXXXXXXXX') default null on conversion error,'fmXXXXXXXXXXXXXXX')
then ceil(length(to_char(abs(a1),'fmXXXXXXXXXXXXXXX'))/2)
else -1
end xx,
vsize(a1) vs,
dump(a1) dmp
from a;
Result:
A1 XX VS DMP
---------- ---------- ---------- ----------------------------------------
1 1 2 Typ=2 Len=2: 193,2
.1 -1 2 Typ=2 Len=2: 192,11
-.1 -1 3 Typ=2 Len=3: 63,91,102
-7 1 3 Typ=2 Len=3: 62,94,102
32767 2 4 Typ=2 Len=4: 195,4,28,68
65535 2 4 Typ=2 Len=4: 195,7,56,36
2147483647 4 6 Typ=2 Len=6: 197,22,48,49,37,48
4294967295 4 6 Typ=2 Len=6: 197,43,95,97,73,96
As you can see, here I return -1 in case of conversion errors to byte8, and the number of non-zero bytes otherwise.
Obviously it can be simplified even more: you can just check the range limits and that x = trunc(x) (or mod(x,1) = 0).
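For completeness, a minimal sketch of that simplified check (the ranges mirror the test values above; this is an illustration, not part of the original query):
with a(a1) as (
  select * from table(sys.odcinumberlist(1, 0.1, -7, 32767, 65535, 2147483647, 4294967295))
)
select a1,
       case
         when a1 <> trunc(a1)                             then 'not an integer'
         when a1 between -power(2,15) and power(2,15) - 1 then 'fits signed 2-byte int'
         when a1 between -power(2,31) and power(2,31) - 1 then 'fits signed 4-byte int'
         else 'needs 8 bytes or NUMBER'
       end as candidate_type
from a;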
Looks like that is what you need:
VSIZE returns the number of bytes in the internal representation of expr. If expr is null, then this function returns null.
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/VSIZE.html
In Oracle INTEGER is just a number(*,0): http://orasql.org/2012/11/10/differences-between-integerint-in-sql-and-plsql/
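A quick way to see that for yourself (T_INT_DEMO is just a throwaway table name):
create table t_int_demo (i integer);
-- The INTEGER column is recorded as NUMBER with null precision and scale 0,
-- i.e. NUMBER(*,0).
select column_name, data_type, data_precision, data_scale
from user_tab_columns
where table_name = 'T_INT_DEMO';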
select
owner,table_name,column_name,
data_type,data_length,data_precision,data_scale,avg_col_len
,x.vs
from (select/*+ no_merge */ c.*
from dba_tab_columns c
where data_type='NUMBER'
and owner not in (select username from dba_users where ORACLE_MAINTAINED='Y')
) c
,xmltable(
'/ROWSET/ROW/VSIZE'
passing dbms_xmlgen.getxmltype('select nvl(max(vsize("'||c.column_name||'")),0) as VSIZE from "'||c.owner||'"."'||c.table_name||'"')
columns vs int path '.'
) x
;
Update: if you read about Oracle's internal number format (exponent + BCD mantissa) and look at the result of the dump(x) function, you can see that Oracle stores numbers as decimal values (4 bits per decimal digit, 2 digits per byte), so for such small ranges you can just take the maximum BCD mantissa plus 1 byte for the exponent as a rough estimate.
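A hedged sketch of that rough estimate, reusing values whose DUMP output appears above:
-- VSIZE = 1 exponent byte + BCD mantissa bytes (+ 1 terminator byte for
-- negative numbers), so the mantissa length can be backed out from it.
select a1,
       vsize(a1) as total_bytes,
       vsize(a1) - 1 - case when a1 < 0 then 1 else 0 end as mantissa_bytes
from (select 65535 a1 from dual union all
      select -7 from dual union all
      select 4294967295 from dual);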

Need specific number format in Oracle S9(15)V9(2)

I have a requirement to produce amount fields in zoned decimal format with this specific syntax below.
I don't know if I need to create a function to handle this or if I can tweak the Oracle number format model. I'm thinking it might require some conditional formatting within a function, due to the different number of digits required for positive and negative values. I will be performing this formatting on a couple of dozen data elements in the procedure, so that might be another reason to use a function. Thoughts?
Requirement:
Amount should be represented by 17 characters (positive number) or 16 characters plus a “}” appended to the end (negative number).
Ex. 0.00 should show as 00000000000000000.
Ex. -935,560.00 should show as 0000000093556000}
Using Oracle 12c.
If I understood you correctly, the input is already formatted and its datatype is VARCHAR2. If that's so, then this might do the job:
SQL> with test (col) as
2 (select '0.00' from dual union all
3 select '25.34' from dual union all
4 select '-935,560.00' from dual
5 )
6 select col,
7 lpad(translate(col, 'x,.-', 'x'),
8 case when substr(col, 1, 1) = '-' then 16
9 else 17
10 end, '0') ||
11 case when substr(col, 1, 1) = '-' then '}'
12 else null
13 end result
14 from test;
COL RESULT
----------- --------------------
0.00 00000000000000000
25.34 00000000000002534
-935,560.00 0000000093556000}
SQL>
What does it do?
lines #1 - 5 - sample data
line #7 - translate removes minus sign, commas and dots
lines #7 - 10 - lpad pads the number (without characters from the previous step) with zeros up to the length of 16 (for negative values) or 17 (for positive values) characters
lines #11 - 13 - if it is a negative value, concatenate } to the end of the result string
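If the source values are actually NUMBER rather than pre-formatted strings, here is a hedged sketch of the function route mentioned in the question (the name fmt_zoned and the implied two decimal places are my assumptions):
create or replace function fmt_zoned(p_amount number) return varchar2 is
  -- Shift by 100 to keep two implied decimals, then keep digits only.
  l_digits varchar2(20) := to_char(trunc(abs(p_amount) * 100), 'FM99999999999999990');
begin
  if p_amount < 0 then
    return lpad(l_digits, 16, '0') || '}';   -- 16 digits plus the trailing '}'
  else
    return lpad(l_digits, 17, '0');          -- 17 digits, no sign marker
  end if;
end fmt_zoned;
/
-- fmt_zoned(0)       => 00000000000000000
-- fmt_zoned(-935560) => 0000000093556000}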

How to calculate the arithmetic mean of time oracle?

I have a table in Oracle with a time value in a column, but the column type is NVARCHAR2 and the time format is 0:21:31. How can I calculate the average value, i.e. (0:22:00 + 0:24:00) / 2 = 0:23:00?
The obvious question is, why are "times" stored as character strings? This makes everything difficult. And, especially, why the N in NVARCHAR2? The strings are just digits and colons; why do you need "national character set" strings?
Be that as it may... Here is one way - which will fail in many different ways on bad inputs - where the output is again in the NVARCHAR2 data type. (Notice the NCHAR, with an N, in TO_NCHAR(), which I have never seen anyone use.) The inputs are given as columns A and B in a made-up table T in the WITH clause (which is there just for testing; it is not part of the solution - use your actual table and column names, and remove the WITH clause at the top).
with t as (select '0:24:00' a, '0:22:00' b from dual)
select to_nchar(date '2000-01-01'
+ ( (to_date(a, 'hh24:mi:ss') - date '2000-01-01')
+ (to_date(b, 'hh24:mi:ss') - date '2000-01-01')
) / 2, 'hh24:mi:ss') as avg_ab
from t
;
AVG_AB
----------------
00:23:00
If instead all your times are in a single column, call it A, you could use the standard AVG, but you still need to play with TO_DATE() and TO_NCHAR()...
with t as (select '0:24:00' a from dual union all select '0:20:30' from dual)
select to_nchar(date '2000-01-01'
+ avg(to_date(a, 'hh24:mi:ss') - date '2000-01-01'),
'hh24:mi:ss') as avg_a
from t
;
AVG_A
----------------
00:22:15
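An alternative sketch (not from the answers above): keep the average as a day-to-second interval instead of converting back to a string.
with t as (select '0:24:00' a from dual union all select '0:20:30' from dual)
select numtodsinterval(
         avg(to_date(a, 'hh24:mi:ss') - trunc(to_date(a, 'hh24:mi:ss'))),  -- fraction of a day
         'day'
       ) as avg_interval
from t;
-- returns an INTERVAL DAY TO SECOND of roughly +00 00:22:15.000000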

Indexing very long number column

I have a table with a few columns, including 2 VARCHAR2(200) columns. In these columns we basically store serial numbers, which can be numeric or alphanumeric. For alphanumeric values, both serials are always the same in those 2 columns. For numeric serials, however, they form a range (e.g. first column value = 511368000004001226 and second column value = 511368000004001425, with 200 different serials in between (Qty)). The maximum length of a serial is 20 digits. I have indexed both columns.
Now I want to search for a serial inside the above range (let's say 511368000004001227). I use the following query.
SELECT *
FROM Table_Name d
WHERE d.FROM_SN <= '511368000004001227'
AND d.TO_SN >= '511368000004001227'
Is it a valid query? Can I use the <= and >= operators for numbers stored in a VARCHAR2 column?
Yes, you can use the >= and <= operators on VARCHAR2 columns, but they will compare the values as strings.
In that case, '4' is considered greater than '34' (i.e. '4' > '34'), even though the number 4 is less than 34.
It is not a good practice to store a number in Varchar2. You will lose the functionality of Numbers if you store them in varchar2.
You can check the above concept using following:
select * from dual where '4' > '34'; -- gives result 'X'
select * from dual where 4 > 34; -- Gives no result
You can try to convert the varchar2 column to number using to_number if possible in your case.
Cheers!!
Your query is "valid" in the sense that it works and will deliver a result. From a numeric standpoint, however, it will not work correctly, because the range operators on VARCHAR columns compare values the same way alphanumeric values are sorted.
e.g.
d.FROM_SN >= '51000'
AND d.TO_SN <= '52000'
This would match values you would expect, like 51001 and 51700, but it would also return unexpected values like 52 or 5100000000000000.
If you want numeric selection, you need to convert the values - which of course only works if every value in these columns is numeric:
TO_NUMBER(d.FROM_SN) >= 51000
AND TO_NUMBER(d.TO_SN) <= 52000
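If the TO_NUMBER route is viable (i.e. every stored value really is numeric), here is a hedged sketch of keeping the question's original predicate indexable with function-based indexes (the index names are made up):
-- These will fail at creation time if any row holds a non-numeric serial.
create index ix_from_sn_num on table_name (to_number(from_sn));
create index ix_to_sn_num   on table_name (to_number(to_sn));

select *
from table_name d
where to_number(d.from_sn) <= 511368000004001227
  and to_number(d.to_sn)   >= 511368000004001227;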
You may use alphanumerical comparison provided that
1) your range bounds are of the same length and
2) all the keys in the range are of the same length
Example data
SERNO
----------------------------------------
101
1011
1012
1013
1014
102
103
104
This doesn't work
select * from tab
where serno >= '101' and serno <= '102';
SERNO
----------------------------------------
101
102
1011
1012
1013
1014
But constraining the length of the result provides the right answer
select * from tab
where serno >= '101' and serno <= '102'
and length(serno) = 3;
SERNO
----------------------------------------
101
102
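Another hedged option, if only the numeric serials need range searches: normalise both sides to a fixed width (20 digits, per the question) so that string order matches numeric order. This only behaves correctly for purely numeric serials, and the LPAD on the column side bypasses a plain index unless you create a matching function-based index.
select *
from table_name d
where lpad(d.from_sn, 20, '0') <= lpad('511368000004001227', 20, '0')
  and lpad(d.to_sn,   20, '0') >= lpad('511368000004001227', 20, '0');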

Oracle cursor removes leading zero

I have a cursor which selects data from a column of NUMBER type containing floating point numbers. Numbers like 4,3433 are returned properly, while numbers smaller than 1 have the leading zero removed.
For example, the number 0,4513 is returned as ,4513.
When I execute select used in the cursor on the database, numbers are formatted properly, with leading zeros.
This is how I loop over the records returned by the cursor:
FOR c_data IN cursor_name(p_date) LOOP
...
END LOOP;
Any ideas why it works that way?
Thank you in advance.
You're confusing number format and number value.
The two strings 0.123 and .123, when read as numbers, are mathematically equal. They represent the same number. In Oracle the true number representation is never displayed directly; we always convert a number to a character string to display it, either implicitly or explicitly with a function.
You assume that a number between 0 and 1 should be displayed with a leading 0, but this is not true by default; it depends on how you ask for the number to be displayed. If you don't want unexpected output, you have to be explicit when displaying numbers/dates, for example:
to_char(your_number, '9990.99');
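In the cursor-loop context of the question, that explicit TO_CHAR would look something like this (the cursor query and column name are made up):
begin
  for c_data in (select 0.4513 as val from dual) loop
    -- Explicit format model: FM drops padding, the 0 keeps the leading zero.
    dbms_output.put_line( to_char(c_data.val, 'FM9990.9999') );
  end loop;
end;
/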
The behaviour you see is the default number formatting that Oracle provides.
If you want something custom, you should use the TO_CHAR function (either in the SQL query or in the PL/SQL code inside the loop).
Here is how it works:
SQL>
SQL> WITH aa AS (
2 select 1.3232 NUM from dual UNION ALL
3 select 1.3232 NUM from dual UNION ALL
4 select 332.323 NUM from dual UNION ALL
5 select 0.3232 NUM from dual
6 )
7 select NUM, to_char(NUM, 'FM999990D9999999') FORMATTED from aa
8 /
NUM FORMATTED
---------- ---------------
1.3232 1.3232
1.3232 1.3232
332.323 332.323
.3232 0.3232
SQL>
In this example, 'FM' suppresses extra blanks, '0' indicates a digit position that keeps leading/trailing zeros, and '9' indicates a digit position that suppresses leading/trailing zeros.
You can find many examples here:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements004.htm#i34570
