Invalid DataType for one value - oracle

I have an odd scenario. On an Oracle 11.2 DB there is one value that, when selected into a table type, causes an invalid datatype error when the table type is used. I have validated that when the row is excluded everything else works fine.
Pseudo code:
type my_nums_t is table of number;  -- schema-level type, so TABLE() can use it in SQL
my_nums my_nums_t;
select num bulk collect into my_nums from tableA;
select t.my_col from tableB t where t.my_col IN (select column_value from table(my_nums));
I have checked that this one key from tableA is numeric using:
with t as (select to_char(num) as txt from tableA where num = 33)
select txt,
       case when regexp_like(txt, '^-?[[:digit:],.]*$') then 'Numeric'
            else 'Non-Numeric'
       end as type
from t;
Taken from How to check if a field is numeric. Is there something else I can look at to find out why this is happening?
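Would DUMP show anything useful here? As I understand it (and I may be misreading the output), a cleanly stored NUMBER reports internal type code 2:
select num, dump(num) from tableA where num = 33;
-- expected for 33: Typ=2 Len=2: 193,34
-- a different Typ code, or a strange length, would suggest the stored
-- bytes are not a valid internal NUMBER representation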
To be clear, using the following, all is well in my procedure.
select num bulk collect into my_nums from tableA where num != 33;
Thanks in advance.

Related

Oracle access varray elements in SQL

I'm playing around with array support in Oracle and hit a roadblock regarding array access within a SQL query. I'm using the following schema:
create type smallintarray as varray(10) of number(3,0);

create table tbl (
  id number(19,0) not null,
  the_array smallintarray,
  primary key (id)
);
What I would like to do is get the id and the first element (i.e. at index 1) of the array. In PostgreSQL I could write select id, the_array[1] from tbl t, but I don't see how I could do that with Oracle. I read that array access by index is only possible in PL/SQL, which would be fine if I could return a "decorated cursor" to achieve the same result through JDBC, but I don't know if that's possible.
DECLARE
  c1 SYS_REFCURSOR;
  varr smallintarray;
BEGIN
  OPEN c1 FOR SELECT t.id, t.THE_ARRAY FROM tbl t;
  -- SELECT t.THE_ARRAY INTO varr FROM tbl t;
  -- return a "decorated cursor" with varr(1) at select item position 1
  dbms_sql.return_result(c1);
END;
You can do this in plain SQL; it's not pretty, but it does work. You would prefer that Oracle had syntax to hide this from the programmer (and perhaps it does, at least in the most recent versions; I am still stuck at 12.2).
select t.id, q.array_element
from tbl t
cross apply (
  select column_value as array_element,
         rownum as ord
  from table(t.the_array)
) q
where q.ord = 1;
EDIT: If the order in which the table operator generates the elements is a concern, you could do something like this (in Oracle 12.1 and higher; otherwise the function can't be part of the query itself, but it can be defined on its own):
with
  function select_element(arr smallintarray, i integer)
  return number
  as
  begin
    return arr(i);
  end;
select id, select_element(the_array, 1) as the_array_1
from tbl
/
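For releases before 12.1, where the function can't live in the WITH clause, a standalone version of the same idea (a sketch) would be:
create or replace function select_element(arr smallintarray, i integer)
  return number
as
begin
  return arr(i);
end;
/

select id, select_element(the_array, 1) as the_array_1
from tbl;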
First of all, please don't do that in production. Use ordinary tables instead of storing arrays within a table.
The answer to your question is to use the column as a table source:
SELECT t.id, ta.*
from tbl t,
     table(t.THE_ARRAY) ta
order by column_value
-- offset 1 row -- in case you ever need to skip a row
fetch first 1 row only;
UPD: as for ordering the array, I can only say that playing with "asc/desc" parameters gave me the results I expected: the output was ordered ascending or descending.
UPD2: found a cool link to a description of performance issues that might happen.

Oracle CLOB column and LAG

I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table:
create table test (
  id number primary key,
  not_clob varchar2(255),
  this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
  x CLOB := 'C';
BEGIN
  FOR i IN 1..32767 LOOP
    x := x || 'C';
  END LOOP;
  INSERT INTO test (id, not_clob, this_is_clob) VALUES (3, 'test3', x);
END;
/
commit;
Now let's do a select using the non-CLOB columns:
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with the CLOB column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me what the solution to this problem is? I couldn't find anything on it.
The documentation says the argument for any analytic function can be any datatype, but it seems an unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB, but 4k should be good enough in many cases.
I'm still wondering what the proper way to overcome the problem is.
Is upgrading to 12c an option? The problem has nothing to do with CLOB as such; it's the fact that Oracle has a hard limit of 4000 characters for strings in SQL. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!). Find out more.
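For reference, turning extended data types on is a one-way DBA operation. Roughly, per the documented procedure for a non-CDB (a sketch; check the docs for your exact release before running anything):
-- as SYSDBA
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER SYSTEM SET max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql   -- converts existing objects; cannot be undone
SHUTDOWN IMMEDIATE
STARTUP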
Some features may not work properly in SQL when using CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is also one of them, but I couldn't find that anywhere in the docs.
If the values in your CLOB column are always less than 4000 characters, you may use TO_CHAR:
select id, lag(TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent self join (may not be as efficient as LAG):
SELECT a.id,
       b.this_is_clob AS lagging
FROM test a
LEFT JOIN test b
       ON b.id = (SELECT MAX(id) FROM test WHERE id < a.id);
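If some rows might exceed 4000 characters, a possible variation (my own sketch) converts only the values that fit and returns NULL for the rest, relying on CASE's short-circuit evaluation:
select id,
       lag(case
             when dbms_lob.getlength(this_is_clob) <= 4000
             then to_char(this_is_clob)
           end) over (order by id) as prev_clob_or_null
from test;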
I know this is an old question, but I think I found an answer which eliminates the need to restrict the CLOB length and wanted to share it. Utilizing CTE and recursive subqueries, we can replicate the lag functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
  SELECT LEVEL ORDER_BY_COL,
         TO_CLOB(LEVEL) AS CLOB_COL
  FROM DUAL
  CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
       tt.clob_col,
       LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
  SELECT LEVEL ORDER_BY_COL,
         TO_CLOB(LEVEL) AS CLOB_COL
  FROM DUAL
  CONNECT BY LEVEL <= 10
),
initial_pull AS
(
  SELECT tt.order_by_col,
         LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
         tt.clob_col
  FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
  SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
  FROM initial_pull ip
  WHERE ip.prev_row IS NULL
  UNION ALL
  SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
  FROM initial_pull ip
  INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works.
I create the TEST_TABLE; this really is only for the example, as you should already have this table somewhere in your schema.
I create a CTE of the data I want to pull, plus a LAG function on the primary key (or a unique column) in the table, partitioned and ordered in the same way I would have in my original query.
Then I create a recursive subquery, using the initial row as the root and descending row by row, joining on the lagged column, and returning both the CLOB column from the current row and the CLOB column from its parent row.

Alternative for conditional subquery in Oracle 11g

I'm getting more and more experienced with Oracle PL/SQL, but this problem seems to be persistent: I have a procedure that merges external data into a table in the database and that looks something like this:
PROCEDURE updateTable (ts DATE, val NUMBER, id NUMBER) IS
BEGIN
  IF id NOT IN (15, 16, 23)
  THEN
    MERGE INTO myTable dest
    USING (SELECT ts, val, id FROM dual) src
    ON (src.id = dest.id AND src.ts = dest.ts)
    WHEN MATCHED THEN UPDATE SET dest.val = src.val
    WHEN NOT MATCHED THEN INSERT (ts, val, id) VALUES (src.ts, src.val, src.id);
  END IF;
END;
This works just fine so far. Now the problem is that the list of excluded IDs is hardcoded, and it would be much more dynamic to have those in another table, i.e. in the code above replace the line
IF id NOT IN (15, 16, 23)
with something like
IF id NOT IN (SELECT id FROM excluTable)
which returns the notorious error: PLS-00405: subquery not allowed in this context
If it was only one id, I could simply create a variable and select the id into it. Unfortunately it's quite a long list. I've tried to bulk collect them into an array but then I don't find a way to put that into the conditional clause either. I'm sure there is an elegant solution for this.
Thanks for your help!
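For what it's worth, the bulk collect attempt mentioned in the question can be made to work in PL/SQL: a nested table can be tested with MEMBER OF. A minimal sketch, assuming excluTable has a NUMBER column id:
DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  l_excl t_ids;
  p_id   NUMBER := 42;  -- stands in for the procedure's id parameter
BEGIN
  SELECT id BULK COLLECT INTO l_excl FROM excluTable;
  IF p_id NOT MEMBER OF l_excl THEN
    NULL;  -- the MERGE would go here
  END IF;
END;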
There may be many IDs in your exclusion table, but you are only passing one into the procedure. You can see if that single value exists in the table with a count into a local variable, and then check whether the count was zero or non-zero; something like:
PROCEDURE updateTable (ts DATE, val NUMBER, id NUMBER) IS
  l_excl_id PLS_INTEGER;
BEGIN
  SELECT count(*)
  INTO l_excl_id
  FROM excluTable
  WHERE excluTable.id = updateTable.id;

  IF l_excl_id = 0
  THEN
    MERGE INTO myTable dest
    USING (SELECT ts, val, id FROM dual) src
    ON (src.id = dest.id AND src.ts = dest.ts)
    WHEN MATCHED THEN UPDATE SET dest.val = src.val
    WHEN NOT MATCHED THEN INSERT (ts, val, id) VALUES (src.ts, src.val, src.id);
  END IF;
END;
Incidentally, it can get confusing if your procedure argument names are the same as table column names or other identifiers. For instance, as id is both the procedure argument name and the column name in the table, I've had to prefix them both:
WHERE excluTable.id = updateTable.id;
one with the table name (or alias if you add one), the other with the procedure name. If you just did
WHERE excluTable.id = id
then the scoping rules would mean it matched every ID in the table with itself, not the argument, so you would be counting all rows - and it might not be immediately obvious why it wasn't behaving as you expected. If the arguments were named as, say, p_ts and p_id then you wouldn't have to account for that ambiguity. That's also why I've prefixed my local flag variable with l_.
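A sketch of the same procedure with prefixed names, illustrating that convention (p_ for parameters, l_ for locals):
PROCEDURE updateTable (p_ts DATE, p_val NUMBER, p_id NUMBER) IS
  l_excl_id PLS_INTEGER;
BEGIN
  SELECT count(*)
  INTO l_excl_id
  FROM excluTable
  WHERE excluTable.id = p_id;  -- no qualifier needed; p_id can't collide with a column

  IF l_excl_id = 0
  THEN
    MERGE INTO myTable dest
    USING (SELECT p_ts AS ts, p_val AS val, p_id AS id FROM dual) src
    ON (src.id = dest.id AND src.ts = dest.ts)
    WHEN MATCHED THEN UPDATE SET dest.val = src.val
    WHEN NOT MATCHED THEN INSERT (ts, val, id) VALUES (src.ts, src.val, src.id);
  END IF;
END;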

Declaring and using variables in PL-SQL

I am new to PL/SQL. I do not understand why I am getting the error "PLS-00428: an INTO clause is expected in this SELECT statement".
What I'm trying to accomplish is to create a variable c_limit and load its value. I then want to use that variable later to filter data.
Basically I am playing around in the demo DB to see what I can/can't do with PL/SQL.
The code worked up to the point that I added "select * from demo_orders where CUSTOMER_ID = custID;"
declare
  c_limit NUMBER(9,2);
  custID INT;
BEGIN
  custID := 6;

  -- Save the credit limit
  select credit_limit INTO c_limit
  from demo_customers cust
  where customer_id = custID;

  select * from demo_orders where CUSTOMER_ID = custID;

  dbms_output.put_line(c_limit);
END;
If you are using a SQL SELECT statement within an anonymous block (in PL/SQL: between the BEGIN and END keywords), you must select INTO something, so that PL/SQL has a variable to hold the result of your query. It is important to note that if you are selecting multiple columns (which you are, by SELECT *), you must specify multiple variables or a record to receive the results of your query.
for example:
SELECT 1
INTO v_dummy
FROM dual;
SELECT 1, 2
INTO v_dummy, v_dummy2
FROM dual;
It is also worth pointing out that if your SELECT * FROM ... returns multiple rows, PL/SQL will throw an error (TOO_MANY_ROWS). You should only expect to retrieve one row of data from a SELECT INTO.
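For instance, a record declared with %ROWTYPE can receive all the columns of a single row; a sketch against the demo schema from the question:
declare
  r demo_customers%ROWTYPE;  -- one field per column of demo_customers
begin
  select *
  into r
  from demo_customers
  where customer_id = 6;  -- must match exactly one row
  dbms_output.put_line(r.credit_limit);
end;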
Looks like the error is from the second select query.
select * from demo_orders where CUSTOMER_ID = custID;
PL/SQL won't allow a standalone SQL SELECT query like that; the results have to go somewhere.
http://pls-00428.ora-code.com/
You need to do something with the result of the second select query: select it into a variable or record, or process the rows in a cursor loop, as sketched below.
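A minimal sketch (the question doesn't show the other columns of demo_orders, so the loop only counts rows):
declare
  custID INT := 6;
  n PLS_INTEGER := 0;
begin
  for rec in (select * from demo_orders where customer_id = custID) loop
    n := n + 1;  -- rec.column_name would access individual columns here
  end loop;
  dbms_output.put_line(n || ' orders found');
end;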

Update or insert based on whether employee exists in table

I want to create a stored procedure which updates, or inserts into, a table based on whether the current row already exists in the table.
This is what I have come up with so far:
PROCEDURE SP_UPDATE_EMPLOYEE
(
  SSN VARCHAR2,
  NAME VARCHAR2
)
AS
BEGIN
  IF EXISTS(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN)
  --what ? just carry on to else
  ELSE
    INSERT INTO pb_mifid (ssn, NAME)
    VALUES (SSN, NAME);
END;
Is this the way to achieve this?
This is quite a common pattern. Depending on what version of Oracle you are running, you could use the merge statement (it has been available since Oracle 9i).
create table test_merge (id integer, c2 varchar2(255));
create unique index test_merge_idx1 on test_merge(id);
merge into test_merge t
using (select 1 id, 'foobar' c2 from dual) s
on (t.id = s.id)
when matched then update set c2 = s.c2
when not matched then insert (id, c2)
values (s.id, s.c2);
Merge is intended to merge data from a source table, but you can fake it for individual rows by selecting the data from dual.
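Applied to the question, a sketch might look like the following, assuming pb_mifid is the intended target table (as in the question's INSERT) and renaming the parameters to avoid the column-name clash covered in the update to the other answer below:
PROCEDURE SP_UPDATE_EMPLOYEE
(
  p_ssn  VARCHAR2,
  p_name VARCHAR2
)
AS
BEGIN
  MERGE INTO pb_mifid t
  USING (SELECT p_ssn AS ssn, p_name AS name FROM dual) s
  ON (t.ssn = s.ssn)
  WHEN MATCHED THEN UPDATE SET t.name = s.name
  WHEN NOT MATCHED THEN INSERT (ssn, name)
    VALUES (s.ssn, s.name);
END;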
If you cannot use merge, then optimize for the most common case. Will the proc usually not find a record and need to insert it, or will it usually need to update an existing record?
If inserting will be most common, code such as the following is probably best:
begin
  insert into t (columns)
  values (...);
exception
  when dup_val_on_index then
    update t
    set    cols = ...
    where  key = ...;
end;
If update is the most common, then turn the procedure around:
begin
  update t
  set    cols = ...
  where  key = ...;
  if sql%rowcount = 0 then
    -- nothing was updated, so the record doesn't exist; insert it.
    insert into t (columns)
    values (...);
  end if;
end;
You should not issue a select to check for the row and make the decision based on the result - that means you will always run two SQL statements, when you can get away with one most of the time (or always, if you use merge). The fewer SQL statements you use, the better your code will perform.
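As a concrete instance of the insert-first pattern, here is a sketch against the test_merge table created above; the unique index on id is what raises DUP_VAL_ON_INDEX:
begin
  insert into test_merge (id, c2)
  values (1, 'foobar');
exception
  when dup_val_on_index then
    update test_merge
    set    c2 = 'foobar'
    where  id = 1;
end;
/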
BEGIN
INSERT INTO pb_mifid (ssn, NAME)
select SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN);
END;
UPDATE:
Attention: you should name your parameter p_ssn (to distinguish it from the column ssn), and the query becomes:
INSERT INTO pb_mifid (ssn, NAME)
select P_SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = P_SSN);
because this always exists:
SELECT * FROM tblEMPLOYEE a where a.ssn = SSN
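Putting it together, a corrected version of the whole procedure might look like this (a sketch; p_name is renamed too, for the same reason):
PROCEDURE SP_UPDATE_EMPLOYEE
(
  p_ssn  VARCHAR2,
  p_name VARCHAR2
)
AS
BEGIN
  INSERT INTO pb_mifid (ssn, name)
  SELECT p_ssn, p_name
  FROM dual
  WHERE NOT EXISTS (SELECT * FROM tblEMPLOYEE a WHERE a.ssn = p_ssn);
END;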
