Execute a SQL query stored in a table - Oracle

I have a table in which one column stores either a SQL query that returns ids, or comma-separated ids.
Create table to store the query or the ids (comma-separated):
create table test1
(
name varchar(20) primary key,
stmt_or_value varchar(500),
type varchar(50)
);
insert into test1 (name, stmt_or_value, type)
values ('first', 'select id from data where id = 1;','SQL_QUERY');
insert into test1 (name, stmt_or_value, type)
values ('second', '1,2,3,4','VALUE');
The data table is as follows:
create table data
(
id number,
subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject3');
I am not able to formulate a query that will return the values after either executing the stored SQL or parsing the stored ids, based on the value of name. What I want is something like this pseudocode:
select id, subject
from data
where id in( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first') or
( parse and return ids
from stmt_or_value
where type='VALUE'
and name = 'second')
Could you please help me with this? Parsing the comma-separated values is done; I basically need help with the first part of the query below:
( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first')

This seems a very peculiar requirement, and one which will be difficult to solve in a robust fashion. STMT_OR_VALUE is the embodiment of the One Column Two Usages anti-pattern. Furthermore, resolving STMT_OR_VALUE requires flow control logic and the use of dynamic SQL. Consequently it cannot be a pure SQL solution: you need to use PL/SQL to assemble and execute the dynamic query.
Here is a proof of concept for a solution. I have opted for a function which you can call from SQL. It depends on one assumption: every query string you insert into TEST1.STMT_OR_VALUE has a projection of a single numeric column, and every value string is a CSV of numeric data only. With this proviso it is simple to construct a function which either executes a dynamic query or tokenizes the string into a series of numbers, both of which are bulk collected into a nested table:
create or replace function get_ids (p_name in test1.name%type)
  return sys.odcinumberlist
is
  l_rec        test1%rowtype;
  return_value sys.odcinumberlist;
begin
  select * into l_rec
  from test1
  where name = p_name;

  if l_rec.type = 'SQL_QUERY' then
    -- execute a query
    execute immediate l_rec.stmt_or_value
      bulk collect into return_value;
  else
    -- tokenize a string
    select xmltab.tkn
    bulk collect into return_value
    from ( select l_rec.stmt_or_value as stmt_or_value from dual ) t
       , xmltable( 'for $text in ora:tokenize($in, ",") return $text'
                   passing t.stmt_or_value as "in"
                   columns tkn number path '.'
                 ) xmltab;
  end if;

  return return_value;
end;
/
Note there is more than one way of executing a dynamic SQL statement and a multiplicity of ways to tokenize a CSV into a series of numbers. My decisions are arbitrary: feel free to substitute your preferred methods here.
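For instance, a regexp-based tokenizer works just as well (a sketch, assuming Oracle 11g or later for regexp_count; the literal '1,2,3,4' stands in for the stored value):
select to_number(regexp_substr('1,2,3,4', '[^,]+', 1, level)) as tkn
from dual
connect by level <= regexp_count('1,2,3,4', ',') + 1;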
This function can be invoked with a table() call:
select *
from data
where id in ( select * from table(get_ids('first'))) -- execute query
or id in ( select * from table(get_ids('second'))) -- get string of values
/
The big benefit of this approach is it encapsulates the logic around the evaluation of STMT_OR_VALUE and hides use of Dynamic SQL. Consequently it is easy to employ it in any SQL statement whilst retaining readability, or to add further mechanisms for generating a set of IDs.
However, this solution is brittle. It will only work if the values in the test1 table obey the rules. That is, not only must they be convertible to a stream of single numbers but the SQL statements must be valid and executable by EXECUTE IMMEDIATE. For instance, the trailing semi-colon in the question's sample data is invalid and would cause EXECUTE IMMEDIATE to hurl. Dynamic SQL is hard not least because it converts compilation errors into runtime errors.
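If you cannot guarantee clean statements, one defensive tweak (my addition, not part of the function above) is to strip trailing whitespace and semicolons before executing:
-- tolerate a trailing semicolon in the stored statement
execute immediate rtrim(l_rec.stmt_or_value, '; ' || chr(10) || chr(13))
  bulk collect into return_value;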

The following is the setup data used for this example:
create table test1
(
test_id number primary key,
stmt_or_value varchar(500),
test_type varchar(50)
);
insert into test1 (test_id, stmt_or_value, test_type)
values (1, 'select id from data where id = 1','SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (2, '1,2,3,4','VALUE');
insert into test1 (test_id, stmt_or_value, test_type)
values (3, 'select id from data where id = 5','SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (4, '3,4,5,6','VALUE');
select * from test1;
TEST_ID  STMT_OR_VALUE                     TEST_TYPE
      1  select id from data where id = 1  SQL_QUERY
      2  1,2,3,4                           VALUE
      3  select id from data where id = 5  SQL_QUERY
      4  3,4,5,6                           VALUE
create table data
(
id number,
subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject3');
insert into data (id, subject) values (4, 'test subject4');
insert into data (id, subject) values (5, 'test subject5');
select * from data;
ID  SUBJECT
 1  test subject1
 2  test subject2
 3  test subject3
 4  test subject4
 5  test subject5
Below is the solution:
declare
  sql_stmt clob; -- to store the dynamic SQL
  type o_rec_typ is record (id data.id%type, subject data.subject%type);
  type o_tab_typ is table of o_rec_typ;
  o_tab o_tab_typ; -- to store the output records
begin
  -- the SELECT below generates the required dynamic SQL
  with stmts as (
    select (listagg(stmt_or_value, ' union all ') within group (order by stmt_or_value)) || ' union all ' s
    from test1 t
    where test_type = 'SQL_QUERY')
  select q'{select id, subject
            from data
            where id in (}' ||
         nullif(s, ' union all ') || q'{
            select distinct to_number(regexp_substr(s, '[^,]+', 1, l)) id
            from (
              select level l, s
              from (select listagg(stmt_or_value, ',') within group (order by stmt_or_value) s
                    from test1
                    where test_type = 'VALUE') inp
              connect by level <= length(regexp_replace(s, '[^,]+')) + 1))}' stmt
  into sql_stmt
  from stmts; -- build the dynamic SQL and store it in the CLOB variable

  -- execute the statement, then fetch and display the output
  execute immediate sql_stmt bulk collect into o_tab;
  for i in o_tab.first .. o_tab.last
  loop
    dbms_output.put_line('id: ' || o_tab(i).id || ' subject: ' || o_tab(i).subject);
  end loop;
end;
/
Output:
id: 1 subject: test subject1
id: 2 subject: test subject2
id: 3 subject: test subject3
id: 4 subject: test subject4
id: 5 subject: test subject5
Learnings:
Avoid using keywords for table and column names.
Design application tables to serve current and reasonably foreseeable future requirements.
The above SQL will work, but it is still wise to review the table design, because the complexity of the code will keep increasing as requirements change in the future.
How to convert comma-separated values into records: https://asktom.oracle.com/pls/apex/f?p=100:11:::NO::P11_QUESTION_ID:9538583800346706523

declare
  my_sql varchar2(1000);
  v_num  number;
  v_num1 number;
begin
  select stmt_or_value into my_sql from test1 where type = 'SQL_QUERY';
  execute immediate my_sql into v_num;
  select id into v_num1 from data where id = v_num;
  dbms_output.put_line(v_num1);
end;
Answer for part 1. Please check.

Related

Inserting values into newly created table from a pre-existing table using a cursor and for loop - Snowflake SQL (classic web interface)

I'm trying to insert values into a new table in the classic Snowflake SQL web interface using data from a table that was already created, a cursor, and a for loop. My goal is to insert new information and information from the original table into the new table, but when I try to run my code, there is an error where I refer to a column of my original table. (See the code below.)
-- Creation and inserting values into table invoice_original
create temporary table invoice_original (id integer, price number(12,2));
insert into invoice_original (id, price) values
(1, 11.11),
(2, 22.22);
-- Creates final empty table invoice_final
create temporary table invoice_final (
study_number varchar,
price varchar,
price_type varchar);
execute immediate $$
declare
c1 cursor for select price from invoice_original;
begin
for record in c1 do
insert into invoice_final(study_number, price, price_type)
values('1', record.price, 'Dollars');
end for;
end;
$$;
My end goal is to have the resulting table invoice_final with 3 columns - study_number, price, and price_type where the price value comes from the invoice_original table. The error I'm currently getting is:
Uncaught exception of type 'STATEMENT_ERROR' on line 6 at position 8 : SQL compilation error: error line 2 at position 20 invalid identifier 'RECORD.PRICE'.
Does anyone know why the record.price is not capturing the price value from the invoice_original table?
There are a number of kinds of dynamic SQL that do not handle the cursor record name, and thus give this error. If you push the value into a simply named temp variable, it will work:
for record in c1 do
let temp_price number := record.price;
insert into invoice_final(study_number, price, price_type)
values('1', temp_price, 'Dollars');
end for;
This SQL has not been run and could be in slightly the wrong format, but it is the base issue.
Also, this really looks like a plain INSERT ... SELECT would work, but I assume that is just the nature of simplifying the question down.
See the following for details on working with variables:
https://docs.snowflake.com/en/developer-guide/snowflake-scripting/variables.html#working-with-variables
The revised code below functions as desired:
-- Creation and inserting values into table invoice_original
create
or replace temporary table invoice_original (id integer, price number(12, 2));
insert into
invoice_original (id, price)
values
(1, 11.11),
(2, 22.22);
-- Creates final empty table invoice_final
create
or replace temporary table invoice_final (
study_number varchar,
price number(12, 2),
price_type varchar
);
execute immediate $$
declare
new_price number(12,2);
c1 cursor for select price from invoice_original;
begin
for record in c1 do
new_price := record.price;
insert into invoice_final(study_number, price, price_type) values('1',:new_price, 'Dollars');
end for;
end;
$$;
Note that I changed the target table definition for price to NUMBER (12,2) instead of VARCHAR, and assigned the record.price to a local variable that was passed to the insert statement as :new_price.
That all said... I would strongly recommend against this approach for loading tables, for performance reasons. You can replace all of this with a single INSERT ... SELECT.
Always opt for set based processing over cursor / loop / row based processing with Snowflake.
https://docs.snowflake.com/en/sql-reference/sql/insert.html
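For reference, the set-based equivalent of the loop above might look like this (a sketch; it assumes, as the loop does, that every row gets study_number '1' and price_type 'Dollars'):
insert into invoice_final (study_number, price, price_type)
select '1', price, 'Dollars'
from invoice_original;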

Compare differences before insert into Oracle table

Could you please tell me how to compare the differences between a table and my select query, and insert those results into a separate table? My plan is to create one base table (named RESULT) using a select statement and populate it with the current result set. The next day, I would like a procedure which compares the same select against the RESULT table and inserts the differences into another table called DIFFERENCES.
Any ideas?
Thanks!
You can create the RESULT_TABLE using CTAS as follows:
CREATE TABLE RESULT_TABLE
AS SELECT ... -- YOUR QUERY
Then you can use the following procedure which calculates the difference between your query and data from RESULT_TABLE:
CREATE OR REPLACE PROCEDURE FIND_DIFF
AS
BEGIN
INSERT INTO DIFFERENCES
--data present in the query but not in RESULT_TABLE
(SELECT ... -- YOUR QUERY
MINUS
SELECT * FROM RESULT_TABLE)
UNION
--data present in the RESULT_TABLE but not in the query
(SELECT * FROM RESULT_TABLE
MINUS
SELECT ... );-- YOUR QUERY
END;
/
I have used a UNION of the two MINUS differences, taken in opposite orders, so that deleted data is also inserted into the DIFFERENCES table. If that is not the requirement, remove the query before or after the UNION accordingly.
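For instance, if the base query were select id, col_1 from emp (hypothetical table and column names), the INSERT would read:
INSERT INTO DIFFERENCES
(SELECT id, col_1 FROM emp
MINUS
SELECT * FROM RESULT_TABLE)
UNION
(SELECT * FROM RESULT_TABLE
MINUS
SELECT id, col_1 FROM emp);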
-- Create a table with results from the query, and ID as primary key
create table result_t as
select id, col_1, col_2, col_3
from <some-query>;
-- Create a table with new rows, deleted rows or updated rows
create table differences_t as
select id
-- Old values
,b.col_1 as old_col_1
,b.col_2 as old_col_2
,b.col_3 as old_col_3
-- New values
,a.col_1 as new_col_1
,a.col_2 as new_col_2
,a.col_3 as new_col_3
-- Execute the query once again
from <some-query> a
-- Outer join to also detect new/deleted rows
full join result_t b using(id)
-- Null aware comparison
where decode(a.col_1, b.col_1, 1, 0) = 0
or decode(a.col_2, b.col_2, 1, 0) = 0
or decode(a.col_3, b.col_3, 1, 0) = 0;
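The DECODE calls are what make the comparison null-aware: DECODE treats two NULLs as a match, unlike the = operator. A quick way to see this:
-- decode(a, b, 1, 0) returns 1 on a match, even when both sides are null
select decode(null, null, 1, 0) as null_vs_null, -- 1: nulls match
       decode(1, null, 1, 0)    as one_vs_null   -- 0: no match
from dual;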

Can't use oracle associative array in object (for custom aggregate function)

Background: My goal is to write an aggregate function in Oracle that builds a string containing the number of occurrences of each element. For example, "Jake:2-Tom:3-Jim:5" should mean 2 occurrences of Jake, 3 of Tom, and 5 of Jim. To write a custom aggregate function I need to write an object type that implements the ODCIAggregate routines, plus a map-like data structure for counting each element's occurrences. The only map-like data structure in Oracle is the associative array.
Problem: Unfortunately I can't find any approach that allows using associative arrays in an object type. I tried these approaches:
1 – Create a standalone type for the associative array and use it in the object. Oracle doesn't allow creating standalone associative array types:
CREATE TYPE STR_MAP IS TABLE OF NUMBER INDEX BY VARCHAR2(100);
This gets the following error:
PLS-00355: use of pl/sql table not allowed in this context
2 – Create the map-like type in a package and use it in the object. Oracle lets you create an associative array type in a package, but doesn't let you use an in-package type in an object. I checked everything about granting execute on the package and creating a synonym for the in-package type, but there is no way to use an in-package type in an object declaration.
P.S. 1:
Of course we can do this for one column with a nested group by, but I would prefer to do it for many columns with a single aggregate function. It would be a very useful aggregate function, and I wonder why nobody has written something like this before. For many columns we have a limited number of distinct values, and with such an aggregate function we could simply summarize all of them. For example, if we had an aggregate function named ocur_count(), we could analyze a collection of transactions like this:
You can use listagg and a simple group by with a count to get what you need (note that listagg output is limited in size to 4k chars when used in SQL). Here I'm counting occurrences of first names, using ',' as the separator between names and ':' as the separator before each count:
SQL> create table person_test
(
person_id number,
first_name varchar2(50),
last_name varchar2(50)
)
Table created.
SQL> insert into person_test values (1, 'Joe', 'Blow')
1 row created.
SQL> insert into person_test values (2, 'Joe', 'Smith')
1 row created.
SQL> insert into person_test values (3, 'Joe', 'Jones')
1 row created.
SQL> insert into person_test values (4, 'Frank', 'Rizzo')
1 row created.
SQL> insert into person_test values (4, 'Frank', 'Jones')
1 row created.
SQL> insert into person_test values (5, 'Betty', 'Boop')
1 row created.
SQL> commit
Commit complete.
SQL> -- get list of first names and counts into single string
SQL> --
SQL> -- NOTE: Beware of size limitations of listagg (4k chars if
SQL> -- used as a SQL statement I believe)
SQL> --
SQL> select listagg(person_count, ',')
within group(order by person_count) as person_agg
from (
select first_name || ':' || count(1) as person_count
from person_test
group by first_name
order by first_name
)
PERSON_AGG
--------------------------------------------------------------------------------
Betty:1,Frank:2,Joe:3
1 row selected.
NOTE: If you do run into a problem with string concatenation too long (exceeds listagg size limit), you can choose to return a CLOB using xmlagg:
-- this returns a CLOB
select rtrim(xmlagg(xmlelement(e,person_count,',').extract('//text()') order by person_count).GetClobVal(),',')
from (
select first_name || ':' || count(1) as person_count
from person_test
group by first_name
order by first_name
);
Hope that helps
EDIT:
If you want counts for multiple columns (firstname and lastname in this example), you can do:
select
typ,
listagg(cnt, ',') within group(order by cnt)
as name_agg
from (
-- FN=FirstName, LN=LastName
select 'FN' as typ, first_name || ':' || count(1) as cnt
from person_test
group by first_name
union all
select 'LN' as typ, last_name || ':' || count(1) as cnt
from person_test
group by last_name
)
group by typ;
Output:
"FN" "Betty:1,Frank:2,Joe:3"
"LN" "Blow:1,Boop:1,Jones:2,Rizzo:1,Smith:1"
I'd also note that you probably can create a custom aggregate function to do this; I just prefer to stick with the built-in functionality of SQL first if it can solve my problem.
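For completeness, here is a sketch of what the custom aggregate route could look like (hedged: occ_count_impl and ocur_count are hypothetical names, and this is untuned). It sidesteps the associative-array restriction by holding the raw values in a nested table (sys.odcivarchar2list, a SQL type allowed as an object attribute) and deferring the counting to ODCIAggregateTerminate:
create or replace type occ_count_impl as object (
  vals sys.odcivarchar2list,
  static function odciaggregateinitialize (ctx in out occ_count_impl)
    return number,
  member function odciaggregateiterate (self in out occ_count_impl,
                                        p_value in varchar2) return number,
  member function odciaggregatemerge (self in out occ_count_impl,
                                      ctx2 in occ_count_impl) return number,
  member function odciaggregateterminate (self in occ_count_impl,
                                          return_value out varchar2,
                                          flags in number) return number
);
/
create or replace type body occ_count_impl as
  static function odciaggregateinitialize (ctx in out occ_count_impl)
    return number is
  begin
    ctx := occ_count_impl(sys.odcivarchar2list());
    return odciconst.success;
  end;
  member function odciaggregateiterate (self in out occ_count_impl,
                                        p_value in varchar2) return number is
  begin
    -- just remember the value; counting happens at the end
    self.vals.extend;
    self.vals(self.vals.count) := p_value;
    return odciconst.success;
  end;
  member function odciaggregatemerge (self in out occ_count_impl,
                                      ctx2 in occ_count_impl) return number is
  begin
    -- append the other context's values (for parallel evaluation)
    for i in 1 .. ctx2.vals.count loop
      self.vals.extend;
      self.vals(self.vals.count) := ctx2.vals(i);
    end loop;
    return odciconst.success;
  end;
  member function odciaggregateterminate (self in occ_count_impl,
                                          return_value out varchar2,
                                          flags in number) return number is
    l_vals sys.odcivarchar2list := self.vals;
  begin
    -- count each distinct value, then glue the value:count pairs together
    select listagg(column_value || ':' || cnt, '-')
             within group (order by column_value)
      into return_value
      from (select column_value, count(*) cnt
              from table(l_vals)
             group by column_value);
    return odciconst.success;
  end;
end;
/
create or replace function ocur_count (p_val varchar2) return varchar2
  parallel_enable aggregate using occ_count_impl;
/
With the sample data above, select ocur_count(first_name) from person_test should yield 'Betty:1-Frank:2-Joe:3'. The trade-off is that the state holds every input value rather than running counts, so memory grows with group size.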

Oracle PL/SQL Use Merge command on data from XML Table

I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
to_date(substr(readings.reading_dt, 1, 10),'YYYY-MM-DD'), SYSDATE,
p_location_id
FROM XMLTable(
XMLNamespaces('http://www.example.com' as "ns1"),
'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime') readings
GROUP BY substr(readings.reading_dt,1,10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there way to get the data from the XMLTable while still using the (much cleaner) merge syntax? I would just delete the data beforehand and re-import or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe this should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
LOCATION_ID = readings.location_id
AND READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;

Update or insert based on if employee exist in table

I want to create a stored procedure which updates or inserts into a table, depending on whether the current line already exists in the table.
This is what I have come up with so far:
PROCEDURE SP_UPDATE_EMPLOYEE
(
SSN VARCHAR2,
NAME VARCHAR2
)
AS
BEGIN
IF EXISTS(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN)
--what ? just carry on to else
ELSE
INSERT INTO pb_mifid (ssn, NAME)
VALUES (SSN, NAME);
END;
Is this the way to achieve this?
This is quite a common pattern. Depending on what version of Oracle you are running, you could use the merge statement (it was introduced in Oracle 9i).
create table test_merge (id integer, c2 varchar2(255));
create unique index test_merge_idx1 on test_merge(id);
merge into test_merge t
using (select 1 id, 'foobar' c2 from dual) s
on (t.id = s.id)
when matched then update set c2 = s.c2
when not matched then insert (id, c2)
values (s.id, s.c2);
Merge is intended to merge data from a source table, but you can fake it for individual rows by selecting the data from dual.
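Applied to the question's procedure, it would look roughly like this (a sketch: it assumes tblEMPLOYEE has ssn and name columns and is the single target table, and it renames the parameters to p_ssn/p_name to avoid the column-name capture issue raised in the other answer):
create or replace procedure sp_update_employee (
  p_ssn  in varchar2,
  p_name in varchar2
)
as
begin
  merge into tblemployee t
  using (select p_ssn as ssn, p_name as name from dual) s
  on (t.ssn = s.ssn)
  when matched then update
    set t.name = s.name
  when not matched then insert (ssn, name)
    values (s.ssn, s.name);
end;
/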
If you cannot use merge, then optimize for the most common case. Will the proc usually not find a record and need to insert it, or will it usually need to update an existing record?
If inserting will be most common, code such as the following is probably best:
begin
  insert into t (columns)
  values (...);
exception
  when dup_val_on_index then
    update t set cols = values;
end;
If update is the most common, then turn the procedure around:
begin
  update t set cols = values;
  if sql%rowcount = 0 then
    -- nothing was updated, so the record doesn't exist; insert it.
    insert into t (columns)
    values (...);
  end if;
end;
You should not issue a select to check for the row and make the decision based on the result - that means you will always need to run two SQL statements, when you can get away with one most of the time (or always if you use merge). The less SQL statements you use, the better your code will perform.
BEGIN
INSERT INTO pb_mifid (ssn, NAME)
select SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN);
END;
UPDATE:
Note: you should name your parameters p_ssn and p_name (to distinguish them from the columns SSN and NAME), and the query becomes:
INSERT INTO pb_mifid (ssn, NAME)
select P_SSN, P_NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = P_SSN);
because, with the parameter named SSN, this always returns rows (the predicate compares the column with itself):
SELECT * FROM tblEMPLOYEE a where a.ssn = SSN
