iterate around values in a bulk collected table - extractvalue - oracle

I have a piece of PL/SQL which does:
SELECT *
BULK COLLECT INTO table_1
FROM XMLTABLE (
       '//Match'
       PASSING l_xml_string
       COLUMNS col_1 VARCHAR2 (8)  PATH '@col_1',
               col_2 VARCHAR2 (40) PATH '@col_2');
I then store these as an XML variable using XMLAGG.
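Roughly, that aggregation step looks like this (l_agg_xml and the element/attribute names here are illustrative placeholders, not the exact code):
-- Re-aggregate the collection back into a single XMLTYPE value
SELECT XMLAGG(XMLELEMENT("Match",
                XMLATTRIBUTES(t.col_1 AS "col_1",
                              t.col_2 AS "col_2")))
  INTO l_agg_xml
  FROM TABLE (table_1) t;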
I want to join the col_1 value against a view, but the problem is that when I use the EXTRACTVALUE function (on the aggregated XML) I get a terrible explain plan (full table scans instead of an index) compared to when I pass it a single value, even when there is only one record within the XML.
When I do the extract before joining to this table (storing it in a variable and then joining against the variable) it takes the correct access path, but to do so I have to limit myself to one result (where rownum < 2), and I need to remove this restriction.
When I do:
select col_1
into l_col_1
from TABLE (table_1);
it fails with:
ORA-01422: exact fetch returns more than requested number of rows
Is there another way to do this or:
SELECT EXTRACTVALUE (data.COLUMN_VALUE ....
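One way to avoid the ORA-01422 while keeping every row would be to drive the join from the collection itself. A minimal sketch, assuming table_1 is a collection of a SQL-level type (as the TABLE(table_1) call implies) and my_view with its col_1 column is a placeholder name:
-- Join the view to all rows of the collection instead of fetching a single value
FOR r IN (SELECT v.*
            FROM TABLE (table_1) t
            JOIN my_view v
              ON v.col_1 = t.col_1)
LOOP
  NULL;  -- process each matched row here
END LOOP;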

Related

Compare differences before insert into oracle table

Could you please tell me how to compare the differences between a table and my select query and insert those results into a separate table? My plan is to create one base table (named RESULT) using a select statement and populate it with the current result set. Then, the next day, I would like to create a procedure that compares the same select with the RESULT table and inserts the differences into another table called DIFFERENCES.
Any ideas?
Thanks!
You can create the RESULT_TABLE using CTAS as follows:
CREATE TABLE RESULT_TABLE
AS SELECT ... -- YOUR QUERY
Then you can use the following procedure which calculates the difference between your query and data from RESULT_TABLE:
CREATE OR REPLACE PROCEDURE FIND_DIFF
AS
BEGIN
  INSERT INTO DIFFERENCES
  -- data present in the query but not in RESULT_TABLE
  (SELECT ... -- YOUR QUERY
   MINUS
   SELECT * FROM RESULT_TABLE)
  UNION
  -- data present in RESULT_TABLE but not in the query
  (SELECT * FROM RESULT_TABLE
   MINUS
   SELECT ... ); -- YOUR QUERY
END;
/
I have used UNION and taken the MINUS in both directions so that deleted data is also inserted into the DIFFERENCES table. If that is not required, remove the query before or after the UNION as appropriate.
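Once created, the procedure can simply be run on demand or from a nightly job; a minimal usage sketch:
BEGIN
  FIND_DIFF;
  COMMIT;  -- the procedure itself does not commit
END;
/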
-- Create a table with results from the query, and ID as primary key
create table result_t as
select id, col_1, col_2, col_3
from <some-query>;
-- Create a table with new rows, deleted rows or updated rows
create table differences_t as
select id
-- Old values
,b.col_1 as old_col_1
,b.col_2 as old_col_2
,b.col_3 as old_col_3
-- New values
,a.col_1 as new_col_1
,a.col_2 as new_col_2
,a.col_3 as new_col_3
-- Execute the query once again
from <some-query> a
-- Outer join to also detect new/deleted rows
full join result_t b using(id)
-- Null aware comparison
where decode(a.col_1, b.col_1, 1, 0) = 0
or decode(a.col_2, b.col_2, 1, 0) = 0
or decode(a.col_3, b.col_3, 1, 0) = 0;
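The DECODE calls give a NULL-safe comparison: DECODE treats two NULLs as equal, so a row is only flagged when a value genuinely differs (or the row exists on just one side of the full join). A quick illustration:
-- DECODE considers two NULLs equal, unlike the = operator
select decode(null, null, 1, 0) as nulls_match,    -- 1 (treated as equal)
       decode(1,    null, 1, 0) as value_vs_null,  -- 0 (different)
       decode(1,    1,    1, 0) as values_match    -- 1 (equal)
from dual;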

Function results column names to be used in select statement

I have a function which returns column names and I am trying to use the column name as part of my select statement, but my results come back as the column name itself instead of the column's values.
FUNCTION returning column name:
get_col_name(input1, input2)
Can I use a query like this to get the values of that column from the table?
SELECT GET_COL_NAME(input1,input2) FROM TABLE;
There are a few ways to run dynamic SQL directly inside a SQL statement. These techniques should be avoided since they are usually complicated, slow, and buggy. Before you do this try to find another way to solve the problem.
The below solution uses DBMS_XMLGEN.GETXML to produce XML from a dynamically created SQL statement, and then uses XML table processing to extract the value.
This is the simplest way to run dynamic SQL in SQL, and it only requires built-in packages. The main limitation is that the number and type of columns is still fixed. If you need a function that returns an unknown number of columns you'll need something more powerful, like the open source program Method4. But that level of dynamic code gets even more difficult and should only be used after careful consideration.
Sample schema
--drop table table1;
create table table1(a number, b number);
insert into table1 values(1, 2);
commit;
Function that returns column name
create or replace function get_col_name(input1 number, input2 number) return varchar2 is
begin
if input1 = 0 then
return 'a';
else
return 'b';
end if;
end;
/
Sample query and result
select dynamic_column
from
(
select xmltype(dbms_xmlgen.getxml('
select '||get_col_name(0,0)||' dynamic_column from table1'
)) xml_results
from dual
)
cross join
xmltable
(
'/ROWSET/ROW'
passing xml_results
columns dynamic_column varchar2(4000) path 'DYNAMIC_COLUMN'
);
DYNAMIC_COLUMN
--------------
1
If you change the inputs to the function the new value is 2 from column B. Use this SQL Fiddle to test the code.

Can't use oracle associative array in object (for custom aggregate function)

Background: My goal is to write an aggregate function in Oracle that builds a string containing the number of occurrences of each element. For example, "Jake:2-Tom:3-Jim:5" should mean 2 occurrences of Jake, 3 of Tom and 5 of Jim. To write a custom aggregate function I have to create an object type that implements the ODCIAggregate routines, plus a Map-like data structure for counting the occurrences of each element. The only Map-like data structure in Oracle is the associative array.
Problem: Unfortunately I do not know of any way to use an associative array inside an object type. I tried these approaches:
1 – Create a schema-level (standalone) type for the associative array and use it in the object. Oracle does not allow creating standalone associative array types.
CREATE TYPE STR_MAP IS TABLE OF NUMBER INDEX BY VARCHAR2(100);
This gets the following error:
PLS-00355: use of pl/sql table not allowed in this context
2 – Create a Map-like type in a package and use it in the object. Oracle lets you declare an associative array type in a package, but does not let you use a type declared in a package inside an object type. I went through all the suggestions about granting EXECUTE on the package or creating a synonym for the packaged type, but there is no way to use a packaged type in an object declaration.
P.S. 1:
Of course we can do this for one column with a nested GROUP BY, but I would prefer to do it for many columns with just the aggregate function. It would be a very useful aggregate and I wonder why nobody has written something like this before. Many columns have a limited number of distinct values, and with such an aggregate we could simply summarize all of them. For example, if we had an aggregate named ocur_count(), we could analyze a collection of transactions like this:
select ocur_count(trans_type), ocur_count(trans_state), ocur_count(response_code), ocur_count(something_status) from transaction;
You can use listagg and a simple group by with a count to get what you need (Note the listagg output is limited in size to 4k chars). Here I'm counting occurrences of first names, using ',' as the separator between names and ':' as the separator for count:
SQL> create table person_test
(
person_id number,
first_name varchar2(50),
last_name varchar2(50)
)
Table created.
SQL> insert into person_test values (1, 'Joe', 'Blow')
1 row created.
SQL> insert into person_test values (2, 'Joe', 'Smith')
1 row created.
SQL> insert into person_test values (3, 'Joe', 'Jones')
1 row created.
SQL> insert into person_test values (4, 'Frank', 'Rizzo')
1 row created.
SQL> insert into person_test values (4, 'Frank', 'Jones')
1 row created.
SQL> insert into person_test values (5, 'Betty', 'Boop')
1 row created.
SQL> commit
Commit complete.
SQL> -- get list of first names and counts into single string
SQL> --
SQL> -- NOTE: Beware of size limitations of listagg (4k chars if
SQL> -- used as a SQL statement I believe)
SQL> --
SQL> select listagg(person_count, ',')
within group(order by person_count) as person_agg
from (
select first_name || ':' || count(1) as person_count
from person_test
group by first_name
order by first_name
)
PERSON_AGG
--------------------------------------------------------------------------------
Betty:1,Frank:2,Joe:3
1 row selected.
NOTE: If you do run into a problem with the string concatenation being too long (i.e. the LISTAGG size limit is exceeded), you can return a CLOB using XMLAGG instead:
-- this returns a CLOB
select rtrim(xmlagg(xmlelement(e,person_count,',').extract('//text()') order by person_count).GetClobVal(),',')
from (
select first_name || ':' || count(1) as person_count
from person_test
group by first_name
order by first_name
);
Hope that helps
EDIT:
If you want counts for multiple columns (firstname and lastname in this example), you can do:
select
typ,
listagg(cnt, ',') within group(order by cnt)
as name_agg
from (
-- FN=FirstName, LN=LastName
select 'FN' as typ, first_name || ':' || count(1) as cnt
from person_test
group by first_name
union all
select 'LN' as typ, last_name || ':' || count(1) as cnt
from person_test
group by last_name
)
group by typ;
Output:
"FN" "Betty:1,Frank:2,Joe:3"
"LN" "Blow:1,Boop:1,Jones:2,Rizzo:1,Smith:1"
I'd also note that you probably can create a custom aggregate function to do this; I just prefer to stick with the built-in functionality of SQL first if it can solve my problem.

Bulk Insert with large static data in Oracle

I have a simple table with one column of numbers. I want to load about 3000 numbers into it. I want to do that in memory, without using SQL*Loader. I tried:
INSERT ALL
INTO t_table (code) VALUES (n1)
INTO t_table (code) VALUES (n2)
...
...
INTO t_table (code) VALUES (n3000)
SELECT * FROM dual
But it fails at 1000 values. What should I do? Is SQL*Loader the only way? Can I do the load with SQL only?
Presumably you have an initial value of n. If so, this code will populate code with the values n to n+2999:
insert into t_table (code)
select (&N + level ) - 1
from dual
connect by level <=3000
This query uses a SQL*Plus substitution variable to supply the initial value of n. Other clients will need to pass the value in a different way.
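For example, a hedged sketch of the same generator using a bind variable (the :n placeholder is an assumption about how the client binds the starting value):
-- Same row generator, with a bind variable instead of the &N substitution
insert into t_table (code)
select :n + level - 1
from dual
connect by level <= 3000;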
"Assume that I am in c++ with a stl::vector, what query should I
write ?"
So when you wrote n3000 what you really meant was n(3000). It's easy enough to use an array in SQL. This example uses one of Oracle's pre-defined collections, a table of type NUMBER:
declare
  ids system.number_tbl_type;
begin
  -- populate ids here (e.g. from the client) before running the insert
  insert into t_table (code)
  select column_value
  from table ( select ids from dual );
end;
/
As for mapping your C++ vector to Oracle types, that's a different question (and one which I can't answer).
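For completeness, here is a minimal, self-contained sketch using the built-in SYS.ODCINUMBERLIST collection; the literal values are placeholders for whatever the client actually passes in:
declare
  -- SYS.ODCINUMBERLIST is a built-in VARRAY of NUMBER; the values below are
  -- placeholders for the numbers supplied by the caller
  ids sys.odcinumberlist := sys.odcinumberlist(101, 102, 103);
begin
  insert into t_table (code)
  select column_value
  from table(ids);
end;
/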

How to compare a local CLOB column against a CLOB column in a remote database instance

I want to verify that the data in 2 CLOB columns is the same on 2 different instances. If these were VARCHAR2 columns, I could use a MINUS or a join to determine if rows were in one instance or the other. Unfortunately, Oracle does not allow you to perform set operations on CLOB columns.
How do I compare 2 CLOB columns, one of which is in my local instance and one that is in a remote instance?
Example table structure:
CREATE TABLE X.TEXT_TABLE
( ID   VARCHAR2(30),   -- lengths here are illustrative
  NAME VARCHAR2(100),
  TEXT CLOB
);
You can use an Oracle global temporary table to pull the CLOBs over to your local instance temporarily. You can then use the DBMS_LOB.COMPARE function to compare the CLOB columns.
If the comparison query below returns any rows, the CLOBs are different (extra or missing characters, newlines, etc.) or the row exists in only one of the instances.
--Create temporary table to store the text in
CREATE GLOBAL TEMPORARY TABLE X.TEMP_TEXT_TABLE
ON COMMIT DELETE ROWS
AS
SELECT * FROM X.TEXT_TABLE@REMOTE_DB;
--Use this statement if you need to refresh the TEMP_TEXT_TABLE table
INSERT INTO X.TEMP_TEXT_TABLE
SELECT * FROM X.TEXT_TABLE@REMOTE_DB;
--Do the comparison
SELECT DISTINCT
TARGET.NAME TARGET_NAME
,SOURCE.NAME SOURCE_NAME
,DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) AS COMPARISON
FROM (SELECT ID, NAME, TEXT FROM X.TEMP_TEXT_TABLE) TARGET
FULL OUTER JOIN
(SELECT ID, NAME, TEXT FROM X.TEXT_TABLE) SOURCE
ON TARGET.ID = SOURCE.ID
WHERE DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) <> 0
OR DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) IS NULL;
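For reference, DBMS_LOB.COMPARE returns 0 when the two LOBs are identical, a non-zero value when they differ, and NULL when either argument is NULL (which is what happens for rows present on only one side of the full outer join). A tiny illustration:
-- Quick demonstration of the return values relied on above
select dbms_lob.compare(to_clob('abc'), to_clob('abc'))     as same_lobs,  -- 0
       dbms_lob.compare(to_clob('abc'), to_clob('abd'))     as diff_lobs,  -- non-zero
       dbms_lob.compare(to_clob('abc'), cast(null as clob)) as null_side   -- NULL
from dual;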
You can use DBMS_SQLHASH to compare the hashes of the relevant data. This should use significantly less IO than moving and comparing the CLOBs. The query below will just tell you if there are any differences in the entire table, but you can narrow it down.
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table'
,digest_type => 1/*MD4*/) from dual
minus
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table@remoteDB'
,digest_type => 1/*MD4*/) from dual;
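Note that EXECUTE on DBMS_SQLHASH is not granted to PUBLIC by default, so a DBA may first need something along these lines (the user name is a placeholder):
grant execute on sys.dbms_sqlhash to some_user;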
