I have encountered some interesting behavior with the following code:
create or replace type schema1.t_varchar2_tab force as table of varchar2(8 byte);
declare
v_list schema1.t_varchar2_tab;
begin
select col1
bulk collect into v_list
from (
select '12345678' col1 from dual
union all
select '12345678 ' from dual
);
for idx in 1..v_list.count
loop
dbms_output.put_line('val = #'||v_list(idx)||'#');
end loop;
end;
/
It returns:
val = #12345678#
val = #12345678#
It looks like when you bulk collect into a nested table, and a selected string is longer than the length defined in the type, and the extra characters are spaces, Oracle RTRIMs the trailing spaces to make the value fit the VARCHAR2 size.
Is this behavior normal? Have you encountered this? I've checked the Oracle online documentation, but so far I could not find anything related.
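For contrast, a direct PL/SQL assignment of the same nine-character value would, as far as I know, raise ORA-06502 rather than silently trimming. A minimal sketch, reusing the type from above:
declare
    v_list schema1.t_varchar2_tab := schema1.t_varchar2_tab();
begin
    v_list.extend;
    -- nine characters ('12345678' plus a trailing space) assigned to a varchar2(8 byte) element
    v_list(1) := '12345678 ';
    dbms_output.put_line('val = #' || v_list(1) || '#');
end;
/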
Related
I have a scenario where I need to create a dynamic query with some if/else statements. I have prepared the query but am unable to use it inside a loop; below is a snippet of what I am trying to achieve.
Query :
select user_name into username from table1 where user ='user1';
for(query)
Loop
Dbms_output.put_line('user name ' || user_name);
END Loop;
Is it possible to use the variable in the FOR loop?
Not really. Notice in the 19c PL/SQL Reference it says:
cursor
Name of an explicit cursor (not a cursor variable) that is not open when the cursor FOR LOOP is entered.
You have to code this the long way, by explicitly fetching the cursor into a record until you hit cursorname%notfound, e.g.
create or replace procedure cheese_report
( cheese_cur in sys_refcursor )
as
type cheese_detail is record
( name varchar2(15)
, region varchar2(30) );
cheese cheese_detail;
begin
loop
fetch cheese_cur into cheese;
exit when cheese_cur%notfound;
dbms_output.put_line(cheese.name || ', ' || cheese.region);
end loop;
close cheese_cur;
end;
Test:
declare
cheese sys_refcursor;
begin
open cheese for
select 'Cheddar', 'UK' from dual union all
select 'Gruyere', 'France' from dual union all
select 'Ossau Iraty', 'Spain' from dual union all
select 'Yarg', 'UK' from dual;
cheese_report(cheese);
end;
/
Cheddar, UK
Gruyere, France
Ossau Iraty, Spain
Yarg, UK
In Oracle 21c you can simplify this somewhat, though you still have to know the structure of the result set:
create or replace procedure cheese_report
( cheese_cur in sys_refcursor )
as
type cheese_detail is record
( cheese varchar2(15)
, region varchar2(30) );
begin
for r cheese_detail in values of cheese_cur
loop
dbms_output.put_line(r.cheese || ', ' || r.region);
end loop;
end;
You could parse an unknown ref cursor using dbms_sql to find the column names and types, but it's not straightforward as you have to do every processing step yourself. For an example of something similar, see www.williamrobertson.net/documents/refcursor-to-csv.shtml.
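As a rough sketch of the DBMS_SQL route (not the full CSV example from that link), the block below converts a ref cursor into a DBMS_SQL cursor number and lists the column names and type codes; fetching the values generically still needs per-type DEFINE_COLUMN/COLUMN_VALUE handling. It assumes 11g or later for dbms_sql.to_cursor_number:
declare
    l_rc   sys_refcursor;
    l_cur  integer;
    l_cols integer;
    l_desc dbms_sql.desc_tab;
begin
    open l_rc for
        select 'Cheddar' as name, 'UK' as region from dual;

    -- hand the ref cursor over to dbms_sql so it can be described
    l_cur := dbms_sql.to_cursor_number(l_rc);
    dbms_sql.describe_columns(l_cur, l_cols, l_desc);

    for i in 1 .. l_cols loop
        dbms_output.put_line(l_desc(i).col_name || ' (type code ' || l_desc(i).col_type || ')');
    end loop;

    dbms_sql.close_cursor(l_cur);
end;
/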
Here is my problem...
I'm on Oracle 11g. I've searched a lot but found nothing.
I need to execute DML operations that can contain data longer than 4k characters.
If I use a PL/SQL block directly in Oracle, like the one below, everything works fine:
declare
txtV varchar2(32000);
BEGIN
txtV:= 'MORE THAN 4k CHARS, here only few for readability' ;
Update FD_FILTERDEF
set SQLFILTER = txtV
where id='blabla';
END;
BUT if I use a MERGE statement, it gives me error ORA-01461:
declare
txtV varchar2(32000);
BEGIN
txtV:= '' ;
MERGE INTO FD_FILTERDEF A
USING ( select txtV C0
from dual) ST
ON (A.CODE = 'bla bla')
WHEN MATCHED THEN
Update set A.SQLFILTER = st.C0
WHEN NOT MATCHED THEN
insert (CODE ,SQLFILTER )
values ('bla bla' , ST.C0 );
END;
Any hint would be appreciated :)
Use a CLOB column for the filter text:
create table fd_filterdef
( code varchar2(10) primary key
, sqlfilter clob );
declare
txtv varchar2(32000);
begin
txtv := rpad('select statement, really really long', 5000, ' etc');
merge into fd_filterdef a
using (select 'bla bla' as code from dual) st
on (a.code = st.code)
when matched then
update set a.sqlfilter = txtv
when not matched then
insert (code, sqlfilter)
values (st.code,txtv);
end;
/
select code, length(sqlfilter) from fd_filterdef;
CODE LENGTH(SQLFILTER)
---------- -----------------
bla bla 5000
Selecting your long variable from dual implicitly casts it to a SQL VARCHAR2, which prior to 12c only holds up to 4000 bytes.
Prior to 12c you can't select a VARCHAR2 longer than 4000 bytes in SQL: https://docs.oracle.com/cd/B28359_01/server.111/b28320/limits001.htm
That is what happens in your code:
select txtV C0 from dual
But in Oracle 12c (with extended data types) you can:
https://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF30020
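If you are on 12c or later, a small check (assuming you can query v$parameter) tells you whether extended data types are enabled before you rely on long SQL VARCHAR2 values:
select value from v$parameter where name = 'max_string_size';
-- STANDARD -> SQL VARCHAR2 limited to 4000 bytes
-- EXTENDED -> SQL VARCHAR2 up to 32767 bytes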
@William great hint, what you posted works.
But I'm quite sure I had tested a very similar syntax, in which the only difference was in the select statement.
The one you provided works fine... the following one will raise the error:
declare
txtV varchar2(32000);
BEGIN
txtV:= '5000 chars ....';
MERGE INTO FD_FILTERDEF A
USING ( select 'not used' Code from dual) ST
ON (A.CODE = 'TESTCODE')
WHEN MATCHED THEN
Update set A.SQLFILTER = txtV --<<<< note: here I use the declared variable directly, not the one from the SELECT statement
WHEN NOT MATCHED THEN
insert (CODE ,SQLFILTER )
values (st.code , txtV );
END;
@William ... OK, maybe I made a bit of a mess writing down the scripts. I was surprised about the "PK error" because in my mind I had the script done as follows. I intended not to use the "select" statement at all, just passing the code inside the insert and update (as follows), because I build the query programmatically and replace values with placeholders....
In this way, there is no PK error at all. In your examples, of course, there was: the insert used the code from the query, but the update used the value "TESTCODE"... so it was searching for (and updating) one code, but inserting another :P
Sorry for my mistake :)
declare
txtV varchar2(32000);
BEGIN
txtV:= '5000 chars ....';
MERGE INTO FD_FILTERDEF A
USING ( select 'not used' Code from dual) ST
ON (A.CODE = 'TESTCODE')
WHEN MATCHED THEN
Update set A.SQLFILTER = txtV --<<<< note: here I use the declared variable directly, not the one from the SELECT statement
WHEN NOT MATCHED THEN
insert (CODE ,SQLFILTER )
values ('TESTCODE' , txtV );
END;
I am trying to create a PL/SQL script that extracts a root "object" together with all children and other relevant information from an Oracle production database. The purpose is to create a set of test data to recreate issues that are encountered in production. Due to data protection laws, the data needs to be anonymized when extracted: object names, certain types of IDs, and monetary amounts need to be replaced.
I was trying to create one or more temporary translation tables, which would contain both the original values and anonymized versions. Then I would join the real data with the translation tables and output the anonymized values wherever required.
DECLARE
rootId integer := 123456;
TYPE anonTableRow IS RECORD
(
id NUMBER,
fieldC NUMBER,
anonymizedFieldC NUMBER
);
TYPE anonTable IS TABLE OF anonTableRow;
anonObject anonTable := anonTable();
i integer := 0;
BEGIN
FOR cursor_row IN
(
select
id,
fieldC,
1234 anonymizedFieldC -- Here I would create anonymized values based on rowNum or something similar
from
prod_table
where id = rootId
)
LOOP
i := i + 1;
anonObject.extend;
anonObject(i) := cursor_row;
END LOOP;
FOR cursor_row IN
(
select
prod_table.id,
prod_table.fieldB,
temp_table.anonymizedFieldC fieldC,
prod_table.fieldD
from
prod_table
inner join table(anonObject) temp_table on prod_table.id = temp_table.id
where prod_table.id = 123456789
)
LOOP
dbms_output.put_line('INSERT INTO prod_table VALUES (' || cursor_row.id || ', ' || cursor_row.fieldB || ', ' || cursor_row.fieldC || ', , ' || cursor_row.fieldD);
END LOOP;
END;
/
However, I ran into several problems with this approach: it seems to be nearly impossible to join Oracle PL/SQL tables with real database tables. My access to the production database is severely restricted, so I cannot create global temporary tables, declare types outside PL/SQL, or anything of that sort.
My attempt to declare my own PL/SQL types failed with the problems mentioned in this question - the solution does not work for me because of the limited permissions.
Is there a pure PL/SQL way that does not require fancy permissions to achieve something like the above?
Please Note: The above code example is simplified a lot and would not really require a separate translation table - in reality I need access to the original and translated values in several different queries, so I would prefer not having to "recalculate" translations everywhere.
If your data is properly normalized, then I guess this should only be necessary for internal IDs (not sure why you need to translate them though).
The following code should work for you, keeping the mappings in Associative Arrays:
DECLARE
TYPE t_number_mapping IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
mapping_field_c t_number_mapping;
BEGIN
-- Prepare mapping
FOR cur IN (
SELECT 101 AS field_c FROM dual UNION ALL SELECT 102 FROM dual -- test-data
) LOOP
mapping_field_c(cur.field_c) := mapping_field_c.COUNT; -- first entry mapped to 1
END LOOP;
-- Use mapping
FOR cur IN (
SELECT 101 AS field_c FROM dual UNION ALL SELECT 102 FROM dual -- test-data
) LOOP
-- You can use the mapping when generating the `INSERT` statement
dbms_output.put_line( cur.field_c || ' mapped to ' || mapping_field_c(cur.field_c) );
END LOOP;
END;
Output:
101 mapped to 1
102 mapped to 2
If this isn't a permanent piece of production code, how about "borrowing" an existing collection type, e.g. one defined in SYS that you can access?
Using this script from your schema, you can generate a SQL*Plus script to describe all SYS-owned types:
select 'desc ' || type_name from all_types
where typecode = 'COLLECTION'
and owner = 'SYS';
Running the resulting script will show you the structure of all the ones you can access. This one, for example, looks potentially suitable:
SQL> desc KU$_PARAMVALUES1010
KU$_PARAMVALUES1010 TABLE OF SYS.KU$_PARAMVALUE1010
Name Null? Type
----------------------------------------- -------- ----------------------------
PARAM_NAME VARCHAR2(30)
PARAM_OP VARCHAR2(30)
PARAM_TYPE VARCHAR2(30)
PARAM_LENGTH NUMBER
PARAM_VALUE_N NUMBER
PARAM_VALUE_T VARCHAR2(4000)
Of course, you can't guarantee that type will still exist or be the same or be accessible to you after a database upgrade, hence my caveat at the start.
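As a hypothetical sketch of borrowing that type as an id-to-anonymized-value mapping: the attribute order follows the DESC output above, and everything here assumes the type still exists in your release and you are allowed to use it.
declare
    l_map sys.ku$_paramvalues1010 := sys.ku$_paramvalues1010();
begin
    l_map.extend(2);
    -- positional constructor: param_name, param_op, param_type, param_length, param_value_n, param_value_t
    l_map(1) := sys.ku$_paramvalue1010('ORIG_ID', null, null, null, 123456, 'ANON-1');
    l_map(2) := sys.ku$_paramvalue1010('ORIG_ID', null, null, null, 123457, 'ANON-2');

    -- because it is a SQL-level type, TABLE() lets you join it to real tables
    for r in ( select t.param_value_n as original_id
                    , t.param_value_t as anonymized_id
               from   table(l_map) t )
    loop
        dbms_output.put_line(r.original_id || ' -> ' || r.anonymized_id);
    end loop;
end;
/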
Here is a more generic way to achieve this goal.
In my example I'm using XQuery FLWOR expressions and DBMS_XMLSTORE. Some knowledge of XQuery is required.
create table mask_user_objects as select * from user_objects where rownum <0;
declare
v_s_table varchar2(30) := 'USER_OBJECTS'; --uppercase!!!
v_d_table varchar2(30) := 'MASK_USER_OBJECTS'; --uppercase!!!
v_mask_columns xmltype := xmltype('<COLS><OBJECT_NAME>XXXX</OBJECT_NAME>
<DATA_OBJECT_ID>-1</DATA_OBJECT_ID>
<OBJECT_TYPE/>
</COLS>'); --uppercase!!!
insCtx DBMS_XMLSTORE.ctxType;
r NUMBER;
v_source_table xmltype;
v_cursor sys_refcursor;
begin
open v_cursor for 'select * from '||v_s_table||' where rownum <100 ';
v_source_table := xmltype(v_cursor);
close v_cursor;
-- v_source_table now holds the source rows as an XMLType document.
insCtx := DBMS_XMLSTORE.newContext(v_d_table); -- create an insert context for the destination table
for rec in (
select tt.column_value from xmltable('
let $col := $anomyze/COLS
for $i in $doc/ROWSET/ROW
let $row := $i
return <ROWSET>
<ROW>
{
for $x in $row/*
return if(
exists($col/*[name() = $x/name()] )
) then element{$x/name()}{ $col/*[name() = $x/name()]/text() }
else element{$x/name()}{$x/text()}
}
</ROW>
</ROWSET>
'
passing v_source_table as "doc"
, v_mask_columns as "anomyze"
) tt) loop
null;
r := DBMS_XMLSTORE.insertXML(insCtx, rec.column_value);
end loop;
DBMS_XMLSTORE.closeContext(insCtx);
end;
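After running the block, a quick check on the destination table should show the masked columns (OBJECT_NAME forced to XXXX, DATA_OBJECT_ID to -1, and OBJECT_TYPE presumably NULL, per the mask document above):
select object_name, object_type, data_object_id
from   mask_user_objects
where  rownum <= 5;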
I've found a good question at https://dba.stackexchange.com/questions/3587/oracle-automate-export-unload-of-data. Is it valid to use such a construction:
FOR r IN (SELECT * FROM table) LOOP
UTL_FILE.PUT_LINE(lfFilelog, r.row);
END LOOP;
I'm trying to use something like this:
CREATE OR REPLACE PROCEDURE p_name(DESTFOLDER in varchar2, FILENAME in varchar2)
IS
V_FILEHANDLE UTL_FILE.FILE_TYPE;
CURSOR dataset IS
SELECT
field1,
field2,
fieldN
FROM
table1,
table2,
(SELECT field3 from table3);
-- WHERE CLAUSE ... and so on..
BEGIN
V_FILEHANDLE := UTL_FILE.FOPEN(DESTFOLDER, FILENAME, 'w');
FOR R IN dataset LOOP
UTL_FILE.PUT_LINE(V_FILEHANDLE, R.ROW);
END LOOP;
END;
/
and I'm getting error PLS-00302, which states that the ROW component must be declared. So as far as I understand, this field should already exist in the query. Am I right?
Can I simply write a row from the cursor?
The answer mentioned is not complete; I think it was given as an example (pseudo-code) that lacks implementation details.
As it is:
your SELECT clause is invalid; you aren't selecting anything. What do you want to select?
the construct xx.row, where xx is a cursor, doesn't exist
furthermore, the UTL_FILE.PUT_LINE procedure accepts a VARCHAR2 as its second argument, not any kind of rowtype
you can't name a table table (although you could name it "table").
Given a table mytable(col1, col2, ... , colN) you could write:
CREATE OR REPLACE PROCEDURE p_name
( destfolder IN VARCHAR2
, filename   IN VARCHAR2 )
IS
    v_filehandle UTL_FILE.FILE_TYPE;
    CURSOR dataset IS SELECT col1, col2, /*...*/ coln FROM mytable;
BEGIN
    -- destfolder must be the name of a directory object you are allowed to write to
    v_filehandle := UTL_FILE.FOPEN(destfolder, filename, 'w');
    FOR r IN dataset LOOP
        UTL_FILE.PUT_LINE(v_filehandle, r.col1 || ';' || r.col2 /*...*/ || ';' || r.coln);
    END LOOP;
    UTL_FILE.FCLOSE(v_filehandle);
END;
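To call it you need a directory object; DATA_DIR, the path and your_user below are just placeholders, not values from the question:
-- as a privileged user (directory name and path are assumptions)
create or replace directory data_dir as '/tmp';
grant read, write on directory data_dir to your_user;

-- then, as your_user
begin
    p_name('DATA_DIR', 'mytable.csv');
end;
/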
I have a select query on a relational table in a plsql procedure.
I want to convert the results of this query into a user defined type object to return via odp.net.
How would I go about doing this?
(This is from one of my other posts today.)
This is a walkthrough on getting started: http://www.oracle.com/technology/obe/hol08/dotnet/udt/udt_otn.htm
while this one is a bit more detailed: http://download.oracle.com/docs/html/E10927_01/featUDTs.htm
But the real meat & potatoes are already installed on your computer after you install ODP, in the Samples directory: %ORA_HOME%\product\11.1.0\client_1\odp.net\samples\2.x\UDT
Now for the PL/SQL side of things.
First, create the singleton UDT to handle one row at a time:
CREATE TYPE TESTTYPE IS OBJECT(COLA VARCHAR2(50) , COLB NUMBER(10) );
create or replace procedure GetTestType(lTestType OUT NOCOPY TESTTYPE)
IS
BEGIN
SELECT TESTTYPE('ValA',123)
INTO LTESTTYPE
FROM DUAL ;
END GetTestType ;
Follow the directions in the above links to get the .NET side in sync.
Now for a collection:
CREATE TYPE TESTTYPETABLE IS TABLE OF TESTTYPE ;
CREATE OR REPLACE PROCEDURE GETTESTTYPETABLE(lTestTypeTable OUT NOCOPY TestTypeTable)
IS
BEGIN
SELECT TESTTYPE(COLA,COLB)
bulk collect INTO lTestTypeTable
FROM (
SELECT 'ValA' COLA ,123 COLB
FROM DUAL
UNION
SELECT 'ValB' COLA ,234 COLB
FROM DUAL
UNION
SELECT 'Valc' COLA ,456 COLB
FROM DUAL
) ;
END GETTESTTYPETABLe;
Then on the .NET side of things this will effectively be a value of TESTTYPE().
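Before wiring up the .NET side, you can sanity-check the collection procedure straight from PL/SQL (a quick test using the types defined above):
DECLARE
    lTestTypeTable TESTTYPETABLE;
BEGIN
    GETTESTTYPETABLE(lTestTypeTable);
    FOR i IN 1 .. lTestTypeTable.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE(lTestTypeTable(i).COLA || ' / ' || lTestTypeTable(i).COLB);
    END LOOP;
END;
/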
Now, to save you some time, you can use the RETURNING clause on INSERT/UPDATE/DELETE statements, as such:
create table testTable (colA varchar2(50) , colB number(10) );
CREATE SEQUENCE TESTSEQ START WITH 1 NOCACHE;
DECLARE
lTestTypeTable TestTypeTable ;
BEGIN
UPDATE TESTTABLE
SET
COLA = 'a' ,
COLB = 1
WHERE COLA IS NULL
RETURNING TESTTYPE(COLA,COLB) -- note: if you have only one row you may drop the BULK COLLECT and return into the single-row type
BULK COLLECT INTO
lTestTypeTable
;
END ;
/
DECLARE
lTestType TestType;
BEGIN
INSERT INTO TESTTABLE(COLA, COLB)
VALUES ('BBB' , testSeq.NEXTVAL )
RETURNING TESTTYPE(COLA,COLB)
INTO
lTestType
;
DBMS_OUTPUT.PUT_LINE('MY NEW SEQUENCE # IS SET TO ' || lTestType.COLB) ;
END ;
/
Unfortunately it seems that you cannot do a
insert into TT ... SELECT * from .. RETURNING Type(x,y) BULK COLLECT INTO lVariable;
so instead of copying it wholesale from this site, here is a link that describes a work-around to get the BULK COLLECT to work in an INSERT statement:
http://www.oracle-developer.net/display.php?id=413
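Not the linked work-around itself, but one simple alternative sketch: populate the collection with a query first, then insert from it with TABLE() (reusing the objects defined above):
DECLARE
    lTestTypeTable TESTTYPETABLE;
BEGIN
    SELECT TESTTYPE(COLA, COLB)
    BULK COLLECT INTO lTestTypeTable
    FROM ( SELECT 'ValD' COLA, 567 COLB FROM DUAL
           UNION ALL
           SELECT 'ValE' COLA, 678 COLB FROM DUAL );

    -- TABLE() works here because TESTTYPETABLE is a SQL-level type
    INSERT INTO testTable (colA, colB)
    SELECT t.COLA, t.COLB
    FROM   TABLE(lTestTypeTable) t;
END;
/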