Converting Big Clob into Table using XMLTable - oracle

I'm trying to convert an input CLOB variable from a stored procedure into an XMLType and then, using XMLTable, join it with other tables in my DB. Here is my code:
with clientes_data as (
  select clientes.identificacion, clientes.tipoDoc, clientes.cuentas
  from xmltable('/Clientes/Cliente'
         passing xmltype(to_clob('<Clientes>
<Cliente><NumeroIdentificacion>94406495</NumeroIdentificacion><TipoIdentificacion>CC</TipoIdentificacion></Cliente>
<Cliente><NumeroIdentificacion>1136881480</NumeroIdentificacion><TipoIdentificacion>CC</TipoIdentificacion></Cliente>
</Clientes>'))
         columns
           identificacion varchar2(10) path 'NumeroIdentificacion',
           tipoDoc        varchar2(2)  path 'TipoIdentificacion',
           cuentas        xmltype      path 'Cuentas') clientes)
, cuentas_data as (
  select cl.identificacion, cl.tipoDoc, cuentasT.*
  from clientes_data cl
  left join xmltable('/Cuentas/Cuenta'
              passing cl.cuentas
              columns
                numCta  varchar2(10) path 'Numero',
                tipoCta varchar2(3)  path 'Tipo') cuentasT on 1=1)
select * from cuentas_data;
--select count(*) from cuentas_Data;
But when the input (the passing section) is longer than 4000 characters I'm getting this error: "string literal too long... The string literal is longer than 4000 characters". So I'm a little bit confused: XMLTable is supposed to take an XMLType parameter (which I thought was CLOB-sized), but I'm assuming it is actually a varchar2(4000)?
Thanks for any "light" on this.
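The 4000-byte cap here is ORA-01704 (string literal too long) and applies to the SQL string literal itself, not to XMLType or to XMLTable's passing clause: xmltype(clob) accepts a CLOB of any length. Binding the CLOB instead of spelling it out as a literal avoids the error. A minimal sketch, assuming the CLOB arrives as a stored-procedure parameter (the procedure and parameter names here are made up):
create or replace procedure listar_clientes(p_xml in clob) is
begin
  -- p_xml is bound, never inlined as a literal, so its size is
  -- limited only by the CLOB type itself
  for r in (select c.identificacion, c.tipoDoc
            from xmltable('/Clientes/Cliente'
                   passing xmltype(p_xml)
                   columns
                     identificacion varchar2(10) path 'NumeroIdentificacion',
                     tipoDoc        varchar2(2)  path 'TipoIdentificacion') c)
  loop
    dbms_output.put_line(r.identificacion || ' / ' || r.tipoDoc);
  end loop;
end;
/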

Related

Function results column names to be used in select statement

I have a function which returns column names, and I am trying to use the column name as part of my SELECT statement, but my results come back as the column name instead of the values.
FUNCTION returning column name:
get_col_name(input1, input2)
Can I use this query to get the results of the column from the table?
SELECT GET_COL_NAME(input1,input2) FROM TABLE;
There are a few ways to run dynamic SQL directly inside a SQL statement. These techniques should be avoided since they are usually complicated, slow, and buggy. Before you do this, try to find another way to solve the problem.
The below solution uses DBMS_XMLGEN.GETXML to produce XML from a dynamically created SQL statement, and then uses XML table processing to extract the value.
This is the simplest way to run dynamic SQL in SQL, and it only requires built-in packages. The main limitation is that the number and type of columns is still fixed. If you need a function that returns an unknown number of columns you'll need something more powerful, like the open source program Method4. But that level of dynamic code gets even more difficult and should only be used after careful consideration.
Sample schema
--drop table table1;
create table table1(a number, b number);
insert into table1 values(1, 2);
commit;
Function that returns column name
create or replace function get_col_name(input1 number, input2 number) return varchar2 is
begin
if input1 = 0 then
return 'a';
else
return 'b';
end if;
end;
/
Sample query and result
select dynamic_column
from
(
select xmltype(dbms_xmlgen.getxml('
select '||get_col_name(0,0)||' dynamic_column from table1'
)) xml_results
from dual
)
cross join
xmltable
(
'/ROWSET/ROW'
passing xml_results
columns dynamic_column varchar2(4000) path 'DYNAMIC_COLUMN'
);
DYNAMIC_COLUMN
--------------
1
If you change the inputs to the function, the new value is 2, from column B.

Oracle PL/SQL Use Merge command on data from XML Table

I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
       to_date(substr(readings.reading_dt, 1, 10), 'YYYY-MM-DD'), SYSDATE,
       p_location_id
FROM XMLTable(
       XMLNamespaces('http://www.example.com' as "ns1"),
       '/ns1:test1/ns1:series1/ns1:values1/ns1:value'
       PASSING xml_data
       COLUMNS reading_val VARCHAR2(50) PATH '.',
               reading_dt  VARCHAR2(50) PATH '@dateTime') readings
GROUP BY substr(readings.reading_dt, 1, 10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there a way to get the data from the XMLTable while still using the (much cleaner) MERGE syntax? I would just delete the data beforehand and re-import, or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe it should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
LOCATION_ID = readings.location_id
AND READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
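For completeness, a sketch of the two WHEN branches, filled in by lining the subquery's columns up with the question's original MERGE (the sequence name and the site_id column come straight from that snippet, so treat them as assumptions about the real table):
WHEN NOT MATCHED THEN INSERT
  (test_reading_id, site_id, test, reading_date, create_date)
  -- one row per (reading_date, location_id) group from the subquery
  VALUES (test_readings_seq.nextval, readings.location_id,
          readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN UPDATE
  SET test = readings.test;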

iterate around values in a bulk collected table - extractvalue

I have a piece of PL/SQL which does:
SELECT *
BULK COLLECT INTO table_1
FROM XMLTABLE (
'//Match'
PASSING l_xml_string
COLUMNS col_1 VARCHAR2 (8) PATH '@col_1' ,
col_2 VARCHAR2 (40) PATH '@col_2');
I then store these as an XML variable using XMLAGG.
I want to join the col_1 value to a view, but the problem is that when I use the EXTRACTVALUE function (on the aggregated XML) I get a terrible explain plan (full table scans instead of index use), compared to when I pass it a single value, even when there is only one record within the XML.
When I do the extract before entering this table (storing it in a variable and then joining against the variable) it takes the correct path, but doing so limits me to one result (where rownum < 2), and I need to remove this restriction.
When I do:
select col_1
into l_col_1 from
TABLE (table_1);
it displays:
ORA-01422: exact fetch returns more than requested number of rows
Is there another way to do this, or:
SELECT EXTRACTVALUE (data.COLUMN_VALUE ....
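ORA-01422 simply means that SELECT ... INTO found more than one row. One way around both it and the EXTRACTVALUE plan trouble is to bulk-collect into a schema-level collection and join that collection in plain SQL, skipping the XMLAGG round trip entirely. A sketch under those assumptions; match_rec, match_tab, some_view and key_col are hypothetical names:
create or replace type match_rec as object (col_1 varchar2(8), col_2 varchar2(40));
/
create or replace type match_tab as table of match_rec;
/

declare
  l_matches match_tab;
  l_xml     xmltype := xmltype('<Root><Match col_1="A1" col_2="first"/>'
                            || '<Match col_1="B2" col_2="second"/></Root>');
begin
  -- Collect every Match row, however many there are
  select match_rec(x.col_1, x.col_2)
  bulk collect into l_matches
  from xmltable('//Match'
         passing l_xml
         columns col_1 varchar2(8)  path '@col_1',
                 col_2 varchar2(40) path '@col_2') x;

  -- Join the collection directly (some_view/key_col stand in for the
  -- real view); the optimizer sees ordinary rows, so an index on the
  -- view's key column stays usable
  for r in (select v.*
            from table(l_matches) m
            join some_view v on v.key_col = m.col_1)
  loop
    null; -- process r here
  end loop;
end;
/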

Error when selecting timestamp from XMLType column in Oracle 11g

I have 2 Oracle 11g databases with a table containing an XMLType column, and some test data differing only in the separator ('.' vs ',') for the milliseconds of the timestamp:
create table TEST_TIMESTAMP (
ID number(19,0) constraint "NN_TEST_TIMESTAMP_ID" not null,
DOC xmltype constraint "NN_TEST_TIMESTAMP_DOC" not null
);
insert into TEST_TIMESTAMP values ( 1, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33.11</ts></test>'));
insert into TEST_TIMESTAMP values ( 2, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33,11</ts></test>'));
When I try to extract the timestamp with the following statements, it fails either with the first document on one database or with the second document on the other database.
select x.*
from TEST_TIMESTAMP t,
xmltable(
'/test'
passing t.DOC
columns
ORIGINAL varchar2(50) path 'ts',
RESULT timestamp with time zone path 'ts'
) x
where t.ID = 1;
select x.*
from TEST_TIMESTAMP t,
xmltable(
'/test'
passing t.DOC
columns
ORIGINAL varchar2(50) path 'ts',
RESULT timestamp with time zone path 'ts'
) x
where t.ID = 2;
The error I get:
ORA-01858: a non-numeric character was found where a numeric was expected
01858. 00000 - "a non-numeric character was found where a numeric was expected"
*Cause: The input data to be converted using a date format model was
incorrect. The input data did not contain a number where a number was
required by the format model.
*Action: Fix the input data or the date format model to make sure the
elements match in number and type. Then retry the operation.
The only differences between those databases I've found are:
DB1: version=11.2.0.1.0, NLS_CHARACTERSET=AL32UTF8 -> fails on document 2
DB2: version=11.2.0.2.0, NLS_CHARACTERSET=WE8MSWIN1252 -> fails on document 1
DB1 has the behaviour that I would expect. Does anybody know why those databases behave differently and how to fix the issue in DB2?
Thanks in advance,
Oliver
My guess is that the nls_timestamp_format is different between the two databases.
However, rather than forcing the implicit conversion down at the XMLTABLE level, I would do an explicit conversion in the select list:
with test_timestamp as (select 1 id, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33.11</ts></test>') doc from dual union all
select 2 id, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33,11</ts></test>') doc from dual)
select x.original,
to_timestamp(x.original, 'yyyy-mm-dd"T"hh24:mi:ss,ff2') result
from test_timestamp t,
xmltable('/test' passing t.doc
columns original varchar2(50) path 'ts') x;
ORIGINAL RESULT
-------------------------------------------------- --------------------------------------------------
2015-04-08T04:55:33.11 08/04/2015 04:55:33.110000000
2015-04-08T04:55:33,11 08/04/2015 04:55:33.110000000
N.B. I found that using "ss.ff2" errored, but "ss,ff2" handled both cases just fine. I'm not sure if that's reliant on some other nls setting or not, though.
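To test the nls_timestamp_format guess, the relevant session settings can be inspected and pinned per session (standard dictionary view and ALTER SESSION syntax; whether the implicit XMLTABLE conversion honours them on a given 11g patch level is exactly what seems to differ here):
-- Compare these on DB1 and DB2
select parameter, value
from nls_session_parameters
where parameter in ('NLS_TIMESTAMP_FORMAT', 'NLS_TIMESTAMP_TZ_FORMAT');

-- Pin one format for the session so both databases start from the same model
alter session set nls_timestamp_tz_format = 'YYYY-MM-DD"T"HH24:MI:SSXFF TZR';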

select inside CLOB and get what it contains

I am not sure if this is a duplicate; I haven't found it in the search.
I have a table called mytable that has a column STORY; the type of this column is CLOB:
mytable
The elder tree
Soldiers
Going for a hunt
The blue moon
If I write:
select story from mytable
I will have the result:
Mytable
1-clob
2-clob
3-clob
4-clob
What I want is what is inside the CLOB. Can I achieve that?
dbms_lob.substr( clob, bytes, startbyte );
but in SQL you can retrieve only 4000 bytes into a VARCHAR2 (PL/SQL allows up to 32767).
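A minimal usage sketch against the table from the question, reading the first 4000 bytes of each story starting at byte 1:
-- dbms_lob.substr(lob, amount, offset) returns the CLOB's text
select dbms_lob.substr(story, 4000, 1) as story_text
from mytable;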
