ORA-31011 on XMLTYPE column - oracle

I'm using Oracle version 11.2.0.4.
I've reproduced the problem I'm facing in a simplified form below.
I have two tables with an XMLTYPE column.
Table 1 (base_table in my example below) has storage model for XMLTYPE as BINARY XML.
Table 2 (my_tab) has storage model for XMLTYPE as CLOB.
With the XML in base_table I extract the value of an attribute based on a certain condition. That attribute is, in turn, the name of a node in the XML contained in my_tab, and I want to extract that node's value from my_tab.
Please note that I do not have the liberty to change this logic at the moment.
The code worked fine while the XMLTYPE storage model was CLOB in both tables. Recently base_table was recreated (dropped and created), so its storage model changed to BINARY XML, which I understand is the default in version 11.2.0.4.
Here are the create table statements and sample data:
create table base_table(xml xmltype);
create table my_tab(xml xmltype)
xmltype column "XML" store as clob;
insert into base_table(xml)
values (xmltype('<ROOT>
<ELEMENT NAME="NODEA">
<NODE1>A-Node1</NODE1>
<NODE2>A-Node2</NODE2>
</ELEMENT>
<ELEMENT NAME="NODEB">
<NODE1>B-Node1</NODE1>
<NODE2>B-Node2</NODE2>
</ELEMENT>
<ELEMENT NAME="NODEC">
<NODE1>C-Node1</NODE1>
<NODE2>C-Node2</NODE2>
</ELEMENT>
</ROOT>')
);
insert into my_tab(xml)
values (xmltype('<TEST_XML>
<SOME_NODE>
<XYZ>
<NODEB>My area of concern</NODEB>
<OTHER_NODE> Something irrelevant </OTHER_NODE>
</XYZ>
</SOME_NODE>
<SOME_OTHER_NODE>
<ABC> Some value for this node </ABC>
</SOME_OTHER_NODE>
</TEST_XML>')
);
The query below fails:
select extract(t.xml, sd.tag_name).getstringval()
  from (select '//' || extract(value(d), '//@NAME').getstringval() || '/text()' as tag_name
          from base_table b,
               table(xmlsequence(extract(b.xml, '//ROOT/ELEMENT'))) d
         where extract(value(d), '//NODE2/text()').getstringval() = 'B-Node2') sd,
       my_tab t;
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00601: Invalid token in: '///text()'
However, the following query works fine and extracts the value of the node I'm interested in. You can see that tag_name is fetched as required, but when it is used inside "extract", its value is somehow lost.
select sd.tag_name, extract(t.xml, '//NODEB/text()').getstringval()
  from (select '//' || extract(value(d), '//@NAME').getstringval() || '/text()' as tag_name
          from base_table b,
               table(xmlsequence(extract(b.xml, '//ROOT/ELEMENT'))) d
         where extract(value(d), '//NODE2/text()').getstringval() = 'B-Node2') sd,
       my_tab t;
If I change the XMLTYPE storage model of base_table back to CLOB, the failing query works fine again.
I would like to understand what goes wrong when the storage model is BINARY XML.
I modified the query as below (converting to CLOB and back to XMLTYPE), and it works fine:
select extract(t.xml, sd.tag_name).getstringval()
  from (select '//' || extract(value(d), '//@NAME').getstringval() || '/text()' as tag_name
          from base_table b,
               table(xmlsequence(extract(xmltype(b.xml.getclobval()), '//ROOT/ELEMENT'))) d
         where extract(value(d), '//NODE2/text()').getstringval() = 'B-Node2') sd,
       my_tab t;
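For what it's worth, the same lookup can also be written with XMLTABLE and XMLQuery instead of the deprecated EXTRACT/XMLSEQUENCE functions, binding the node name as an XQuery variable rather than concatenating an XPath string. This is only a sketch against the tables defined above (I have not verified it against BINARY XML storage on 11.2.0.4):

```sql
-- Sketch: fetch the node name via XMLTABLE, then bind it into XMLQuery
-- as $tag instead of building a dynamic XPath string.
select xmlquery('//*[name() = $tag]/text()'
                passing t.xml, sd.tag_name as "tag"
                returning content).getstringval() as node_value
  from (select x.name as tag_name
          from base_table b,
               xmltable('//ROOT/ELEMENT' passing b.xml
                        columns name  varchar2(30) path '@NAME',
                                node2 varchar2(30) path 'NODE2') x
         where x.node2 = 'B-Node2') sd,
       my_tab t;
```

Note that tag_name here is the bare node name ('NODEB'), not the '//NODEB/text()' path used in the original query.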
Thanks,
Kailash

Related

Oracle PL/SQL Use Merge command on data from XML Table

I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
       to_date(substr(readings.reading_dt, 1, 10), 'YYYY-MM-DD'),
       SYSDATE,
       p_location_id
  FROM XMLTable(
         XMLNamespaces('http://www.example.com' as "ns1"),
         '/ns1:test1/ns1:series1/ns1:values1/ns1:value'
         PASSING xml_data
         COLUMNS reading_val VARCHAR2(50) PATH '.',
                 reading_dt  VARCHAR2(50) PATH '@dateTime') readings
 GROUP BY substr(readings.reading_dt, 1, 10), p_location_id;
I would like to be able to insert or update the data using a MERGE statement, in case the procedure needs to be re-run on the same day to pick up added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there a way to get the data from the XMLTable while still using the (much cleaner) MERGE syntax? I could just delete the data beforehand and re-import, or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe it should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
LOCATION_ID = readings.location_id
AND READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
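The WHEN branches can then be filled in from the columns in the question's existing merge. A sketch (the original snippet mixes site_id and location_id in its column list; I've used location_id to match the ON clause, and TEST_readings_seq is taken from that snippet):

```sql
-- Possible completion of the two branches, reusing the USING alias "readings":
WHEN NOT MATCHED THEN
    INSERT (test_reading_id, location_id, test, reading_date, create_date)
    VALUES (test_readings_seq.nextval, readings.location_id,
            readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN
    UPDATE SET test = readings.test;
```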

iterate around values in a bulk collected table - extractvalue

I have a piece of PL/SQL which does:
SELECT *
BULK COLLECT INTO table_1
FROM XMLTABLE (
'//Match'
PASSING l_xml_string
COLUMNS col_1 VARCHAR2 (8) PATH '@col_1' ,
        col_2 VARCHAR2 (40) PATH '@col_2');
I then store these as an XML variable using XMLAGG.
I want to join the col_1 value against a view, but the problem is that when I use the EXTRACTVALUE function (on the aggregated XML) I get a terrible explain plan (full table scans instead of an index) compared to when I pass it a single value, even when there is only one record within the XML.
When I do the extract before entering this table (storing it in a variable and then joining against the variable) it takes the correct plan, but doing so limits me to one result (where rownum < 2), and I need to remove that restriction.
When I do:
select col_1
into l_col_1 from
TABLE (table_1);
it displays:
ORA-01422: exact fetch returns more than requested number of rows
Is there another way to do this or:
SELECT EXTRACTVALUE (data.COLUMN_VALUE ....
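One option worth noting: since table_1 was bulk-collected from XMLTABLE, the collection can be joined directly with TABLE() instead of re-extracting from the aggregated XML, and the ORA-01422 simply means SELECT ... INTO found more than one row, so iterate instead. A sketch, assuming table_1's collection type is declared at schema level (required to use TABLE() in SQL on 11g; "my_view" is a placeholder name):

```sql
-- Join the bulk-collected rows directly against the view instead of
-- round-tripping through XMLAGG/EXTRACTVALUE, and loop rather than
-- SELECT INTO so multiple matches are allowed.
for r in (select v.*
            from table(table_1) t
            join my_view v
              on v.col_1 = t.col_1)
loop
    null;  -- process each matching row here
end loop;
```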

Error when selecting timestamp from XMLType column in Oracle 11g

I have two Oracle 11g databases with a table containing an XMLType column and some test data differing only in the separator (. vs ,) for the milliseconds of the timestamp:
create table TEST_TIMESTAMP (
ID number(19,0) constraint "NN_TEST_TIMESTAMP_ID" not null,
DOC xmltype constraint "NN_TEST_TIMESTAMP_DOC" not null
);
insert into TEST_TIMESTAMP values ( 1, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33.11</ts></test>'));
insert into TEST_TIMESTAMP values ( 2, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33,11</ts></test>'));
When I try to extract the timestamp with the following statements, it fails either with the first document on one database or with the second document on the other database.
select x.*
from TEST_TIMESTAMP t,
xmltable(
'/test'
passing t.DOC
columns
ORIGINAL varchar2(50) path 'ts',
RESULT timestamp with time zone path 'ts'
) x
where t.ID = 1;
select x.*
from TEST_TIMESTAMP t,
xmltable(
'/test'
passing t.DOC
columns
ORIGINAL varchar2(50) path 'ts',
RESULT timestamp with time zone path 'ts'
) x
where t.ID = 2;
The error I get:
ORA-01858: a non-numeric character was found where a numeric was expected
01858. 00000 - "a non-numeric character was found where a numeric was expected"
*Cause: The input data to be converted using a date format model was
incorrect. The input data did not contain a number where a number was
required by the format model.
*Action: Fix the input data or the date format model to make sure the
elements match in number and type. Then retry the operation.
The only differences between those databases I've found are:
DB1: version=11.2.0.1.0, NLS_CHARACTERSET=AL32UTF8 -> fails on document 2
DB2: version=11.2.0.2.0, NLS_CHARACTERSET=WE8MSWIN1252 -> fails on document 1
DB1 has the behaviour that I would expect. Does anybody know why those databases behave differently and how to fix the issue in DB2?
Thanks in advance,
Oliver
My guess is that the nls_timestamp_format is different between the two databases.
However, rather than forcing the implicit conversion down at the XMLTABLE level, I would do an explicit conversion in the select list:
with test_timestamp as (select 1 id, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33.11</ts></test>') doc from dual union all
select 2 id, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33,11</ts></test>') doc from dual)
select x.original,
to_timestamp(x.original, 'yyyy-mm-dd"T"hh24:mi:ss,ff2') result
from test_timestamp t,
xmltable('/test' passing t.doc
columns original varchar2(50) path 'ts') x;
ORIGINAL RESULT
-------------------------------------------------- --------------------------------------------------
2015-04-08T04:55:33.11 08/04/2015 04:55:33.110000000
2015-04-08T04:55:33,11 08/04/2015 04:55:33.110000000
N.B. I found that using "ss.ff2" errored, but "ss,ff2" handled both cases just fine. I'm not sure if that's reliant on some other nls setting or not, though.
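To take NLS settings out of the picture entirely, another option (a sketch, not tested on either database) is to normalize the separator before converting, so a single explicit format mask covers both documents:

```sql
-- Replace ',' with '.' so one mask works regardless of NLS settings,
-- using the same sample data as above:
select x.original,
       to_timestamp(replace(x.original, ',', '.'),
                    'yyyy-mm-dd"T"hh24:mi:ss.ff2') result
  from test_timestamp t,
       xmltable('/test' passing t.doc
                columns original varchar2(50) path 'ts') x;
```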

Oracle Datatype Modifier

I need to be able to reconstruct a table column by using the column data in DBA_TAB_COLUMNS, and so to develop this I need to understand what each column refers to. I'm looking to understand what DATA_TYPE_MOD is -- the documentation (http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_2094.htm#I1020277) says it is a data type modifier, but I can't seem to find any columns with this field populated or any way to populate this field with a dummy column. Anyone familiar with this field?
The data_type_mod column of the [all|dba|user]_tab_columns data dictionary views gets populated when a column of a table is declared as a reference to an object type using the REF datatype (it contains the object identifier (OID) of the object it points to).
create type obj as object(
  item number
);
/
create table tb_1(
  col ref obj
);
select t.table_name
     , t.column_name
     , t.data_type_mod
  from user_tab_columns t
 where t.table_name = 'TB_1';
Result:
table_name  column_name  data_type_mod
----------  -----------  -------------
TB_1        COL         REF
Oracle has a PL/SQL package that can be used to generate the DDL for creating a table. You would probably be better off using this.
See GET_DDL on http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_metada.htm#i1019414
And see also:
How to get Oracle create table statement in SQL*Plus
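For instance, regenerating the DDL for the TB_1 table from the example above is a one-liner (run as the owning schema; the result is a CLOB, so widen the display in SQL*Plus):

```sql
set long 10000
select dbms_metadata.get_ddl('TABLE', 'TB_1') from dual;
```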

INSERT INTO TARGET_TABLE SELECT * FROM SOURCE_TABLE;

I would like to do an INSERT / SELECT, that is, insert into the TARGET_TABLE the records of the SOURCE_TABLE, with this assumption:
The SOURCE and the TARGET table have only a SUBSET of common columns, this means in example:
==> The SOURCE TABLE has ALPHA, BETA and GAMMA columns;
==> The TARGET TABLE has BETA, GAMMA and DELTA columns.
What is the most efficient way to produce INSERT / SELECT statements, respecting the assumption that not all the target columns are present in the source table?
The idea is that the PL/SQL script CHECKS the columns in the source table and in the target table, makes the INTERSECTION, and then produces a dynamic SQL with the correct list of columns.
Please assume that the columns present in the target table, but not present in the source table, have to be left NULL.
I wish to extract the data from SOURCE into a set of INSERT statements for later insertion into the TARGET table.
You can assume that the TARGET table has more columns than the SOURCE table, and that all the columns in the SOURCE table are present in the TARGET table in the same order.
Thank you in advance for your useful suggestions!
In Oracle, you can get the common columns with this SQL query:
select column_name
from user_tab_columns
where table_name = 'TABLE_1'
intersect
select column_name
from user_tab_columns
where table_name = 'TABLE_2'
Then you iterate a cursor over that query to build a comma-separated list of the column names returned. Put that comma-separated string into a varchar2 variable named common_fields. Then you can:
sql_sentence := 'insert into TABLE_1 (' ||
common_fields ||
') select ' ||
common_fields ||
' from TABLE_2';
execute immediate sql_sentence;
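Putting it together, here is a sketch of the whole thing as one PL/SQL block, using LISTAGG (available from 11gR2) instead of a cursor loop to build common_fields. Table names are kept as in the answer, with TABLE_1 as the target and TABLE_2 as the source; swap them to match your direction:

```sql
-- Build the common column list and run the dynamic insert in one block.
declare
    common_fields varchar2(4000);
    sql_sentence  varchar2(4000);
begin
    select listagg(column_name, ', ') within group (order by column_name)
      into common_fields
      from (select column_name from user_tab_columns where table_name = 'TABLE_1'
            intersect
            select column_name from user_tab_columns where table_name = 'TABLE_2');

    sql_sentence := 'insert into TABLE_1 (' || common_fields ||
                    ') select ' || common_fields || ' from TABLE_2';
    execute immediate sql_sentence;
end;
/
```

Columns present only in TABLE_1 are simply not named in the insert list, so they are left NULL, as required.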
