Searching an XMLType column for existing values - Oracle

I'm having trouble finding the rows in a table whose XMLType column contains a Name of 'PrimeSub' with a Value of 'Y'. Thanks
<attributes>
<upper_lvl_ver_desc>
<Name>AABB</Name>
<Description>pkListValue</Description>
<Value/>
</upper_lvl_ver_desc>
<upper_lvl_ver_desc>
<Name>GL_PS_ALLOWED</Name>
<Description>pkListValue</Description>
<Value/>
</upper_lvl_ver_desc>
<upper_lvl_ver_desc>
<Name>PrimeSub</Name>
<Description>pkListValue</Description>
<Value>Y</Value>
</upper_lvl_ver_desc>
</attributes>

You didn't post your table name or structure, so here I'm using mytable as the table name and xmlcol as the xmltype column. The WITH clause at the top is just to provide the test data.
with mytable as (select xmltype('<attributes>
<upper_lvl_ver_desc>
<Name>AABB</Name>
<Description>pkListValue</Description>
<Value/>
</upper_lvl_ver_desc>
<upper_lvl_ver_desc>
<Name>GL_PS_ALLOWED</Name>
<Description>pkListValue</Description>
<Value/>
</upper_lvl_ver_desc>
<upper_lvl_ver_desc>
<Name>PrimeSub</Name>
<Description>pkListValue</Description>
<Value>Y</Value>
</upper_lvl_ver_desc>
</attributes>') as xmlcol from dual)
-- query starts here
select xmlcol
from mytable
inner join XMLTable('/attributes/upper_lvl_ver_desc' PASSING mytable.xmlcol
COLUMNS c_name varchar2(50) PATH 'Name',
c_value varchar2(1) PATH 'Value') atts
on c_name = 'PrimeSub' and c_value = 'Y'
;
This method might seem a bit counter-intuitive, but it's one of the ways Oracle recommends. You join the XML column as a pseudo-table using the XMLTable function, transforming it into relational data that can easily be manipulated in SQL. You can read more about it in Oracle's XMLTable documentation.
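As a sketch of an alternative (reusing the same mytable/xmlcol placeholder names, and untested here), the same filter can also be expressed as a WHERE-clause predicate with XMLExists, which avoids the join entirely:

```sql
-- Sketch: keep only rows whose XML contains a PrimeSub entry with Value Y
select xmlcol
from mytable
where xmlexists(
        '/attributes/upper_lvl_ver_desc[Name="PrimeSub"][Value="Y"]'
        passing xmlcol
      );
```

This form is handy when you only need to filter rows; the XMLTable join is the better choice when you also want the extracted values as columns.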

Related

Oracle check variable type

A very quick question
I have read-only access to an Oracle database. However, I would like to check the variable type list, e.g.
Var1 Varchar(30)
Var2 Numeric
etc...
What is the best code to do this?
Thanks
I'm guessing you want to view the data types for the columns of a table. If so:
select
column_id,
table_name,
column_name,
decode(data_type,
'NUMBER', 'NUMBER('||data_precision||decode(data_scale,0,null,','||data_scale)||')',
'VARCHAR2', 'VARCHAR2('||data_length||')',
'DATE', 'DATE',
data_type||'('||data_length||')'
) type,
decode(nullable,'N','NOT NULL') n
from user_tab_columns
where table_name = 'MY_TABLE_NAME'
order by 1
If the table resides in another schema, use instead:
...
from all_tab_columns
where table_name = 'MY_TABLE_NAME' and owner='CORRECT_USERNAME'

Oracle PL/SQL Use Merge command on data from XML Table

I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
to_date(substr(readings.reading_dt, 1, 10),'YYYY-MM-DD'), SYSDATE,
p_location_id
FROM XMLTable(
XMLNamespaces('http://www.example.com' as "ns1"),
'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime') readings
GROUP BY substr(readings.reading_dt,1,10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there a way to get the data from the XMLTable while still using the (much cleaner) MERGE syntax? I could just delete the data beforehand and re-import, or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe this should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
LOCATION_ID = readings.location_id
AND READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
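To make the answer concrete, the elided branches might look like the following (a sketch only; TEST_reading_id, site_id and TEST_readings_seq are taken from the asker's earlier snippet and are assumptions about the real schema):

```sql
WHEN NOT MATCHED THEN INSERT
  (test_reading_id, site_id, test, reading_date, create_date)
  VALUES (TEST_readings_seq.nextval, readings.location_id,
          readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN UPDATE
  SET test = readings.test
```

The key point is that every value the branches need is already projected by the USING subquery, so the branches reference `readings.<column>` instead of the `p_` parameters used in the asker's other procedures.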

How to get all not null columns in a table

I have a requirement to find all not-null columns in a table. For example, in my table below, let's say Column1, Column2 and Column3 have not-null constraints and Column4, Column5 and Column6 are nullable. Is there any query in Oracle that lists the columns that are of not-null type, i.e. I need to get the column names Column1, Column2 and Column3.
DESIRED OUTPUT
Column1
Column2
Column3
I know there should be a simple way to achieve this, but I am new to Oracle. Any help would be highly appreciated.
You can query the all_tab_columns table:
select column_name
from all_tab_columns
where table_name = 'TABLE1'
and nullable = 'N';
I know there should be a simple way to achieve this, but I am new to Oracle.
Well, the online documentation is exactly what you need to look into.
Depending on your privileges, you need to look into [DBA|USER|ALL]_TAB_COLUMNS.
ALL_TAB_COLUMNS
Column Datatype Description
NULLABLE VARCHAR2(1) Indicates whether a column allows NULLs.
The value is N if there is a NOT NULL constraint
on the column or if the column is part of a PRIMARY KEY.
The constraint should be in an ENABLE VALIDATE state.
So, per the documentation, you need to use the filter:
NULLABLE = 'N'

ORA-31011 on XMLTYPE column

I'm using Oracle version - 11.2.0.4
I've reproduced the problem I'm facing in a more simplified manner below.
I have 2 tables with an XMLTYPE column.
Table 1 (base_table in my example below) has storage model for XMLTYPE as BINARY XML.
Table 2 (my_tab) has storage model for XMLTYPE as CLOB.
With the XML in base_table I'm extracting the value of an attribute based on certain condition. This attribute in turn is a name of a node in xml contained in my_tab and I want to extract that node's value from my_tab.
Please note that I do not have the liberty to change this logic at the moment.
The code was working fine while the storage model for the XMLTYPE column was CLOB in both tables. Recently base_table was recreated (dropped and created), so its storage model got changed to BINARY XML, as I understand that is the default storage model in version 11.2.0.4.
Here is create table stmt and sample data -
create table base_table(xml xmltype);
create table my_tab(xml xmltype)
xmltype column "XML" store as clob;
insert into base_table(xml)
values (xmltype('<ROOT>
<ELEMENT NAME="NODEA">
<NODE1>A-Node1</NODE1>
<NODE2>A-Node2</NODE2>
</ELEMENT>
<ELEMENT NAME="NODEB">
<NODE1>B-Node1</NODE1>
<NODE2>B-Node2</NODE2>
</ELEMENT>
<ELEMENT NAME="NODEC">
<NODE1>C-Node1</NODE1>
<NODE2>C-Node2</NODE2>
</ELEMENT>
</ROOT>')
);
insert into my_tab(xml)
values (xmltype('<TEST_XML>
<SOME_NODE>
<XYZ>
<NODEB>My area of concern</NODEB>
<OTHER_NODE> Something irrelevant </OTHER_NODE>
</XYZ>
</SOME_NODE>
<SOME_OTHER_NODE>
<ABC> Some value for this node </ABC>
</SOME_OTHER_NODE>
</TEST_XML>')
);
Below query fails:
select extract(t.xml, sd.tag_name).getstringval()
from (select '//' || extract(value(d), '//#NAME').getstringval() || '/text()' as tag_name
from base_table b,
table(xmlsequence(extract(b.xml, '//ROOT/ELEMENT'))) d
where extract(value(d), '//NODE2/text()').getstringval() = 'B-Node2') sd,
my_tab t;
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00601: Invalid token in: '///text()'
However, this query works fine and is able to extract the value of the node I'm interested in. It can be seen that tag_name is fetched as required, but when it is used within "extract", its value is somehow lost.
select sd.tag_name, extract(t.xml, '//NODEB/text()').getstringval()
from (select '//' || extract(value(d), '//#NAME').getstringval() || '/text()' as tag_name
from base_table b,
table(xmlsequence(extract(b.xml, '//ROOT/ELEMENT'))) d
where extract(value(d), '//NODE2/text()').getstringval() = 'B-Node2') sd,
my_tab t;
If I change XMLTYPE storage model of base_table back to CLOB, the erroneous query works fine again.
I would like to understand what's going wrong with storage model as BINARY XML.
I modified the query as below, and it works fine, i.e. converting to CLOB and back to XMLTYPE:
select extract(t.xml, sd.tag_name).getstringval()
from (select '//' || extract(value(d), '//#NAME').getstringval() || '/text()' as tag_name
from base_table b,
table(xmlsequence(extract(XMLTYPE(b.xml.getclobval()), '//ROOT/ELEMENT'))) d
where extract(value(d), '//NODE2/text()').getstringval() = 'B-Node2') sd,
my_tab t;
Thanks,
Kailash

PL/SQL XML parsing into relational tables

I am using a PL/SQL procedure for inserting values from XML to relational tables. The XML file resides in an XMLTYPE column.
Columns of table (OFFLINE_XML) containing XML are
ID, XML_FILE, STATUS
There are two tables into which I want to insert the values, i.e. DEPARTMENT and SECTIONS.
Structure of DEPARTMENT is as under:-
ID, NAME
Structure of SECTIONS table is:-
ID, NAME, DEPARTMENT_ID
Now there is a third table (LIST_1) into which I want to insert the values which already exist in both of the above mentioned tables.
Structure of LIST_1 is :-
ID, DEPARTMENT_ID,DEPARTMENT_NAME,SECTIONS_ID, SECTIONS_NAME
XML format is as under:-
<ROWSET>
<DEPARTMENT>
<DEPARTMENT_ID>DEP22681352268280797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT</DEPARTMENT_NAME>
<SECTIONS_ID>6390135666643567</SECTIONS_ID>
<SECTIONS_NAME>mySection</SECTIONS_NAME>
</DEPARTMENT>
<DEPARTMENT>
<DEPARTMENT_ID>DEP255555555550797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT2</DEPARTMENT_NAME>
<SECTIONS_ID>63901667779243567</SECTIONS_ID>
<SECTIONS_NAME>mySection2</SECTIONS_NAME>
</DEPARTMENT>
</ROWSET>
DECLARE
BEGIN
insert all
into department (id, name)
values (unit_id, unit_name)
into sections (id, name, department_id)
values ( sect_id, sect_name, department_id)
select department.id as department_id
, department.name as department_name
, sect.id as sect_id
, sect.name as sect_name
from OFFLINE_XML
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'UNIT_ID'
, "NAME" varchar2(20) path 'UNIT_NAME'
) department
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'SECTIONS_ID'
, "NAME" varchar2(20) path 'SECTIONS_NAME'
) sect
where status = 3;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
dbms_output.put_line('Duplicate='|| department.id );
--insert into LIST_1 values(ID,DEPARTMENT_ID, SECTIONS_ID, DEPARTMENT_NAME,SECTIONS_NAME);
END;
Now the problem is: how can I identify, on the basis of the primary key, the values which already exist in the DEPARTMENT and SECTIONS tables, and thereafter insert those existing values into the LIST_1 table?
------An updated effort --------------
I came up with another solution, but this again is giving me a problem. In the procedure below, the cursor repeats for every XQuery result. I don't know how I am going to handle this issue.
DECLARE
department_id varchar2(20);
department_name varchar2(20);
sect_id varchar2(20);
sect_name varchar2(20);
sections_unit_id varchar2(20);
var number;
CURSOR C1 IS
select
sect.id as sect_id
, sect.name as sect_name
, sect.unit_id as sections_unit_id
from OFFLINE_XML
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'UNIT_ID'
, "NAME" varchar2(20) path 'UNIT_NAME'
) DEPARTMENT
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'SECTIONS_ID'
, "NAME" varchar2(20) path 'SECTIONS_NAME'
, "DEPARTMENT_ID" varchar2(20) path 'DEPARTMENT_ID'
) sect
where status = 3;
BEGIN
FOR R_C1 IN C1 LOOP
BEGIN
var :=1;
--insert into sections_temp_1 (id, name)values ( R_C1.sect_id, R_C1.sect_name);
-- commit;
dbms_output.put_line('Duplicate='||var);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
dbms_output.put_line('Duplicate='||R_C1.sect_id);
END;
var:=var+1;
END LOOP;
END;
It seems that first of all you need a slightly more complicated XQuery to extract rows from the XMLType field.
There is no need to extract sections and departments separately and then try to match them back together.
Try this variant:
select
department_id,
department_name,
sections_id,
sections_name
from
OFFLINE_XML xml_list,
xmltable(
'
for $dept in $param/ROWSET/DEPARTMENT
return $dept
'
passing xml_list.xml_file as "param"
columns
"DEPARTMENT_ID" varchar2(100) path '//DEPARTMENT/DEPARTMENT_ID',
"DEPARTMENT_NAME" varchar2(4000) path '//DEPARTMENT/DEPARTMENT_NAME',
"SECTIONS_ID" varchar2(100) path '//DEPARTMENT/SECTIONS_ID',
"SECTIONS_NAME" varchar2(4000) path '//DEPARTMENT/SECTIONS_NAME'
) section_list
where
xml_list.Status = 3
SQL fiddle - 1
After that you get a dataset which can be outer-joined to the existing tables on its primary keys (or something else, depending on the required logic) if you want to find out whether any values already exist:
select
offline_set.offline_xml_id,
offline_set.department_id,
offline_set.department_name,
offline_set.sections_id,
offline_set.sections_name,
nvl2(dept.id,'Y', 'N') is_dept_exists,
nvl2(sect.id,'Y', 'N') is_sect_exists
from
(
[... skipped text of previous query ...]
) offline_set,
department dept,
sections sect
where
dept.id (+) = offline_set.department_id
and
sect.id (+) = offline_set.sections_id
SQL fiddle - 2
Because I am actually unaware of the logic behind these requirements, I can't suggest any further processing instructions. But it seems that you missed a reference to the OFFLINE_XML table in LIST_1, which is needed to identify the source of errors/duplicates.
The best way to do this would be with Oracle's built-in error logging. Use DBMS_ERRLOG.CREATE_ERROR_LOG() to generate a logging table for each target table (i.e. SECTIONS and DEPARTMENT in your case). Find out more.
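The one-off setup step might look like this (a sketch; the ERR$_ prefixed names are the defaults that CREATE_ERROR_LOG generates when no log table name is supplied):

```sql
-- Create a default-named error log table for each target table
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'DEPARTMENT'); -- creates ERR$_DEPARTMENT
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'SECTIONS');   -- creates ERR$_SECTIONS
END;
/
```

Each generated table mirrors the target's columns (as VARCHAR2) plus ORA_ERR_* metadata columns such as ORA_ERR_MESG$ and ORA_ERR_TAG$.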
The syntax for using these tables with INSERT ALL is not intuitive but this is what to do:
insert all
into department (id, name)
values (unit_id, unit_name)
log errors into err$_department ('XML Load failure')
into sections (id, name, department_id)
values ( sect_id, sect_name, department_id)
log errors into err$_sections ('XML Load failure')
select department.id as department_id
....
You can put any (short-ish) string into the error log label, but make sure it's something which will help you locate the relevant records. You may wish to set the REJECT LIMIT to some value depending on whether you want to fail on one (or a couple of) errors, or process the whole XML and sort it out afterwards. Find out more.
I suggest you use separate logs for each target table rather than one log for both, for two reasons:
In my experience, solutions which leverage Oracle's built-in features tend to scale better and be more robust than hand-rolled code.
It's a better fit for what might happen. You have three circumstances which might cause loading to throw DUP_VAL_ON_INDEX:
Record has duplicate Department ID
Record has duplicate Section ID
Record has duplicate Department ID and duplicate Section ID
Separate tables make it easier to understand what's gone awry. This is a major boon when loading large amounts of data.
"i need to inform my user that this much of duplicate entries were
found in xml"
You can still do that with two error logs. Heck, you can even join the error logs into a view called LIST_1 if that is so very important to you.
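A hypothetical sketch of that idea (column names here assume the default-named error log tables and the table structures from the question):

```sql
-- Combine both error logs into one view for reporting duplicates
create or replace view list_1 as
select 'DEPARTMENT' as source_table, id, name,
       ora_err_mesg$ as error_message
from err$_department
union all
select 'SECTIONS', id, name, ora_err_mesg$
from err$_sections;
```

A simple `select count(*) from list_1` then tells the user how many duplicate entries were found in the XML.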
