I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
to_date(substr(readings.reading_dt, 1, 10),'YYYY-MM-DD'), SYSDATE,
p_location_id
FROM XMLTable(
XMLNamespaces('http://www.example.com' as "ns1"),
'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime') readings
GROUP BY substr(readings.reading_dt,1,10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there a way to get the data from the XMLTable while still using the (much cleaner) merge syntax? I would just delete the data beforehand and re-import, or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe it should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
LOCATION_ID = readings.location_id
AND READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
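For completeness, the elided WHEN branches could be filled in along the lines of the single-row MERGE from the question. This is only a sketch: the sequence TEST_READINGS_SEQ and the insert column list are assumptions carried over from the question's own code, not verified against your schema.
WHEN NOT MATCHED THEN INSERT
(test_reading_id, location_id, test, reading_date, create_date)
VALUES (test_readings_seq.nextval, readings.location_id,
readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN UPDATE
SET test = readings.test;
Note that a MERGE may not update a column referenced in its ON clause, which is why only TEST is updated here.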
Related
At a given time I stored the result of the following ORACLE SQL Query :
SELECT col, TO_CHAR( LOWER( STANDARD_HASH( col, 'MD5' ) ) ) AS hash_col FROM MyTable;
A week later, I executed the same query on the same data ( same values for column col ).
I thought the resulting hash_col column would have the same values as in the former execution, but that was not the case.
Is it possible for the ORACLE STANDARD_HASH function to deliver the same result over time for identical input data?
It does if the function is called twice on the same day.
All we have about the data changing (or not) and the hash changing (or not) is your assertion.
You could create and populate a log table:
create table hash_log (
sample_time timestamp,
hashed_string varchar2(200),
hashed_string_dump varchar2(200),
hash_value varchar2(200)
);
Then on a daily basis:
insert into hash_log
select systimestamp,
source_column,
dump(source_column),
STANDARD_HASH(source_column , 'MD5' )
from source_table;
Then, to spot changes:
select distinct hashed_string ||
hashed_string_dump ||
hash_value
from hash_log;
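If you want to pinpoint which strings actually produced different hashes between runs, a grouped variant of the same idea works (a sketch against the hash_log table above):
select hashed_string,
count(distinct hash_value) as distinct_hashes
from hash_log
group by hashed_string
having count(distinct hash_value) > 1;
Any row returned is a string whose hash changed between samples; comparing its hashed_string_dump values then shows whether the underlying bytes (character set, trailing whitespace, invisible characters) changed too.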
I am using the below query to parse an HTML table as XML in Oracle dynamically:
SELECT ID,caption,amdtype AS Amendment_Reason
FROM
(
with tbl as
(
SELECT ID,xmltype('<html><body>'|| REPLACE(REPLACE(REPLACE(CAST(Note AS VARCHAR(4000)),'<br>',''),'<br/>',''),'&','&amp;') || '</body></html>') AS xml_data
FROM TBL_EVENT
WHERE EVENT_TYPE='Amended note'
AND to_Char(CREATED_DATE,'mm-yyyy') = to_char(sysdate-1,'mm-yyyy')
AND NUMBER NOT LIKE '%c%'
)
SELECT ROW_NUMBER() OVER (PARTITION BY tbl.ID ORDER BY tbl.ID) AS Rankord,tbl.ID,
x.caption,
x.amdtype
FROM tbl
CROSS JOIN
XMLTABLE(
'/html/body/table'
PASSING tbl.xml_data
COLUMNS
caption VARCHAR2(50) PATH 'caption',
amdtype VARCHAR2(50) PATH 'tr[1]/td[1]'
) x
WHERE x.caption='Amendment Reason'
)
It is working for some texts, but I am getting the below error:
LPX-00243: element attribute value must be enclosed in quotes
Since I am parsing it as XML dynamically, I am not sure how to make the changes. Can anyone please guide me on how to do this?
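For what it's worth, LPX-00243 usually means the source HTML contains an unquoted attribute value, which is legal in HTML but not in XML. A minimal reproduction (hypothetical input, not the actual data):
select xmltype('<html><body><table border=1></table></body></html>') from dual;
-- fails with ORA-31011 / LPX-00243 because border=1 is not quoted;
-- border="1" would parse fine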
I have a table where one of the columns stores either a SQL query that returns ids, or a comma-separated list of ids.
Create table to store the query or ids (separated by commas):
create table test1
(
name varchar(20) primary key,
stmt_or_value varchar(500),
type varchar(50)
);
insert into test1 (name, stmt_or_value, type)
values ('first', 'select id from data where id = 1;','SQL_QUERY');
insert into test1 (name, stmt_or_value, type)
values ('second', '1,2,3,4','VALUE');
data table is as follows
create table data
(
id number,
subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject3');
I am not able to formulate a query that will return values after either executing the stored SQL or parsing the stored ids, based on the value of name.
select id, subject
from data
where id in( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first') or
( parse and return ids
from stmt_or_value
where type='VALUE'
and name = 'second')
Could you please help me with this.
Parsing the comma-separated values is done; I basically need help with the first part of the query, shown below:
( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first')
This seems a very peculiar requirement, and one which will be difficult to solve in a robust fashion. STMT_OR_VALUE is the embodiment of the One Column Two Usages anti-pattern. Furthermore, resolving STMT_OR_VALUE requires flow control logic and the use of dynamic SQL. Consequently it cannot be a pure SQL solution: you need to use PL/SQL to assemble and execute the dynamic query.
Here is a proof of concept for a solution. I have opted for a function which you can call from SQL. It depends on one assumption: every query string you insert into TEST1.STMT_OR_VALUE has a projection of a single numeric column and every value string is a CSV of numeric data only. With this proviso it is simple to construct a function which either executes a dynamic query or tokenizes the string into a series of numbers; both of which are bulk collected into a nested table:
create or replace function get_ids (p_name in test1.name%type)
return sys.odcinumberlist
is
l_rec test1%rowtype;
return_value sys.odcinumberlist;
begin
select * into l_rec
from test1
where name = p_name;
if l_rec.type = 'SQL_QUERY' then
-- execute a query
execute immediate l_rec.stmt_or_value
bulk collect into return_value;
else
-- tokenize a string
select xmltab.tkn
bulk collect into return_value
from ( select l_rec.stmt_or_value from dual) t
, xmltable( 'for $text in ora:tokenize($in, ",") return $text'
passing stmt_or_value as "in"
columns tkn number path '.'
) xmltab;
end if;
return return_value;
end;
/
Note there is more than one way of executing a dynamic SQL statement and a multiplicity of ways to tokenize a CSV into a series of numbers. My decisions are arbitrary: feel free to substitute your preferred methods here.
This function can be invoked with a table() call:
select *
from data
where id in ( select * from table(get_ids('first'))) -- execute query
or id in ( select * from table(get_ids('second'))) -- get string of values
/
The big benefit of this approach is it encapsulates the logic around the evaluation of STMT_OR_VALUE and hides use of Dynamic SQL. Consequently it is easy to employ it in any SQL statement whilst retaining readability, or to add further mechanisms for generating a set of IDs.
However, this solution is brittle. It will only work if the values in the test1 table obey the rules. That is, not only must they be convertible to a stream of single numbers but the SQL statements must be valid and executable by EXECUTE IMMEDIATE. For instance, the trailing semi-colon in the question's sample data is invalid and would cause EXECUTE IMMEDIATE to hurl. Dynamic SQL is hard not least because it converts compilation errors into runtime errors.
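One defensive tweak, which is my own suggestion rather than part of the original function: strip trailing semicolons and whitespace before executing, so statements stored with a terminator don't break:
-- inside get_ids, replacing the bare EXECUTE IMMEDIATE:
execute immediate rtrim(l_rec.stmt_or_value, ';' || ' ' || chr(10) || chr(9))
bulk collect into return_value;
RTRIM with a character-set argument removes any of those characters from the end of the string, so 'select id from data where id = 1;' becomes executable.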
Following is the setup data used for this example:
create table test1
(
test_id number primary key,
stmt_or_value varchar(500),
test_type varchar(50)
);
insert into test1 (test_id, stmt_or_value, test_type)
values (1, 'select id from data where id = 1','SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (2, '1,2,3,4','VALUE');
insert into test1 (test_id, stmt_or_value, test_type)
values (3, 'select id from data where id = 5','SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (4, '3,4,5,6','VALUE');
select * from test1;
TEST_ID  STMT_OR_VALUE                     TEST_TYPE
      1  select id from data where id = 1  SQL_QUERY
      2  1,2,3,4                           VALUE
      3  select id from data where id = 5  SQL_QUERY
      4  3,4,5,6                           VALUE
create table data
(
id number,
subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject3');
insert into data (id, subject) values (4, 'test subject4');
insert into data (id, subject) values (5, 'test subject5');
select * from data;
ID  SUBJECT
 1  test subject1
 2  test subject2
 3  test subject3
 4  test subject4
 5  test subject5
Below is the solution:
declare
sql_stmt clob; --to store the dynamic sql
type o_rec_typ is record(id data.id%type, subject data.subject%type);
type o_tab_typ is table of o_rec_typ;
o_tab o_tab_typ; --to store the output records
begin
--The below SELECT query generates the required dynamic SQL
with stmts as (
select (listagg(stmt_or_value, ' union all ') within group(order by stmt_or_value))||' union all ' s
from test1 t
where test_type = 'SQL_QUERY')
select
q'{select id, subject
from data
where id in (}'||
nullif(s,' union all ')||q'{
select distinct to_number(regexp_substr(s, '[^,]+', 1, l)) id
from (
select level l,
s
from (select listagg(stmt_or_value,',') within group(order by stmt_or_value) s
from test1
where test_type = 'VALUE') inp
connect by level <= length (regexp_replace(s, '[^,]+')) + 1))}' stmt into sql_stmt
from stmts; -- Create the dynamic SQL and store it into the clob variable
--execute the statement, fetch and display the output
execute immediate sql_stmt bulk collect into o_tab;
for i in o_tab.first..o_tab.last
loop
dbms_output.put_line('id: '||o_tab(i).id||' subject: '||o_tab(i).subject);
end loop;
end;
Output:
id: 1 subject: test subject1
id: 2 subject: test subject2
id: 3 subject: test subject3
id: 4 subject: test subject4
id: 5 subject: test subject5
Learnings:
Avoid using keywords for table and column names.
Design application tables effectively to serve current and reasonable future requirements.
The above SQL will work. Still, it is wise to review the table design, because the complexity of the code will keep increasing as requirements change in the future.
Learned how to convert comma-separated values into records (a standalone sketch of the technique follows): https://asktom.oracle.com/pls/apex/f?p=100:11:::NO::P11_QUESTION_ID:9538583800346706523
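As that sketch, here is the ora:tokenize pattern on its own (the literal '1,2,3,4' stands in for any CSV string):
select tkn
from xmltable(
'for $t in ora:tokenize($in, ",") return $t'
passing '1,2,3,4' as "in"
columns tkn number path '.'
);
-- returns four rows: 1, 2, 3, 4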
declare
my_sql varchar2(1000);
v_num number;
v_num1 number;
begin
select stmt_or_value into my_sql from test1 where type='SQL_QUERY';
execute immediate my_sql into v_num;
select id into v_num1 from data where id=v_num;
dbms_output.put_line(v_num1);
end;
Answer for part 1. Please check.
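One caveat on that block: EXECUTE IMMEDIATE ... INTO a scalar raises ORA-01422 if the stored query returns more than one row. If that can happen, a BULK COLLECT variant is safer (a sketch against the same tables, assuming the stored statement carries no trailing semicolon):
declare
my_sql varchar2(1000);
ids sys.odcinumberlist;
begin
select stmt_or_value into my_sql from test1 where type='SQL_QUERY';
-- collect every id the stored query returns, not just one
execute immediate my_sql bulk collect into ids;
for i in 1 .. ids.count loop
dbms_output.put_line(ids(i));
end loop;
end;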
I am using a PL/SQL procedure for inserting values from XML into relational tables. The XML file resides in an XMLTYPE column.
The columns of the table (OFFLINE_XML) containing the XML are:
ID, XML_FILE, STATUS
There are two tables into which I want to insert the values, i.e. DEPARTMENT and SECTIONS.
The structure of DEPARTMENT is as under:
ID, NAME
The structure of the SECTIONS table is:
ID, NAME, DEPARTMENT_ID
Now there is a third table (LIST_1) into which I want to insert the values which already exist in both of the above-mentioned tables.
The structure of LIST_1 is:
ID, DEPARTMENT_ID, DEPARTMENT_NAME, SECTIONS_ID, SECTIONS_NAME
The XML format is as under:
<ROWSET>
<DEPARTMENT>
<DEPARTMENT_ID>DEP22681352268280797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT</DEPARTMENT_NAME>
<SECTIONS_ID>6390135666643567</SECTIONS_ID>
<SECTIONS_NAME>mySection</SECTIONS_NAME>
</DEPARTMENT>
<DEPARTMENT>
<DEPARTMENT_ID>DEP255555555550797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT2</DEPARTMENT_NAME>
<SECTIONS_ID>63901667779243567</SECTIONS_ID>
<SECTIONS_NAME>mySection2</SECTIONS_NAME>
</DEPARTMENT>
</ROWSET>
DECLARE
BEGIN
insert all
into department (id, name)
values (unit_id, unit_name)
into sections (id, name, department_id)
values ( sect_id, sect_name, department_id)
select department.id as department_id
, department.name as department_name
, sect.id as sect_id
, sect.name as sect_name
from OFFLINE_XML
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'UNIT_ID'
, "NAME" varchar2(20) path 'UNIT_NAME'
) department
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'SECTIONS_ID'
, "NAME" varchar2(20) path 'SECTIONS_NAME'
) sect
where status = 3;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
dbms_output.put_line('Duplicate='|| department.id );
--insert into LIST_1 values(ID,DEPARTMENT_ID, SECTIONS_ID, DEPARTMENT_NAME,SECTIONS_NAME);
END;
Now the problem is: how can I insert or identify the values, on the basis of the primary key, which already exist in the DEPARTMENT and SECTIONS tables, and thereafter insert those existing values into the LIST_1 table?
------An updated effort --------------
I came up with another solution, but this again is giving me a problem. In the procedure below, the cursor tends to repeat for every XQuery match. I don't know how I am going to handle this issue.
DECLARE
department_id varchar2(20);
department_name varchar2(20);
sect_id varchar2(20);
sect_name varchar2(20);
sections_unit_id varchar2(20);
var number;
CURSOR C1 IS
select
sect.id as sect_id
, sect.name as sect_name
, sect.unit_id as sections_unit_id
from OFFLINE_XML
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'UNIT_ID'
, "NAME" varchar2(20) path 'UNIT_NAME'
) DEPARTMENT
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'SECTIONS_ID'
, "NAME" varchar2(20) path 'SECTIONS_NAME'
, "DEPARTMENT_ID" varchar2(20) path 'DEPARTMENT_ID'
) sect
where status = 3;
BEGIN
FOR R_C1 IN C1 LOOP
BEGIN
var :=1;
--insert into sections_temp_1 (id, name)values ( R_C1.sect_id, R_C1.sect_name);
-- commit;
dbms_output.put_line('Duplicate='||var);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
dbms_output.put_line('Duplicate='||R_C1.sect_id);
END;
var:=var+1;
END LOOP;
END;
It seems that, first of all, you need a slightly more complicated XQuery to extract rows from the XMLType field.
There is no need to extract sections and departments separately and then try to match them back together.
Try this variant:
select
department_id,
department_name,
sections_id,
sections_name
from
OFFLINE_XML xml_list,
xmltable(
'
for $dept in $param/ROWSET/DEPARTMENT
return $dept
'
passing xml_list.xml_file as "param"
columns
"DEPARTMENT_ID" varchar2(100) path '//DEPARTMENT/DEPARTMENT_ID',
"DEPARTMENT_NAME" varchar2(4000) path '//DEPARTMENT/DEPARTMENT_NAME',
"SECTIONS_ID" varchar2(100) path '//DEPARTMENT/SECTIONS_ID',
"SECTIONS_NAME" varchar2(4000) path '//DEPARTMENT/SECTIONS_NAME'
) section_list
where
xml_list.Status = 3
SQL fiddle - 1
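For reference, the same rowset can usually be produced without FLWOR syntax, using a plain path expression and relative column paths (a sketch assuming the same OFFLINE_XML structure):
select
section_list.department_id,
section_list.department_name,
section_list.sections_id,
section_list.sections_name
from
OFFLINE_XML xml_list,
xmltable(
'/ROWSET/DEPARTMENT'
passing xml_list.xml_file
columns
"DEPARTMENT_ID" varchar2(100) path 'DEPARTMENT_ID',
"DEPARTMENT_NAME" varchar2(4000) path 'DEPARTMENT_NAME',
"SECTIONS_ID" varchar2(100) path 'SECTIONS_ID',
"SECTIONS_NAME" varchar2(4000) path 'SECTIONS_NAME'
) section_list
where
xml_list.status = 3;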
After that you get a dataset which can be outer-joined to the existing tables on its primary keys (or something else, depending on the required logic) if you want to find out whether any values already exist:
select
offline_set.offline_xml_id,
offline_set.department_id,
offline_set.department_name,
offline_set.sections_id,
offline_set.sections_name,
nvl2(dept.id,'Y', 'N') is_dept_exists,
nvl2(sect.id,'Y', 'N') is_sect_exists
from
(
[... skipped text of previous query ...]
) offline_set,
department dept,
sections sect
where
dept.id (+) = offline_set.department_id
and
sect.id (+) = offline_set.sections_id
SQL fiddle - 2
Because I am actually unaware of the logic behind these requirements, I can't suggest any further processing instructions. But it seems that you are missing a reference to the OFFLINE_XML table in LIST_1, which is needed to identify the source of errors/duplicates.
The best way to do this would be with Oracle's built-in error logging. Use DBMS_ERRLOG.CREATE_ERROR_LOG() to generate a logging table for each target table (i.e. SECTIONS and DEPARTMENT in your case). Find out more.
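Creating the log tables is a one-off step, along these lines (by default the log table is named ERR$_ followed by the base table name, which matches the err$_ tables used below):
begin
-- creates ERR$_DEPARTMENT and ERR$_SECTIONS next to the base tables
dbms_errlog.create_error_log(dml_table_name => 'DEPARTMENT');
dbms_errlog.create_error_log(dml_table_name => 'SECTIONS');
end;
/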
The syntax for using these tables with INSERT ALL is not intuitive but this is what to do:
insert all
into department (id, name)
values (unit_id, unit_name)
log errors into err$_department ('XML Load failure')
into sections (id, name, department_id)
values ( sect_id, sect_name, department_id)
log errors into err$_sections ('XML Load failure')
select department.id as department_id
....
You can put any (short-ish) string into the error log label, but make sure it's something which will help you locate the relevant records. You may wish to set the REJECT LIMIT to some value depending on whether you wish to fail on one (or a couple of) errors, or process the whole XML and sort it out afterwards. Find out more.
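For example, to attempt every row and sort out the failures afterwards, append a REJECT LIMIT to each error-logging clause:
log errors into err$_department ('XML Load failure') reject limit unlimited
The same clause goes on the sections branch; without it the statement stops at the first logged error, since the default reject limit is zero.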
I suggest you use separate logs for each target table rather than one log for both, for two reasons:
In my experience, solutions which leverage Oracle's built-in features tend to scale better and be more robust than hand-rolled code.
It's a better fit for what might happen. You have three circumstances which might cause loading to hurl DUP_VAL_ON_INDEX:
Record has duplicate Department ID
Record has duplicate Section ID
Record has duplicate Department ID and duplicate Section ID
Separate tables make it easier to understand what's gone awry. This is a major boon when loading large amounts of data.
"i need to inform my user that this much of duplicate entries were
found in xml"
You can still do that with two error logs. Heck, you can even join the error logs into a view called LIST_1 if that is so very important to you.
I want to create a stored procedure which updates or inserts into a table, based on whether the current line already exists in the table.
This is what I have come up with so far:
PROCEDURE SP_UPDATE_EMPLOYEE
(
SSN VARCHAR2,
NAME VARCHAR2
)
AS
BEGIN
IF EXISTS(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN)
--what ? just carry on to else
ELSE
INSERT INTO pb_mifid (ssn, NAME)
VALUES (SSN, NAME);
END;
Is this the way to achieve this?
This is quite a common pattern. Depending on what version of Oracle you are running, you could use the merge statement (it was introduced in Oracle 9i).
create table test_merge (id integer, c2 varchar2(255));
create unique index test_merge_idx1 on test_merge(id);
merge into test_merge t
using (select 1 id, 'foobar' c2 from dual) s
on (t.id = s.id)
when matched then update set c2 = s.c2
when not matched then insert (id, c2)
values (s.id, s.c2);
Merge is intended to merge data from a source table, but you can fake it for individual rows by selecting the data from dual.
If you cannot use merge, then optimize for the most common case. Will the proc usually not find a record and need to insert it, or will it usually need to update an existing record?
If inserting will be most common, code such as the following is probably best:
begin
insert into t (columns)
values ();
exception
when dup_val_on_index then
update t set cols = values;
end;
If update is the most common, then turn the procedure around:
begin
update t set cols = values;
if sql%rowcount = 0 then
-- nothing was updated, so the record doesn't exist, insert it.
insert into t (columns)
values ();
end if;
end;
You should not issue a select to check for the row and make the decision based on the result - that means you will always need to run two SQL statements, when you can get away with one most of the time (or always, if you use merge). The fewer SQL statements you use, the better your code will perform.
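Applied to the procedure in the question, the insert-first pattern looks something like this. It is a sketch only: it assumes a unique index on the ssn column (otherwise DUP_VAL_ON_INDEX never fires), targets tblEMPLOYEE throughout even though the question mixes tblEMPLOYEE and pb_mifid, and renames the parameters to avoid clashing with the column names:
create or replace procedure sp_update_employee
(
p_ssn varchar2,
p_name varchar2
)
as
begin
-- optimistic path: assume the employee is new
insert into tblemployee (ssn, name)
values (p_ssn, p_name);
exception
when dup_val_on_index then
-- the row already exists, so update it instead
update tblemployee
set name = p_name
where ssn = p_ssn;
end;
/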
BEGIN
INSERT INTO pb_mifid (ssn, NAME)
select SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN);
END;
UPDATE:
Attention: you should name your parameter P_SSN (to distinguish it from the column SSN), and the query becomes:
INSERT INTO pb_mifid (ssn, NAME)
select P_SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = P_SSN);
because this always matches, since SSN resolves to the table column rather than the parameter:
SELECT * FROM tblEMPLOYEE a where a.ssn = SSN
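Wrapped into the full procedure, the corrected version would read as follows (a sketch; the table names pb_mifid and tblEMPLOYEE are kept exactly as in this answer, mismatch and all):
create or replace procedure sp_update_employee
(
p_ssn varchar2,
p_name varchar2
)
as
begin
-- insert only when no employee with this ssn exists yet
insert into pb_mifid (ssn, name)
select p_ssn, p_name from dual
where not exists (select * from tblemployee a where a.ssn = p_ssn);
end;
/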