Will the code below take all the values from the src table and insert them into the IC_MST_VELOCITY table? If it is wrong, I need to know how to copy all the records from the src table into IC_MST_VELOCITY.
(SELECT ARTICLE,
CONCATKEY,
CAST (LASTMODIFIEDDATE AS TIMESTAMP) AS LASTMOD,
PRODSUBGRP,
FROM IC_VELOCITY_V
) src
INSERT INTO IC_MST_VELOCITY(
ARTICLE,
CONCATKEY,
ISDELETED,
LASTMODIFIEDDATE,
MSTID,
PRODSUBGRP,
SKUID,
VELOCITY,
WHSE)
VALUES(
select ARTICLE from src,
select CONCATKEY from src,
select LASTMOD from src,
select PRODSUBGRP from src,
)
);
No, your code wouldn't do anything as it is invalid.
Something like this might; note the NULL values being inserted into the columns that have no source value in the ic_velocity_v table:
insert into ic_mst_velocity
( article,
concatkey,
isdeleted,
lastmodifieddate,
mstid,
prodsubgrp,
skuid,
velocity,
whse
)
(select article,
concatkey,
null isdeleted,
cast(lastmodifieddate as timestamp) as lastmod,
null mstid,
prodsubgrp,
null skuid,
null velocity,
null whse
from ic_velocity_v
);
Or, a shorter version, without the columns that don't have any value:
insert into ic_mst_velocity
( article,
concatkey,
lastmodifieddate,
prodsubgrp
)
(select article,
concatkey,
cast(lastmodifieddate as timestamp) as lastmod,
prodsubgrp
from ic_velocity_v
);
Will any of those work? I don't know; it depends, e.g.:
- if there are NOT NULL columns but you don't put anything in there, it'll fail
- if there's a database trigger which handles that, it won't fail
- if there's uniqueness enforced and it is violated, it'll fail (a sketch for that case follows below)
- maybe you need a WHERE clause, then?
- etc.
As I said: it just depends.
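For example, if a unique key on (article, concatkey) is what's enforced (an assumption; substitute your actual key columns), a sketch that skips rows already present in the target:
insert into ic_mst_velocity
( article,
  concatkey,
  lastmodifieddate,
  prodsubgrp
)
select article,
       concatkey,
       cast(lastmodifieddate as timestamp),
       prodsubgrp
from ic_velocity_v v
-- skip rows whose key already exists in the target table
where not exists (select null
                  from ic_mst_velocity m
                  where m.article = v.article
                    and m.concatkey = v.concatkey);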
At a given time, I stored the result of the following Oracle SQL query:
SELECT col, TO_CHAR(LOWER(STANDARD_HASH(col, 'MD5'))) AS hash_col FROM MyTable;
A week later, I executed the same query on the same data (same values for column col).
I thought the resulting hash_col column would have the same values as in the former execution, but it was not the case.
Is it possible for the Oracle STANDARD_HASH function to deliver the same result over time for identical input data?
It does if the function is called twice on the same day.
All we have about the data changing (or not) and the hash changing (or not) is your assertion.
You could create and populate a log table:
create table hash_log (
sample_time timestamp,
hashed_string varchar2(200),
hashed_string_dump varchar2(200),
hash_value varchar2(200)
);
Then on a daily basis:
insert into hash_log
select systimestamp,
       source_column,
       dump(source_column),
       STANDARD_HASH(source_column, 'MD5')
from source_table;
Then, to spot changes:
select distinct hashed_string ||
hashed_string_dump ||
hash_value
from hash_log;
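A more direct way to read that log (a sketch, assuming hashed_string identifies a source row) is to flag strings whose hash value varies across samples:
select hashed_string,
       count(distinct hash_value) as distinct_hashes
from hash_log
group by hashed_string
having count(distinct hash_value) > 1;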
I'm dealing with a system that accepts data loads in XML format. For example, there's a field called "col1", and that field has the value "world" in it. The system interprets <col1 />, <col1></col1>, and a missing <col1> element as "no change" to the field called col1. (This is good because, if we were creating new data, "no change" would mean to accept whatever the default value is.) If I need to delete whatever is in the field, the <col1> element needs to have an xsi:nil attribute with a value of true.
So, when I'm extracting data from one instance of the system to load into another instance (inserting with SQL is not an option), I need to conditionally add xsi:nil="true" attribute to the XML returned from a query in Oracle 12c to explicitly indicate that the value of the element is null. (Always adding xsi:nil with a value of true or false, as appropriate, could work but is not desirable as it breaks convention and bloats file size.)
A test case can be set up as follows.
create table table1 (id number(10), col1 varchar2(5));
insert into table1 values (1,'hello');
insert into table1 values (2,null);
commit;
I want to get this back from a query:
<outer><ID>1</ID><COL1>hello</COL1></outer>
<outer><ID>2</ID><COL1 xsi:nil="true"></COL1></outer>
This query throws an error.
select
xmlelement("outer",
xmlforest(id),
(case col1
when null then xmlelement(COL1, xmlattributes('xsi:nil="true"'), null)
else xmlforest(col1)
end)
)
from table1
;
Is there some other way to conditionally include the xmlattributes call, or some other way to get the output I want?
You can use NVL2 to make it slightly less verbose:
Query 1:
SELECT XMLELEMENT(
"outer",
XMLFOREST( id ),
XMLELEMENT( col1, xmlattributes( NVL2(col1,NULL,'true') as "xsi:nil"), col1 )
).getClobVal() AS element
FROM table1;
Result:
OUTPUT
-----------------------------------------------------
<outer><ID>1</ID><COL1>hello</COL1></outer>
<outer><ID>2</ID><COL1 xsi:nil="true"></COL1></outer>
Query 2: You could also use XMLFOREST to generate the elements and then APPENDCHILDXML to append the missing element (adding the namespaces is left as an exercise for the OP):
SELECT APPENDCHILDXML(
XMLELEMENT( "outer", XMLFOREST( id, col1 ) ),
'/outer',
NVL2( col1, NULL, XMLTYPE('<COL1 nil="true"></COL1>') )
).getClobVal() AS element
FROM table1;
Result:
OUTPUT
-------------------------------------------
<outer><ID>1</ID><COL1>hello</COL1></outer>
<outer><ID>2</ID><COL1 nil="true"/></outer>
I found that this query works, but it's more verbose than I would like.
select
xmlelement("outer",
xmlforest(id),
xmlelement(col1,xmlattributes(case when col1 is null then 'true' else null end as "xsi:nil"), col1)
).getClobVal()
from table1
;
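One caveat that applies to all of these: for xsi:nil to be honored by a consumer, the xsi prefix must be bound to the standard XML Schema instance namespace somewhere in the document. A hedged variant of the query above that declares it on the outer element:
select
xmlelement("outer",
  -- declare the xsi namespace so the xsi:nil attribute below is well-formed
  xmlattributes('http://www.w3.org/2001/XMLSchema-instance' as "xmlns:xsi"),
  xmlforest(id),
  xmlelement(col1,
    xmlattributes(case when col1 is null then 'true' else null end as "xsi:nil"),
    col1)
).getClobVal()
from table1
;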
I need help with query performance.
I have a table A joined to a view, and it is taking 7 seconds to get the results. But when I run a select on the view alone, I get the results in 1 second.
I have created indexes on table A, but there is no improvement in the query.
SELECT
ITEM_ID, BARCODE, CONTENT_TYPE_CODE, DEPARTMENT, DESCRIPTION, ITEM_NUMBER, FROM_DATE,
TO_DATE, CONTACT_NAME, FILE_LOCATION, FILE_LOCATION_UPPER, SOURCE_LOCATION,
DESTRUCTION_DATE, SOURCE, LABEL_NAME, ARTIST_NAME, TITLE, SELECTION_NUM, REP_IDENTIFIER,
CHECKED_OUT
FROM View B,
table A
where B.item_id=A.itemid
and status='VALID'
AND session_id IN ('naveen13122016095800')
ORDER BY item_id,barcode;
CREATE TABLE A
(
ITEMID NUMBER,
USER_NAME VARCHAR2(25 BYTE),
CREATE_DATE DATE,
SESSION_ID VARCHAR2(240 BYTE),
STATUS VARCHAR2(20 BYTE)
)
CREATE UNIQUE INDEX A_IDX1 ON A(ITEMID);
CREATE INDEX A_IDX2 ON A(SESSION_ID);
CREATE INDEX A_IDX3 ON A(STATUS);
So querying the view joined to a table is slower than querying the view alone? This is not surprising, is it?
Anyway, it doesn't make much sense to create separate indexes on the fields. The DBMS will pick one index (if any) to access the table. You can try a composite index:
CREATE UNIQUE INDEX A_IDX4 ON A(status, session_id, itemid);
But the DBMS will still only use this index when it sees an advantage over simply reading the full table. That means that if the DBMS expects to read a large number of records anyway, it won't access them indirectly via the index.
Finally, two remarks concerning your query:
Don't use those outdated comma-separated joins. They are less readable and more error-prone than explicit ANSI joins (FROM View B JOIN table A ON B.item_id = A.itemid).
Use qualifiers for all columns when working with more than one table or view in your query (and A.status='VALID' ...).
UPDATE: I see now that you are not selecting any columns from the table, so why join it at all? It seems you are merely checking whether a record exists in the table, so use EXISTS or IN accordingly. (This may not make it faster, but it is at least a lot more readable.)
SELECT
ITEM_ID, BARCODE, CONTENT_TYPE_CODE, DEPARTMENT, DESCRIPTION, ITEM_NUMBER, FROM_DATE,
TO_DATE, CONTACT_NAME, FILE_LOCATION, FILE_LOCATION_UPPER, SOURCE_LOCATION,
DESTRUCTION_DATE, SOURCE, LABEL_NAME, ARTIST_NAME, TITLE, SELECTION_NUM, REP_IDENTIFIER,
CHECKED_OUT
FROM View
WHERE item_id IN
(
SELECT itemid
FROM A
WHERE status = 'VALID'
AND session_id IN ('naveen13122016095800')
)
ORDER BY item_id, barcode;
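An equivalent formulation with EXISTS (a sketch; it assumes the view's key column is item_id, as in the original join):
SELECT ITEM_ID, BARCODE /* ... same column list as above ... */
FROM View B
WHERE EXISTS
      (SELECT 1
       FROM A
       WHERE A.itemid = B.item_id
         AND A.status = 'VALID'
         AND A.session_id = 'naveen13122016095800')
ORDER BY item_id, barcode;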
I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
to_date(substr(readings.reading_dt, 1, 10),'YYYY-MM-DD'), SYSDATE,
p_location_id
FROM XMLTable(
XMLNamespaces('http://www.example.com' as "ns1"),
'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime') readings
GROUP BY substr(readings.reading_dt,1,10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there way to get the data from the XMLTable while still using the (much cleaner) merge syntax? I would just delete the data beforehand and re-import or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe this should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
    TEST_READINGS.location_id = readings.location_id
    AND TEST_READINGS.reading_date = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
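For completeness, a sketch of the two branches, lifted from the MERGE in your question (note your two snippets name the key column both site_id and location_id; adjust to the real column):
WHEN NOT MATCHED THEN INSERT
  (test_reading_id, location_id, test, reading_date, create_date)
  VALUES (test_readings_seq.nextval, readings.location_id,
          readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN UPDATE
  SET test = readings.test;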
I am using a PL/SQL procedure to insert values from XML into relational tables. The XML file resides in an XMLTYPE column.
The columns of the table (OFFLINE_XML) containing the XML are
ID, XML_FILE, STATUS
There are two tables into which I want to insert the values, i.e. DEPARTMENT and SECTIONS.
The structure of DEPARTMENT is:
ID, NAME
The structure of the SECTIONS table is:
ID, NAME, DEPARTMENT_ID
Now there is a third table (LIST_1) into which I want to insert the values which already exist in both of the above-mentioned tables.
The structure of LIST_1 is:
ID, DEPARTMENT_ID, DEPARTMENT_NAME, SECTIONS_ID, SECTIONS_NAME
The XML format is as follows:
<ROWSET>
<DEPARTMENT>
<DEPARTMENT_ID>DEP22681352268280797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT</DEPARTMENT_NAME>
<SECTIONS_ID>6390135666643567</SECTIONS_ID>
<SECTIONS_NAME>mySection</SECTIONS_NAME>
</DEPARTMENT>
<DEPARTMENT>
<DEPARTMENT_ID>DEP255555555550797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT2</DEPARTMENT_NAME>
<SECTIONS_ID>63901667779243567</SECTIONS_ID>
<SECTIONS_NAME>mySection2</SECTIONS_NAME>
</DEPARTMENT>
</ROWSET>
DECLARE
BEGIN
insert all
into department (id, name)
values (unit_id, unit_name)
into sections (id, name, department_id)
values ( sect_id, sect_name, department_id)
select department.id as department_id
, department.name as department_name
, sect.id as sect_id
, sect.name as sect_name
from OFFLINE_XML
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'UNIT_ID'
, "NAME" varchar2(20) path 'UNIT_NAME'
) department
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'SECTIONS_ID'
, "NAME" varchar2(20) path 'SECTIONS_NAME'
) sect
where status = 3;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
dbms_output.put_line('Duplicate='|| department.id );
--insert into LIST_1 values(ID,DEPARTMENT_ID, SECTIONS_ID, DEPARTMENT_NAME,SECTIONS_NAME);
END;
Now the problem is: how can I identify the values that already exist in the DEPARTMENT and SECTIONS tables on the basis of the primary key, and then insert those existing values into the LIST_1 table?
------ An updated effort ------
I came up with another solution, but this again is giving me a problem: in the procedure below, the cursor tends to repeat for every XQuery. I don't know how I am going to handle this issue.
DECLARE
department_id varchar2(20);
department_name varchar2(20);
sect_id varchar2(20);
sect_name varchar2(20);
sections_unit_id varchar2(20);
var number;
CURSOR C1 IS
select
sect.id as sect_id
, sect.name as sect_name
, sect.unit_id as sections_unit_id
from OFFLINE_XML
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'UNIT_ID'
, "NAME" varchar2(20) path 'UNIT_NAME'
) DEPARTMENT
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'SECTIONS_ID'
, "NAME" varchar2(20) path 'SECTIONS_NAME'
, "DEPARTMENT_ID" varchar2(20) path 'DEPARTMENT_ID'
) sect
where status = 3;
BEGIN
FOR R_C1 IN C1 LOOP
BEGIN
var :=1;
--insert into sections_temp_1 (id, name)values ( R_C1.sect_id, R_C1.sect_name);
-- commit;
dbms_output.put_line('Duplicate='||var);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
dbms_output.put_line('Duplicate='||R_C1.sect_id);
END;
var:=var+1;
END LOOP;
END;
It seems that, first of all, you need a slightly more complicated XQuery to extract rows from the XMLType field.
There is no need to extract sections and departments separately and then try to match them back up.
Try this variant:
select
department_id,
department_name,
sections_id,
sections_name
from
OFFLINE_XML xml_list,
xmltable(
'
for $dept in $param/ROWSET/DEPARTMENT
return $dept
'
passing xml_list.xml_file as "param"
columns
"DEPARTMENT_ID" varchar2(100) path '//DEPARTMENT/DEPARTMENT_ID',
"DEPARTMENT_NAME" varchar2(4000) path '//DEPARTMENT/DEPARTMENT_NAME',
"SECTIONS_ID" varchar2(100) path '//DEPARTMENT/SECTIONS_ID',
"SECTIONS_NAME" varchar2(4000) path '//DEPARTMENT/SECTIONS_NAME'
) section_list
where
xml_list.Status = 3
After that you have a dataset which can be outer-joined to the existing tables on its primary keys (or something else, depending on the required logic) if you want to find out whether any values already exist:
select
offline_set.offline_xml_id,
offline_set.department_id,
offline_set.department_name,
offline_set.sections_id,
offline_set.sections_name,
nvl2(dept.id,'Y', 'N') is_dept_exists,
nvl2(sect.id,'Y', 'N') is_sect_exists
from
(
[... skipped text of previous query ...]
) offline_set,
department dept,
sections sect
where
dept.id (+) = offline_set.department_id
and
sect.id (+) = offline_set.sections_id
Because I am actually unaware of the logic behind these requirements, I can't suggest any further processing instructions. But it seems that you missed a reference to the OFFLINE_XML table in LIST_1, which is needed to identify the source of errors/duplicates.
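If LIST_1 is meant to capture the values that already exist, a hedged sketch built on the previous query (treating a match on either key as "already exists"; LIST_1's exact column mapping is an assumption):
insert into list_1
  (id, department_id, department_name, sections_id, sections_name)
select offline_xml_id,
       department_id,
       department_name,
       sections_id,
       sections_name
from (
  [... text of previous query ...]
) offline_set
where is_dept_exists = 'Y'
   or is_sect_exists = 'Y';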
The best way to do this would be with Oracle's built-in error logging. Use DBMS_ERRLOG.CREATE_ERROR_LOG() to generate a logging table for each target table (i.e. SECTIONS and DEPARTMENT in your case; see the DBMS_ERRLOG documentation).
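For example (a sketch; with no explicit name, the generated log tables default to ERR$_DEPARTMENT and ERR$_SECTIONS):
BEGIN
  -- one error-log table per DML target table
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'DEPARTMENT');
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'SECTIONS');
END;
/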
The syntax for using these tables with INSERT ALL is not intuitive but this is what to do:
insert all
into department (id, name)
values (unit_id, unit_name)
log errors into err$_department ('XML Load failure')
into sections (id, name, department_id)
values ( sect_id, sect_name, department_id)
log errors into err$_sections ('XML Load failure')
select department.id as department_id
....
You can put any (short-ish) string into the error log label, but make sure it's something which will help you locate the relevant records. You may wish to set the REJECT LIMIT to some value, depending on whether you wish to fail after one (or a couple of) errors, or process the whole XML and sort it out afterwards (see the error-logging documentation).
I suggest you use separate logs for each target table rather than one log for both, for two reasons:
In my experience, solutions which leverage Oracle's built-in features tend to scale better and be more robust than hand-rolled code.
It's a better fit for what might happen. You have three circumstances which might cause loading to hurl DUP_VAL_ON_INDEX:
Record has duplicate Department ID
Record has duplicate Section ID
Record has duplicate Department ID and duplicate Section ID
Separate tables make it easier to understand what's gone awry. This is a major boon when loading large amounts of data.
"i need to inform my user that this much of duplicate entries were
found in xml"
You can still do that with two error logs. Heck, you can even join the error logs into a view called LIST_1 if that is so very important to you.
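A hedged sketch of such a view (error-log tables carry ORA_ERR_* control columns plus the target table's columns; LIST_1's layout here is an assumption):
create or replace view list_1 as
select 'DEPARTMENT' as source_table, ora_err_mesg$ as err_msg, id, name
from err$_department
union all
select 'SECTIONS', ora_err_mesg$, id, name
from err$_sections;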