conditionally add an xmlattribute to an xmlelement with Oracle SQL

I'm dealing with a system that accepts data loads in XML format. For example, there's a field called "col1", and that field has the value "world" in it. The system interprets <col1 />, <col1></col1>, and a missing <col1> element as "no change" to the field called col1. (This is good because, if we were creating new data, "no change" would mean to accept whatever the default value is.) If I need to delete whatever is in the field, the <col1> element needs to have an xsi:nil attribute with a value of true.
So, when I'm extracting data from one instance of the system to load into another instance (inserting with SQL is not an option), I need to conditionally add an xsi:nil="true" attribute to the XML returned from a query in Oracle 12c to explicitly indicate that the value of the element is null. (Always adding xsi:nil with a value of true or false, as appropriate, could work but is not desirable, as it breaks convention and bloats file size.)
A test case can be set up as follows.
create table table1 (id number(10), col1 varchar2(5));
insert into table1 values (1,'hello');
insert into table1 values (2,null);
commit;
I want to get this back from a query:
<outer><ID>1</ID><COL1>hello</COL1></outer>
<outer><ID>2</ID><COL1 xsi:nil="true"></COL1></outer>
This query throws an error.
select
xmlelement("outer",
xmlforest(id),
(case col1
when null then xmlelement(COL1, xmlattributes('xsi:nil="true"'), null)
else xmlforest(col1)
end)
)
from table1
;
Is there some other way to conditionally include the xmlattributes call, or some other way to get the output I want?

You can use NVL2 to make it slightly less verbose:
Query 1:
SELECT XMLELEMENT(
"outer",
XMLFOREST( id ),
XMLELEMENT( col1, xmlattributes( NVL2(col1,NULL,'true') as "xsi:nil"), col1 )
).getClobVal() AS element
FROM table1;
Result:
OUTPUT
-----------------------------------------------------
<outer><ID>1</ID><COL1>hello</COL1></outer>
<outer><ID>2</ID><COL1 xsi:nil="true"></COL1></outer>
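For reference, NVL2(expr, value_if_not_null, value_if_null) returns its second argument when expr is not null and its third when it is null, and XMLAttributes simply omits any attribute whose value is NULL, so xsi:nil only shows up on the empty rows. A quick sketch against the test data to illustrate the argument order:
-- NVL2 returns the 2nd argument when col1 is NOT NULL and the 3rd when it is NULL,
-- so only the null row produces a value for the xsi:nil attribute.
SELECT id, NVL2( col1, NULL, 'true' ) AS nil_flag FROM table1;
-- id 1 -> nil_flag is NULL (attribute omitted); id 2 -> nil_flag = 'true'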
Query 2: You could also use XMLFOREST to generate the elements and then APPENDCHILDXML to append the missing element (handling the namespaces is left as an exercise for the OP; a hedged sketch follows the result below):
SELECT APPENDCHILDXML(
XMLELEMENT( "outer", XMLFOREST( id, col1 ) ),
'/outer',
NVL2( col1, NULL, XMLTYPE('<COL1 nil="true"></COL1>') )
).getClobVal() AS element
FROM table1;
Result:
OUTPUT
-------------------------------------------
<outer><ID>1</ID><COL1>hello</COL1></outer>
<outer><ID>2</ID><COL1 nil="true"/></outer>
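For the namespace part left as an exercise: one hedged option (assuming the consumer expects the standard XML Schema instance URI) is to declare xmlns:xsi as an ordinary attribute on the outer element, e.g. combined with Query 1:
-- Sketch only: declares the xsi namespace on the outer element so xsi:nil resolves.
SELECT XMLELEMENT(
"outer",
XMLATTRIBUTES( 'http://www.w3.org/2001/XMLSchema-instance' AS "xmlns:xsi" ),
XMLFOREST( id ),
XMLELEMENT( col1, XMLATTRIBUTES( NVL2(col1,NULL,'true') AS "xsi:nil" ), col1 )
).getClobVal() AS element
FROM table1;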

I found that this query works, but it's more verbose than I would like.
select
xmlelement("outer",
xmlforest(id),
xmlelement(col1,xmlattributes(case when col1 is null then 'true' else null end as "xsi:nil"), col1)
).getClobVal()
from table1
;

Related

inserting multiple values from source table to destination table

Will the below code take all the values from the src table and insert them into the IC_MST_VELOCITY table? If the below code is wrong, I need to know how to get all the records from the src table into the IC_MST_VELOCITY table.
(SELECT ARTICLE,
CONCATKEY,
CAST (LASTMODIFIEDDATE AS TIMESTAMP) AS LASTMOD,
PRODSUBGRP,
FROM IC_VELOCITY_V
) src
INSERT INTO IC_MST_VELOCITY(
ARTICLE,
CONCATKEY,
ISDELETED,
LASTMODIFIEDDATE,
MSTID,
PRODSUBGRP,
SKUID,
VELOCITY,
WHSE)
VALUES(
select ARTICLE from src,
select CONCATKEY from src,
select LASTMOD from src,
select PRODSUBGRP from src,
)
);
No, your code wouldn't do anything as it is invalid.
Something like this might; note the NULL values being inserted into columns that have no source value (selected from the ic_velocity_v table):
insert into ic_mst_velocity
( article,
concatkey,
isdeleted,
lastmodifieddate,
mstid,
prodsubgrp,
skuid,
velocity,
whse
)
(select article,
concatkey,
null isdeleted,
cast(lastmodifieddate as timestamp) as lastmod,
null mstid,
prodsubgrp,
null skuid,
null velocity,
null whse
from ic_velocity_v
);
Or, a shorter version, without the columns that don't have any value:
insert into ic_mst_velocity
( article,
concatkey,
lastmodifieddate,
prodsubgrp
)
(select article,
concatkey,
cast(lastmodifieddate as timestamp) as lastmod,
prodsubgrp
from ic_velocity_v
);
Will any of those work? I don't know; it depends, e.g.:
- if there are NOT NULL columns but you don't put anything in there, it'll fail
- if there's a database trigger which handles that, it won't fail
- if there's uniqueness enforced and it is violated, it'll fail
- maybe you need a WHERE clause, then?
- etc.
As I said: it just depends.
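For instance, if the concern is re-running the load without duplicating rows, here is a hedged sketch; it assumes concatkey identifies a row in the target, which you would adjust to your actual key columns:
-- Sketch: only copy rows whose (assumed) key is not already present in the target.
insert into ic_mst_velocity (article, concatkey, lastmodifieddate, prodsubgrp)
select v.article, v.concatkey, cast(v.lastmodifieddate as timestamp), v.prodsubgrp
from ic_velocity_v v
where not exists (select null from ic_mst_velocity m
                  where m.concatkey = v.concatkey);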

How to only select existing values from oracle?

I have a table with a massive number of columns. So many that when I do SELECT * I can't even see any values because all the columns fill up the screen. I'd like to do something like this:
SELECT * FROM my_table WHERE NAME LIKE '%unique name%' AND <THIS COLUMN> IS NOT NULL
Is this possible? Note: VALUE is not a column.
There are so many questions on SO that ask this same question, but they have some bizarre twist, and the actual question is not answered.
I've tried:
SELECT * FROM my_table WHERE NAME LIKE '%unique name%' AND VALUE NOT NULL
*
Invalid relational operator
SELECT * FROM my_table WHERE NAME LIKE '%unique name%' AND VALUE <> ''
*
'VALUE': invalid identifier
SELECT * FROM my_table WHERE NAME LIKE '%unique name%' AND COLUMN NOT NULL
*
Missing Expression
Bonus Questions:
Is there any way to force Oracle to only show one output screen at a time?
Is there a variable to use in the WHERE clause that relates to the current column? Such as: WHERE this.column = '1', where it would check each column to match that expression?
Is there any way to get back your last command in Oracle? (I have to remote into a Linux box running Oracle - it's all command line - can't even copy/paste, so I have to type every command by hand, with a wonky connection, so it's taking an extremely long time to debug this stuff)
If you are trying to find all the non-null column values for a particular record, you could try an unpivot, provided all the columns you are unpivoting have the same data type:
SELECT *
FROM (select * from my_table where name like '%unique value%')
UNPIVOT [include nulls] (col_value FOR col_name IN (col1, col2, ..., coln))
With the above code, null values will be excluded unless you add the optional INCLUDE NULLS clause. Also, you will need to explicitly list each column you want unpivoted, as in the sketch below.
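A concrete sketch, where the column names col1..col3 are assumed for illustration and share a data type:
-- Hypothetical columns col1..col3; rows whose value is null are dropped by default.
SELECT col_name, col_value
FROM (SELECT * FROM my_table WHERE name LIKE '%unique value%')
UNPIVOT (col_value FOR col_name IN (col1, col2, col3));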
If they don't all have the same data type, you can use a variation that doesn't necessarily prune away all the null values:
select *
from (select * from my_table where name like '%unique value%')
unpivot ((str_val, num_val, date_val)
for col_name in ((cola, col1, date1)
,(colb, col2, date2)
,(colc, col3, date1)));
You can have a fairly large set of column groups, though here I'm showing just three, one for each major data type. In the IN list you need to have a column listed for each column in your column group, though you can reuse columns, as shown by the date_val column where I've used date1 twice. As an alternative to reusing an existing column, you could use a dummy column with a null value:
select *
from (select t1.*, null dummy from my_table t1 where name like '%unique value%')
unpivot ((str_val, num_val, date_val)
for col_name in ((dummy, col1, date1)
,(colb, dummy, date2)
,(colc, col3, dummy)));
Have you tried this?
SELECT * FROM my_table WHERE NAME LIKE '%unique name%' AND value IS NOT NULL;
Oracle / PLSQL: IS NOT NULL Condition
For a row number, wrap the analytic function in a subquery so you can filter on it (ROW_NUMBER also needs an ORDER BY in its OVER clause):
SELECT * FROM (SELECT field1, field2, ROW_NUMBER() OVER (PARTITION BY unique_field ORDER BY field1) R FROM my_table)
WHERE R = 1;
Usually in Linux consoles you can use the up and down arrow keys to recall the last command.

Oracle PL/SQL Use Merge command on data from XML Table

I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
to_date(substr(readings.reading_dt, 1, 10),'YYYY-MM-DD'), SYSDATE,
p_location_id
FROM XMLTable(
XMLNamespaces('http://www.example.com' as "ns1"),
'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime') readings
GROUP BY substr(readings.reading_dt,1,10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there way to get the data from the XMLTable while still using the (much cleaner) merge syntax? I would just delete the data beforehand and re-import or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe this should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
LOCATION_ID = readings.location_id
AND READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
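A hedged completion of the two branches, reusing the column names from your original MERGE (whether site_id and location_id refer to the same thing is an assumption about your schema):
-- Sketch: the values now come from the "readings" subquery instead of the p_ parameters.
WHEN NOT MATCHED THEN INSERT
  (test_reading_id, site_id, test, reading_date, create_date)
  VALUES (test_readings_seq.nextval, readings.location_id,
          readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN UPDATE
  SET test = readings.test;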

Sorting a table with a column with SQL Server 2012

I have a table like this:
How can I sort it out in the form below:
Each set of records has been marked by a FlagID at end
Regarding "Each set of records has been marked by a FlagID at end": I assume this means that each record is implicitly associated with the first non-null FlagID value that occurs at or after its position along the ID primary key. Thus, we can use a correlated subquery to project this implicit FlagID value for each record, sort by it, then sort by your Row column as a tiebreaker within each set.
SELECT *
FROM YourTable T1
ORDER BY
(
SELECT TOP 1 T2.FlagID
FROM YourTable T2
WHERE T2.ID >= T1.ID
AND T2.FlagID IS NOT NULL
ORDER BY T2.ID
),
T1.Row
However, if you're able to alter the database content, I would recommend that you explicitly populate all the FlagID fields, as this would make your life easier. If you do so, the query becomes:
SELECT *
FROM YourTable T1
ORDER BY T1.FlagID,
T1.Row
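If you do want to backfill them, here is a hedged sketch in SQL Server syntax, assuming ID orders the rows and the marker sits on the last row of each set:
-- Sketch: copy the first non-null FlagID at or after each row into the row itself.
UPDATE T1
SET FlagID = (SELECT TOP 1 T2.FlagID
              FROM YourTable T2
              WHERE T2.ID >= T1.ID AND T2.FlagID IS NOT NULL
              ORDER BY T2.ID)
FROM YourTable T1
WHERE T1.FlagID IS NULL;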

Oracle: Show special text if field is null

I would like to write a select where I show the value of the field as normal except when the field is null. If it is null I'd like to show a special text, for example "Field is null". How would I best do this?
// Oracle newbie
I like to use the COALESCE function for this purpose. It returns the first non-null value from its arguments (so you can test more than one field at a time).
SELECT COALESCE(NULL, 'Special text') FROM DUAL
So this would also work:
SELECT COALESCE(
First_Nullable_Field,
Second_Nullable_Field,
Third_Nullable_Field,
'All fields are NULL'
) FROM YourTable
Just use the NVL function in your query:
SELECT NVL(SOMENULLABLEFIELD,'Field Is Null') SOMENULLABLEFIELD
FROM MYTABLE;
More detail here : http://www.techonthenet.com/oracle/functions/nvl.php
You could also use DECODE:
select value, decode(value, NULL, 'SPECIAL', value) from
(select NULL value from dual
union all
select 2 value from dual
)
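This works because DECODE treats two NULLs as equal, unlike a simple CASE comparison (which is also why the case col1 when null form in the first question never matches); a searched CASE with IS NULL behaves the same way. A quick side-by-side sketch:
-- DECODE matches NULL against NULL; a searched CASE needs IS NULL instead.
select decode(value, NULL, 'SPECIAL', value) via_decode,
       case when value is null then 'SPECIAL' else value end via_case
from (select NULL value from dual
      union all
      select '2' value from dual);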
