Equivalent of DISTINCT ON in Oracle

How can the following query be translated to Oracle SQL, given that Oracle doesn't support DISTINCT ON()?
select distinct on (t.transaction_id) t.transaction_id as transactionId ,
t.transaction_status as transactionStatus ,
c.customer_id as customerId ,
c.customer_name as customerName,

You can use ANY_VALUE with group by for this:
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/any_value.html
Example: https://dbfiddle.uk/WUxvjv5J
with t (a,b,c) as (
select 1,10,1 from dual union all
select 1,10,2 from dual union all
select 1,10,3 from dual union all
select 1,20,4 from dual union all
select 1,20,5 from dual union all
select 1,30,7 from dual
)
select a,b,any_value(c)
from t
group by a,b;
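Applied to the original query, a sketch along the following lines should work; the table names and the join condition are assumptions, since the posted statement is truncated:
select t.transaction_id                as transactionId,
       any_value(t.transaction_status) as transactionStatus,
       any_value(c.customer_id)        as customerId,
       any_value(c.customer_name)      as customerName
from transactions t                                    -- assumed table name
join customers c on c.customer_id = t.customer_id      -- assumed join
group by t.transaction_id;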

Yes, Oracle has a full set of windowing functions you can use for this. The simplest is ROW_NUMBER:
SELECT *
FROM (SELECT x.col1,
             x.col2,
             x.col3,
             ROW_NUMBER() OVER (PARTITION BY x.col1 ORDER BY x.col2 DESC) seq
      FROM your_table x)
WHERE seq = 1
For each distinct col1, it will number the row with the highest col2 value as seq=1, the next highest as seq=2, and so on, so you can filter on seq = 1 to get the desired row. You can use ORDER BY logic as complex as you need to pick the row you want. The key thing is that the ORDER BY goes inside the ROW_NUMBER() OVER clause, along with the distinct (PARTITION BY) definition, not outside in the main query block.
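For the original question, the same pattern might look like the sketch below; the table names, the join, and the tie-breaking ORDER BY are assumptions, since the posted query is incomplete:
SELECT transactionId, transactionStatus, customerId, customerName
FROM (SELECT t.transaction_id     AS transactionId,
             t.transaction_status AS transactionStatus,
             c.customer_id        AS customerId,
             c.customer_name      AS customerName,
             ROW_NUMBER() OVER (PARTITION BY t.transaction_id
                                ORDER BY t.transaction_status) AS seq  -- pick a tie-breaker that suits your data
      FROM transactions t                                      -- assumed table name
      JOIN customers c ON c.customer_id = t.customer_id        -- assumed join
     )
WHERE seq = 1;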

Related

Oracle ordering with IN clause [duplicate]

Is it possible to keep the order from an 'IN' conditional clause?
I found this question on SO, but in that example the OP already has a sorted 'IN' clause.
My case is different: the 'IN' clause is in random order.
Something like this:
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField IN (45,2,445,12,789)
I would like to retrieve the results in (45,2,445,12,789) order. I'm using an Oracle database. Maybe there is an attribute in SQL I can use with the conditional clause to specify that the order of the list should be kept.
There will be no reliable ordering unless you use an ORDER BY clause:
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField IN (45,2,445,12,789)
order by case TestResult.SomeField
when 45 then 1
when 2 then 2
when 445 then 3
...
end
You could split the query into 5 queries union all'd together though ...
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField = 45
union all
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField = 2
union all
...
I'd trust the former method more, and it would probably perform much better.
The DECODE function comes in handy in this case, instead of CASE expressions:
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField IN (45,2,445,12,789)
ORDER BY DECODE(SomeField, 45,1, 2,2, 445,3, 12,4, 789,5)
Note that value,position pairs (e.g. 445,3) are kept together for readability reasons.
Try this:
SELECT T.SomeField,T.OtherField
FROM TestResult T
JOIN
(
SELECT 1 as Id, 45 as Val FROM dual UNION ALL
SELECT 2, 2 FROM dual UNION ALL
SELECT 3, 445 FROM dual UNION ALL
SELECT 4, 12 FROM dual UNION ALL
SELECT 5, 789 FROM dual
) I
ON T.SomeField = I.Val
ORDER BY I.Id
There is an alternative that uses string functions:
with const as (select ',45,2,445,12,789,' as vals)
select tr.*
from TestResult tr cross join const
where instr(const.vals, ','||cast(tr.somefield as varchar(255))||',') > 0
order by instr(const.vals, ','||cast(tr.somefield as varchar(255))||',')
I offer this because you might find it easier to maintain a string of values rather than an intermediate table.
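If the delimited list comes from application code, a variation with a bind variable avoids hard-coding the CTE. This sketch assumes :vals is passed with leading and trailing commas (e.g. ',45,2,445,12,789,'):
select tr.*
from TestResult tr
where instr(:vals, ',' || to_char(tr.somefield) || ',') > 0
order by instr(:vals, ',' || to_char(tr.somefield) || ',');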
I was able to do this in my application (using SQL Server 2016) with:
select ItemID, iName
from Items
where ItemID in (13,11,12,1)
order by CHARINDEX(' ' + CONVERT(varchar(20), ItemID) + ' ', ' 13 , 11 , 12 , 1 ')
I used a code-side regex to replace \b (word boundary) with a space. Something like...
var mylist = "13,11,12,1";
var spacedlist = mylist.replace(/\b/g, " ");
Importantly, because I can in my scenario, I cache the result until the next time the related items are updated, so that the query is only run at item creation/modification, rather than with each item viewing, helping to minimize any performance hit.
Pass the values in via a collection (SYS.ODCINUMBERLIST is an example of a built-in collection) and then order the rows by the collection's order:
SELECT t.SomeField,
t.OtherField
FROM TestResult t
INNER JOIN (
SELECT ROWNUM AS rn,
COLUMN_VALUE AS value
FROM TABLE(SYS.ODCINUMBERLIST(45,2,445,12,789))
) i
ON t.somefield = i.value
ORDER BY rn
Then, for the sample data:
CREATE TABLE TestResult ( somefield, otherfield ) AS
SELECT 2, 'A' FROM DUAL UNION ALL
SELECT 5, 'B' FROM DUAL UNION ALL
SELECT 12, 'C' FROM DUAL UNION ALL
SELECT 37, 'D' FROM DUAL UNION ALL
SELECT 45, 'E' FROM DUAL UNION ALL
SELECT 100, 'F' FROM DUAL UNION ALL
SELECT 445, 'G' FROM DUAL UNION ALL
SELECT 789, 'H' FROM DUAL UNION ALL
SELECT 999, 'I' FROM DUAL;
The output is:
SOMEFIELD | OTHERFIELD
----------------------
45        | E
2         | A
445       | G
12        | C
789       | H

Oracle SQL -- Finding count of rows that match date maximum in table

I am trying to use a query to return the count of rows whose date matches the maximum date in that column of the table.
Oracle SQL: version 11.2:
The following syntax would seem to be correct (to me), and it compiles and runs. However, instead of returning JUST the count for the maximum, it returns several counts, more or less as if the HAVING clause weren't there.
Select ourDate, Count(1) as OUR_COUNT
from schema1.table1
group by ourDate
HAVING ourDate = max(ourDate) ;
How can this be fixed, please?
The HAVING condition is evaluated per group, and because you group by ourDate, MAX(ourDate) within each group is simply that group's own date, so the condition is always true. You can instead use:
SELECT MAX(ourDate) AS ourDate,
COUNT(*) KEEP (DENSE_RANK LAST ORDER BY ourDate) AS ourCount
FROM schema1.table1
or:
SELECT ourDate,
COUNT(*) AS our_count
FROM (
SELECT ourDate,
RANK() OVER (ORDER BY ourDate DESC) AS rnk
FROM schema1.table1
)
WHERE rnk = 1
GROUP BY ourDate
Which, for the sample data:
CREATE TABLE table1 (ourDate) AS
SELECT SYSDATE FROM DUAL CONNECT BY LEVEL <= 5 UNION ALL
SELECT SYSDATE - 1 FROM DUAL;
Both output:
OURDATE             | OUR_COUNT
-------------------------------
2022-06-28 13:35:01 | 5
I don't know if I understand what you want. Try this:
Select x.ourDate, Count(1) as OUR_COUNT
from schema1.table1 x
where x.ourDate = (select max(y.ourDate) from schema1.table1 y)
group by x.ourDate
One option is to use a subquery which fetches the maximum date:
select ourdate, count(*)
from table1
where ourdate = (select max(ourdate)
from table1)
group by ourdate;
Or, a more modern approach (if your database version supports it; 11g doesn't, though):
select ourdate, count(*)
from table1
group by ourdate
order by ourdate desc
fetch first 1 rows only;
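On 11g, where FETCH FIRST is not available, a similar effect can be had with an ordered inline view and ROWNUM; a sketch:
select ourdate, our_count
from (select ourdate, count(*) as our_count
      from table1
      group by ourdate
      order by ourdate desc)
where rownum = 1;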
You can use this SQL query:
select MAX(ourDate),COUNT(1) as OUR_COUNT
from schema1.table1
where ourDate = (select MAX(ourDate) from schema1.table1)
group by ourDate;

Reduce overload on pl/sql

I have a requirement to do matching on a few attributes, one by one. I'm looking to avoid multiple SELECT statements. Below is an example.
Table1
Col1|Price|Brand|size
-----------------------
A|10$|BRAND1|SIZE1
B|10$|BRAND1|SIZE1
C|30$|BRAND2|SIZE2
D|40$|BRAND2|SIZE4
Table2
Col1|Col2|Col3
--------------
B|XYZ|PQR
C|ZZZ|YYY
Table3
Col1|COL2|COL3|LIKECOL1|Price|brand|size
-----------------------------------------
B|XYZ|PQR|A|10$|BRAND1|SIZE1
C|ZZZ|YYY|D|NULL|BRAND2|NULL
In table3, I need to insert data from table2 by checking the conditions below:
Find a match for the table2 record where Brand, Size and Price all match
If no match is found, then try just Brand and Size
If still no match is found, try Brand only
In the example above, the first record in table2 finds a match on all 3 attributes and so is inserted into table3; for the second record, record 'D' matches, but only on 'Brand'.
All I can think of is writing 3 different INSERT statements like the ones below in an Oracle PL/SQL block.
insert into table3
select from tab2
where all 3 attributes are matching;
insert into table3
select from tab2
where brand and price are matching
and not exists in table3 (not exists is to avoid
inserting the same record which was already
inserted with all 3 attributes matched);
insert into table3
select from tab2
where Brand is matching and not exists in table3;
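For illustration, a concrete sketch of the first of those pseudo-inserts; the table3 column list is an assumption based on the sample data above, with size1 standing in for the reserved word SIZE, as in the answers below:
insert into table3 (col1, col2, col3, likecol1, price, brand, size1)
select t2.col1, t2.col2, t2.col3,
       m.col1, m.price, m.brand, m.size1
from table2 t2
join table1 t1 on t1.col1 = t2.col1    -- the row we are matching for
join table1 m  on m.col1 <> t1.col1    -- candidate match: all 3 attributes equal
              and m.brand = t1.brand
              and m.size1 = t1.size1
              and m.price = t1.price;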
Can anyone please suggest a better way to achieve this, avoiding selecting from table2 multiple times?
This is a case for OUTER APPLY.
OUTER APPLY is a type of lateral join that allows you to join to correlated inline views that refer to tables appearing earlier in your FROM clause. With that ability, you can define an inline view that finds all the matches, sorts them by the pecking order you've specified, and then use FETCH FIRST 1 ROW ONLY to include only the first one in the results.
Using OUTER APPLY means that if there is no match, you will still get the table2 record, just with all the match columns null. If you don't want that, you can change OUTER APPLY to CROSS APPLY.
Here is a working example (with step by step comments), shamelessly stealing the table creation scripts from Michael Piankov's answer:
create table Table1 (Col1, Price, Brand, size1) as
  select 'A','10','BRAND1','SIZE1' from dual union all
  select 'B','10','BRAND1','SIZE1' from dual union all
  select 'C','30','BRAND2','SIZE2' from dual union all
  select 'D','40','BRAND2','SIZE4' from dual;
create table Table2 (Col1, Col2, Col3) as
  select 'B','XYZ','PQR' from dual union all
  select 'C','ZZZ','YYY' from dual;
-- INSERT INTO table3
SELECT t2.col1, t2.col2, t2.col3,
       t1.col1 likecol1,
       decode(t1.price, t1_template.price, t1_template.price, null) price,
       decode(t1.brand, t1_template.brand, t1_template.brand, null) brand,
       decode(t1.size1, t1_template.size1, t1_template.size1, null) size1
FROM
     -- Start with table2
     table2 t2
     -- Get the row from table1 matching on col1... this is our search template
     inner join table1 t1_template
        on t1_template.col1 = t2.col1
     -- Get the best match from table1 for our search
     -- template, excluding the search template itself
     outer apply (
        SELECT *
        FROM table1 t1
        WHERE 1=1
          -- Exclude the search template itself
          and t1.col1 != t2.col1
          -- All matches include BRAND
          and t1.brand = t1_template.brand
        -- Order by match strength based on price and size
        order by case when t1.price = t1_template.price and t1.size1 = t1_template.size1 THEN 1
                      when t1.size1 = t1_template.size1 THEN 2
                      else 3 END
        -- Only get the best match for each row in T2
        FETCH FIRST 1 ROW ONLY) t1;
Unfortunately it is not clear what you mean by "match". What is your expectation if there is more than one match?
Should it be only the first match, or should all available pairs be generated?
Regarding your question of how to avoid multiple inserts, there is more than one way:
You could use a multitable insert with INSERT FIRST and conditions.
You could join table1 to itself to get all pairs and filter the results in the WHERE condition.
You could use analytic functions.
I suppose there are other ways as well. But why would you want to avoid 3 simple inserts? They are easy to read and maintain.
Here is an example with analytic functions:
create table Table1 (Col1, Price, Brand, size1) as
  select 'A','10','BRAND1','SIZE1' from dual union all
  select 'B','10','BRAND1','SIZE1' from dual union all
  select 'C','30','BRAND2','SIZE2' from dual union all
  select 'D','40','BRAND2','SIZE4' from dual;
create table Table2 (Col1, Col2, Col3) as
  select 'B','XYZ','PQR' from dual union all
  select 'C','ZZZ','YYY' from dual;
with s as (
select Col1,Price,Brand,size1,
count(*) over(partition by Price,Brand,size1 ) as match3,
count(*) over(partition by Price,Brand ) as match2,
count(*) over(partition by Brand ) as match1,
lead(Col1) over(partition by Price,Brand,size1 order by Col1) as like3,
lead(Col1) over(partition by Price,Brand order by Col1) as like2,
lead(Col1) over(partition by Brand order by Col1) as like1,
lag(Col1) over(partition by Price,Brand,size1 order by Col1) as like_desc3,
lag(Col1) over(partition by Price,Brand order by Col1) as like_desc2,
lag(Col1) over(partition by Brand order by Col1) as like_desc1
from Table1 t )
select t.Col1, t.Col2, t.Col3, coalesce(s.like3, s.like_desc3, s.like2, s.like_desc2, s.like1, s.like_desc1) as like_col,
case when match3 > 1 then size1 end as size1,
case when match1 > 1 then Brand end as Brand,
case when match2 > 1 then Price end as Price
from table2 t
left join s on s.Col1 = t.Col1
COL1 COL2 COL3 LIKE_COL SIZE1 BRAND PRICE
B XYZ PQR A SIZE1 BRAND1 10
C ZZZ YYY D - BRAND2 -

Using a single select statement to get the next row from a table or return the first row if the end of the table is reached

I have a table say STAFF like below:
STAFF_NAME
============
ALEX
BERNARD
CARL
DOMINIC
EMMA
Now, I want to write a stored function with a single argument. E.g. GET_NEXT_STAFF(CURRENT_STAFF).
The input and output should be like:
Input | Output
=====================
NULL | ALEX
ALEX | BERNARD
BERNARD | CARL
EMMA | ALEX (Start from the beginning of the table again)
I know how to handle this problem using PL/SQL, but is it possible to deal with this problem with a single select statement?
In the solution below, I assume the rows are ordered alphabetically by names. They may be ordered by another column in the same table (for example by hire date, or by salary, etc. - it doesn't matter) - then the name of that column should be used in the ORDER BY clause of the two analytic functions.
The input name is passed in as a bind variable, :input_staff_name. The solution uses pure SQL, with no need for functions (PL/SQL), but if you must make it into a function, you can adapt it easily.
Edit: In my original answer I missed the required behavior when the input is null. The last line of code (excluding the semicolon) takes care of that. As written currently, the query returns ALEX (or in general the first value in the table) when the input is null, and it returns no rows when the input is not null and not in the table. If instead the requirement is to return the first name when the input is null or not found in the table, then it can be accommodated easily by removing and :input_staff_name is null from the last line.
with
tbl ( staff_name ) as (
select 'ALEX' from dual union all
select 'BERNARD' from dual union all
select 'CARL' from dual union all
select 'DOMINIC' from dual union all
select 'EMMA' from dual
),
prep ( staff_name, next_name, first_name ) as (
select staff_name,
lead(staff_name) over (order by staff_name),
first_value (staff_name) over (order by staff_name)
from tbl
)
select nvl(next_name, first_name) as next_staff_name
from prep
where staff_name = :input_staff_name
or (next_name is null and :input_staff_name is null)
;
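If it does need to be a stored function, as in the original GET_NEXT_STAFF requirement, a minimal sketch wrapping the same query (assuming a real STAFF table with a STAFF_NAME column) could look like this:
create or replace function get_next_staff (p_current_staff in varchar2)
  return varchar2
is
  v_next staff.staff_name%type;
begin
  -- SELECT INTO raises NO_DATA_FOUND when the name is not in the table,
  -- mirroring the original query, which returns no rows in that case.
  select nvl(next_name, first_name)
    into v_next
    from (select staff_name,
                 lead(staff_name)        over (order by staff_name) as next_name,
                 first_value(staff_name) over (order by staff_name) as first_name
            from staff)
   where staff_name = p_current_staff
      or (next_name is null and p_current_staff is null);
  return v_next;
end get_next_staff;
/
With the sample data, get_next_staff('EMMA') and get_next_staff(null) would both return ALEX, and get_next_staff('ALEX') would return BERNARD.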
Based on the answer from #mathguy I have made a few changes that seem to work. I have added the following:
UNION ALL
SELECT NULL
FROM DUAL
and
WHERE NVL (staff_name, 'X') = NVL (:input_staff_name, 'X');
The full code
WITH tbl (staff_name) AS
(SELECT 'ALEX' FROM DUAL
UNION ALL
SELECT 'BERNARD' FROM DUAL
UNION ALL
SELECT 'CARL' FROM DUAL
UNION ALL
SELECT 'DOMINIC' FROM DUAL
UNION ALL
SELECT 'EMMA' FROM DUAL
UNION ALL
SELECT NULL
FROM DUAL),
prep (staff_name,
next_name,
first_name,
last_name) AS
(SELECT staff_name,
LEAD (staff_name) OVER (ORDER BY staff_name),
FIRST_VALUE (staff_name) OVER (ORDER BY staff_name),
LAG (staff_name) OVER (ORDER BY staff_name)
FROM tbl)
SELECT NVL (next_name, first_name) AS next_staff_name
FROM prep
WHERE NVL (staff_name, 'X') = NVL (:input_staff_name, 'X');

Oracle: transform wm_concat in an Analytic Function

I was reading that it should be really simple to use an aggregate function as an analytic function. I would like to use wm_concat in Oracle 10g as an analytic function.
It simply works as suggested!!! Really good thanks!
WITH temp as (
SELECT 'A' as master , 1 Col from dual
UNION SELECT 'A' , 3 from dual
UNION SELECT 'B' , 1 from dual
UNION SELECT 'B' , 2 from dual
UNION SELECT 'C' , 1 from dual)
SELECT
master,
wm_concat(Col) over (partition by master)
FROM
temp
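One caveat: WM_CONCAT is an undocumented function and was removed in Oracle 12c. From 11.2 onward (so not in the 10g release the question targets), LISTAGG can be used the same way as an analytic function; a sketch against the same sample data:
WITH temp as (
  SELECT 'A' as master, 1 Col from dual
  UNION SELECT 'A', 3 from dual
  UNION SELECT 'B', 1 from dual
  UNION SELECT 'B', 2 from dual
  UNION SELECT 'C', 1 from dual)
SELECT
  master,
  LISTAGG(Col, ',') WITHIN GROUP (ORDER BY Col) OVER (PARTITION BY master) AS cols
FROM
  temp;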
