Let's assume I extract some set of data.
i.e.
SELECT A, date
FROM table
I want just the record with the max date (for each value of A). I could write
SELECT A, col_date
FROM TABLENAME t_ext
WHERE col_date = (SELECT MAX (col_date)
FROM TABLENAME t_in
WHERE t_in.A = t_ext.A)
But my query is really long... is there a more compact way, using an analytic function, to do the same?
The analytic function approach would look something like
SELECT a, some_date_column
FROM (SELECT a,
some_date_column,
rank() over (partition by a order by some_date_column desc) rnk
FROM tablename)
WHERE rnk = 1
Note that depending on how you want to handle ties (or whether ties are possible in your data model), you may want to use either the ROW_NUMBER or the DENSE_RANK analytic function rather than RANK.
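For example, here is a ROW_NUMBER variant of the same sketch (same assumed table and column names) that returns exactly one row per a even when two rows share the max date; which of the tied rows you get is arbitrary unless you add a tiebreaker to the ORDER BY:
SELECT a, some_date_column
FROM (SELECT a,
             some_date_column,
             row_number() over (partition by a order by some_date_column desc) rnk
      FROM tablename)
WHERE rnk = 1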
If date and col_date are the same column, you should simply do:
SELECT A, MAX(date) FROM t GROUP BY A
Why not use:
WITH x AS ( SELECT A, MAX(col_date) m FROM TABLENAME GROUP BY A )
SELECT t.A, t.date FROM TABLENAME t JOIN x ON x.A = t.A AND x.m = t.col_date
Otherwise:
SELECT A, MAX(date) KEEP (DENSE_RANK FIRST ORDER BY col_date DESC)
FROM TABLENAME
GROUP BY A
You could also use:
SELECT t.*
FROM
TABLENAME t
JOIN
( SELECT A, MAX(col_date) AS col_date
FROM TABLENAME
GROUP BY A
) m
ON m.A = t.A
AND m.col_date = t.col_date
A is the key and max(date) is the value, so we might simplify the query as below:
SELECT distinct A, max(date) over (partition by A)
FROM TABLENAME
Justin Cave's answer is the best, but if you want another option, try this (note that it returns just the single most recent row overall, not one row per value of A):
select A,col_date
from (select A,col_date
from tablename
order by col_date desc)
where rownum<2
Since Oracle 12C, you can fetch a specific number of rows with FETCH FIRST ROW ONLY.
In your case this implies an ORDER BY, so the performance should be considered.
SELECT A, col_date
FROM TABLENAME t_ext
ORDER BY col_date DESC NULLS LAST
FETCH FIRST 1 ROW ONLY;
The NULLS LAST is just in case you have null values in that column.
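If you need the latest row for each value of A rather than a single row overall, one possible 12c-style sketch (assuming the same TABLENAME, A and col_date names) combines FETCH FIRST with CROSS APPLY:
SELECT d.A, l.col_date
FROM (SELECT DISTINCT A FROM TABLENAME) d
CROSS APPLY (SELECT t.col_date
             FROM TABLENAME t
             WHERE t.A = d.A
             ORDER BY t.col_date DESC NULLS LAST
             FETCH FIRST 1 ROW ONLY) l;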
SELECT mu_file, mudate
FROM flightdata t_ext
WHERE mudate = (SELECT MAX (mudate)
FROM flightdata where mudate < sysdate)
Related
I am trying to use a query to return the count of rows whose date matches the maximum date for that column in the table.
Oracle SQL, version 11.2:
The following syntax would seem to be correct (to me), and it compiles and runs. However, instead of returning JUST the count for the maximum, it returns several counts, more or less as if the HAVING clause weren't there.
Select ourDate, Count(1) as OUR_COUNT
from schema1.table1
group by ourDate
HAVING ourDate = max(ourDate) ;
How can this be fixed, please?
You can use:
SELECT MAX(ourDate) AS ourDate,
COUNT(*) KEEP (DENSE_RANK LAST ORDER BY ourDate) AS ourCount
FROM schema1.table1
or:
SELECT ourDate,
COUNT(*) AS our_count
FROM (
SELECT ourDate,
RANK() OVER (ORDER BY ourDate DESC) AS rnk
FROM schema1.table1
)
WHERE rnk = 1
GROUP BY ourDate
Which, for the sample data:
CREATE TABLE table1 (ourDate) AS
SELECT SYSDATE FROM DUAL CONNECT BY LEVEL <= 5 UNION ALL
SELECT SYSDATE - 1 FROM DUAL;
Both output:
OURDATE               OUR_COUNT
2022-06-28 13:35:01   5
I don't know if I understand what you want. Try this:
Select x.ourDate, Count(1) as OUR_COUNT
from schema1.table1 x
where x.ourDate = (select max(y.ourDate) from schema1.table1 y)
group by x.ourDate
One option is to use a subquery which fetches the maximum date:
select ourdate, count(*)
from table1
where ourdate = (select max(ourdate)
from table1)
group by ourdate;
Or, a more modern approach (if your database version supports it; 11g doesn't, though):
select ourdate, count(*)
from table1
group by ourdate
order by ourdate desc
fetch first 1 rows only;
You can use this SQL query:
select MAX(ourDate),COUNT(1) as OUR_COUNT
from schema1.table1
where ourDate = (select MAX(ourDate) from schema1.table1)
group by ourDate;
Please help me with the next problem: the result should be filtered by distinct iban_code.
You can use the ROW_NUMBER analytic function.
Select * from
(Select t.*,
Row_number()
over (partition by per_id, iban_code
order by main_bank_account desc) as rn
From your_table t)
Where rn=1;
Cheers!!
Table 1 has duplicate entries in column A with the same frequency values. I need to select one random record out of those. If a duplicate entry contains 'unknown' as the column B value (like in record "d"), select one of the other rows. I need a select statement which satisfies the above. Thanks.
These conditions can be prioritized using a CASE expression in the ORDER BY of a function like ROW_NUMBER.
select A,B,frequency,timekey
from (select t.*
,row_number() over(partition by A order by case when B = 'unknown' then 1 else 0 end, B) as rnum
from tbl t
) t
where rnum = 1
Here, for each group of rows with the same A, we prioritize rows other than B = 'unknown' first, and then order by the B values.
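As a minimal, hypothetical demonstration (the sample rows below are invented purely for illustration, and the name tbl matches the query above):
WITH tbl (A, B, frequency, timekey) AS (
  SELECT 'd', 'unknown', 10, 1 FROM dual UNION ALL
  SELECT 'd', 'x',       10, 2 FROM dual UNION ALL
  SELECT 'd', 'y',       10, 3 FROM dual
)
SELECT A, B, frequency, timekey
FROM (SELECT t.*,
             row_number() over(partition by A order by case when B = 'unknown' then 1 else 0 end, B) as rnum
      FROM tbl t)
WHERE rnum = 1;
This returns the B = 'x' row: non-'unknown' rows sort first, then alphabetically by B.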
Use the ROW_NUMBER analytic function. If you want to select a non-'unknown' record first, then use the query below:
select A, B, Frequency, timekey
from
(select
A, B, Frequency, timekey,
row_number() over(partition by A,Frequency order by case when B='unknown' then 1 else 0 end) rn
from table1  -- table name assumed; the question calls it "Table 1"
)s where rn=1
And if you want to select the 'unknown' record when it exists, use this ROW_NUMBER expression in the query above:
row_number() over(partition by A,Frequency order by case when B='unknown' then 0 else 1 end) rn
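Slotted into the full statement, that variant would look like this (a sketch only, with the same assumed table1 name as above):
select A, B, Frequency, timekey
from
(select
A, B, Frequency, timekey,
row_number() over(partition by A,Frequency order by case when B='unknown' then 0 else 1 end) rn
from table1  -- table name assumed, as above
)s where rn=1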
Hi, I have two tables like this:
First: Tcustcounselm
Second: Tcustcounseldt
Tcustcounseldt includes these columns: Counsel_Seq, Proc_Note, Proc_Date
Tcustcounselm includes these columns: Counsel_Seq, Cust_No, Proc_Date
I have customers and I want to retrieve the customer's proc_note (refund detail). Some customers have more than one refund detail; I only want the latest one.
This is my SQL code, but when I run it, it returns the same cust_no twice when I only want to see the latest row.
Select
A.Cust_No,
Max(To_Char(B.Proc_Date, 'yyyy/mm/dd hh24:mi:ss')) As Proc,
B.Proc_Note,
A.Counsel_Seq
FROM Tcustcounselm A,
Tcustcounseldt B
WHERE
A.Counsel_Seq = B.Counsel_Seq
--AND B.Do_Flag ='40'
AND A.Proc_Date BETWEEN TO_DATE('2013/07/08', 'YYYY/MM/DD') AND TO_DATE('2013/07/08', 'YYYY/MM/DD')+1
GROUP BY
A.Cust_No,
B.Proc_Note,
A.Counsel_Seq
ORDER BY 2 DESC;
I think MAX is the problem, so I tried different sample code, but I have the same problem:
SELECT A.Cust_No,
B.Proc_Note
FROM Tcustcounselm A ,
(SELECT Counsel_Seq,
Proc_Note,
Rank () Over (Partition By Counsel_Seq Order By Proc_Date Desc) As Priority
From Tcustcounseldt
) B
Where A.Counsel_Seq = B.Counsel_Seq
--And B.Priority = 1
AND A.Proc_Date BETWEEN TO_DATE('2013/07/08', 'YYYY/MM/DD') AND TO_DATE('2013/07/08', 'YYYY/MM/DD')+1;
This (or something similar) should work:
SELECT A.Cust_No,
To_Char(B.Proc_Date, 'yyyy/mm/dd hh24:mi:ss') As Proc,
B.Proc_Note,
A.Counsel_Seq
FROM Tcustcounselm A
INNER JOIN Tcustcounseldt B
ON A.Counsel_Seq = B.Counsel_Seq
WHERE (A.Cust_No, B.Proc_Date) IN ( SELECT A.Cust_No,
max(B.Proc_Date) PD
FROM Tcustcounselm A
INNER JOIN Tcustcounseldt B
ON A.Counsel_Seq = B.Counsel_Seq
GROUP BY A.Cust_No)
I had time to play with SQL Fiddle and I fixed the query; try to check it here.
Here is my query,
SELECT ID As Col1,
(
SELECT VID FROM TABLE2 t
WHERE (a.ID=t.ID or a.ID=t.ID2)
AND t.STARTDTE =
(
SELECT MAX(tt.STARTDTE)
FROM TABLE2 tt
WHERE (a.ID=tt.ID or a.ID=tt.ID2) AND tt.STARTDTE < SYSDATE
)
) As Col2
FROM TABLE1 a
Table1 has 48850 records and Table2 has 15944098 records.
I have separate indexes on TABLE2: ID; ID & STARTDTE; STARTDTE; ID, ID2 & STARTDTE.
The query is still too slow. How can this be improved? Please help.
I'm guessing that the OR in the inner queries is messing with the optimizer's ability to use indexes. Also, I wouldn't recommend a solution that would scan all of TABLE2, given its size.
This is why, in this case, I would suggest using a function that will efficiently retrieve the information you are looking for (two index scans per call):
CREATE OR REPLACE FUNCTION getvid(p_id table1.id%TYPE)
RETURN table2.vid%TYPE IS
l_result table2.vid%TYPE;
BEGIN
SELECT vid
INTO l_result
FROM (SELECT vid, startdte
FROM (SELECT vid, startdte
FROM table2 t
WHERE t.id = p_id
AND t.startdte < SYSDATE
ORDER BY t.startdte DESC)
WHERE rownum = 1
UNION ALL
SELECT vid, startdte
FROM (SELECT vid, startdte
FROM table2 t
WHERE t.id2 = p_id
AND t.startdte < SYSDATE
ORDER BY t.startdte DESC)
WHERE rownum = 1
ORDER BY startdte DESC)
WHERE rownum = 1;
RETURN l_result;
END;
Your SQL would become:
SELECT ID As Col1,
getvid(a.id) vid
FROM TABLE1 a
Make sure you have indexes on both table2(id, startdte DESC) and table2(id2, startdte DESC). The order of the columns in the index is very important.
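For instance (the index names here are just placeholders):
CREATE INDEX table2_id_startdte_ix  ON table2 (id,  startdte DESC);
CREATE INDEX table2_id2_startdte_ix ON table2 (id2, startdte DESC);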
Possibly try the following, though untested.
WITH max_times AS
(SELECT a.ID, MAX(t.STARTDTE) AS Startdte
FROM TABLE1 a, TABLE2 t
WHERE (a.ID=t.ID OR a.ID=t.ID2)
AND t.STARTDTE < SYSDATE
GROUP BY a.ID)
SELECT b.ID As Col1, tt.VID
FROM TABLE1 b
LEFT OUTER JOIN max_times mt
ON (b.ID = mt.ID)
LEFT OUTER JOIN TABLE2 tt
ON ((mt.ID=tt.ID OR mt.ID=tt.ID2)
AND mt.startdte = tt.startdte)
You can look at analytic functions to avoid having to hit the second table twice. Something like this might work:
SELECT id AS col1, vid
FROM (
SELECT t1.id, t2.vid, RANK() OVER (PARTITION BY t1.id ORDER BY
CASE WHEN t2.startdte < TRUNC(SYSDATE) THEN t2.startdte ELSE null END
DESC NULLS LAST) AS rn
FROM table1 t1
JOIN table2 t2 ON t1.id IN (t2.id, t2.id2)
)
WHERE rn = 1;
The inner select gets the id and vid values from the two tables with a simple join on id or id2. The rank function calculates a ranking for each matching row in the second table based on the startdte. It's complicated a bit by you wanting to filter on that date, so I've used a case to effectively ignore any dates today or later by changing the evaluated value to null, and in this instance that means the order by in the over clause needs nulls last so they're ignored.
I'd suggest you run the inner select on its own first - maybe with just a couple of id values for brevity - to see what its doing, and what ranks are being allocated.
The outer query is then just picking the top-ranked result for each id.
You may still get duplicates though; if table2 has more than one row for an id with the same startdte they'll get the same rank, but then you may have had that situation before. You may need to add more fields to the order by to break ties in a way that makes sense to you.
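For example (a sketch only; t2.vid is used here as an arbitrary final tiebreaker, and switching to ROW_NUMBER guarantees exactly one row per id):
SELECT id AS col1, vid
FROM (
SELECT t1.id, t2.vid, ROW_NUMBER() OVER (PARTITION BY t1.id ORDER BY
CASE WHEN t2.startdte < TRUNC(SYSDATE) THEN t2.startdte ELSE null END
DESC NULLS LAST, t2.vid) AS rn
FROM table1 t1
JOIN table2 t2 ON t1.id IN (t2.id, t2.id2)
)
WHERE rn = 1;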
But this is largely speculation without being able to see where your existing query is actually slow.