My situation is similar to this, but I want to create a sequence number for repeated values only.
Table: MyTable
-----------------------
ID CODE
1 100
2 100
3 200
4 200
5 200
6 300
Below is my query:
SELECT ID, CODE, (row_number() over (partition by CODE order by ID)) as SEQ from MyTable
And this is my current result:
ID CODE SEQ
1 100 1
2 100 2
3 200 1
4 200 2
5 200 3
6 300 1
But my expected result is:
ID CODE SEQ
1 100 1
2 100 2
3 200 1
4 200 2
5 200 3
6 300
For now I do some post-processing in code to modify my current result, but I want to ask: is there any way to generate the expected result via the query only?
You can add a CASE on COUNT(1) OVER (PARTITION BY CODE) to your query; see the sample query below.
WITH MyTable as (
    select 1 id, 100 code from dual union all
    select 2 id, 100 code from dual union all
    select 3 id, 200 code from dual union all
    select 4 id, 200 code from dual union all
    select 5 id, 200 code from dual union all
    select 6 id, 300 code from dual
)
SELECT ID, CODE,
       CASE COUNT(1) over (partition by CODE)
            WHEN 1 THEN NULL
            ELSE row_number() over (partition by CODE order by ID)
       END as SEQ
from MyTable;
ID CODE SEQ
---------- ---------- ----------
1 100 1
2 100 2
3 200 1
4 200 2
5 200 3
6 300
6 rows selected
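Applied directly to your MyTable (dropping the WITH test data), the final query would be just your original SELECT with the CASE wrapped around the ROW_NUMBER, something like:
SELECT ID,
       CODE,
       CASE COUNT(1) OVER (PARTITION BY CODE)
            WHEN 1 THEN NULL               -- single occurrence: leave SEQ empty
            ELSE ROW_NUMBER() OVER (PARTITION BY CODE ORDER BY ID)
       END AS SEQ
FROM MyTable;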
I believe there is probably an easy way to solve these problems with pivots or partitions, but I can't seem to find the proper solutions. I have a roundabout solution for Problem 1 using a long list of SELECT SUM()s, and a long solution for Problem 2 where I select the COUNT(*) from Table B where the id matches an id from Table A, repeated in multiple (SELECT) blocks. With a large number of IDs both of those solutions turn into very long, tedious SQL, and I'm sure there is a better way that is just eluding me.
I would really like solutions that allow me to include a large set of IDs, or to supply the solution with a table of IDs to evaluate.
Problem 1:
Table:
------------------
ID DESC YEAR
1 A 2021
1 B 2021
1 C 2021
2 A 2021
2 B 2021
2 C 2021
3 A 2019
3 B 2019
I would like to have the count of the IDs for each DESC, by year.
Expected Result:
------------------
Year CountA CountB CountC
2019 1 1 0
2021 2 2 2
Problem 2:
Table A:
------------------
ID DESC
1 A
2 B
3 C
Table B:
------------------
SET ID
10 1
10 1
12 1
13 2
14 3
I would like to see (1) how many of each ID from Table A can be found in each SET in Table B, and (2) how many of each ID from Table A can be found in each SET in Table B and not in any other SET of Table B (unique matches).
Expected Result 1:
------------------
ID Count10 Count12 Count13 Count14
1 2 1 0 0
2 0 0 1 0
3 0 0 0 1
Expected Result 2:
------------------
ID UniqueCount10 UniqueCount12 UniqueCount13 UniqueCount14
1 0 0 0 0
2 0 0 1 0
3 0 0 0 1
Thank you for any and all assistance.
All three problems (counting Problem 2 as two different problems) can be solved with pivoting, although it is not clear what purpose Result 2 would serve in the second problem (see my comments to you).
Note that desc and set are reserved keywords, and year is a keyword, so they shouldn't be used as column names. I changed to descr, set_ (with an underscore) and yr. Also, I do not use double-quotes for column names in the output; all-caps column names are just fine.
In the second problem it is not clear why you need Table A. Could you have some id values that don't appear at all in Table B, but you still want them in the final output? If so, you will need to change my semi-joins to outer joins; left as an exercise, since it's a different (and much more basic) type of question.
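If you do need those ids, a minimal sketch of one possible outer-join variant (using the table and column names set up below: table_a, table_b, set_, and assuming ids missing from table_b should show all-zero counts) is to keep the PIVOT as-is and LEFT JOIN table_a to its result, with NVL turning the missing counts into zeros:
select a.id,
       nvl(p.count_10, 0) as count_10,
       nvl(p.count_12, 0) as count_12,
       nvl(p.count_13, 0) as count_13,
       nvl(p.count_14, 0) as count_14
from table_a a
left join (
  select *
  from table_b
  pivot (count(*) for set_ in (10 as count_10, 12 as count_12,
                               13 as count_13, 14 as count_14))
) p on p.id = a.id   -- ids with no rows in table_b still appear, with zero counts
order by a.id;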
In the first problem, you must pivot the result of a subquery, which selects only the relevant columns from the base table. There is no such need for the second problem (unless your tables have other columns that should not be considered - left for you to figure out).
Problem 1
Data:
create table tbl (id, descr, yr) as
select 1, 'A', 2021 from dual union all
select 1, 'B', 2021 from dual union all
select 1, 'C', 2021 from dual union all
select 2, 'A', 2021 from dual union all
select 2, 'B', 2021 from dual union all
select 2, 'C', 2021 from dual union all
select 3, 'A', 2019 from dual union all
select 3, 'B', 2019 from dual
;
Query and output:
select *
from (select descr, yr from tbl)
pivot (count(*) for descr in ('A' as count_a, 'B' as count_b, 'C' as count_c))
order by yr
;
YR COUNT_A COUNT_B COUNT_C
---- ------- ------- -------
2019 1 1 0
2021 2 2 2
Problem 2
Data:
create table table_a (id, descr) as
select 1, 'A' from dual union all
select 2, 'B' from dual union all
select 3, 'C' from dual
;
create table table_b (set_, id) as
select 10, 1 from dual union all
select 10, 1 from dual union all
select 12, 1 from dual union all
select 13, 2 from dual union all
select 14, 3 from dual
;
Part 1 - Query and result:
select *
from table_b
pivot (count(*) for set_ in (10 as count_10, 12 as count_12,
13 as count_13, 14 as count_14))
where id in (select id from table_a) -- is this needed?
order by id -- if needed
;
ID COUNT_10 COUNT_12 COUNT_13 COUNT_14
-- -------- -------- -------- --------
1 2 1 0 0
2 0 0 1 0
3 0 0 0 1
Part 2 - Query and result:
select *
from (
select id, case count(distinct set_) when 1 then max(set_) end as set_
from table_b
where id in (select id from table_a) -- is this needed?
group by id
)
pivot (count(*) for set_ in (10 as unique_ct_10, 12 as unique_ct_12,
13 as unique_ct_13, 14 as unique_ct_14))
order by id -- if needed
;
ID UNIQUE_CT_10 UNIQUE_CT_12 UNIQUE_CT_13 UNIQUE_CT_14
-- ------------ ------------ ------------ ------------
1 0 0 0 0
2 0 0 1 0
3 0 0 0 1
In this last part of the second problem, you might as well just take the subquery and run it separately - what's the purpose of pivoting its output?
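For reference, that subquery on its own would be:
select id, case count(distinct set_) when 1 then max(set_) end as set_
from table_b
where id in (select id from table_a) -- is this needed?
group by id
order by id -- if needed
;
It returns, for each id, the single set_ that contains it when there is exactly one distinct set_, and NULL otherwise.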
How can I write a query that creates groups wherever the gap between consecutive values is greater than "n"?
Data:
01-01-2000
02-01-2000
03-01-2000
06-01-2000
07-01-2000
19-02-2001
10-01-2002
11-01-2002
I would like to group the records by the interval between them, e.g. for an interval of 2 days:
DATE GROUP
01-01-2000 1
02-01-2000 1
03-01-2000 1
06-01-2000 2
07-01-2000 2
19-02-2001 3
10-01-2002 4
11-01-2002 4
For 10 days:
01-01-2000 1
02-01-2000 1
03-01-2000 1
06-01-2000 1
07-01-2000 1
19-02-2001 2
10-01-2002 3
11-01-2002 3
Another example with integers:
with x as (
select 1 as A from dual
union all
select 2 as A from dual
union all
select 3 as A from dual
union all
select 10 as A from dual
union all
select 20 as A from dual
union all
select 22 as A from dual
union all
select 33 as A from dual
union all
select 40 as A from dual
union all
select 50 as A from dual
union all
select 100 as A from dual
union all
select 101 as A from dual
union all
select 102 as A from dual
) select A
from x;
I need to start a new group whenever the value increases by more than 3:
Example result:
1 1
2 1
3 1
10 2
20 3
22 3
33 4
40 5
50 6
100 7
101 7
102 7
Here is one way to do it:
CREATE TABLE TEST (
DATE_IN DATE
);
INSERT INTO TEST VALUES (TO_DATE('01-01-2000','DD-MM-YYYY'));
INSERT INTO TEST VALUES (TO_DATE('02-01-2000','DD-MM-YYYY'));
INSERT INTO TEST VALUES (TO_DATE('03-01-2000','DD-MM-YYYY'));
INSERT INTO TEST VALUES (TO_DATE('06-01-2000','DD-MM-YYYY'));
INSERT INTO TEST VALUES (TO_DATE('07-01-2000','DD-MM-YYYY'));
INSERT INTO TEST VALUES (TO_DATE('19-02-2001','DD-MM-YYYY'));
INSERT INTO TEST VALUES (TO_DATE('10-01-2002','DD-MM-YYYY'));
INSERT INTO TEST VALUES (TO_DATE('11-01-2002','DD-MM-YYYY'));
-- Here is an example for a 1-day gap. Just change the comparison > 1 to > 10
-- if you want to start a new group whenever there is a gap of more than 10 days.
SELECT DATE_IN, SUM(NEW_GROUP) OVER (ORDER BY DATE_IN) AS GROUPE
FROM (
    SELECT DATE_IN,
           CASE WHEN DATE_IN - LAG(DATE_IN, 1, TO_DATE('01-01-1900','DD-MM-YYYY'))
                               OVER (ORDER BY DATE_IN) > 1
                THEN 1 ELSE 0 END AS NEW_GROUP
    FROM TEST
);
-- Result
DATE_IN GROUPE
2000-01-01T00:00:00Z 1
2000-01-02T00:00:00Z 1
2000-01-03T00:00:00Z 1
2000-01-06T00:00:00Z 2
2000-01-07T00:00:00Z 2
2001-02-19T00:00:00Z 3
2002-01-10T00:00:00Z 4
2002-01-11T00:00:00Z 4
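For the 2-day grouping asked for in the question, the same pattern with the comparison changed to > 2 should reproduce the first expected result; a sketch:
SELECT DATE_IN, SUM(NEW_GROUP) OVER (ORDER BY DATE_IN) AS GROUPE
FROM (
    SELECT DATE_IN,
           -- start a new group when the gap to the previous date exceeds 2 days
           CASE WHEN DATE_IN - LAG(DATE_IN, 1, TO_DATE('01-01-1900','DD-MM-YYYY'))
                               OVER (ORDER BY DATE_IN) > 2
                THEN 1 ELSE 0 END AS NEW_GROUP
    FROM TEST
);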
Example with integers:
with x as (
select 1 as A from dual
union all
select 2 as A from dual
union all
select 3 as A from dual
union all
select 10 as A from dual
union all
select 20 as A from dual
union all
select 22 as A from dual
union all
select 33 as A from dual
union all
select 40 as A from dual
union all
select 50 as A from dual
union all
select 100 as A from dual
union all
select 101 as A from dual
union all
select 102 as A from dual
) SELECT A, SUM(NEW_GROUP) OVER (ORDER BY A) AS GROUPE
  FROM (
    SELECT A,
           -- LAG default is far below any value so the first row also starts group 1;
           -- the gap threshold goes in the comparison (> 3 for the question's example
           -- gives the same groups on this data)
           CASE WHEN A - LAG(A, 1, -1000) OVER (ORDER BY A) > 5 THEN 1 ELSE 0 END AS NEW_GROUP
    FROM x
  )
  ORDER BY A;
I have a simple table with values (ID) in groups (GRP_ID).
create table tst as
select 1 grp_id, 1 id from dual union all
select 1 grp_id, 1 id from dual union all
select 1 grp_id, 2 id from dual union all
select 2 grp_id, 1 id from dual union all
select 2 grp_id, 2 id from dual union all
select 2 grp_id, 2 id from dual union all
select 3 grp_id, 3 id from dual;
It is straightforward to find a maximum value per group using analytical functions.
select grp_id, id,
max(id) over (partition by grp_id) max_grp
from tst
order by 1,2;
GRP_ID ID MAX_GRP
---------- ---------- ----------
1 1 2
1 1 2
1 2 2
2 1 2
2 2 2
2 2 2
3 3 3
But the goal is to find the maximum value excluding the value of the current row.
This is the expected result (column MAX_OTHER_ID):
GRP_ID ID MAX_GRP MAX_OTHER_ID
---------- ---------- ---------- ------------
1 1 2 2
1 1 2 2
1 2 2 1
2 1 2 2
2 2 2 2
2 2 2 2
3 3 3
Note that in GRP_ID = 2 there is a tie on the MAX value, so MAX_OTHER_ID remains the same.
I did manage this two-step solution, but I'm wondering if there is a more straightforward, simpler solution.
with max1 as (
select grp_id, id,
row_number() over (partition by grp_id order by id desc) rn
from tst
)
select GRP_ID, ID,
case when rn = 1 /* MAX row per group */ then
max(decode(rn,1,to_number(null),id)) over (partition by grp_id)
else
max(id) over (partition by grp_id)
end as max_other_id
from max1
order by 1,2
;
I wish window functions supported multiple range specifications, something like:
max(id) over (
partition by grp_id
order by id
range between unbounded preceding and 1 preceding
or range between 1 following and unbounded following
)
But unfortunately they don't.
As a workaround, you can avoid subqueries and CTEs by using the function twice over different ranges and applying COALESCE to the results.
select grp_id,
id,
coalesce(
max(id) over (
partition by grp_id
order by id
range between 1 following and unbounded following
)
, max(id) over (
partition by grp_id
order by id
range between unbounded preceding and 1 preceding
)
) max_grp
from tst
order by 1, 2
COALESCE works out of the box because, thanks to the ordering clause, each window function call returns either the max over its window or NULL (when the window is empty).
Demo - http://rextester.com/SDXVF13962
SELECT GRP_ID, ID,
       (SELECT MAX(ID) FROM TST A
        WHERE A.ROWID <> B.ROWID AND A.GRP_ID = B.GRP_ID) AS MAX_OTHER_ID
FROM TST B;
This gets the expected result with a correlated subquery. Hope this helps.
I have the following data:
ano asal
------------------
1 100
1 150
1 190
2 200
2 240
3 300
3 350
4 400
4 400
4 400
I want the maximum asal for each ano (1, 2, 3 and 4).
Expected output:
ano asal
---------------
1 190
2 240
3 350
4 400
4 400
4 400
You want to return the max value of asal for each ano group, but you want to retain the duplicates in the original table if they exist. This means you can't just do a simple GROUP BY. But you can use a GROUP BY query to identify the max values and then retain those records via an INNER JOIN. Try this query:
SELECT t1.ano, t1.asal
FROM yourTable t1
INNER JOIN
(
SELECT ano, MAX(asal) AS asal
FROM yourTable
GROUP BY ano
) t2
ON t1.ano = t2.ano AND t1.asal = t2.asal
You can use a UNION: the first SELECT uses GROUP BY for the other groups, and the second returns the ano = 4 rows as they are.
select ano, max(asal)
from my_table
where ano != 4
group by ano
union all
select ano, asal
from my_table
where ano = 4
order by ano
SELECT ano, asal
FROM (
    SELECT data.*,
           MAX(asal) OVER (PARTITION BY ano) AS max_asal  -- group maximum repeated on every row
    FROM data
)
WHERE asal = max_asal  -- keep only rows equal to their group's maximum (duplicates retained)
Maybe this question is a duplicate of another; I already explored a couple of similar questions here, but I didn't find one like it. Please suggest a link if you find a similar question.
My problem is this: I have a table, say CLIENTS, as below.
BRANCH CLNTID ACCNT FACID
------ ---------- ---------- ----------
201 10001 123400 110021
201 10001 123401
201 10001 123402 110023
201 10001 123403
201 10001 123404 110025
201 10001 123405
201 10001 123406 110027
201 10001 123407 110028
and so on, for many rows.
Now I want to write a query that gives output like this:
Branch clntid facid_null facid_not_null
201 10001 3 5
I want to count the facid column, both where facid is null and where facid is not null, for each branch and each clntid.
I wrote the query below, but it fetches only one of the counts (either where facid is null or where it is not null).
select branch,clntid,count(*)
from clnt
where facid is null
group by branch, clntid;
Please help me find both counts in a single query, using GROUP BY as well as OVER (PARTITION BY) clauses.
Thanks in advance.
Vivek.
select branch
      ,clntid
      ,count(*) as num_rows
      ,count(facid) as not_nulls          -- COUNT(col) only counts non-null values
      ,count(*) - count(facid) as nulls   -- total rows minus non-nulls = nulls
from clnt
group by branch
        ,clntid;
The aggregate function COUNT counts all non-null occurrences, so you can simply use COUNT(facid) for the facid_not_null column. For the facid_null column you can use a similar technique, first swapping null and not null with a CASE expression. Here is a working example:
create table clnt (branch, clntid, accnt, facid) as
select 201, 10001, 123400, 110021 from dual union all
select 201, 10001, 123401, null   from dual union all
select 201, 10001, 123402, 110023 from dual union all
select 201, 10001, 123403, null   from dual union all
select 201, 10001, 123404, 110025 from dual union all
select 201, 10001, 123405, null   from dual union all
select 201, 10001, 123406, 110027 from dual union all
select 201, 10001, 123407, 110028 from dual;
Table created.
select branch
     , clntid
     , count(case when facid is null then 1 end) facid_null
     , count(facid) facid_not_null
from clnt
group by branch
       , clntid;
BRANCH CLNTID FACID_NULL FACID_NOT_NULL
---------- ---------- ---------- --------------
201 10001 3 5
1 row selected.
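Since the question also asked about OVER (PARTITION BY ...), here is a sketch of the analytic flavour; it keeps every detail row and repeats both counts on each of them (add DISTINCT, or wrap it in a subquery, if you want one row per branch/clntid):
select branch,
       clntid,
       accnt,
       facid,
       count(case when facid is null then 1 end)
         over (partition by branch, clntid) as facid_null,      -- nulls per group
       count(facid)
         over (partition by branch, clntid) as facid_not_null   -- non-nulls per group
from clnt;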