I have created a function that pivots my current table so that I can see which of my students enjoy which sports. I am using Oracle DB. After the pivoting, each sport is in its own column, flagged as 1 if a student enjoys that specific sport.
Year Student Basketball Baseball Golf Soccer
2019 Michael 1 NA 1 NA
2018 Jason NA NA 1 NA
2017 Sarah 1 1 1 NA
2016 Michelle NA NA NA NA
I want to grab the count for the sports columns to see how many students flagged that respective sport.
select
SUM(CASE WHEN Basketball=1 THEN 1 ELSE 0 END) as NBA,
SUM(CASE WHEN Baseball=1 THEN 1 ELSE 0 END) as MLB,
SUM(CASE WHEN Golf=1 THEN 1 ELSE 0 END) as PGA,
SUM(CASE WHEN Soccer=1 THEN 1 ELSE 0 END) as MLS
from students_sports
group by year
order by 1;
I have tried the syntax above but no luck. I have also tried the syntax below.
select
sum(nvl("Basketball", 0)) as NBA,
sum(nvl("Baseball", 0)) as MLB,
etc....
When I run this query, I get the total count of rows rather than the count of rows flagged as 1. Any thoughts?
The first syntax is wrong. Your columns Basketball, Baseball, Golf and Soccer are VARCHAR, since you are storing NA in them rather than numbers, but CASE WHEN Basketball=1 performs a numeric comparison. So change your query as follows and let us know if it works.
**Query**
SELECT SUM (CASE WHEN Basketball = '1' THEN 1 ELSE 0 END) AS NBA
      ,SUM (CASE WHEN Baseball = '1' THEN 1 ELSE 0 END) AS MLB
      ,SUM (CASE WHEN Golf = '1' THEN 1 ELSE 0 END) AS PGA
      ,SUM (CASE WHEN Soccer = '1' THEN 1 ELSE 0 END) AS MLS
FROM students_sports
GROUP BY year
ORDER BY 1;
Example:
https://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=9c6cc6b067cc128b699316cb299769e3
This should give you the result as 1 or 0. Let us know how it goes. I am not sure what you are doing with that SUM, though; it does not make sense to me, as that sum on its own will be worth nothing.
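As a variation (just a sketch, assuming the same students_sports table with the VARCHAR columns described above), COUNT over a CASE expression with no ELSE branch gives the same totals, because COUNT ignores the NULLs produced for the non-matching rows:
SELECT year,
       COUNT(CASE WHEN Basketball = '1' THEN 1 END) AS NBA,
       COUNT(CASE WHEN Baseball = '1' THEN 1 END) AS MLB,
       COUNT(CASE WHEN Golf = '1' THEN 1 END) AS PGA,
       COUNT(CASE WHEN Soccer = '1' THEN 1 END) AS MLS
FROM students_sports
GROUP BY year
ORDER BY year;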
I have 2 tables as follows:
Case_No.  Month    Month_Prev  Code  Stage  Code_Prev  Stage_Prev  Status
1         2022.09  2022.08     b     2      a          1           1
2         2022.09  2022.08     a     2      b          1           1
and
Month    Code  Stage  Rate  Status
2022.09  a     1      0.2   1
2022.09  a     2      0.1   1
2022.09  b     1      0.3   1
2022.09  b     2      0.1   1
2022.08  a     1      0.3   1
2022.08  a     2      0.2   1
2022.08  b     1      0.15  1
2022.08  b     2      0.25  1
My desired output:
Case_No.  Month    Month_Prev  Code  Stage  Code_Prev  Stage_Prev  Status  Rate  Rate_Prev
1         2022.09  2022.08     b     2      a          1           1       0.1   0.3
2         2022.09  2022.08     a     2      b          1           1       0.1   0.15
Basically, I want to obtain the rate corresponding to each individual set of {Month, Code, Stage, Status} and {Month_Prev, Code_Prev, Stage_Prev, Status}, and I'm using Oracle. Can anyone help?
Well, you have already shown the keys for the join, so simply apply them. You'll have to join the second table twice, once for Month, once for Month_Prev.
select
t1.*,
this.rate,
prev.rate as prev_rate
from t1
join t2 this on this.month = t1.month and this.code = t1.code and this.stage = t1.stage and this.status = t1.status
join t2 prev on prev.month = t1.month_prev and prev.code = t1.code and prev.stage = t1.stage and prev.status = t1.status
order by t1.month, t1.code, t1.stage, t1.status;
(In case there can be t1 rows without a match in t2 and you still want to show the row without a rate, then change the inner joins to left outer joins.)
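For illustration, the outer-join variant mentioned in that note would look like this (a sketch reusing the same table and column names as the query above):
select
t1.*,
this.rate,
prev.rate as prev_rate
from t1
left join t2 this on this.month = t1.month and this.code = t1.code and this.stage = t1.stage and this.status = t1.status
left join t2 prev on prev.month = t1.month_prev and prev.code = t1.code and prev.stage = t1.stage and prev.status = t1.status
order by t1.month, t1.code, t1.stage, t1.status;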
Imagine I have a table with people and their features:
group Name red_hair tall blue_eyes programmer
1 Mark 1 1 0 1
1 Sean 1 0 1 0
1 Lucas 1 1 1 1
2 Linda 0 1 1 1
I would like to count how many people of specific sets of features are in every group. In other words, I would like to make some bins without counting a person multiple times.
There are 2^4 (16) possible combinations of those features, but I don't need that many.
For example, if a person has red_hair I don't care whether he or she has blue eyes or is a programmer. This person goes into the red hair bin of this group.
If a person is a programmer I don't care whether he or she is tall, but I don't want to count people who are already in the red hair bin, because I have already counted them.
So I have a priority:
Red hair people counts first
Programmers second
People with blue eyes third
Expected result of this dataset:
group red_hair_persons programmers blue_eyes_persons
1 3 0 0
2 0 1 0
when I do this:
select group, count(case when red_hair = 1 then name end) as red_hair,
count(case when programmer = 1 and red_hair = 0 then name end) as programmers
from table
group by group
I fear that there would be some intersections. Or the logic with CASES would be so complex I could drown in it.
Am I right?
If so how could I avoid them? Maybe I am doing everything wrong and there is a better way to do what I want to. I have an enormous table with many features in it and I don't want to screw up.
Here's how I understood it:
SQL> with test (cgroup, name, red_hair, tall, blue_eyes, programmer) as
2 (select 1, 'mark' , 1, 1, 0, 1 from dual union all
3 select 1, 'sean' , 1, 0, 1, 0 from dual union all
4 select 1, 'lucas', 1, 1, 1, 1 from dual union all
5 select 2, 'linda', 0, 1, 1, 1 from dual
6 ),
7 priority as
8 (select t.*,
9 case when red_hair = 1 then 'A'
10 when programmer = 1 then 'B'
11 when blue_eyes = 1 then 'C'
12 else 'D'
13 end priority
14 from test t
15 )
16 select cgroup,
17 sum(case when priority = 'A' then 1 else 0 end) red_hair,
18 sum(case when priority = 'B' then 1 else 0 end) programmer,
19 sum(case when priority = 'C' then 1 else 0 end) blue_eyes,
20 sum(case when priority = 'D' then 1 else 0 end) other
21 from priority
22 group by cgroup;
CGROUP RED_HAIR PROGRAMMER BLUE_EYES OTHER
---------- ---------- ---------- ---------- ----------
1 3 0 0 0
2 0 1 0 0
SQL>
The priority CTE puts every person into their priority group, based on their properties.
The final SELECT then counts them per group, using SUM + CASE.
With a little bit of simple math involved in the conditional aggregation:
select "group",
sum("red_hair") red_hair_persons,
sum((1 - "red_hair") * "programmer") programmers,
sum((1 - "red_hair") * (1 - "programmer") * "blue_eyes") blue_eyes_persons
from tablename
group by "group"
See the demo.
Results:
group  RED_HAIR_PERSONS  PROGRAMMERS  BLUE_EYES_PERSONS
1      3                 0            0
2      0                 1            0
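If more bins are needed later, the same pattern extends by multiplying in one extra (1 - feature) factor per higher-priority bin. For example (a sketch reusing the existing tall column as a hypothetical fourth, lowest-priority bin):
sum((1 - "red_hair") * (1 - "programmer") * (1 - "blue_eyes") * "tall") tall_persons
Each factor is 1 only when the person was not caught by a higher-priority bin, so nobody is counted twice.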
Here is what the data looks like.
Name P_ID NUM
A P1 3
A P2 1
B P3 1
B P4 1
C P5 2
D P7 1
In BI Answers I want the result show like this:
Name NUM_OF_1 NUM_OF_2 NUM_OF_3 SUM
A 1 0 1 2
B 2 0 0 2
C 0 1 0 1
D 1 0 0 1
Each NUM_OF_N column is the number of occurrences of N within a Name group.
If you are looking for a SQL query then you can try the following pivot:
SELECT Name,
SUM(CASE WHEN NUM = 1 THEN 1 ELSE 0 END) AS NUM_OF_1,
SUM(CASE WHEN NUM = 2 THEN 1 ELSE 0 END) AS NUM_OF_2,
SUM(CASE WHEN NUM = 3 THEN 1 ELSE 0 END) AS NUM_OF_3,
COUNT(*) AS "SUM"
FROM yourTable
GROUP BY Name
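If you are on Oracle 11g or later, the dedicated PIVOT clause is another option (a sketch, assuming the same table name yourTable; note that it does not produce the extra SUM column, so the conditional aggregation above is the more direct fit here):
SELECT *
FROM (SELECT Name, NUM FROM yourTable)
PIVOT (COUNT(*) FOR NUM IN (1 AS NUM_OF_1, 2 AS NUM_OF_2, 3 AS NUM_OF_3))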
Tim has got it nailed in terms of SQL. In terms of pure OBI development, though, you should put that logic into logical (measure) columns in your RPD, so the BI server treats them as such and you can use them automatically with all the usual functionality like drill, aggregate, etc.
I am having trouble solving this. I am supposed to be getting a record every time there is a change to an account in our data warehouse, but I am only receiving one. The table below is a sample of what I am working with.
Row Acct1 Acct2 Date Total_Reissued Reissue_Per_Day
1 A 1 1/1/2016 2 2
2 A 1 1/2/2016 3 1
3 A 1 1/3/2016 5 2
4 A 1 1/4/2016 6 1
1 B 3 1/1/2016 1 1
2 B 3 1/2/2016 2 1
1 B 4 1/1/2016 1 1
2 B 4 1/2/2016 2 1
The Total_Reissued column is a running total. For Acct A on 1/1/2016 there were 2 reissues, then on 1/2/2016 there was 1 more, making a total of 3. My problem is calculating the actual number of reissues per day.
You can use the lag() function to peek back at the previous row; assuming that 'previous' is the last date you saw for the acct1/acct2 combination you can do:
select row_number() over (partition by acct1, acct2 order by dt) as row_num,
acct1, acct2, dt, total_reissued,
total_reissued - nvl(lag(total_reissued)
over (partition by acct1, acct2 order by dt), 0) as reissue_per_day
from your_table;
ROW_NUM A ACCT2 DT TOTAL_REISSUED REISSUE_PER_DAY
---------- - ---------- ---------- -------------- ---------------
1 A 1 2016-01-01 2 2
2 A 1 2016-01-02 3 1
3 A 1 2016-01-03 5 2
4 A 1 2016-01-04 6 1
1 B 3 2016-01-01 1 1
2 B 3 2016-01-02 2 1
1 B 4 2016-01-01 1 1
2 B 4 2016-01-02 2 1
I'm not sure if your 'row' column actually exists, or is required, or was just to illustrate your data. I've generated it anyway, in case you need it.
The main bit of interest is:
lag(total_reissued) over (partition by acct1, acct2 order by dt)
which finds the previous date's value (using dt as a column name, since date isn't a valid name). That then has an nvl() wrapper so the first row sees a dummy value of zero instead of null. And then that is subtracted from the current row's value to get the difference.
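As a small variation (a sketch with the same columns), lag() also accepts an explicit default for the first row in each partition, which makes the nvl() wrapper unnecessary:
total_reissued - lag(total_reissued, 1, 0)
                   over (partition by acct1, acct2 order by dt) as reissue_per_day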
Suppose we are having the following data:
Key Value Desired Rank
--- ----- ------------
P1 0.6 2
P1 0.6 2
P1 0.6 2
P2 0.8 1
P2 0.8 1
P3 0.6 3
P3 0.6 3
I want to select Distinct Keys ordered by Value DESC to be displayed in a grid that supports pagination.
I don't know how to generate a rank like the values displayed in the Desired Rank column, so that I can paginate correctly over the data set.
When I tried to use: DENSE_RANK() OVER(ORDER BY value), the result was
Key Value DENSE_RANK() OVER(ORDER BY value)
--- ----- ------------
P1 0.6 2
P1 0.6 2
P1 0.6 2
P2 0.8 1
P2 0.8 1
P3 0.6 2
P3 0.6 2
When I try to select the first two keys “rank between 1 and 2” I receive back 3 keys. And this ruins the required pagination mechanism.
Any ideas?
Thanks
If you want the distinct keys and values, why not use distinct?
select distinct
t.Key,
t.Value
from
YourTable t
order by
t.value
Do you actually need the rank?
If you do, you still could
select distinct
t.Key,
t.Value,
dense_rank() over (order by t.Value, t.Key) as Rank
from
YourTable t
order by
t.value
This would work without the distinct as well.
'When I try to select the first two keys “rank between 1 and 2” I receive back 3 keys.'
That is because you are ordering just by VALUE, so all KEYS with the same value are assigned the same rank. So you need to include the KEY in the ordering clause. Like this:
DENSE_RANK() OVER (ORDER BY value DESC, key ASC)
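To paginate, that rank can then be filtered from an outer query, since analytic functions cannot be referenced in the WHERE clause of the same query block (a sketch, assuming the table is called YourTable):
select *
from (select distinct t.Key, t.Value,
             dense_rank() over (order by t.Value desc, t.Key) as rnk
      from YourTable t)
where rnk between 1 and 2
order by rnk;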