I have the following example table:
FIELD1 FIELD2 FIELD3
12345 888555 009999
12345 888555 327777
12345 888555 327777
54321 999666 334444
54321 999666 324444
55501 999333 005555
55501 999333 350000
55501 999333 890000
55501 999333 320000
Now, assuming FIELD1 and FIELD2 always have the same pairing, e.g. if FIELD1 = 12345 then FIELD2 will always be 888555, what I have to do is write a query that counts FIELD1 and FIELD2 occurrences where the first 2 digits of FIELD3 = 32, and where the first 2 digits of FIELD3 are NOT 32.
In the above case the output should be:
FIELD1 FIELD2 FIELD3 COUNT
12345 888555 32 2
54321 999666 NOT32 1
54321 999666 32 1
55501 999333 32 1
55501 999333 NOT32 3
Could someone help me to achieve that?
EDIT: Thanks to @Akina, who suggested the following solution (it seems to work):
select FIELD1,
FIELD2,
SUM(CASE WHEN SUBSTR(FIELD3, 1, 2) = '32' THEN 1 ELSE 0 END) as FIELD3_EQU_32,
SUM(CASE WHEN SUBSTR(FIELD3, 1, 2) = '32' THEN 0 ELSE 1 END) as FIELD3_NEQ_32
from TABLE
group by FIELD1,
FIELD2;
What I need now is to output only the counts that exceed a specific threshold,
something like:
HAVING ((FIELD3_EQU_32 > 10 and SUBSTR(FIELD2, 1, 3) = '888')
or (FIELD3_NEQ_32 > 5 and SUBSTR(FIELD2, 1, 3) = '999'))
etc...
Thank you very much.
Lucas
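One thing to watch out for: Oracle does not allow SELECT-list aliases such as FIELD3_EQU_32 inside HAVING, so you either repeat the SUM(CASE ...) expressions there or wrap the grouped query in a subquery and filter with WHERE. Below is a runnable sketch of the subquery variant, using Python and SQLite to stand in for Oracle (the behavior of SUBSTR and conditional SUM is the same here); the threshold values 1 and 2 are placeholders for your real ones.

```python
import sqlite3

# Build the sample table from the question in an in-memory SQLite db.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (FIELD1 TEXT, FIELD2 TEXT, FIELD3 TEXT);
INSERT INTO T VALUES
  ('12345','888555','009999'), ('12345','888555','327777'),
  ('12345','888555','327777'), ('54321','999666','334444'),
  ('54321','999666','324444'), ('55501','999333','005555'),
  ('55501','999333','350000'), ('55501','999333','890000'),
  ('55501','999333','320000');
""")

# Wrap the grouped query in a subquery so the aliases can be used in
# an outer WHERE; in Oracle the aliases are not visible to HAVING.
rows = conn.execute("""
SELECT FIELD1, FIELD2, FIELD3_EQU_32, FIELD3_NEQ_32
FROM (
  SELECT FIELD1, FIELD2,
         SUM(CASE WHEN SUBSTR(FIELD3, 1, 2) = '32' THEN 1 ELSE 0 END) AS FIELD3_EQU_32,
         SUM(CASE WHEN SUBSTR(FIELD3, 1, 2) = '32' THEN 0 ELSE 1 END) AS FIELD3_NEQ_32
  FROM T
  GROUP BY FIELD1, FIELD2
)
WHERE (FIELD3_EQU_32 > 1 AND SUBSTR(FIELD2, 1, 3) = '888')
   OR (FIELD3_NEQ_32 > 2 AND SUBSTR(FIELD2, 1, 3) = '999')
""").fetchall()
print(sorted(rows))
```

With these illustrative thresholds, only the 12345/888555 and 55501/999333 groups survive the filter.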
Related
I have table t:
ID Type
---- ----
1 a
1 b
2 a
2 a
3 b
And table with names of IDs from first table - n:
ID Name
---- ----
1 name1
2 name2
3 name3
I need to write a query in PL/SQL that counts the percentage of each type's occurrences among all types for the same ID (grouped by ID).
The result must be:
Name a% b% row
--- ---- --- ---
name1 50 50 1
name2 100 0 2
name3 0 100 3
I tried:
select
n.name,
a.perc as "a%",
b.perc as "b%",
row_number() over (
order by name asc
) mf_rownumber
from n n
left join
(select
id,
round(100 * (count(*) / sum(count(*)) over ()), 2) perc
from t
where (type = 'a')
group by id) a
on a.id = n.id
left join
(select
id,
round(100 * (count(*) / sum(count(*)) over ()), 2) perc
from t
where (type = 'b')
group by id) b
on b.id = n.id;
What I get is the percentage of every type across all rows:
Name a% b% row
--- ---- --- ---
name1 20 20 1
name2 40 0 2
name3 0 20 3
But I need to count everything within the borders of the same ID, not across all rows.
I think it can be simplified a lot:
http://sqlfiddle.com/#!4/6bb2a/20
select
n.name,
round(100 * (sum(case when type='a' then 1 else 0 end) / count(*)), 2) as "a%",
round(100 * (sum(case when type='b' then 1 else 0 end) / count(*)), 2) as "b%",
row_number() over (order by name asc ) mf_rownumber
from n
left join t on t.id = n.id
group by n.name
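The simplified query above can be checked end to end. Here is a hedged translation to SQLite, driven from Python (the literal `100.0` forces real division in SQLite, whereas Oracle NUMBER division needs no cast; the row_number column is omitted for brevity):

```python
import sqlite3

# Recreate the two tables t and n from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ID INTEGER, Type TEXT);
INSERT INTO t VALUES (1,'a'),(1,'b'),(2,'a'),(2,'a'),(3,'b');
CREATE TABLE n (ID INTEGER, Name TEXT);
INSERT INTO n VALUES (1,'name1'),(2,'name2'),(3,'name3');
""")

# Conditional aggregation per ID: each SUM(CASE ...) counts one type,
# COUNT(*) is the per-group total, so the ratio is the percentage.
rows = conn.execute("""
SELECT n.Name,
       ROUND(100.0 * SUM(CASE WHEN Type = 'a' THEN 1 ELSE 0 END) / COUNT(*), 2),
       ROUND(100.0 * SUM(CASE WHEN Type = 'b' THEN 1 ELSE 0 END) / COUNT(*), 2)
FROM n LEFT JOIN t ON t.ID = n.ID
GROUP BY n.Name
ORDER BY n.Name
""").fetchall()
print(rows)
```

This reproduces the expected 50/50, 100/0, 0/100 split per name.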
I would do something like this:
select
n.name,
n.id,
count(case when type='a' then 1 end)/count(*)*100 as "a%",
count(case when type='b' then 1 end)/count(*)*100 as "b%"
from n left join t on t.id = n.id
group by n.id, n.name;
DECLARE
var1 INTEGER :=0;
var2 INTEGER :=0;
BEGIN
SELECT DISTINCT
<whatever here>
CASE
WHEN trunc(thisDate) - TRUNC(thatDate) BETWEEN 0 AND 10 THEN var1+1
WHEN trunc(thisDate2) - TRUNC(thatDate2) BETWEEN 11 AND 20 THEN var2+1
ELSE 0
END
FROM
<Rest of query here>
Basically, what I want is to add 1 to a local variable each time the difference in the dates falls within one of those ranges, and then print that variable's value as part of my SELECT statement, using a count (or a sum of counts) or something.
I'm not sure how to add to the local variable, basically.
SELECT
sum(CASE WHEN trunc(thisDate) - TRUNC(thatDate) BETWEEN 0 AND 10 THEN 1 ELSE 0 END) + var1,
sum(CASE WHEN trunc(thisDate2) - TRUNC(thatDate2) BETWEEN 11 AND 20 THEN 1 ELSE 0 END) + var2
into var1, var2
FROM
<Rest of query here>
Try analytic functions:
BEGIN
SELECT DISTINCT
<whatever here>
sum(CASE
WHEN trunc(thisDate) - TRUNC(thatDate) BETWEEN 0 AND 10 THEN 1
end) over() var1,
sum(CASE
WHEN trunc(thisDate2) - TRUNC(thatDate2) BETWEEN 11 AND 20 THEN 1
END) over() var2
FROM
<Rest of query here>
Given a comment by Allan, maybe you are looking for something like this instead:
SELECT V.*,
case when t1 is not null then
count(t1) over (ORDER BY n ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
when t2 is not null then
count(t2) over (ORDER BY n ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
else
0
end varX
FROM (
SELECT DISTINCT
<whatever here>
ROWNUM n,
CASE WHEN trunc(thisDate) - TRUNC(thatDate) BETWEEN 0 AND 10 THEN 1 END t1,
CASE WHEN trunc(thisDate) - TRUNC(thatDate) BETWEEN 11 AND 20 THEN 1 END t2
FROM
<Rest of query here>
) V
Some example for you to check if this is what you are looking for: http://sqlfiddle.com/#!4/2de0d/2
The inner query is basically yours, plus
a ROWNUM column (maybe not necessary if you somehow ORDER BY your rows)
a marker set to 1 if the row is in the first range, NULL otherwise
a marker set to 1 if the row is in the second range, NULL otherwise
The outer query uses the analytic function COUNT() OVER(...) to count the markers between the first row of the result set and the current row. The row number n provides the ordering here; replace it with something more relevant if your data are already ordered.
I don't think you can update some variable while executing a query. All you can do is fetch the result value into some variable.
Slight variation on Multisync's answer, using COUNT instead of SUM:
SELECT
count(CASE WHEN trunc(thisDate) - TRUNC(thatDate) BETWEEN 0 AND 10 THEN 1 END) ,
count(CASE WHEN trunc(thisDate2) - TRUNC(thatDate2) BETWEEN 11 AND 20 THEN 1 END)
into var1, var2
FROM
<Rest of query here>
This will actually return a one-row result with the count of values in the first range in the first column, and the count of values in the second range in the second column. With the INTO var1, var2 clause, PL/SQL implicitly fetches those values into your local variables.
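The same single-query pattern can be demonstrated outside PL/SQL. In this hedged sketch (SQLite via Python), the day differences are precomputed integers for brevity; in Oracle you would keep `TRUNC(thisDate) - TRUNC(thatDate)` inside the CASE:

```python
import sqlite3

# Hypothetical table of precomputed day differences.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ages (day_diff INTEGER)")
conn.executemany("INSERT INTO ages VALUES (?)",
                 [(3,), (7,), (12,), (25,), (15,)])

# One round trip fetches both conditional counts, the equivalent of
# SELECT ... INTO var1, var2 in PL/SQL. A CASE with no ELSE yields
# NULL, which COUNT skips.
var1, var2 = conn.execute("""
SELECT COUNT(CASE WHEN day_diff BETWEEN 0 AND 10 THEN 1 END),
       COUNT(CASE WHEN day_diff BETWEEN 11 AND 20 THEN 1 END)
FROM ages
""").fetchone()
print(var1, var2)
```

Here 3 and 7 fall in the first range and 12 and 15 in the second, so both counts come out as 2.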
Here is my Query
SELECT
tbl_dtl_feature.customer_id,
result,
COUNT(*) AS expr1
FROM tbl_dtl_feature
WHERE tbl_dtl_feature.feature_id = 'F001'
AND TRUNC(tbl_dtl_feature.start_datetime)
BETWEEN TO_DATE('10/01/2014', 'DD/MM/YYYY') AND TO_DATE('10/01/2014', 'DD/MM/YYYY')
GROUP BY tbl_dtl_feature.result, tbl_dtl_feature.customer_id
My Result set:
CustomerID Result Count
---------- ------- -----
44438111 Success 3
44438444 Success 1
44438111 Failure 1
Expected Result Set:
CustomerID Count(Success) Count(Failure)
---------- -------------- -------------
44438111 3 1
44438444 1 0
Can you help me out?
Try it like this:
SELECT t.customer_id,
count(CASE WHEN result = 'Success' THEN 1 END) Count_Success,
count(CASE WHEN result = 'Failure' THEN 1 END) Count_Failure
FROM tbl_dtl_feature t
WHERE t.feature_id = 'F001'
AND trunc(t.start_datetime) BETWEEN to_date('10/01/2014', 'DD/MM/YYYY') AND to_date('10/01/2014', 'DD/MM/YYYY')
GROUP BY t.customer_id;
I need help with the following, please.
How do I sum over a date range? I am a newbie in Oracle.
Try something like this:
SELECT FIELD1, FIELD2, TRUNC(MIN(FIELD3)), TRUNC(MAX(FIELD3)), SUM(FIELD4)
FROM SOME_TABLE
WHERE FIELD3 BETWEEN DATE '2013-02-01'
AND DATE '2013-02-04' + INTERVAL '1' DAY - INTERVAL '1' SECOND
GROUP BY FIELD1, FIELD2
ORDER BY MIN(FIELD3);
Share and enjoy.
The questions/answers in the comment section of your original question show that you are actually looking for two different selections: one for the first date range and one for the overlapping second date range. Only you want to get all result records in a single result set. You can use UNION for that:
select field1, field2, min(trunc(field3)) || '-' || max(trunc(field3)), sum(field4)
from yourtable
where to_char(field3, 'yyyymmdd') between '20130201' and '20130204'
group by field1, field2
UNION
select field1, field2, min(trunc(field3)) || '-' || max(trunc(field3)), sum(field4)
from yourtable
where to_char(field3, 'yyyymmdd') between '20130203' and '20130207'
group by field1, field2
order by 1, 2, 3;
Using hard-coded dates is a bit odd, as is the way you're making your ranges (and your field4 value appears to be wrong in your sample output), but assuming you know what you want...
You can use a CASE expression to assign a dummy group number to the rows based on the dates, and then have an outer query that groups by that dummy field, which I've called gr:
select field1, field2,
to_char(min(field3), 'MM/DD/YYYY')
||'-'|| to_char(max(field3), 'MM/DD/YYYY') as field3,
sum(field4) as field4
from (
select field1, field2, field3, field4,
case when field3 between date '2013-02-01'
and date '2013-02-05' - interval '1' second then 1
when field3 between date '2013-02-05'
and date '2013-02-08' - interval '1' second then 2
end as gr
from t42
)
group by field1, field2, gr
order by field1, field2, gr;
F FIELD2 FIELD3 FIELD4
- ---------- --------------------- ----------
A 1 02/01/2013-02/04/2013 14
A 1 02/05/2013-02/07/2013 21
The display of field3 will look wrong if there is no data for one of the boundary days, but I'm not sure that's the biggest problem with this approach *8-)
You can potentially modify the case to have more generic groups, but I'm not sure how this will be used.
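The CASE-bucketing approach above can be sanity-checked outside Oracle. This sketch uses SQLite via Python with ISO-8601 date strings (the table name t42 comes from the answer; the sample rows are invented so that the group sums match the 14 and 21 shown in the answer's output, and Oracle's DATE arithmetic is approximated by inclusive string BETWEEN):

```python
import sqlite3

# Invented rows for table t42: one row per day, field4 = day number + 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t42 (field1 TEXT, field2 INTEGER, field3 TEXT, field4 INTEGER);
INSERT INTO t42 VALUES
  ('A',1,'2013-02-01',2),('A',1,'2013-02-02',3),('A',1,'2013-02-03',4),
  ('A',1,'2013-02-04',5),('A',1,'2013-02-05',6),('A',1,'2013-02-06',7),
  ('A',1,'2013-02-07',8);
""")

# Inner query tags each row with a dummy group number gr; the outer
# query groups on it, exactly as in the answer above.
rows = conn.execute("""
SELECT field1, field2, MIN(field3) || '-' || MAX(field3), SUM(field4)
FROM (
  SELECT field1, field2, field3, field4,
         CASE WHEN field3 BETWEEN '2013-02-01' AND '2013-02-04' THEN 1
              WHEN field3 BETWEEN '2013-02-05' AND '2013-02-07' THEN 2
         END AS gr
  FROM t42
)
GROUP BY field1, field2, gr
ORDER BY field1, field2, gr
""").fetchall()
print(rows)
```

The two buckets sum to 14 and 21, matching the answer's sample output.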
In a comment you say you specify two groups of dates which do not overlap. This contradicts the data you posted in your question. Several people have wasted their time proposing non-solutions because of your tiresome inability to explain your requirements in a clear and simple fashion.
Anyway, assuming you have finally got your story straight and the two groups don't overlap this should work for you:
with data as (
select field1
, field2
, field4
, case when field3 between date '2011-10-30' and date '2012-01-28' then 'GR1'
when field3 between date '2012-10-28' and date '2013-02-03' then 'GR2'
else null
end as GRP
from your_table )
select field1
, field2
, GRP
, sum(field4) as sum_field4
from data
where GRP is not null
group by field1, field2, GRP
order by 1, 2, 3
/
I have a table that has 5 "optional" fields. I'd like to find out how many rows have all 5 null, how many have 1 field non-null, etc.
I've tried a couple of things, like:
select
count(*),
( (if field1 is null then 1 else 0) +
(if field2 is null then 1 else 0) +
etc.
but of course that doesn't work.
Ideally, I'm looking for output that's something like
Nulls Cnt
0 200
1 345
...
5 40
Is there an elegant solution?
The keyword is not IF, it is CASE, and you must use END to close the CASE expression.
Here is a query that can suit you:
WITH subQuery AS
(
SELECT My_Table.*, (CASE WHEN My_Table.field1 IS NULL THEN 1 ELSE 0 END +
CASE WHEN My_Table.field2 IS NULL THEN 1 ELSE 0 END +
CASE WHEN My_Table.field3 IS NULL THEN 1 ELSE 0 END +
CASE WHEN My_Table.field4 IS NULL THEN 1 ELSE 0 END +
CASE WHEN My_Table.field5 IS NULL THEN 1 ELSE 0 END ) NumberOfNullFields
FROM My_Table
)
SELECT NumberOfNullFields, COUNT(*)
FROM subQuery
GROUP BY NumberOfNullFields;
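The subquery-plus-GROUP BY shape above can be verified on a tiny data set. A hedged SQLite translation (the column names field1..field5 follow the answer; the three sample rows are invented to cover 0, 1, and 5 nulls):

```python
import sqlite3

# Three invented rows: zero nulls, one null, all five null.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE My_Table (field1, field2, field3, field4, field5)")
conn.executemany("INSERT INTO My_Table VALUES (?,?,?,?,?)", [
    (1, 1, 1, 1, 1),
    (None, 1, 1, 1, 1),
    (None, None, None, None, None),
])

# Inner query sums one CASE per optional field to get the null count
# per row; outer query counts rows per null count.
rows = conn.execute("""
SELECT NumberOfNullFields, COUNT(*) FROM (
  SELECT (CASE WHEN field1 IS NULL THEN 1 ELSE 0 END +
          CASE WHEN field2 IS NULL THEN 1 ELSE 0 END +
          CASE WHEN field3 IS NULL THEN 1 ELSE 0 END +
          CASE WHEN field4 IS NULL THEN 1 ELSE 0 END +
          CASE WHEN field5 IS NULL THEN 1 ELSE 0 END) AS NumberOfNullFields
  FROM My_Table
)
GROUP BY NumberOfNullFields
""").fetchall()
print(sorted(rows))
```

Each distinct null count appears once, as expected.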
While there is nothing wrong with the CASE WHEN counting, I just wanted to see if there was another way.
WITH sampledata AS ( -- generate some sample data with nulls in 5 columns
select level,
       (case when mod(level, 2) = 0 then 1 else null end) colA,
       (case when mod(level, 3) = 0 then 1 else null end) colB,
       (case when mod(level, 5) = 0 then 1 else null end) colC,
       (case when mod(level, 7) = 0 then 1 else null end) colD,
       (case when mod(level, 11) = 0 then 1 else null end) colE
from dual
connect by level < 1000
), -- use COUNT(expr)'s skipping of NULLs to our summation advantage
nullcols as (
select count(colA) aNotNull
     , count(*) - count(colA) aNull
     , count(colB) bNotNull
     , count(*) - count(colB) bNull
     , count(colC) cNotNull
     , count(*) - count(colC) cNull
     , count(colD) dNotNull
     , count(*) - count(colD) dNull
     , count(colE) eNotNull
     , count(*) - count(colE) eNull
     , count(*) totalCountOfRows
from sampledata)
select (select count(*)
        from sampledata
        where colA is null and colB is null and colC is null
          and colD is null and colE is null) allIsNull
     , nullcols.*
from nullcols;
ALLISNULL  ANOTNULL  ANULL  BNOTNULL  BNULL  CNOTNULL  CNULL  DNOTNULL  DNULL  ENOTNULL  ENULL  TOTALCOUNTOFROWS
---------  --------  -----  --------  -----  --------  -----  --------  -----  --------  -----  ----------------
      207       499    500       333    666       199    800       142    857        90    909               999
This utilizes the fact that
"if expression in count(expression) evaluates to null, it is not counted"
as noted here.
This method, as is obvious above, does not 'elegantly' sum all-null columns well. I just wanted to see if this was possible without the CASE logic.
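The quoted rule is easy to verify in miniature. A minimal sketch using SQLite via Python (the behavior of COUNT is the same in Oracle): COUNT(col) skips NULLs, so COUNT(*) - COUNT(col) yields the number of NULLs in col.

```python
import sqlite3

# One column with 2 values and 3 NULLs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s (col INTEGER)")
conn.executemany("INSERT INTO s VALUES (?)",
                 [(1,), (None,), (2,), (None,), (None,)])

# COUNT(*) counts rows, COUNT(col) counts non-NULLs,
# and the difference is the NULL count.
total, not_null, nulls = conn.execute(
    "SELECT COUNT(*), COUNT(col), COUNT(*) - COUNT(col) FROM s").fetchone()
print(total, not_null, nulls)
```

This prints 5 rows total, 2 non-null, 3 null.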