I have a result set in Oracle like the table below.
Is there a way to add a new column whose values are based on the existing TEMREGIONAL column, so that it looks like this:
311,1,1,0
430,2,0,1
329,3,0,1
What I want is based on the TEMREGIONAL value: if it is 1, then all rows after that should be 1 too.
So if I have something like this:
311,1,0
430,2,0
329,3,1
334,4,0
323,5,0
324,6,0
326,7,0
The result should be:
311,1,0,0
430,2,0,0
329,3,1,0
334,4,0,1
323,5,0,1
324,6,0,1
326,7,0,1
What I want is to add a new column so that, after the row with the value 1 in the third column, all following rows have the value 1 in this new column.
Can anybody help me?
You can use the IGNORE NULLS option of LAG to find the previous 1, turning zeroes into NULLs first. This can be done in one pass.
with a(
ID_ORGAO_INTELIGENCIA
, ORD
, TEMREGIONAL
) as (
select 311,1,0 from dual union all
select 430,2,0 from dual union all
select 329,3,1 from dual union all
select 334,4,0 from dual union all
select 323,5,0 from dual union all
select 324,6,0 from dual union all
select 326,7,0 from dual
)
select
a.*
, coalesce(
lag(nullif(TEMREGIONAL, 0))
ignore nulls
over(order by ord asc)
, 0) as prev
from a
ID_ORGAO_INTELIGENCIA | ORD | TEMREGIONAL | PREV
--------------------: | --: | ----------: | ---:
311 | 1 | 0 | 0
430 | 2 | 0 | 0
329 | 3 | 1 | 0
334 | 4 | 0 | 1
323 | 5 | 0 | 1
324 | 6 | 0 | 1
326 | 7 | 0 | 1
db<>fiddle here
For sample data
SQL> select * from test order by ord;
ID_ORGAO_INTELIGENCIA ORD TERMREGIONAL
--------------------- ---------- ------------
311 1 0
430 2 0
329 3 1
334 4 0
323 5 0
324 6 0
326 7 0
7 rows selected.
This might be one option:
SQL> with
2 temp as
3 -- find minimal ORD for which TERMREGIONAL = 1
4 (select min(a.ord) min_ord
5 from test a
6 where a.termregional = 1
7 )
8 select t.id_orgao_inteligencia,
9 t.ord,
10 t.termregional,
11 case when t.ord > m.min_ord then 1 else 0 end new_column
12 from temp m cross join test t
13 order by t.ord;
ID_ORGAO_INTELIGENCIA ORD TERMREGIONAL NEW_COLUMN
--------------------- ---------- ------------ ----------
311 1 0 0
430 2 0 0
329 3 1 0
334 4 0 1
323 5 0 1
324 6 0 1
326 7 0 1
7 rows selected.
SQL>
Related
In this question - why adding order by in the query changes the aggregate value? - I was told that "If the window clause contains both PARTITION BY and ORDER BY, it returns the running count within the partition. So, using the ORDER BY expression, how many rows have been counted so far within the partition."
Referring to this example - https://www.vertica.com/docs/11.0.x/HTML/Content/Authoring/AnalyzingData/SQLAnalytics/ReportingAggregates.htm?tocpath=Analyzing%20Data%7CSQL%20Analytics%7CWindow%20Framing%7C_____3
Why does the cumulative count show 4 (the last value of the count) for all rows with sal=109?
=> SELECT deptno, sal, empno, COUNT(sal) OVER (
-> PARTITION BY deptno ORDER BY sal
-> ) AS COUNT
-> FROM emp;
deptno | sal | empno | count
--------+-----+-------+-------
10 | 101 | 1 | 1
10 | 104 | 4 | 2
------------------------------
20 | 100 | 11 | 1
20 | 109 | 7 | 4<-
20 | 109 | 6 | 4<-
20 | 109 | 8 | 4<-
20 | 110 | 10 | 6<-
20 | 110 | 9 | 6<-
------------------------------
30 | 102 | 2 | 1
30 | 103 | 3 | 2
30 | 105 | 5 | 3
You order by sal, which is 109 for 3 rows within deptno 20. As far as the ordering criterion is concerned, those 3 rows are peers and enter the running count at the same time. After the single row with sal=100 (count 1), adding all three peers takes the running count straight to 4, so all three rows show 4.
You need distinct ordering values to get distinct running count results.
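For instance, here is a minimal sketch in standard analytic-function syntax (using the emp/deptno/sal/empno names from the example above): either add a tiebreaker column to the ORDER BY, or replace the default RANGE frame with a ROWS frame so peer rows are counted one at a time.
SELECT deptno, sal, empno,
       -- tiebreaker: empno makes the ordering unique, so the count grows row by row
       COUNT(sal) OVER (PARTITION BY deptno ORDER BY sal, empno) AS cnt_tiebreak,
       -- ROWS frame: counts only rows up to and including the current row, ignoring peers
       COUNT(sal) OVER (PARTITION BY deptno ORDER BY sal
                        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cnt_rows
FROM emp;
With either variant the three sal=109 rows show 2, 3 and 4 instead of all showing 4 (the ROWS variant orders the tied rows arbitrarily).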
When I run the following code, I would expect b1 and b2 to be equal; however, b2 is doubled. Am I doing something wrong? Is this a bug in the database? We're running Oracle 12c (12.2.0.1.0).
WITH TBL AS
(
SELECT 1 a, 1 b FROM DUAL UNION ALL
SELECT 1 a, 2 b FROM DUAL UNION ALL
SELECT 1 a, 3 b FROM DUAL UNION ALL
SELECT 1 a, 4 b FROM DUAL
)
SELECT
*
FROM
TBL
MATCH_RECOGNIZE
(
PARTITION BY
a
ORDER BY
b
MEASURES
FINAL SUM(b) b1,
NULLIF(FINAL SUM(b), 0) b2
ALL ROWS PER MATCH WITH UNMATCHED ROWS
AFTER MATCH SKIP PAST LAST ROW
PATTERN
(C*)
DEFINE
C AS B > 0
) mr
Result:
| A | B | B1 | B2 |
|---|---|----|----|
| 1 | 1 | 10 | 20 |
| 1 | 2 | 10 | 20 |
| 1 | 3 | 10 | 20 |
| 1 | 4 | 10 | 20 |
The problem seems to be with NULLIF: when I converted it into its logical equivalent, CASE WHEN expr1 = expr2 THEN NULL ELSE expr1 END, it works fine.
WITH TBL AS
(
SELECT 1 a, 1 b FROM DUAL UNION ALL
SELECT 1 a, 2 b FROM DUAL UNION ALL
SELECT 1 a, 3 b FROM DUAL UNION ALL
SELECT 1 a, 4 b FROM DUAL
)
SELECT
*
FROM
TBL
MATCH_RECOGNIZE
(
PARTITION BY
a
ORDER BY
b
MEASURES
FINAL SUM(b) b1,
CASE WHEN FINAL SUM(b)=0 THEN NULL ELSE FINAL SUM(b) END b2
ALL ROWS PER MATCH WITH UNMATCHED ROWS
AFTER MATCH SKIP PAST LAST ROW
PATTERN
(C*)
DEFINE
C AS B > 0
) mr
Result:
| A | B | B1 | B2 |
|---|---|----|----|
| 1 | 1 | 10 | 10 |
| 1 | 2 | 10 | 10 |
| 1 | 3 | 10 | 10 |
| 1 | 4 | 10 | 10 |
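An untested alternative sketch (not from the original answers): keep the MEASURES clause simple and apply NULLIF outside MATCH_RECOGNIZE, where it is evaluated once per output row and behaves as expected.
WITH TBL AS
(
SELECT 1 a, 1 b FROM DUAL UNION ALL
SELECT 1 a, 2 b FROM DUAL UNION ALL
SELECT 1 a, 3 b FROM DUAL UNION ALL
SELECT 1 a, 4 b FROM DUAL
)
SELECT
mr.*,
NULLIF(b1, 0) b2  -- NULLIF applied after pattern matching, on the already-computed measure
FROM
TBL
MATCH_RECOGNIZE
(
PARTITION BY
a
ORDER BY
b
MEASURES
FINAL SUM(b) b1
ALL ROWS PER MATCH WITH UNMATCHED ROWS
AFTER MATCH SKIP PAST LAST ROW
PATTERN
(C*)
DEFINE
C AS B > 0
) mr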
I have a table with the following data:
Order_no | Part_No | R_from | R_to
1001 | 1010037-00L| 1 | 5
1001 | 1010025-00L| 6 | 12
I need to present the above data in a report in the following manner:
R_NO | PART_NO
------------------
1 | 1010037-00L
2 | 1010037-00L
3 | 1010037-00L
4 | 1010037-00L
5 | 1010037-00L
6 | 1010025-00L
7 | 1010025-00L
8 | 1010025-00L
9 | 1010025-00L
10 | 1010025-00L
11 | 1010025-00L
12 | 1010025-00L
Something like:
WITH r_nos ( r_no ) AS (
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= ( SELECT MAX( R_to ) FROM your_table )
)
SELECT r_no,
part_no
FROM r_nos r
INNER JOIN
your_table y
ON ( r.r_no BETWEEN y.r_from AND y.r_to )
Here's an alternative which doesn't require a separate join. You should test both solutions to see which performs better for your data, though.
WITH your_table AS (SELECT 1001 order_no, '1010037-00L' part_no, 1 r_from, 5 r_to FROM dual UNION ALL
SELECT 1001 order_no, '1010025-00L' part_no, 6 r_from, 12 r_to FROM dual)
SELECT r_from + LEVEL -1 r_no,
order_no,
part_no
FROM your_table
CONNECT BY PRIOR order_no = order_no
AND PRIOR part_no = part_no
AND PRIOR sys_guid() IS NOT NULL
AND LEVEL <= r_to - r_from + 1
ORDER BY r_no;
R_NO ORDER_NO PART_NO
---------- ---------- -----------
1 1001 1010037-00L
2 1001 1010037-00L
3 1001 1010037-00L
4 1001 1010037-00L
5 1001 1010037-00L
6 1001 1010025-00L
7 1001 1010025-00L
8 1001 1010025-00L
9 1001 1010025-00L
10 1001 1010025-00L
11 1001 1010025-00L
12 1001 1010025-00L
| user_id | username | salary |
+---------+----------+--------+
| 1 | John | 4000 |
| 2 | Paul | 0900 |
| 3 | Adam | 0589 |
| 4 | Ben | 2154 |
| 5 | Charles | 2489 |
| 6 | Dean | 2500 |
| 7 | Edward | 2900 |
| 8 | Fred | 2800 |
| 9 | George | 4100 |
| 10 | Hugo | 5200 |
I need output like this
range count
--------------------
0-999 2
1000-1999 0
2000-2999 5
3000-3999 0
4000-4999 2
5000-5999 1
Here is an attempt:
with w as
(
select 1000 * (level - 1) low, 1000 * level high from dual
connect by level <= 10
)
select w.low, w.high, sum(decode(t.user_id, null, 0, 1)) nb
from w, test_epn t
where w.low <= t.salary (+)
and w.high > t.salary (+)
group by w.low, w.high
order by w.low
;
This gives:
1 0 1000 2
2 1000 2000 0
3 2000 3000 5
4 3000 4000 0
5 4000 5000 2
6 5000 6000 1
7 6000 7000 0
8 7000 8000 0
9 8000 9000 0
10 9000 10000 0
SQL> col range format a30
SQL> with t as (
2 select 'John' name, 4000 sal from dual union all
3 select 'Paul' name, 900 from dual union all
4 select 'Adam' name, 589 from dual union all
5 select 'Ben' name, 2154 from dual union all
6 select 'Charles' name, 2489 from dual union all
7 select 'Dean' name, 2500 from dual union all
8 select 'Edward' name, 2900 from dual union all
9 select 'Fred' name, 2800 from dual union all
10 select 'George' name, 4100 from dual union all
11 select 'Hugo' name, 5200 from dual
12 )
13 select to_char(pvtid*1000)||'-'||to_char(pvtid*1000+999) range, count(t.sal)
14 from t
15 ,
16 (
17 select rownum-1 pvtid
18 from dual connect by level <= (select floor(max(sal)/1000) from t)+1
19 ) piv
20 where piv.pvtid = floor(t.sal(+)/1000)
21 group by piv.pvtid
22 order by 1
23 /
RANGE COUNT(T.SAL)
------------------------------ ------------
0-999 2
1000-1999 0
2000-2999 5
3000-3999 0
4000-4999 2
5000-5999 1
Oracle 11g R2 Schema Setup:
create table test_table as
select 1 user_id, 'John' username , 4000 salary from dual union all
select 2 , 'Paul' , 0900 from dual union all
select 3 , 'Adam' , 0589 from dual union all
select 4 , 'Ben' , 2154 from dual union all
select 5 , 'Charles' , 2489 from dual union all
select 6 , 'Dean' , 2500 from dual union all
select 7 , 'Edward' , 2900 from dual union all
select 8 , 'Fred' , 2800 from dual union all
select 9 , 'George' , 4100 from dual union all
select 10 , 'Hugo' , 5200 from dual
Query 1:
with range_tab(f,t) as (select (level - 1)*1000 , (level - 1)*1000 + 999
from dual
connect by (level - 1)*1000 <= (select max(salary) from test_table))
select f ||'-'|| t as range, count(user_id)
from test_table
right outer join range_tab on (salary between f and t)
group by f, t
order by 1
Results:
| RANGE | COUNT(USER_ID) |
|-----------|----------------|
| 0-999 | 2 |
| 1000-1999 | 0 |
| 2000-2999 | 5 |
| 3000-3999 | 0 |
| 4000-4999 | 2 |
| 5000-5999 | 1 |
In the case of a fixed interval you can also use the Oracle WIDTH_BUCKET function.
select count(*),
(WIDTH_BUCKET(salary, 0, 10000,10)-1)*1000 ||'-'||to_char(WIDTH_BUCKET(salary, 0, 10000,10)*1000-1) as salary_range
from table1
group by WIDTH_BUCKET(salary, 0, 10000,10)
order by salary_range;
| COUNT(*) | SALARY_RANGE |
|----------|--------------|
| 2 | 0-999 |
| 5 | 2000-2999 |
| 2 | 4000-4999 |
| 1 | 5000-5999 |
The disadvantage is that it does not count empty buckets, but maybe that satisfies your needs anyway.
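If you need the empty buckets too, one option (a sketch, assuming the same fixed 0-10000 range split into ten 1000-wide buckets and the same table1/salary names as above) is to generate the bucket numbers separately and outer join the WIDTH_BUCKET aggregation to them:
with buckets as (
  -- one row per possible bucket number, 1..10
  select level as bucket_no
  from dual
  connect by level <= 10
),
counted as (
  select width_bucket(salary, 0, 10000, 10) as bucket_no,
         count(*) as cnt
  from table1
  group by width_bucket(salary, 0, 10000, 10)
)
select (b.bucket_no - 1) * 1000 || '-' || (b.bucket_no * 1000 - 1) as salary_range,
       nvl(c.cnt, 0) as cnt   -- empty buckets show up with a count of 0
from buckets b
left join counted c
  on c.bucket_no = b.bucket_no
order by b.bucket_no;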
How can I assign a category based on a value?
For example, I have a table with values from 1-200. How do I assign a category to each record, like 1-5, 6-10, 11-15, etc.?
I can do it using the query below, but that seems like a bad solution.
Sorry, this is probably very basic, but I don't know what it's called, and googling "buckets" (as it's called in our company) didn't bring up any results.
Thank you.
SELECT DISTINCT CountOfSA,
CASE
WHEN CountOfSA BETWEEN 1 AND 5 THEN
'1-5'
WHEN CountOfSA BETWEEN 6 AND 10 THEN
'6-10'
WHEN CountOfSA BETWEEN 11 AND 15 THEN
'11-15'
WHEN CountOfSA BETWEEN 16 AND 20 THEN
'16-20'
WHEN CountOfSA BETWEEN 21 AND 25 THEN
'21-25'
WHEN CountOfSA BETWEEN 26 AND 30 THEN
'26-30'
END
AS diff
FROM NR_CF_212
Have a look at the WIDTH_BUCKET function.
It divides the range into equal-sized intervals and assigns a bucket number to each interval.
with x as (
select CountOfSA,
width_bucket(CountOfSA, 1, 200, 40) bucket_
from NR_CF_212
)
select CountOfSA,
cast(1 + (bucket_ - 1)*5 as varchar2(4)) ||
'-' ||
cast( bucket_*5 as varchar2(4)) diff
from x
order by CountOfSA;
Demo here.
I would put the range values and descriptions into a separate table, especially if you plan to use them for future queries, views, etc. Plus, it's easier to change the ranges or descriptions as needed. For example:
create table sales_ranges
(
low_val number not null,
high_val number not null,
range_desc varchar2(100) not null
)
cache;
insert into sales_ranges values (0,1000,'$0-$1k');
insert into sales_ranges values (1001,10000,'$1k-$10k');
insert into sales_ranges values (10001,100000,'$10k-$100k');
insert into sales_ranges values (100001,1000000,'$100k-$1mm');
insert into sales_ranges values (1000001,10000000,'$1mm-$10mm');
insert into sales_ranges values (10000001,100000000,'$10mm-$100mm');
commit;
create table sales
(
id number,
total_sales number
);
insert into sales(id, total_sales)
-- some random values for testing
select level, trunc(dbms_random.value(1,10000000))
from dual
connect by level <= 100;
commit;
select id, total_sales, range_desc
from sales s
left outer join sales_ranges sr
on (s.total_sales between sr.low_val and sr.high_val)
order by s.id
;
Output (just first 3 rows):
ID TOTAL_SALES RANGE_DESC
1 5122380 $1mm-$10mm
2 347726 $100k-$1mm
3 6564700 $1mm-$10mm
I guess you could concatenate together some calculated values to dynamically create the bucket name:
select countofsa
, ((countofsa - 1)/5) * 5 + 1
, ((countofsa - 1)/5 + 1) * 5
, ((countofsa - 1)/5) * 5 + 1 || '-' || ((countofsa - 1)/5 + 1) * 5 AS diff
from nr_cf_212
Some output:
countofsa | ?column? | ?column? | diff
-----------+----------+----------+-------
1 | 1 | 5 | 1-5
2 | 1 | 5 | 1-5
3 | 1 | 5 | 1-5
4 | 1 | 5 | 1-5
5 | 1 | 5 | 1-5
6 | 6 | 10 | 6-10
7 | 6 | 10 | 6-10
8 | 6 | 10 | 6-10
9 | 6 | 10 | 6-10
10 | 6 | 10 | 6-10
11 | 11 | 15 | 11-15
(11 rows)
UPDATE from the comments, an Oracle example dynamically computing the range:
create table nr_cf_212(countofsa number);
insert into nr_cf_212 values(1);
insert into nr_cf_212 values(2);
insert into nr_cf_212 values(3);
insert into nr_cf_212 values(4);
insert into nr_cf_212 values(5);
insert into nr_cf_212 values(6);
insert into nr_cf_212 values(7);
insert into nr_cf_212 values(9);
insert into nr_cf_212 values(10);
insert into nr_cf_212 values(11);
select countofsa
, TRUNC((countofsa - 1)/5) * 5 + 1
, (TRUNC((countofsa - 1)/5) + 1) * 5
, TRUNC((countofsa - 1)/5) * 5 + 1 || '-' || (TRUNC((countofsa - 1)/5) + 1) * 5 AS diff
from nr_cf_212;
| COUNTOFSA | TRUNC((COUNTOFSA-1)/5)*5+1 | (TRUNC((COUNTOFSA-1)/5)+1)*5 | DIFF |
|-----------|----------------------------|------------------------------|-------|
| 1 | 1 | 5 | 1-5 |
| 2 | 1 | 5 | 1-5 |
| 3 | 1 | 5 | 1-5 |
| 4 | 1 | 5 | 1-5 |
| 5 | 1 | 5 | 1-5 |
| 6 | 6 | 10 | 6-10 |
| 7 | 6 | 10 | 6-10 |
| 9 | 6 | 10 | 6-10 |
| 10 | 6 | 10 | 6-10 |
| 11 | 11 | 15 | 11-15 |
I tried it with sqlfiddle (http://sqlfiddle.com/#!4/b922e/4).
I broke it into parts to show the "from" column, the "to" column and then the range. If you divide your number by 5 and look at the quotient and the remainder, you will see a pattern:
1/5 = 0 remainder 1
2/5 = 0 remainder 2
3/5 = 0 remainder 3
4/5 = 0 remainder 4
5/5 = 1 remainder 0
6/5 = 1 remainder 1
7/5 = 1 remainder 2
8/5 = 1 remainder 3
9/5 = 1 remainder 4
10/5 = 2 remainder 0
11/5 = 2 remainder 1
The range for the number runs "from" 5 times the quotient "to" 5 times the quotient plus the remainder - almost. Actually, everything is offset by 1, so take your number, subtract 1, and then do the division.
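As a quick worked check of that arithmetic (the sample value is chosen only for illustration), the TRUNC formula from the Oracle example above gives, for 11: TRUNC((11-1)/5) = 2, so the range runs from 2*5 + 1 = 11 to (2 + 1)*5 = 15.
select 11 as countofsa,
       trunc((11 - 1)/5) * 5 + 1   as range_from,  -- 2*5 + 1 = 11
       (trunc((11 - 1)/5) + 1) * 5 as range_to     -- 3*5     = 15
from dual;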