How can I make the index column start over after reaching the 5th row? I can't do that with a window function as there are no groups; I just need an index that counts up to a maximum of 5, like this:
date      index
01.01.21  1
02.01.21  2
03.01.21  3
04.01.21  4
05.01.21  5
06.01.21  1
07.01.21  2
and so on.
Appreciate any ideas.
You can use the solution below for that purpose.
First, rank the rows in your table with the row_number analytic function inside an inline view.
Then apply row_number again, this time with a partition by clause on TRUNC((rnb - 1)/5), to restart the numbering every 5 rows.
SELECT t."DATE"
, row_number()over(PARTITION BY TRUNC((rnb - 1)/5) ORDER BY rnb) as "INDEX"
FROM (
select "DATE", row_number()OVER(ORDER BY "DATE") rnb
from Your_table
) t
ORDER BY 1
;
demo on db<>fiddle
Your comment about using analytic functions is wrong; you can use analytic functions even when there are no "groups" (or "partitions"). Here you do need an analytic function, to order the rows (even if you don't need to partition them).
Here is a very simple solution, using just row_number(). Note the with clause, which is not part of the solution; I included it just for testing. In your real-life case, remove the with clause, and use your actual table and column names. The use of mod(... , 5) is pretty much obvious; it looks a little odd (subtracting 1, taking the modulus, then adding 1) because in Oracle we seem to count from 1 in all cases, instead of the much more natural counting from 0 common in other languages (like C).
Note that both date and index are reserved keywords, which shouldn't be used as column names. I used one common way to address that - I added an underscore at the end.
alter session set nls_date_format = 'dd.mm.rr';
with
sample_inputs (date_) as (
select date '2021-01-01' from dual union all
select date '2021-01-02' from dual union all
select date '2021-01-03' from dual union all
select date '2021-01-04' from dual union all
select date '2021-01-05' from dual union all
select date '2021-01-06' from dual union all
select date '2021-01-07' from dual
)
select date_, 1 + mod(row_number() over (order by date_) - 1, 5) as index_
from sample_inputs
;
DATE_ INDEX_
-------- ----------
01.01.21 1
02.01.21 2
03.01.21 3
04.01.21 4
05.01.21 5
06.01.21 1
07.01.21 2
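The arithmetic behind the repeating index is easy to sanity-check outside the database; here is a small Python sketch of the same 1 + mod(n - 1, 5) formula, using the sample dates above purely for illustration:

```python
# Repeating 1..5 index over rows ordered by date:
# n is the 1-based row number, index = 1 + (n - 1) % 5.
dates = ["01.01.21", "02.01.21", "03.01.21", "04.01.21",
         "05.01.21", "06.01.21", "07.01.21"]
indexed = [(d, 1 + (n - 1) % 5) for n, d in enumerate(dates, start=1)]
for d, i in indexed:
    print(d, i)
```

Subtracting 1 before the modulus and adding it back afterwards is exactly what shifts the 0-based cycle 0..4 into the 1-based cycle 1..5.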
You can combine MOD() with ROW_NUMBER() to get the index you want. For example:
select "DATE", 1 + mod(row_number() over (order by "DATE") - 1, 5) as idx from t
Related
I have a rather unusual query that I must write, and I'm hoping to do it all in one query, but I'm not sure if I can.
I have a list of Articles, each article has a unique ID. The application passes this ID to the stored procedure. Then, I am to retrieve that article, AND, the next & previous articles. So, the list is sorted by date, and I can get the next & previous.
I do this via LEAD & LAG. It works in one query. However, in some cases, that next or previous article has a NULL in one of the fields. If that field is NULL, I am basically to get the next article where that field is NOT NULL.
Then there is one more thing. The article passed from the application has a category assigned to it. The next & previous articles must be of the same category.
The query is pretty big now, but it works, getting the next & previous to the article ID passed, as the subquery sorts everything by date. But with these 2 new criteria, the NULL factor and the category factor, I do not see how it is possible to do this in one query.
Any thoughts? Or need some examples, or my existing query?
Thanks for all your time.
Oracle Setup:
CREATE TABLE articles ( id, category, value, dt ) AS
SELECT 1, 1, 1, DATE '2017-01-01' FROM DUAL UNION ALL
SELECT 2, 1, 2, DATE '2017-01-02' FROM DUAL UNION ALL -- Previous row
SELECT 3, 1, NULL, DATE '2017-01-03' FROM DUAL UNION ALL -- Ignored as value is null
SELECT 4, 1, 1, DATE '2017-01-04' FROM DUAL UNION ALL -- Chosen id
SELECT 5, 2, 3, DATE '2017-01-05' FROM DUAL UNION ALL -- Ignored as different category
SELECT 6, 1, 5, DATE '2017-01-06' FROM DUAL; -- Next row
Query:
SELECT *
FROM (
SELECT a.*,
LAG( CASE WHEN value IS NOT NULL THEN id END ) IGNORE NULLS OVER ( PARTITION BY category ORDER BY dt ) AS prv,
LEAD( CASE WHEN value IS NOT NULL THEN id END ) IGNORE NULLS OVER ( PARTITION BY category ORDER BY dt ) AS nxt
FROM articles a
)
WHERE :your_id IN ( id, nxt, prv )
AND ( id = :your_id OR value IS NOT NULL )
ORDER BY dt;
(:your_id is set to 4 in the example output below.)
Output:
ID CATEGORY VALUE DT PRV NXT
---------- ---------- ---------- ------------------- ---------- ----------
2 1 2 2017-01-02 00:00:00 1 4
4 1 1 2017-01-04 00:00:00 2 6
6 1 5 2017-01-06 00:00:00 4
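The IGNORE NULLS logic of LAG/LEAD can also be mimicked procedurally, which may help when verifying the query; a minimal Python sketch of the same idea, using the sample rows from the setup above (assuming the rows are already in dt order):

```python
# For each row, find the nearest earlier/later id in the same category
# whose value is not NULL -- the equivalent of LAG/LEAD ... IGNORE NULLS
# with PARTITION BY category ORDER BY dt.
rows = [  # (id, category, value), already sorted by dt
    (1, 1, 1), (2, 1, 2), (3, 1, None), (4, 1, 1), (5, 2, 3), (6, 1, 5),
]

def prv_nxt(rows):
    out = {}
    for i, (rid, cat, val) in enumerate(rows):
        prv = next((r[0] for r in reversed(rows[:i])
                    if r[1] == cat and r[2] is not None), None)
        nxt = next((r[0] for r in rows[i + 1:]
                    if r[1] == cat and r[2] is not None), None)
        out[rid] = (prv, nxt)
    return out
```

For id 4 this yields prv = 2 and nxt = 6, matching the query output: id 3 is skipped for its NULL value and id 5 for its different category.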
What I am attempting to do is find out the MAX number of tasks I may receive on a day during the next 6 months.
for example
task 1 runs 1-jan-16 and ends 10-jan-16
Task 2 runs 3-Jan-16 and ends 15-jan-16
task 3 runs 6-Jan-16 and ends 10-Jan-16
Task 4 runs 9-Jan-16 and ends 20-Jan-16
So in this example there are 4 tasks that are open between 1-Jan and 10-Jan, so I want the outcome to be 4 in this scenario. The reason is that I'm displaying them in a Gantt chart, so they'll all be underneath each other.
All I have to work with so far is:
select schedule_start_date, am.AC, count(wo) as wo_count from ac_master am
left outer join wo on wo.ac = am.ac and ac_type = '190'
where wo.status = 'OPEN'
group by am.ac,schedule_start_date
This will show the count per day but some of these may overlap.
Is there any way to do what I am trying to accomplish?
If you just want the count for each scheduled group at a given point in time, then you can just use BETWEEN with the start and end dates:
SELECT schedule_start_date,
am.AC,
COUNT(*) AS theCount
FROM ac_master am
LEFT OUTER JOIN wo
ON wo.ac = am.ac AND
ac_type = '190'
WHERE wo.status = 'OPEN' AND
DATE '2016-01-10' BETWEEN schedule_start_date AND schedule_end_date
GROUP BY schedule_start_date,
am.ac
Regardless of how you develop a set of rows with start_date and end_date, here is a method to show how the task count changes over time. Each date is the first date when the task count changes from the previous value to the new one.
If you only need max(tasks), that's a simple matter of grouping by whatever is needed. (Or, in Oracle 12, you can order by tasks and use the new fetch first feature.) Notice also the partition by clause - if you need different groups for different categories (for example: for different "departments" etc.) you can use this clause so that the computations are done separately for each group, all in one pass over the data.
with
intervals ( start_date, end_date ) as (
select date '2016-01-01', date '2016-01-10' from dual union all
select date '2016-01-03', date '2016-01-15' from dual union all
select date '2016-01-06', date '2016-01-10' from dual union all
select date '2016-01-09', date '2016-01-20' from dual
),
u ( dt, flag ) as (
select start_date , 1 from intervals
union all
select end_date + 1, -1 from intervals
)
select distinct dt, sum(flag) over (partition by null order by dt) as tasks
from u
order by dt;
DT TASKS
---------- ---------
2016-01-01 1
2016-01-03 2
2016-01-06 3
2016-01-09 4
2016-01-11 2
2016-01-16 1
2016-01-21 0
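The same +1/-1 sweep can be expressed in a few lines of Python, which may help when checking the query's output; a sketch over the same four sample intervals:

```python
from collections import Counter
from datetime import date, timedelta

intervals = [
    (date(2016, 1, 1), date(2016, 1, 10)),
    (date(2016, 1, 3), date(2016, 1, 15)),
    (date(2016, 1, 6), date(2016, 1, 10)),
    (date(2016, 1, 9), date(2016, 1, 20)),
]
# +1 at each start date, -1 the day after each end date,
# then a running sum in date order -- the open-task count at each change point.
deltas = Counter()
for start, end in intervals:
    deltas[start] += 1
    deltas[end + timedelta(days=1)] -= 1
tasks, running = {}, 0
for d in sorted(deltas):
    running += deltas[d]
    tasks[d] = running
max_open = max(tasks.values())
```

Here max_open is the answer to the original question (4, reached on 2016-01-09), and tasks reproduces the change-point table shown above.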
I have a field in my table which is of type NUMBER. It has values like those below:
COL1
1
1.1
2
2.111
3
4.5
Now I have a request to increment this to the next highest number of the same kind.
If the value is whole, say 1, I need to increment it to 2. If the value is decimal, say 1.1, I need to increment it to 1.2.
Any pointers on how to do it would be greatly helpful.
Building on BluShadow's approach (Jul 27, 2010, https://community.oracle.com/thread/1107846):
There may still be a simpler way, but this seems to work.
select col1,
       case when floor(col1) = col1
            then col1 + 1
            else power(10, -1 *
                   regexp_count(regexp_replace(col1, '[0-9]*\.([0-9])', '\1'), '[0-9]')) + col1
       end as nextNum
from (
  select 1     as col1 from dual union all
  select 1.1   as col1 from dual union all
  select 2     as col1 from dual union all
  select 2.111 as col1 from dual union all
  select 3     as col1 from dual union all
  select 4.5   as col1 from dual
) b
What this does:
It uses a case expression to compare the floor of col1 to col1 (essentially checking whether there are any decimals); if there are none, it simply adds 1.
If there are decimals, it counts how many, uses the power function with base 10 to find the decimal position to add one unit to, and adds that back to col1.
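The same "one unit of the last decimal place" increment can be sketched in Python with the Decimal type, which preserves the scale that would be lost reading the value as a float (the function name and string input are illustrative, not part of the original solution):

```python
from decimal import Decimal

def next_num(x: str) -> Decimal:
    # Whole numbers step by 1; decimals step by one unit of the last place.
    d = Decimal(x)
    exp = d.as_tuple().exponent  # 0 for "2", -1 for "1.1", -3 for "2.111"
    return d + (Decimal(1) if exp == 0 else Decimal(1).scaleb(exp))
```

So next_num("2.111") gives Decimal("2.112") and next_num("1") gives Decimal("2"), mirroring the regexp-based SQL above.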
Hi, I am working on an Oracle DB. The DB has one date column; it contains five years of dates on which the model will be refreshed. For example:
DATE_TABLE
DATE
------------
1-jan-2013
15-jan-2013
31-jan-2013
6-feb-2013
etc.........
Now suppose today's date is 13th Jan 2013. The next refresh date will be 15th Jan, and the previous refresh date is 1st Jan. Is there any way to retrieve these two dates without using PL/SQL, with regular SELECT queries? Thanks in advance.
There are two functions: LAG() (lets you reference the previous record) and LEAD() (lets you reference the next record). Here is an example:
SQL> with t1(col) as(
2 select '1-jan-2013' from dual union all
3 select '15-jan-2013' from dual union all
4 select '31-jan-2013' from dual union all
5 select '6-feb-2013' from dual
6 )
7 select col as current_value
8 , lag(col, 1) over(order by col) as prev_value
9 , lead(col, 1) over(order by col)as next_value
10 from t1
11 ;
Result:
CURRENT_VALUE PREV_VALUE NEXT_VALUE
------------- ----------- -----------
1-jan-2013 NULL 15-jan-2013
15-jan-2013 1-jan-2013 31-jan-2013
31-jan-2013 15-jan-2013 6-feb-2013
6-feb-2013 31-jan-2013 NULL
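For what it's worth, the same LAG/LEAD query runs unchanged in SQLite (3.25+), which can be handy for quick experiments; a sketch using Python's stdlib sqlite3 module (ISO-formatted date strings, so the text ordering is also chronological):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE date_table (d TEXT)")
con.executemany("INSERT INTO date_table VALUES (?)",
                [("2013-01-01",), ("2013-01-15",),
                 ("2013-01-31",), ("2013-02-06",)])
# LAG/LEAD with a default offset of 1, exactly as in the Oracle example.
rows = con.execute("""
    SELECT d,
           LAG(d)  OVER (ORDER BY d) AS prev_value,
           LEAD(d) OVER (ORDER BY d) AS next_value
    FROM date_table
""").fetchall()
for r in rows:
    print(r)
```

The first and last rows get None (NULL) for prev_value and next_value respectively, matching the result table above.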
We can simply use the queries below, plain and simple. No need for PL/SQL (note that DATE is a reserved word, so a column with that name has to be quoted):
SELECT MIN("DATE") FROM DATE_TABLE WHERE "DATE" > SYSDATE;   -- next refresh date
SELECT MAX("DATE") FROM DATE_TABLE WHERE "DATE" <= SYSDATE;  -- previous refresh date
The question I need to answer is this "What is the maximum number of page requests we have ever received in a 60 minute period?"
I have a table that looks similar to this:
date_page_requested date;
page varchar(80);
I'm looking for the MAX count of rows in any 60 minute timeslice.
I thought analytic functions might get me there but so far I'm drawing a blank.
I would love a pointer in the right direction.
You have some options in the other answers that will work; here is one that uses Oracle's "Windowing Functions with Logical Offset" feature instead of joins or correlated subqueries.
First the test table:
Wrote file afiedt.buf
1 create table t pctfree 0 nologging as
2 select date '2011-09-15' + level / (24 * 4) as date_page_requested
3 from dual
4* connect by level <= (24 * 4)
SQL> /
Table created.
SQL> insert into t values (to_date('2011-09-15 11:11:11', 'YYYY-MM-DD HH24:Mi:SS'));
1 row created.
SQL> commit;
Commit complete.
T now contains a row every quarter hour for a day, with one additional row at 11:11:11 AM. The query proceeds in three steps. Step 1 is, for every row, to get the number of rows that fall within the hour after that row's time:
1 with x as (select date_page_requested
2 , count(*) over (order by date_page_requested
3 range between current row
4 and interval '1' hour following) as hour_count
5 from t)
Then rank the rows by hour_count (ties broken by earliest start):
6 , y as (select date_page_requested
7 , hour_count
8 , row_number() over (order by hour_count desc, date_page_requested asc) as rn
9 from x)
And finally select the earliest row that has the greatest number of following rows.
10 select to_char(date_page_requested, 'YYYY-MM-DD HH24:Mi:SS')
11 , hour_count
12 from y
13* where rn = 1
If multiple 60 minute windows tie in hour count, the above will only give you the first window.
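The same "count the rows within one hour after each row" idea can be sketched in Python over a sorted list with bisect; the inclusive upper bound matches RANGE ... INTERVAL '1' HOUR FOLLOWING, and the strict comparison keeps the earliest window among ties, as the query does:

```python
from bisect import bisect_right
from datetime import datetime, timedelta

def busiest_hour(times):
    """times: sorted request timestamps. For each one, count the requests
    in [t, t + 1 hour] and keep the earliest window with the max count."""
    best_start, best_count = None, 0
    for i, t in enumerate(times):
        # Index just past the last timestamp <= t + 1 hour.
        j = bisect_right(times, t + timedelta(hours=1))
        if j - i > best_count:
            best_start, best_count = t, j - i
    return best_start, best_count
```

On the quarter-hour test data above (plus the extra 11:11:11 row), this finds the window starting at 10:15 with 6 requests.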
This should give you what you need, the first row returned should have
the hour with the highest number of pages.
select number_of_pages
,hour_requested
from (select to_char(date_page_requested,'dd/mm/yyyy hh24') hour_requested
      ,count(*) number_of_pages
      from pages
      group by to_char(date_page_requested,'dd/mm/yyyy hh24')) p
order by number_of_pages desc
How about something like this?
SELECT TOP 1
ranges.date_start,
COUNT(data.page) AS Tally
FROM (SELECT DISTINCT
date_page_requested AS date_start,
DATEADD(HOUR,1,date_page_requested) AS date_end
FROM #Table) ranges
JOIN #Table data
ON data.date_page_requested >= ranges.date_start
AND data.date_page_requested < ranges.date_end
GROUP BY ranges.date_start
ORDER BY Tally DESC
For PostgreSQL, I'd first probably write something like this for a "window" aligned on the minute. You don't need OLAP windowing functions for this.
select w.ts,
date_trunc('minute', w.ts) as hour_start,
date_trunc('minute', w.ts) + interval '1' hour as hour_end,
(select count(*)
from weblog
where ts between date_trunc('minute', w.ts) and
(date_trunc('minute', w.ts) + interval '1' hour) ) as num_pages
from weblog w
group by ts, hour_start, hour_end
order by num_pages desc
Oracle also has a trunc() function, but I'm not sure of the format. I'll either look it up in a minute, or leave to see a friend's burlesque show.
WITH ranges AS
( SELECT
date_page_requested AS StartDate,
date_page_requested + (1/24) AS EndDate,
ROW_NUMBER() OVER(ORDER BY date_page_requested) AS RowNo
FROM
#Table
)
SELECT
a.StartDate AS StartDate,
MAX(b.RowNo) - a.RowNo + 1 AS Tally
FROM
ranges a
JOIN
ranges b
ON a.StartDate <= b.StartDate
AND b.StartDate < a.EndDate
GROUP BY a.StartDate
, a.RowNo
ORDER BY Tally DESC
or:
WITH ranges AS
( SELECT
date_page_requested AS StartDate,
date_page_requested + (1/24) AS EndDate,
ROW_NUMBER() OVER(ORDER BY date_page_requested) AS RowNo
FROM
#Table
)
SELECT
a.StartDate AS StartDate,
( SELECT MIN(b.RowNo) - a.RowNo
FROM ranges b
WHERE b.StartDate >= a.EndDate
) AS Tally
FROM
ranges a
ORDER BY Tally DESC