Populating future dates in an Oracle table

I have two tables, product and date.
Let's say my product table has data up to yesterday, i.e. 05/31/2018.
I am trying to populate a season table where value = (value on the same day last year / value on the previous day last year), with Ch(a) and P(pen). I can do this calculation up to 5/31/2018, because that is where the product data ends. My aim is to also get the calculation for 06/01/2018 through 12/31/2018. Since the prior-year data needed to calculate those future dates already exists in the product table, how do I generate rows for these future dates?
I'd appreciate any help.
Thank you!

You can generate a series of dates using a CONNECT BY subquery like this one:
SELECT Start_date + level - 1 as my_date
FROM (
  SELECT date '2018-01-01' as Start_date FROM dual
)
CONNECT BY Start_date + level - 1 <= date '2018-01-05'
Demo: http://www.sqlfiddle.com/#!4/072359/1
| MY_DATE |
|----------------------|
| 2018-01-01T00:00:00Z |
| 2018-01-02T00:00:00Z |
| 2018-01-03T00:00:00Z |
| 2018-01-04T00:00:00Z |
| 2018-01-05T00:00:00Z |
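
From there, you can left join the generated calendar onto your product data to compute the future rows. A minimal sketch, assuming a hypothetical prod(p_date, val) layout and the season formula from the question (value on the same day last year divided by value on the previous day last year); adjust the names and add your Ch(a)/P(pen) join keys as needed:

-- Generate every date of 2018, then pull last year's values,
-- which already exist in prod, to compute the future season values.
SELECT c.my_date,
       ly.val / NULLIF(ly_prev.val, 0) AS season_value  -- NULLIF guards against divide-by-zero
FROM (
  SELECT date '2018-01-01' + level - 1 AS my_date
  FROM dual
  CONNECT BY date '2018-01-01' + level - 1 <= date '2018-12-31'
) c
LEFT JOIN prod ly      ON ly.p_date      = ADD_MONTHS(c.my_date, -12)
LEFT JOIN prod ly_prev ON ly_prev.p_date = ADD_MONTHS(c.my_date, -12) - 1
ORDER BY c.my_date;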

Related

Impala Query to get next date

I have two Impala tables.
The first table, T1 (there are additional columns, but I am only interested in date and day_type):
date day_type
04/01/2020 Weekday
04/02/2020 Weekday
04/03/2020 Weekday
04/04/2020 Weekend
04/05/2020 Weekend
04/06/2020 Weekday
2nd table T2:
process date status
A 04/01/2020 finished
A 04/02/2020 finished
A 04/03/2020 finished
A 04/03/2020 run_again
Using Impala queries, I have to get the maximum date from the second table, T2, and get its status. According to the above table, 04/03 is the maximum date.
If the status on 04/03 is finished, then my query should return the next available weekday date from T1, which is 04/06/2020.
But if the status is run_again, then the query should return the same date.
In the above table, 04/03 has run_again, so when my query runs the output should be 04/03/2020 and not 04/06/2020.
Please note that more than one status is possible for a date. For example, 04/03/2020 can have a row with finished as its status and another with run_again. In that case run_again should be prioritized and the query should give 04/03/2020 as the output date.
What I tried so far:
I ran a subquery on the second table to get the maximum date and its status. I tried to run a CASE in my main query, with T1 as a subselect in the CASE statement, but it's not working.
Is it possible to achieve this through an Impala query?
One way to do this is to create a CTE from table T1 instead of a correlated subquery. Something like:
WITH T3 as (
  select t.date date, min(x.date) next_workday
  from T1 t join T1 x
    on t.date < x.date
  where x.day_type = 'Weekday'
  group by t.date
)
select T2.process, T2.date run_date, T2.status,
       case when T2.status = 'finished' then T3.next_workday
            else T3.date
       end next_run_date
from T2 join T3
  on T2.date = T3.date
order by T2.process, T2.date;
+---------+------------+-----------+---------------+
| process | run_date | status | next_run_date |
+---------+------------+-----------+---------------+
| A | 2020-04-01 | finished | 2020-04-02 |
| A | 2020-04-02 | finished | 2020-04-03 |
| A | 2020-04-03 | run again | 2020-04-03 |
+---------+------------+-----------+---------------+
You can then select max from the result instead of ordering.
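For instance, to collapse that result to a single date for the latest run (a sketch, not part of the original answer; taking MIN over the latest run date's rows also implements the run_again priority, since the same date sorts before the next workday):

WITH T3 as (
  select t.date date, min(x.date) next_workday
  from T1 t join T1 x on t.date < x.date
  where x.day_type = 'Weekday'
  group by t.date
), R as (
  select T2.date run_date,
         case when T2.status = 'finished' then T3.next_workday
              else T3.date end next_run_date
  from T2 join T3 on T2.date = T3.date
)
select min(next_run_date) as next_run_date
from R
where run_date = (select max(run_date) from R);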
There might be multiple solutions, and even some better ones considering performance, but this is my approach. Hope it helps.
select case when status = 'run_again' then t2_date else t1_date end as needed_date
from t2
cross join (
  select t1_date
  from t1
  where t1.day_type = 'Weekday'
    and t1_date > (select max(t2_date) from t2)
  order by t1.t1_date
  limit 1
) a
where t2_date = (select max(t2_date) from t2);

What is the best way to work around data?

I am trying to display data as such:
In our database we have events (each with a unique ID) and a start date. The events do not overlap, and each one starts on the date the last one ended. However, we don't have an 'end date' in the database.
I have to feed the data into another system so that it shows event ID, start date, and end date (which is just the next start date).
I want to avoid creating a custom view as that's really frowned upon here for this database. So I'm wondering if there's a good way to do this in a query.
Essentially it would be:
EventA | Date1 | Date2
EventB | Date2 | Date3
EventC | Date3 | Date4
The events are planned years in advance and I only need the next few months pulled by the query, so there is no worry about running out of 'next event start dates'. In case it matters, this query will be part of a web service call.
The basic pseudo code for event and date would be:
select Event.ID, Event.StartDate
from Event
where Event.StartDate > sysdate and Event.StartDate < sysdate+90
Essentially, I want to take the next row's Event.StartDate and make it the current row's Event.EndDate.
Use the LEAD analytic function:
Oracle Setup:
A table with 10 rows:
CREATE TABLE Event ( ID, StartDate ) AS
SELECT LEVEL, TRUNC( SYSDATE ) + LEVEL
FROM DUAL
CONNECT BY LEVEL <= 10;
Query:
select ID,
StartDate,
LEAD( StartDate ) OVER ( ORDER BY StartDate ) AS EndDate
from Event
where StartDate > sysdate and StartDate < sysdate+90
Output:
ID | STARTDATE | ENDDATE
-: | :-------- | :--------
1 | 22-JUN-19 | 23-JUN-19
2 | 23-JUN-19 | 24-JUN-19
3 | 24-JUN-19 | 25-JUN-19
4 | 25-JUN-19 | 26-JUN-19
5 | 26-JUN-19 | 27-JUN-19
6 | 27-JUN-19 | 28-JUN-19
7 | 28-JUN-19 | 29-JUN-19
8 | 29-JUN-19 | 30-JUN-19
9 | 30-JUN-19 | 01-JUL-19
10 | 01-JUL-19 | null
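One caveat worth noting (my addition, not part of the original answer): because LEAD only sees rows that survive the WHERE clause, the last event inside the 90-day window gets a null EndDate even though a later event exists in the table. If that matters, compute LEAD over the full table first and filter afterwards:

SELECT ID, StartDate, EndDate
FROM (
  SELECT ID,
         StartDate,
         LEAD( StartDate ) OVER ( ORDER BY StartDate ) AS EndDate
  FROM   Event
)
WHERE StartDate > sysdate AND StartDate < sysdate + 90;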

In HiveQL, what is the most elegant/performant way of calculating an average value if some of the data is implicitly not present?

In HiveQL, what is the most elegant and performant way of calculating an average value when there are 'gaps' in the data, with implicit repeated values between them? That is, consider a table with the following data:
+----------+----------+----------+
| Employee | Date | Balance |
+----------+----------+----------+
| John | 20181029 | 1800.2 |
| John | 20181105 | 2937.74 |
| John | 20181106 | 3000 |
| John | 20181110 | 1500 |
| John | 20181119 | -755.5 |
| John | 20181120 | -800 |
| John | 20181121 | 1200 |
| John | 20181122 | -400 |
| John | 20181123 | -900 |
| John | 20181202 | -1300 |
+----------+----------+----------+
If I try to calculate a simple average of the November rows, it will return ~722.78, but the average should take into account that the days not shown have the same balance as the previous register. In the above data, John had 1800.2 between 20181101 and 20181104, for example.
Assuming that the table always has exactly one row per date/balance change, and given that I cannot change how this data is stored (and probably shouldn't, since it would be a waste of storage to write rows for days with unchanged balances), I've been tinkering with getting the average from a select with subqueries for all the days in the queried month, returning NULL for the absent days, and then using CASE to fetch the balance from the previous available date in reverse order. All of this just to avoid writing temporary tables.
Step 1: Original Data
The first step is to recreate a table with the original data. Let's say the original table is called daily_employee_balance.
daily_employee_balance
use default;
drop table if exists daily_employee_balance;
create table if not exists daily_employee_balance (
employee_id string,
employee string,
iso_date date,
balance double
);
Insert Sample Data in original table daily_employee_balance
insert into table daily_employee_balance values
('103','John','2018-10-25',1800.2),
('103','John','2018-10-29',1125.7),
('103','John','2018-11-05',2937.74),
('103','John','2018-11-06',3000),
('103','John','2018-11-10',1500),
('103','John','2018-11-19',-755.5),
('103','John','2018-11-20',-800),
('103','John','2018-11-21',1200),
('103','John','2018-11-22',-400),
('103','John','2018-11-23',-900),
('103','John','2018-12-02',-1300);
Step 2: Dimension Table
You will need a dimension table containing a calendar (a table with all the possible dates); call it dimension_date. Having a calendar table like this is a normal industry standard, and you can probably download sample data for one from the internet.
use default;
drop table if exists dimension_date;
create external table dimension_date(
date_id int,
iso_date string,
year string,
month string,
month_desc string,
end_of_month_flg string
);
Insert some sample data for the entire month of Nov 2018:
insert into table dimension_date values
(6880,'2018-11-01','2018','2018-11','November','N'),
(6881,'2018-11-02','2018','2018-11','November','N'),
(6882,'2018-11-03','2018','2018-11','November','N'),
(6883,'2018-11-04','2018','2018-11','November','N'),
(6884,'2018-11-05','2018','2018-11','November','N'),
(6885,'2018-11-06','2018','2018-11','November','N'),
(6886,'2018-11-07','2018','2018-11','November','N'),
(6887,'2018-11-08','2018','2018-11','November','N'),
(6888,'2018-11-09','2018','2018-11','November','N'),
(6889,'2018-11-10','2018','2018-11','November','N'),
(6890,'2018-11-11','2018','2018-11','November','N'),
(6891,'2018-11-12','2018','2018-11','November','N'),
(6892,'2018-11-13','2018','2018-11','November','N'),
(6893,'2018-11-14','2018','2018-11','November','N'),
(6894,'2018-11-15','2018','2018-11','November','N'),
(6895,'2018-11-16','2018','2018-11','November','N'),
(6896,'2018-11-17','2018','2018-11','November','N'),
(6897,'2018-11-18','2018','2018-11','November','N'),
(6898,'2018-11-19','2018','2018-11','November','N'),
(6899,'2018-11-20','2018','2018-11','November','N'),
(6900,'2018-11-21','2018','2018-11','November','N'),
(6901,'2018-11-22','2018','2018-11','November','N'),
(6902,'2018-11-23','2018','2018-11','November','N'),
(6903,'2018-11-24','2018','2018-11','November','N'),
(6904,'2018-11-25','2018','2018-11','November','N'),
(6905,'2018-11-26','2018','2018-11','November','N'),
(6906,'2018-11-27','2018','2018-11','November','N'),
(6907,'2018-11-28','2018','2018-11','November','N'),
(6908,'2018-11-29','2018','2018-11','November','N'),
(6909,'2018-11-30','2018','2018-11','November','Y');
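
If you would rather generate those rows than hand-type them, here is a sketch using Hive's space/split/posexplode built-ins (the date_id arithmetic simply continues the sample ids above; adjust the start date and row count for other months):

-- split(space(29), ' ') yields an array of 30 empty strings,
-- so posexplode produces pos = 0..29, one per November date.
insert into table dimension_date
select 6880 + e.pos                               as date_id,
       date_add('2018-11-01', e.pos)              as iso_date,
       '2018'                                     as year,
       '2018-11'                                  as month,
       'November'                                 as month_desc,
       case when e.pos = 29 then 'Y' else 'N' end as end_of_month_flg
from (select 'x' as dummy) d
lateral view posexplode(split(space(29), ' ')) e as pos, val;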
Step 3: Fact Table
Create a fact table from the original table. In normal practice, you ingest the data to hdfs/hive, then process the raw data and create a table of historical data that you keep inserting into incrementally. You can look into data warehousing to get the proper definition, but I call this a fact table: f_employee_balance.
This will re-create the original table with the missing dates filled in, populating each missing balance with the earlier known balance.
--inner query to get all the possible dates
--outer self join query will populate the missing dates and balance
drop table if exists f_employee_balance;
create table f_employee_balance
stored as orc tblproperties ("orc.compress"="SNAPPY") as
select q1.employee_id, q1.iso_date,
       nvl(last_value(r.balance, true)  --initial dates get a 0 balance
           over (partition by q1.employee_id
                 order by q1.iso_date
                 rows between unbounded preceding and current row), 0) as balance,
       month, year
from (
  select distinct
         r.employee_id,
         d.iso_date as iso_date,
         d.month, d.year
  from daily_employee_balance r, dimension_date d
) q1
left outer join daily_employee_balance r
  on (q1.employee_id = r.employee_id) and (q1.iso_date = r.iso_date);
Step 4: Analytics
The query below will give you the true average by month:
select employee_id, monthly_avg, month, year from (
select employee_id,
row_number() over (partition by employee_id,year,month) as row_num,
avg(balance) over (partition by employee_id,year,month) as monthly_avg, month, year from
f_employee_balance)q1
where row_num = 1
order by year, month;
Step 5: Conclusion
You could have combined steps 3 and 4; that would save you from creating an extra table. When you are in the big data world, you don't worry much about wasting extra disk space or development time: you can easily add another disk or node and automate the process using workflows. For more information, please look into data warehousing concepts and Hive analytical queries.
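For reference, here is a sketch of that combination (steps 3 and 4 folded into one query with no intermediate table; this is my reading of the suggestion, not code from the original answer):

select employee_id, monthly_avg, month, year from (
  select employee_id,
         row_number() over (partition by employee_id, year, month) as row_num,
         avg(balance) over (partition by employee_id, year, month) as monthly_avg,
         month, year
  from (
    -- same gap-filling logic as step 3, inlined
    select q1.employee_id, q1.iso_date,
           nvl(last_value(r.balance, true)
               over (partition by q1.employee_id order by q1.iso_date
                     rows between unbounded preceding and current row), 0) as balance,
           q1.month, q1.year
    from (select distinct r.employee_id, d.iso_date, d.month, d.year
          from daily_employee_balance r, dimension_date d) q1
    left outer join daily_employee_balance r
      on q1.employee_id = r.employee_id and q1.iso_date = r.iso_date
  ) f
) q2
where row_num = 1
order by year, month;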

oracle query to get max hour every day, and corresponding row values

I'm having a hard time creating a query to do the following:
I have this table, called LOG:
ID | INSERT_TIME | LOG_VALUE
----------------------------------------
1 | 2013-04-29 18:00:00.000 | 160473
2 | 2013-04-29 21:00:00.000 | 154281
3 | 2013-04-30 09:00:00.000 | 186552
4 | 2013-04-30 14:00:00.000 | 173145
5 | 2013-04-30 14:30:00.000 | 102235
6 | 2013-05-01 11:00:00.000 | 201541
7 | 2013-05-01 23:00:00.000 | 195234
What I want to do is build a query that returns, for each day, the last values inserted (using the max value of INSERT_TIME). I'm only interested in the date part of that column, and in the column LOG_VALUE. So, this would be my resultset after running the query:
2013-04-29 154281
2013-04-30 102235
2013-05-01 195234
I guess that I need to use GROUP BY over the INSERT_TIME column, along with MAX() function, but by doing that, I can't seem to get the LOG_VALUE. Can anyone help me on this, please?
(I'm on Oracle 10g)
SELECT trunc(insert_time),
log_value
FROM (
SELECT insert_time,
log_value,
rank() over (partition by trunc(insert_time)
order by insert_time desc) rnk
FROM log)
WHERE rnk = 1
is one option. This uses the analytic function rank to identify the row with the latest insert_time on each day.
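An equivalent single-pass alternative (my addition, not from the original answer) is Oracle's MAX ... KEEP (DENSE_RANK LAST) aggregate, which is also available on 10g:

SELECT trunc(insert_time) AS log_day,
       -- for each day, keep the log_value of the row with the latest insert_time
       MAX(log_value) KEEP (DENSE_RANK LAST ORDER BY insert_time) AS log_value
FROM   log
GROUP  BY trunc(insert_time)
ORDER  BY log_day;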

Create a table of dates in rows

I have an Oracle database, and I want to create a table with two columns: one containing an id and the other containing incremented dates in rows.
I want to specify the limit dates (from and to) in my PL/SQL code, and the code will generate the rows between those two limits.
This is an example of the output:
+-----+--------------------+
| id |dates |
+-----+--------------------+
| 1 |01/02/2011 04:00:00 |
+-----+--------------------+
| 2 |01/02/2011 05:00:00 |
+-----+--------------------+
| 3 |01/02/2011 06:00:00 |
+-----+--------------------+
| 4 |01/02/2011 07:00:00 |
+-----+--------------------+
| 5 |01/02/2011 08:00:00 |
....
...
..
| 334 |05/03/2011 23:00:00 |
+-----+--------------------+
You haven't exactly deluged us with details, but this is the sort of construct you want:
select level as id
     , &&start_date + ((level-1) * (1/24)) as dates
from dual
connect by level <= ((&&end_date - &&start_date)*24)
/
This assumes your input values are whole days. You will need to adjust the maths if your start or end date contains a time component.
You would need to start with a date baseline:
vBaselineDate := TRUNC(SYSDATE);
OR
vBaselineDate := TO_DATE('28-03-2013 12:00:00', 'DD-MM-YYYY HH:MI:SS');
Then increment the baseline by adding fractions of a day, depending on how fine-grained you want the increments to be, e.g. 1 minute, 1 hour, etc.
FOR i IN 1..334 LOOP
INSERT INTO mytable
(id, dates)
VALUES
(i, (vBaselineDate + i/24));
END LOOP;
COMMIT;
1/24 = 1 hour; 1/1440 = 1 minute.
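As a side note, the loop can be replaced with a single set-based insert built on the same CONNECT BY idea as the first answer (a sketch, assuming the mytable(id, dates) layout above and the question's example range of 334 hourly rows):

INSERT INTO mytable (id, dates)
SELECT level,
       TO_DATE('01-02-2011 04:00:00', 'DD-MM-YYYY HH24:MI:SS') + (level - 1) / 24
FROM   dual
CONNECT BY level <= 334;
COMMIT;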
Hope this helps.
