Oracle: rounding sysdate down to the nearest minute divisible by 30

I have to convert sysdate by rounding it down to the nearest minute that is divisible by 30. For example:
If sysdate is between 2020-10-14 09:00:00 and 2020-10-14 09:29:59 then return 2020-10-14 09:00:00
If sysdate is between 2020-10-14 09:30:00 and 2020-10-14 09:59:59 then return 2020-10-14 09:30:00
How can I get my expected result in Oracle?

The minutes logic here is:
get the minutes,
divide by 30 and truncate (which gives 0 or 1),
multiply by 30/1440 to get 0 or 30 minutes as a fraction of a day,
and then add that to the truncated hour of the day.
SQL> with d as
2 ( select to_date('09:27','HH:MI') x from dual
3 union all
4 select to_date('09:37','HH:MI') x from dual
5 )
6 select x, trunc(x,'HH') + 30*trunc(to_number(to_char(x,'MI'))/30)/1440
7 from d;
X TRUNC(X,'HH')+30*TR
------------------- -------------------
01/10/2020 09:27:00 01/10/2020 09:00:00
01/10/2020 09:37:00 01/10/2020 09:30:00
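Applied directly to sysdate, the same expression would look like this (a minimal sketch of the approach above, nothing new beyond substituting sysdate for the sample dates):

select sysdate,
       trunc(sysdate, 'HH') + 30 * trunc(to_number(to_char(sysdate, 'MI')) / 30) / 1440 as rounded_down
from dual;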

Related

Sum values by date range in multiple columns

I need to sum values by date range in multiple columns. Every date range is one week of a month. It can be shorter than 7 days if it is the start of the month or the end of the month.
For example, I have dates for February:
my_user my_date my_value
A 01.02.2019 100
A 02.02.2019 150
B 01.02.2019 90
Z 28.02.2019 120
How can I get the result in a date-range format such as the one below?
my_user 01/02-03/02 04/02-10/02 11/02-17/02 18/02-24/02 25/02-28/02
A 250 0 0 0 0
B 90 0 0 0 0
Z 0 0 0 0 120
Any suggestions? Thanks!
You can do this:
select *
from (
  select to_char(dt, 'iw') - to_char(trunc(dt, 'month'), 'iw') + 1 wk,
         usr, val
  from t
)
pivot (sum(val) for wk in (1, 2, 3, 4, 5, 6))
Demo:
USR 1 2 3 4 5 6
--- ---------- ---------- ---------- ---------- ---------- ----------
A 250
B 90
Z 120
The header numbers are the weeks of the month. The maximum may be 6 if the month starts at the end of a week and is longer than 28 days.
In a similar way you can find the first and last day of each week if needed, but you can't put them in the headers, or at least not easily.
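For reference, a self-contained sketch using the sample data from the question (the table name t and columns usr, dt, val are the ones used in the query above):

with t (usr, dt, val) as (
  select 'A', date '2019-02-01', 100 from dual union all
  select 'A', date '2019-02-02', 150 from dual union all
  select 'B', date '2019-02-01',  90 from dual union all
  select 'Z', date '2019-02-28', 120 from dual
)
select *
from (
  select to_char(dt, 'iw') - to_char(trunc(dt, 'month'), 'iw') + 1 wk,
         usr, val
  from t
)
pivot (sum(val) for wk in (1, 2, 3, 4, 5, 6))
order by usr;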
Edit:
Is it possible to define a certain date range with pivot, simply as two dates? For example, I need to sum values from 5 December 2018 to 4 January 2019, 5 January 2019 to 4 February 2019, 5 March 2019 to 4 April 2019.
Yes. Everything depends on how we count the first and subsequent weeks. Here:
to_char(dt, 'iw') - to_char(trunc(dt, 'month'), 'iw') + 1
I am subtracting the week-of-year of the first day of the month from the week-of-year of the given date. You can simply replace this second value with your starting date, either by hardcoding it in the query, by passing it as a parameter, or by finding the minimum date first in a subquery:
(to_char(dt, 'iw') - to_char(date '2019-03-05', 'iw')) + 1
or
(to_char(dt, 'iw') - to_char((select min(dt) from data), 'iw')) + 1
Edit 2:
There is one problem, however, when the user-defined period spans two or more years. to_char(..., 'iw') works fine within one year, but across two years we get values 51, 52, 01, 02... We have to deal with this somehow, for instance like here:
with t(dt1, dt2) as (select date '2018-12-16', date '2019-01-15' from dual)
select min(dt) mnd, max(dt) mxd, iw, row_number() over (order by min(dt)) rn
from (select dt1 + level - 1 dt, to_char(dt1 + level - 1, 'iw') iw
      from t connect by level - 1 <= dt2 - dt1)
group by iw
which gives us:
MND MXD IW RN
----------- ----------- -- ----------
2018-12-16 2018-12-16 50 1
2018-12-17 2018-12-23 51 2
2018-12-24 2018-12-30 52 3
2018-12-31 2019-01-06 01 4
2019-01-07 2019-01-13 02 5
2019-01-14 2019-01-15 03 6
In the first line we have the user-defined date range. Then I wrote a hierarchical query looping through all dates in the range and assigning a week to each, grouped by this week, found the start and end dates of each week, and assigned a row number rn which can be used later by the pivot.
You can now simply join your input data with this query, let's name it weeks:
from data join weeks on dt between mnd and mxd
and build the pivot. But for longer periods you have to work out how many weeks there can be and list them in the pivot clause: in (1, 2, 3, 4...). You can also add aliases if you need them:
pivot ... for rn in (1 week01, 2 week02... 12 week12)
There is no simple way to avoid listing them manually. If you need that, please search for oracle dynamic pivot on SO; there are hundreds of similar questions already. ;-)
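Putting the pieces together, a sketch of the join plus pivot (it assumes an input table named data with columns usr, dt, val, as in the earlier examples; the week aliases are just illustrative):

with t (dt1, dt2) as (
  select date '2018-12-16', date '2019-01-15' from dual
),
weeks as (
  select min(dt) mnd, max(dt) mxd, iw,
         row_number() over (order by min(dt)) rn
  from (select dt1 + level - 1 dt, to_char(dt1 + level - 1, 'iw') iw
        from t connect by level - 1 <= dt2 - dt1)
  group by iw
)
select *
from (
  select d.usr, d.val, w.rn
  from data d
  join weeks w on d.dt between w.mnd and w.mxd
)
pivot (sum(val) for rn in (1 week01, 2 week02, 3 week03, 4 week04, 5 week05, 6 week06));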

Rolling Average in Oracle SQL

LotNumber Device Measure MeasureDate RowNumber
LotA DevA 1.1 10/1/15 0:00 1
LotA DevA 1.2 10/1/15 1:00 1
LotA DevB 1.1 10/1/15 2:00 2
LotB DevA 1.3 10/1/15 3:00 3
LotB DevA 1.4 10/1/15 4:00 3
LotA DevA 1.2 10/1/15 5:00 4
LotC DevA 1.3 10/1/15 6:00 5
LotD DevA 1.5 10/1/15 7:00 6
LotE DevA 1.1 10/1/15 8:00 7
LotF DevA 1.2 10/1/15 9:00 8
LotG DevA 1.3 10/1/15 10:00 9
LotH DevA 1.4 10/1/15 11:00 10
LotNumber Device Measure MeasureDate RowNumber Rolling Average
LotA DevA 1.1 10/1/15 0:00 1 Measure Average of RowNumber 1-5
LotA DevB 1.1 10/1/15 2:00 2 Measure Average of RowNumber 2-6
LotB DevA 1.3 10/1/15 3:00 3 Measure Average of RowNumber 3-7
LotA DevA 1.2 10/1/15 5:00 4 Measure Average of RowNumber 4-8
LotC DevA 1.3 10/1/15 6:00 5 Measure Average of RowNumber 5-9
LotD DevA 1.5 10/1/15 7:00 6 Measure Average of RowNumber 6-10
LotE DevA 1.1 10/1/15 8:00 7 Measure Average of RowNumber 7-10
LotF DevA 1.2 10/1/15 9:00 8 Measure Average of RowNumber 8-10
LotG DevA 1.3 10/1/15 10:00 9 Measure Average of RowNumber 9-10
LotH DevA 1.4 10/1/15 11:00 10 Measure Average of RowNumber 10
Is it possible to produce the second table? I have no idea how to do this. It is a rolling average over RowNumber with intervals of 4: for example, the average for RowNumber 1-5 is the average of all Measure values whose RowNumber ranges from 1 to 5. Thanks!
Yes, you can easily achieve this through the use of the avg analytic function, something like:
with sample_data (LotNumber, Device, Measure, MeasureDate, RowNumber) as
(select 'LotA', 'DevA', 1.1, to_date('10/1/15 00:00', 'dd/mm/yyyy hh24:mi'), 1
from dual union all
select 'LotA', 'DevA', 1.2, to_date('10/1/15 01:00', 'dd/mm/yyyy hh24:mi'), 1
from dual union all
select 'LotA', 'DevB', 1.1, to_date('10/1/15 02:00', 'dd/mm/yyyy hh24:mi'), 2
from dual union all
select 'LotB', 'DevA', 1.3, to_date('10/1/15 03:00', 'dd/mm/yyyy hh24:mi'), 3
from dual union all
select 'LotB', 'DevA', 1.4, to_date('10/1/15 04:00', 'dd/mm/yyyy hh24:mi'), 3
from dual union all
select 'LotA', 'DevA', 1.2, to_date('10/1/15 05:00', 'dd/mm/yyyy hh24:mi'), 4
from dual union all
select 'LotC', 'DevA', 1.3, to_date('10/1/15 06:00', 'dd/mm/yyyy hh24:mi'), 5
from dual union all
select 'LotD', 'DevA', 1.5, to_date('10/1/15 07:00', 'dd/mm/yyyy hh24:mi'), 6
from dual union all
select 'LotE', 'DevA', 1.1, to_date('10/1/15 08:00', 'dd/mm/yyyy hh24:mi'), 7
from dual union all
select 'LotF', 'DevA', 1.2, to_date('10/1/15 09:00', 'dd/mm/yyyy hh24:mi'), 8
from dual union all
select 'LotG', 'DevA', 1.3, to_date('10/1/15 10:00', 'dd/mm/yyyy hh24:mi'), 9
from dual union all
select 'LotH', 'DevA', 1.4, to_date('10/1/15 11:00', 'dd/mm/yyyy hh24:mi'), 10
from dual)
select lotnumber,
device,
measure,
measuredate,
rownumber,
avg(measure) over (order by rownumber
rows between current row and 4 following) rolling_average
from sample_data
order by rownumber;
LOTNUMBER DEVICE MEASURE MEASUREDATE ROWNUMBER ROLLING_AVERAGE
--------- ------ ---------- ------------------ ---------- ---------------
LotA DevA 1.1 10/01/0015 00:00 1 1.22
LotA DevA 1.2 10/01/0015 01:00 1 1.24
LotA DevB 1.1 10/01/0015 02:00 2 1.26
LotB DevA 1.3 10/01/0015 03:00 3 1.34
LotB DevA 1.4 10/01/0015 04:00 3 1.3
LotA DevA 1.2 10/01/0015 05:00 4 1.26
LotC DevA 1.3 10/01/0015 06:00 5 1.28
LotD DevA 1.5 10/01/0015 07:00 6 1.3
LotE DevA 1.1 10/01/0015 08:00 7 1.25
LotF DevA 1.2 10/01/0015 09:00 8 1.3
LotG DevA 1.3 10/01/0015 10:00 9 1.35
LotH DevA 1.4 10/01/0015 11:00 10 1.4
N.B. You didn't mention any grouping (e.g. per day, per lotnumber, etc.) and I used the rownumber column for the ordering - maybe it should have been the measuredate column? If your requirements are more complex than what you've stated, you'll need to amend the over () clause appropriately.
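For instance, if the average should restart for each device (purely an assumption about what "more complex" might mean here), the clause might become something like:

avg(measure) over (partition by device
                   order by rownumber
                   rows between current row and 4 following) as rolling_average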
With your additional comment that clarifies your requirements, you can amend the windowing clause to use range between rather than rows between, like so:
with sample_data (LotNumber, Device, Measure, MeasureDate, RowNumber) as
(select 'LotA', 'DevA', 1.1, to_date('10/1/15 00:00', 'dd/mm/yyyy hh24:mi'), 1
from dual union all
select 'LotA', 'DevA', 1.2, to_date('10/1/15 01:00', 'dd/mm/yyyy hh24:mi'), 1
from dual union all
select 'LotA', 'DevB', 1.1, to_date('10/1/15 02:00', 'dd/mm/yyyy hh24:mi'), 2
from dual union all
select 'LotB', 'DevA', 1.3, to_date('10/1/15 03:00', 'dd/mm/yyyy hh24:mi'), 3
from dual union all
select 'LotB', 'DevA', 1.4, to_date('10/1/15 04:00', 'dd/mm/yyyy hh24:mi'), 3
from dual union all
select 'LotA', 'DevA', 1.2, to_date('10/1/15 05:00', 'dd/mm/yyyy hh24:mi'), 4
from dual union all
select 'LotC', 'DevA', 1.3, to_date('10/1/15 06:00', 'dd/mm/yyyy hh24:mi'), 5
from dual union all
select 'LotD', 'DevA', 1.5, to_date('10/1/15 07:00', 'dd/mm/yyyy hh24:mi'), 6
from dual union all
select 'LotE', 'DevA', 1.1, to_date('10/1/15 08:00', 'dd/mm/yyyy hh24:mi'), 7
from dual union all
select 'LotF', 'DevA', 1.2, to_date('10/1/15 09:00', 'dd/mm/yyyy hh24:mi'), 8
from dual union all
select 'LotG', 'DevA', 1.3, to_date('10/1/15 10:00', 'dd/mm/yyyy hh24:mi'), 9
from dual union all
select 'LotH', 'DevA', 1.4, to_date('10/1/15 11:00', 'dd/mm/yyyy hh24:mi'), 10
from dual)
select lotnumber,
device,
measure,
measuredate,
rownumber,
avg(measure) over (order by rownumber
range between current row and 4 following) rolling_average,
row_number() over (partition by rownumber order by measuredate) rn
from sample_data
order by rownumber;
LOTNUMBER DEVICE MEASURE MEASUREDATE ROWNUMBER ROLLING_AVERAGE RN
--------- ------ ---------- --------------------- ---------- --------------- ----------
LotA DevA 1.1 10/01/0015 00:00:00 1 1.22857143 1
LotA DevA 1.2 10/01/0015 01:00:00 1 1.22857143 2
LotA DevB 1.1 10/01/0015 02:00:00 2 1.3 1
LotB DevA 1.3 10/01/0015 03:00:00 3 1.3 1
LotB DevA 1.4 10/01/0015 04:00:00 3 1.3 2
LotA DevA 1.2 10/01/0015 05:00:00 4 1.26 1
LotC DevA 1.3 10/01/0015 06:00:00 5 1.28 1
LotD DevA 1.5 10/01/0015 07:00:00 6 1.3 1
LotE DevA 1.1 10/01/0015 08:00:00 7 1.25 1
LotF DevA 1.2 10/01/0015 09:00:00 8 1.3 1
LotG DevA 1.3 10/01/0015 10:00:00 9 1.35 1
LotH DevA 1.4 10/01/0015 11:00:00 10 1.4 1
Note that I've included the "rn" column, because I wasn't sure if you wanted to filter out the "duplicate" rownumber rows or not - if you do, then you'll need to add an outer query that filters on rn = 1.
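That outer filter would look something like this (a sketch that reuses the sample_data CTE defined above):

select lotnumber, device, measure, measuredate, rownumber, rolling_average
from (
  select s.*,
         avg(measure) over (order by rownumber
                            range between current row and 4 following) rolling_average,
         row_number() over (partition by rownumber order by measuredate) rn
  from sample_data s
)
where rn = 1
order by rownumber;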
N.B. It would have been helpful if you had included the actual output values you were expecting to see in the rolling average column rather than just the logic, so that we could compare our results with it.

Split 2 db rows into 3 by date ranges

I have a problem I can't solve. I have two amounts of money, A and B, each of which I can spend in a defined period. These are the following two rows in the DB (with begin_date, end_date and amount columns):
A: 2015.01.01.-2015.09.30. 10.000$
B: 2015.07.01.-2015.12.31. 7.000$
So these date ranges overlap, which means I can spend more money between 2015.07.01. and 2015.09.30. So in the output I have to get the following:
2015.01.01.-2015.07.01. x$
2015.07.01.-2015.09.30. y$
2015.09.30.-2015.12.31. z$
How can I select these ranges and calculate the amounts, considering that I spend money equally per month? If I can define the 3 date ranges I think I can calculate the amounts, but the dates are really tricky and I can't handle them.
I use Oracle 11g.
Borrowing heavily from this approach, which is also explained here in more detail along with some alternatives, to just get the date ranges you can do:
with cte1 as
(
select begin_date as marker_date, 1 as type
from your_table
union all
select end_date + 1 as marker_date, -1 as type
from your_table
),
cte2 as (
select marker_date as begin_date,
lead(marker_date) over (order by marker_date) - 1 as end_date,
sum(type) over (order by marker_date) as periods
from cte1
)
select begin_date, end_date from cte2
where end_date is not null and periods > 0;
Which gives you:
BEGIN_DATE END_DATE
---------- ----------
2015-01-01 2015-06-30
2015-07-01 2015-09-30
2015-10-01 2015-12-31
I've assumed that you don't actually want the generated periods to overlap by a day, and instead want them to be the starts and ends of months like the original two rows.
To get the amounts - if I've understood what you described - you can modify that to include the amount change at each date, as either positive or negative depending on whether it's the start or end of a period:
with cte1 as
(
select begin_date as marker_date,
amount / months_between(end_date + 1, begin_date) as monthly_amount
from your_table
union all
select end_date + 1 as marker_date,
-amount / months_between(end_date + 1, begin_date) as monthly_amount
from your_table
),
cte2 as (
select marker_date as begin_date,
lead(marker_date) over (order by marker_date) - 1 as end_date,
sum(monthly_amount) over (order by marker_date) as total_monthly_amount
from cte1
)
select begin_date, end_date,
total_monthly_amount * months_between(end_date + 1, begin_date) as amount
from cte2
where end_date is not null and total_monthly_amount > 0;
BEGIN_DATE END_DATE AMOUNT
---------- ---------- ----------
2015-01-01 2015-06-30 6.66666667
2015-07-01 2015-09-30 6.83333333
2015-10-01 2015-12-31 3.5
This works by dividing the amount for the original period by the number of months it covers:
select begin_date as marker_date, amount,
months_between(end_date + 1, begin_date) as months,
amount / months_between(end_date + 1, begin_date) as monthly_amount
from your_table
union all
select end_date + 1 as marker_date, amount,
months_between(end_date + 1, begin_date) as months,
-amount / months_between(end_date + 1, begin_date) as monthly_amount
from your_table;
MARKER_DATE AMOUNT MONTHS MONTHLY_AMOUNT
----------- ---------- ---------- --------------
2015-01-01 10 9 1.11111111
2015-07-01 7 6 1.16666667
2015-10-01 10 9 -1.11111111
2016-01-01 7 6 -1.16666667
And then using that as a CTE and applying the lead analytic function to reconstruct the new, non-overlapping periods:
with cte1 as
(
select begin_date as marker_date,
months_between(end_date + 1, begin_date) as months,
amount / months_between(end_date + 1, begin_date) as monthly_amount
from your_table
union all
select end_date + 1 as marker_date,
months_between(end_date + 1, begin_date) as months,
-amount / months_between(end_date + 1, begin_date) as monthly_amount
from your_table
)
select marker_date as begin_date,
lead(marker_date) over (order by marker_date) - 1 as end_date,
sum(monthly_amount) over (order by marker_date) as total_monthly_amount,
months_between(lead(marker_date) over (order by marker_date), marker_date) as months
from cte1;
BEGIN_DATE END_DATE TOTAL_MONTHLY_AMOUNT MONTHS
---------- ---------- -------------------- ----------
2015-01-01 2015-06-30 1.11111111 6
2015-07-01 2015-09-30 2.27777778 3
2015-10-01 2015-12-31 1.16666667 3
2016-01-01 0.00000000
And finally excluding the artificial open-ended period at the end, plus any that have a zero total in case there are any gaps (which you don't have in the small sample, but could appear in a larger data set); and multiplying the new monthly amount by the number of months in the new period.
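To try this without creating a table, the two rows can be supplied as a CTE; this sketch simply prepends sample data (using the amounts 10 and 7 that the intermediate outputs above were produced with) to the query already shown:

with your_table (begin_date, end_date, amount) as (
  select date '2015-01-01', date '2015-09-30', 10 from dual union all
  select date '2015-07-01', date '2015-12-31',  7 from dual
),
cte1 as (
  select begin_date as marker_date,
         amount / months_between(end_date + 1, begin_date) as monthly_amount
  from your_table
  union all
  select end_date + 1 as marker_date,
         -amount / months_between(end_date + 1, begin_date) as monthly_amount
  from your_table
),
cte2 as (
  select marker_date as begin_date,
         lead(marker_date) over (order by marker_date) - 1 as end_date,
         sum(monthly_amount) over (order by marker_date) as total_monthly_amount
  from cte1
)
select begin_date, end_date,
       total_monthly_amount * months_between(end_date + 1, begin_date) as amount
from cte2
where end_date is not null and total_monthly_amount > 0;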

Oracle Sql query to count time span with certain criteria

This is an Oracle SQL query question. I was trying to count the grand total of rows where the time difference is greater than 2, but when I tried this it just counted all the rows from the query instead of just the rows that met the criteria I was looking for. Does anybody have an idea of what I am missing, or a better approach? Thanks.
This is my query
select DC.CUST_FIRST_NAME, DC.CUST_LAST_NAME, oi.customer_id, oi.order_timestamp,
       oi.order_timestamp - LAG(oi.order_timestamp) OVER (ORDER BY oi.order_timestamp) AS "Difference(In Days)",
       (select Count('Elapsed Order Difference')
        from demo_orders oi,
             demo_customers dc
        where OI.CUSTOMER_ID = DC.CUSTOMER_ID
        group by 'Elapsed Order Difference'
        having count('Elapsed Order Difference') > 3
       ) Total
from demo_orders oi,
     demo_customers dc
where OI.CUSTOMER_ID = DC.CUSTOMER_ID
Results
CUST_FIRST_NAME CUST_LAST_NAME CUSTOMER_ID ORDER_TIMESTAMP Difference(In Days) TOTAL
Eugene Bradley 7 8/14/2013 5:59:11 PM 10
William Hartsfield 2 8/28/2013 5:59:11 PM 14 10
Edward "Butch" OHare 4 9/8/2013 5:59:11 PM 11 10
Edward Logan 3 9/10/2013 5:59:11 PM 2 10
Edward Logan 3 9/20/2013 5:59:11 PM 10 10
Albert Lambert 6 9/25/2013 5:59:11 PM 5 10
Fiorello LaGuardia 5 9/30/2013 5:59:11 PM 5 10
William Hartsfield 2 10/8/2013 5:59:11 PM 8 10
John Dulles 1 10/14/2013 5:59:11 PM 6 10
Eugene Bradley 7 10/17/2013 5:59:11 PM 3 10
This is untested, but I think it might give you what you're after.
with raw_data as (
select
dc.cust_first_name, dc.cust_last_name,
oi.customer_id, oi.order_timestamp,
oi.order_timestamp - LAG(oi.order_timestamp) OVER
(ORDER BY oi.order_timestamp) AS "Difference(In Days)",
case
when oi.order_timestamp - LAG(oi.order_timestamp)
over (ORDER BY oi.order_timestamp) > 2 then 1
else 0
end as gt2
from
demo_orders oi,
demo_customers dc
where
oi.customer_id = dc.customer_id
)
select
cust_first_name, cust_last_name,
customer_id, order_timestamp,
"Difference(In Days)",
sum (gt2) over (partition by 1) as total
from raw_data
When you do Count('Elapsed Order Difference') above, you are counting every row, no matter what. You could have put count ('frog') or count (*) and have gotten the same result. The having count > 3 was already satisfied since the count of all rows was 10.
In general, I'd try to avoid using a scalar subquery as a column in a query as you have in your example. I'm not saying it's never a good idea, but I would argue that there is usually a better way to do it. With 10 rows you'll hardly notice a performance difference, but as your datasets grow, this can create issues.
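As a variant of the same idea (just a sketch, not tested against the demo tables), the gt2 flag can be folded into a conditional count, and the window can simply be over (); the ANSI join syntax is a stylistic choice:

with raw_data as (
  select dc.cust_first_name, dc.cust_last_name,
         oi.customer_id, oi.order_timestamp,
         oi.order_timestamp - lag(oi.order_timestamp)
             over (order by oi.order_timestamp) as diff_days
  from demo_orders oi
  join demo_customers dc on oi.customer_id = dc.customer_id
)
select cust_first_name, cust_last_name, customer_id, order_timestamp,
       diff_days as "Difference(In Days)",
       count(case when diff_days > 2 then 1 end) over () as total
from raw_data
order by order_timestamp;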
Expected output:
fn ln id order date dif total
E B 7 8/14/2014 8
W H 2 8/28/2014 14 8
E O 4 9/8/2014 11 8
E L 3 9/10/2014 2 8
E L 3 9/20/2014 10 8
A L 6 9/25/2014 5 8
F L 5 9/30/2014 5 8
W H 2 10/8/2014 8 8
J D 1 10/14/2014 6 8
E B 7 10/17/2014 3 8

Oracle Calculation Involving Results of Another Calculation

First off, I'm a total Oracle noob although I'm very familiar with SQL. I have a single cost column. I need to calculate the total cost, each row's percentage of the total cost, and then a running sum of those percentages. I'm having trouble with the running sum of percentages because the only way I can think of to do this uses nested SUM functions, which isn't allowed.
Here's what works:
SELECT cost, SUM(cost) OVER() AS total, cost / SUM(cost) OVER() AS per
FROM my_table
ORDER BY cost DESC
Here's what I'm trying to do that doesn't work:
SELECT cost, SUM(cost) OVER() AS total, cost / SUM(cost) OVER() AS per,
SUM(cost/SUM(cost) OVER()) OVER(cost) AS per_sum
FROM my_table
ORDER BY cost DESC
Am I just going about it wrong, or is what I'm trying to do just not possible? By the way I'm using Oracle 10g. Thanks in advance for any help.
You don't need the order by inside that inline view, especially since the outer select is ordering the other way around. Also, cost / SUM(cost) OVER () equals RATIO_TO_REPORT(cost) OVER ().
An example:
SQL> create table my_table(cost)
2 as
3 select 10 from dual union all
4 select 20 from dual union all
5 select 5 from dual union all
6 select 50 from dual union all
7 select 60 from dual union all
8 select 40 from dual union all
9 select 15 from dual
10 /
Table created.
Your initial query:
SQL> SELECT cost, SUM(cost) OVER() AS total, cost / SUM(cost) OVER() AS per
2 FROM my_table
3 ORDER BY cost DESC
4 /
COST TOTAL PER
---------- ---------- ----------
60 200 .3
50 200 .25
40 200 .2
20 200 .1
15 200 .075
10 200 .05
5 200 .025
7 rows selected.
Quassnoi's query contains a typo:
SQL> SELECT cost, total, per, SUM(running) OVER (ORDER BY cost)
2 FROM (
3 SELECT cost, SUM(cost) OVER() AS total, cost / SUM(cost) OVER() AS per
4 FROM my_table
5 ORDER BY
6 cost DESC
7 )
8 /
SELECT cost, total, per, SUM(running) OVER (ORDER BY cost)
*
ERROR at line 1:
ORA-00904: "RUNNING": invalid identifier
And if I correct that typo, it gives the right results, but wrongly sorted (I guess):
SQL> SELECT cost, total, per, SUM(per) OVER (ORDER BY cost)
2 FROM (
3 SELECT cost, SUM(cost) OVER() AS total, cost / SUM(cost) OVER() AS per
4 FROM my_table
5 ORDER BY
6 cost DESC
7 )
8 /
COST TOTAL PER SUM(PER)OVER(ORDERBYCOST)
---------- ---------- ---------- -------------------------
5 200 .025 .025
10 200 .05 .075
15 200 .075 .15
20 200 .1 .25
40 200 .2 .45
50 200 .25 .7
60 200 .3 1
7 rows selected.
I think this is the one you are looking for:
SQL> select cost
2 , total
3 , per
4 , sum(per) over (order by cost desc)
5 from ( select cost
6 , sum(cost) over () total
7 , ratio_to_report(cost) over () per
8 from my_table
9 )
10 order by cost desc
11 /
COST TOTAL PER SUM(PER)OVER(ORDERBYCOSTDESC)
---------- ---------- ---------- -----------------------------
60 200 .3 .3
50 200 .25 .55
40 200 .2 .75
20 200 .1 .85
15 200 .075 .925
10 200 .05 .975
5 200 .025 1
7 rows selected.
Regards,
Rob.
SELECT cost, total, per, SUM(per) OVER (ORDER BY cost)
FROM (
SELECT cost, SUM(cost) OVER() AS total, cost / SUM(cost) OVER() AS per
FROM my_table
)
ORDER BY
cost DESC
