Pulling only the row that shows the max value in two variables

I'm trying to pull the max number of leads from a field that is a running sum at the month level.
I wrote:
SELECT
    ACCOUNT,
    max(pl_verified_leads_this_month),
    usage_date
FROM PROD.client_usage_data
WHERE pl_verified_leads_this_month IS NOT NULL
  AND pl_verified_leads_this_month <> 0
GROUP BY ACCOUNT, usage_date
ORDER BY ACCOUNT, usage_date DESC
But that is not returning what I'd expect. In the end, I want one row per account and usage_date (which is mm/dd/yyyy; I want to convert it to just mm/yyyy) that shows max(pl_verified_leads_this_month).
How do I achieve this?

Try changing usage_date in all places to a month-level expression like this:
to_char(usage_date, 'MM/YYYY')
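Spelled out, a sketch of the whole query (assuming usage_date is a DATE column and a dialect whose to_char accepts date format masks):

SELECT
    ACCOUNT,
    to_char(usage_date, 'MM/YYYY') AS usage_month,
    max(pl_verified_leads_this_month) AS max_monthly_leads
FROM PROD.client_usage_data
WHERE pl_verified_leads_this_month IS NOT NULL
  AND pl_verified_leads_this_month <> 0
GROUP BY ACCOUNT, to_char(usage_date, 'MM/YYYY')
ORDER BY ACCOUNT, min(usage_date) DESC; -- sort chronologically, not by the MM/YYYY string

Grouping by the month expression collapses each account to one row per month, which grouping by the raw usage_date could not do.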


Small detail when using this function in Google Sheets

Small detail when using =INDEX($A$8:$A$11;MATCH(MAX(SUMIF($B$8:$B$11;$B$8:$B$11));SUMIF($B$8:$B$11;$B$8:$B$11);0)): if the values in column B are all different, it returns the correct date value, but if two identical values in column B occur on different dates, it returns the date of the first one; it does not return the correct date, it just keeps the first row that has the repeated value.
Any idea?
P.S. This question could be added to that post.
An even easier way:
In E2, try this: =TRANSPOSE(INDEX(QUERY(A1:B," select A, sum(B) group by A Order By sum(B) Desc "),2))
and format the date and currency accordingly.
You can do this easily, and in a couple of ways:
1 - Make a helper table of unique dates. You can fill it in two ways:
a) - Use the SUMIF function to get the sum of expenditure on each unique date, like so: =IF(D2="",,SUMIF($A$2:$A,D2,$B$2:$B)), and drag it down.
b) - Use the QUERY function: =QUERY(A1:B11," select A, sum(B) group by A Order By sum(B) Desc ")
2 - To get the SUM BY DATE OF HIGHEST EXPENDITURE: =MAX(E2:E)
3 - To get the DATE BY HIGHEST EXPENDITURE: =INDEX($D$2:$D,MATCH($H$3,$E$2:$E,0),1)
Make a copy of this sheet to "make it yours."
Hope that answered your question.

How to SELECT the MAX Time Difference Between Any 2 Consecutive Rows Per Value?

A user just answered this correctly for T-SQL, but I'm wondering how best to achieve it in SQL Developer/PL/SQL, seeing as Oracle has no DATEDIFF function.
The table I want to query, 'Occs', has some 'CODE' values, which can naturally have multiple primary key records ('OccsID'). There is also a datetime column called 'CreateDT' for each OccsID.
I just want to find the maximum possible time variance between any 2 consecutive rows in 'Occs', per 'CODE'.
If you subtract "this" date from the "next" date (fetched with the LEAD analytic function), you get the date difference. Then take the maximum difference per code. Something like this:
with diff as
  (select occsid,
          code,
          nvl(lead(createdt) over (partition by code order by createdt), createdt)
            - createdt as date_diff
   from test
  )
select code,
       max(date_diff)
from diff
group by code;
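A minimal, hypothetical test setup to sanity-check the query above (table name and values invented for illustration):

create table test (occsid number, code varchar2(10), createdt date);
insert into test values (1, 'A', to_date('2021-01-01 10:00:00', 'yyyy-mm-dd hh24:mi:ss'));
insert into test values (2, 'A', to_date('2021-01-01 10:05:00', 'yyyy-mm-dd hh24:mi:ss'));
insert into test values (3, 'A', to_date('2021-01-01 10:20:00', 'yyyy-mm-dd hh24:mi:ss'));
-- code 'A' has gaps of 5 and 15 minutes, so max(date_diff) should come back
-- as 15 minutes expressed as a fraction of a day: 15/1440, roughly 0.0104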
Assuming that this T-SQL version works for you (from the prior question):
SELECT x.code, MAX(x.diff_sec)
FROM
(
    SELECT
        code,
        DATEDIFF(
            SECOND,
            CreateDT,
            LEAD(CreateDT) OVER (PARTITION BY CODE ORDER BY CreateDT) -- next row's CreateDT
        ) AS diff_sec
    FROM Occs
) x
GROUP BY x.code
The simplest option in Oracle is just to subtract the two dates, which gives a difference in days. You can then multiply to get the difference in hours, minutes, or seconds:
SELECT x.code, MAX(x.diff_day), MAX(x.diff_sec)
FROM
(
    SELECT
        code,
        -- subtract this row's date from the next row's, so the gaps come out positive
        LEAD(CreateDT) OVER (PARTITION BY CODE ORDER BY CreateDT) - CreateDT AS diff_day,
        24*60*60 * (LEAD(CreateDT) OVER (PARTITION BY CODE ORDER BY CreateDT) - CreateDT) AS diff_sec
    FROM Occs
) x
GROUP BY x.code
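If a raw number of days is hard to read, the day difference can be rendered as an interval with numtodsinterval; a small, untested variation on the query above:

SELECT x.code,
       numtodsinterval(MAX(x.diff_day), 'DAY') AS max_gap -- returns an INTERVAL DAY TO SECOND
FROM
(
    SELECT
        code,
        LEAD(CreateDT) OVER (PARTITION BY CODE ORDER BY CreateDT) - CreateDT AS diff_day
    FROM Occs
) x
GROUP BY x.code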

Compare date to month-year in Postgres/Ruby

I have a date column in my table, and I would like to filter/select items after a certain year-month. So if I have data from 2010 on, a user input might specify '2011-10' as the 'earliest date' they want to see data from.
My current SQL looks like this:
select round(sum(amount), 2) as amount,
       date_part('month', date) as month
from receipts
join items on receipts.item = items.item
where items.expense = ?
  and date_part('year', date) >= 2014
  and funding = 'General'
group by items.expense, month, items.order
order by items.order desc;
In the second part of the WHERE clause, instead of doing year >= 2014, I want to do something like to_char(date, 'YY-MMMM') >= ? as another parameter and pass in '2011-10'. However, when I do this:
costsSql = "select round(sum(amount), 2) as amount,
                   to_char(date, 'YY-MMMM') as year_month
            from receipts
            join items on receipts.item = items.item
            where items.expense = ?
              and year_month >= ?
              and funding = 'General'
            group by items.expense, year_month, items.order
            order by items.order desc"
and call that with my two params, I get a postgres error: PG::UndefinedColumn: ERROR: column "year_month" does not exist.
Edit: I converted my YYYY-MM string into a date and passed that in as my param instead, and it's working. But I still don't understand why I get the 'column does not exist' error after I created that column in the select clause. Can someone explain? Can columns created like that not be used in where clauses?
The error column "year_month" does not exist happens because year_month is an alias defined in the SELECT list, and such aliases can't be referred to in the WHERE clause.
This is because the SELECT list is conceptually evaluated after the WHERE clause; see for example Column alias in where clause? for an explanation from the PG developers.
Some databases allow it nonetheless, others don't, and PostgreSQL doesn't. It's one of the many portability hazards between SQL engines.
For the query in the question, you don't even need the to_char in the WHERE clause anyway: as mentioned in the first comment, a direct comparison with a date is simpler and more efficient too.
When a query has a complex expression in the SELECT list and repeating it in the WHERE clause looks wrong, it can often be refactored to move the expression into a sub-select or a WITH clause at the beginning of the query.
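To make that concrete, both fixes sketched against the query from the question (untested; assumes date is a DATE column):

-- Preferred: compare the raw column to a date parameter, passing e.g. the
-- first day of the requested month ('2011-10-01') from the Ruby side:
select round(sum(amount), 2) as amount,
       to_char(date, 'YYYY-MM') as year_month
from receipts
join items on receipts.item = items.item
where items.expense = ?
  and date >= ?
  and funding = 'General'
group by items.expense, year_month, items.order
order by items.order desc

-- Alternative: repeat the expression instead of the alias in the WHERE clause,
-- passing a string like '2011-10':
--   and to_char(date, 'YYYY-MM') >= ?

(PostgreSQL does accept the alias in GROUP BY, just not in WHERE.)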

Oracle Daily count/average over a year

I'm pulling two pieces of information over a specific time period: I'd like to fetch the daily average of one tag and the daily count of another tag. I'm not sure how to do daily averages over a specific time period; can anyone provide some advice? Below were my first ideas on how to handle this, but having to change every date by hand would be annoying. Any help is appreciated, thanks.
SELECT COUNT(distinct chargeno), to_char(chargetime, 'mmddyyyy') AS chargeend
FROM batch_index
WHERE plant = 1
  AND chargetime > to_date('2012-06-18:00:00:00', 'yyyy-mm-dd:hh24:mi:ss')
  AND chargetime < to_date('2012-07-19:00:00:00', 'yyyy-mm-dd:hh24:mi:ss')
GROUP BY chargetime;
The working version of the daily sum:
SELECT to_char(bi.chargetime, 'mmddyyyy') AS chargetime, SUM(cv.val) * 0.0005
FROM Charge_Value cv, batch_index bi
WHERE cv.ValueID = 97
  AND bi.chargetime <= to_date('2012-07-19', 'yyyy-mm-dd')
  AND bi.chargeno = cv.chargeno
  AND bi.typ = 1
GROUP BY to_char(bi.chargetime, 'mmddyyyy')
It seems like in the first one you want to change the grouping to the day, not the full timestamp (plus, I don't think you need to specify all those zeroes for the seconds):
SELECT COUNT(distinct chargeno), to_char(chargetime, 'mmddyyyy') AS chargeend
FROM batch_index
WHERE plant = 1
  AND chargetime > to_date('2012-06-18', 'yyyy-mm-dd')
  AND chargetime < to_date('2012-07-19', 'yyyy-mm-dd')
GROUP BY to_char(chargetime, 'mmddyyyy');
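If you want the daily count and the daily average in one pass, the two queries might be combined along these lines (untested; note the join could change the distinct count if some charges have no ValueID 97 rows):

SELECT to_char(bi.chargetime, 'mmddyyyy') AS chargeday,
       COUNT(DISTINCT bi.chargeno) AS daily_count,
       AVG(cv.val) * 0.0005 AS daily_avg
FROM batch_index bi
JOIN Charge_Value cv ON cv.chargeno = bi.chargeno
WHERE bi.plant = 1
  AND cv.ValueID = 97
  AND bi.typ = 1
  AND bi.chargetime > to_date('2012-06-18', 'yyyy-mm-dd')
  AND bi.chargetime < to_date('2012-07-19', 'yyyy-mm-dd')
GROUP BY to_char(bi.chargetime, 'mmddyyyy');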
I'm not 100% sure I'm following your question, but if you just want aggregates (sums, averages), then do just that. I threw in the rollup just in case that is what you were looking for:
with fakeData as (
  select trunc(level * .66667) nr               -- these truncs just turn the doubles into ints
       , trunc(2 * level * .33478) lvl
       , trunc(sysdate + trunc(level * .263784123)) dte -- note the trunc: it drops the time part, so no to_char is needed
  from dual
  connect by level < 600
) -- the CTE just creates fake data
-- below are some aggregates that may help you
select sum(nr)             daily_sum_of_nr
     , avg(nr)             daily_avg_of_nr
     , count(distinct lvl) distinct_lvls_per_day
     , count(lvl)          count_of_nonnull_lvls_per_day
     , dte                 days
from fakeData
group by rollup(dte)
-- if you want the query to supply a total for the whole range, use rollup (http://psoug.org/reference/rollup.html)

How can I limit the numbers of results being grouped in my Group By in Oracle?

I've got a table of parameters, values, and the times at which those values were recorded.
I've got a procedure which takes in a time, and needs to get the average value of each parameter in the window -15/+5 seconds around that time. On top of that, I want to make sure that I take no more than 15 records before the passed-in time, and no more than 5 records after it.
For example, maybe I'm recording values of some parameters every second. If I passed in the time 21:30:30, I'd want the values between 21:30:15 and 21:30:35. But if I were recording every half second, I'd actually have more records in that time frame than I want, and that's where my need to limit my results comes in.
I've read this question and this article, which seem pretty related to what I'm trying to do, but unfortunately I'm dealing with Oracle and not MySQL, so I can't use LIMIT.
I've currently got something that looks like this:
with std_values as
(
    select
        V.ParameterId,
        V.NumericValue
    from
        ValuesTable V
    where
        V.ValueSource = pValueSource
        and V.Time >= pSummaryTime - 15/86400
        and V.Time <= pSummaryTime + 5/86400
)
select
    ParameterId,
    avg(NumericValue) as NumericValue
from
    std_values
group by
    ParameterId
pValueSource just lets me filter down which value types I'm looking at, and pSummaryTime is the input time that I'm basing my time frame on. The goal here is to average the 15 records before pSummaryTime that fall within the window, and the 5 after it that fall within the window. Currently I'm not limiting the number of "before" and "after" results, though, so I'm ending up with the average of everything that falls into the time window. And without something like LIMIT, I'm not sure how to do this in Oracle.
Sounds like you want a moving window aggregate function. This is part of the Analytical functions feature of Oracle.
It's not my strong suit, and since you didn't include sample tables/data to build a test case, I'll just point you to the Oracle documentation, here:
http://docs.oracle.com/cd/B14117_01/server.101/b10736/analysis.htm#i1006709
You probably want something like:
AVG(NumericValue) OVER (ORDER BY Time ROWS BETWEEN 15 PRECEDING AND 5 FOLLOWING)
but, like I said, this is not my strong suit and it is totally untested; I hope it gets the idea across.
Hope that helps.
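For what it's worth, a fuller (equally untested) sketch of that idea against the poster's table, with the window partitioned per parameter; ROWS rather than RANGE counts physical rows, which matches the "15 records before, 5 after" requirement regardless of recording frequency:

select ParameterId,
       NumericValue,
       avg(NumericValue) over (
           partition by ParameterId
           order by Time
           rows between 15 preceding and 5 following
       ) as moving_avg
from ValuesTable
where ValueSource = pValueSource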
Thanks to Mark Bobak's answer, which got me on the right track, I ended up with this solution.
with
values_before as
(
    select
        V.ParameterId,
        V.NumericValue,
        -- descending, so the 15 rows closest before the summary time come first
        row_number() over (partition by V.ParameterId order by V.Time desc) as RowNumber
    from
        ValuesTable V
    where
        V.ValueSource = pValueSource
        and V.Time >= pSummaryTime - 15/86400
        and V.Time <= pSummaryTime
),
values_after as
(
    select
        V.ParameterId,
        V.NumericValue,
        -- ascending, so the 5 rows closest after the summary time come first
        row_number() over (partition by V.ParameterId order by V.Time asc) as RowNumber
    from
        ValuesTable V
    where
        V.ValueSource = pValueSource
        and V.Time <= pSummaryTime + 5/86400
        and V.Time > pSummaryTime
),
values_all as
(
    select * from values_before where RowNumber <= 15
    union all
    select * from values_after where RowNumber <= 5
)
select ParameterId, avg(NumericValue) from values_all group by ParameterId
No doubt there's a better way to do this, but it at least seems to give the correct result. The key was using an analytic function to number the rows nearest the summary time, 15 before and 5 after, and then filtering my results down to just those.
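For comparison, the same limiting can be done with a single CTE by partitioning the row numbering on which side of pSummaryTime each row falls (a sketch under the same table assumptions, untested):

with ranked as
(
    select
        V.ParameterId,
        V.NumericValue,
        case when V.Time <= pSummaryTime then 15 else 5 end as MaxRows,
        row_number() over (
            partition by V.ParameterId,
                         case when V.Time <= pSummaryTime then 0 else 1 end
            order by abs(V.Time - pSummaryTime) -- closest to the summary time first
        ) as RowNumber
    from
        ValuesTable V
    where
        V.ValueSource = pValueSource
        and V.Time >= pSummaryTime - 15/86400
        and V.Time <= pSummaryTime + 5/86400
)
select ParameterId, avg(NumericValue)
from ranked
where RowNumber <= MaxRows
group by ParameterId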
