This forum seems to be geared more toward programming issues, but I would be interested if someone would look at the logic issue in the link below.
My issue is about how to calculate the previous 12 months' total using the calendar already defined. I have found very few examples of this on the QV community. Please see the link below for more details. I would be willing to look at an SQL or a QV script solution.
Our fiscal year runs Nov to Oct. I would like the end user to select the year and have a chart display the rolling 12-month margin. I have had issues getting my total to accumulate over previous months.
My goal would be for it to look similar to the Rolling 12 Month Total - GP column in the manually calculated Excel image 'Goal' (see the QV link for the screenshot).
Rolling margin expression: my attempt at a rolling total using set analysis.
=Sum({<master_date={'>=$(=MonthStart(Max(master_date), -12))<=$(=MonthEnd(Max(master_date)))'}>}
Master calendar script:
TempCalendar:
LOAD
$(vDateMin) + RowNo() - 1 AS DateNumber,
Date($(vDateMin) + RowNo() - 1) AS TempDate
AUTOGENERATE 1
WHILE $(vDateMin)+IterNo()-1<= $(vDateMax);
MasterCalendar: // uses master_date to connect items together; this and the fiscal calendar are hard to combine with a rolling average
LOAD
TempDate AS master_date,
Day(TempDate) AS CalendarDay,
WeekDay(TempDate) AS CalendarWeekDay,
Week(TempDate) AS CalendarWeek,
Month(TempDate) AS CalendarMonth,
Year(TempDate) AS CalendarYear,
'Q' & Ceil(Month(TempDate)/3) AS CalendarQuarter,
WeekDay(TempDate) & '-' & Year(TempDate) AS CalendarWeekAndYear,
Month(TempDate) & '-' & Year(TempDate) AS CalendarMonthAndYear,
If(Num(TempDate) >= $(vYearStart) AND Num(TempDate) < $(vMonthNow), -1, 0) AS YTD,
If(Num(TempDate) >= $(vYearStartLY) AND Num(TempDate) < $(vMonthNowLY), -1, 0) AS LY_YTD,
YearToDate(TempDate) * -1 AS YTD_Flag,
YearToDate(TempDate, -1, 1, $(vToday)) * -1 AS LY_YTD_Flag
RESIDENT TempCalendar ORDER BY TempDate ASC;
DROP TABLE TempCalendar;
Fiscal calendar script: this defines our fiscal year.
FiscalCalendar:
LOAD date(date#(20011101,'YYYYMMDD')+recno(),'MM/DD/YY') AS "master_date"
AUTOGENERATE today()-date#(20011101,'YYYYMMDD');
LEFT JOIN (FiscalCalendar)
LOAD
"master_date",
date(monthstart(master_date),'MMM YY') AS "MonthFisical",
date(monthstart(master_date),'MMM') AS "MonthFisical_MonthTitle",
date(yearstart(master_date,1,11),'YYYY') AS "YearFiscal",
month(master_date)-month(num(today(1))) AS FiscalMonthsElapsed,
YearToDate(master_date, 0,11)*-1 AS YTD_FLAG_Fiscal,
YearToDate(master_date,-1,11)*-1 AS LY_YTD_FLAG_Fiscal
RESIDENT FiscalCalendar;
To see screenshots:
http://community.qlikview.com/message/219912#219912
Thank you for taking the time to look at this issue.
The solution is not in the calendar: you have to create a link table (a bridge) between your calendar and the fact table.
In this link table there are two types: DIRECT and CROSSING.
For type DIRECT, a row in the fact table is linked to its own date in the calendar.
For type CROSSING, a row in the fact table is linked to all the dates of the 12 following months in the calendar.
So in QlikView you use type DIRECT all the time, except when you want to present, for each month, the total of the past 12 months. In that case you use CROSSING: because every fact row is linked to the dates of the 12 following months, it means (from the reversed point of view) that a month is linked to all the data of the past 12 months.
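A minimal sketch of that link table in QlikView script (table and field names such as Facts, FactDate and Margin are assumptions, not taken from the original post); the fact table should reach the calendar only through this table:

// DIRECT: each fact row is linked to its own calendar date
LinkTable:
LOAD DISTINCT
    FactDate,
    FactDate   AS master_date,
    'DIRECT'   AS LinkType
RESIDENT Facts;

// CROSSING: each fact row is also linked to every date of the 12 following
// months, so (seen from the display month's side) a month is linked to the
// previous 12 months of facts. Shift the window if the current month should count.
Concatenate (LinkTable)
LOAD DISTINCT
    FactDate,
    Date(MonthStart(FactDate, 1) + IterNo() - 1) AS master_date,
    'CROSSING' AS LinkType
RESIDENT Facts
WHILE MonthStart(FactDate, 1) + IterNo() - 1 < MonthStart(FactDate, 13);

A chart on the calendar month with Sum({<LinkType = {'CROSSING'}>} Margin) then gives the rolling 12-month total, while Sum({<LinkType = {'DIRECT'}>} Margin) gives the ordinary monthly figure.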
Examples: xlsx, QVW (attachments).
Related
In my cube, I have several measures at the day grain that I'd like to sum at the day grain but average (or take latest) at the month grain or year grain.
Example:
We have a Fact table with Date and the number of active subscribers on that day (aka PMC). This is snapshotted per day.
dt       SubscriberCnt
1/1/22   50
1/2/22   55
This works great at the day level. At the month level, we don't want to sum these two values (count = 105) because it doesn't make sense and isn't accurate.
When someone is looking at the month grain, it should look like this: take the latest value for the month. (We may change this to an average instead; management is still deciding.)
Option 1 - take latest:
Month-Dt   Subscribers
Jan-2022   55
Feb-2022   -
Option 2 - take average:
Month-Dt   Subscribers
Jan-2022   52
Feb-2022   -
I've not been able to find the right search terms for this but this seems like a common problem.
I added some sample data at the end of a month for testing:
dt         SubscriberCnt
12/30/21   46
12/31/21   48
This formula uses LASTNONBLANKVALUE, which sorts by the first column and provides the latest value that is not blank:
Monthly Subscriber Count = LASTNONBLANKVALUE( 'Table'[dt], SUM('Table'[SubscriberCnt]) )
If you do an AVERAGE, a simple AVERAGE formula will work. If you want an average just for the current month, then try this:
Current Subscriber Count =
VAR _EOM = CLOSINGBALANCEMONTH( SUM('Table'[SubscriberCnt]), DateDim[Date] )
RETURN IF(_EOM <> 0, _EOM, AVERAGE('Table'[SubscriberCnt]) )
But the total row will be misleading, so I would add this so the total row is the latest number:
Current Subscriber Count =
VAR _EOM = CLOSINGBALANCEMONTH( SUM('Table'[SubscriberCnt]), DateDim[Date] ) //Get the number on the last day of the month
VAR _TOT = NOT HASONEVALUE(DateDim[MonthNo]) // Check if this is a total row (more than one month value)
RETURN IF(_TOT, [Monthly Subscriber Count], // For total rows, use the latest nonblank value
IF(_EOM <> 0, _EOM, AVERAGE('Table'[SubscriberCnt]) ) // For month rows, use final day if available, else use the average
)
I made a measure to find the top defect (as text) for the last week.
Def 1 = CALCULATE(MAX('sum'[ScrapCode]), FILTER('sum', 'sum'[KW]=[Nr Last KW]))
[Nr Last KW] is the last KW (calendar week).
Then I want to use this defect to filter the table. But the new table contains only the last week. How can I remove the filter for the last week, but only in the calculated table?
Table = CALCULATETABLE('sum', FILTER('sum', 'sum'[ScrapCode]=[Def 1]))
Table = CALCULATETABLE('sum', FILTER(ALL('sum'), 'sum'[ScrapCode]=[Def 1])) doesn't work either; the result still contains only the last week.
Thanks
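One pattern worth trying (a sketch only; it assumes the last week is simply the highest KW value, so substitute the original [Nr Last KW] logic if it is more involved): capture the top defect in a variable first, so the measure is not re-evaluated row by row inside the filter, and then filter the table on that value with no week condition.

Table =
VAR LastKW = MAX ( 'sum'[KW] )                                         // assumption: last week = highest KW
VAR TopDefect = CALCULATE ( MAX ( 'sum'[ScrapCode] ), 'sum'[KW] = LastKW )
RETURN
    CALCULATETABLE ( 'sum', 'sum'[ScrapCode] = TopDefect )             // keeps all weeks for that defect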
I have a list of products and would like to get a 50-day simple moving average of their volume using Power Query (M).
The table is sorted by product name and date. I added a custom column and applied the code below.
if [date] >= #date(2018,1,29)
then List.Average(List.Range(Source[Volume],[Volume]-1,-50))
else ""
Since it is already sorted by date and name, an if statement was applied with a date as criteria/filter. However, an error occurs that says
'Volume' column not found in the table.
I expect to have an added column in Power Query with the 50-day moving average of volume per product, with the calculation done only if the date is greater than or equal to Jan 29, 2018.
We don't know what your columns are, but assuming you have [product], [date] and [volume] in Source, this would average the last 50 days of [volume] for the identical [product] based on each [date], and place it in a new column:
AvgAmountAdded = Table.AddColumn(Source, "AverageAmount", (i) =>
    List.Average(
        Table.SelectRows(Source, each ([product] = i[product] and [date] <= i[date] and [date] >= Date.AddDays(i[date], -50)))[volume]
    ), type number)
Finally! Found a solution.
First, apply an index by product (see this post for further details).
Then index again without criteria (index all rows).
Then apply the code below:
= Table.AddColumn(#"Previous Step", "Volume SMA(50)", each if [Index_byProduct] >= 50 then List.Average(List.Range(#"Previous Step"[Volume], ([Index_All]-50),50)) else 0),
For large datasets, the Table.Buffer function is recommended after the index/expand step to improve PQ calculation speed.
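A sketch of where that buffering would sit (the step names #"Added Index All", Index_byProduct and Index_All are stand-ins for whatever the query actually uses): buffer the indexed table once so the List.Range calls read from memory instead of re-evaluating the step for every row.

// inside the query's let expression, after the second index step
#"Buffered" = Table.Buffer(#"Added Index All"),
#"Volume SMA50" = Table.AddColumn(#"Buffered", "Volume SMA(50)",
    each if [Index_byProduct] >= 50
         then List.Average(List.Range(#"Buffered"[Volume], [Index_All] - 50, 50))
         else 0)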
I have a table of every product purchased by every client over 25 years. The table contains client#, product, start date, and end date.
The products can be owned by the client for any amount of time (1 day to 100 years). While the client owns products with us, the client is active. If a client ends all products, they cease to be a client. I want to count new client starts each year. The problem is that some clients end all products and then start purchasing products again years later (but clients always retain the same client#). If a client leaves and then rejoins years later, I want to count the client as a new client.
I have created DAX code to do this which works perfectly on a small file, but the code uses up too many resources, so I cannot use it on my data (about 200,000 records). I know my code is HIGHLY INEFFICIENT and could probably be cleaned up... but I am not sure how. Alternatively, if I could figure out how to build these columns in Power Query, perhaps that would work.
Here is how I do it.
1) Add four calculated columns to my table:
VeryFirstStart = Calculate(
Min('Products'[StartDate]),
ALLEXCEPT(Products,Products[ClientNumber]))=Products[StartDate]
this flags records that contain the first-ever start date for each client
MaxEndDateofEarlierDates = Calculate(
Max('Products'[EndDate]),
Filter(
Filter(ALLEXCEPT(Products, Products[ClientNumber]), Products[EndDate]),
Products[StartDate] < EARLIER(Products[StartDate])))
This step blows up my PowerBI - this shows the date of any NEW product purchases where the new start date occurs AFTER an ending date
Second+Start = And(
Products[MaxEndDateofEarlierDates]<>BLANK(),
Products[MaxEndDateofEarlierDates]<Products[StartDate])
this flags records where we want to count the new start date as a new client
NewStart = OR(Products[Second+Start],Products[VeryFirstStart])
this flags ANY new client start date, regardless of whether it was the first or a subsequent one
Finally I added this measure:
!MemberNewStarts = CALCULATE(
DISTINCTCOUNT(Products[ClientNumber]),
FILTER(
'Products',
('Products'[StartDate] <= LASTDATE('DIMDate'[Date]) &&
'Products'[StartDate]>= FIRSTDATE('DIMDate'[Date]) &&
Products[NewStart]=TRUE())))
Does anyone have any suggestions about how to achieve this with fewer resources?
Thanks
Here is some data to try
MemberNumber Product StartDate EndDate Note (not in real data)
1 A 02/02/2003 02/02/2004
1 C 02/02/2009 02/02/2010
2 A 02/02/2001 02/02/2002
2 C 02/02/2001 02/02/2002
2 B 02/02/2005 02/02/2010
3 C 02/02/2002 02/02/2005
3 B 02/02/2002 02/02/2005
3 A 02/02/2003 02/02/2008
4 B 02/02/2002 02/02/2003
4 C 02/02/2003 02/02/2006
5 B 02/02/2003 02/02/2007
5 C 02/02/2005 02/02/2010
5 A 02/02/2005 02/02/2007
6 A 02/02/2001 02/02/2006
6 C 02/02/2003 02/02/2007
7 B 02/02/2001 02/02/2004
7 A 02/02/2001 02/02/2005
7 C 02/02/2005 02/02/2006
8 B 02/02/2002 02/02/2006
8 A 02/02/2004 02/02/2009
Note: member 1 starts as a new client again in 2009 since all previous products ended in 2004, and member 2 starts as a new client again in 2005 since all previous products ended in 2002.
The desired outcome is:
Start Year 2001 2002 2003 2004 2005 2006 2007 2008
New Clients 3 3 2 0 1 0 0 0
Here's one way of trying to solve it. Let me know if this is any more efficient than yours:
1st New Column:
PreviousHighestFinish:=
Calculate(
    Max(Products[EndDate]),
    ALLEXCEPT(Products, Products[ClientNumber]),
    Products[StartDate] < Earlier(Products[StartDate])
)
This will give you the latest end date where the Client Number matches and the start date is before the current start date. If there is no earlier start date, it returns a blank.
2nd New Column:
NewClientProduct:=
if(Products[StartDate]>=Products[PreviousHighestFinish],1,0)
This will give you a 1 for every row where the client has either not been seen before (and the previous column showed blank) or the client has been seen before but has no current products.
The problem with this column is that if you have a client starting more than one product on the same date, they will show as multiple new clients.
The fix for this is to count up the instances of each client-date combination.
3rd New Column:
ClientDateCount:=
CALCULATE(
COUNTROWS(Products),
ALLEXCEPT(Products,Products[ClientNumber],Products[StartDate])
)
This essentially gives the number of times that the client on this row in the table has started a product on this date.
Now divide the 2nd new column by this one
4th New Column:
NewClients:=
DIVIDE(Products[NewClientProduct],Products[ClientDateCount])
And voila:
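To turn those row-level flags into the yearly counts in the desired table, one option (a sketch; StartYear and the measure name are my own, not from the original answer) is a calculated column for the start year plus a measure that sums the fractional NewClients column:

StartYear = YEAR ( Products[StartDate] )            // calculated column, used as the axis
New Client Count = SUM ( Products[NewClients] )     // measure; each new client-date pair contributes exactly 1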
I have a panel data set with multiple waves (13) and roughly 10,000 individuals each year, with people entering and exiting at various time points. I am interested in what happens as people become diagnosed with a disease over time. Therefore I need to recode the time variable so that t=0 is the first wave in which a person is diagnosed, t=1 is the next year, and so on (and I guess t=-1 for the year before, etc.), so that all of my individuals are comparable. However, I am unsure how to go about this in Stata. Would anyone be able to advise? Many thanks
The case of one diagnosis per person
clear all
set more off
*----- example data -----
set obs 100
set seed 2357
generate id = _n
generate year = floor(10 * runiform()) + 1990
expand 5
bysort id: replace year = year + _n
bysort id (year): generate diag = cond(_n == 3, 1, 0)
list in 1/20, sepby(id)
*----- what you seek -----
bysort id (diag): gen time = year - year[_N]
sort id year
list in 1/20
I assume the same data structure as @Richard Herron and use his example. diag is an indicator variable that takes on the value of 1 at the time of diagnosis and 0 otherwise (only one diagnosis per person is considered).
The sorting done by bysort is critical. The observation holding the time of diagnosis is pushed to the end of the database (by id groups) and then all that's left to do is compare (subtract) all years with that reference year. See help _variables for details on system variables like _N.
The case of multiple diagnoses per person
If several diagnoses are made per person, but we care only about the first occurrence (according to year), we could do:
gsort id diag -year
by id: gen time = year - year[_N]
Simple but not optimal solution
Suppose diagnosis is 1 when diagnosed (at most once per person) and 0 otherwise.
Then the time at diagnosis is at its simplest
egen time_diagnosis = total(diagnosis * year), by(id)
but you need to ignore any zeros. To spell that out,
replace time_diagnosis = . if time_diagnosis == 0
Better alternative
A more complicated but preferable alternative can handle multiple diagnoses if they occur:
egen time_diagnosis = min(year / diagnosis), by(id)
as year / diagnosis is year when diagnosis is 1 and missing otherwise. This yields missing values if there is no diagnosis, which is as it should be.
Then you subtract that to get a new time variable.
gen time2 = year - time_diagnosis
In short, I think you can get this done in two statements, handling panel structure too.
Update
@Richard Herron asks why use egen with by(), and not just
gen time_diagnosis = year * diagnosis
A limitation of that is that the "correct" value is contained only in those observations for which diagnosis is 1; that value still has to be "spread" to the other observations for the same id. But that is precisely what egen does here. In the simplest situation, with one diagnosis, the total of year * diagnosis is just year * 1 or year, as any zeros make no difference to the sum.
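As a tiny illustration of that difference (variable names here are made up):

gen  year_diag_row = year * diagnosis                    // nonzero only on the row where diagnosis == 1
egen year_diag_id  = total(year * diagnosis), by(id)     // the same value copied to every row of that id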
It is usually helpful to provide test data, but here they are easy enough to generate. The trick is to find the first year for each individual (my fyear), which I'll do with min() from egen. Then I'll subtract this first year fyear from the actual year to find the year relative to diagnosis ryear.
/* generate panel */
clear
set obs 10000
generate id = _n
generate year = floor(10 * runiform()) + 1990
expand 10
bysort id: replace year = year + _n
sort id year
list in 1/20
/* generate relative year */
bysort id: egen fyear = min(year)
generate ryear = year - fyear
list in 1/20
If the first year in the panel is not the diagnosis year, then just construct fyear based on the diagnosis criteria, as sketched below.
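For instance, instead of the plain min(year) above (a sketch, assuming a 0/1 indicator named diagnosis as in the other answer):

bysort id: egen fyear = min(cond(diagnosis == 1, year, .))
generate ryear = year - fyear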
Edit: Thinking more on this, maybe it's the last part that you're having a hard time with (i.e., identifying the diagnosis year to subtract from the calendar year). Here's what I would do.
bysort id (year): generate diagnosis = cond(_n == 5, 1, 0)
preserve
tempfile diagnosis
keep if (diagnosis == 1)
rename year dyear
keep id dyear
save `diagnosis'
restore
merge m:1 id using `diagnosis', nogenerate
generate ryear2 = year - dyear