I have the below model:
When I want to display revenue by segment, it just gives me the totals:
(The above is wrong; it should roughly be half for A and half for B, but it just displays totals 3 times...)
Is there a function in DAX to enforce this segmentation? (The segmentation comes from ClientType, which is not related directly, but through Client.)
I have a table in Excel, using Power Pivot, that I then display and filter with a Pivot Table. Within my dataset, I calculate a ratio in Power Pivot that "sums" correctly in the pivot table based on slicers - it uses a SUMX(Cost)/SUMX(Total) pattern and works fine. By "sums correctly," I mean that if I further break the data set down by Region/State/Product/Employee, all of those rows sum up correctly for the ratio percentage.
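Roughly, the working measure has this shape (the table and column names here are placeholders, not my real ones):

Ratio :=
DIVIDE (
    SUMX ( 'TableName', 'TableName'[Cost] ),
    SUMX ( 'TableName', 'TableName'[Total] )
)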
The dataset is filtered to a single month or a range of months, and the ratio works for either. What I'm trying to do is show, within my Pivot Table, both a current-month ratio AND a year-to-date ratio. I've tried formulas I've found online, but nothing seems to work, including the following attempts:
=CALCULATE([Cost],[ProductID]="224594")/CALCULATE([Total],[ProductID]="224594")
=SUMX ( FILTER ( ALL ( 'TableName' ), PATHCONTAINS ( 'TableName'[ProductID], EARLIER ( 'TableName'[ProductID] ) ) ), 'TableName'[Cost] )
 / SUMX ( FILTER ( ALL ( 'TableName' ), PATHCONTAINS ( 'TableName'[ProductID], EARLIER ( 'TableName'[ProductID] ) ) ), 'TableName'[Total] )
I need the "sumifs" to sum the cost for Product A for all months divided by the sum of total for Product A for all months. I do not want to hard code in the the Product ID into the equation, but simply sum all previous records for that product, but I can't seem to get this to work.
Any suggestions?
Sample Data Set
I used the CALCULATE and FILTER functions in a calculated column instead of trying to use them in a measure, which fixed the problem.
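For anyone who hits the same wall, a minimal sketch of that calculated-column approach, using the same placeholder table and column names as above (the exact names are assumptions):

AllMonthsCost =
CALCULATE (
    SUM ( 'TableName'[Cost] ),
    FILTER ( ALL ( 'TableName' ), 'TableName'[ProductID] = EARLIER ( 'TableName'[ProductID] ) )
)

AllMonthsTotal =
CALCULATE (
    SUM ( 'TableName'[Total] ),
    FILTER ( ALL ( 'TableName' ), 'TableName'[ProductID] = EARLIER ( 'TableName'[ProductID] ) )
)

AllMonthsRatio = DIVIDE ( 'TableName'[AllMonthsCost], 'TableName'[AllMonthsTotal] )

Because calculated columns are evaluated at data refresh rather than at query time, the month slicer no longer interferes with the all-months ratio.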
I'm obtaining wrong results from a DAX formula and I can't understand why.
In my database I have articles that are composed of multiple tools, and the tools are produced from blanks. One blank can be used to produce multiple tools. I need to calculate blank sales over 3 time periods: the last 6, last 12, and last 24 months.
This is my Power BI model:
The time period table I use for the time period slicer, and the measure, look like this:
To obtain the blanks' sales volumes, I created 3 measures:
When I use the last formula, which I thought would return the right quantity of blanks sold by article and time period, I get strange results.
When I select "last 24 months" time period, everything looks fine:
When I select "Last 12 months", the total is fine, but the total by article is wrong:
Finally, if I select "Last 6 months" time period, all the results are totally wrong:
The curious thing is that I checked the result by executing a SQL query on the database, and the DAX formula returns the right result (1466 for the selected time period), but only when used in a card, without filtering by article number.
I have no other filters that affect the visuals.
Could you help me understand why I'm not obtaining the right result, or suggest a better way to reach the desired results?
I'm guessing (at least part of) the problem is that you are backing up from different end dates because LASTDATE(Sales[DocumentDate]) can return different values for different ArticleNo.
I'm not sure what value you actually want for that date, possibly LASTDATE('Dates Table'[Date]), but I'm pretty sure you want it consistent across different ArticleNo.
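An untested sketch of that idea, anchoring the end date to the date table instead of to Sales (I'm guessing at your table, column, and measure names):

Blank Qty Sold :=
VAR EndDate =
    CALCULATE ( MAX ( 'Dates Table'[Date] ), ALLSELECTED ( 'Dates Table' ) )
VAR MonthsBack = 12  -- or read this from your time-period slicer table
RETURN
    CALCULATE (
        SUM ( Sales[Quantity] ),
        DATESINPERIOD ( 'Dates Table'[Date], EndDate, - MonthsBack, MONTH )
    )

Because EndDate is evaluated over ALLSELECTED of the date table, every ArticleNo row and the total use the same window, which should make the per-article figures consistent with the total.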
I'm trying to figure out the difference between the in-application modifier as_rate() and the rollup function per_second().
I want a table with two columns: the left column shows the total number of events submitted to a Distribution (in query-speak: count:METRIC{*} by {tag}), and the right column shows the average rate of events per second. The table visualization applies a sum rollup on left column, and an average rollup on the right column, so that the left column should equal the right column multiplied by the total number of seconds in the selected time period.
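For example, over a one-hour window (3,600 seconds), a series with 7,200 total events in the left column should show an average rate of 2 events per second in the right column.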
From reading the docs I expected either of these queries to work for the right column:
count:DISTRIBUTION_METRIC{*} by {tag}.as_rate()
per_second(count:DISTRIBUTION_METRIC{*} by {tag})
But it turns out these two queries are not the same. as_rate() is the only one that produces the expected average rate where left = right * num_seconds. In fact, per_second() does something odd: metrics with fewer total events end up showing higher average rates.
Is someone able to clarify why these two functions are not synonymous and what per_second() does differently?
I have revenue data that has a natural credit balance (negative) and expense data that has a natural debit balance (positive). Revenue accounts are tagged with reporting code = 400; all other items have some other number.
I would like to create a variance column between two measures, where the idea is to show whether the change is favorable to profit or not. Creating the two measures is not the problem; it is that the variance calculation itself differs between revenue and expenses. Variance = Current - LastPeriod works for revenue, while Variance = (Current - LastPeriod) * -1 works for expenses.
How can I construct a DAX calculation that handles this type of situation?
I tried IF statements but received an error saying that a single value for my ReportingCode1ID column could not be determined. That makes sense, since it is a measure and there is no row context to pick one value from. Here is what I tried:
Variance:=IF(TrialBalance_View[ReportingCode1ID]=400,[MonthlyAmount]-[MonthlyAmount_PY],[MonthlyAmount]-[MonthlyAmount_PY]*-1)
This is an issue I am sure a lot of Finance/Accounting people have and I appreciate any help I can get! Thank you!!
Without seeing your data model and the code of your measures, I suggest this approach:
Variance =
SUMX (
    VALUES ( TrialBalance_View[ReportingCode1ID] ),
    VAR Account_Variance = [MonthlyAmount] - [MonthlyAmount_PY]
    RETURN
        IF ( TrialBalance_View[ReportingCode1ID] = 400, Account_Variance, -Account_Variance )
)
Here, we first create the list of reporting codes visible in the current filter context using the VALUES function. SUMX then iterates these codes one by one. For each code, it computes the variance and stores it in a variable; if the code is 400 it takes the variance as-is, otherwise it takes the negated variance.
If this does not work, please add your data model diagram and post the DAX code for your measures.
I have a table in Oracle that records events for a user, and a user may have many events. From these events I am calculating a reputation with a formula. My question is: what is the best approach to calculating and returning this data? Using a view and plain SQL, doing it in application code by grabbing all the events and calculating it (the problem there is when you have a list of users and need to calculate the reputation for all of them), or something else? I'd like to hear your thoughts.
Comments * (.1) +
Blog Posts * (.3) +
Blog Posts Ratings * (.1) +
Followers * (.1) +
Following * (.1) +
Badges * (.2) +
Connections * (.1)
= 100%
One Example
Comments:
This parameter is based on the average comments per post.
• Max: 20
• Formula: AVE(#) / Max * 100, capped at 100%
• Example: 5 / 10 * 100 = 50%
Max is the value at which a parameter earns its full percentage. Hope that makes some sense.
We also calculate visitation, so total unique visits divided by length of membership is another parameter. The table contains an event name and some metadata, and each event is tied to a user. Reputation just uses those events to produce a score, with 100% as the highest.
85% reputation - Joe AuthorUser has been a member for 3 years. He has:
• written 18 blog posts
o 2 in the past month
• commented an average of 115 times per month
• 3,000 followers
• following 2,000 people
• received an average like rating of 325 per post
• he's earned, over the past 3 years:
o 100 level 1 badges
o 50 level 2 badges
• he's connected his:
o FB account
o Twitter account
As a general approach I would use PL/SQL: one package with several get_rep functions.
-- pure calculation: deterministic, the same inputs always give the same score
function calc_rep (i_comments in number, i_posts in number, i_ratings in number,
                   i_followers in number, i_following in number, i_badges in number,
                   i_connections in number) return number deterministic is
...
end calc_rep;

-- looks up a user's raw counts and delegates the scoring to calc_rep
function get_rep_for_user (i_user_id in number) return number is
  v_comments ....
begin
  select .....
  return calc_rep (v_comments...);
end get_rep_for_user;
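A hypothetical call from SQL would then look something like this (assuming the functions live in a package named rep_pkg and you have a USERS table):

select u.user_id,
       rep_pkg.get_rep_for_user (u.user_id) as reputation
  from users u;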
If you've got to recalculate rep for a lot of users a lot of the time, I'd look into parallel pipelined functions (which should be a separate question). CALC_REP is declared deterministic because anyone with the same set of numbers will get the same result.
If the number of comments etc. is stored in a single record, then it will be simple to call. If the details need to be summarised, then use materialized views for the summaries. If they need to be gathered from multiple places, then a view can be used to encapsulate the joins.
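For example, a summary for comment counts might be a materialized view like this (table and column names are assumptions, and fast refresh on commit also requires a materialized view log on the base table):

create materialized view user_comment_counts
  refresh fast on commit
as
select user_id, count(*) as comment_count
  from comments
 group by user_id;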
Whether you can calculate on the fly fast enough to meet requirements depends on data volumes, database design, final calculation complexity, and so on; expecting a cut-and-dried approach from us is unreasonable.
It may wind up being something that is helped by storing summaries used for some of the calculated values. For example, look at the things that cause DML. If you had a user_reputation table, then a trigger on your blog_post table could increment or decrement a counter on user_reputation on insert or delete of a post. Same for comments, likes, follows, etc.
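As a sketch, with hypothetical table and column names, such a trigger could look like:

create or replace trigger trg_blog_post_rep
  after insert or delete on blog_post
  for each row
begin
  if inserting then
    update user_reputation
       set post_count = post_count + 1
     where user_id = :new.user_id;
  else
    update user_reputation
       set post_count = post_count - 1
     where user_id = :old.user_id;
  end if;
end;
/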
If you keep all of your summaries up to date in that manner, then the incremental costs to DML will be minor and the calculations will become simple.
Not saying that this is THE solution. Just saying that it might be worth exploring.