I have a measure using the DIVIDE function that combines two other measures based on columns from different tables.
The matrix shows averages in row subtotals automatically.
However, it is not clear to me how to prevent it from counting blank cells when calculating those averages.
I can't find a solution anywhere online, even though it seems like a simple problem.
% of Full Time = CALCULATE (DIVIDE([Actual Hours Heatmap], [Heatmap Capacity]))
Measure from source 1:
Heatmap Capacity = ROUND ( CALCULATE ( SUM ( 'Workday Target per CC'[Hour Target] ) * SUM ( 'Employee Mapping'[eMPLOYEE Count] ) ), 2 )
Measure from source 2:
Actual Hours Heatmap = ROUND ( SUM ( 'Actuals Input'[Hours ACT] ), 2 )
The fix combines the AVERAGEX function with the FILTER function, keeps the rounding, and filters to cells where the measure is not 0 and not blank (see the sketch below).
Measures are recalculated for each cell. The total isn't an average-of-averages; it's the measure calculated without any active filter on Area.
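A minimal sketch of that approach, assuming the matrix rows come from an Area column in a table I'll call 'Employee Mapping' (substitute your actual grouping column):

% of Full Time (Avg) =
ROUND (
    AVERAGEX (
        FILTER (
            VALUES ( 'Employee Mapping'[Area] ),  // assumed grouping column
            NOT ISBLANK ( [% of Full Time] ) && [% of Full Time] <> 0
        ),
        [% of Full Time]
    ),
    2
)

In a detail cell, VALUES returns a single area, so the result matches [% of Full Time]; at the subtotal level it averages only the areas whose result is neither blank nor zero.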
I have a table in Excel, using Power Pivot, that I then display and filter with a Pivot Table. Within my dataset, I calculate a ratio in Power Pivot that "sums" correctly in the pivot table based on slicers; this uses SUMX(Cost)/SUMX(Total) and everything works fine. By "sums correctly" I mean that if I break the data set down further by Region/State/Product/Employee, all those rows sum up correctly for the ratio percentage.
The dataset is filtered to a single month or a range of months, and the result works fine for either. What I'm trying to do within my Pivot Table is show a current-month ratio AND a year-to-date ratio. I've tried formulas I've found online, but nothing seems to work, including the following attempts:
=CALCULATE([Cost],[ProductID]="224594")/CALCULATE([Total],[ProductID]="224594")
=SUMX (FILTER(ALL('TableName'),PATHCONTAINS ('TableName'[ProductID], EARLIER('TableName'[ProductID]))),'TableName'[Cost]) / SUMX(FILTER(ALL('TableName'),PATHCONTAINS('TableName'[ProductID], EARLIER ('TableName'[ProductID]))),'TableName'[Total])
I need the "sumifs" to sum the cost for Product A for all months divided by the sum of total for Product A for all months. I do not want to hard code in the the Product ID into the equation, but simply sum all previous records for that product, but I can't seem to get this to work.
Any suggestions?
Sample Data Set
I used the CALCULATE and FILTER functions in a calculated column instead of trying to use them in a measure, which fixed the problem.
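A minimal sketch of that calculated-column approach, assuming the table is 'TableName' with the [ProductID], [Cost], and [Total] columns from the attempts above, plus a hypothetical [Month] date column:

CostToDate Ratio =
VAR CurrentProduct = 'TableName'[ProductID]
VAR CurrentMonth = 'TableName'[Month]  // hypothetical date column
RETURN
    CALCULATE (
        SUM ( 'TableName'[Cost] ),
        FILTER (
            ALL ( 'TableName' ),
            'TableName'[ProductID] = CurrentProduct
                && 'TableName'[Month] <= CurrentMonth
        )
    )
        / CALCULATE (
            SUM ( 'TableName'[Total] ),
            FILTER (
                ALL ( 'TableName' ),
                'TableName'[ProductID] = CurrentProduct
                    && 'TableName'[Month] <= CurrentMonth
            )
        )

Because a calculated column is evaluated row by row, the VARs capture the current row's product and month without needing EARLIER.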
I'm trying to create a calculated column based on a derived measure in an SSAS cube. The measure counts the number of cases per order, so an order with 3 cases gets the value 3.
Now I'm trying to create a bucket attribute with the values 1caseOrder, 2caseOrder, 3caseOrder, and 3+caseOrder. I tried the following:
IF ( [nrofcase] = 1, "nrofcase[1]",
    IF ( [nrofcase] = 2, "nrofcase[2]",
        IF ( [nrofcase] = 3, "nrofcase[3]", "nrofcase[>3]" ) ) )
But it doesn't work as expected: when the level of the report is changed from quarter to week, it is supposed to recalculate at the new level.
Please let me know if this can work.
Calculated columns are static. The value is calculated and stored when the column is added and when the table is processed. The only way for the value to change is to reprocess the model. If the formula refers to a DAX measure, it uses the measure without any of the context from the report (e.g., no row filters or slicers).
Think of it this way:
Calculated column is a fact about a row that doesn't change. It is known just by looking at a single row. An example of this is Cost = [Quantity] * [Unit Price]. Cost never changes and is known by looking at the Quantity and Unit Price columns. It doesn't matter what filters or context are in the report. Cost doesn't change.
A measure is a fact about a table. You have to look at multiple rows to calculate its value. An example is Total Cost = SUM(Sales[Cost]). You want this value to change depending on the context of time, region, product, etc., so its value is not stored but calculated dynamically in the report.
It sounds like for your data, there are multiple rows that tell you the number of cases per order, so this is a measure. Use a measure instead of a calculated column.
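As a sketch of the measure-based bucketing (assuming [nrofcase] is already defined as a measure, per the question):

Case Bucket =
SWITCH (
    TRUE (),
    [nrofcase] = 1, "1caseOrder",
    [nrofcase] = 2, "2caseOrder",
    [nrofcase] = 3, "3caseOrder",
    "3+caseOrder"
)

Because this is a measure, it re-evaluates [nrofcase] under whatever filter context the report applies, so it responds when the level changes from quarter to week.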
I've been trying to solve this problem for weeks. I've tried various formulas (ARRAYFORMULA, ABS, SUMPRODUCT, putting a negative sign on the cells), but I can't seem to get it right.
The only correct way so far is manually subtracting the cells one by one, but this becomes slow and error-prone once the sheet has more than 100 rows.
=if(D14<(E3-E4-E5-E6-E7-E8-E9-E10-E11-E12-E13),D14,E3-E4-E5-E6-E7-E8-E9-E10-E11-E12-E13)
Here's the link to the sheet: https://docs.google.com/spreadsheets/d/1fAPQHKupKglBAJpoxrcVqWP343m0P5QOj8zp1FvasEA/edit?usp=sharing
The overall idea is that the Total Purchased should be compared to the total sold. The 2201 value in the total sold is retrieved from another transactions sheet and totals every sold item; then, starting from E4 (cell value 170) onward, it decreases, since we just need to know the number of sold items from that row on.
Thank you very much for taking the time to read this. I'm looking forward to getting help, as this has been stressing me for weeks now.
Use a cumulative sum. In the formula below, transpose(row(D4:D))<=row(D4:D) builds a lower-triangular matrix of 1s and 0s, and MMULT multiplies it by the column, so each row gets the running total of D4:D down to that row:
=arrayformula(mmult(1*(transpose(row(D4:D))<=row(D4:D)),if(D4:D="",0,D4:D)))
and include it in your formula in E4 as follows:
=arrayformula(if(D4:D<($E$3-(mmult(1*(transpose(row(D4:D))<=row(D4:D)),if(D4:D="",0,D4:D)))),D4:D))
I have an Excel sheet in which I'm calculating the completed average of my Scrum tasks. I also have a story point (SP) item in the sheet. My calculation is:
Result = SP * percentage of completion. This is calculated for each row, and afterwards I sum up all the results to get the summary.
But sometimes I add a new task, and for each new task I have to extend the calculation by hand.
Is there any way to use a for loop in Excel?
for(int i=0;i<50;i++){ if(SP!=null && task!=null)(B+i)*(L+i)}
My current calculation is:
AVERAGE((B4*L4+B5*L5+B6*L6+B7*L7+B8*L8+B9*L9+B10*L10)/SUM(B4:B10))
First of all, AVERAGE is not doing anything in your formula, since the argument you pass to it is just one single value. You already do an average calculation by dividing by the sum. That average is in fact a weighted average, and so you could not even achieve that with a plain AVERAGE function.
I see several ways to make this formula more generic, so it keeps working when you add rows:
1. Use SUMPRODUCT
=SUMPRODUCT(B4:B100,L4:L100)/SUM(B4:B100)
The row number 100 is chosen arbitrarily, but should evidently encompass all data rows. If no data occurs below your table, it is safe to add a large margin. You want to avoid the situation where you think you are adding a line to the table, but actually land outside the range of the formula. Using proper Excel tables can help avoid this (see the sketch below).
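For instance (a sketch, assuming you convert the range to a table named Tasks, with story points in a column named SP and the completion percentage in a column named Done), structured references grow with the table automatically:

=SUMPRODUCT(Tasks[SP],Tasks[Done])/SUM(Tasks[SP])

Rows added to the table are then picked up without editing the formula.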
2. Use an array formula
This is a fallback for when the formula becomes more complicated and cannot be expressed with a "simple" SUMPRODUCT. The above would translate to this array formula:
=SUM(B4:B100*L4:L100)/SUM(B4:B100)
Once you have typed this in the formula bar, make sure to press Ctrl+Shift+Enter to enter it. Only then will it act as an array formula.
Again, the same remark about row number 100.
3. Use an extra column
Things get easy when you use an extra column for storing the product of B & L values for each row. So you would put in cell N4 the following formula:
=B4*L4
...and then copy that relative formula to the other rows. You can hide that column if you want.
Then the overall formula can be:
=SUM(N4:N100)/SUM(B4:B100)
With this solution you must take care to always copy an existing row when inserting a new one, so that the N column carries the intermediate product formula for the new row as well.
I am in the process of designing an algorithm that will identify regions in a candlestick chart where strong areas of support exist. An "area of support" in this case is defined as an area in the chart where the price of a stock rises by a large amount in a short period of time. (Please see the diagram below; the blue dots represent these strong areas of support.)
The data I am working with is a list of over 6000 TOHLC (timestamp, open price, high price, low price, close price) values. For example, the first entry in this list of data is:
[1555286400, 83.7, 84.63, 83.7, 84.27]
The way I have structured the algorithm to work is as follows:
1.) The list of 6000+ TOHLC values is split into sub-lists of 30 TOHLC values each (30 is a number I chose arbitrarily). The lowest low price (LLP) is then obtained from each of these sub-lists. The purpose of this step is to find areas in the chart where prices dip.
2.) The next step is to determine how high the price rose from each of these lows. I take the next 30 candlestick values after the low and find the highest high price (HHP). If HHP / LLP >= 1.03, the low is accepted; otherwise it is discarded. Again, 1.03 is a value I chose arbitrarily, by analysing the stock chart manually and estimating how much the price typically rose from these lows. (A code sketch of this procedure follows.)
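In code, the procedure as described might look roughly like this (a Python sketch, assuming each entry is a [timestamp, open, high, low, close] list as in the example above):

def chunked_supports(tohlc, window=30, rise=1.03):
    # Split into fixed 30-bar chunks, take each chunk's lowest low (LLP),
    # and accept it if the next `window` bars rally to HHP / LLP >= rise.
    supports = []
    for start in range(0, len(tohlc), window):
        chunk = tohlc[start:start + window]
        low_idx = start + min(range(len(chunk)), key=lambda j: chunk[j][3])
        llp = tohlc[low_idx][3]
        following = tohlc[low_idx + 1:low_idx + 1 + window]
        if following and max(bar[2] for bar in following) / llp >= rise:
            supports.append(low_idx)
    return supports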
The blue dots in the chart above represent the areas of support accepted by the algorithm. It appears to be working well, in terms of what I am trying to achieve.
So the question I have is: does anyone have any improvements they can suggest for this algorithm, or point out any faults in it?
Thanks!
I may have misunderstood, but from your explanation it seems you are doing your calculation on separate 30-element sub-lists and then combining the results.
So what if the LLP is the 30th element of sub-list N and the HHP is the 1st element of sub-list N+1? If you have taken that into account, then it's fine.
If you haven't, I would suggest a moving-window approach to reading the data: start from the 0th element of the 6000+ TOHLC list with a window size of 30 and slide the window one element at a time. This way you won't miss any values (see the sketch below).
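A sketch of that sliding-window variant (same assumed [timestamp, open, high, low, close] row format; a set deduplicates lows that several overlapping windows agree on):

def sliding_supports(tohlc, window=30, rise=1.03):
    # Slide a 30-bar window one bar at a time; a window's lowest low (LLP)
    # is accepted if the next `window` bars reach HHP / LLP >= rise.
    supports = set()
    for start in range(len(tohlc) - 2 * window):
        low_idx = min(range(start, start + window), key=lambda j: tohlc[j][3])
        llp = tohlc[low_idx][3]
        hhp = max(bar[2] for bar in tohlc[low_idx + 1:low_idx + 1 + window])
        if hhp / llp >= rise:
            supports.add(low_idx)
    return sorted(supports)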
Some of the selected blue dots have a deeper dip than others. Why is that? I would separate those out with another classifier; if you store them in an object, store the dip rate as well.
Floating-point numbers are not recommended in finance. If possible, I'd use a different approach (and perhaps classifier) based solely on integers, e.g. prices in cents. It may not bother you or your project right now, but it will start to produce false results as the numbers add up in the future.