I need to sum the values of a column in the table returned by the SUMMARIZE function.
For example, my dataset 'Tab' looks like this:
Type  Value
A     10
A     10
A     10
B     20
B     20
B     20
C     30
C     30
C     30
The result of Summarize(Tab,[Type],AVG([Value])) will be the following:
A  10
B  20
C  30
The final result required from this result set is 10 + 20 + 30, i.e. 60.
Please help
You can use the SUMX function.
Sum of Avg =
SUMX (
    SUMMARIZE ( Tab, [Type], "Total Average", AVERAGE ( Tab[Value] ) ),
    [Total Average]
)
It will give you the total as long as there is no Type context affecting the measure.
Let me know if this helps.
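For intuition, the same computation sketched in Python with pandas (purely illustrative; the DataFrame just mirrors the sample Tab above):

import pandas as pd

# the sample Tab from the question
tab = pd.DataFrame({
    "Type":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "Value": [10, 10, 10, 20, 20, 20, 30, 30, 30],
})

# SUMMARIZE ( Tab, [Type], "Total Average", AVERAGE ( Tab[Value] ) )
per_type_avg = tab.groupby("Type")["Value"].mean()   # A: 10, B: 20, C: 30

# SUMX then adds up the summarized rows
print(per_type_avg.sum())   # 60.0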
You need to declare a name for the new column:
Total Value = SUMMARIZE ( Tab, 'Tab'[Type], "Total value", SUM ( 'Tab'[Value] ) )
I would appreciate any advice on my problem:
I have a list of stocks with daily values (several stocks, one value per day for each).
I'm trying to build a cumulative margin % on the total portfolio from the beginning of the year, as a measure, so that I have the results on a daily basis.
For example, if the total portfolio value is 100 one day, 102 the day after, and 104 the following day, I would like a measure showing (for these three days) 0%, 2%, 4%.
I have a measure calculating the margin % of the whole portfolio per day (I can't use a calculated column, as the data is stock-based rather than portfolio-based), and what I would like to achieve is the cumulative version of that measure.
I tried
=CALCULATE ( SUM ( dailies[marge_daily_percent_measure] ); FILTER ( ALL ( dailies ); INT ( dailies[Date (Year)] ) = [annee] ) )
(the filter is there to get the current year's data), but the SUM cannot be applied to the measure (it's looking for a column).
I also tried TOTALYTD, but then I have two issues: the SUM still cannot be applied to the measure, and I also need the result on a daily basis.
Thanks for any hints.
Assuming your table with stock prices looks like this:
Date               Value
30 December 2021   104
31 December 2021   106
03 January 2022    107
04 January 2022    107
05 January 2022    106
06 January 2022    95
07 January 2022    106
10 January 2022    110
I have calculated a Margin measure (DAX below) and a cumulative measure using SUMX.
DAX: Margin
Margin =
VAR _SelectedDate =
    SELECTEDVALUE ( 'Table'[Date] )
VAR _SelectedValue =
    SELECTEDVALUE ( 'Table'[Value] )
VAR _PreviousDate =
    CALCULATE ( MAX ( 'Table'[Date] ), 'Table'[Date] < _SelectedDate )
VAR _PreviousValue =
    CALCULATE ( SUM ( 'Table'[Value] ), 'Table'[Date] = _PreviousDate )
VAR Margin =
    DIVIDE ( _SelectedValue - _PreviousValue, _PreviousValue )
RETURN
    Margin
DAX: Margin Cumulative
Cumulative Margin =
VAR _SelectedDate =
    SELECTEDVALUE ( 'Table'[Date] )
VAR Cumulative =
    CALCULATE (
        SUMX ( VALUES ( 'Table'[Date] ), [Margin] ),
        'Table'[Date] <= _SelectedDate
    )
RETURN
    Cumulative
Bear in mind that the final percentage you get from Cumulative Margin is not the same as the difference between the first and last values. In this case, (110 - 104) / 104 = 5.77%, whereas the Cumulative Margin comes to 6.91%.
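As a quick sanity check, the same arithmetic sketched in Python over the sample prices above (purely illustrative):

values = [104, 106, 107, 107, 106, 95, 106, 110]

# day-over-day margin, as in the Margin measure
margins = [(v - p) / p for p, v in zip(values, values[1:])]

print(sum(margins))                           # ~0.0691 -> the 6.91% cumulative figure
print((values[-1] - values[0]) / values[0])   # ~0.0577 -> the 5.77% first-vs-last figure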
One way I've built cumulative measures in the past is to use the following logic to filter on date. Assuming you use this in some kind of time-sliced view (like your table, or a line chart), this should only grab the dates on/before 'today's' date for each row.
_Cumulative_ClosedTasks =
CALCULATE (
    [_ClosedTasks],
    FILTER (
        ALL ( 'Date'[Date] ),
        'Date'[Date] <= MAX ( 'Date'[Date] )
    )
)
([_ClosedTasks] is just a basic SUM metric)
Does this approach work for your data?
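For what it's worth, in a time-sliced table this "all dates up to the current one" filter reduces to a running total; a minimal pandas sketch (column names are made up for illustration):

import pandas as pd

df = pd.DataFrame({
    "date":   pd.date_range("2022-01-03", periods=5),
    "closed": [2, 0, 3, 1, 4],
})

# for each row, sum the metric over all dates <= that row's date
df["cumulative_closed"] = df["closed"].cumsum()
print(df)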
In Power BI, I want to compare the value growth of categories (let's take A and B) over time from any starting year. To compare this easily, I am using a line graph with time on the x-axis and category as a legend. I would like both categories to start at 100% and show the growth relative to that starting point. I then want to be able to use a continuous date slicer to vary the start and end points of my line graph.
I've created dummy data to illustrate this:
Category  Year  Value
A         2000  5
A         2001  8
A         2002  8
A         2003  10
B         2000  10
B         2001  8
B         2002  12
B         2003  10
Without any date filter, I would like to display years 2000-2003 with the following values for the lines:
A: 100%, 160%, 160%, 200%
B: 100%, 80%, 120%, 100%
(The first value of category A is 5, therefore the line graph should display A's values relative to 5. Its values 5, 8, 8, 10 are then displayed as the mentioned percentages. The first value of category B is 10, so B's values should be displayed relative to 10.)
With a date slicer set to filter years 2001-2003, I want the line values to become:
A: 100%, 100%, 125%
B: 100%, 150%, 125%
(Due to the slicer the first value of category A is 8, so I want the % values relative to 8. The first value of B is also 8)
I was thinking of writing a measure for this. Can anyone help me with it? Thank you in advance.
You can create a measure to establish the earliest filtered year and the value for that year, then divide each evaluated value by that base value:
MyMeasure =
VAR MinYear =
    CALCULATE (
        MIN ( MyTable[Year] ),
        ALLSELECTED ( MyTable[Year] )
    )
VAR BaseValue =
    CALCULATE (
        SUM ( MyTable[Value] ),
        REMOVEFILTERS ( MyTable[Year] ),
        MyTable[Year] = MinYear
    )
VAR CurrentValue =
    SUM ( MyTable[Value] )
RETURN
    DIVIDE ( CurrentValue, BaseValue )
Which results in the percentages described in the question.
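To see the logic outside DAX, here is a pandas sketch of the same "divide by each category's value in the earliest selected year" idea, simulating a 2001-2003 slicer on the dummy data (illustrative only):

import pandas as pd

data = pd.DataFrame({
    "Category": ["A"] * 4 + ["B"] * 4,
    "Year":     [2000, 2001, 2002, 2003] * 2,
    "Value":    [5, 8, 8, 10, 10, 8, 12, 10],
})

# simulate the date slicer selecting 2001-2003
selected = data[data["Year"].between(2001, 2003)].copy()

# base value = each category's value in its earliest selected year
base = (selected.loc[selected.groupby("Category")["Year"].idxmin()]
                .set_index("Category")["Value"])

selected["Index"] = selected["Value"] / selected["Category"].map(base)
# A -> 1.00, 1.00, 1.25 ; B -> 1.00, 1.50, 1.25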
How do I select a random row from the database based on the probability assigned to each row?
Example:
Make        Chance  Value
ALFA ROMEO  0.0024  20000
AUDI        0.0338  35000
BMW         0.0376  40000
CHEVROLET   0.0087  15000
CITROEN     0.016   15000
........
How do I select a random make and its value based on the probability it has of being chosen?
Would a combination of rand() and ORDER BY work? If so, what is the best way to do this?
You can do this by using rand() and then using a cumulative sum. Assuming they add up to 100%:
select t.*
from (select t.*, (@cumep := @cumep + chance) as cumep
      from t cross join
           (select @cumep := 0, @r := rand()) params
     ) t
where @r between cumep - chance and cumep
limit 1;
Notes:
rand() is called once in a subquery to initialize a variable. Multiple calls to rand() are not desirable.
There is a remote chance that the random number will fall exactly on the boundary between two values. The limit 1 arbitrarily chooses one of them.
This could be made more efficient by stopping the subquery scan once cumep > @r.
The values do not have to be in any particular order.
This can be modified to handle chances where the sum is not equal to 1, but that would be another question.
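The same cumulative-sum technique, sketched in Python for clarity (the rescaling covers chances that don't sum to exactly 1):

import random

rows = [("ALFA ROMEO", 0.0024, 20000),
        ("AUDI",       0.0338, 35000),
        ("BMW",        0.0376, 40000),
        ("CHEVROLET",  0.0087, 15000),
        ("CITROEN",    0.016,  15000)]

# one random draw, rescaled to the total weight
r = random.random() * sum(chance for _, chance, _ in rows)

# walk the cumulative sum until it passes the draw
cumep = 0.0
for make, chance, value in rows:
    cumep += chance
    if r <= cumep:
        print(make, value)
        break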
I need to solve a financial math problem. I have a revenue goal set based on target company growth rate. Given this total revenue goal for next year, I need to set sales goals each month that have the growth rate (monthly) applied to them. They will total the annual revenue goal. What this looks like is contributions that increase every occurrence by a set rate. Once I determine either the first or last month's goal, I can discount back or find the future values easily.
The problem I have is that I know what these goals need to total, but not what the first or last goal would equal. Hypothetically, I suppose I could use the mean goal (annual goal / 12) to give me the goal for the middle of the year and discount back and scale up from June. However, since there is a growth rate, the compounding causes exponential rather than linear growth of the goals. What kind of formula can I use to solve this? Would I treat this as ongoing (but changing) contributions toward an investment with a set future value and growth rate? Or is there some sort of Goal Solver functionality that will help? I am currently doing this in Google Sheets but can switch to Excel or another medium. (I use R heavily, so I'm not afraid of some programmatic methods.)
If I cannot figure this out, I will just apply a linear function to it and use the difference in revenue each year as the slope.
Approach:
Let's assume your business starts in Sep-2017, as Month 0, with S units sold.
The constant growth rate for each subsequent month was defined in your Business Case as q, equal to 8% ( 1.08 ).
Month 0: S [units], be it 1, 3 or 76,538,112,257
Month 1: S * q
Month 2: S * q * q
Month 3: S * q * q * q
..
Month 11: S * q * q * q * ... * q
>>> S = 1
>>> q = 1.08
>>> [ S * ( q ** i ) for i in range( 12 ) ]
[ 1.0,
1.08,
1.1664,
1.2597120000000002,
1.3604889600000003,
1.4693280768000005,
1.5868743229440005,
1.7138242687795209,
1.8509302102818825,
1.9990046271044333,
2.158924997272788,
2.331638997054611
]
The scale-free sum (independent of the initial amount S) helps determine the relation between the target T units sold in total and any S, given q:
>>> sum( [ S * ( q**i ) for i in range( 12 ) ] )
18.977126460237237
Here one can see how inaccurate any attempt to use averages or similar guesses would be for approximating the progress of the powers of q over the compounding period (a constant monthly rate q of just 8% yields a T of ~19x the S over 12 months; do not hesitate to experiment with other values of q to see the effect more sharply).
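For reference, that sum is a finite geometric series, so it also has a closed form that avoids the explicit loop (a quick Python check under the same S = 1, q = 1.08 assumptions):

q = 1.08
T = 19000                           # the example target used below

# geometric series: sum( q**i for i in range( 12 ) ) == ( q**12 - 1 ) / ( q - 1 )
scale = ( q**12 - 1 ) / ( q - 1 )   # ~18.977, matching the sum above
S = T / scale                       # ~1001.2, rounded up to 1002 below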
So for an example of a total T of 19,000 units sold during the Year 0, keeping the growth rate of 8% p.m.:
The initial seed for S would be a target T divided by the sum of ( constant growth ) scaling coefficients:
T / sum( [ S * ( q**i ) for i in range( 12 ) ] )
To be on the safer side,
>>> int( 1 + T / sum( [ S * ( q**i ) for i in range( 12 ) ] ) )
1002
>>> sum( [ 1002 * ( q**i ) for i in range( 12 ) ] )
19015.08 ...
>>> [ int( 1002 * ( q**i ) ) for i in range( 12 ) ]
[ 1002,
1082,
1168,
1262,
1363,
1472,
1590,
1717,
1854,
2003,
2163,
2336
]
Month 0: S ~ 1,002 [units]
Month 1: S * q ~ 1,082
Month 2: S * q * q ~ 1,168
Month 3: S * q * q * q ~ 1,262
.. ~ 1,363
. ~ 1,472
~ 1,590
~ 1,717
~ 1,854
. ~ 2,003
.. ~ 2,163
Month 11: S * q * q * q * ... * q ~ 2,336
_____________________________________________________________
19,012 [unit] per Year 0
So Good Luck & Go Get It Sold!
For example, take the following into consideration:
100.00 - Original Number
33.33 - 1st divided by 3
33.33 - 2nd divided by 3
33.33 - 3rd divided by 3
99.99 - Is the sum of the 3 division outcomes
But I want it to match the original 100.00.
One way I saw it could be done was by taking the original number minus the first two divisions; the result would be my third number. If I take those 3 numbers together, I get my original number.
100.00 - Original Number
33.33 - 1st divided by 3
33.33 - 2nd divided by 3
33.34 - 3rd number
100.00 - Which gives me my original number correctly. (33.33+33.33+33.34 = 100.00)
Is there a formula for this in Oracle PL/SQL, or a function, or something else that could be implemented?
Thanks in advance!
This version takes precision as a parameter as well:
with q as (select 100 as val, 3 as parts, 2 as prec from dual)
select rownum as no
,case when rownum = parts
then val - round(val / parts, prec) * (parts - 1)
else round(val / parts, prec)
end v
from q
connect by level <= parts
no v
=== =====
1 33.33
2 33.33
3 33.34
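The same "round each share, push the rounding remainder into the last part" technique, sketched in Python for comparison (illustrative only):

def split(val, parts, prec=2):
    # each part gets the rounded share; the last part absorbs the remainder
    share = round(val / parts, prec)
    return [share] * (parts - 1) + [round(val - share * (parts - 1), prec)]

print(split(100, 3))   # [33.33, 33.33, 33.34]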
For example, if you want to split the value among the number of days in the current month, you can do this:
with q as (select 100 as val
,extract(day from last_day(sysdate)) as parts
,2 as prec from dual)
select rownum as no
,case when rownum = parts
then val - round(val / parts, prec) * (parts - 1)
else round(val / parts, prec)
end v
from q
connect by level <= parts;
1 3.33
2 3.33
3 3.33
4 3.33
...
27 3.33
28 3.33
29 3.33
30 3.43
To apportion the value amongst each month, weighted by the number of days in each month, you could do this instead (change the level <= 3 to change the number of months it is calculated for):
with q as (
select add_months(date '2013-07-01', rownum-1) the_month
,extract(day from last_day(add_months(date '2013-07-01', rownum-1)))
as days_in_month
,100 as val
,2 as prec
from dual
connect by level <= 3)
,q2 as (
select the_month, val, prec
,round(val * days_in_month
/ sum(days_in_month) over (), prec)
as apportioned
,row_number() over (order by the_month desc)
as reverse_rn
from q)
select the_month
,case when reverse_rn = 1
then val - sum(apportioned) over (order by the_month
rows between unbounded preceding and 1 preceding)
else apportioned
end as portion
from q2;
01/JUL/13 33.7
01/AUG/13 33.7
01/SEP/13 32.6
Use rational numbers. You could store the numbers as fractions rather than simple values. That's the only way to ensure that the quantity is truly split in 3 and that it adds up to the original number. Sure, you can do something hacky with rounding and remainders, as long as you don't care that the portions are not exactly split in 3.
The "algorithm" is simply that
100/3 + 100/3 + 100/3 == 300/3 == 100
Store both the numerator and the denominator in separate fields, then add the numerators. You can always convert to floating point when you display the values.
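In a language with a rational type available, the idea looks like this (a Python sketch using the standard fractions module):

from fractions import Fraction

total = Fraction(100)
parts = [total / 3] * 3      # three exact thirds: 100/3 each

assert sum(parts) == total   # 300/3 == 100, no rounding loss
print(float(parts[0]))       # convert only for display: 33.333...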
The Oracle docs even have a nice example of how to implement it:
CREATE TYPE rational_type AS OBJECT
( numerator INTEGER,
denominator INTEGER,
MAP MEMBER FUNCTION rat_to_real RETURN REAL,
MEMBER PROCEDURE normalize,
MEMBER FUNCTION plus (x rational_type)
RETURN rational_type);
Here is a parameterized SQL version
SELECT COUNT (*), grp
FROM (WITH input AS (SELECT 100 p_number, 3 p_buckets FROM DUAL),
data
AS ( SELECT LEVEL id, (p_number / p_buckets) group_size
FROM input
CONNECT BY LEVEL <= p_number)
SELECT id, CEIL (ROW_NUMBER () OVER (ORDER BY id) / group_size) grp
FROM data)
GROUP BY grp
output:
COUNT(*) GRP
33 1
33 2
34 3
If you edit the input parameters (p_number and p_buckets), the SQL essentially distributes p_number as evenly as possible among the number of buckets requested (p_buckets).
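The integer-count version of this distribution is a one-liner with divmod in Python (again just a sketch of the idea):

def distribute(n, buckets):
    # every bucket gets n // buckets; the remainder is spread over the last buckets
    base, extra = divmod(n, buckets)
    return [base + (1 if i >= buckets - extra else 0) for i in range(buckets)]

print(distribute(100, 3))   # [33, 33, 34]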
I solved this problem yesterday by subtracting two of the three parts from the starting number, e.g. 100 - 33.33 - 33.33 = 33.34, and the result of summing them up is still 100.