summation of same values once for each id - filter

How can I define a measure in Power BI that sums the same value only once per id?
For example, in the picture the sum of num (SUM(num)) should be 5.
I used the measure below, but it sums all the rows, so for this example it returns 11 instead of 5!
Measure = CALCULATE ( SUM ( 'table1'[num] ), ALLEXCEPT ( 'table1', 'table1'[order_id] ) )

First, build a virtual table that contains only the rows that match your condition; then you can easily count its rows and display the result in a card visual.
CountOF =
VAR __temp = CALCULATETABLE ( VALUES ( Sheet2[user_id] ), Sheet2[num] > 0 )
RETURN
    CALCULATE ( COUNTROWS ( __temp ) )
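To make the idea concrete outside of DAX, here is a tiny Python illustration of the same two steps (the sample rows and names are invented, not from the question): build the filtered set of ids first, then count it.

rows = [("a", 3), ("a", 3), ("b", 2), ("c", 0)]    # (user_id, num) sample data

matching_ids = {user_id for user_id, num in rows if num > 0}   # the "virtual table" of ids
count_of = len(matching_ids)                                   # COUNTROWS equivalent
print(count_of)   # 2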

Related

Sum the Calculation of 2 columns

I have a measure that calculates the number of sales for a particular product name, but the business has now split the name of the product into 2 different products.
I need to sum together the calculations for the now 2 products. I've tried summing them, but it just says the syntax for CALCULATE is wrong.
This is what I've tried:
measure = SUM(CALCULATE([Measure1], brand[ProductType3] = "product1"),CALCULATE([Measure1], brand[ProductType3] = "product2")
How do you sum 2 calculations in the same measure?

Calculating Moving Average for N Months in DAX Power BI

I have a measure that calculates Moving Average for 3 months:
Moving_Avg_3_Months =
AVERAGEX (
    DATESINPERIOD ( 'Calendar FY'[Date], LASTDATE ( 'Calendar FY'[Date] ), -3, MONTH ),
    [CUS Revenue Credible All]
)
Is it possible to create a measure that would calculate the Moving Average of my [CUS Revenue Credible All] - but for N months, where N = 3, N = 6, or whatever number I'd like?
If you create a new table with the different values for the moving average you want to use, e.g. TableMovingAverage: [-3, -6, -12, -24, ..., N],
and modify your DAX formula like this:
Moving_Avg_3_Months =
AVERAGEX (
    DATESINPERIOD (
        'Calendar FY'[Date],
        LASTDATE ( 'Calendar FY'[Date] ),
        SELECTEDVALUE ( 'TableMovingAverage', -3 ),
        MONTH
    ),
    [CUS Revenue Credible All]
)
SELECTEDVALUE returns a scalar if there is exactly one value in the specified table; otherwise it returns the default value, -3 in this case.
If you filter TableMovingAverage (for example with a slicer), you can switch between the different moving averages.

Combining two unique numbers (order doesn't matter) to create a unique number

I'm creating a website in which 2 items out of a possible 117 are chosen and compared in different ways. I need a way to assign each of these matchups a unique number so they can be easily stored in a database and whatnot. I've seen pairing functions, but I cannot find one in which order doesn't matter. For example, I want the unique number for 2 and 17 to be the same as for 17 and 2. Is there an equation that will satisfy this?
It depends on what programming language you are using.
In Java, for example, it would be quite easy, because the same seed produces the same random number sequence. So you could simply use the sum of both numbers as the seed:
Long seed = 2L + 17L;
Long seed2 = 17L+2L;
Random random = new Random(seed);
Random random2 = new Random(seed2);
boolean b = (random.nextLong() == random2.nextLong()); // true
However, this would also return the same value for 1+18, 0+19 and so on - whatever sums up to 19.
So, to get really unique numbers "per pair" you would need to shift one of them. For example, with 117 entries, you could multiply the SMALLER (or larger) number by 1000:
Long seed = 2L * 1000 + 17L;
....
Then you have a unique random number for 2,17 and 17,2 - but 19,0 or 0,19 would produce a DIFFERENT random number.
P.S.: if it should ALWAYS return the same value for 2,19 - then the result is not really a random number, is it?
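The same shift idea can also be used without Random at all, since the shifted combination is already a unique order-independent key. A small sketch in Python (the function name and the 1000 multiplier are just illustrative):

def pair_key(x, y):
    # sort so that (2, 17) and (17, 2) map to the same key
    a, b = min(x, y), max(x, y)
    # shift the smaller number; with at most 117 items, a multiplier
    # of 1000 keeps the two parts from colliding
    return a * 1000 + b

print(pair_key(2, 17) == pair_key(17, 2))   # True
print(pair_key(0, 19) == pair_key(2, 17))   # False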
I know that the question dates from 2014, but I still wanted to add the following answer.
You could use the product of prime numbers to do this. For example, if your pair is (2,4), then you could use the product of the 2nd prime number (=3) and the 4th prime number (=7) as your id (= 3*7 = 21).
In order to do this with your 117 possible values, though, you would need to pre-calculate the first 117 prime numbers, store them in, for example, an array or a hash table, and then do something like this (in JavaScript):
var primes = [2,3,5,7,...];
var a = 2;
var b = 17;
var id = primes[a-1]*primes[b-1];
Note that if you want to decode them as well, things are going to be more difficult since you would need to calculate the prime factorization of your id.
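For completeness, here is a rough sketch of that decoding step, in Python for brevity and assuming the same pre-computed prime list as above:

primes = [2, 3, 5, 7]   # ... the first 117 primes in the real case

def decode(pair_id):
    # find the smaller prime factor, then look up the remaining factor
    for i, p in enumerate(primes):
        if pair_id % p == 0:
            j = primes.index(pair_id // p)
            return (i + 1, j + 1)   # 1-based indices, matching primes[a-1]
    return None

print(decode(3 * 7))   # (2, 4)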

Range update and querying in a 2D matrix

I don't have a scenario, but here goes the problem; this one is just driving me crazy. There is an n×n boolean matrix, initially all elements are 0, with n <= 10^6 given as input.
Next there will be up to 10^5 queries. Each query either sets all elements of column c to 0 or 1, or sets all elements of row r to 0 or 1. There is another type of query that prints the total number of 1's in column c or row r.
I have no idea how to solve this and any help would be appreciated. Obviously an O(n) solution per query is not feasible.
The idea of using a number to order the modifications is taken from Dukeling's post.
We will need 2 maps and 4 binary indexed trees (BIT, a.k.a. Fenwick tree): 1 map and 2 BITs for rows, and 1 map and 2 BITs for columns. Let us call them m_row, f_row[0] and f_row[1]; m_col, f_col[0] and f_col[1], respectively.
A map may be implemented with an array, a tree-like structure, or hashing. The 2 maps are used to store the last modification to a row/column. Since there can be at most 10^5 modifications, you may use that fact to save space over a simple array implementation.
A BIT has 2 operations (a minimal sketch of such a BIT follows below):
adjust(value, delta_freq), which adjusts the frequency of value by delta_freq.
rsq(from_value, to_value) (rsq stands for range sum query), which finds the sum of all the frequencies from from_value to to_value inclusive.
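For concreteness, here is a minimal Fenwick tree sketch in Python supporting exactly those two operations (1-based indices assumed):

class BIT:
    def __init__(self, size):
        self.tree = [0] * (size + 1)

    def adjust(self, value, delta_freq):
        # add delta_freq to the frequency of `value`
        while value < len(self.tree):
            self.tree[value] += delta_freq
            value += value & (-value)

    def _prefix(self, value):
        total = 0
        while value > 0:
            total += self.tree[value]
            value -= value & (-value)
        return total

    def rsq(self, from_value, to_value):
        # sum of the frequencies in [from_value, to_value]
        return self._prefix(to_value) - self._prefix(from_value - 1)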
Let us declare a global variable: version.
Let us define numRow to be the number of rows in the 2D boolean matrix, and numCol to be the number of columns.
The BITs should have a size of at least MAX_QUERY + 1, since they are used to count the number of changes to the rows and columns, which can be as many as the number of queries.
Initialization:
version = 1
# Map should return <0, 0> for rows or cols not yet
# directly updated by query
m_row = m_col = empty map
f_row[0] = f_row[1] = f_col[0] = f_col[1] = empty BIT
Update algorithm:
update(isRow, value, idx):
    if (isRow):
        # Since setting a row/column to a new value will reset
        # everything done to it, we need to erase earlier
        # modification to it.
        # For example, turn on/off on a row a few times, then
        # query some column
        <prevValue, prevVersion> = m_row.get(idx)
        if ( prevVersion > 0 ):
            f_row[prevValue].adjust( prevVersion, -1 )
        m_row.map( idx, <value, version> )
        f_row[value].adjust( version, 1 )
    else:
        <prevValue, prevVersion> = m_col.get(idx)
        if ( prevVersion > 0 ):
            f_col[prevValue].adjust( prevVersion, -1 )
        m_col.map( idx, <value, version> )
        f_col[value].adjust( version, 1 )
    version = version + 1
Count algorithm:
count(isRow, idx):
    if (isRow):
        # If this is row, we want to find number of reverse modifications
        # done by updating the columns
        <value, row_version> = m_row.get(idx)
        count = f_col[1 - value].rsq( row_version + 1, version )
    else:
        # If this is column, we want to find number of reverse modifications
        # done by updating the rows
        <value, col_version> = m_col.get(idx)
        count = f_row[1 - value].rsq( col_version + 1, version )
    if (isRow):
        if (value == 1):
            return numRow - count
        else:
            return count
    else:
        if (value == 1):
            return numCol - count
        else:
            return count
The complexity is logarithmic in the worst case for both update and count.
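For illustration only, here is a rough Python sketch of the pseudocode above, reusing the BIT class sketched earlier; the names mirror the pseudocode and a square matrix is assumed:

MAX_QUERY = 10**5
numRow = numCol = 10**6               # matrix dimensions (example)

version = 1
m_row, m_col = {}, {}                 # idx -> (value, version)
f_row = [BIT(MAX_QUERY + 1), BIT(MAX_QUERY + 1)]
f_col = [BIT(MAX_QUERY + 1), BIT(MAX_QUERY + 1)]

def update(isRow, value, idx):
    global version
    m, f = (m_row, f_row) if isRow else (m_col, f_col)
    prevValue, prevVersion = m.get(idx, (0, 0))
    if prevVersion > 0:
        f[prevValue].adjust(prevVersion, -1)   # erase the earlier modification
    m[idx] = (value, version)
    f[value].adjust(version, 1)
    version += 1

def count(isRow, idx):
    m, f_other = (m_row, f_col) if isRow else (m_col, f_row)
    value, last_version = m.get(idx, (0, 0))
    # reverse modifications done on the other axis since our last update
    flipped = f_other[1 - value].rsq(last_version + 1, version)
    total = numRow if isRow else numCol
    return total - flipped if value == 1 else flipped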
Take version just to mean a value that gets auto-incremented for each update.
Store the last version and last update value at each row and column.
Store a list of (versions and counts of zeros and counts of ones) for the rows. The same for the columns. So that's only 2 lists for the entire grid.
When a row is updated, we set its version to the current version and insert into the list for rows the version and, if (oldRowValue == 0), zeroCount = oldZeroCount, else zeroCount = oldZeroCount + 1 (so it's not the number of zeros, but rather the number of times a value was updated with a zero). Same for oneCount. Same for columns.
If you do a print for a row, we get the row's version and last value, then do a binary search for that version in the column list (first value greater than it). Then:
if (rowValue == 1)
    target = n*rowValue
             - (latestColZeroCount - colZeroCount)
             + (latestColOneCount - colOneCount)
else
    target = (latestColOneCount - colOneCount)
Not too sure whether the above will work.
That's O(1) for update, O(log k) for print, where k is the number of updates.

How to decide on weights?

For my work, I need some kind of algorithm with the following input and output:
Input: a set of dates (from the past). Output: a set of weights - one weight per given date (the sum of all weights = 1).
The basic idea is that the closest date to today's date should receive the highest weight, the second closest date will get the second highest weight, and so on...
Any ideas?
Thanks in advance!
First, for each date in your input set compute the amount of time between that date and today.
For example: the date set {today, tomorrow, yesterday, a week from today} becomes {0, 1, 1, 7}. Formally: val[i] = abs(today - date[i]).
Second, invert the values so that their relative order is reversed (closer dates get larger values). The simplest way of doing so would be: val[i] = 1/val[i].
Other suggestions:
val[i] = 1/val[i]^2
val[i] = 1/sqrt(val[i])
val[i] = 1/log(val[i])
The hardest and most important part is deciding how to invert the values. Think about what the nature of the weights should be: do you want noticeable differences between two far-away dates, or should two far-away dates have roughly equal weights? Should a date that is very close to today have an extremely larger weight or only a reasonably larger weight?
Note that you should come up with an inverting procedure that cannot divide by zero. In the example above, dividing by val[i] results in division by zero for today's date. One method to avoid division by zero is called smoothing. The most trivial way to "smooth" your data is add-one smoothing, where you just add one to each value (so today becomes 1, tomorrow becomes 2, a week from today becomes 8, etc.).
Now the easiest part is to normalize the values so that they'll sum up to one.
sum = val[1] + val[2] + ... + val[n]
weight[i] = val[i]/sum for each i
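As a small sketch of those steps in Python (day differences, add-one smoothing, inversion, normalization), assuming the inputs are datetime.date values:

from datetime import date

def date_weights(dates, today=None):
    today = today or date.today()
    # distance in days, with add-one smoothing to avoid division by zero
    val = [abs((today - d).days) + 1 for d in dates]
    inv = [1.0 / v for v in val]           # closer dates -> larger values
    total = sum(inv)
    return [v / total for v in inv]        # weights sum to 1

# Example: today, yesterday and a week ago
print(date_weights([date(2024, 1, 10), date(2024, 1, 9), date(2024, 1, 3)],
                   today=date(2024, 1, 10)))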
Sort dates and remove dups
Assign values (maybe starting from the farthest date in steps of 10 or whatever you need - these values can be arbitrary, they just reflect order and distance)
Normalize weights to add up to 1
Executable pseudocode (tweakable):
#!/usr/bin/env python
import random, pprint
from operator import itemgetter

# for simplicity's sake dates are integers here ...
pivot_date = 1000
past_dates = set(random.sample(range(1, pivot_date), 5))

weights, stepping = [], 10
for date in sorted(past_dates):
    weights.append((date, stepping))
    stepping += 10

sum_of_steppings = sum([itemgetter(1)(x) for x in weights])
normalized = [(d, (w / float(sum_of_steppings))) for d, w in weights]
pprint.pprint(normalized)
# Example output
# The 'date' closest to 1000 (here: 889) has the highest weight,
# 703 the second highest, and so forth ...
# [(151, 0.06666666666666667),
# (425, 0.13333333333333333),
# (571, 0.2),
# (703, 0.26666666666666666),
# (889, 0.3333333333333333)]
How to weight: just compute the difference of all dates and the current date
x(i) = abs(date(i) - current_date)
you can then use different expressions to assign weights:
w(i) = 1/x(i)
w(i) = exp(-x(i))
w(i) = exp(-x(i)^2)
or use a Gaussian distribution - more complicated, not recommended
Then use normalized weights: w(i)/sum(w(i)) so that the sum is 1.
(Note that the exponential function is commonly used by statisticians in survival analysis.)
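A short sketch of the exponential variant above in Python, where x holds the day differences abs(date(i) - current_date):

import math

x = [0, 1, 7, 30]                        # example day differences
w = [math.exp(-xi) for xi in x]          # w(i) = exp(-x(i))
total = sum(w)
weights = [wi / total for wi in w]       # normalized so the sum is 1
print(weights)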
The first thing that comes to my mind is to use a geometric series:
http://en.wikipedia.org/wiki/Geometric_series
(1/2)+(1/4)+(1/8)+(1/16)+(1/32)+(1/64)+(1/128)+(1/256)..... sums to one.
Yesterday would be 1/2
2 days ago would be 1/4
and so on
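A tiny sketch of this in Python: the i-th most recent date gets 1/2^(i+1); for a finite list of dates the raw weights sum to slightly less than 1, so they are renormalized here.

def geometric_weights(dates_sorted_newest_first):
    w = [0.5 ** (i + 1) for i in range(len(dates_sorted_newest_first))]
    total = sum(w)
    return [wi / total for wi in w]

print(geometric_weights(["yesterday", "2 days ago", "3 days ago"]))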
i is the index of the i-th date.
Assign weights equal to Ni / D, where:
D0 is the first date,
Ni is the difference in days between the i-th date and the first date D0, and
D is the normalization factor (so that the weights sum to 1).
Convert the dates to yyyymmddhhmiss format (24-hour clock), sum all of these values, divide each value by the total, and sort by this value.
declare @Data table
(
    Date bigint,
    Weight float
)

declare @sumTotal decimal(18,2)

insert into @Data (Date)
select top 100
    replace(replace(replace(convert(varchar, Datetime, 20), '-', ''), ':', ''), ' ', '')
from Dates

select @sumTotal = sum(Date)
from @Data

update @Data set
    Weight = Date / @sumTotal

select * from @Data order by 2 desc
