How to find SUMPRODUCT of two columns in DAX

I am new to DAX formulas. I am looking to see how I can do the Excel equivalent of SUMPRODUCT for the following data:
Id  | Metric | Weight | Metric times Weight
----|--------|--------|---------------------
1   | 20%    | 30%    | 6.0%
2   | 10%    | 20%    | 2.0%
3   | 25%    | 20%    | 5.0%
4   | 12%    | 10%    | 1.2%
5   | 15%    | 10%    | 1.5%
6   | 2%     | 10%    | 0.2%
Net | 84.0%  | 100.0% | 84.0% (expected Metric*Weight)
I need 15.9%, which is what SUMPRODUCT(Metric, Weight) returns in Excel: the sum of the Metric times Weight column (6.0% + 2.0% + 5.0% + 1.2% + 1.5% + 0.2% = 15.9%).
The row total needs to be a sumproduct() rather than the cross product of the column totals (84.0% * 100.0% = 84.0%) shown above.
Any hints on how I can achieve this?

If the data is in a table called Table1, then the following DAX measure should work:
WeightedAvg := SUMX(Table1, Table1[Metric] * Table1[Weight])
Basically, the SUMX function iterates over Table1, computing [Metric] * [Weight] for each row; once all the iterations are done, the results are summed.
If you have a column with the product (as in your example), then you just need to drag that field to the values area of the PivotTable.
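If the weights are not guaranteed to sum to 100%, one variant worth considering (just a sketch reusing the same Table1 column names as above; the measure name is my own) divides the sumproduct by the total weight so the result is a true weighted average:
WeightedAvgNormalized :=
DIVIDE (
    SUMX ( Table1, Table1[Metric] * Table1[Weight] ),  // sumproduct of the two columns
    SUM ( Table1[Weight] )                             // total weight
)
With weights that already add up to 100%, as in the example, this returns the same 15.9%.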
Hope this helps!

Related

Cumulative Return for the following data

I was reading a blog on how to calculate the cumulative return of a stock for each day.
The formula described in the blog for the cumulative return was
(1 + TodayReturn) * (1 + Cumulative_Return_Of_Previous_Day) - 1, but I am still not able to reproduce the cumulative return given there.
Can someone please clarify how the cumulative return has been calculated in the table below? That would be a lot of help.
Thanks in advance.
| Days | Stock Price | Return | Cumulative Return|
---------------------------------------------------
| Day 1 | 150 | | |
| Day 2 | 153 | 2.00 % | 2.00 % |
| Day 3 | 160 | 4.58 % | 6.67 % |
| Day 4 | 163 | 1.88 % | 8.67 % |
| Day 5 | 165 | 1.23 % | 10.00 % |
---------------------------------------------------
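For what it's worth, the quoted formula does reproduce the Cumulative Return column when applied day by day. Taking Day 3 as a worked example:
Return (Day 3) = 160 / 153 - 1 = 4.58%
Cumulative Return (Day 3) = (1 + 0.0458) * (1 + 0.02) - 1 = 0.0667 = 6.67%
which is the same as comparing directly against the Day 1 price: 160 / 150 - 1 = 6.67%.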

Compounded monthly growth rate calculation in Tableau

I have a dataset with MTD returns for different securities and I want to compound them over time. Does anyone know how I can do this in Tableau?
Ideally I want a timeline showing the returns compounding from month to month. Since I have the ending return for each month for each security, I don't need to calculate the monthly returns themselves; on the other hand, I do not have the dollar values of the securities, so I cannot use those in the calculation.
Here is some sample data that can be used:
+--------+--------+
| Month | return |
+--------+--------+
| Jan-19 | 10% |
+--------+--------+
| Feb-19 | 15% |
+--------+--------+
| Mar-19 | 20% |
+--------+--------+
| Apr-19 | 10% |
+--------+--------+
| May-19 | 0% |
+--------+--------+
| Jun-19 | 11% |
+--------+--------+
| Jul-19 | 14% |
+--------+--------+
| Aug-19 | 9% |
+--------+--------+
| Sep-19 | 6% |
+--------+--------+
| Oct-19 | 15% |
+--------+--------+
| Nov-19 | 20% |
+--------+--------+
| Dec-19 | 8% |
+--------+--------+
| Jan-20 | 4% |
+--------+--------+
| Feb-20 | 9% |
+--------+--------+
| Mar-20 | 7% |
+--------+--------+
| Apr-20 | 1% |
+--------+--------+
I want the timeline to show Compounded Monthly Growth Rate (CMGR) as:
August: 10% return
September: 26.5% return ((1 + 0.10) * (1 + 0.15) - 1 = 0.265, or 26.5%)
October: 51.8% return ((1 + 0.10) * (1 + 0.15) * (1 + 0.20) - 1 = 0.518, or 51.8%)
Presently I am doing separate calculations for each month, but I am pretty sure there is an easier way in Tableau to show this growth rate by applying some running-product (cumulative product) kind of function.
Any help is appreciated!
I have taken the sample data posted in the question above.
Step-1: After importing the data into Tableau, create a calculated field CMGR with the calculation
EXP(RUNNING_SUM(LN(1+SUM([Return]))))-1
The calculation does a RUNNING_SUM on a logarithmic scale, giving us a running-product-like function (which sadly isn't available in Tableau at present).
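As a quick sanity check of the logarithm trick, take the first two months (10% and 15%):
LN(1.10) + LN(1.15) = 0.0953 + 0.1398 = 0.2351
EXP(0.2351) - 1 = 1.265 - 1 = 0.265, i.e. 26.5%
which matches (1 + 0.10) * (1 + 0.15) - 1 from the question.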
Step-2: Build your view (as the screenshot shows, it works as desired).
Or use a line chart, if desired.

Reconcile two large tables efficiently in Power BI?

As part of a migration project, I'm looking to reconcile two fact tables (high cardinality, approx. 500k rows each; there are a lot of customer accounts, and the reconciliation has to be done on a customer-account basis). There is a many-to-many relationship between the customer columns in the two tables.
I am struggling to find an efficient way to output the customers that appear in both tables but whose loan values differ.
I've tried a merge in Power Query, but it is extremely slow, perhaps due to the volume and the high cardinality.
I would welcome any advice on how to produce the desired output efficiently.
Input Table 1:
Customer | Type | Channel | Loan
Jones | A | Branch | 100
Taylor | B | Phone | 200
Taylor | B | Online | 60
Jerez | C | Online | 120
Murray | D | Phone | 90
Input Table 2:
Customer | Type | Loan
Jones | A | 81
Taylor | B | 285
Jerez | C | 80
Jerez | C | 40
Seinfeld | A | 140
Desired Output:
Customers that appear in both tables but whose loan totals differ:
Customer | Type1 | Loan1 | Loan2
Jones | A | 100 | 81
Taylor | B | 260 | 285
where Loan1 is the total loan from Table 1 and Loan2 is the total loan from Table 2.
Thanks for taking the time to look at this question.
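One possible approach, sketched here in DAX rather than Power Query (this assumes the two tables are loaded as Table1 and Table2 with the columns shown above; the table name and structure are illustrative, not taken from the asker's model): build a calculated table that aggregates Loan per customer in each table and keeps only the customers whose totals differ.
ReconDifferences =
VAR Summary =
    ADDCOLUMNS (
        SUMMARIZE ( Table1, Table1[Customer], Table1[Type] ),
        "Loan1", CALCULATE ( SUM ( Table1[Loan] ) ),  // total loan per customer in Table1
        "Loan2",
            VAR CurrentCustomer = Table1[Customer]
            RETURN
                CALCULATE (
                    SUM ( Table2[Loan] ),             // total loan for the same customer in Table2
                    Table2[Customer] = CurrentCustomer
                )
    )
RETURN
    FILTER ( Summary, NOT ( ISBLANK ( [Loan2] ) ) && [Loan1] <> [Loan2] )
Because the comparison happens on one aggregated row per customer rather than on the raw 500k-row tables, this tends to be much lighter than a many-to-many merge; aggregating each table by customer in Power Query before merging should have a similar effect.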

Automated sorting of table after filter is selected

timestamp | product | performance | sort_quantity
--------------|------------------|-----------------------|------------------------
2020-01-01 | Product_A | high | 819
2020-03-15 | Product_A | high | 819
2020-01-01 | Product_B | low | -214
2020-03-15 | Product_B | low | -214
2020-01-01 | Product_C | high | -100
2020-03-15 | Product_C | high | -100
2020-01-01 | Product_D | low | 933
2020-03-15 | Product_D | low | 933
2020-01-01 | Product_E | high | 501
2020-03-15 | Product_E | high | 501
I insert the table above into Tableau looking like this:
(Sorry for only having it available in German)
All this works perfectly.
Now I add a filter for column performance to the report.
When I select one of the values in the filter (e.g. high) the report looks like this:
The filter itself works correctly, but I also want the table to be sorted automatically (descending) by the sort_quantity column once a filter value is selected.
Is it possible to do this with Tableau?
If so, how can I achieve it?
Create a string version of sort_quantity and place it as the first pill on Rows, to the left of Product:
STR([sort_quantity])
Set this field to sort descending, by field, on sort_quantity (not the string version).
On the string version of sort_quantity, deselect Show Header to hide the column.
The final view should look like this.

Is there a way to cache and filter a table locally in PL/SQL?

I'm faced with having to process a table in Oracle 11g that contains around 5 million records. These records are speed limits along a divided highway. There is a SpeedLimitId, a HighwayId, and a from mile post and to mile post that depict the stretch of road the speed limit applies to. Currently all the records are only on one side of the divided highway, and they need to be processed so that they also apply to the other side.
There is a measure equation table that tells us which range of measure on one side of the highway equals a range of measure on the other side. This allows us to calculate the measure that the speed limit event will have on the other side: we work out what percentage of its measure range the value falls at, and then find that same percentage of the range on the opposing side. A speed limit record can be contained within one measure equation record, or it can cross several of them.
Based on the information in the speed limit table and the measure equation table, one or more records need to be inserted into a third table.
SPEED_LIMIT
+--------------+-----------+--------------+------------+------------+-------+
| SpeedLimitId | HighwayId | FromMilePost | ToMilePost | SpeedLimit | Lane |
+--------------+-----------+--------------+------------+------------+-------+
| 1 | 75N | 115 | 123 | 60 | South |
+--------------+-----------+--------------+------------+------------+-------+
MEASURE_EQUATION
+------------+----------------+-----------+---------+-------+----------------+-----------+---------+-------+------------------+
| EquationId | NorthHighwayId | NFromMile | NToMile | NGain | SouthHighwayId | SFromMile | SToMile | SGain | IsHighwayDivided |
+------------+----------------+-----------+---------+-------+----------------+-----------+---------+-------+------------------+
| 1 | 75N | 105 | 120 | 15 | 75S | 100 | 110 | 10 | No |
| 2 | 75N | 120 | 125 | 5 | 75S | 110 | 125 | 15 | Yes |
| 3 | 75N | 125 | 130 | 5 | 75S | 125 | 130 | 5 | No |
+------------+----------------+-----------+---------+-------+----------------+-----------+---------+-------+------------------+
Depending on the information in the SPEED_LIMIT and MEASURE_EQUATION tables, anywhere from one to three records will need to be inserted into a third table. There are a dozen or so different scenarios that can arise from the different values in the fields.
Using the above data, you can see that SpeedLimitId 1 is noted as being on the south side of the highway but is currently recorded on the north side, and that it spans the two equation records with ids 1 and 2. In this case it crosses two measure ranges, as a single roadway splits off and becomes a divided highway. We need to split the original record into two events, add them to a third processing table, and calculate the new measure for the southbound lane.
SPEED_LIMIT_PROCESSING
+--------------+-----------+-------+----------+--------+
| SpeedLimitId | HighwayId | LANE | FromMile | ToMile |
+--------------+-----------+-------+----------+--------+
| 1 | 75N | North | 115 | 120 |
| 1 | 75S | South | 110 | 119 |
+--------------+-----------+-------+----------+--------+
The methodology for calculating the measure on the southbound lane is as follows:
+--------------------+-----------------------------+------------------------------+
|                    | From Measure Translation    | To Measure Translation       |
+--------------------+-----------------------------+------------------------------+
| Event Measure as % | ((120 - 120)/5) * 100 = 0%  | ((123 - 120)/5) * 100 = 60%  |
| Offset Measure     | (15 * 0) / 100 = 0          | (15 * 60) / 100 = 9          |
| Translated Measure | 110 + 0 = 110               | 110 + 9 = 119                |
+--------------------+-----------------------------+------------------------------+
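Generalizing from the worked example (this is my reading of the methodology above, not something stated explicitly in the post), for an event mile post that falls inside a given MEASURE_EQUATION record the translation is:
TranslatedMile = SFromMile + SGain * (EventMile - NFromMile) / NGain
For example, for the ToMilePost of 123 against equation 2: 110 + 15 * (123 - 120) / 5 = 119.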
My concern is to do this in the most efficient way possible. The idea would be to loop through each record in the SPEED_LIMIT table, select the corresponding records in the MEASURE_EQUATION table, and then, based on the information from those two tables, insert records into a third table. In order to limit PL/SQL context switches I planned on using BULK COLLECT and FORALL statements to query the event table and to run the insert statements; this would allow me to do things in batches.
The missing component is how to get the corresponding records from the MEASURE_EQUATION table without having to run a SQL query for every iteration of the SPEED_LIMIT loop. MEASURE_EQUATION only has about 700 records, so I was wondering if there is a way I can cache it in PL/SQL and then filter it down to the appropriate records for the current SPEED_LIMIT record.
As you have probably gleaned from my question, I'm fairly new to PL/SQL and Oracle in general, so maybe I'm going about this in completely the wrong way.
