Creating a DAX peer measure

The scenario:
We are an insurance brokerage company. Our fact table is 'Claim Metrics Current'. This table has one row per ClaimSID, so COUNTROWS('Claim Metrics Current') gives the correct count of unique claims. The table also carries ClientSID and IndustrySID. The relationship between client and industry is that one industry can have multiple clients, and one client belongs to exactly one industry.
Now, let us consider a fact called ClaimLagDays, which is present in the table at the granularity of ClaimSID.
One requirement is to find the "peer" SUM(ClaimLagDays). For a particular client, this is calculated as:
SUM(ClaimLagDays) for the industry of the client being filtered, minus SUM(ClaimLagDays) for that particular client. Let's call this measure A.
Similarly, we need to calculate the "peer" claim count, which is the claim count for the industry of the client being filtered, minus the claim count for that particular client.
Let's call this measure B.
In the final calculation, we divide A by B to get the "peer" average lag days.
So the hard part is this: find the industry of the particular client being filtered for, then apply that industry as a filter to the fact table ('Claim Metrics Current') to get the total claim count (or other metric) for that industry only. Then, of course, subtract the client's figure from the industry figure to get the "peer" measure. This has to be done for each row, while keeping intact any other filters that might be applied via slicers (date, business unit, etc.).
There are also a couple of static filters which need to be considered; they live in other tables, such as Claim Type (= Indemnity/Medical) and Claim Status (= Closed).
My solution:
For measure B
I tried creating a calculated column, as:
Claim Count_WC_MO_Industry =
COUNTROWS (
    FILTER (
        FILTER (
            'Claim Metrics Current',
            RELATED ( 'Claim WC'[WC Claim Type] ) = "Medical"
                && RELATED ( 'Coverage'[Coverage Code] ) = "WC"
                && RELATED ( 'Claim Status'[Status Code] ) = "CL"
        ),
        EARLIER ( 'Claim Metrics Current'[IndustrySID] ) = 'Claim Metrics Current'[IndustrySID]
    )
)
Then I created the measure
Claim Count - WC MO Peer:=CALCULATE(SUM([Claim Count_WC_MO_Industry])/[Claim - Count])- [Claim - Count WC MO]
(I did a SUM because the tabular model doesn't let me use a calculated column as a measure without some aggregation. And that wouldn't make sense anyway, since the model wouldn't know which row to take.)
The second part of the above measure is obviously, the claim count of the particular client, with the above-mentioned filters.
Problem with my solution:
The figures are all wrong. I am not getting a client-wise or year-wise separation of the industry counts or the peer counts; I am only getting a sum of all the industry counts in the measure.
My suspicion is that this is happening because of the SUM. However, I don't really have a choice, do I, since I can't use a calculated column as a measure without some aggregation...
Please let me know if you think the information provided here is not sufficient and if you'd like me to furnish some data (dummy). I would be glad to help.

So assuming that you are filtering for the specific client via a frontend, it sounds like you just want
ClientLagDays :=
CALCULATE (
    SUM ( 'Claim Metrics Current'[Lag Days] ),
    <static filters here>
)
Just your base measure of appropriate client lag days, including your static filters.
IndustryLagDays :=
CALCULATE (
    [ClientLagDays],
    ALL ( 'Claim Metrics Current'[Client] ),
    VALUES ( 'Claim Metrics Current'[IndustrySID] )
)
This removes the filter on client but retains the filter on Industry to get the industry-wide total of lag days.
PeerLagDays := [IndustryLagDays] - [ClientLagDays]
Straightforward enough.
And then repeat the same pattern for the claim counts, and finally take [PeerLagDays] / [PeerClaimCount] for your [Average Peer Lag Days], as sketched below.
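Spelled out for the claim-count side (a sketch only, reusing the placeholder names above -- substitute your real column names and the same static filters; DIVIDE just guards against dividing by zero):
ClientClaimCount :=
CALCULATE (
    COUNTROWS ( 'Claim Metrics Current' ),
    <static filters here>
)
IndustryClaimCount :=
CALCULATE (
    [ClientClaimCount],
    ALL ( 'Claim Metrics Current'[Client] ),
    VALUES ( 'Claim Metrics Current'[IndustrySID] )
)
PeerClaimCount := [IndustryClaimCount] - [ClientClaimCount]
Average Peer Lag Days := DIVIDE ( [PeerLagDays], [PeerClaimCount] )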

Related

PBCS: Custom rules to aggregate Period members

I have an input account (never share) in which the user types a parameter for each month. On aggregate members of the Period dimension (for example YearTotal), I want the value to be the weighted average derived from two other accounts representing the cost and the quantity.
With the account properties I can roll the account up as an addition or as a simple average across the months, but obviously I get wrong data in both cases.
Does anyone know a solution to this?
Thanks a lot,
Daniele
Not sure exactly what you are asking. But I assume the following in my answer:
data entry for user on account Parameter (from the context, I think it is a price)
data entry for user on level0 Period, i.e. the months
you want Essbase to show the Parameter value as typed in at the month level (Jan .. Dec)
you want Essbase to show Costs / Quantity for Q1/2/3/4 and the YearTotal
the Account and Period dimension are of density: dense
You did not specify whether you are also reporting on YTD values and how you have implemented this in Essbase. I assume you are, but the preferred solution depends on that implementation, so I take the "safe" solution here:
solution 1
This is the most straightforward solution:
Implement a "parameter_inp" account on which the user keys in the data. Set the account to "never consolidate".
Create a new "parameter" account, dynamic calc, and give it the formula "Costs/Quantity;".
Refer to "parameter" in your reports, and to "parameter_inp" for user entry
solution 2 - alternative
If you have a lot of these parameters, you'll end up with a system that is unpleasant for data entry and reporting for the end users. To solve this using data entry and reporting on the same "parameter" account, you need to tune your implementation of the Quarter and YearTotal calculations, including the YTD calculation. I see no way of getting this correct if you are using DTS (Dynamic Time Series).
This is the way to go forward:
Make use of a new dimension called "View", data entry on PER (= periodic), additional dynamic calc member "YTD", density: dense, place it after Period (so Account, Period, View)
Add a UDA to the "parameter", for example "WA"
Set custom dynamic calculations on the Quarter and YearTotal level, something like: IF (@ISUDA(..., "WA")) THEN ... ELSEIF <check on FLOW/BALANCE> ... (logic for the regular aggregation of FLOW and BALANCE items, hereby overriding Essbase's native time logic); a rough sketch follows right after this list
Set custom dynamic calculations for YTD (overriding DTS), and make an exception for UDA "WA"
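As a very rough sketch only (the "Account" dimension name and the "Costs"/"Quantity" accounts are assumptions, and the ELSE branch stands in for the regular FLOW/BALANCE aggregation mentioned above), the YearTotal member formula could look along these lines:
IF (@ISUDA("Account", "WA"))
    "Costs" / "Quantity";
ELSE
    /* regular time aggregation; extend with the FLOW/BALANCE distinction as needed */
    "Q1" + "Q2" + "Q3" + "Q4";
ENDIF;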

Tableau - Aggregate and non aggregate error for divide forumla

I used COUNT([CUST_ID]) as a measure to come up with [Total No of Customer]. When I created a new measure for [Average Profit per customer] with the formula [Total Profit] / [Total No of Customer], Tableau raised the aggregate / non-aggregate mismatch error.
DB level:
Cust ID | Profit
123     | 100
234     | 500
345     | 350
567     | 505
You must be looking for the AVG aggregate function.
Select cust_id, avg(profit)
From your_table
Group by cust_id;
Cheers!!
In your database table, you appear to have one data row per customer. Customer ID is serving as a unique primary key. The level of detail (or granularity) of the database table is the customer.
Given that, the simplest solution to your question is to display AVG([Profit]) -- without having [Cust ID] in the view (i.e. not on any shelf)
If the assumptions mentioned above are not correct, then you may need to employ other methods depending on how you define your question. I suggest making sure you understand what COUNT() actually does compared to COUNTD() -- the behavior is not what people tend to assume. LOD calculations may also prove useful. All of this is described in the online help.
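If the table does turn out to hold several rows per customer, a calculated field along these lines (a sketch; COUNTD deduplicates the customer IDs) keeps both sides of the ratio aggregated:
SUM([Profit]) / COUNTD([Cust ID])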
Put the calculations directly in the calculated field as:
SUM([Profit])/COUNT([CUST_ID])
This way both sides of the division are aggregates, so the aggregate/non-aggregate mismatch goes away.
If you want to show average profit using a key like [CUST_ID], you can use an LOD expression:
{FIXED [CUST_ID]: AVG([Profit])}

how to improve Neo4J performance in creating edges?

i'm building a traffic schedule application using Neo4J, NodeJS and GTFS-data; currently, i'm trying to get
things working for the traffic on a single day on the Berlin subway network. these are the grand totals
i've collected so far:
10 routes
211 stops
4096 trips
83322 stoptimes
to put it simply, GTFS (General Transit Feed Specification) has the concept of a stoptime which denotes the
event of a given train or bus stopping for passengers to board and alight. stoptimes happen on a trip,
which is a series of stoptimes, they happen on a specific date and time, and they happen on a given
stop for a given route (or 'line') in a transit network. so there's a lot of references here.
the problem i'm running into is the amount of data and the time it takes to build the database. in order
to speed up things, i've already (1) cut down the data to a single day, (2) deleted the database files
and have the server create a fresh one (very effective!), (3) searched a lot to get better queries. alas,
with the figures as given above, it still takes 30~50 minutes to get all the edges of the graph.
these are the indexes i'm building:
CREATE CONSTRAINT ON (n:trip) ASSERT n.id IS UNIQUE;
CREATE CONSTRAINT ON (n:stop) ASSERT n.id IS UNIQUE;
CREATE CONSTRAINT ON (n:route) ASSERT n.id IS UNIQUE;
CREATE CONSTRAINT ON (n:stoptime) ASSERT n.id IS UNIQUE;
CREATE INDEX ON :trip(`route-id`);
CREATE INDEX ON :stop(`name`);
CREATE INDEX ON :stoptime(`trip-id`);
CREATE INDEX ON :stoptime(`stop-id`);
CREATE INDEX ON :route(`name`);
i'd guess the unique primary keys should be most important.
and here are the queries that take up like 80% of the running time (with 10% that are unrelated to Neo4J,
and 10% needed to feed the node data using plain HTTP post requests):
MATCH (trip:`trip`), (route:`route`)
WHERE trip.`route-id` = route.id
CREATE UNIQUE (trip)-[:`trip/route` {`~label`: 'trip/route'}]-(route);
MATCH (stoptime:`stoptime`), (trip:`trip`)
WHERE stoptime.`trip-id` = trip.id
CREATE UNIQUE (trip)-[:`trip/stoptime` {`~label`: 'trip/stoptime'}]-(stoptime);
MATCH (stoptime:`stoptime`), (stop:`stop`)
WHERE stoptime.`stop-id` = stop.id
CREATE UNIQUE (stop)-[:`stop/stoptime` {`~label`: 'stop/stoptime'}]-(stoptime);
MATCH (a:stoptime), (b:stoptime)
WHERE a.`trip-id` = b.`trip-id`
AND ( a.idx + 1 = b.idx OR a.idx - 1 = b.idx )
CREATE UNIQUE (a)-[:linked]-(b);
MATCH (stop1:stop)-->(a:stoptime)-[:next]->(b:stoptime)-->(stop2:stop)
CREATE UNIQUE (stop1)-[:distance {`~label`: 'distance', value: 0}]-(stop2);
the first query is still in the range of some minutes which i find longish given that there are only
thousands (not hundreds of thousands or millions) of trips in the database. the subsequent queries that
involve stoptimes take several tens of minutes each on my desktop machine.
(i've also calculated whether the schedule really contains 83322 stoptimes each day, and yes, it's plausible:
in Berlin, subway trains run on 10 lines for 20 hours a day with 6 or 12 trips per hour, and there are 173
subway stations: 10 lines x 2 directions x 17.3 stops per line x 20 hours x 9 trips per hour gives 62280,
close enough. there are some faulty? / double / extra stop nodes in the data (211
stops instead of 173), but those are few.)
frankly, if i don't find a way to speed up things at least tenfold (rather more), it'll make little sense to use Neo4J
for this project. just in order to cover the single city of Berlin many, many more stoptimes have to be added,
as the subway is just a tiny fraction of the overall public transport here (e.g. bus and tramway have like
170 routes with 7,000 stops, so expect around 7,000,000 stoptimes each day).
Update: the above edge-creation queries, which i perform one by one, have now been running for over an hour and are not yet finished, meaning that, if things scale in a linear fashion, the time needed to feed the Berlin public transport data for a single day would be something like a week. the code therefore currently performs several orders of magnitude too slowly to be viable.
Update: #MichaelHunger's solution did work; see my response below.
I just imported 12M nodes and 12M rels into Neo4j in 10 minutes using LOAD CSV.
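(For comparison, a LOAD CSV import looks roughly like this; the file name and column headers here are invented, not taken from your GTFS files:)
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/path/to/stop_times.csv" AS row
CREATE (:stoptime {id: row.id, `trip-id`: row.trip_id, `stop-id`: row.stop_id, idx: toInt(row.idx)});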
You should see your issues when you run profiling on your queries in the shell.
Prefix your query with profile and look at the profile output to see whether it mentions using the index or rather just a label scan.
Do you use parameters for your insert queries, so that Neo4j can re-use the built queries?
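Something like this, as a sketch (the parameter names are made up), so that the query text stays constant and only the values change per request:
CREATE (t:`trip` {id: {id}, `route-id`: {routeId}})
with the actual values passed in the parameters map of the HTTP request instead of being concatenated into the query string.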
For queries like this:
MATCH (trip:`trip`), (route:`route`)
WHERE trip.`route-id` = route.id
CREATE UNIQUE (trip)-[:`trip/route` {`~label`: 'trip/route'}]-(route);
It will very probably not use your index.
Can you perhaps point to your data source? We can convert it into CSV if it isn't already, and then import it even more quickly.
Perhaps we can create a graph gist for your model?
I would rather use:
MATCH (route:`route`)
MATCH (trip:`trip` {`route-id`: route.id})
CREATE (trip)-[:`trip/route` {`~label`: 'trip/route'}]-(route);
For your initial import you also don't need CREATE UNIQUE, as you match every trip only once.
And I'm not sure what your "~label" is good for?
Similar for your other queries.
As the data is public it would be cool to work together on this.
Something I'd love to hear more about is how you plan to express your query use-cases.
I had a really great discussion about timetables for public transport with training attendees last time in Leipzig. You can also email me on michael at neo4j.org
Also perhaps you want to check out these links:
Tramchester
http://www.thoughtworks.com/de/insights/blog/transforming-travel-and-transport-industry-one-graph-time
http://de.slideshare.net/neo4j/graph-connect-v5
https://www.youtube.com/watch?v=AhvECxOhEX0
London Tube Graph
http://blog.bruggen.com/2013/11/meet-this-tubular-graph.html
http://www.markhneedham.com/blog/2014/03/03/neo4j-2-1-0-m01-load-csv-with-rik-van-bruggens-tube-graph/
http://www.markhneedham.com/blog/2014/02/13/neo4j-value-in-relationships-but-value-in-nodes-too/
detailed solution
i'm happy to report that #MichaelHunger's solution works like a charm. i modified the edge-building queries
from the question into the shapes below, which keep to the suggested query outline:
MATCH (route:`route`)
MATCH (trip:`trip` {`route-id`: route.id})
CREATE (trip)-[:`trip/route` {`~label`: 'trip/route'}]->(route)
MATCH (trip:`trip`)
MATCH (stoptime:`stoptime` {`trip-id`: trip.id})
CREATE (trip)-[:`trip/stoptime` {`~label`: 'trip/stoptime'}]->(stoptime)
MATCH (stop:`stop`)
MATCH (stoptime:`stoptime` {`stop-id`: stop.id})
CREATE (stop)-[:`stop/stoptime` {`~label`: 'stop/stoptime'}]->(stoptime)
MATCH (a:stoptime)
MATCH (b:stoptime {`trip-id`: a.`trip-id`, `idx`: a.idx + 1})
CREATE (a)-[:linked {`~label`: 'linked'}]->(b)
MATCH (stop1:stop)--(a:stoptime)-[:linked]-(b:stoptime)--(stop2:stop)
CREATE (stop1)-[:distance {`~label`: 'distance', value: 0}]->(stop2)
as can be seen, the trick here is to give each participating node a MATCH statement of its own and to
move the WHERE clause inside the second match condition; presumably, as mentioned above, Neo4J can only
then take advantage of its indexes.
with these queries in place, the process of reading in nodes and building edges takes roughly 13 minutes;
of these 13 minutes, fetching the data from an external source, building the node representations and issuing CREATE queries
takes about 10 minutes, and building almost a half million edges between them is done in about 3 minutes.
right now none of my queries (especially the node CREATE statements and the updates for stop distances) use
parametrized queries, which is another potential source of performance gains.
as for the ~label field and also the question why i use dashes in names where underscores would be more
convenient, well, that's a long story about what i perceive as good and practical naming that sometimes clashes
with the syntax of some languages (of most languages, should i say). but that's boring detail. maybe more
interesting is the question: why is there a ~label attribute that repeats what the element label says (what
you write after the colon)? well, it's an attempt to comply with Neo4J conventions (we use labels here), take
advantage of the 'identifier, colon, label' syntax of cypher queries, AND to make it so the labels do
appear in the returned values.
mind you, labels are central to graph thinking the Neo4J way, but in query results, labels are
conspicuously absent. when you include a relationship that is marked with nothing but a label in your result set,
then that edge will arrive as an empty
object, telling you only that there is something but not what. so i decided to duplicate the
label on each single node and each single edge. not an optimal solution, but at least now i get an informative
graph display in the Neo4J browser.
as for how to express query use-cases, that's an active field of research for me right now. i guess it will
all start with a 'field of interest', like 'show all Berlin subway stops', or 'all buses departing within
the next 15 minutes from a bus stop near me'. the data already allows one to see which stops are directly connected
by a subway line, their geographical distance, what services are present and what routes they take. the idea
is to grab the data and present it in novel, usable and beautiful ways. 9292 is quite
close to what i imagine; what's missing are graphical representations of spatial and temporal relationships.

How to identify a new pattern in a URL with a machine learning algorithm (Text mining)

I am trying to identify new patterns after analyzing a number of URLs. So let's say, I am investigating the hypothetical website Yoohle.com and their URLs have the following structure.
domain = yoohle.com
q= search phrase
lan= language used
pr= partner_id
br= browser_id
so a sample url will look like this
www.yoohle.com/test_folder/test_page?q=hello+world&lan=en&pr=stackoverflow&br=chrome
If I am investigating the web traffic of this website and seeing abnormal increase month over month, I would like to find out what's causing this. In this example I can just parse out the URL and look at the pr= value since it will tell me if there is a new partnership (maybe stackoverflow is going to be powered by yoohle.com and that drives the increase etc.)
The question is, how can I build something robust that can compare 2 (or more) months and tell me exactly what's driving the increase. I want to get something like, "we are seeing an increase and it is driven by the following pattern"
www.yoohle.com/test_folder/test_page%pr=stackoverflow%
The tricky part is that, unlike in this example, you do not know anything about what the tokens mean, since I will not know which token stands for partner_id. Another issue is that looking token by token would be misleading, because lan=en will also go up with a new partner, assuming the new users still have English as their language.
My idea is to analyze the tokens by looking at all their combinations, but that is very costly (4! in this example and probably 10+! for other websites). Also, analyzing the tokens by themselves is not going to solve the problem, since I still need to analyze the values of the tokens.
I tried k-means clustering and the apriori algorithm, and did some research on URL/text mining, but could not get what I want. Any ideas about how to approach building such an algorithm would be appreciated.
Imagine that you are seeing realtime data, so we are talking about analyzing around 100K URLs in a given month.
I would go the following way. You can create the following table:
URL
time
time_month -- time rounded to month, for demonstration purpose
q_bol -- boolean flag whether the q (search phrase) parameter was used
q -- search phrase parameter value
lan -- language parameter value
lan_bol -- boolean flag whether language parameter was used
pr -- partner parameter value
pr_bol -- boolean flag whether partner parameter was used
br -- browser parameter value
br_bol -- boolean flag whether the browser parameter was used
Now, you can write some query.
with t as (
    select
        time_month,
        q_bol, lan_bol, pr_bol, br_bol,
        count(*) as cnt
    from
        urldata
    where
        time_month >= '2013-02-01'::date and time_month < '2013-04-01'::date -- last two months of data
    group by
        time_month, q_bol, lan_bol, pr_bol, br_bol
)
, u as (
    select
        q_bol, lan_bol, pr_bol, br_bol,
        coalesce(t2.cnt, 0) - coalesce(t1.cnt, 0) as abs_change,              -- change in pattern MoM
        case when t1.cnt is null then 0
             else coalesce(t2.cnt, 0)::float / t1.cnt end as relchange        -- relative change
    from
        (select * from t where time_month = '2013-02-01'::date) t1            -- previous month
        full outer join
        (select * from t where time_month = '2013-03-01'::date) t2            -- current month
        using (q_bol, lan_bol, pr_bol, br_bol)
)
select * from u where abs_change > 5000 or relchange > 3
The query above gives you the parameter patterns for which there is an absolute change of more than 5000 month over month, or more than a 300% increase month over month. If you can use GROUP BY ROLLUP (or a similar grouping construct) in your SQL system, it would also give you higher-level aggregations (combinations of three parameters, two parameters, one parameter).
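For illustration (a sketch; support for these grouping constructs varies by database), a CUBE grouping produces counts for every combination of the flag columns in one pass:
select q_bol, lan_bol, pr_bol, br_bol, count(*) as cnt
from urldata
where time_month = '2013-03-01'::date
group by cube (q_bol, lan_bol, pr_bol, br_bol);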
You can do pretty much the same with the values of the parameters. Because you do not know in advance which tokens will be present, you can parse each URL into the following structure of tables:
-- urls
id_url
url
time
-- parameters
id_url
token
value
Then you will need to rewrite the query above in some way, e.g. you can use the array aggregation function in PostgreSQL, array_agg().
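As a rough sketch of that rewrite (column names follow the table layout above; the month bucketing via date_trunc is an assumption), the per-URL tokens can be collapsed into a single pattern and then counted per month:
with url_patterns as (
    select
        u.id_url,
        date_trunc('month', u.time) as time_month,
        array_agg(p.token order by p.token) as token_pattern
    from urls u
    join parameters p on p.id_url = u.id_url
    group by u.id_url, date_trunc('month', u.time)
)
select time_month, token_pattern, count(*) as cnt
from url_patterns
group by time_month, token_pattern
order by cnt desc;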

Efficient set operations in mapreduce

I have inherited a mapreduce codebase which mainly calculates the number of unique user IDs seen over time for different ads. To me it doesn't look like it is being done very efficiently, and I would like to know if anyone has any tips or suggestions on how to do this kind of calculation as efficiently as possible in mapreduce.
We use Hadoop, but I'll give an example in pseudocode, without all the cruft:
map(key, value):
    ad_id = ...   // extract from value
    user_id = ... // extract from value
    collect(ad_id, user_id)

reduce(ad_id, user_ids):
    unique_user_ids = new Set()
    foreach (user_id in user_ids):
        unique_user_ids.add(user_id)
    collect(ad_id, unique_user_ids.size)
It's not much code, and it's not very hard to understand, but it's not very efficient. Every day we get more data, and so every day we need to look at all the ad impressions from the beginning to calculate the number of unique user IDs for that ad, so each day it takes longer, and uses more memory. Moreover, without having actually profiled the code (not sure how to do that in Hadoop) I'm pretty certain that almost all of the work is in creating the set of unique IDs. It eats enormous amounts of memory too.
I've experimented with non-mapreduce solutions, and have gotten much better performance (but the question there is how to scale it in the same way that I can scale with Hadoop), but it feels like there should be a better way of doing it in mapreduce than the code I have. It must be a common enough problem for others to have solved.
How do you implement the counting of unique IDs in an efficient manner using mapreduce?
The problem is that the code you inherited was written with the mindset "I'll determine the unique set myself" instead of the "let's leverage the framework to do it for me".
I would do something like this (pseudocode) instead:
// phase 1: one record per (ad_id, user_id) pair
map(key, value):
    ad_id = ...   // extract from value
    user_id = ... // extract from value
    collect(ad_id & user_id, unused dummy value)

reduce(ad_id & user_id, unused dummy values):
    output(ad_id, 1)    // one unique user id for this ad

// phase 2: count the 1's per ad
map(ad_id, 1):          // identity mapper
    collect(ad_id, 1)

reduce(ad_id, set of a lot of '1's):
    summarize
    output(ad_id, unique_user_ids)
Niels' solution is good, but for an approximate alternative that is closer to the original code and uses only one map reduce phase, just replace the set with a bloom filter. The membership queries in a bloom filter have a small probability of error, but the size estimates are very accurate.
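To make that concrete, here is a toy sketch (plain Python, with sizes and hash counts picked arbitrarily, and keeping the collect() pseudo-function from the question) of a reducer that swaps the exact set for a Bloom filter:

import hashlib

class BloomFilter:
    # Fixed-size bit array with k hash functions; lookups can give false
    # positives but never false negatives.
    def __init__(self, size_bits=8 * 1024 * 1024, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes bit positions from salted MD5 digests of the item.
        for salt in range(self.num_hashes):
            digest = hashlib.md5(("%d:%s" % (salt, item)).encode("utf-8")).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        # Insert item; return True if it was (probably) seen before.
        seen = True
        for pos in self._positions(item):
            byte, bit = divmod(pos, 8)
            if not (self.bits[byte] >> bit) & 1:
                seen = False
                self.bits[byte] |= 1 << bit
        return seen

def reduce(ad_id, user_ids):
    bloom = BloomFilter()
    approx_unique = 0
    for user_id in user_ids:
        if not bloom.add(user_id):      # new, as far as the filter can tell
            approx_unique += 1
    collect(ad_id, approx_unique)       # collect() as in the question's pseudocode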
