Let me describe the structure of one record in pseudo-code:
Record
    UserName
    E-mail
    Items[8]
        ItemPropertyA
        ItemPropertyB
        ItemPropertyC
        ItemPropertyD
        ItemPropertyE
There are 1-8 items in a record and exactly 5 properties in each item. So I need to store many of these records in an (Excel) table, and I want it to be human-readable if possible. The straightforward approach is to put items and properties in 8 * 5 = 40 columns, but that is difficult to review. Instead, I'm going to place a JSON array of properties in each cell (one cell per item), using as many cells in each row as needed, as sketched below. I'm just curious about other tree-to-table possibilities.
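For example, a record with two items would occupy a row like this (values invented for illustration):

UserName  E-mail             Item1                                               Item2
Bobby     bobby@example.com  ["Chocolate","Electric","Round","Silver","Hebrew"]  ["Raspberry","Steam","Trapezoid","Brass","Esperanto"]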
There is an alternative to 40 separate columns (some of which may be unused if there are fewer than 8 items in a record). You can use database-style normalized records:
SHEET 1

RecordId  UserName  Email
1         Bobby     bobby@example.com
2         Susan     sueb@example.com

SHEET 2

RecordId  ItemId  PropertyA  PropertyB  PropertyC  PropertyD  PropertyE
1         1       Chocolate  Electric   Round      Silver     Hebrew
1         2       Raspberry  Steam      Trapezoid  Brass      Esperanto
1         3       Durian     Gravity    Bezier     Titanium   Bahasa Melayu
2         1       Vanilla    Solar      Rhombus    Copper     Pashto
Of course you could normalize even further and have just a single Property column, but the above seems sufficient when you know each item has exactly the same five properties.
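For completeness, the fully normalized variant (a sketch of the same data, not something you need here) would turn each property into its own row:

RecordId  ItemId  Property  Value
1         1       A         Chocolate
1         1       B         Electric
1         1       C         Round
1         1       D         Silver
1         1       E         Hebrew
...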
Good day,
I have seen a solution here to control duplicate entries in a single column. Data validation with this custom formula works well for one column.
I would like to achieve the same effect across multiple columns, i.e. unique row entries across multiple columns. Take for example the three columns A-C below. Only when the values {1,2,1} are entered for the second time should the input be rejected.
A  B  C
1  1  1
1  2  1
1  2  2
2  2  2
1  2  1   X  (entry should be rejected)
Is there a quick way to do this using Data Validation - custom formulae?
Use this custom formula for data validation:
=INDEX(COUNTIF($A$1:$A&"×"&$B$1:$B&"×"&$C$1:$C, $A1&"×"&$B1&"×"&$C1)<2)
The "×" delimiter keeps the concatenated rows unambiguous (so, for example, {1, 21} and {12, 1} don't collide as "121"), and the INDEX wrapper forces the range concatenations to be evaluated as arrays.
I'm building a pairing system that is supposed to create a pairing between two users and schedule them for a meeting. The selection is based on a criterion that I am having a hard time figuring out: an earlier match must not have existed between the pair.
My input is a list of size n that contains email addresses. This list is supposed to be split into pairs, with the restriction that each pair hasn't occurred previously.
So for example, my list would contain a couple of user IDs:
list = {1,5,6,634,533,515,61,53}
At the same time I have a database table where the old pairs exist:
previous_pairs
---------------------
id  date                     status
1   2016-10-14 12:52:24.214  1
2   2016-10-15 12:52:24.214  2
3   2016-10-16 12:52:24.214  0
4   2016-10-17 12:52:24.214  2
previous_pair_users
---------------------
id  userid
1   1
1   5
2   634
2   553
3   515
3   61
4   53
4   1
What would be a good approach to solving this problem? My test solution right now is to pop two random users and check them for a previous match; if they have been matched before, I pop a new random user (if possible) and push one of the conflicting users back onto the list. If only two people remain they get matched regardless. This doesn't seem right to me, since I should be able to predict which matches cannot occur from the list of already existing pairs.
Do you have any idea how to get me going on building this procedure? Java 8 streams look interesting and might be a way to solve this, but I am very new to them unfortunately.
The solution here was to create a list of tuples containing the old matches, using the group_concat feature of MySQL:
SELECT group_concat(MatchProfiles.ProfileId) FROM Matches
INNER JOIN MatchProfiles ON Matches.MatchId = MatchProfiles.MatchId
GROUP BY Matches.MatchId
old_matches = ((42,52),(12,52),(19,52),(10,12))
After that I select the candidates and generate a new list of tuples using my pop_random()
new_matches = ((42,12),(19,48),(10,36))
When both lists are done I look at the intersection to find any duplicates
duplicates = list(set(new_matches) & set(old_matches))
If we have duplicates we simply run the randomizer again, for up to X attempts, before declaring it impossible.
I know this is not very efficient for a large set of numbers, but my dataset will never be that large, so I think it will be good enough.
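For what it's worth, here is a minimal Python sketch of that randomize-and-retry loop (pop_random and the hard-coded data are reconstructions from the description above, not the exact code):

import random

def pop_random(lst):
    # Remove and return a random element from the list.
    return lst.pop(random.randrange(len(lst)))

def make_pairs(users, old_matches, max_attempts=100):
    # Store pairs order-independently so (1, 5) and (5, 1) compare equal.
    old = {frozenset(p) for p in old_matches}
    for _ in range(max_attempts):
        pool = list(users)
        candidate = [frozenset((pop_random(pool), pop_random(pool)))
                     for _ in range(len(users) // 2)]
        # Accept the pairing only if it shares no pair with the old matches.
        if not set(candidate) & old:
            return [tuple(p) for p in candidate]
    return None  # give up after max_attempts

users = [1, 5, 6, 634, 533, 515, 61, 53]
old_matches = [(1, 5), (634, 553), (515, 61), (53, 1)]
print(make_pairs(users, old_matches))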
I looked around for a bit and didn't see any question quite like mine. I have a sheet with over 80k values in column A. What I need is to remove every occurrence of any duplicated value: if the value 5 appears more than once, I don't want the value at all. For example, if I have something like this:
A
1
2
2
3
4
3
I ONLY want the values 1 and 4, because they appear only once. I'd like every other value deleted, or to have only the values like 1 and 4 appear in another column.
Any help is greatly appreciated.
Work on a copy, as the following deletes records from the source data. In B1 (adjust 90000 to suit):
=COUNTIF(A$1:A$90000;A1)>1
and copy down to suit. Filter A:B, select TRUE for ColumnB and delete the selected rows, then change the filter back to select All.
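Alternatively, to leave column A intact and pull only the single-occurrence values into another column (a sketch using the same semicolon argument separator as above), put this in C1 and copy down:
=IF(COUNTIF(A$1:A$90000;A1)=1;A1;"")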
I'm curious why in the example below the Mahout recommender isn't returning a recommendation for user 1.
My input file is below. I added blank lines to enhance readability; they will need to be removed before the file is run through Mahout.
The columns in this file are:
User ID | item number | item rating
1 101 0
1 102 0
1 103 5
1 104 0

2 101 4
2 102 5
2 103 4
2 104 0

3 101 0
3 102 5
3 103 5
3 104 3
You'll note that item 103 is the only common item that all 3 users rated.
I ran:
hadoop jar C:\hdp\mahout-0.9.0.2.1.3.0-1981\core\target\mahout-core-0.9.0.2.1.3.0-1981-job.jar org.apache.mahout.cf.taste.hadoop.item.RecommenderJob -s SIMILARITY_COOCCURRENCE --input small_data_set.txt --output small_data_set_output
The Mahout recommendation output file shows:
2 [104:4.5]
3 [101:5.0]
Which I believe means:
User 2 would be recommended item 104. Since user 3 rated item 104 a 3, this may account for the 4.5 recommendation score vs. the result below…
User 3 would be recommended item 101. Since user 2 rated item 101 a 4, this may account for the slightly higher recommendation score of 5.
Is this correct?
Why isn't user 1 included in the recommendation output file? User 1 could have received a recommendation for Item 102 because user 2 and user 3 rated it. Is the data set too small?
Thanks in advance.
Several mistakes may be present in your data; the first two here will cause undefined behavior:
IDs must be contiguous non-negative integers starting at 0, so you need to map your IDs above somehow. So your-user-ID = 1 will be Mahout-user-ID = 0. The same goes for items: your-item-ID = 101 will be Mahout-item-ID = 0.
You should omit the 0 values from the input altogether if you mean that the user has expressed no preference; this makes the preference "undefined" in a sense. To do this, omit those lines entirely.
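A minimal preprocessing sketch in plain Python (not Mahout code; the hard-coded data is the example above) showing both fixes, the 0-based remapping and dropping the zero-valued "no preference" lines:

raw = [
    (1, 101, 0), (1, 102, 0), (1, 103, 5), (1, 104, 0),
    (2, 101, 4), (2, 102, 5), (2, 103, 4), (2, 104, 0),
    (3, 101, 0), (3, 102, 5), (3, 103, 5), (3, 104, 3),
]

user_ids, item_ids = {}, {}
cleaned = []
for user, item, rating in raw:
    if rating == 0:
        continue  # no preference expressed: omit the line entirely
    u = user_ids.setdefault(user, len(user_ids))  # 1 -> 0, 2 -> 1, 3 -> 2
    i = item_ids.setdefault(item, len(item_ids))  # 103 -> 0, 101 -> 1, ...
    cleaned.append((u, i, rating))

for row in cleaned:
    print("%d,%d,%d" % row)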
Always use SIMILARITY_LOGLIKELIHOOD; it is widely measured as doing significantly better than the other methods, unless you are trying to predict ratings, in which case use cosine.
If you use LLR similarity you should omit the values since they will be ignored.
There are very few uses for preference values unless you are trying to predict a user's rating for an item. The preference weights are useless in determining recommendation ranking, which is the typical thing to optimize. If you want to recommend the right things in the right order, toss the values and use LLR.
The other thing people sometimes do with values is encode a strength of preference, so 1 = a view of a product page and 5 = a product purchase. This will not work! I tried this with a large e-commerce dataset and found the recommendations got worse when product views were added, even though there was 100 times more data. Views and purchases are fundamentally different user actions with different user intent, and so can't be mixed in this way.
If you really do want to mix different actions use the new multimodal recommender based on Mahout, Spark, and Solr described on the Mahout site here: It allows cross-cooccurrence type indicator calculations so you can use user location, likes and dislikes, view and purchase. Virtually the entire user clickstream can be used. But only with cross-cooccurrence correlating one action to the canonical "best" action, the one you want to recommend.
My development team is working with OBIEE 11G to make an analysis as follows:
Discover which policyholders are in alert. An alert is defined like this: the quantity of claims of a policyholder is greater than a certain threshold. If the policyholder has at least one alert on one of its claims, then the policyholder is in alert. The problem is that those thresholds are defined for a particular key (a combination of type of client, age range, type of pathology and other attributes), and a policyholder can have many keys, with a threshold for each key, so the relevant quantity of claims varies. Something like this:
Policyholder  Key  #Claims  Threshold
ABC123        XYZ  3        4
              WQE  3        2
EFG456        ABC  1        2
The ABC123 policyholder has 6 claims in total: 3 for the key XYZ (which has a threshold of 4) and 3 for the key WQE (which has a threshold of 2). On the other hand, the EFG456 policyholder has 1 claim for the key ABC, which has a threshold of 2. So in this case the ABC123 policyholder should be in alert, because its quantity of claims for the key WQE is greater than that key's threshold.
So, in OBIEE 11G my team added two columns, one to mark the records in alert and one to mark the records not in alert, like this:
Policyholder  Key  #Claims  Threshold  Alert  notAlert
ABC123        XYZ  3        4          0      1
              WQE  3        2          1      0
EFG456        ABC  1        2          0      1
You see the problem now? OBIEE 11G does not see policyholder ABC123 as a unit and marks it both as in alert and not in alert, which is wrong. The correct info should be:
Policyholder  Key  #Claims  Threshold  Alert  notAlert
ABC123        XYZ  3        4          0      0
              WQE  3        2          1      0
EFG456        ABC  1        2          0      1
Because it doesn't matter that the policyholder did not reach the alert for key XYZ: if an alert is discovered, the policyholder's complete file is examined to resolve the alert.
Is there any way of telling this to OBIEE 11G?
Please help!
I think this is a dimensional modeling problem rather than an OBIEE one. In order to help I will make a few assumptions:
PolicyHolder and Key are separate dimensions. Although the "key" dimension contains some attributes of the policyholder, such as type of client and age group, it also combines other entities like pathology, and to me that is enough to consider it at least a mini-dimension.
The "Is in alert flag" can be modeled as a factless fact table:
It looks like you only need to know if a particular policyholder is in alert,
there is no metric associated with the event and you only need a flag that is either 0 or 1. This can be solved with a simple table that includes at least 3 columns: FK_POLICYHOLDER,FK_DATE and the flag. You already have a flag but it is included in the claims table as a calculated column, if you model this flag as a separate table you will have control of the dimensionality and granularity of the alert. See What dw model is appropriate when there's no measure?.
The metric "number of claims" has a different dimensionality than the alert flag.
I think the crux of the problem is that flags are calculated at the key level but for reporting purposes are only needed at the Policyholder level.
If you want alerts to be assigned to a PolicyHolder "as a unit" then you need a fact table that is linked to the PolicyHolder dimension and NOT LINKED to the key dimension
Concretely:
Create a separate dimension table for your "Key" entity (type of client, pathology, etc.).
Create a new factless fact table that contains alerts for a policyholder; this table should not link to the Key dimension.
Change the "Alert" column in your report; you should get that value from the flag counter of your new factless fact table.
Firstly, the ALERT columns seem redundant. It's an incredibly simple calculation that would be better done by OBI dynamically; that way you can check for policyholders in alert either on the aggregate of their keys or for each key.
If I wanted to fix that calculation in OBI I would do it with a calculated logical column in the BMM (based on other logical columns) simply evaluating CLAIMS against THRESHOLD:
CASE WHEN CLAIMS >= THRESHOLD THEN 1 ELSE 0 END
That way the flag can work at multiple levels (either for POLICYHOLDER or KEY). But it seems a very simple calculation that could just be done in the Analysis as a filter (or selection step).
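For instance (a sketch: the CASE flag is computed at the key grain and then aggregated per policyholder), setting the column's aggregation to MAX gives the "as a unit" behavior from the question:

MAX(CASE WHEN CLAIMS >= THRESHOLD THEN 1 ELSE 0 END)

This evaluates to 1 for ABC123 (key WQE trips its threshold) and 0 for EFG456.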
Even simpler though (assuming you have CLAIMS and THRESHOLD as measure columns with a SUM aggregation, and POLICYHOLDER and KEY as dimension columns) would be to ignore any sort of alert column altogether. If you don't bring KEY into the Analysis, OBI will give you each policyholder, their total claims and total threshold. You could then use selection steps or a filter in the criteria to remove those not over the threshold.