Edit: adding a concrete explanation of my problem (as correctly deduced by Billiska):
"Set A is the set of users. set B is the set of products. each user rates one or more products. the rating is 1 to 10. you want to deduce for each user, who is the other user that has the most similar taste to him."
"The other half is choosing how exactly do you want to rank similarity of A-elements." - this is also part of my problem. I feel that users who have rated similarly across the most products have the closed affinity, but at the same time I want to avoid user1 and user2 with many mediocre matches being matched ahead of user1 and user3 who have just a few very good matches (perhaps I need a non-linear score).
Disclaimer: I have never used a graph database.
I have two sets of data, A and B. Each A has a relationship with zero to many Bs, and each relationship has a fixed value.
e.g.
A1--5-->B10
A1--1-->B1000
So my initial thought was "Yay, that's a graph, time to learn about graph databases!" But before I get too carried away... the only reason for doing this is so that I can answer the following question:
For each A, find the set of As that are most similar based on their weights, where I want to take into consideration:
the difference in weights (assuming 1 to 10), so that 10 and 10 scores higher than 10 and 1; but then I have an issue with how to handle the case where there is no pairing (or do I? I am just not sure)
the number of vertices (ignoring weights) that two sets have in common. The intention is to rank two As that share many Bs higher than two As that share just a single B.
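To make those two considerations concrete, here is a toy score, purely for illustration (the function name and the square-root weighting are just one guess at what a non-linear score might look like):

def affinity(ratings_a, ratings_b):
    # ratings_* map a B-id to that user's 1-10 weight
    shared = set(ratings_a) & set(ratings_b)
    if not shared:
        return 0.0
    # per-item closeness: 1.0 for identical weights, 0.0 for a 9-point gap
    closeness = sum(1.0 - abs(ratings_a[b] - ratings_b[b]) / 9.0 for b in shared)
    avg_quality = closeness / len(shared)
    # square-root credit for overlap keeps many mediocre matches from
    # automatically beating a few excellent ones
    return avg_quality * len(shared) ** 0.5

print(affinity({'B10': 5, 'B1000': 1}, {'B10': 4, 'B7': 9}))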
What would the best approach be to doing this?
(Supplementary, as I realise this may count as a second question): How would that approach change if the set of As were in the millions, the set of Bs in the hundreds of thousands, and I needed real-time answers?
Not a complete answer, and I don't fully understand the technique either, but I know it's very relevant.
View the data as a matrix: the rows correspond to set A, the columns correspond to set B, and the entries are the weights.
Then it's a matrix with some missing values.
One technique used in recommender systems (under the category of collaborative filtering) is low-rank approximation.
It's based on the assumption that the user-product rating matrix usually has low rank.
In a rough sense, the matrix has low rank if many users' rows can be expressed as linear combinations of other users' rows.
I hope this gives you a start for further reading.
Yes, as you can see on the low-rank approximation wiki page, the technique can also be used to guess the missing entries (the missing ratings). I know it's a different problem, but it's related.
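This is not the real machinery (proper methods handle the missing entries directly, e.g. alternating least squares), but here is a toy sketch of the rank-k idea using a mean-filled matrix and a truncated SVD; the numbers are made up:

import numpy as np

# toy user-by-product rating matrix, np.nan marks a missing rating
R = np.array([[5., 4., np.nan, 1.],
              [4., np.nan, 5., 1.],
              [1., 2., 1., np.nan],
              [np.nan, 1., 2., 5.]])

# fill missing entries with each user's mean, then take a truncated SVD
filled = np.where(np.isnan(R), np.nanmean(R, axis=1, keepdims=True), R)
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2                                           # keep only the top-k "taste" factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation

print(np.round(R_hat, 1))                       # includes estimates for the missing ratings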
Related
I have a group of about 75 people. Each user has liked or disliked the other 74 users. These people need to be divided into about 15 groups of various sizes (4 to 8 people). They need to be grouped so that the groups consist only of people who all liked each other, or as close to that as possible.
I'm not sure what the best algorithm is to tackle this problem. Any pointers or pseudo code much appreciated!
This isn't formed quite well enough to suggest a particular algorithm. I suggest clustering and "clique" algorithms, but you'll still need to define your "best grouping" metric. "as much as possible", in the face of trade-offs and undefined desires, is meaningless. Your clustering algorithm will need this metric to form your groups.
Data representation is simple: you need a directed graph. An edge from A to B means that A likes B; lack of an edge means A doesn't like B. That will encode the "likes" information in a form tractable to your algorithm. You have 75 nodes and one edge for every "like".
Start by researching clique algorithms; a "clique" is a set in which every member likes every other member. These will likely form the basis of your clustering.
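A minimal sketch of that representation step, assuming networkx and reducing the directed "likes" to mutual edges before enumerating maximal cliques (the data here is made up):

import networkx as nx

# likes[a] = set of people a likes (hypothetical input)
likes = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B'}, 'D': {'A'}}

# cliques are usually taken over mutual liking, so keep only reciprocal edges
G = nx.Graph()
G.add_nodes_from(likes)
G.add_edges_from((a, b) for a, liked in likes.items()
                 for b in liked if a in likes.get(b, set()))

# maximal cliques are candidate seed groups for the clustering step
print(list(nx.find_cliques(G)))   # e.g. [['A', 'B', 'C'], ['D']]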
Note, however, that you have to define your trade-offs. For instance, consider the case of 13 nodes consisting of two distinct cliques of 4 and 8 people, plus one person who likes one member of the 8-clique. There are no other "likes" in the graph.
How do you place that 13th person? Do you split the 8-clique and add them to the group with the person they like? If so, do you split off 3 or 4 people from the 8? Is it fair to break 15 or 16 "likes" to put that person with the one person they like, who doesn't like them back? Is it better to add the 13th person to the clique of 4, with whom there are no likes in either direction?
Your eval function must return a well-ordered metric for all of these situations. It will need to support adding to a group, splitting a large group, etc.
It sounds like a clustering problem.
Each user is a node. If two users liked each other, there is an edge between their nodes.
If the users disliked each other, or one liked the other but not the other way around, then there is no edge between those nodes.
Once you process the like information into a graph, you will get a graph (some nodes may be isolated if a user shares no mutual likes with anyone). Now the question becomes how to cut that graph into clusters of 4-8 connected nodes, which is a well studied problem with a lot of possible algorithms:
https://www.google.com/search?q=divide+connected+graph+into+clusters
If you want to differentiate between the case where two people dislike each other and the case where one person likes the other but not vice versa, then you can also introduce weights on the edges - each like is +1 and each dislike is -1. Then the question becomes one of partitioning a weighted graph.
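A small sketch of building that weighted graph, again assuming networkx and made-up data:

import networkx as nx

# opinions[(a, b)] = True if a likes b, False if a dislikes b
opinions = {('A', 'B'): True, ('B', 'A'): True,
            ('A', 'C'): True, ('C', 'A'): False,
            ('B', 'C'): False, ('C', 'B'): False}

G = nx.Graph()
for (a, b), liked in opinions.items():
    w = 1 if liked else -1
    if G.has_edge(a, b):
        G[a][b]['weight'] += w   # mutual like = +2, mixed = 0, mutual dislike = -2
    else:
        G.add_edge(a, b, weight=w)

print(nx.get_edge_attributes(G, 'weight'))   # input for a weighted partitioning step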
I am a computer scientist working on a problem that requires some statistical measures, though (not being very well versed in statistics) I am not quite sure what statistics to use.
Overview:
I have a set of questions (from StackExchange sites, of course) and with this data, I am exploring algorithms that will find similar questions to one I provide. Yes, StackExchange sites already perform this function, as do many other Q&A sites. What I am trying to do is analyze the methods and algorithms that people employ to accomplish this task to see which methods perform best. My problem is finding appropriate statistical measures to quantitatively determine "which methods perform best."
The Data:
I have a set of StackExchange questions, each of which is saved like this: {'questionID':"...", 'questionText':"..."}. For each question, I have a set of other questions either linked to it or from it. It is common practice for question answer-ers on StackExchange sites to add links to other similar posts in their answers, i.e. "Have you read this post [insert link to post here] by so-and-so? They're solving a similar problem..." I am considering these linked questions to be 'similar' to one another.
More concretely, let's say we have question A.
Question A has a collection of linked questions {B, C, D}. So A_linked = {B, C, D}.
My intuition tells me that the transitive property does not apply here. That is, just because A is similar to B, and A is similar to C, I cannot confirm that B is similar to C. (Or can I?)
However, I can confidently say that if A is similar to B, then B is similar to A.
So, to simplify these relationships, I will create a set of similar pairs: {A, B}, {A, C}, {A, D}
These pairs will serve as a ground truth of sorts. These are questions we know are similar to one another, so their similarity confidence values equals 1. So similarityConfidence({A,B}) = 1
Something to note about this set-up is that we know only a few similar questions for each question in our dataset. What we don't know is whether some other question E is also similar to A. It might be similar, it might not be similar, we don't know. So our 'ground truth' is really only some of the truth.
The algorithm:
A simplified pseudocode version of the algorithm is this:
for q in questions:  # remember q = {'questionID': "...", 'questionText': "..."}
    similarities = {}  # will hold a mapping from questionID to similarity to q
    q_Vector = vectorize(q)  # create a vector from question text (each word is a dimension, value is unimportant)
    for o in questions:
        if o['questionID'] == q['questionID']:
            continue  # only compare q against other questions
        o_Vector = vectorize(o)
        # values will be in the range of 1.0 = identical to 0.0 = not similar at all
        similarities[o['questionID']] = cosineSimilarity(q_Vector, o_Vector)
    # now what???
So now I have a complete mapping of cosine similarity scores between q and every other question in my dataset. My ultimate goal is to run this code for many variations of the vectorize() function (each of which will return a slightly different vector) and determine which variation performs best in terms of cosine scores.
The Problem:
So here lies my question. Now what? How do I quantitatively measure how good these cosine scores are?
These are some ideas of measurements I've brainstormed (though I feel like they're unrefined, incomplete):
Some sort of error function similar to Root Mean Square Error (RMSE). So for each document in the ground-truth similarities list, accumulate the squared error (with error roughly defined as 1-similarities[questionID]). We would then divide that accumulation by the total number of similar pairs *2 (since we will consider a->b as well as b->a). Finally, we'd take the square root of this error.
This requires some thought, since these values may need to be normalized. Though all variations of vectorize() will produce cosine scores in the range of 0 to 1, the cosine scores from two vectorize() functions may not compare to one another. vectorize_1() might have generally high cosine scores for each question, so a score of .5 might be a very low score. Alternatively, vectorize_2() might have generally low cosine scores for each question, so a .5 might be a very high score. I need to account for this variation somehow.
Also, I proposed an error function of 1-similarities[questionID]. I chose 1 because we know that the two questions are similar, therefore our similarity confidence is 1. However, a cosine similarity score of 1 means the two questions are identical. We are not claiming that our 'linked' questions are identical, merely that they are similar. Is this an issue?
We can find the recall (number of similar documents returned/number of similar documents), so long as we set a threshold for which questions we return as 'similar' and which we do not.
Although, for the reasons mentioned above, this shouldn't be a predefined threshold like similarity[documentID] > 0.7, because each vectorize() function may return different values.
We could find recall @ k, where we only analyze the top k posts.
This could be problematic though, because we don't have the full ground truth. If we set k=5, and only 1 document (B) of the 3 documents we knew to be relevant ({B,C,D}) were in the top 5, we do not know whether the other 4 top documents are actually equally or more similar to A than the 3 we knew about, but no one linked them.
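For example, a minimal recall@k sketch (ranking-based, so it sidesteps the score-scale issue above; the IDs are just illustrative):

def recall_at_k(ranked_ids, relevant_ids, k):
    # ranked_ids: question IDs sorted by descending cosine similarity to q
    # relevant_ids: the linked (ground-truth) questions for q
    top_k = set(ranked_ids[:k])
    return len(top_k & set(relevant_ids)) / len(relevant_ids)

# hypothetical: A_linked = {B, C, D}, and only B made it into the top 5
print(recall_at_k(['B', 'E', 'F', 'G', 'H'], {'B', 'C', 'D'}, k=5))   # 0.333...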
Do you have any other ideas? How can I quantitatively measure which vectorize() function performs best?
First note that this question is highly relevant to the Information Retrieval problem of similarity and near duplicate detection.
As far as I see it, your problem can be split to two problems:
Determining ground truth: in many 'competitions', where the ground truth is unclear, the way to determine which are the relevant documents is by taking documents which were returned by X% of the candidates.
Choosing the best candidate: first note that comparing the raw scores of two different algorithms is usually meaningless. The scales could be completely different, so comparing them directly is pointless. In order to compare two algorithms, you should use the ranking each algorithm produces - how each algorithm ranks the documents, and how far that ranking is from the ground truth.
A naive way to do it is simply using precision and recall, which you can combine with the f-measure. The problem is that a document ranked 10th then counts as much as a document ranked 1st.
A better way to do it is NDCG - this is the most common way to compare algorithms in most articles I have encountered, and it is widely used in the main IR conferences: WWW, SIGIR. NDCG gives a score to a ranking, assigning high importance to documents that were ranked near the top and reduced importance to documents ranked lower. Another common variation is NDCG@k, where NDCG is computed only over the top k documents for each query.
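A minimal sketch of NDCG@k with binary relevance (a retrieved question counts as relevant only if it appears in the linked ground-truth set); the function names are mine, not a standard library:

import math

def dcg(gains):
    # position 1 is discounted by log2(2), position 2 by log2(3), and so on
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg_at_k(ranked_ids, relevant_ids, k):
    gains = [1.0 if qid in relevant_ids else 0.0 for qid in ranked_ids[:k]]
    ideal = [1.0] * min(k, len(relevant_ids))   # best achievable ranking
    return dcg(gains) / dcg(ideal) if ideal else 0.0

print(ndcg_at_k(['B', 'E', 'C', 'F', 'G'], {'B', 'C', 'D'}, k=5))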
Hope this background and advice helps.
I am trying to come up with an algorithm for the following problem.
There is a set of N objects with M different variations of each object. The goal is to find which variation is the best for each object based on feedback from different users.
At the end, the users will be placed in a category to determine which category prefers which variation.
It is required that at most two variations of an object are placed side by side.
The problem with this is that if M is large then the number of possible combinations become too large and the user may become disinterested and potentially skew the results.
The Elo algorithm/score can be used once I know the order of selection from the user, as discussed in this post:
Comparison-based ranking algorithm
Question:
Is there an algorithm that can reduce the number of possible combinations presented to a user and still get correct order?
example: 7 different types of fruits. Each fruit is available in 5 different shapes. The users give their ranking of 1-5 for each fruit based on the size they prefer. This means that for each fruit there are max 10 combinations the user has to choose from (since sizes are different, no point presenting as {1,1}). How would I reduce "10 combinations" ?
If the user's preferences are always consistent with a total order, and you can change comparisons to take account of the results of comparisons made so far, you just need an efficient sorting algorithm. For 5 items it seems that you need a minimum of 7 comparisons - see Sorting 5 elements with minimum element comparison. You could also look at http://en.wikipedia.org/wiki/Sorting_network.
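A minimal sketch of the adaptive-sort idea in Python (the built-in sort is not the 7-comparison-optimal algorithm, but on 5 items it stays well below the 10 exhaustive pairings); ask_user is a stand-in for however you actually present the pair:

from functools import cmp_to_key

def ask_user(a, b):
    # present variations a and b side by side; return -1 if the user prefers a, 1 if b
    choice = input(f"Which do you prefer, {a} or {b}? ")
    return -1 if choice == a else 1

variations = ['v1', 'v2', 'v3', 'v4', 'v5']
# assuming the answers are consistent with a total order, an adaptive sort
# recovers the full ranking from far fewer questions than all 10 pairs
ranked = sorted(variations, key=cmp_to_key(ask_user))
print(ranked)   # best to worst, according to the answers given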
In general, when you are trying to produce some sort of experimental design, you will often find that making random comparisons, although not optimum, isn't too far away from the best possible answer.
Hey guys, I have a speed-dating-style application (not used for dating, just a similar concept) that compares users and matches them up in a round-based event.
Currently I am storing each user-to-user comparison (using cosine similarity) and then finding a round in which both users are available. My current setup works fine at smaller scale, but I seem to be missing a few matchings in larger data sets.
For example with a setup like so (assuming 6 users, 3 from each group)
Round (User1, User2)
----------------------------
1 (x1,y1) (x2,y2) (x3,y3)
2 (x1,y2) (x2,y3) (x3,y1)
3 (x1,y3) (x2,y1) (x3,y2)
My approach works well right now at ensuring each user meets the appropriate users without overlaps that would leave a user out, just not with larger data sets.
My current algorithm
I store a comparison of each user from x to each user from y like so
Round, user1, user2, similarity
And to build the event schedule I simply sort the comparisons by similarity and then iterate over the results, finding an open round for both users, like so:
event.user_maps.all(:order => 'similarity desc').each do |map|
  (1..event.rounds).each do |round|
    # check that both users are still unscheduled in this round
    if user_free_in_round?(map.user1, round) and user_free_in_round?(map.user2, round)
      # creates the pairing and breaks from the loop
    end
  end
end
This isn't exact code but the general algorithm to build the schedule. Does anyone know a better way of filling in a matrix of item pairings where no one item can be in more than one place in the same slot?
EDIT
For some clarification: the issue I am having is that in larger sets, my approach of placing the highest-similarity matches first can sometimes result in collisions. What I mean by that is that users end up paired in such a way that someone is left with no available user to meet in a round.
Like so:
Round (User1, User2)
----------------------------
1 (x1,y1) (x2,y2) (x3,y3)
2 (x1,y3) (x2,nil) (x3,y1)
3 (x1,y2) (x2,y1) (x3,y2)
I want to be able to prevent this from happening while still giving higher-similarity pairs higher priority in scheduling.
In real scenarios there are far more possible matches than available rounds, and an uneven number of x users to y users; in my test cases, instead of every round being full, only about 90% of the slots get filled because collisions like the above cause problems.
I think the question still needs clarification even after edit, but I could be missing something.
As far as I can tell, what you want is for each new round to start with the best possible matching (defined as the sum of the cosine similarities of all matched pairs). Once a pair (x_i, y_j) has been matched in a round, it is not eligible for later rounds.
You could do this by building a bipartite graph where your Xs are nodes in one side and Ys are nodes in another side, and the edge weight is cosine similarity. Then you find the max weighted match in this graph. For the next rounds, you eliminate the edges that have already been used in previous round and run the matching algorithm again. For details on how to code max weight matching in bipartite graph, see here.
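A minimal sketch of that round-by-round idea, assuming equal-sized groups and using SciPy's assignment solver (a Hungarian-method routine) rather than a general bipartite matching library; the similarities are made up:

import numpy as np
from scipy.optimize import linear_sum_assignment

# sim[i][j] = cosine similarity between x_(i+1) and y_(j+1)
sim = np.array([[0.9, 0.4, 0.7],
                [0.3, 0.8, 0.5],
                [0.6, 0.2, 0.9]])

n_rounds = 3
used = np.zeros_like(sim, dtype=bool)          # pairs already scheduled

for rnd in range(1, n_rounds + 1):
    cost = np.where(used, -1e9, sim)           # forbid repeats with a huge penalty
    rows, cols = linear_sum_assignment(cost, maximize=True)
    used[rows, cols] = True
    print("round", rnd, [(int(i) + 1, int(j) + 1) for i, j in zip(rows, cols)])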
BTW, this solution is not optimum since we are proceeding from one round to next in a greedy fashion. I have a feeling that getting the optimum solution would be NP hard, but I don't have a proof so can't be sure.
I agree that the question still needs clarification. As Amit expressed, I have a gut feeling that this is an NP hard problem, so I am assuming that you are looking for an approximate solution.
That said, I would need more information on the tradeoffs you would be willing to make (and perhaps I'm just missing something in your question). What are the explicit goals of the algorithm?
Is there a lower threshold for similarity below which you don't want a pairing to happen? I'm still a bit confused as to why there would be individuals which could not be paired up at all during a given round...
Essentially, you are performing a search over the space of possible pairings, correct? Maybe you could use backtracking or some form of constraint-based algorithm to make sure that you can obtain a complete solution for a given round...?
We have a simulation program where we take a very large population of individual people and group them into families. Each family is then run through the simulation.
I am in charge of grouping the individuals into families, and I think it is a really cool problem.
Right now, my technique is pretty naive/simple. Each individual record has some characteristics, including married/single, age, gender, and income level. For married people I select an individual and loop through the population and look for a match based on a match function. For people/couples with children I essentially do the same thing, looking for a random number of children (selected according to an empirical distribution) and then loop through all of the children and pick them out and add them to the family based on a match function. After this, not everybody is matched, so I relax the restrictions in my match function and loop through again. I keep doing this, but I stop before my match function gets too ridiculous (marries 85-year-olds to 20-year-olds for example). Anyone who is leftover is written out as a single person.
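Roughly, a sketch of the relaxation loop described above, just for the spouse-matching pass (the function names and relaxation levels are placeholders, not the real code):

def pair_spouses(people, match_fn, relaxation_levels):
    # repeatedly sweep the unmatched people, relaxing the criteria each pass
    couples, unmatched = [], list(people)
    for level in relaxation_levels:            # strictest criteria first
        still_unmatched = []
        while unmatched:
            person = unmatched.pop()
            partner = next((p for p in unmatched if match_fn(person, p, level)), None)
            if partner is not None:
                unmatched.remove(partner)
                couples.append((person, partner))
            else:
                still_unmatched.append(person)
        unmatched = still_unmatched
    return couples, unmatched                  # leftovers are written out as single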
This works well enough for our current purposes, and I'll probably never get time or permission to rework it, but I at least want to plan for the occasion or learn some cool stuff - even if I never use it. Also, I'm afraid the algorithm will not work very well for smaller sample sizes. Does anybody know what type of algorithms I can study that might relate to this problem or how I might go about formalizing it?
For reference, I'm comfortable with chapters 1-26 of CLRS, but I haven't really touched NP-Completeness or Approximation Algorithms. Not that you shouldn't bring up those topics, but if you do, maybe go easy on me because I probably won't understand everything you are talking about right away. :) I also don't really know anything about evolutionary algorithms.
Edit: I am specifically looking to improve the following:
Less ridiculous marriages.
Fewer single people at the end.
Perhaps what you are looking for is cluster analysis?
Let's try to think of your problem like this (starting with the spouse matching):
If you were to have a matrix where each row is a male and each column is a female, and every cell in that matrix is the match function's returned value, what you are now looking for is selecting cells so that there won't be a row or a column in which more than one cell is selected, and the total sum of all selected cells should be maximal. This is very similar to the N Queens Problem, with the modification that each allocation of a "queen" has a reward (which we should maximize).
You could solve this problem by using a graph where:
You have a root,
each of the first row's cells' values is an edge weight leading to a first-depth vertex,
each of the second row's cells' values is an edge weight leading to a second-depth vertex,
Etc.
(Notice that when you find a match for the first female, you shouldn't consider her anymore, and the same goes for every other female you match.)
Then finding the maximum allocation can be done by BFS, or better still by A* (notice A* typically looks for minimum cost, so you'll have to modify it a bit).
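The cell-selection problem described above (one cell per row and per column, maximizing the total) is the classic assignment problem; as an alternative to the BFS/A* search, here is a sketch using SciPy's Hungarian-method solver on made-up match scores:

import numpy as np
from scipy.optimize import linear_sum_assignment

# score[i][j] = match function value for male i and female j
score = np.array([[0.9, 0.1, 0.4],
                  [0.2, 0.8, 0.3],
                  [0.5, 0.6, 0.7]])

# pick one cell per row and column so the total score is maximal
rows, cols = linear_sum_assignment(score, maximize=True)
for i, j in zip(rows, cols):
    print(f"male {i} <-> female {j} (score {score[i, j]})")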
For matching between couples (or singles, more on that later..) and children, I think KNN with some modifications is your best bet, but you'll need to optimize it to your needs. But now I have to relate to your edit..
How do you measure your algorithm's efficiency?
You need a function that receives the expected distribution of all states (single, married with one child, single with two children, etc.) and the distribution of all states in your solution, and grades the solution accordingly. How do you calculate the expected distribution? That's quite a bit of statistics work...
First you need to know the distribution of all states (single, married.. as mentioned above) in the population,
then you need to know the distribution of ages and genders in the population,
and last thing you need to know - the distribution of ages and genders in your population.
Only then, according to those three, can you calculate how many people you expect to be in each state, and then you can measure the distance between what you expected and what you got. That is a lot of typing... sorry for the general parts.
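As a rough sketch of that last step, here is one simple distance between the expected state distribution and the one your grouping produced (the states and numbers are made up):

from collections import Counter

def distribution_distance(expected, actual_states):
    # expected: {'single': 0.30, 'married': 0.45, ...}, summing to 1.0
    # actual_states: the state assigned to each person by your grouping
    counts = Counter(actual_states)
    total = len(actual_states)
    states = set(expected) | set(counts)
    # total variation distance: 0.0 = identical distributions, 1.0 = disjoint
    return 0.5 * sum(abs(expected.get(s, 0.0) - counts.get(s, 0) / total)
                     for s in states)

expected = {'single': 0.30, 'married': 0.45, 'married_with_kids': 0.25}
print(distribution_distance(expected,
                            ['single', 'married', 'married', 'married_with_kids', 'single']))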