I am trying to come up with an algorithm for the following problem.
There is a set of N objects with M different variations of each object. The goal is to find which variation is the best for each object based on feedback from different users.
At the end, the users will be placed in a category to determine which category prefers which variation.
It is required that at most two variations of an object are placed side by side.
The problem with this is that if M is large, the number of possible combinations becomes too large; the user may lose interest, which could skew the results.
The Elo algorithm/score can be used once I know the order of selection from the user, as discussed in this post:
Comparison-based ranking algorithm
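For reference, a minimal sketch of the standard Elo update I have in mind (the K-factor of 32 and the starting rating of 1000 are arbitrary choices on my part, not something taken from the linked post):

    # Standard Elo update for one pairwise comparison (sketch).
    def expected_score(rating_a, rating_b):
        """Probability that A is preferred over B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    def elo_update(rating_a, rating_b, a_won, k=32):
        """Return updated (rating_a, rating_b) after one comparison."""
        ea = expected_score(rating_a, rating_b)
        sa = 1.0 if a_won else 0.0
        return (rating_a + k * (sa - ea),
                rating_b + k * ((1.0 - sa) - (1.0 - ea)))

    # Example: variation X preferred over variation Y, both starting at 1000.
    x, y = elo_update(1000.0, 1000.0, a_won=True)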
Question:
Is there an algorithm that can reduce the number of possible combinations presented to a user and still recover the correct order?
Example: 7 different types of fruit, each available in 5 different shapes. The users rank the 5 shapes of each fruit from 1 to 5 according to their preference. This means that for each fruit there are at most 10 pairs the user has to choose from (the two shapes in a pair are always different, so there is no point in presenting {1,1}). How would I reduce those "10 combinations"?
If the user's preferences are always consistent with a total order, and you can change comparisons to take account of the results of comparisons made so far, you just need an efficient sorting algorithm. For 5 items it seems that you need a minimum of 7 comparisons - see Sorting 5 elements with minimum element comparison. You could also look at http://en.wikipedia.org/wiki/Sorting_network.
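To make the "just use an efficient sorting algorithm" point concrete, here is a minimal sketch (mine, not the poster's) that drives Python's built-in adaptive merge sort with a comparison function that asks the user; for 5 items the number of questions stays close to the 7-comparison minimum:

    from functools import cmp_to_key

    def ask_user(a, b):
        """Show two items and return -1 if the user prefers a, +1 if b."""
        answer = input(f"Which do you prefer: [1] {a}  or  [2] {b}? ")
        return -1 if answer.strip() == "1" else 1

    def rank_by_comparison(items):
        """Order items from most to least preferred using pairwise questions.
        Python's sort is an adaptive merge sort, so the number of questions
        stays close to the information-theoretic minimum."""
        return sorted(items, key=cmp_to_key(ask_user))

    # Example: ranking 5 shape variations of one fruit.
    # rank_by_comparison(["shape A", "shape B", "shape C", "shape D", "shape E"])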
In general, when you are trying to produce some sort of experimental design, you will often find that making random comparisons, although not optimum, isn't too far away from the best possible answer.
Related
I'm looking for an algorithm to sort a large number of items using the fewest comparisons. My specific case makes it unclear which of the obvious approaches is appropriate: the comparison function is slow and non-deterministic so it can make errors, because it's a human brain.
In other words, I want to sort arbitrary items on my computer into a list from "best" to "worst" by comparing them two at a time. They could be images, strings, songs, anything. My program would display two things for me to compare. The program doesn't know anything about what is being compared, its job is just to decide which pairs to compare. So that gives the following criteria
It's a comparison sort - The only time the user sees items is when comparing two of them.
It's an out-of-place sort - I don't want to move the actual files, so items can have placeholder values or metadata files
Comparisons are slow - at least compared to a computer. Data locality won't have an effect, but comparing obvious disparities will be quick, similar items will be slow.
Comparison is subjective - comparison results could vary slightly at different times.
Items don't have a total order - the desired outcome is an order that is "good enough" at runtime, which will vary depending on context.
Items will rarely be almost sorted - in fact, the goal is to get random data to an almost-sorted state.
Sets usually will contain runs - If every song on an album is a banger, it might be faster (because obvious disparities are quick to judge) to compare them to the next album rather than to each other. Imagine a set {10.0, 10.2, 10.9, 5.0, 4.2, 6.9} where integer comparisons are fast but float comparisons are very slow.
There are many different ways to approach this problem. In addition to sorting algorithms, it's similar to creating tournament brackets and to voting systems, and there are countless ways to define and solve the problem depending on the criteria you choose. For this question I'm only interested in treating it as a sorting problem where the user is comparing two items at a time and choosing a preference. So what approach makes sense for either of the two following versions of the question?
How to choose pairs to get the best result in O(n) or fewer operations? (for example compare random pairs of items with n/2 operations, then use n/2 operations to spot check or fine-tune)
How to create the best order with additional operations but no additional comparisons (e.g. similar items are sorted into buckets or losers are removed, anything that doesn't increase the number of comparisons)
The representation of comparison results can be anything that makes the solution convenient - it can be dictionary keys corresponding to the final order, a "score" based on number of comparisons, a database, etc.
Edit: The comments have helped clarify the question in that the goal is similar to something like bucket sort, samplesort or the partitioning phase of quicksort. So the question could be rephrased as how to choose good partitions based on comparisons, but I'm also interested in any other ways of using the comparison results that wouldn't be applicable in a standard in-place comparison sort like keeping a score for each item.
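For concreteness, here is a rough sketch (my own, nothing settled) of the kind of score-keeping and bucketing I have in mind: compare every item against a few randomly sampled pivots, keep a win count per item, and use the counts to partition into rough buckets with about n * num_pivots comparisons:

    import random
    from collections import defaultdict

    def bucket_by_sampled_pivots(items, prefer, num_pivots=3):
        """Partition items into rough buckets using ~num_pivots comparisons each.

        prefer(a, b) returns True if the user picks a over b.  Each item's
        "score" is the number of pivots it beat, so items with equal scores
        land in the same bucket and can be refined later (or left as-is for
        a "good enough" order)."""
        pivots = random.sample(items, num_pivots)
        buckets = defaultdict(list)
        for item in items:
            score = sum(prefer(item, p) for p in pivots if p is not item)
            buckets[score].append(item)
        # Highest score first: beat the most pivots = roughly "best" bucket.
        return [buckets[s] for s in sorted(buckets, reverse=True)]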
I am not very experienced in real world applications of known algorithms, nor do I know many named problems in computer science. However, a problem was presented to me and I'm not quite sure where to research to find a solution.
Problem is formally presented as:
There are N single-valued elements, and their values are normally distributed. There are K groups, each with its own capacity Ki (the number of elements it can contain). The group sizes Ki do not have to differ from one another (they can all be the same). Assign each element Ni to a group in such a way that:
Primarily, the means of the groups are as close to each other as possible
Secondarily, the standard deviations of the groups are as close to each other as possible
N, Ni, K and Ki are given at the start and stay constant throughout a single problem instance. Mean similarity takes precedence over standard deviation, but a situation where one group holds only middle values while another holds the extreme/outlier values should be avoided. The number of elements is usually around 100 (or that order of magnitude), so more complex but more precise algorithms are preferred. This problem has been translated, so some details may have been lost in translation; do not hesitate to ask for clarification.
My main issue is that I do not know what areas to research: do I look into evolutionary algorithms (multicriteria optimization), linear programming (K+1 equalities and inequalities), dynamic programming (partitioning into subgroups), or statistics (sampling methods)? Does this problem already have a well-known solution?
An example of this problem: a class has 30 students, each with a grade from 0 to 100. Split the students into six groups of five so that the groups have equal means and variances. Obviously, it's impossible to get exactly equal means and variances across groups, but the point is to get as close as possible. Another example would be 3 groups of 15, 10 and 5 students respectively.
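For reference, the only heuristic I could come up with myself is roughly the greedy sketch below (it only targets the primary criterion, the means, and assumes the capacities sum to the number of elements). I suspect there is something more principled, which is what I'm asking about:

    def greedy_group_assignment(values, capacities):
        """Place each value (largest first) into the non-full group whose mean
        ends up closest to the overall mean.  Assumes sum(capacities) ==
        len(values).  Balancing the standard deviations would need a
        refinement pass, e.g. local swaps between groups."""
        overall_mean = sum(values) / len(values)
        groups = [[] for _ in capacities]
        for v in sorted(values, reverse=True):
            best, best_err = None, float("inf")
            for i, g in enumerate(groups):
                if len(g) >= capacities[i]:
                    continue
                err = abs((sum(g) + v) / (len(g) + 1) - overall_mean)
                if err < best_err:
                    best, best_err = i, err
            groups[best].append(v)
        return groups

    # Example: 30 grades into groups of sizes 15, 10 and 5.
    # greedy_group_assignment(grades, [15, 10, 5])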
Thanks in advance.
I have a folder full of images. There are too many to just 'rank'. I made a program that shows two at a time and lets the user pick which one of the two is better. At the end I would like all of the photos to be ordered from best to worst.
I am purely trying to optimize for the fewest comparisons possible. I don't care if the program runs in n-cubed time. I've read the other questions here on this topic, but I'm looking for something more advanced.
I'm thinking of some sort of algorithm where, based on the comparisons already made, the program chooses the two images whose comparison will offer the most information. Maybe even an algorithm that makes complex connections to help determine the order and potential orders.
Like I said, I don't care if it is slow; I'm purely trying to minimize comparisons.
If a total order exists, you need at least log2(n!) ≈ n*log2(n) comparisons in the worst case; this is the standard information-theoretic lower bound for comparison sorting, and there is no way around it. So regular O(n log n) sorting algorithms will do the job.
What you are trying to do is called 'topological sort'. Google it and read about it on Wikipedia. You can achieve partial orders with fewer comparisons. It's a kind of gradual sort: the more comparisons you get, the better the result will be.
However, what do you do if no total order exists? Humans are not able to generate a total order for subjective tasks.
For example, picture 1 is better than 2, 2 is better than 3, but 3 is better than 1.
In this case no sorting algorithm can produce a permutation which will match all the decisions. During a topological sort, you can detect those inconsistent decisions and get rid of them.
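For illustration (my sketch, not part of the answer above): record each decision as a directed edge "winner -> loser" and run Kahn's algorithm; whatever cannot be ordered once no zero-in-degree item remains is caught in a cycle of inconsistent decisions:

    from collections import defaultdict, deque

    def topological_order(decisions, items):
        """decisions: list of (winner, loser) pairs from the user.
        Returns (ordered, leftover); leftover contains items involved in
        inconsistent (cyclic) decisions such as 1 > 2, 2 > 3, 3 > 1."""
        succ = defaultdict(set)
        indeg = {x: 0 for x in items}
        for winner, loser in decisions:
            if loser not in succ[winner]:
                succ[winner].add(loser)
                indeg[loser] += 1
        queue = deque(x for x in items if indeg[x] == 0)
        ordered = []
        while queue:
            x = queue.popleft()
            ordered.append(x)
            for y in succ[x]:
                indeg[y] -= 1
                if indeg[y] == 0:
                    queue.append(y)
        leftover = [x for x in items if x not in set(ordered)]
        return ordered, leftover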
You are looking for a sorting algorithm - pick one. Most algorithms just need a comparison function (a < b?). That comparison is the point at which you show the user two pictures and they have to choose the better one.
You might want to read through some of the algorithms and choose the best one for you. E.g. with quicksort, you would pick a random picture and the user would have to compare this picture against all of the other pictures in the first round, which might be too boring from the end user's perspective.
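For comparison, a minimal sketch (mine) of a merge sort driven by the user's choices; unlike quicksort's first round, no single picture is shown against everything in a row, so the questions feel more varied:

    def merge_sort_with_user(items, prefer):
        """prefer(a, b) returns True if the user picks a over b."""
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left = merge_sort_with_user(items[:mid], prefer)
        right = merge_sort_with_user(items[mid:], prefer)
        merged = []
        while left and right:
            merged.append(left.pop(0) if prefer(left[0], right[0]) else right.pop(0))
        return merged + left + right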
Edit: to include a concrete explanation of my problem (as correctly deduced by Billiska):
"Set A is the set of users. set B is the set of products. each user rates one or more products. the rating is 1 to 10. you want to deduce for each user, who is the other user that has the most similar taste to him."
"The other half is choosing how exactly do you want to rank similarity of A-elements." - this is also part of my problem. I feel that users who have rated similarly across the most products have the closed affinity, but at the same time I want to avoid user1 and user2 with many mediocre matches being matched ahead of user1 and user3 who have just a few very good matches (perhaps I need a non-linear score).
Disclaimer: I have never used a graph database.
I have two sets of data, A and B. Each A has a relationship with zero or more Bs. Each relationship has a fixed value (weight).
e.g.
A1--5-->B10
A1--1-->B1000
So my initial thought was "Yay, that's a graph, time to learn about graph databases!" But before I get too carried away... the only reason for doing this is so that I can answer the question:
For each A, find the set of As that are most similar based on their weights, where I want to take into consideration:
the difference in weights (assuming 1 to 10), so that a 10-and-10 pairing scores higher than a 10-and-1 pairing; but then I have an issue with how to handle the case where there is no pairing at all (or do I? I am just not sure)
the number of vertices (ignoring weights) that two As have in common. The intention is to rank two As with edges to many of the same Bs higher than two As that have just a single matching vertex. (A rough sketch of the kind of score I have in mind follows this list.)
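Roughly this kind of thing, where both the closeness weighting and the way the overlap is rewarded are placeholders rather than settled choices:

    def similarity(ratings_a, ratings_b):
        """ratings_a / ratings_b: dict mapping B-id -> weight (1..10).
        Rewards close weights on shared Bs and rewards having many shared Bs,
        so two As agreeing on many items outrank a single lucky match."""
        common = set(ratings_a) & set(ratings_b)
        if not common:
            return 0.0
        # Per-item closeness: 1.0 for identical weights, 0.0 for a 9-point gap.
        closeness = sum(1.0 - abs(ratings_a[b] - ratings_b[b]) / 9.0 for b in common)
        # Summing (rather than averaging) already favours larger overlaps;
        # a non-linear bonus on len(common) would favour them even more.
        return closeness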
What would the best approach be to doing this?
(Supplementary, as I realise this may count as a second question: how would that approach change if set A was in the millions, set B in the hundreds of thousands, and I needed real-time answers?)
Not a complete answer; I don't fully understand the technique either, but I know it's very relevant.
View the data as a matrix: the rows correspond to set A, the columns correspond to set B, and the entries are the weights.
Then it's a matrix with some missing values.
One technique used in recommender systems (under the category of collaborative filtering) is low-rank approximation.
It's based on the assumption that such user-product rating matrices usually have low rank.
In a rough sense, the matrix has low rank if many users' rows can be expressed as linear combinations of other users' rows.
I hope this would give a start for further reading.
Yes, as you can see on the low-rank approximation wiki page, the technique can be used to guess the missing entries (the missing ratings). I know that's a different problem, but it's related.
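Not part of the technique itself as described above, but as a concrete starting point, here is a minimal sketch of a truncated-SVD low-rank approximation with numpy, where missing ratings are first filled with each row's mean (a common, if crude, choice):

    import numpy as np

    def low_rank_approximation(ratings, rank=2):
        """ratings: 2-D float array, rows = As (users), cols = Bs (products),
        np.nan = missing.  Fills missing entries with the row mean, then keeps
        only the top `rank` singular values.  The reconstructed matrix gives
        smoothed scores usable both for guessing missing ratings and for
        comparing users (e.g. cosine similarity between reconstructed rows)."""
        filled = ratings.copy()
        row_means = np.nanmean(ratings, axis=1)
        missing = np.where(np.isnan(filled))
        filled[missing] = np.take(row_means, missing[0])

        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]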
I was out buying groceries the other day and needed to search through my wallet to find my credit card, my customer rewards (loyalty) card, and my photo ID. My wallet has dozens of other cards in it (work ID, other credit cards, etc.), so it took me a while to find everything.
My wallet has six slots in it where I can put cards, with only the first card in each slot initially visible at any one time. If I want to find a specific card, I have to remember which slot it's in, then look at all the cards in that slot one at a time to find it. The closer it is to the front of a slot, the easier it is to find it.
It occurred to me that this is pretty much a data structures question. Suppose that you have a data structure consisting of k linked lists, each of which can store an arbitrary number of elements. You want to distribute elements into the linked lists in a way that minimizes the cost of looking them up. You can use whatever system you want for distributing elements into the different lists, and can reorder lists whenever you'd like. Given this setup, is there an optimal way to order the lists, under either of the following assumptions:
You are given the probabilities of accessing each element in advance and accesses are independent, or
You have no knowledge in advance what elements will be accessed when?
The informal system I use in my wallet is to "hash" cards into different slots based on use case (IDs, credit cards, loyalty cards, etc.), then keep elements within each slot roughly sorted by access frequency. However, maybe there's a better way to do this (for example, storing the k most frequently-used elements at the front of each slot regardless of their use case).
Is there a known system for solving this problem? Is this a well-known problem in data structures? If so, what's the optimal solution?
(In case this doesn't seem programming-related: I could imagine an application in which the user has several drop-down lists of commonly-used items, and wants to keep those items ordered in a way that minimizes the time required to find a particular item.)
Although not a full answer for general k, this 1985 paper by Sleator and Tarjan gives a helpful analysis of the amortised complexity of several dynamic list update algorithms for the case k=1. It turns out that move-to-front is very good: assuming fixed access probabilities for each item, it never requires more than twice the number of steps (moves and swaps) that would be required by the optimal (static) algorithm, in which all elements are listed in nonincreasing order of probability.
Interestingly, a couple of other plausible heuristics -- namely swapping with the previous element after finding the desired element, and maintaining order according to explicit frequency counts -- don't share this desirable property. OTOH, on p. 2 they mention that an earlier paper by Rivest showed that the expected amortised cost of any access under swap-with-previous is <= the corresponding cost under move-to-front.
I've only read the first few pages, but it looks relevant to me. Hope it helps!
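For reference, a minimal sketch of the move-to-front rule on a single list (the k = 1 case the paper analyses), counting the positions scanned per access:

    class MoveToFrontList:
        """Single slot (k = 1): scan from the front, then move the found item
        to the front.  Cost of an access = 1-based position scanned."""

        def __init__(self, items):
            self.items = list(items)

        def access(self, item):
            pos = self.items.index(item)          # positions scanned = pos + 1
            self.items.insert(0, self.items.pop(pos))
            return pos + 1

    # Example: frequently used cards drift towards the front over time.
    wallet = MoveToFrontList(["work ID", "credit card", "loyalty card", "photo ID"])
    # total_cost = sum(wallet.access(c) for c in ["credit card", "credit card", "photo ID"])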
You need to look at skip lists. There is a similar problem with arranging stations for a train system where there are express trains and regular trains. An express train stops only at express stations, while regular trains stop at both regular and express stations. Where should the express stops be placed so as to minimize the average number of stops when travelling from the start station to any other station?
The solution is to place the express stops at the triangular numbers (i.e., at 1, 3, 6, 10, etc., where T_n = n * (n + 1) / 2).
This is assuming all stops (or cards) are equally likely to be accessed.
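A small sketch (mine, under that same equal-probability assumption) that places express stops at the triangular numbers and counts the stops needed to reach each position, riding the express as far as possible and then going local:

    def triangular_stops(n_stations):
        """Express stops at the triangular numbers 1, 3, 6, 10, ... <= n_stations."""
        stops, t, i = [], 1, 1
        while t <= n_stations:
            stops.append(t)
            i += 1
            t += i
        return stops

    def stops_to_reach(target, express):
        """Ride the express to the last express stop <= target, then go local.
        Returns the total number of stops made (a rough cost model)."""
        usable = [s for s in express if s <= target]
        last = usable[-1] if usable else 0
        return len(usable) + (target - last)

    express = triangular_stops(50)
    average_stops = sum(stops_to_reach(m, express) for m in range(1, 51)) / 50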
If you know the access probabilities of your n cards in advance and you have k wallet slots and accesses are independent, isn't it fairly clear that the greedy solution is optimal? That is, the most frequently-accessed k cards go at the front of the pockets, next-most-frequently accessed k go immediately behind, and so forth? (You never want a lower-probability card ranked before a higher-probability card.)
If you don't know the access probabilities, but you do know they exist and that card accesses are independent, I imagine sorting the cards similarly, but by number-of-accesses-seen-so-far instead is asymptotically optimal. (Move-to-front is cool too, but I don't see an obvious reason to use it here.)
Perhaps you get something more interesting if you penalise card moves as well; otherwise, given any known probability distribution on card accesses, independent or not, I could just greedily re-sort the cards after every access.
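To make the greedy layout from the first paragraph concrete, a minimal sketch (with made-up probabilities): sort the cards by access probability and deal them across the k slots, so the i-th most likely card sits at depth i // k in its slot:

    def greedy_layout(probabilities, k):
        """probabilities: dict card -> access probability; k: number of slots.
        The k most likely cards go at the front of the k slots, the next k
        immediately behind them, and so on, so no lower-probability card is
        ever placed ahead of a higher-probability one."""
        ranked = sorted(probabilities, key=probabilities.get, reverse=True)
        slots = [[] for _ in range(k)]
        for i, card in enumerate(ranked):
            slots[i % k].append(card)          # depth of card i is i // k
        return slots

    # Example: 4 cards, 3 slots, made-up probabilities.
    # greedy_layout({"credit": 0.4, "ID": 0.3, "loyalty": 0.2, "work": 0.1}, 3)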