Assign people to monthly groups without repeating - algorithm

I have a problem for which I'm not sure there is a simple answer...
I have some number of captains (right now it's 11, but could change) that organize monthly meetings of members of an organization. The members of the organization are categorized into 1 of ~14 networks.
I'm tasked with writing a scheduler of sorts that fits members into groups with 1 captain such that:
No 2 members of the same network are ever in the same group.
No 2 members are placed in the same group again within a 3-month window (i.e., pairings from the current month cannot repeat in the following two months).
I need to minimize the number of unassigned members, as they will be assigned manually as the admin sees fit.
Right now I'm trying a randomized brute-force method: it generates the groups randomly, checks for repeats against the last month and the month prior, and if it finds a violation it discards the whole matrix and regenerates. Even after several thousand iterations it's clear this approach won't finish in any reasonable amount of time.
I'd like some sort of algorithm that could fail gracefully in that if it's mathematically impossible for the rules to be followed 100%, it still generates a matrix with as few "errors" as possible.
I've looked at the round-robin tournament algorithm, but it appears that's only good for pairing people into groups of 2.
I think what makes this harder is the small number of participants (67) relative to the number of captains and networks.
I'm not necessarily asking for code to do this, but more the types of algorithms that could handle something like this.
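One family of approaches that fails gracefully is greedy assignment with a penalty score: process the members in a random order, put each one into the group that currently incurs the fewest rule violations (same network, or met within the last two months), and repeat with many random orders, keeping the best result. A minimal sketch, assuming members carry an id and a network and that history holds the pairs who met in the last two months (all names here are illustrative):

import random

def violations(member, group, history):
    """Count rule violations caused by adding `member` to `group`.

    history: set of frozensets {id, id} of members who shared a group
    in the last two months; member.network identifies the member's network."""
    v = 0
    for other in group:
        if other.network == member.network:
            v += 1                      # same-network clash
        if frozenset((member.id, other.id)) in history:
            v += 1                      # met within the 3-month window
    return v

def schedule(members, captains, history, group_size, restarts=200):
    best_groups, best_score = None, float("inf")
    for _ in range(restarts):
        groups = {c: [] for c in captains}
        score = 0
        for m in random.sample(members, len(members)):
            # Pick the non-full group where this member causes the fewest violations.
            open_groups = [c for c in captains if len(groups[c]) < group_size]
            chosen = min(open_groups,
                         key=lambda c: (violations(m, groups[c], history),
                                        random.random()))
            score += violations(m, groups[chosen], history)
            groups[chosen].append(m)
        if score < best_score:
            best_groups, best_score = groups, score
    return best_groups, best_score

A pass of pairwise swaps between groups afterwards (simple local search) often removes many of the remaining violations.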

Related

What should I use to accurately predict the success of a team consisting of 6 individuals against another team based on past individual performance

For a given team of 6 individuals, calculate the probability that this particular configuration of 6 will defeat a different, known, team based on how well each of the chosen 12 individuals have performed in past matches within different teams.
Breakdown of rules and clarifications:
2 teams compete against each other.
For each competition, team configurations differ.
6 individuals are randomly assigned to each team from a large pool of N individuals.
Each "match" will result in either a win, draw, or loss.
Rows of raw data display: the individuals within both teams AND the match result
What is the best way to go about solving this problem? Initially, I thought about using a modified Elo system for each individual. Is this the correct path, and what else could be used instead? I would love to read some papers on the subject.
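For what it's worth, a common baseline (not necessarily the best model) is to treat each team's strength as the average of its members' ratings and apply a standard Elo update to every individual on both teams. A minimal sketch under that assumption; models such as TrueSkill (mentioned elsewhere on this page) are the more principled route:

def expected_score(r_team_a, r_team_b):
    """Standard Elo expected score for team A against team B."""
    return 1.0 / (1.0 + 10 ** ((r_team_b - r_team_a) / 400.0))

def update_individual_ratings(ratings, team_a, team_b, score_a, k=20.0):
    """Update per-player ratings after one match.

    ratings: dict player -> rating; team_a/team_b: lists of players;
    score_a: 1 for a team A win, 0.5 for a draw, 0 for a loss."""
    r_a = sum(ratings[p] for p in team_a) / len(team_a)
    r_b = sum(ratings[p] for p in team_b) / len(team_b)
    e_a = expected_score(r_a, r_b)
    for p in team_a:
        ratings[p] += k * (score_a - e_a)
    for p in team_b:
        ratings[p] += k * ((1.0 - score_a) - (1.0 - e_a))
    return ratings

The expected_score of the two team averages then doubles as the win-probability estimate you are after.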

How to choose matchups in an ELO ratings system as matchups accumulate

I'm working on a crowdsourced app that will pit about 64 fictional strongmen/strongwomen from different franchises against one another and try and determine who the strongest is. (Think "Batman vs. Spiderman" writ large). Users will choose the winner of any given matchup between two at a time.
After researching many sorting algorithms, I found this fantastic SO post outlining the ELO rating system, which seems absolutely perfect. I've read up on the system and understand both how to award/subtract points in a matchup and how to calculate the performance rating between any two characters based on past results.
What I can't seem to find is any efficient and sensible way to determine which two characters to pit against one another at a given time. Naturally it will start off randomly, but quickly points will accumulate or degrade. We can expect a lot of disagreement but also, if I design this correctly, a large amount of user participation.
So imagine you arrive at this feature after 50,000 votes have been cast. Given that we can expect all sorts of non-transitive results under the hood, and a fair amount of deviance from the performance ratings, is there a way to calculate which matchups I most need more data on? It doesn't seem as simple as choosing two adjacent characters in a sorted list with the closest scores, or just focusing at the top of the list.
With 64 entrants (and yes, I did consider and reject a bracket!), I'm not worried about recomputing the performance ratings after every matchup. I just don't know how to choose the next one, seeing as we'll be ignorant of each voter's biases and favorite characters.
The amazing variation that you experience with multiplayer games is that different people with different ratings "queue up" at different times.
By the ELO system, ideally all players should be matched up with an available player with the closest score to them. Since, if I understand correctly, the 64 "players" in your game are always available, this combination leads to lack of variety, as optimal match ups will always be, well, optimal.
To resolve this, I suggest implementing a priority queue, based on when your "players" feel like playing again. For example, if one wants to take a long break, they may receive a low priority and be placed towards the end of the queue, meaning it will be a while before you see them again. If one wants to take a short break, maybe after about 10 matches, you'll see them in a match again.
This "desire" can be done randomly, and you can assign different characteristics to each character to skew this behaviour, such as, "winning against a higher ELO player will make it more likely that this player will play again sooner". From a game design perspective, these personalities would make the characters seem more interesting to me, making me want to stick around.
So here you have an ordered list of players who want to play. I can think of three approaches you might take for the actual matchmaking:
Peek at the first 5 players in the queue and pick the best match up
Match the first player with their best match in the next 4 players in the queue (presumably waited the longest so should be queued immediately, regardless of the fairness of the match up)
A combination of both, where if the person at the head of the list doesn't get picked, they'll increase in "entropy", which affects the ELO calculation making them more likely to get matched up
Edit
From an implementation perspective, I'd recommend using a delta list instead of an actual priority queue, since players should be "promoted" as they wait.
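As a rough sketch of the first approach listed above (peek at the first few players in the queue and pick the best match-up), assuming queue is a list ordered by who should play next and ratings maps each character to its current ELO score:

from itertools import combinations

def next_matchup(queue, ratings, window=5):
    """Pick the pair with the closest ratings among the first `window`
    players in the queue, then remove them from the queue."""
    head = queue[:window]
    a, b = min(combinations(head, 2),
               key=lambda pair: abs(ratings[pair[0]] - ratings[pair[1]]))
    queue.remove(a)
    queue.remove(b)
    return a, b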
To avoid obvious winner-vs-loser situations, you group the players into tiers.
Obviously, everybody will initially be in the same tier [0 - N1].
Then within each tier you make a rotational schedule so that every two parties "match" at least once.
However, if you don't want to maintain a schedule, then always match the party that has participated in the fewest "matches" so far. If there are several such parties, make a random pick.
This way you ensure that everybody participates in roughly the same number of "matches".
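A minimal sketch of that fewest-matches rule (one reading of it: pick the two least-played characters, breaking ties randomly):

import random

def pick_opponents(match_counts):
    """match_counts: dict character -> number of matches played so far.
    Returns the two characters with the fewest matches, ties broken randomly."""
    ordered = sorted(match_counts,
                     key=lambda c: (match_counts[c], random.random()))
    return ordered[0], ordered[1]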

Divide a group of people into teams with constraints

So I want to write a small program that would be able to take a group of people (100-200) and divide them into several equal groups (10-15) with constraints.
Each person has a city they came from (usually around 8-12 different cities total).
Each person was in a group of people before this new division (10-20 different groups).
That's it for the example.
Now I want to divide those people into different groups such that we strive to have the same number of people from each city in every team (so not all New Yorkers end up on the same team, etc.) and strive to ensure that people who have been on the same team before won't team up again.
I can't find an algorithm that can help me.
Finding the absolute best answer has an NP-complete feel to it. But if you just want a pretty good answer, pretty fast, it isn't hard to come up with a heuristic.
Set up your empty teams. Decide on the maximum team size. Sort people by the number of other people they need to avoid (same city or same past team), descending. Put each person in the non-full team containing the fewest people they are trying to avoid, breaking ties in favor of the team with fewer people, and breaking any remaining ties randomly.
This is not guaranteed to produce optimal results. But it is simple and will produce pretty good ones.
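A minimal sketch of that heuristic, assuming each person object carries a city and a set of past_teammates (illustrative names):

import random

def conflicts(person, team):
    """How many people already on `team` this person should avoid."""
    return sum(1 for other in team
               if other.city == person.city or other in person.past_teammates)

def divide(people, num_teams, max_size):
    teams = [[] for _ in range(num_teams)]
    # Handle the most constrained people first.
    ordered = sorted(people,
                     key=lambda p: sum(conflicts(p, [q]) for q in people if q is not p),
                     reverse=True)
    for person in ordered:
        open_teams = [t for t in teams if len(t) < max_size]
        best = min(open_teams,
                   key=lambda t: (conflicts(person, t), len(t), random.random()))
        best.append(person)
    return teams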

Algorithm for creating a school timetable

I've been wondering if there are known algorithms for creating a school timetable. Basically, it's about optimizing "hour dispersion" (for both teachers and classes) for given class-subject-teacher associations. We can assume that we have sets of classes, lesson subjects and teachers associated with each other at the input, and that the timetable should fit between 8AM and 4PM.
I guess there is probably no exact efficient algorithm for this, but maybe someone knows a good approximation or hints for developing one.
This problem is NP-Complete!
In a nutshell one needs to explore all possible combinations to find the list of acceptable solutions. Because of the variations in the circumstances in which the problem appears at various schools (for example: Are there constraints with regards to classrooms?, Are some of the classes split in sub-groups some of the time?, Is this a weekly schedule? etc.) there isn't a well known problem class which corresponds to all the scheduling problems. Maybe, the Knapsack problem has many elements of similarity with these problems at large.
A confirmation that this is both a hard problem and one for which people perennially seek a solution is to check this (long) list of (mostly commercial) scheduling software tools.
Because of the big number of variables involved, the biggest source of which are, typically, the faculty members' desires ;-)..., it is typically impractical to consider enumerating all possible combinations. Instead we need to choose an approach which visits a subset of the problem/solution spaces.
- Genetic algorithms, cited in another answer, are (or, IMHO, seem) well equipped to perform this kind of semi-guided search (the difficulty being to find a good evaluation function for the candidates kept for the next generation).
- Graph-rewriting approaches are also of use with this type of combinatorial optimization problem.
Rather than focusing on particular implementations of an automatic schedule generator program, I'd like to suggest a few strategies which can be applied, at the level of the definition of the problem.
The general rationale is that in most real-world scheduling problems some compromise will be required: not all constraints, expressed and implied, will be satisfied fully. Therefore we help ourselves by:
Defining and ranking all known constraints
Reducing the problem space by manually providing a set of additional constraints. This may seem counter-intuitive, but, for example, providing an initial, partially filled schedule (say roughly 30% of the time slots) that fully satisfies all constraints, and treating this partial schedule as immutable, significantly reduces the time/space needed to produce candidate solutions. Another way additional constraints help is, for example, "artificially" adding a constraint which prevents teaching some subjects on some days of the week (if this is a weekly schedule...); this type of constraint reduces the problem/solution spaces without, typically, excluding a significant number of good candidates.
Ensuring that some of the constraints of the problem can be quickly computed. This is often associated with the choice of data model used to represent the problem; the idea is to be able to quickly opt-for (or prune-out) some of the options.
Redefining the problem and allowing some of the constraints to be broken, a few times (typically towards the end nodes of the graph). The idea here is to either remove some of the constraints for filling in the last few slots in the schedule, or to have the automatic schedule generator stop shy of completing the whole schedule, instead providing us with a list of a dozen or so plausible candidates. A human is often in a better position to complete the puzzle, as indicated, possibly breaking a few of the constraints, using information which is not typically shared with the automated logic (e.g. the "No mathematics in the afternoon" rule can be broken on occasion for the "advanced math and physics" class; or "It is better to break one of Mr Jones' requirements than one of Ms Smith's" ;-) ).
In proof-reading this answer, I realize it stops short of providing a definite response, but it is nonetheless full of practical suggestions. I hope this helps with what is, after all, a "hard problem".
It's a mess. A royal mess. To add to the answers, which are already very complete, I want to point out my family's experience. My mother was a teacher and used to be involved in the process.
It turns out that having a computer do this is not only difficult to code per se; it is also difficult because there are conditions that are hard to specify to a pre-baked computer program. Examples:
a teacher teaches both at your school and at another institute. Clearly, if he ends the lesson there at 10.30, he cannot start at your premises at 10.30, because he needs some time to commute between the institutes.
two teachers are married. In general, it's considered good practice not to have two married teachers teaching the same class. These two teachers must therefore be given different classes.
two teachers are married, and their child attends the same school. Again, you have to prevent the two teachers from teaching the specific class their child is in.
the school has separate facilities, like one day the class is in one institute, and another day the class is in another.
the school has shared laboratories, but these laboratories are available only on certain weekdays (for security reasons, for example, where additional personnel is required).
some teachers have preferences for the free day: some prefer on Monday, some on Friday, some on Wednesday. Some prefer to come early in the morning, some prefer to come later.
you should not have situations where you have a lesson of say, history at the first hour, then three hours of math, then another hour of history. It does not make sense for the students, nor for the teacher.
you should spread the subjects evenly. It does not make sense to have only math in the first days of the week, and then only literature for the rest of the week.
you should give some teachers two consecutive hours to do evaluation tests.
As you can see, the problem is not NP-complete, it's NP-insane.
So what they do is that they have a large table with small plastic insets, and they move the insets around until a satisfying result is obtained. They never start from scratch: they normally start from the previous year timetable and make adjustments.
The International Timetabling Competition 2007 had a lesson scheduling track and exam scheduling track. Many researchers participated in that competition. Lots of heuristics and metaheuristics were tried, but in the end the local search metaheuristics (such as Tabu Search and Simulated Annealing) clearly beat other algorithms (such as genetic algorithms).
Take a look at the 2 open source frameworks used by some of the finalists:
JBoss OptaPlanner (Java, open source)
Unitime (Java, open source) - more for universities
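For illustration, here is a bare-bones simulated annealing loop of the kind those local-search metaheuristics are built around; random_timetable, neighbor and cost are placeholders you would supply for your own constraints (a sketch, not how OptaPlanner or Unitime actually implement it):

import math
import random

def simulated_annealing(random_timetable, neighbor, cost,
                        t_start=10.0, t_end=0.01, alpha=0.995, steps_per_t=100):
    """Generic simulated annealing: lower `cost` means fewer broken constraints."""
    current = random_timetable()
    current_cost = cost(current)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            candidate = neighbor(current)          # e.g. swap two lessons' time slots
            delta = cost(candidate) - current_cost
            # Always accept improvements; accept worsenings with probability e^(-delta/t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha
    return best, best_cost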
One of my half-term assignments was a genetic-algorithm school timetable generator.
The whole table is one "organism". There were some changes and caveats to the generic genetic-algorithm approach:
Rules were defined for "illegal tables": two classes in the same classroom, one teacher teaching two groups at the same time, etc. Such mutations were deemed lethal immediately and a new "organism" was sprouted in place of the "deceased". The initial organism was generated by a series of random tries until a legal (if senseless) one was found. Lethal mutations weren't counted towards the mutation count of an iteration.
"Exchange" mutations were much more common than "Modify" mutations. Changes were only between parts of the gene that made sense - no substituting a teacher with a classroom.
Small bonuses were assigned for bundling certain 2 hours together, for assigning same generic classroom in sequence for the same group, for keeping teacher's work hours and class' load continuous. Moderate bonuses were assigned for giving correct classrooms for given subject, keeping class hours within bonds (morning or afternoon), and such. Big bonuses were for assigning correct number of given subject, given workload for a teacher etc.
Teachers could create their workload schedules of "want to work then", "okay to work then", "doesn't like to work then", "can't work then", with proper weights assigned. Whole 24h were legal work hours except night time was very undesired.
The weight function... oh yeah. The weight function was a huge, monstrous product (as in multiplication) of weights assigned to selected features and properties. It was extremely steep, with a single property easily able to change it by an order of magnitude up or down - and there were hundreds or thousands of properties in one organism. This resulted in absolutely HUGE numbers as fitness values, and as a direct result the need to use a bignum library (GMP) to perform the calculations. For a small test case of some 10 groups, 10 teachers and 10 classrooms, the initial population started with a score around 10^-200-something and finished around 10^+300-something. The search was totally inefficient when the function was flatter. Also, the spread of values grew much wider with bigger "schools".
Computation-time-wise, there was little difference between a small population (100) over many generations and a big population (10k+) over fewer generations; the computation over the same time produced about the same quality.
The calculation (on a roughly 1 GHz CPU) would take about 1 hour to stabilize near 10^+300, generating schedules that looked quite nice for said 10x10x10 test case.
The problem is easily parallelizable by providing a networking facility that exchanges the best specimens between computers running the computation.
The resulting program never saw daylight outside getting me a good grade for the semester. It showed some promise but I never got enough motivation to add any GUI and make it usable to general public.
This problem is tougher than it seems.
As others have alluded to, this is an NP-complete problem, but let's analyse what that means.
Basically, it means you have to look at all possible combinations.
But "look at" doesn't tell you much what you need to do.
Generating all possible combinations is easy. It might produce a huge amount of data, but you shouldn't have much problems understanding the concepts of this part of the problem.
The second problem is the one of judging whether a given possible combination is good, bad, or better than the previous "good" solution.
For this you need more than just "is it a possible solution".
For instance, is the same teacher working 5 days a week for X weeks straight? Even if that is a working solution, it might not be a better solution than alternating between two people so that each teacher does one week each. Oh, you didn't think about that? Remember, this is people you're dealing with, not just a resource allocation problem.
Even if one teacher could work full-time for 16 weeks straight, that might be a sub-optimal solution compared to a solution where you try to alternate between teachers, and this kind of balancing is very hard to build into software.
To summarize, producing a good solution to this problem will be worth a lot to many, many people. Hence, it's not an easy problem to break down and solve. Be prepared to stake out some goals that aren't 100% and to call them "good enough".
My timetabling algorithm, implemented in FET (Free Timetabling Software, http://lalescu.ro/liviu/fet/ , a successful application):
The algorithm is heuristic. I named it "recursive swapping".
Input: a set of activities A_1...A_n and the constraints.
Output: a set of times TA_1...TA_n (the time slot of each activity. Rooms are excluded here, for simplicity). The algorithm must put each activity at a time slot, respecting constraints. Each TA_i is between 0 (T_1) and max_time_slots-1 (T_m).
Constraints:
C1) Basic: a list of pairs of activities which cannot be simultaneous (for instance, A_1 and A_2, because they have the same teacher or the same students);
C2) Lots of other constraints (excluded here, for simplicity).
The timetabling algorithm (which I named "recursive swapping"):
Sort the activities, most difficult first. This is not a critical step, but it speeds up the algorithm maybe 10 times or more.
Try to place each activity (A_i) in an allowed time slot, following the above order, one at a time. Search for an available slot (T_j) for A_i, in which this activity can be placed respecting the constraints. If more slots are available, choose a random one. If none is available, do recursive swapping:
a. For each time slot T_j, consider what happens if you put A_i into T_j. There will be a list of other activities which don't agree with this move (for instance, activity A_k is on the same slot T_j and has the same teacher or same students as A_i). Keep a list of conflicting activities for each time slot T_j.
b. Choose a slot (T_j) with lowest number of conflicting activities. Say the list of activities in this slot contains 3 activities: A_p, A_q, A_r.
c. Place A_i at T_j and make A_p, A_q, A_r unallocated.
d. Recursively try to place A_p, A_q, A_r (if the level of recursion is not too large, say 14, and if the total number of recursive calls counted since step 2) on A_i began is not too large, say 2*n), as in step 2).
e. If successfully placed A_p, A_q, A_r, return with success, otherwise try other time slots (go to step 2 b) and choose the next best time slot).
f. If all (or a reasonable number of) time slots were tried unsuccessfully, return without success.
g. If we are at level 0, and we had no success in placing A_i, place it like in steps 2 b) and 2 c), but without recursion. We have now 3 - 1 = 2 more activities to place. Go to step 2) (some methods to avoid cycling are used here).
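A condensed sketch of that recursive-swapping idea (not the actual FET implementation): conflicting(a, slot, schedule) is assumed to return the activities that would clash with placing activity a at slot.

import random

MAX_DEPTH = 14   # recursion limit, as in the description above

def place(a, schedule, conflicting, slots, depth=0):
    """Try to place activity `a`, recursively displacing conflicting activities.

    schedule: dict activity -> slot; conflicting(a, slot, schedule) must return
    the activities that clash with putting `a` at `slot`. Returns True on success
    (schedule is then updated in place)."""
    free = [s for s in slots if not conflicting(a, s, schedule)]
    if free:
        schedule[a] = random.choice(free)
        return True
    if depth >= MAX_DEPTH:
        return False
    # Prefer the slots that would displace the fewest activities.
    for s in sorted(slots, key=lambda s: len(conflicting(a, s, schedule))):
        trial = dict(schedule)
        displaced = conflicting(a, s, trial)
        for d in displaced:
            del trial[d]
        trial[a] = s
        if all(place(d, trial, conflicting, slots, depth + 1) for d in displaced):
            schedule.clear()
            schedule.update(trial)
            return True
        # otherwise discard `trial` and try the next slot
    return False

Activities would be sorted most-difficult-first and fed to place() one at a time, as in step 1 above; the cap on the total number of recursive calls is omitted here for brevity.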
UPDATE, from the comments: ... it should have heuristics too!
I'd go with Prolog ... then use Ruby or Perl or something to cleanup your solution into a prettier form.
teaches(Jill,math).
teaches(Joe,history).
involves(MA101,math).
involves(SS104,history).
myHeuristic(D,A,B) :- [test_case]->D='<';D='>'.
createSchedule :- findall(Class,involves(Class,Subject),Classes),
                  predsort(myHeuristic,Classes,ClassesNew),
                  createSchedule(ClassesNew,[]).
createSchedule(Classes,Scheduled) :- [the actual recursive algorithm].
I am (still) in the process of doing something similar to this problem, using the path I just mentioned. Prolog (as a logic language) really makes it easier to express solutions to NP-hard problems like this.
Genetic algorithms are often used for such scheduling.
Found this example (Making Class Schedule Using Genetic Algorithm) which matches your requirement pretty well.
Here are a few links I found:
School timetable - Lists some problems involved
A Hybrid Genetic Algorithm for School Timetabling
Scheduling Utilities and Tools
This paper describes the school timetable problem and their approach to the algorithm pretty well: "The Development of SYLLABUS—An Interactive, Constraint-Based Scheduler for Schools and Colleges."[PDF]
The author informs me the SYLLABUS software is still being used/developed here: http://www.scientia.com/uk/
I work on a widely-used scheduling engine which does exactly this. Yes, it is NP-Complete; the best approaches seek to approximate an optimal solution. And, of course there are a lot of different ways to say which one is the "best" solution - is it more important that your teachers are happy with their schedules, or that students get into all their classes, for instance?
The absolute most important question you need to resolve early on is what makes one way of scheduling this system better than another? That is, if I have a schedule with Mrs Jones teaching Math at 8 and Mr Smith teaching Math at 9, is that better or worse than one with both of them teaching Math at 10? Is it better or worse than one with Mrs Jones teaching at 8 and Mr Jones teaching at 2? Why?
The main advice I'd give here is to divide the problem up as much as possible - maybe course by course, maybe teacher by teacher, maybe room by room - and work on solving the sub-problem first. There you should end up with multiple solutions to choose from, and need to pick one as the most likely optimal. Then, work on making the "earlier" sub-problems take into account the needs of later sub-problems in scoring their potential solutions. Then, maybe work on how to get yourself out of painted-into-the-corner situations (assuming you can't anticipate those situations in earlier sub-problems) when you get to a "no valid solutions" state.
A local-search optimization pass is often used to "polish" the end answer for better results.
Note that typically we are dealing with highly resource-constrained systems in school scheduling. Schools don't go through the year with a lot of empty rooms or teachers sitting in the lounge 75% of the day. Approaches which work best in solution-rich environments aren't necessarily applicable in school scheduling.
Generally, constraint programming is a good approach to this type of scheduling problem. A search on "constraint programming" and scheduling or "constraint based scheduling" both within stack overflow and on Google will generate some good references. It's not impossible - it's just a little hard to think about when using traditional optimization methods like linear or integer optimization. One output would be - does a schedule exist that satisfies all the requirements? That, in itself, is obviously helpful.
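To make the constraint-programming suggestion concrete, here is a tiny feasibility model using Google OR-tools' CP-SAT solver (one CP library among several you could use); the data is invented purely for illustration:

from ortools.sat.python import cp_model

# Toy data (illustrative only): lesson -> teacher, to be placed in 6 time slots.
lessons = {"math_a": "jones", "math_b": "jones",
           "history_a": "smith", "physics_a": "smith"}
num_slots = 6

model = cp_model.CpModel()
slot = {name: model.NewIntVar(0, num_slots - 1, name) for name in lessons}

# A teacher cannot give two lessons in the same time slot.
for a in lessons:
    for b in lessons:
        if a < b and lessons[a] == lessons[b]:
            model.Add(slot[a] != slot[b])

# An "artificial" constraint of the kind discussed earlier: no math in the last slot.
for name in lessons:
    if name.startswith("math"):
        model.Add(slot[name] != num_slots - 1)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in lessons:
        print(name, "-> slot", solver.Value(slot[name]))
else:
    print("No schedule satisfies all the requirements.")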
Good luck !
I have designed commercial algorithms for both class timetabling and examination timetabling. For the first I used integer programming; for the second, a heuristic based on maximizing an objective function by choosing slot swaps, very similar to the original manual process that had evolved. The main things in getting such solutions accepted are the ability to represent all the real-world constraints, and for human timetablers not to be able to see ways to improve the solution. In the end the algorithmic part was quite straightforward and easy to implement compared with the preparation of the databases, the user interface, the ability to report on statistics like room utilization, user education and so on.
You can tackle it with genetic algorithms, yes. But you shouldn't :). They can be too slow, and parameter tuning can be too time-consuming, etc.
There are other successful approaches, all implemented in open source projects:
Constraint based approach
Implemented in UniTime (not really for schools)
You could also go further and use integer programming. This was done successfully at the University of Udine and also at the University of Bayreuth (I was involved there) using commercial software (ILOG CPLEX).
Rule-based approach with heuristics - see Drools Planner
Different heuristics - FET and my own
See here for a timetabling software list
I think you should use genetic algorithm because:
It is best suited for large problem instances.
It trades some accuracy for reduced running time (you get a good answer, not necessarily the best one).
You can specify constraints and preferences easily by adding fitness penalties for the ones that are not met.
You can specify a time limit for program execution.
The quality of the solution depends on how much time you are willing to spend solving the problem.
Genetic Algorithms Definition
Genetic Algorithms Tutorial
Class scheduling project with GA
Also take a look at: a similar question and another one.
This problem is MASSIVE where I work - imagine 1,800 subjects/modules and 350,000 students, each doing 5 to 10 modules, and you want to build an exam timetable over 10 weeks, where papers are 1 hour to 3 days long... One plus is that all exams are online; on the downside, we cannot exceed the system's load of at most 5k concurrent exams. So yes, we are doing this now in the cloud on scaling servers.
The "solution" we used was simply to order modules by how many other modules they "clash" with (a clash meaning some student takes both), descending, and then to pack them in greedily, allowing these long papers to actually overlap; otherwise it simply cannot be done.
So when things get too large, I found this "heuristic" to be practical... at least.
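Roughly, that clash-count ordering could look like this (a sketch; enrolments is an assumed mapping from each student to the set of modules they take, and the actual packing of long papers is left out):

from collections import defaultdict
from itertools import combinations

def order_by_clashes(enrolments):
    """Order modules by how many other modules they clash with, descending.

    enrolments: dict student -> set of modules that student takes."""
    clashes = defaultdict(set)
    all_modules = set()
    for modules in enrolments.values():
        all_modules |= set(modules)
        for a, b in combinations(modules, 2):
            clashes[a].add(b)
            clashes[b].add(a)
    return sorted(all_modules, key=lambda m: len(clashes[m]), reverse=True)

# The heavily-clashing modules are scheduled first; the rest are packed greedily
# into whatever room remains in the exam window.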
I don't know if anyone will agree with this code, but I developed it with my own algorithm and it is working for me in Ruby. I hope it helps those who are searching for it.
In the following code, periodflag, dayflag, subjectflag and teacherflag are hashes mapping the corresponding id to a Boolean flag value.
Any issues, contact me. (-_-)
# periodflag, subjectflag and teacherflag map ids to an "already used" flag.
# k1 presumably comes from an enclosing dayflag loop (not shown here).
periodflag.each do |k2, v2|
  if TimetableDefinition.find(k2).period.to_i != 0
    subjectflag.each do |k3, v3|
      if v3 == 0
        if getflag_period(periodflag, k2)
          @teachers = EmployeesSubject.where(subject_name: @subjects.find(k3).name,
                                             division_id: division.id).pluck(:employee_id)
          @teacherlists = Employee.find(@teachers)
          teacherflag = Hash[teacher_flag(@teacherlists, teacherflag, flag).to_a.shuffle]
          teacherflag.each do |k4, v4|
            if v4 == 0
              if getflag_subject(subjectflag, k3)
                subjectperiod = TimetableAssign.where("timetable_definition_id = ? AND subject_id = ?", k2, k3)
                if subjectperiod.blank?
                  issubjectpresent = TimetableAssign.where("section_id = ? AND subject_id = ?", section.id, k3)
                  if issubjectpresent.blank?
                    isteacherpresent = TimetableAssign.where("section_id = ? AND employee_id = ?", section.id, k4)
                    if isteacherpresent.blank?
                      # Subject and teacher are both free: create a fresh assignment.
                      @finaltt = TimetableAssign.new
                      @finaltt.timetable_struct_id = @timetable_struct.id
                      @finaltt.employee_id = k4
                      @finaltt.section_id = section.id
                      @finaltt.standard_id = standard.id
                      @finaltt.division_id = division.id
                      @finaltt.subject_id = k3
                      @finaltt.timetable_definition_id = k2
                      @finaltt.timetable_day_id = k1
                      set_school_id(@finaltt, current_user)
                      if @finaltt.save
                        setflag_sub(subjectflag, k3, 1)
                        setflag_period(periodflag, k2, 1)
                        setflag_teacher(teacherflag, k4, 1)
                      end
                    end
                  else
                    # Subject already assigned to this section: reuse its teacher and struct.
                    @subjectdetail = TimetableAssign.find_by_section_id_and_subject_id(@section.id, k3)
                    @finaltt = TimetableAssign.new
                    @finaltt.timetable_struct_id = @subjectdetail.timetable_struct_id
                    @finaltt.employee_id = @subjectdetail.employee_id
                    @finaltt.section_id = section.id
                    @finaltt.standard_id = standard.id
                    @finaltt.division_id = division.id
                    @finaltt.subject_id = @subjectdetail.subject_id
                    @finaltt.timetable_definition_id = k2
                    @finaltt.timetable_day_id = k1
                    set_school_id(@finaltt, current_user)
                    if @finaltt.save
                      setflag_sub(subjectflag, k3, 1)
                      setflag_period(periodflag, k2, 1)
                      setflag_teacher(teacherflag, k4, 1)
                    end
                  end
                end
              end
            end
          end
        end
      end
    end
  end
end

How to rank a million images with a crowdsourced sort

I'd like to rank a collection of landscape images by making a game whereby site visitors can rate them, in order to find out which images people find the most appealing.
What would be a good method of doing that?
Hot-or-Not style? I.e. show a single image and ask the user to rate it from 1-10. As I see it, this allows me to average the scores, and I would just need to ensure that I get an even distribution of votes across all the images. Fairly simple to implement.
Pick A-or-B? I.e. show two images, ask user to pick the better one. This is appealing as there is no numerical ranking, it's just a comparison. But how would I implement it? My first thought was to do it as a quicksort, with the comparison operations being provided by humans, and once completed, simply repeat the sort ad-infinitum.
How would you do it?
If you need numbers, I'm talking about one million images, on a site with 20,000 daily visits. I'd imagine a small proportion might play the game, for the sake of argument, lets say I can generate 2,000 human sort operations a day! It's a non-profit website, and the terminally curious will find it through my profile :)
As others have said, rating 1-10 does not work that well because different people calibrate their scores differently.
The problem with the Pick A-or-B method is that the results are not guaranteed to be transitive (A can beat B, B can beat C, and C can beat A). Having non-transitive comparison operators breaks sorting algorithms. With quicksort, in this example, the two items not chosen as the pivot can end up incorrectly ranked relative to each other.
At any given time, you want an absolute ranking of all the pictures (even if some/all of them are tied). You also want your ranking not to change unless someone votes.
I would use the Pick A-or-B (or tie) method, but determine ranking similar to the Elo ratings system which is used for rankings in 2 player games (originally chess):
The Elo player-rating system compares players' match records against their opponents' match records and determines the probability of the player winning the matchup. This probability factor determines how many points a player's rating goes up or down based on the results of each match. When a player defeats an opponent with a higher rating, the player's rating goes up more than if he or she had defeated a player with a lower rating (since players are expected to defeat opponents who have lower ratings).
The Elo System:
All new players start out with a base rating of 1600
WinProbability = 1 / (10^((OpponentRating - PlayerRating)/400) + 1)
ScoringPt = 1 point if they win the match, 0 if they lose, and 0.5 for a draw.
NewRating = OldRating + K-Value * (ScoringPt - WinProbability)
Replace "players" with pictures and you have a simple way of adjusting both pictures' rating based on a formula. You can then perform a ranking using those numeric scores. (K-Value here is the "Level" of the tournament. It's 8-16 for small local tournaments and 24-32 for larger invitationals/regionals. You can just use a constant like 20).
With this method, you only need to keep one number for each picture which is a lot less memory intensive than keeping the individual ranks of each picture to each other picture.
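A minimal sketch of that update for two pictures, following the formulas above and using a constant K of 20 as suggested:

K = 20  # constant K-value, as suggested above

def elo_update(rating_a, rating_b, score_a):
    """Return the new ratings after picture A meets picture B.

    score_a: 1 if A won, 0 if A lost, 0.5 for a draw."""
    expected_a = 1.0 / (10 ** ((rating_b - rating_a) / 400.0) + 1.0)
    expected_b = 1.0 - expected_a
    new_a = rating_a + K * (score_a - expected_a)
    new_b = rating_b + K * ((1.0 - score_a) - expected_b)
    return new_a, new_b

# Example: a 1600-rated picture beats a 1700-rated one.
print(elo_update(1600, 1700, 1))   # roughly (1612.8, 1687.2)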
EDIT: Added a little more meat based on comments.
Most naive approaches to the problem have some serious issues. The worst is how bash.org and qdb.us display quotes - users can vote a quote up (+1) or down (-1), and the list of best quotes is sorted by the total net score. This suffers from a horrible time bias - older quotes have accumulated huge numbers of positive votes via simple longevity even if they're only marginally humorous. This algorithm might make sense if jokes got funnier as they got older but - trust me - they don't.
There are various attempts to fix this - looking at the number of positive votes per time period, weighting more recent votes, implementing a decay system for older votes, calculating the ratio of positive to negative votes, etc. Most suffer from other flaws.
The best solution - I think - is the one that the websites The Funniest, The Cutest, The Fairest, and Best Thing use - a modified Condorcet voting system:
The system gives each one a number based on, out of the things that it has faced, what percentage of them it usually beats. So each one gets the percentage score NumberOfThingsIBeat / (NumberOfThingsIBeat + NumberOfThingsThatBeatMe). Also, things are barred from the top list until they've been compared to a reasonable percentage of the set.
If there's a Condorcet winner in the set, this method will find it. Since that's unlikely, given the statistical nature, it finds the one that's the "closest" to being a Condorcet winner.
For more information on implementing such systems the Wikipedia page on Ranked Pairs should be helpful.
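That percentage score is trivial to compute from a table of pairwise results; a sketch, where the "reasonable percentage of the set" threshold is a tunable parameter:

def condorcet_style_scores(wins, losses, total_items, min_coverage=0.05):
    """Score = wins / (wins + losses); items compared against too few
    others are excluded from the top list.

    wins, losses: dicts item -> count of opponents beaten / lost to."""
    scores = {}
    for item in wins.keys() | losses.keys():
        w, l = wins.get(item, 0), losses.get(item, 0)
        if (w + l) > 0 and (w + l) >= min_coverage * total_items:
            scores[item] = w / (w + l)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)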
The algorithm requires people to compare two objects (your Pick-A-or-B option), but frankly, that's a good thing. I believe it's very well accepted in decision theory that humans are vastly better at comparing two objects than they are at abstract ranking. Millions of years of evolution make us good at picking the best apple off the tree, but terrible at deciding how closely the apple we picked hews to the true Platonic Form of appleness. (This is, by the way, why the Analytic Hierarchy Process is so nifty...but that's getting a bit off topic.)
One final point to make is that SO uses an algorithm to find the best answers which is very similar to bash.org's algorithm to find the best quote. It works well here, but fails terribly there - in large part because an old, highly rated, but now outdated answer here is likely to be edited. bash.org doesn't allow editing, and it's not clear how you'd even go about editing decade-old jokes about now-dated internet memes even if you could... In any case, my point is that the right algorithm usually depends on the details of your problem. :-)
I know this question is quite old but I thought I'd contribute
I'd look at the TrueSkill system developed at Microsoft Research. It's like ELO but has a much faster convergence time (looks exponential compared to linear), so you get more out of each vote. It is, however, more complex mathematically.
http://en.wikipedia.org/wiki/TrueSkill
I don't like the Hot-or-Not style. Different people would pick different numbers even if they all liked the image exactly the same. Also I hate rating things out of 10, I never know which number to choose.
Pick A-or-B is much simpler and more fun. You get to see two images, and comparisons are made between the images on the site.
These equations from Wikipedia (the standard Elo expected-score and rating-update formulas) make it simpler/more effective to calculate Elo ratings; the algorithm for images A and B would be:
Get Ne, mA, mB and the ratings RA, RB from your database.
Calculate KA, KB, QA, QB using the number of comparisons performed (Ne), the number of times each image has been compared (mA, mB), and the current ratings, where QA = 10^(RA/400) and QB = 10^(RB/400).
Calculate the expected scores EA = QA / (QA + QB) and EB = QB / (QA + QB).
Score the winner's S: 1 for the winner, 0 for the loser, and 0.5 each for a draw.
Calculate the new ratings for both using R'A = RA + KA * (SA - EA) and R'B = RB + KB * (SB - EB).
Update the new ratings RA, RB and counts mA, mB in the database.
You may want to go with a combination.
First phase:
Hot-or-Not style (although I would go with a 3-option vote: Sucks, Meh/OK, Cool!)
Once you've sorted the set into the 3 buckets, then I would select two images from the same bucket and go with the "Which is nicer"
You could then use the English football system of promotion and relegation to move the top few "Sucks" entries into the Meh/OK region, in order to refine the edge cases.
Rating 1-10 won't work; everyone calibrates differently. Someone who always gives ratings of 3-7 would have their rankings eclipsed by people who always give 1 or 10.
A-or-B is more workable.
Wow, I'm late in the game.
I like the ELO system very much, but as Owen says, it seems to me that you'd be slow to build up any significant results.
I believe humans have much greater capacity than just comparing two images, but you want to keep interactions to the bare minimum.
So how about you show n images (n being any number you can visibly display on a screen; this may be 10, 20, or 30 depending on the user's preference) and get the user to pick which they think is best of the lot. Now back to ELO. You need to modify your rating system, but keep the same spirit. You have in fact compared one image to n-1 others. So you apply the ELO update n-1 times, but divide each rating change by n-1 (so that results with different values of n are coherent with one another).
You're done. You've now got the best of all worlds. A simple rating system working with many images in one click.
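A sketch of that variant, reusing the standard two-image ELO update and dividing each rating change by n-1:

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def best_of_n_update(ratings, winner, others, k=20.0):
    """The user picked `winner` out of n images shown together.

    Apply a standard Elo update against each of the n-1 `others`, with
    the rating change divided by n-1, using the pre-vote ratings."""
    r_w = ratings[winner]
    scale = k / len(others)
    for other in others:
        e = expected(r_w, ratings[other])
        ratings[winner] += scale * (1.0 - e)
        ratings[other] -= scale * (1.0 - e)   # loser moves down symmetrically
    return ratings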
If you prefer using the Pick A or B strategy I would recommend this paper: http://research.microsoft.com/en-us/um/people/horvitz/crowd_pairwise.pdf
Chen, X., Bennett, P. N., Collins-Thompson, K., & Horvitz, E. (2013, February). Pairwise ranking aggregation in a crowdsourced setting. In Proceedings of the sixth ACM international conference on Web search and data mining (pp. 193-202). ACM.
The paper tells about the Crowd-BT model which extends the famous Bradley-Terry pairwise comparison model into crowdsource setting. It also gives an adaptive learning algorithm to enhance the time and space efficiency of the model. You can find a Matlab implementation of the algorithm on Github (but I'm not sure if it works).
The defunct web site whatsbetter.com used an Elo style method. You can read about the method in their FAQ on the Internet Archive.
Pick A-or-B is the simplest and least prone to bias; however, each human interaction gives you substantially less information. I think that because of the bias reduction, Pick A-or-B is superior, and in the limit it provides you with the same information.
A very simple scoring scheme is to have a count for each picture. When someone gives a positive comparison increment the count, when someone gives a negative comparison, decrement the count.
Sorting a 1-million integer list is very quick and will take less than a second on a modern computer.
That said, the problem is rather ill-posed - it will take you 50 days to show each image even once.
I bet, though, that you are more interested in the most highly ranked images? So you probably want to bias your image retrieval by predicted rank, so that you are more likely to show images that have already achieved a few positive comparisons. This way you will more quickly start showing "interesting" images.
I like the quick-sort option, but I'd make a few tweaks:
Keep the "comparison" results in a DB and then average them.
Get more than one comparison per view by giving the user 4-6 images and having them sort them.
Select which images to display by running qsort, recording the results, and trimming anything you don't have enough data on. Then, when you have enough items recorded, spit out a page.
The other fun option would be to use the crowd to teach a neural-net.
