Algorithm for incomplete ranking with imprecise comparisons

SUMMARY
I'm looking for an algorithm to rank objects. Two objects can be compared. However, the comparisons are real world comparisons that may be flawed. Also, I care more about finding out the very best object than which ones are the worst.
TO MOTIVATE:
Imagine that I'm scientifically evaluating materials. I compare two materials at a time. I want to find the best-performing material for in-depth testing, so I don't care about materials that are unpromising. However, each test can yield a false positive or show anomalies specific to that particular pair of materials.
PRECISE PROBLEM:
There is an unlimited pool of objects.
Two objects can be compared to each other. It is resource expensive to compare two objects.
It's resource expensive to consider an additional object. So, an object should only be included in the evaluation if it can be fully ranked.
It is very important to find the very best object in the pool of the tested ones. If an object is in the bottom half, it doesn't matter to find out where in the bottom half it is. The importance of finding out the exact rank is a gradient with the top much more important.
Most of the time, if A > B and B > C, it is safe to assume that A > C. Sometimes, there are false positives. Sometimes A > B and B > C and C > A. This is not an abstract math space but real world measurements.
At the start, it is not known how many comparisons are allowed to be taken. The algorithm is granted permission to do another comparison until it isn't. Thus, the decision on including an additional object or testing more already tested objects has to be made.
TO MOTIVATE MORE IN-DEPTH:
Imagine that you are tasked with hiring a team of boxers. You know nothing about evaluating boxers but can ask two boxers to fight each other. There is an unlimited number of boxers in the world, but it's expensive to fly them in. Ideally, you want to hire the n best boxers. Realistically, you don't know whether the boxers will accept your offer, and you don't know how competitively the other boxing clubs bid. You are going to make offers to only the best n boxers, but you have to be prepared to know which n boxers to send offers to next. Ending up with only the worst boxers is very unlikely.
SOME APPROACHES
I could think of the following approaches. However, they all have drawbacks. I feel like there should be a much better approach.
USE TRADITIONAL SORTING ALGORITHMS
Traditional sorting algorithms could be used.
Drawback:
- A false positive could seriously throw off the correctness of the algorithm.
- A sorting algorithm would spend half the time sorting the bottom half of the pack, which is unimportant.
- Sorting algorithms start with all items. With this problem, we are allowed to do the first test without knowing if we are allowed to do a second test. We may end up only being allowed to do two tests. Or we may be allowed to do a million tests.
USE TOURNAMENT ALGORITHMS
There are algorithms for tournaments. E.g., everyone gets a first match. The winner of the first match moves on to the next round. There is a variety of tournament strategies that account for people having a bad day or being paired with the champion in their first match.
Drawback:
- This seems pretty promising. The difficulty is to find one that allows adding one more player at a time as we are allowed more comparisons. It seems that there should be a highly specialized solution that's better than a standard tournament algorithm.
BINARY SEARCH
We could start with two objects. Each time an object is added, we could use a binary search to find its spot in the ranking. Because the top is more important, we could use a weighted binary search: e.g., instead of testing the midpoint, it tests the point one third from the top.
Drawback:
- The algorithm doesn't correct for false positives. If there is a false positive at the top early on, it could skew the whole rest of the tests.
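For reference, a minimal sketch of the weighted insertion described above, assuming the ranking is kept best-first and compare(a, b) is the expensive real-world test returning True when a beats b (the function name and the 1/3 split are illustrative assumptions):
def weighted_insert(ranking, new_obj, compare, split=1.0 / 3.0):
    """Insert new_obj into ranking (best first) by probing nearer the top.

    compare(a, b) -> True if a beats b; split is the fraction of the current
    window at which to probe (1/3 makes the search top-heavy).
    """
    lo, hi = 0, len(ranking)                # new_obj will land at an index in [lo, hi]
    while lo < hi:
        mid = lo + int((hi - lo) * split)   # probe closer to the top than the midpoint
        if compare(new_obj, ranking[mid]):  # the new object beats the probed one
            hi = mid                        # so it belongs above the probe
        else:
            lo = mid + 1                    # otherwise it belongs below the probe
    ranking.insert(lo, new_obj)
    return lo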
COUNT WINS AND LOSSES
The wins and losses could be counted. The algorithm would choose test subjects by a priority of the least losses and second priority of the most wins. This would focus on testing the best objects. If an object has zero losses, it would get the focus of the testing. It would either quickly get a loss and drop in priority, or it would get a lot more tests because it's the likely top candidate.
Drawback:
- The approach is very nice in that it corrects for false positives. It also allows adding more objects to the test pool easily. However, it does not consider that a win against a top object counts a lot more than a win against a bottom object. Thus, comparisons are wasted.
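For reference, a sketch of the selection rule described under COUNT WINS AND LOSSES, assuming each object carries running win/loss counters (the record layout and the random choice of opponent are assumptions for illustration):
import random

def pick_pair(records):
    """records: dict mapping object -> {"wins": int, "losses": int}.

    Pick the most promising object (fewest losses, then most wins) and a
    random distinct opponent to compare it against. Assumes >= 2 objects.
    """
    focus = min(records, key=lambda o: (records[o]["losses"], -records[o]["wins"]))
    opponent = random.choice([o for o in records if o is not focus])
    return focus, opponent

def record_result(records, winner, loser):
    """Update the counters after one real-world comparison."""
    records[winner]["wins"] += 1
    records[loser]["losses"] += 1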
GRAPH
All the objects could be added to a graph, with an edge from the winner to the loser of each comparison. The graph could then be flattened into a linear ranking.
Drawback:
- I don't know how to flatten such a messy graph that could have cycles and ambiguous end nodes. There could be multiple objects that are undefeated. How would one pick a winner in such a messy graph? How would one know which comparison would be the most valuable?
SCORING
As a win depends on the rank of the loser, a win could be given a score. Say A > B means that A gets 1 point. If C > A, C gets 2 points because A already has 1 point. In the end, objects are ranked by how many points they have.
Drawback:
- The approach seems promising in that it is easy to add new objects to the pool of tested objects. It also takes into account that wins against top objects should count for more. However, I can't think of a good way to determine the points. That first comparison was awarded 1 point; once 10,000 objects are in the pool, an average win would be worth around 5,000 points, even though both tests should carry roughly equal weight. Later comparisons overpower the earlier ones and effectively cause them to be ignored when they shouldn't be.
Does anyone have a good idea on tackling this problem?

I would search for an easily computable value for an object that could be compared between objects to give a good enough approximation of order. You could compare each new object with the current best accurately, then insertion sort the loser into a list of the rest using its computed value.
The best will always be accurate. The ordering of the rest depends on your "value".
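A sketch of that idea, assuming an expensive compare(a, b) that returns True when a beats b and a cheap heuristic value(obj); both names, and the dictionary used to hold the state, are assumptions:
import bisect
import itertools

_tiebreak = itertools.count()   # keeps insort from ever comparing the objects themselves

def add_object(state, new_obj, compare, value):
    """state: {"best": object or None, "rest": sorted list of (-value, tiebreak, object)}.

    One accurate, expensive comparison defends the current champion; the loser
    is filed into the rest of the ranking using only its cheap heuristic value.
    """
    if state["best"] is None:
        state["best"] = new_obj
        return
    if compare(new_obj, state["best"]):           # the challenger beats the champion
        loser, state["best"] = state["best"], new_obj
    else:
        loser = new_obj
    # negate the value so the list stays best-first
    bisect.insort(state["rest"], (-value(loser), next(_tiebreak), loser))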

I would suggest looking into Elo rating systems and their derivatives (like Glicko, BayesElo, WHR, TrueSkill, etc.).
So you assign each object a preliminary rating, and then update that value according to the matches/comparisons you make, with bigger changes to the ratings the more unexpected the outcome was.
This still leaves open the question of how to decide which object to compare to which other object to gain most information. For that I suggest looking into tournament systems and playoff formats. Though I suspect that an optimal solution will be decidedly more ad-hoc than that.
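A minimal Elo-style update as a sketch; the K-factor of 32 and the starting rating of 1500 are conventional defaults, not anything prescribed here:
def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(ratings, winner, loser, k=32.0, start=1500.0):
    """ratings: dict object -> rating; unseen objects get the starting rating."""
    ra = ratings.get(winner, start)
    rb = ratings.get(loser, start)
    ea = expected_score(ra, rb)             # expected score of the winner
    ratings[winner] = ra + k * (1.0 - ea)   # big gain if the win was unexpected
    ratings[loser] = rb - k * (1.0 - ea)    # symmetric loss for the loser
Objects would then be ranked by rating, and the next comparison could, for example, pair the current leader with its closest-rated rival to sharpen the top of the ranking first.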

Related

Sorting in Computer Science vs. sorting in the 'real' world

I was thinking about sorting algorithms in software, and possible ways one could surmount the O(n log n) roadblock. I don't think it IS possible to sort faster in a practical sense, so please don't think that I do.
With that said, it seems with almost all sorting algorithms, the software must know the position of each element. Which makes sense, otherwise, how would it know where to place each element according to some sorting criteria?
But when I crossed this thinking with the real world, a centrifuge has no idea what position each molecule is in when it 'sorts' the molecules by density. In fact, it doesn't care about the position of each molecule. However it can sort trillions upon trillions of items in a relatively short period of time, due to the fact that each molecule follows density and gravitational laws - which got me thinking.
Would it be possible with some overhead on each node (some value or method tacked on to each of the nodes) to 'force' the order of the list? Something like a centrifuge, where only each element cares about its relative position in space (in relation to other nodes). Or, does this violate some rule in computation?
I think one of the big points brought up here is the quantum mechanical effects of nature and how they apply in parallel to all particles simultaneously.
Perhaps classical computers inherently restrict sorting to the domain of O(n log n), whereas quantum computers may be able to cross that threshold into O(log n) algorithms that act in parallel.
The point that a centrifuge is basically a parallel bubble sort seems to be correct; that has a time complexity of O(n).
I guess the next thought is that if nature can sort in O(n), why can't computers?
EDIT: I had misunderstood the mechanism of a centrifuge and it appears that it does a comparison, a massively-parallel one at that. However there are physical processes that operate on a property of the entity being sorted rather than comparing two properties. This answer covers algorithms that are of that nature.
A centrifuge applies a sorting mechanism that doesn't really work by means of comparisons between elements, but actually by a property ('centrifugal force') on each individual element in isolation. Some sorting algorithms fall into this theme, especially Radix Sort. When this sorting algorithm is parallelized, it should approach the example of a centrifuge.
Some other non-comparative sorting algorithms are Bucket sort and Counting Sort. You may find that Bucket sort also fits into the general idea of a centrifuge (the radius could correspond to a bin).
Another so-called 'sorting algorithm' where each element is considered in isolation is the Sleep Sort. Here time rather than the centrifugal force acts as the magnitude used for sorting.
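To make the bin analogy concrete, here is a small bucket sort sketch (the bucket count of 10 is arbitrary):
def bucket_sort(values, num_buckets=10):
    """Distribute values into buckets by magnitude, then sort each bucket.

    Each bucket plays the role of a radius band in the centrifuge analogy:
    an element's own magnitude decides where it goes, independently of the others.
    """
    if not values:
        return []
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_buckets or 1            # avoid zero width when all values are equal
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        idx = min(int((v - lo) / width), num_buckets - 1)
        buckets[idx].append(v)
    out = []
    for b in buckets:
        out.extend(sorted(b))                       # any comparison sort is fine inside a bucket
    return out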
Computational complexity is always defined with respect to some computational model. For example, an algorithm that's O(n) on a typical computer might be O(2^n) if implemented in Brainfuck.
The centrifuge computational model has some interesting properties; for example:
it supports arbitrary parallelism; no matter how many particles are in the solution, they can all be sorted simultaneously.
it doesn't give a strict linear sort of particles by mass, but rather a very close (low-energy) approximation.
it's not feasible to examine the individual particles in the result.
it's not possible to sort particles by different properties; only mass is supported.
Given that we don't have the ability to implement something like this in general-purpose computing hardware, the model may not have practical relevance; but it can still be worth examining, to see if there's anything to be learned from it. Nondeterministic algorithms and quantum algorithms have both been active areas of research, for example, even though neither is actually implementable today.
The trick there is that you only have a probability of sorting your list using a centrifuge. As with other real-world sorts [citation needed], you can change the probability that you have sorted your list, but you can never be certain without checking all the values (atoms).
Consider the question: "How long should you run your centrifuge for?"
If you only ran it for a picosecond, your sample may be less sorted than the initial state, whereas if you ran it for a few days, it may be completely sorted. However, you wouldn't know without actually checking the contents.
A real-world example of computer-based "ordering" would be autonomous drones that cooperatively work with each other, known as "drone swarms". The drones act and communicate both as individuals and as a group, and can track multiple targets. The drones collectively decide which drones will follow which targets, with the obvious need to avoid collisions between drones. The early versions of this were drones that moved through waypoints while staying in formation, but the formation could change.
For a "sort", the drones could be programmed to form a line or pattern in a specific order, initially released in any permutation or shape, and collectively and in parallel they would quickly form the ordered line or pattern.
Getting back to a computer based sort, one issue is that there's one main memory bus, and there's no way for a large number of objects to move about in memory in parallel.
"know the position of each element"
In the case of a tape sort, the position of each element (record) is only "known" to the "tape", not to the computer. A tape based sort only needs to work with two elements at a time, and a way to denote run boundaries on a tape (file mark, or a record of different size).
IMHO, people overthink log(n). O(n log(n)) IS practically O(n). And you need O(n) just to read the data.
Many algorithms such as quicksort do provide a very fast way to sort elements. You could implement variations of quicksort that would be very fast in practice.
Inherently all physical systems are infinitely parallel. You might have a buttload of atoms in a grain of sand; nature has enough computational power to figure out where each electron in each atom should be. So if you had enough computational resources (O(n) processors) you could sort n numbers in O(log n) time.
From comments:
Given a physical processor that has k elements, it can achieve parallelism of at most O(k). If you process n numbers arbitrarily, it would still process them at a rate related to k. Also, you could formulate this problem physically: you could create n steel balls with weights proportional to the numbers you want to encode, which could in theory be sorted by a centrifuge. But here the number of atoms you are using is proportional to n, whereas in the standard case you have a limited number of atoms in a processor.
Another way to think about this: if you have a small processor attached to each number and each processor can communicate with its neighbors, you could sort all those numbers in O(log n) time.
I worked in an office during the summers after high school, when I started college. I had studied, in AP Computer Science, sorting and searching, among other things.
I applied this knowledge in several physical systems that I can recall:
Natural merge sort to start…
A system printed multipart forms including a file-card-sized tear off, which needed to be filed in a bank of drawers.
I started with a pile of them and sorted the pile to begin with. The first step is picking up 5 or so, few enough to be easily placed in order in your hand. Place the sorted packet down, criss-crossing each stack to keep them separate.
Then, merge each pair of stacks, producing a larger stack. Repeat until there is only one stack.
…Insertion sort to complete
It is easier to file the sorted cards, as each next one is a little farther down the same open drawer.
Radix sort
This one nobody else understood how I did it so fast, despite repeated tries to teach it.
A large box of check stubs (the size of punch cards) needs to be sorted. It looks like playing solitaire on a large table—deal out, stack up, repeat.
In general
30 years ago, I did notice what you’re asking about: the ideas transfer to physical systems quite directly because there are relative costs of comparisons and handling records, and levels of caching.
Going beyond well-understood equivalents
I recall an essay about your topic, and it brought up the spaghetti sort. You trim a length of dried noodle to indicate the key value, and label it with the record ID. This is O(n), simply processing each item once.
Then you grab the bundle and tap one end on the table. They align on the bottom edges, and they are now sorted. You can trivially take off the longest one, and repeat. The read-out is also O(n).
There are two things going on here in the “real world” that don’t correspond to algorithms. First, aligning the edges is a parallel operation. Every data item is also a processor (the laws of physics apply to it). So, in general, you scale the available processing with n, essentially dividing your classic complexity by a factor of n.
Second, how does aligning the edges accomplish a sort? The real sorting is in the read-out, which lets you find the longest in one step, even though you did compare all of them to find the longest. Again, divide by a factor of n, so finding the largest is now O(1).
Another example is using analog computing: a physical model solves the problem “instantly” and the prep work is O(n). In principle the computation is scaling with the number of interacting components, not the number of prepped items. So the computation scales with n². The example I'm thinking of is a weighted multi-factor computation, which was done by drilling holes in a map, hanging weights from strings passing through the holes, and gathering all the strings on a ring.
Sorting is still O(n) total work; that it runs faster than that is because of parallelization.
You could view a centrifuge as a bucket sort of n atoms, parallelized over n cores (each atom acts as a processor).
You can make sorting faster by parallelization, but only by a constant factor, because the number of processors is limited: O(n/C) is still O(n) (CPUs usually have < 10 cores and GPUs < 6000).
The centrifuge is not sorting the nodes; it applies a force to them, and then they react to it in parallel.
So if you were to implement a bubble sort where each node moves itself up or down in parallel based on its "density", you'd have a centrifuge implementation.
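As a sketch, odd-even transposition sort is the usual way to express that "parallel bubble sort": in each round, every element only interacts with one neighbour, and all the swaps within a round are independent of each other (simulated sequentially here):
def odd_even_transposition_sort(a):
    """Parallel bubble sort: round r compare-and-swaps the disjoint neighbour
    pairs starting at index r % 2. With one processor per element each round
    costs O(1), and n rounds suffice, which is where the O(n) claim comes from."""
    a = list(a)
    n = len(a)
    for r in range(n):
        for i in range(r % 2, n - 1, 2):    # these pairs are disjoint, hence parallelizable
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a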
Keep in mind that in the real world you can run a very large number of parallel tasks, whereas in a computer the maximum number of truly parallel tasks equals the number of physical processing units.
In the end, you would also be limited by access to the list of elements, because it cannot be modified simultaneously by two nodes...
"Would it be possible with some overhead on each node (some value or method tacked on to each of the nodes) to 'force' the order of the list?"
When we sort using computer programs, we select a property of the values being sorted. That's commonly the magnitude of the number or the alphabetical order.
"Something like a centrifuge, where only each element cares about its relative position in space (in relation to other nodes)"
This analogy aptly reminds me of simple bubble sort: smaller numbers bubble up in each iteration, like your centrifuge logic.
So, to answer this: don't we actually do something of that sort in software-based sorting?
First of all, you are comparing two different contexts: one is logic (the computer) and the other is physics. So far it has been shown that we can model some parts of physics using mathematical formulas, and we as programmers can use these formulas to simulate (some parts of) physics in the logic world (e.g., the physics engine in a game engine).
Second, we have some possibilities in the computer (logic) world that are nearly impossible in physics. For example, we can access memory and find the exact location of each entity at any time, whereas in physics that is a huge problem (Heisenberg's uncertainty principle).
Third, if you want to map centrifuges and their operation in the real world onto the computer world, it is as if someone (a god) has given you a supercomputer with all the rules of physics applied and you are doing your small sorting in it (using a centrifuge); by saying that your sorting problem was solved in O(n), you are ignoring the huge physics simulation going on in the background...
Consider: is "centrifuge sort" really scaling better? Think about what happens as you scale up.
The test tubes have to get longer and longer.
The heavy stuff has to travel further and further to get to the bottom.
The moment of inertia increases, requiring more power and longer times to accelerate up to sorting speed.
It's also worth considering other problems with centrifuge sort. For example, you can only operate on a narrow size scale. A computer sorting algorithm can handle integers from 1 to 2^1024 and beyond, no sweat. Put something that weighs 2^1024 times as much as a hydrogen atom into a centrifuge and, well, that's a black hole and the galaxy has been destroyed. The algorithm failed.
Of course the real answer here is that computational complexity is relative to some computational model, as mentioned in another answer. And "centrifuge sort" doesn't make sense in the context of common computational models, such as the RAM model or the I/O model or multitape Turing machines.
Another perspective is that what you're describing with the centrifuge is analogous to what's been called the "spaghetti sort" (https://en.wikipedia.org/wiki/Spaghetti_sort). Say you have a box of uncooked spaghetti rods of varying lengths. Hold them in your fist, and loosen your hand to lower them vertically so the ends are all resting on a horizontal table. Boom! They're sorted by height. O(constant) time. (Or O(n) if you include picking the rods out by height and putting them in a . . . spaghetti rack, I guess?)
You can note there that it's O(constant) in the number of pieces of spaghetti, but, due to the finite speed of sound in spaghetti, it's O(n) in the length of the longest strand. So nothing comes for free.

Algorithm to find smallest number of points to cover area (war game)

I'm dealing with a war game. I have a list of my bases B(x,y) from which I can send attacks on the enemy (they have bases between my own bases). Each base B can attack at a range R (the same radius for all bases). How can I find my bases to be able to attack as many enemy bases as possible, but use a minimum number of my bases?
I've reduced the problem to finding the minimum number of bases (and their coordinates) required to cover the largest area possible. I wonder if there is a better way than looking at all the possible combinations, because the number of bases could reach thousands.
Example: If the attack radius is 10 and I have five bases in a square and its center: (0,0), (10,0), (10,10), (0,10), (5,5) then the answer is that only the first four would be needed because all the area covered by the one in the center is already covered by the others.
Note 1 The solution must be single-threaded.
Note 2 The solution doesn't have to be perfect if that means a big gain in speed. The number of bases reaches thousands, and this needs to use as little time as possible. I would consider a running time greater than 100 ms for 10,000 bases in Python on a modern computer unacceptable, so I was thinking maybe I could start by eliminating the obvious: if there are multiple bases within R/10 distance of each other, simply eliminate all except one (whichever).
If I understand you correctly, the enemy bases and your bases are given as well as the (constant) attack radius. I.e. if you select one of your bases, you know exactly which of the enemy bases get attacked due to the selection.
The first step would be to eliminate from the problem those enemy bases which cannot be attacked by any of your bases. Then, selecting all of your bases guarantees attacking all attackable enemy bases, so there is a solution that attacks as many enemy bases as possible.
Among all those solutions you are looking for the one that uses the minimum number of your bases. This problem is equivalent to the set cover problem (https://en.wikipedia.org/wiki/Set_cover_problem), which is unfortunately NP-hard. You can apply all known solution methods, such as integer linear programming or the already mentioned greedy algorithm / metaheuristics.
If your problem instance is large and runtime is the primary concern, greedy is probably the way to go. For example, you could repeatedly add to the selection the base of yours that adds the highest number of enemy bases that can be attacked and were previously not under attack by your already selected bases.
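A sketch of that greedy rule, assuming coverage[b] is the precomputed set of enemy bases your base b can reach (the name and the data layout are assumptions):
def greedy_base_selection(coverage):
    """coverage: dict own_base -> set of enemy bases it can attack.

    Repeatedly pick the base that adds the most not-yet-attacked enemy bases.
    """
    uncovered = set().union(*coverage.values()) if coverage else set()
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda b: len(coverage[b] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:                        # no base can add anything further
            break
        chosen.append(best)
        uncovered -= gain
    return chosen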
Hmm, the solution depends on your needs. If you need a real-time answer, maybe a greedy algorithm could provide a good solution.
Another solution could be to use a metaheuristic with a time constraint (http://en.wikipedia.org/wiki/Metaheuristic). I would probably use a genetic algorithm to search for a solution to this problem under a limited time budget.
If you're interested, I can provide a toy example implementation in Python.
EDIT :
When you have to provide a solution quickly, a greedy algorithm is often better. But in your case I have doubts. A particularity of many greedy algorithms is that you need to start from scratch each time you compute a new result.
Speaking again of genetic algorithms, each time you have to take a decision you could, for example, restart the search process from its last result. In fact, you could probably let it run as a subprocess and every 100 ms take the best solution computed during the last loop.
If it's not too greedy in computing resources, this solution would provide better results than the greedy one in the long run, since the solution will probably need to be adapted to changes in the situation but many elements will stay unchanged. Just be aware that initializing a meta-search with the solution of a greedy algorithm is a good idea anyway!
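A sketch of that anytime, warm-started pattern; for brevity it uses a simple mutate-and-keep-the-best loop instead of a full genetic algorithm, and the 100 ms budget plus the helper names improve() and cost() are assumptions:
import time

def anytime_search(initial_solution, improve, cost, budget_s=0.1):
    """Keep the best solution found so far and stop when the time budget runs out.

    initial_solution can be the greedy result (warm start); improve(solution)
    returns a mutated candidate and cost(solution) is the value to minimize.
    """
    best = initial_solution
    best_cost = cost(best)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = improve(best)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best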

Optimal placement of objects wrt pairwise similarity weights

OK, this is an abstract algorithmic challenge, and it will remain abstract, since where I am going to use it is top secret.
Suppose we have a set of objects O = {o_1, ..., o_N} and a symmetric similarity matrix S where s_ij is the pairwise correlation of objects o_i and o_j.
Assume also that we have a one-dimensional space with discrete positions where objects may be put (like having N boxes in a row, or chairs for people).
Given a certain placement, we may measure the cost of moving from the position of one object to that of another object as the number of boxes we need to pass by until we reach our target, multiplied by their pairwise object similarity. Moving from a position to the box right after or before that position has zero cost.
Imagine an example where for three objects we have the following similarity matrix:
        1.0  0.5  0.8
S  =    0.5  1.0  0.1
        0.8  0.1  1.0
Then, the best ordering of objects in the three boxes is obviously:
[o_3] [o_1] [o_2]
The cost of this ordering is the sum of costs (counting boxes) for moving from one object to all others. So here we have cost only for the distance between o_2 and o_3, equal to 1 box * 0.1 sim = 0.1, the same as for:
[o_2] [o_1] [o_3]
On the other hand:
[o_1] [o_2] [o_3]
would have cost = cost(o_1-->o_3) = 1 box * 0.8 sim = 0.8.
The target is to determine a placement of the N objects in the available positions in a way that minimizes the above-mentioned overall cost over all possible pairs of objects!
An analogue is to imagine that we have a table with chairs side by side in one row only (like the boxes) and you need to seat N people on the chairs. Now those people have some relations, that is, let's say, how probable it is that one of them wants to speak to another. Speaking to someone means standing up, passing by a number of chairs, and talking to the person there. When two people sit on successive chairs, they don't need to move in order to talk to each other.
So how can we seat those people so that the total distance-cost between pairs of people is minimized? This means that, during the night, the overall distance walked by the guests is close to the minimum.
Greedy search is... ok forget it!
I am interested in hearing whether there is a standard formulation of such a problem for which I could find some literature, and also in different search approaches (e.g., dynamic programming, tabu search, simulated annealing, etc., from the combinatorial optimization field).
Looking forward to hearing your ideas.
PS. My question has something in common with this thread, Algorithm for ordering a list of Objects, but I think here it is better posed as a problem and is probably slightly different.
That sounds like an instance of the Quadratic Assignment Problem. The special structure here is that the locations are placed on one line only, but I don't think this will make it easier to solve. The QAP in general is NP-hard. Unless I misinterpreted your problem, you can't find an optimal algorithm that solves the problem in polynomial time without proving P=NP at the same time.
If the instances are small you can use exact methods such as branch and bound. You can also use tabu search or other metaheuristics if the problem is more difficult. We have an implementation of the QAP and some metaheuristics in HeuristicLab. You can configure the problem in the GUI, just paste the similarity and the distance matrix into the appropriate parameters. Try starting with the robust Taboo Search. It's an older, but still quite well working algorithm. Taillard also has the C code for it on his website if you want to implement it for yourself. Our implementation is based on that code.
There has been a lot of publications done on the QAP. More modern algorithms combine genetic search abilities with local search heuristics (e. g. Genetic Local Search from Stützle IIRC).
Here's a variation of the already posted method. I don't think this one is optimal, but it may be a start.
Create a list of all the pairs in descending cost order.
While the list is not empty:
    Pop the head item from the list.
    If neither element is in an existing group, create a new group containing the pair.
    If one element is in an existing group, add the other element to whichever end puts it closer to the group member.
    If both elements are in existing groups, combine the groups so as to minimize the distance between the pair.
Group combining may require reversal of order in a group, and the data structure should be designed to support that.
Let me help the thread (my own) with a simplistic ordering approach.
1. Sort the entries of the upper half of the similarity matrix.
2. Start with the pair of objects having the highest similarity weight and place them in the center positions.
3. The next object may be put on the left or the right side of them. So each time, select the object that, when put to the left or right, has the highest cost with respect to the pre-placed objects. Repeat this step until all objects are placed.
The selection rule in Step 3 is there because, if you left this object to be placed later, its cost would again be the greatest of the remaining ones, and even larger (it would be farther from the pre-placed objects). So the costly placements should be done as early as possible.
This is too simple and of course does not discover a good solution.
Another approach is to
1. start with a complete ordering generated somehow (random or from another algorithm)
2. try to improve it using "swaps" of object pairs.
I believe local minima would be a huge deterrent.
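A sketch of that swap-improvement idea for this particular cost (boxes between a pair times their similarity, so adjacent pairs cost zero); the first-improvement neighbourhood and the stopping rule are the simplest possible choices:
import itertools

def placement_cost(order, S):
    """order: list of object indices left to right; S: the similarity matrix.

    The cost of a pair is (boxes between them) * similarity, so adjacent pairs cost 0.
    """
    total = 0.0
    for a, b in itertools.combinations(range(len(order)), 2):
        total += (b - a - 1) * S[order[a]][order[b]]
    return total

def improve_by_swaps(order, S):
    """Keep applying improving pairwise swaps until no swap improves the cost.

    This is plain local search; as noted above it can get stuck in local minima,
    so restarts or a metaheuristic on top are the usual remedy.
    """
    order = list(order)
    best_cost = placement_cost(order, S)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(order)), 2):
            order[i], order[j] = order[j], order[i]
            c = placement_cost(order, S)
            if c < best_cost - 1e-12:
                best_cost, improved = c, True
            else:
                order[i], order[j] = order[j], order[i]   # undo the non-improving swap
    return order, best_cost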

breeding parents for multiple children in genetic algorithm

I'm building my first Genetic Algorithm in javascript, using a collection of tutorials.
I'm building a somewhat simpler structure than this scheduling tutorial http://www.codeproject.com/KB/recipes/GaClassSchedule.aspx#Chromosome8, but I've run into a problem with breeding.
I get a population of 60 individuals, and now I'm picking the top two individuals to breed, and then selecting a few random other individuals to breed with the top two. Am I not going to end up with a fairly small number of parents rather quickly?
I figure I'm not going to be making much progress in the solution if I breed the top two results with each of the next 20.
Is that correct? Is there a generally accepted method for doing this?
I have a sample of genetic algorithms in Javascript here.
One problem with your approach is that you are killing diversity in the population by always mating the top 2 individuals. That will never work very well because it's too greedy, and you'll actually be defeating the purpose of having a genetic algorithm in the first place.
This is how I am implementing mating with elitism (which means I am retaining a percentage of unaltered best-fit individuals and randomly mating all the rest), and I'll let the code do the talking:
// save best guys as elite population and shove into temp array for the new generation
for (var e = 0; e < ELITE; e++) {
  tempGenerationHolder.push(fitnessScores[e].chromosome);
}
// randomly select a mate (including elite) for all of the remaining ones
// using double-point crossover should suffice for this silly problem
// note: this should create INITIAL_POP_SIZE - ELITE new individuals
for (var s = 0; s < INITIAL_POP_SIZE - ELITE; s++) {
  // generate random number between 0 and INITIAL_POP_SIZE - ELITE - 1
  var randInd = Math.floor(Math.random() * (INITIAL_POP_SIZE - ELITE));
  // mate the individual at index s with the individual at the random index
  var child = mate(fitnessScores[s].chromosome, fitnessScores[randInd].chromosome);
  // push the result into the new generation holder
  tempGenerationHolder.push(child);
}
It is fairly well commented but if you need any further pointers just ask (and here's the github repo, or you can just do a view source on the url above). I used this approach (elitism) a number of times, and for basic scenarios it usually works well.
Hope this helps.
When I've implemented genetic algorithms in the past, what I've done is to pick the parents probabilistically - that is, you don't always pick the fittest individuals; rather, you pick each individual with a probability depending on how much better it is than everyone else (based on the fitness function).
I cannot remember the name of the paper to back it up, but there is a mathematical proof that "ranking" selection converges faster than "proportional" selection. If you try looking around for "genetic algorithm selection strategy" you may find something about this.
EDIT:
Just to be more specific, since pedalpete asked, there are two kinds of selection algorithms: one based on rank, one based on fitness proportion. Consider a population with 6 solutions and the following fitness values:
Solution   Fitness Value
A          5
B          4
C          3
D          2
E          1
F          1
In ranking selection, you would take the top k (say, 2 or 4) and use those as the parents for your next generation. In proportional selection, to form each "child", you randomly pick each parent with a probability based on fitness value:
Solution   Probability
A          5/16
B          4/16
C          3/16
D          2/16
E          1/16
F          1/16
In this scheme, F may end up being a parent in the next generation. With a larger population size (100 for example - may be larger or smaller depending on the search space), this will mean that the bottom solutions will end up being a parent some of the time. This is OK, because even "bad" solutions have some "good" aspects.
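A sketch of that fitness-proportional (roulette wheel) pick, using the table above; random.choices does the weighted draw:
import random

fitness = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 1}    # the table above

def roulette_pick(fitness):
    """Pick one parent with probability proportional to its fitness value."""
    solutions = list(fitness)
    weights = [fitness[s] for s in solutions]     # A gets a 5/16 chance, F a 1/16 chance
    return random.choices(solutions, weights=weights, k=1)[0]

# e.g. two (possibly identical) parents for one child
parents = (roulette_pick(fitness), roulette_pick(fitness))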
Keeping the absolute fittest individuals is called elitism, and it does tend to lead to faster convergence, which, depending on the fitness landscape of the problem, may or may not be what you want. Faster convergence is good if it reduces the amount of effort taken to find an acceptable solution but it's bad if it means that you end up with a local optimum and ignore better solutions.
Picking the other parents completely at random isn't going to work very well. You need some mechanism whereby fitter candidates are more likely to be selected than weaker ones. There are several different selection strategies that you can use, each with different pros and cons. Some of the main ones are described here. Typically you will use roulette wheel selection or tournament selection.
As for combining the elite individuals with every single one of the other parents, that is a recipe for destroying variation in the population (as well as eliminating the previously preserved best candidates).
If you employ elitism, keep the elite individuals unchanged (that's the point of elitism) and then mate pairs of the other parents (which may or may not include some or all of the elite individuals, depending on whether they were also picked out as parents by the selection strategy). Each parent will only mate once unless it was picked out multiple times by the selection strategy.
Your approach is likely to suffer from premature convergence. There are lots of other selection techniques to pick from though. One of the more popular that you may wish to consider is Tournament selection.
Different selection strategies provide varying levels of 'selection pressure'. Selection pressure is how strongly the strategy insists on choosing the best programs. If the absolute best programs are chosen every time, then your algorithm effectively becomes a hill-climber; it will get trapped in a local optimum with no way of navigating to other peaks in the fitness landscape. At the other end of the scale, no selection pressure at all means the algorithm will blindly stumble around the fitness landscape at random. So, the challenge is to choose an operator with sufficient (but not excessive) selection pressure for the problem you are tackling.
One of the advantages of the tournament selection operator is that by just modifying the size of the tournament, you can easily tweak the level of selection pressure. A larger tournament will give more pressure, a smaller tournament less.
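A minimal tournament selection sketch; tournament_size is the knob that controls selection pressure as described above, and fitness is assumed to be a callable scoring an individual:
import random

def tournament_pick(population, fitness, tournament_size=3):
    """Sample tournament_size individuals at random and return the fittest one.

    A bigger tournament means the best individuals win more often (more
    selection pressure); a tournament of size 1 is a purely random pick.
    """
    contenders = random.sample(population, min(tournament_size, len(population)))
    return max(contenders, key=fitness)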

How do you evaluate the efficiency of an algorithm, if the problem space is underspecified?

There was a post on here recently which posed the following question:
You have a two-dimensional plane of (X, Y) coordinates. A bunch of random points are chosen. You need to select the largest possible set of chosen points, such that no two points share an X coordinate and no two points share a Y coordinate.
This is all the information that was provided.
There were two possible solutions presented.
One suggested using a maximum flow algorithm, such that each selected point maps to a path linking (source → X → Y → sink). This runs in O(V^3) time, where V is the number of vertices selected.
Another (mine) suggested using the Hungarian algorithm. Create an n×n matrix of 1s, then set every chosen (x, y) coordinate to 0. The Hungarian algorithm will give you the lowest cost for this matrix, and the answer is the number of coordinates selected which equal 0. This runs in O(n^3) time, where n is the greater of the number of rows or the number of columns.
My reasoning is that, for the vast majority of cases, the Hungarian algorithm is going to be faster; V is equal to n in the case where there's one chosen point for each row or column, and substantially greater for any case where there's more than that: given a 50×50 matrix with half the coordinates chosen, V is 1,250 and n is 50.
The counterargument is that there are some cases, like a 10^9×10^9 matrix with only two points selected, where V is 2 and n is 1,000,000,000. For this case, it takes the Hungarian algorithm a ridiculously long time to run, while the maximum flow algorithm is blindingly fast.
Here is the question: Given that the problem doesn't provide any information regarding the size of the matrix or the probability that a given point is chosen (so you can't know for sure) how do you decide which algorithm, in general, is a better choice for the problem?
You can't, it's an imponderable.
You can only define which is better "in general" by defining what inputs you will see "in general". So for example you could whip up a probability model of the inputs, so that the expected value of V is a function of n, and choose the one with the best expected runtime under that model. But there may be arbitrary choices made in the construction of your model, so that different models give different answers. One model might choose co-ordinates at random, another model might look at the actual use-case for some program you're thinking of writing, and look at the distribution of inputs it will encounter.
You can alternatively talk about which has the best worst case (across all possible inputs with given constraints), which has the virtue of being easy to define, and the flaw that it's not guaranteed to tell you anything about the performance of your actual program. So for instance HeapSort is faster than QuickSort in the worst case, but slower in the average case. Which is faster? Depends whether you care about average case or worst case. If you don't care which case, you're not allowed to care which "is faster".
This is analogous to trying to answer the question "what is the probability that the next person you see will have an above (mean) average number of legs?".
We might implicitly assume that the next person you meet will be selected at random with uniform distribution from the human population (and hence the answer is "slightly less than one", since the mean is less than the mode average, and the vast majority of people are at the mode).
Or we might assume that your next meeting with another person is randomly selected with uniform distribution from the set of all meetings between two people, in which case the answer is still "slightly less than one", but I reckon not the exact same value as the first - one-and-zero-legged people quite possibly congregate with "their own kind" very slightly more than their frequency within the population would suggest. Or possibly they congregate less, I really don't know, I just don't see why it should be exactly the same once you take into account Veterans' Associations and so on.
Or we might use knowledge about you - if you live with a one-legged person then the answer might be "very slightly above 0".
Which of the three answers is "correct" depends precisely on the context which you are forbidding us from talking about. So we can't talk about which is correct.
Given that you don't know what each pill does, do you take the red pill or the blue pill?
If there really is not enough information to decide, there is not enough information to decide. Any guess is as good as any other.
Maybe, in some cases, it is possible to divine extra information to base the decision on. I haven't studied your example in detail, but it seems like the Hungarian algorithm might have higher memory requirements. This might be a reason to go with the maximum flow algorithm.
You don't. I think you illustrated that clearly enough. I think the proper practical solution is to spawn off both implementations in different threads, and then take the response that comes back first. If you're more clever, you can heuristically route requests to implementations.
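A sketch of that race using a process pool; hungarian_solve and max_flow_solve are hypothetical stand-ins for the two implementations (they must be top-level, picklable functions for a process pool to run them):
from concurrent.futures import FIRST_COMPLETED, ProcessPoolExecutor, wait

def solve_race(points, hungarian_solve, max_flow_solve):
    """Run both algorithms on the same input and return whichever answers first.

    Note: the slower worker keeps running in the background until it finishes;
    a production version would terminate it explicitly.
    """
    pool = ProcessPoolExecutor(max_workers=2)
    futures = [pool.submit(hungarian_solve, points),
               pool.submit(max_flow_solve, points)]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    result = next(iter(done)).result()
    pool.shutdown(wait=False)               # don't block on the slower algorithm
    return result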
Many algorithms require huge amounts of memory beyond the physical maximum of a machine, and in these cases, the algorithm that is less efficient in time but more efficient in space is chosen.
Given that we have distributed parallel computing, I say you just let both horses run and let the results speak for themselves.
This is a valid question, but there's no "right" answer — they are incomparable, so there's no notion of "better".
If your interest is practical, then you need to analyze the kinds of inputs that are likely to arise in practice, as well as the practical running times (constants included) of the two algorithms.
If your interest is theoretical, where worst-case analysis is often the norm, then, in terms of the input size, the O(V^3) algorithm is better: you know that V ≤ n^2, but you cannot polynomially bound n in terms of V, as you showed yourself. Of course the theoretical best algorithm is a hybrid algorithm that runs both and stops when whichever one of them finishes first, thus its running time would be O(min(V^3, n^3)).
Theoretically, they are both the same, because you actually compare how the number of operations grows when the size of the problem is increased to infinity.
The way your problem is defined, it has two sizes, n and the number of points, so this question has no answer.
