I made a genetic algorithm a couple of months ago, but it turned out to be very weak at solving any kind of problem. My initial goal was to use the genetic algorithm in a game application.
Now I'm recreating the whole thing and trying to take a view from another perspective.
Now I'm about to define the steps by which the next generation is built.
My last idea was:
Take the top-rated genes from the current generation and duplicate them into the next one (the amount is set by the elitism rate).
Take two random genes and cross them over (the chance of being picked is correlated with the gene's rank). I implemented several crossover methods (one-point, two-point, three-parent, average, uniform, ...).
Fill the new generation with genes produced by the method above.
Apply some mutation to the genes (the chance of being selected is set by the mutation rate); it changes just part of the DNA (top-rated genes are excluded).
This proved to be very inefficient (I don't know why) and also very computationally demanding, since the crossover process looped over all the genes several times.
Now I'm thinking of a new approach.
Basically I'm aiming to keep the genes, remove the 'bad' ones, and fill the Gene Pool back up with crossed genes.
In a Gene Pool with 1,000 individuals I would:
Discard the 500 lowest ranked.
Duplicate the top rated (in a 10% elitism it's 100)
Generate 400 new genes using crossover.
Apply the mutation
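The steps above can be sketched roughly as follows, assuming individuals are objects with a bit-array `genome` and a numeric `fitness` field; the `crossover` and `mutate` operators shown (one-point crossover, per-bit flip) are just illustrative stand-ins for whichever methods you use:

```javascript
// Illustrative one-point crossover on bit-array genomes.
function crossover(a, b) {
  const cut = 1 + Math.floor(Math.random() * (a.genome.length - 1));
  return { genome: a.genome.slice(0, cut).concat(b.genome.slice(cut)), fitness: 0 };
}

// Flip each bit independently with probability `rate`.
function mutate(ind, rate) {
  ind.genome = ind.genome.map(bit => (Math.random() < rate ? 1 - bit : bit));
  return ind;
}

function updatePool(pool, eliteFraction, mutationRate) {
  const sorted = pool.slice().sort((a, b) => b.fitness - a.fitness);
  const half = sorted.slice(0, Math.floor(pool.length / 2)); // discard the lowest-ranked half
  const eliteCount = Math.floor(pool.length * eliteFraction);
  const elite = half.slice(0, eliteCount)                    // duplicate the top rated, untouched
    .map(e => ({ genome: e.genome.slice(), fitness: e.fitness }));
  const next = half.concat(elite);
  while (next.length < pool.length) {                        // refill with crossover children
    const a = half[Math.floor(Math.random() * half.length)];
    const b = half[Math.floor(Math.random() * half.length)];
    next.push(mutate(crossover(a, b), mutationRate));        // mutation only touches new genes
  }
  return next;
}
```

Note that the pool size stays constant: 500 survivors + 100 elite copies + 400 children = 1,000 for the figures above.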
I was taking the concept of 'generations' too literally and making them all die (except the top-rated ones). Now I'll let them all live, except the bad ones, and repopulate as needed.
Am I missing anything? Will this new method be any better?
There is an alternative to vertical gene transfer (the traditional concept of generations), which is horizontal gene transfer (see this paper). With horizontal gene transfer, the population size remains constant throughout the simulation.
Also, when you breed genotypes (with whichever method you choose) you should definitely not keep only the fittest candidates throughout the generations. If you do so, the solution you find will most likely be a local optimum. Every genotype should have some chance of making it to the next generation, with the fittest having a better chance (see this answer about linear rank selection).
I am currently trying to code a genetic algorithm intended to find the optimal solution for timetable planning. I have successfully created the population and am able to calculate the fitness; what confuses me is the mating pool and the selection.
I am planning to use tournament selection.
What I know so far is that I need to select a random number of candidates, choose the most fit as the first parent, then repeat the step to find the second parent, and cross them over. But how many crossovers do I need to do? Until I reach the population size that I set? And then what about my original population?
Can someone help me?
Your original population dies. If you want to keep the best solution(s) you can copy it to the new population (this is called elitism). You then generate offspring until your new population is full. Have a look at this Outline of the Basic Genetic Algorithm.
Cross-over is just one way to make children "different" from parents (in addition to mutation). This is independent from how you select good parents (e.g. tournaments). But note that there are so many GA variants that this may not be true for all of them.
I would consider starting with only mutation (no cross-over). This is much easier to implement, and sometimes good enough. You can always add cross-over later and see if you get an improvement.
First of all, this is part of a homework assignment.
I am trying to implement a genetic algorithm. I am confused about selecting parents to crossover.
In my notes (obviously something is wrong) this is the example given:
Pc (probability of crossover) * population size = expected number of chromosomes to cross over (if not even, round to the closest even number).
Choose a random number in the range [0,1] for every chromosome, and if this number is smaller than Pc, choose this chromosome for a crossover pair.
But when the second step is applied, the count of chosen chromosomes is supposed to equal the result found in the first step, which is not guaranteed because of the randomness.
So this does not make any sense. I searched for selecting parents for crossover, but all I found were crossover techniques (one-point, cut and splice, etc.) and how to cross over between chosen parents (I do not have a problem with these). I just don't know which chromosomes I should choose for crossover. Any suggestions or a simple example?
You can implement it like this:
For every new child you decide if it will result from crossover by random probability. If yes, then you select two parents, e.g. through roulette wheel selection or tournament selection. The two parents make a child, then you mutate it with the mutation probability and add it to the next generation. If no, then you select only one "parent", clone it, mutate it with the mutation probability, and add it to the next generation.
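That per-child loop might look like this, assuming individuals with a bit-array `genome` and a numeric `fitness`, and using a size-2 tournament as the parent-picking step (roulette wheel selection would fit equally well):

```javascript
// Tournament of size 2: the fitter of two random individuals wins.
function selectParent(pop) {
  const a = pop[Math.floor(Math.random() * pop.length)];
  const b = pop[Math.floor(Math.random() * pop.length)];
  return a.fitness >= b.fitness ? a : b;
}

// Flip each bit independently with probability `rate`.
function mutate(ind, rate) {
  ind.genome = ind.genome.map(bit => (Math.random() < rate ? 1 - bit : bit));
  return ind;
}

function nextGeneration(pop, crossoverProb, mutationRate) {
  const next = [];
  while (next.length < pop.length) {
    let child;
    if (Math.random() < crossoverProb) {
      // two parents -> one-point crossover
      const p1 = selectParent(pop), p2 = selectParent(pop);
      const cut = 1 + Math.floor(Math.random() * (p1.genome.length - 1));
      child = { genome: p1.genome.slice(0, cut).concat(p2.genome.slice(cut)), fitness: 0 };
    } else {
      // one parent -> clone
      const p = selectParent(pop);
      child = { genome: p.genome.slice(), fitness: 0 };
    }
    next.push(mutate(child, mutationRate)); // every child gets a mutation chance
  }
  return next;
}
```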
Some other observations I noted and that I like to comment. I often read the word "chromosomes" when it should be individual. You hardly ever select chromosomes, but full individuals. A chromosome is just one part of a solution. That may be nitpicking, but a solution is not a chromosome. A solution is an individual that consists of several chromosomes which consist of genes which show their expression in the form of alleles. Often an individual has only one chromosome, but it's still not okay to mix terms.
Also I noted that you tagged genetic programming which is basically only a special type of a genetic algorithm. In GP you consider trees as a chromosome which can represent mathematical formulas or computer programs. Your question does not seem to be about GP though.
This is a very late answer, but hoping it will help someone in the future. Even if two chromosomes are not paired (and don't produce children), they go to the next generation (without crossover), but possibly after some mutation (subject to probability again). On the other hand, if two chromosomes are paired, they produce two children (replacing the original two parents) in the next generation. So that's why the number of chromosomes stays the same across generations.
I'm trying to solve the DARP Problem, using a Genetic Algorithm DLL.
The thing is that even though it sometimes comes up with the right solution, other times it does not, although I'm using a really simple version of the problem. When I checked the genomes the algorithm evaluated, I found out that it evaluates the same genome several times.
Why does it evaluate the same genome several times? Wouldn't it be more efficient if it did not?
There is a difference between evaluating the same chromosome twice, and using the same chromosome in a population (or different populations) more than once. The first can probably be usefully avoided; the second, maybe not.
Consider:
In some generation G1, you mate 0011 and 1100, cross them right down the middle, get lucky and fail to mutate any of the genes, and end up with 0000 and 1111. You evaluate them, stick them back into the population for the next generation, and continue the algorithm.
Then in some later generation G2, you mate 0111 and 1001 at the first index and (again, ignoring mutation) end up with 1111 and 0001. One of those has already been evaluated, so why evaluate it again? Especially if evaluating that function is very expensive, it may very well be better to keep a hash table (or some such) to store the results of previous evaluations, if you can afford the memory.
But! Just because you've already generated a value for a chromosome doesn't mean you shouldn't include it naturally in the results going forward, allowing it to either mutate further or allowing it to mate with other members of the same population. If you don't do that, you're effectively punishing success, which is exactly the opposite of what you want to do. If a chromosome persists or reappears from generation to generation, that is a strong sign that it is a good solution, and a well-chosen mutation operator will act to drive it out if it is a local maximum instead of a global maximum.
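A minimal sketch of that caching idea, assuming bit-array genomes (the key scheme would need adapting for other encodings):

```javascript
// Memoize fitness by a genome-derived key: identical chromosomes are
// evaluated once, but duplicates can still live and breed in the population.
const fitnessCache = new Map();

function evaluate(genome, fitnessFn) {
  const key = genome.join('');          // bit-string key; adjust for other encodings
  if (!fitnessCache.has(key)) {
    fitnessCache.set(key, fitnessFn(genome)); // the expensive call runs at most once per key
  }
  return fitnessCache.get(key);
}
```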
The basic explanation for why a GA might evaluate the same individual is precisely because it is non-guided, so the recreation of a previously-seen (or simultaneously-seen) genome is something to be expected.
More usefully, your question could be interpreted as about two different things:
A high cost associated with evaluating the fitness function, which could be mitigated by hashing the genome, but trading off memory. This is conceivable, but I've never seen it. Usually GAs search high-dimensional-spaces so you'd end up with a very sparse hash.
A population in which many or all members have converged to a single pattern or a few patterns: at some point, the diversity of your genomes will tend towards 0. This is the expected outcome when the algorithm has converged upon the best solution that it has found. If this happens too early, with mediocre results, it indicates that you are stuck in a local optimum and have lost diversity too early.
In this situation, study the output to determine which of the two scenarios has happened:
You lose diversity because particularly-fit individuals win a very large percentage of the parental lotteries. Or,
You don't gain diversity because the population over time is quite static.
In the former case, you have to maintain diversity. Ensure that less-fit individuals get more chances, perhaps by decreasing turnover or scaling the fitness function.
In the latter case, you have to increase diversity. Ensure that you inject more randomness into the population, perhaps by increasing mutation rate or increasing turnover.
(In both cases, of course, you ultimately do want diversity to decrease as the GA converges in the solution space. So you don't just want to "turn up all the dials" and devolve into a Monte Carlo search.)
Basically, a genetic algorithm consists of:
initial population (size N)
fitness function
mutation operation
crossover operation (usually performed on 2 individuals by taking some parts of their genomes and combining them into a new individual)
At every step it:
chooses random individuals
performs crossover, resulting in new individuals
possibly performs mutation (changes a random gene in a random individual)
evaluates all old and new individuals with the fitness function
chooses the N best fitted to be the new population for the next iteration
The algorithm stops when it reaches a threshold of the fitness function, or if there have been no changes in the population over the last K iterations.
So it could stop not at the best solution, but at a local maximum.
A part of the population could stay unchanged from one iteration to another, because those individuals could have good fitness values.
It is also possible to "fall back" to previous genomes because of mutation.
There are a lot of tricks to make a genetic algorithm work better: choosing an appropriate encoding of solutions into genomes, finding a good fitness function, and tuning the crossover and mutation ratios.
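One iteration of the loop outlined above might look roughly like this for genomes represented as bit arrays (the one-point crossover and single-gene mutation are illustrative choices):

```javascript
// One iteration: breed, possibly mutate, then keep the N best of old and new together.
function step(pop, fitnessFn, mutationProb) {
  const offspring = [];
  while (offspring.length < pop.length) {
    // choose random individuals and cross them
    const a = pop[Math.floor(Math.random() * pop.length)];
    const b = pop[Math.floor(Math.random() * pop.length)];
    const cut = 1 + Math.floor(Math.random() * (a.length - 1));
    const child = a.slice(0, cut).concat(b.slice(cut));
    // possibly mutate one random gene
    if (Math.random() < mutationProb) {
      const i = Math.floor(Math.random() * child.length);
      child[i] = 1 - child[i];
    }
    offspring.push(child);
  }
  // evaluate all old and new individuals, keep the N best as the next population
  return pop.concat(offspring)
    .map(g => ({ g, f: fitnessFn(g) }))
    .sort((x, y) => y.f - x.f)
    .slice(0, pop.length)
    .map(e => e.g);
}
```

Because the old population competes with the offspring for survival, the best fitness can never decrease from one iteration to the next, which is exactly why parts of the population can stay unchanged.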
Depending on the particulars of your GA, you may have the same genomes in successive populations. For example, elitism saves the best or n best genomes from each population.
Reevaluating genomes is inefficient from a computational standpoint. The easiest way to avoid this is to put a boolean HasFitness flag on each genome. You could also create a unique string key for each genome encoding and store all the fitness values in a dictionary. This lookup has its own cost, so it is only recommended if your fitness function is expensive enough to warrant it.
Elitism aside, the GA does not evaluate the same genome repeatedly. What you are seeing is identical genomes being regenerated and reevaluated. This is because each generation is a new set of genomes, which may or may not have been evaluated before.
To avoid the reevaluation, you would need to keep a list of already-produced genomes together with their fitness. To access the fitness, you would need to compare each member of your new population with the list; when one is not in the list, you would need to evaluate it and add it to the list.
As real-world applications have thousands of parameters, you can end up with millions of stored genomes. This then becomes massively expensive to search and maintain, so it is probably quicker to just evaluate each genome every time.
I'm building my first Genetic Algorithm in javascript, using a collection of tutorials.
I'm building a somewhat simpler version of this scheduling tutorial http://www.codeproject.com/KB/recipes/GaClassSchedule.aspx#Chromosome8, but I've run into a problem with breeding.
I have a population of 60 individuals, and now I'm picking the top two individuals to breed, and then selecting a few random other individuals to breed with the top two. Am I not going to end up with a fairly small number of parents rather quickly?
I figure I'm not going to be making much progress in the solution if I breed the top two results with each of the next 20.
Is that correct? Is there a generally accepted method for doing this?
I have a sample of genetic algorithms in Javascript here.
One problem with your approach is that you are killing diversity in the population by mating always the top 2 individuals. That will never work very well because it's too greedy, and you'll actually be defeating the purpose of having a genetic algorithm in the first place.
This is how I am implementing mating with elitism (which means I am retaining a percentage of unaltered best fit individuals and randomly mating all the rest), and I'll let the code do the talking:
// save best guys as elite population and shove into temp array for the new generation
for(var e = 0; e < ELITE; e++) {
tempGenerationHolder.push(fitnessScores[e].chromosome);
}
// randomly select a mate (including elite) for all of the remaining ones
// using double-point crossover should suffice for this silly problem
// note: this should create INITIAL_POP_SIZE - ELITE new individuals
for(var s = 0; s < INITIAL_POP_SIZE - ELITE; s++) {
// generate random number between 0 and INITIAL_POP_SIZE - ELITE - 1
var randInd = Math.floor(Math.random()*(INITIAL_POP_SIZE - ELITE));
// mate the individual at index s with the individual at the random index
var child = mate(fitnessScores[s].chromosome, fitnessScores[randInd].chromosome);
// push the result in the new generation holder
tempGenerationHolder.push(child);
}
It is fairly well commented but if you need any further pointers just ask (and here's the github repo, or you can just do a view source on the url above). I used this approach (elitism) a number of times, and for basic scenarios it usually works well.
Hope this helps.
When I've implemented genetic algorithms in the past, what I've done is to pick the parents always probabilistically - that is, you don't necessarily pick the winners, but you will pick the winners with a probability depending on how much better they are than everyone else (based on the fitness function).
I cannot remember the name of the paper to back it up, but there is a mathematical proof that "ranking" selection converges faster than "proportional" selection. If you try looking around for "genetic algorithm selection strategy" you may find something about this.
EDIT:
Just to be more specific, since pedalpete asked, there are two kinds of selection algorithms: one based on rank, one based on fitness proportion. Consider a population with 6 solutions and the following fitness values:
Solution    Fitness Value
A           5
B           4
C           3
D           2
E           1
F           1
In ranking selection, you would take the top k (say, 2 or 4) and use those as the parents for your next generation. In proportional selection, to form each "child", you randomly pick each parent with a probability based on fitness value:
Solution    Probability
A           5/16
B           4/16
C           3/16
D           2/16
E           1/16
F           1/16
In this scheme, F may end up being a parent in the next generation. With a larger population size (100 for example - may be larger or smaller depending on the search space), this will mean that the bottom solutions will end up being a parent some of the time. This is OK, because even "bad" solutions have some "good" aspects.
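Fitness-proportionate (roulette wheel) selection as in the table might be implemented like this, assuming individuals with strictly positive `fitness` values:

```javascript
// Pick one parent with probability fitness_i / sum(fitness).
function rouletteSelect(pop) {
  const total = pop.reduce((sum, ind) => sum + ind.fitness, 0);
  let spin = Math.random() * total;     // a random point on the wheel
  for (const ind of pop) {
    spin -= ind.fitness;                // walk through the slices
    if (spin <= 0) return ind;          // the slice the spin landed on
  }
  return pop[pop.length - 1];           // guard against floating-point drift
}
```

Over many spins, A (fitness 5) is picked about five times as often as F (fitness 1), but F still gets its 1/16 chance.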
Keeping the absolute fittest individuals is called elitism, and it does tend to lead to faster convergence, which, depending on the fitness landscape of the problem, may or may not be what you want. Faster convergence is good if it reduces the amount of effort taken to find an acceptable solution but it's bad if it means that you end up with a local optimum and ignore better solutions.
Picking the other parents completely at random isn't going to work very well. You need some mechanism whereby fitter candidates are more likely to be selected than weaker ones. There are several different selection strategies that you can use, each with different pros and cons. Some of the main ones are described here. Typically you will use roulette wheel selection or tournament selection.
As for combining the elite individuals with every single one of the other parents, that is a recipe for destroying variation in the population (as well as eliminating the previously preserved best candidates).
If you employ elitism, keep the elite individuals unchanged (that's the point of elitism) and then mate pairs of the other parents (which may or may not include some or all of the elite individuals, depending on whether they were also picked out as parents by the selection strategy). Each parent will only mate once unless it was picked out multiple times by the selection strategy.
Your approach is likely to suffer from premature convergence. There are lots of other selection techniques to pick from though. One of the more popular that you may wish to consider is Tournament selection.
Different selection strategies provide varying levels of 'selection pressure'. Selection pressure is how strongly the strategy insists on choosing the best programs. If the absolute best programs are chosen every time, then your algorithm effectively becomes a hill-climber; it will get trapped in local optimum with no way of navigating to other peaks in the fitness landscape. At the other end of the scale, no fitness pressure at all means the algorithm will blindly stumble around the fitness landscape at random. So, the challenge is to try to choose an operator with sufficient (but not excessive) selection pressure, for the problem you are tackling.
One of the advantages of the tournament selection operator is that by just modifying the size of the tournament, you can easily tweak the level of selection pressure. A larger tournament will give more pressure, a smaller tournament less.
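A sketch of such a tournament operator, assuming individuals with a numeric `fitness`; the `size` argument is the selection-pressure knob described above:

```javascript
// Pick `size` random entrants (with replacement) and return the fittest one.
// Larger tournaments give higher selection pressure, smaller ones less.
function tournamentSelect(pop, size) {
  let best = pop[Math.floor(Math.random() * pop.length)];
  for (let i = 1; i < size; i++) {
    const challenger = pop[Math.floor(Math.random() * pop.length)];
    if (challenger.fitness > best.fitness) best = challenger;
  }
  return best;
}
```

With `size = 1` this degenerates to uniform random selection (no pressure); as `size` approaches the population size, the fittest individuals win almost every time.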
What are crossover probability and mutation probability in a genetic algorithm or genetic programming? Could someone explain them from an implementation perspective?
Mutation probability (or ratio) is basically a measure of the likelihood that random elements of your chromosome will be flipped into something else. For example, if your chromosome is encoded as a binary string of length 100 and you have a 1% mutation probability, it means that 1 out of your 100 bits (on average), picked at random, will be flipped.
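For a binary chromosome, that per-bit flip could look like this (the 1% rate is just the example figure above):

```javascript
// Flip each bit independently with the given mutation probability.
function mutateBits(chromosome, mutationProb) {
  return chromosome.map(bit => (Math.random() < mutationProb ? 1 - bit : bit));
}
// On a 100-bit chromosome with mutationProb = 0.01, one bit flips on average.
```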
Crossover basically simulates sexual genetic recombination (as in human reproduction), and there are a number of ways it is usually implemented in GAs. Sometimes crossover is applied with moderation in GAs (as it breaks symmetry, which is not always good, and you could also go blind), so we talk about crossover probability to indicate the ratio of couples that will be picked for mating (they are usually picked by following selection criteria - but that's another story).
This is the short story - if you want the long one you'll have to make an effort and follow the link Amber posted. Or do some googling - which last time I checked was still a good option too :)
According to Goldberg (Genetic Algorithms in Search, Optimization and Machine Learning) the probability of crossover is the probability that crossover will occur at a particular mating; that is, not all matings must reproduce by crossover, but one could choose Pc=1.0.
Probability of Mutation is per JohnIdol.
It shows the proportion of features inherited from the parents in crossover.
Note: if the crossover probability is 100%, then all offspring are made by crossover. If it is 0%, the whole new generation is made from exact copies of chromosomes from the old population (but this does not mean that the new generation is the same!).
Here is a good short explanation of these two probabilities:
http://www.optiwater.com/optiga/ga.html
JohnIdol's answer on mutation probability matches exactly what the website says:
"Each bit in each chromosome is checked for possible mutation by generating a random number between zero and one and if this number is less than or equal to the given mutation probability e.g. 0.001 then the bit value is changed."
For crossover probability, maybe it is the ratio of the next generation's population born from the crossover operation, while the rest of the population... maybe comes from the previous selection,
or you can define it as the best-fit survivors.