What is meant by (non-) uniform mutation in genetic algorithms? - genetic-algorithm

I have been working on a literature study on Genetic Algorithms in preparation for a project. When researching mutation I encountered the terms "Uniform mutation" and "Non-Uniform mutation" quite often.
Wikipedia explains uniform and non-uniform mutation as a "type":
Uniform Mutation: This operator replaces the value of the chosen gene with a uniform random value selected between the user-specified upper and lower bounds for that gene. This mutation operator can only be used for integer and float genes.
Non-Uniform Mutation: The probability that amount of mutation will go to 0 with the next generation is increased by using non-uniform mutation operator. It keeps the population from stagnating in the early stages of the evolution. It tunes solution in later stages of evolution. This mutation operator can only be used for integer and float genes.
A PowerPoint presentation on the subject of genetic algorithms explains uniform mutation in the context of floating point mutations:
xi' is drawn randomly (uniform) from [Lower bound, Upper bound]. It is analogous to bit-flipping of binary strings or random resetting of integer strings.
The MathWorks documentation explains uniform mutation as:
Uniform mutation is a two-step process. First, the algorithm selects a fraction of the vector entries of an individual for mutation, where each entry has a probability Rate of being mutated. The default value of Rate is 0.01. In the second step, the algorithm replaces each selected entry by a random number selected uniformly from the range for that entry.
In line with MathWorks' explanation of uniform as "random", I found this source, which doesn't even name uniform or non-uniform mutation.
However, no information is given on what the term actually means. I am unsure whether it is an umbrella term for certain methods adhering to some properties, or a method on its own, as Wikipedia says.
I can't find any real demonstration of the term as a method, but I can't find any definition of it as an umbrella term either. Since one source described it as analogous to bit-flipping, I am unsure.
What is meant, in a genetic algorithms context, by uniform and non-uniform mutation, and what is an example of the use of such methods or terms?

uniform mutation - choose a certain percentage of genes, say 1%, at random and set them to random values, and do this at the same rate throughout the run.
non-uniform mutation - any other scheme; typically you either lower the mutation rate as the population gets fitter (so mutate 0.1% of genes after several thousand generations) or make the mutations smaller as time progresses (so add or subtract one or two places instead of resetting to a random value).
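For example, here is a minimal Python sketch of both styles; the function names and the shrink schedule are illustrative, not a standard API:

import random

def uniform_mutation(genes, lower, upper, rate=0.01):
    # Reset each selected gene to a uniform random value in [lower, upper];
    # the rate stays constant for the whole run.
    return [random.uniform(lower, upper) if random.random() < rate else g
            for g in genes]

def non_uniform_mutation(genes, lower, upper, gen, max_gen, rate=0.01, b=2.0):
    # Perturb selected genes by an amount that shrinks as the run progresses,
    # so early generations explore and later generations fine-tune.
    shrink = (1.0 - gen / max_gen) ** b
    out = []
    for g in genes:
        if random.random() < rate:
            g += (upper - lower) * shrink * random.uniform(-1.0, 1.0)
            g = min(upper, max(lower, g))
        out.append(g)
    return out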

Related

Change of density during iterations in NSGA-II

I am working with NSGA-II and I am wondering how the density of the population changes while the algorithm is running. Suppose you initialize your population according to a uniform distribution. Your population is changed after each iteration. How does this affect the density of the population? E.g. if the sample size is huge, will the population after n steps still be roughly uniformly distributed? Does anybody know an answer to this?
Candidate solutions will cluster together after a while; that's normal. A uniform distribution is only needed for the first population. If you are worried about getting stuck in a local optimum, you can raise the mutation probability; the mutation operator lets the search jump to another area.

Generating Gaussian Random Numbers without a Uniform Random Number Generator

I know many uniform random number generators (RNGs) based on algorithms, physical systems and so on. Eventually, all of these lead to uniformly distributed random numbers. It's interesting and important to know whether there are Gaussian RNGs, i.e. algorithms or something else that create Gaussian random numbers directly. More precisely, I don't want to use transformations such as Box–Muller or the Marsaglia polar method to get Gaussians from uniform RNGs. I am interested in whether there is some paper, algorithm or even idea for creating Gaussian random numbers without any use of uniform RNGs. That is to say, we pretend we don't know that uniform random number generators exist.
As already noted in answers/comments, by virtue of the CLT a sum of iid random numbers can be made into a reasonable-looking Gaussian. If the incoming stream is uniform, this is basically the Bates distribution; Ami Tavory's answer pretty much amounts to using Bates in disguise. You could also look at the closely related Irwin–Hall distribution, which at n = 12 or higher looks a lot like a Gaussian.
There is one method which is used in practice and does not rely on a transformation of U(0,1): the Wallace method (Wallace, C. S. 1996. "Fast Pseudorandom Generators for Normal and Exponential Variates." ACM Transactions on Mathematical Software.), also called the Gaussian pool method. I would advise reading its description and seeing if it fits your purpose.
As others have noted, it's a bit unclear what your motivation for this is, and therefore I'm not sure if the following answers your question.
Nevertheless, it is possible to generate (an approximation of) this without the specific formulas transforming uniform RNGs that you mention.
As with any RNG, we have to have some source of randomness (or pseudo-randomness). I'm assuming, therefore, that there is some limitless sequence of binary bits which are independently equally likely to be 0 or 1 (note that it's possible to counter that this is a uniform discrete binary RNG, so I'm unsure if this answers your question).
Choose some large fixed n. For each invocation of the RNG, generate n such bits, sum them as x, and return
(2x - n) / √n
By the de Moivre–Laplace theorem this is approximately normal with mean 0 and variance 1.
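A minimal sketch of that scheme in Python, using random.getrandbits(1) only as a stand-in for the assumed stream of fair bits:

import random

def gaussian_from_bits(n=1024):
    # x is Binomial(n, 1/2); by de Moivre-Laplace, (2x - n) / sqrt(n)
    # is approximately standard normal for large n.
    x = sum(random.getrandbits(1) for _ in range(n))
    return (2 * x - n) / n ** 0.5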

Mutation step-size in genetic algorithm

Can someone explain to me what "mutation step size" means?
I'm reading an article regarding genetic algorithms and it says:
"the mutation randomly changes the decision of
a node or mutate the value with a step-size of 0.25"
I know the role of mutation in the GA life cycle, but I can't find a good explanation of what the mutation step size is.
Thanks.
Essentially it's how far a mutation can be away from the last value.
"As far as real-valued search spaces are concerned, mutation is normally performed by adding a normally distributed random value to each vector component. The step size or mutation strength (i.e. the standard deviation of the normal distribution) is often governed by self-adaptation (see evolution window)."
That's complex talk for: given a vector you are mutating (say X = [x1, x2, ..., xN]), you modify each of that vector's values by a random amount whose spread is set by the mutation step size. So say we had a function called normal(v, stdDev) that generates normally distributed random values around v with standard deviation stdDev. Then we'd modify each value of that vector with the following pseudo code:
for i, x in enumerate(X):
    # replace each component with a draw centred on its old value
    X[i] = normal(x, mutationStepSize)
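The quote above also mentions that the step size is often governed by self-adaptation. In its simplest evolution-strategy form (one global step size that is itself mutated log-normally before being used), a sketch might look like this:

import math
import random

def self_adaptive_mutation(vector, sigma):
    # Mutate the step size itself, then use it as the standard deviation
    # of the Gaussian perturbation applied to every component.
    tau = 1.0 / math.sqrt(len(vector))      # a common default learning rate
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    child = [x + random.gauss(0.0, new_sigma) for x in vector]
    return child, new_sigma

Because the child inherits new_sigma, step sizes that tend to produce good offspring survive along with them.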

Reasonable bit string size for genetic algorithm convergeance

In a typical genetic algorithm, is there any guideline for estimating the generations required to converge given the amount of entropy in the description of an individual in the population?
Also, I suppose it is reasonable to also require the number of offspring per generation and rate of mutation, but adjustment of those parameters is of less interest to me at the moment.
Well, there are not any concrete guidelines in the form of mathematical models, but there are several concepts that people use to communicate about parameter settings and advice on how to choose them. One of these concepts is diversity, which would be similar to the entropy that you mentioned. The other concept is called selection pressure and determines the chance an individual has to be selected based on its relative fitness.
Diversity and selection pressure can be computed for each generation, but the change between generations is very difficult to estimate. You would also need models that predict the expected quality of your crossover and mutation operator in order to estimate the fitness distribution in the next generation.
There has been some work published on these topics recently:
* Chicano and Alba. 2011. Exact Computation of the Expectation Curves of the Bit-Flip Mutation using Landscapes Theory
* Chicano, Whitley, and Alba. 2012. Exact computation of the expectation curves for uniform crossover
Is your question the result of a general research interest, or do you seek practical guidance?
No. If you define a mathematical model of the algorithm (initial population, combination function, mutation function) you can use normal mathematical methods to calculate what you want to know, but "typical genetic algorithm" is too vague to have any meaningful answer.
If you want to set the hyperparameters of some genetic algorithm (e.g. the number of "DNA" bits), then this is typically done in the usual way for any machine learning algorithm, with a cross-validation set.
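As a rough illustration of that kind of sweep (run_ga is a hypothetical helper that runs your GA with a given chromosome length and returns the best fitness found):

candidate_lengths = [16, 32, 64, 128]
results = {}
for n_bits in candidate_lengths:
    # average over a few independent runs to smooth out GA randomness
    scores = [run_ga(n_bits=n_bits, generations=500) for _ in range(5)]
    results[n_bits] = sum(scores) / len(scores)
best_length = max(results, key=results.get)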

What is Crossover Probability & Mutation Probability in Genetic Algorithm or Genetic Programming?

What are crossover probability and mutation probability in a genetic algorithm or genetic programming? Could someone explain them from an implementation perspective?
Mutation probability (or rate) is basically a measure of the likelihood that random elements of your chromosome will be flipped into something else. For example, if your chromosome is encoded as a binary string of length 100 and you have a 1% mutation probability, it means that 1 out of your 100 bits (on average), picked at random, will be flipped.
Crossover basically simulates sexual genetic recombination (as in human reproduction) and there are a number of ways it is usually implemented in GAs. Sometimes crossover is applied with moderation in GAs (as it breaks symmetry, which is not always good, and you could also go blind) so we talk about crossover probability to indicate a ratio of how many couples will be picked for mating (they are usually picked by following selection criteria - but that's another story).
This is the short story - if you want the long one you'll have to make an effort and follow the link Amber posted. Or do some googling - which last time I checked was still a good option too :)
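For example, a per-bit mutation step with a 1% probability might look like this in Python (a sketch, assuming a chromosome stored as a list of 0/1 ints):

import random

def bit_flip_mutation(chromosome, p_mutation=0.01):
    # Flip each bit independently with probability p_mutation; for a
    # 100-bit string and p = 0.01 that is one flipped bit on average.
    return [1 - bit if random.random() < p_mutation else bit
            for bit in chromosome]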
According to Goldberg (Genetic Algorithms in Search, Optimization and Machine Learning) the probability of crossover is the probability that crossover will occur at a particular mating; that is, not all matings must reproduce by crossover, but one could choose Pc=1.0.
Probability of Mutation is per JohnIdol.
It shows the quantity of features inherited from the parents in crossover!
Note: If the crossover probability is 100%, then all offspring are made by crossover. If it is 0%, the whole new generation is made from exact copies of chromosomes from the old population (but this does not mean that the new generation is the same!).
Here is a short, reasonably good explanation of these two probabilities:
http://www.optiwater.com/optiga/ga.html
JohnIdol's answer on mutation probability is exactly what that website says:
"Each bit in each chromosome is checked for possible mutation by generating a random number between zero and one and if this number is less than or equal to the given mutation probability e.g. 0.001 then the bit value is changed."
For crossover probability, maybe it is the ratio of the next generation's population born by the crossover operation, while the rest of the population comes from the previous selection, or you can define it as the best-fit survivors.
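Putting that reading of the crossover probability into code, one hedged sketch (single-point crossover is just one illustrative choice, and chromosomes are assumed to be Python lists):

import random

def make_offspring(parent_a, parent_b, p_crossover=0.9):
    # With probability p_crossover, recombine the parents at a random point;
    # otherwise the offspring are plain copies of the parents.
    if random.random() < p_crossover:
        point = random.randrange(1, len(parent_a))
        return (parent_a[:point] + parent_b[point:],
                parent_b[:point] + parent_a[point:])
    return list(parent_a), list(parent_b)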

Resources