Relationship between genetic representation and fitness function - genetic-algorithm

As you know, choosing a genetic representation is part of building any Genetic Algorithm (GA). A mapping can hence be defined between the genotype space (the problem-solving space) and the phenotype space (the original problem context). The fitness function, let's call it f, can be this mapping, in the case where assessing the individuals of the GA is identical to evaluating the objective function of the original problem:
f: Genotype Space ---------> Phenotype Space
For each genotype there is one corresponding phenotype, so f is injective. A good GA representation encodes every phenotype as some genotype, so f is bijective. My question: is it possible to go further and assess the quality of a genetic representation just by examining some analytical properties of the fitness function? Thank you.

There is not, as of yet, any set of general guidelines for assessing the quality of a fitness function.
For someone starting out on a genetic algorithm problem, the fitness function is first formulated as a heuristic which suits one's own understanding. Development of "better" measures of fitness is done progressively, with the researcher refining the fitness function as new metrics come to light.
As the Wikipedia article on fitness functions states:
Definition of the fitness function is not straightforward in many cases and often is performed iteratively if the fittest solutions produced by GA are not what is desired. In some cases, it is very hard or impossible to come up even with a guess of what fitness function definition might be.
Evaluation of the suitability of a fitness function, however, is an active area of research. There has been directed research towards this end in the past, though no promising results have arisen.

Related

Do all evolutionary algorithms encode the population in binary terms?

I am new to heuristic methods of optimization and am learning about the different optimization algorithms available in this space, such as Genetic Algorithms (GA), PSO, DE, CMA-ES, etc. The general flow of any of these algorithms seems to be: initialise a population; select; apply crossover and mutation to update; evaluate; and the cycle continues. The initial step of population creation in a genetic algorithm seems to be that each member of the population is encoded by a chromosome, which is a bitstring of 0s and 1s, and then all the other operations are performed. GA has simple update methods for the population, like mutation and crossover, but the update methods differ in the other algorithms.
My query here is: do all the other heuristic algorithms also initialize the population as bitstrings of 0s and 1s, or do they use general natural numbers?
The representation of individuals in evolutionary algorithms (EAs) depends on the representation of a candidate solution. If you are solving a combinatorial problem, e.g. the knapsack problem, the final solution is a string of 0s and 1s, so it makes sense to use a binary representation in the EA. However, if you are solving a continuous black-box optimisation problem, then it makes sense to use a representation with continuous decision variables.
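To make the contrast concrete, here is a minimal sketch (toy sizes, hypothetical problems) of what the two representations might look like:

```python
import random

# Binary representation: a candidate for a 5-item knapsack problem
# is a bitstring where gene i says whether item i is packed.
knapsack_candidate = [random.randint(0, 1) for _ in range(5)]

# Continuous representation: a candidate for a 3-variable black-box
# problem is a vector of real-valued decision variables in [-5, 5].
continuous_candidate = [random.uniform(-5.0, 5.0) for _ in range(3)]
```

The rest of the algorithm (crossover, mutation, etc.) is then chosen to match the representation: bit-flip mutation for the first, e.g. Gaussian perturbation for the second.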
In the old days, GAs and other algorithms used only a binary representation, even for solving continuous problems. But nowadays, all the algorithms you mentioned have their own binary, continuous, and other variants. For example, PSO is known as a continuous problem solver, but to update the individuals (particles) in a binary setting there are mapping strategies, such as the s-shaped or v-shaped transfer functions, which determine the binary individuals for the next iteration.
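A minimal sketch of the s-shaped transfer idea (the sigmoid form is the common choice; function names here are illustrative): the continuous velocity is mapped to a probability that the corresponding bit becomes 1.

```python
import math
import random

def s_shape(velocity):
    """S-shaped (sigmoid) transfer: map a continuous PSO velocity
    to the probability that the corresponding bit becomes 1."""
    return 1.0 / (1.0 + math.exp(-velocity))

def update_bit(velocity):
    """Sample the particle's next binary position from that probability."""
    return 1 if random.random() < s_shape(velocity) else 0
```

A v-shaped transfer uses a symmetric function (e.g. |tanh(v)|) instead, which changes how strongly the particle keeps or flips its current bit.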
My two cents: the choice of algorithm depends on the type of problem, and I personally wouldn't recommend using a binary PSO as a first try to solve a problem. Maybe there are hidden benefits there, but that needs investigation.
Please feel free to extend your question.

Impact of fitness function domain selection in multi-objective evolutionary optimization

I am using evolutionary algorithms, e.g. the NSGA-II algorithm, to solve unconstrained optimization problems with multiple objectives.
As my fitness functions sometimes have very different domains (e.g. f1(x) generates fitness values within [0..1] and f2(x) within [10000..10000000]), I am wondering if this has an effect on the search behaviour of the selected algorithm.
Does the selection of the fitness function domain (e.g. scaling all domains to a common domain from [lb..ub]) impact the solution quality and the speed of finding good solutions? Or is there no general answer to this question?
Unfortunately, I could not find anything on this topic. Any hints are welcome!
Your question relates to the selection strategy implemented in the algorithm. In the case of the original NSGA-II, selection is made using a mixture of Pareto rank and crowding distance. While the Pareto rank (i.e. the id of the non-dominated front a point belongs to) does not change when the numerical values are scaled by some constant, the crowding distance does.
So the answer is yes: if your second objective is in [10000..10000000], its contribution to the crowding distance might eat up that of the other objective.
In algorithms such as NSGA-II, units count!
I have just come across your question and I have to disagree with the previous answer. If you read the paper carefully, you will see that the crowding distance is supposed to be calculated in the normalized objective space, exactly so that one objective does not dominate another.
My PhD advisor is Kalyanmoy Deb, who proposed NSGA-II, and I have implemented the algorithm myself (available in our evolutionary multi-objective optimization framework pymoo). So I can state with certainty that normalization is supposed to be incorporated into the algorithm.
If you are curious about a vectorized crowding distance implementation feel free to have a look at pymoo/algorithms/nsga2.py in pymoo on GitHub.
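For reference, here is a minimal, unoptimized sketch of crowding distance computed in the normalized objective space (not the pymoo implementation; boundary points get infinite distance so they are always preserved):

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors (one per
    solution), with each objective's gap normalized by that
    objective's range so no scale dominates the sum."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        fmin, fmax = front[order[0]][obj], front[order[-1]][obj]
        span = fmax - fmin or 1.0  # avoid division by zero
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points
        for k in range(1, n - 1):
            i = order[k]
            # normalized gap between the two neighbours along this objective
            dist[i] += (front[order[k + 1]][obj] - front[order[k - 1]][obj]) / span
    return dist
```

Without the division by `span`, an objective spanning [10000..10000000] would contribute gaps millions of times larger than one spanning [0..1], which is exactly the imbalance the question worries about.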

Are all genetic algorithms maximization algorithms?

I'm not sure if my understanding of maximization and minimization is correct.
So let's say, for some function f(x,y,z), I want to find what would give the highest value; that would be maximization, right? And if I wanted to find the lowest value, that would be minimization?
So if a genetic algorithm is a search algorithm trying to maximize some fitness function would they by definition be maximization algorithms?
So let's say for some function f(x,y,z), I want to find what would give the highest value that would be maximization, right? And if I wanted to find the lowest value that would be minimization?
Yes, that's by definition true.
So if a genetic algorithm is a search algorithm trying to maximize some fitness function would they by definition be maximization algorithms?
Pretty much yes, although I'm not sure a "maximization algorithm" is a well-used term, and only if a genetic algorithm is defined as such, which I don't believe it is strictly.
Genetic algorithms can also try to minimize the distance to some goal function value, or minimize the function value itself, but then again, this can just be rephrased as maximization without loss of generality.
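That rephrasing is a one-liner in practice; a hypothetical sketch:

```python
def as_maximization(objective):
    """Wrap a function to be minimized so a maximizing GA can use it
    as a fitness: maximizing -f(x) is the same as minimizing f(x)."""
    return lambda x: -objective(x)

# Example: minimizing x**2 becomes maximizing -(x**2).
fitness = as_maximization(lambda x: x * x)
```

Any monotone decreasing transform (e.g. 1/(1+f) when f is nonnegative) works just as well; the negation is simply the most direct.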
Perhaps more significantly, there isn't a strict need to even have a function - the candidates just need to be comparable. If they have a total order, it's again possible to rephrase it as a maximization problem. If they don't have a total order, it might be a bit more difficult to get candidates objectively better than all the others, although nothing's stopping you from running the GA on this type of data.
In conclusion - trying to maximize a function is the norm (and possibly in line with how you'll mostly see it defined), but don't be surprised if you come across a GA that doesn't do this.
Are all genetic algorithms maximization algorithms?
No they aren't.
Genetic algorithms are popular approaches to multi-objective optimization (e.g. NSGA-II or SPEA-2 are very well known genetic algorithm based approaches).
For multi-objective optimization you aren't trying to maximize a function.
This is because scalarizing multi-objective optimization problems is seldom a viable approach (i.e. there isn't a single solution that simultaneously optimizes each objective), and what you are looking for is a set of nondominated solutions (or a representative subset of the Pareto-optimal solutions).
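The dominance relation behind that idea can be sketched as follows (assuming all objectives are minimized; names are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points no other point dominates (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Note there is no single "best" point here: the output is a set, which is why a plain maximization view does not fit the multi-objective case.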
There are also approaches to evolutionary algorithms which try to capture the open-endedness of natural evolution by searching for behavioral novelty. Even in an objective-based problem, such novelty search ignores the objective (see "Abandoning Objectives: Evolution through the Search for Novelty Alone" by Joel Lehman and Kenneth O. Stanley for details).

In regards to genetic algorithms

Currently, I'm studying genetic algorithms (personal, not required) and I've come across some topics I'm unfamiliar or just basically familiar with and they are:
Search Space
The "extreme" of a Function
I understand that one's search space is the collection of all possible solutions, but I would also like to know how one decides the range of their search space. Furthermore, I would like to know what an extreme is in relation to functions and how it is calculated.
I know I should probably understand what these are but so far I've only taken Algebra 2 and Geometry but I have ventured into physics, matrix/vector math, and data structures on my own so please excuse me if I seem naive.
Generally, all algorithms which look for a specific item in a collection of items are called search algorithms. When the collection of items is defined by a mathematical function (as opposed to existing in a database), it is called a search space.
One of the most famous problems of this kind is the travelling salesman problem, where an algorithm is sought which will, given a list of cities and their distances, find the shortest route for visiting each city only once. For this problem, the exact solution can be found only by examining all possible routes (the entire search space) and finding the shortest one (the route with the minimum distance, which is the extreme value in the search space). Such an algorithm (called an exhaustive search) has exponential time complexity (although it is still possible that a better exact algorithm exists), meaning that the worst-case running time increases exponentially as the number of cities increases.
This is where genetic algorithms come into play. Similar to other heuristic algorithms, genetic algorithms try to get close to the optimal solution by improving a candidate solution iteratively, with no guarantee that an optimal solution will actually be found.
This iterative approach has the problem that the algorithm can easily get "stuck" in a local extreme while trying to improve a solution, not knowing that there is a potentially better solution somewhere further away. Picture a fitness landscape: in order to get to the actual optimal solution (the global minimum), an algorithm currently examining solutions around a local minimum needs to "jump over" a large maximum in the search space. A genetic algorithm will rapidly locate such local optima, but it will usually fail to "sacrifice" this short-term gain to get a potentially better solution.
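To see the trap concretely, here is a toy sketch (hypothetical function and step size) of a purely greedy local search that gets stuck in the worse of two basins:

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy descent: accept a neighbour only if it is strictly
    better, so the search cannot 'jump over' an intervening hill."""
    for _ in range(iters):
        for nx in (x - step, x + step):
            if f(nx) < f(x):
                x = nx
                break
    return x

# A function with a local minimum near x = 2 and the global minimum near x = -2.
f = lambda x: (x - 2) ** 2 * (x + 2) ** 2 + 0.5 * x

# Started at x = 3, the climber descends into the basin around x = 2
# and stays trapped there, never reaching the better basin at x = -2.
```

A GA mitigates this by keeping a population and by mutation/crossover occasionally producing candidates in other basins, but as the answer notes, it can still converge prematurely.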
So, a summary would be:
exhaustive search:
- examines the entire search space (long time)
- finds global extremes
heuristic search (e.g. genetic algorithms):
- examines a part of the search space (short time)
- finds local extremes
Genetic algorithms are not good at fine-tuning towards a local optimum. If you want to find a global optimum, you should at least be able to approach, or find a strategy to approach, the local optima. Recently, some improvements have been developed to better find local optima:
"Genetic Algorithm for Informative Basis Function Selection from the Wavelet Packet Decomposition with Application to Corrosion Identification Using Acoustic Emission"
http://gbiomed.kuleuven.be/english/research/50000666/50000669/50488669/neuro_research/neuro_research_mvanhulle/comp_pdf/Chemometrics.pdf
In general, "search space" means what type of answers you are looking for. For example, if you are writing a genetic algorithm which builds bridges, tests them out, and then builds more, the answers you are looking for are bridge models (in some form). As another example, if you're trying to find a function which agrees with a set of sample inputs on some number of points, you might try to find a polynomial which has this property. In this instance your search space might be polynomials. You might make this simpler by putting a bound on the number of terms, the maximum degree of the polynomial, etc. So you could specify that you wanted to search for polynomials with integer exponents in the range [-4, 4].

In genetic algorithms, the search space is the set of possible solutions you could generate, and you need to limit it carefully so you avoid answers which are completely dumb. At my former university, a physics student wrote a GA to calculate the best configuration of atoms in a molecule to have low-energy properties, and it found a great solution having almost no energy. Unfortunately, the solution put all the atoms at the exact center of the molecule, which is physically impossible :-). GAs really home in on good solutions to your fitness function, so it's important to choose your search space so that it doesn't produce solutions with good fitness that are in reality "impossible answers."
As for the "extreme" of a function: this is simply the point at which the function takes its maximum (or minimum) value. With respect to genetic algorithms, you want the best solution to the problem you're trying to solve. If you're building a bridge, you're looking for the best bridge. In this scenario, you have a fitness function that can tell you "this bridge can take 80 pounds of weight" and "that bridge can take 120 pounds of weight", and then you look for solutions which have higher fitness values than others.

Some functions have simple extremes: you can find the extreme of a polynomial using simple high-school calculus. Other functions don't have a simple way to calculate their extremes; notably, highly nonlinear functions have extremes which might be difficult to find. Genetic algorithms excel at finding these solutions using a clever search technique which looks around for high points and then finds others. It's worth noting that there are other algorithms that do this as well, hill climbers in particular. The thing that makes GAs different is that other types of algorithms, if they find a local maximum, can get "stuck", blinded by a locally good solution, so that they never see a possibly much better solution farther away in the search space. There are other ways to address this for hill climbers as well; simulated annealing, for one.
Choosing the range usually requires some intuitive understanding of the problem you're trying to solve -- some expertise in the domain of the problem. There's really no guaranteed method to pick the range.
The extremes are just the minimum and maximum values of the function.
So for instance, if you're coding up a GA just for practice, to find the minimum of, say, f(x) = x^2, you know pretty well that your range should be +/- something because you already know that you're going to find the answer at x=0. But then of course, you wouldn't use a GA for that because you already have the answer, and even if you didn't, you could use calculus to find it.
One of the tricks in genetic algorithms is to take some real-world problem (often an engineering or scientific problem) and translate it, so to speak, into some mathematical function that can be minimized or maximized. But if you're doing that, you probably already have some basic notion where the solutions might lie, so it's not as hopeless as it sounds.
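For practice, a GA for the f(x) = x^2 example above can be sketched in a few lines (all parameter choices here are arbitrary illustrations, not recommendations):

```python
import random

def minimize_ga(f, lo=-10.0, hi=10.0, pop_size=30, gens=100):
    """A bare-bones real-valued GA: truncation selection, blend
    crossover, Gaussian mutation. Illustrative only."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                      # best (lowest f) first
        parents = pop[: pop_size // 2]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2              # blend crossover
            child += random.gauss(0, 0.1)    # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=f)

best = minimize_ga(lambda x: x * x)  # converges to roughly x = 0
```

As the answer says, for x^2 this is overkill; the point of the exercise is only to watch the population move toward the known answer inside the range you chose.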
The term "search space" is not restricted to genetic algorithms. It actually just means the set of candidate solutions to your optimization problem. An "extremum" is a solution that minimizes or maximizes the target function with respect to the search space.
Search space, simply put, is the space of all possible solutions. If you're looking for a shortest tour, the search space consists of all possible tours that can be formed. However, beware that it's not necessarily the space of all feasible solutions! It depends only on your encoding. If your encoding is e.g. a permutation, then the search space is that of all permutations, which is n! (factorial) in size. If you're minimizing a function of real-valued inputs, the search space is bounded by the hypercube of the real-valued inputs. It's basically infinite, but of course limited in practice by the precision of the computer.
If you're interested in genetic algorithms, maybe you're interested in experimenting with our software. We're using it to teach heuristic optimization in classes. It's GUI-driven and Windows-based, so you can start right away. We have included a number of problems, such as real-valued test functions, the traveling salesman problem, vehicle routing, etc. This allows you to, e.g., look at how the best solution of a certain TSP improves over the generations. It also exposes the problem of parameterizing metaheuristics and lets you find better parameters that will solve the problems more effectively. You can get it at http://dev.heuristiclab.com.

Genetic Programming and Search Algorithms

Is Genetic Programming currently capable of evolving one type of search algorithm into another? For example, has any experiment ever bred/mutated BubbleSort from QuickSort (see http://en.wikipedia.org/wiki/Sorting_algorithm)?
You might want to look at the work of W. Daniel Hillis from the 80s. He spent a great deal of time creating sorting networks by genetic programming. While he was more interested in solving the problem of sorting a constant number of objects (16-object sorting networks had been a major academic problem for nearly a decade), it would be a good idea to be familiar with his work if you're really interested in genetic sorting algorithms.
In the evolution of an algorithm for sorting a list of arbitrary length, you might also want to be familiar with the concept of co-evolution. I've built a co-evolutionary system before where the point was to have one genetic algorithm evolving sorting algorithms while another GA develops unsorted lists of numbers. The fitness of the sorter is its accuracy (plus a bonus for fewer comparisons if it is 100% accurate) and the fitness of the list generator is how many errors sort algorithms make in sorting its list.
To answer your specific question of whether bubble had ever been evolved from quick, I would have to say that I would seriously doubt it, unless the programmer's fitness function was both very specific and ill-advised. Yes, bubble is very simple, so maybe a GP whose fitness function was accuracy plus program size would eventually find bubble. However, why would a programmer select size instead of number of comparisons as a fitness function when it is the latter that determines runtime?
By asking if GP can evolve one algorithm into another, I'm wondering if you're entirely clear on what GP is. Ideally, each unique chromosome defines a unique sort. A population of 200 chromosomes represents 200 different algorithms. Yes, quick and bubble may be in there somewhere, but so are 198 other, potentially unnamed, methods.
There's no reason why GP couldn't evolve e.g. either type of algorithm. I'm not sure that it really makes sense to think of evolving one "into" the other. GP will simply evolve a program that comes ever-closer to a fitness function you define.
If your fitness function only looks at sort correctness (and assuming you have the proper building blocks for your GP to use) then it could very well evolve both BubbleSort and QuickSort. If you also include efficiency as a measure of fitness, then that might influence which of these would be determined as a better solution.
You could seed the GP with e.g. QuickSort and if you had an appropriate fitness function it certainly could eventually come up with BubbleSort - but it could come up with anything else that is fitter than QuickSort as well.
Now how long it takes the GP engine to do this evolution is another question...
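One plausible shape for such a fitness function (purely illustrative; the evolved sorter is assumed to return its output together with a comparison count) rewards sortedness and lightly penalizes comparisons, matching the point above that comparisons, not program size, determine runtime:

```python
def sortedness(seq):
    """Fraction of adjacent pairs already in order: 1.0 means sorted."""
    if len(seq) < 2:
        return 1.0
    return sum(a <= b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

def fitness(sorter, tests, comparison_weight=0.001):
    """Score an evolved sorter over test lists: accuracy minus a small
    penalty per comparison (sorter returns (output, comparison_count))."""
    score = 0.0
    for t in tests:
        out, comparisons = sorter(list(t))
        score += sortedness(out) - comparison_weight * comparisons
    return score / len(tests)
```

Under such a fitness, a GP run seeded with QuickSort would be pulled toward fewer comparisons, not toward BubbleSort, which is why the answers above doubt that particular evolution would ever occur.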
I'm not aware of one, and the particular direction you're suggesting in your example seems unlikely; it would take a sort of perverse fitness function, since bubble sort is in most measures worse than quicksort. It's not inconceivable that this could happen, but in general once you've got a well-understood algorithm, it's already pretty fit -- going to another one probably requires passing through some worse choices.
Being trapped in local minima isn't an unknown problem for most search strategies.
