I am new to heuristic methods of optimization and am learning about the different optimization algorithms available in this space, such as Genetic Algorithms (GA), PSO, DE, CMA-ES, etc. The general flow of any of these algorithms seems to be: initialise a population; select; apply crossover and mutation to update; evaluate; and the cycle continues. The initial step of population creation in a genetic algorithm seems to be that each member of the population is encoded by a chromosome, which is a bitstring of 0s and 1s, and then all the other operations are performed on it. The GA has simple population update methods, namely mutation and crossover, but the update methods differ in the other algorithms.
My query here is: do all the other heuristic algorithms also initialize the population as bitstrings of 0s and 1s, or do they use ordinary (integer or real) numbers?
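For concreteness, here is my rough sketch of that loop with a binary encoding (the OneMax fitness and all parameter values are just toy placeholders I made up):

    # Minimal GA loop sketch: binary chromosomes, tournament selection,
    # one-point crossover, bit-flip mutation. Fitness is toy OneMax.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 30, 50, 0.02

    def fitness(bits):               # toy objective: maximize number of 1s
        return sum(bits)

    def tournament(pop):             # pick the fitter of two random members
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    def crossover(p1, p2):           # one-point crossover
        cut = random.randrange(1, GENOME_LEN)
        return p1[:cut] + p2[cut:]

    def mutate(bits):                # flip each bit with small probability
        return [b ^ 1 if random.random() < MUT_RATE else b for b in bits]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(POP_SIZE)]
    best = max(pop, key=fitness)
    print(fitness(best), best)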
The representation of individuals in evolutionary algorithms (EAs) depends on the representation of a candidate solution. If you are solving a combinatorial problem, e.g. the knapsack problem, the final solution is a string of 0s and 1s, so it makes sense to use a binary representation in the EA. However, if you are solving a continuous black-box optimisation problem, then it makes sense to use a representation with continuous decision variables.
In the old days, GAs and other algorithms used only a binary representation, even for solving continuous problems. But nowadays, all the algorithms you mentioned have their own binary, continuous, and other variants. For example, PSO is known as a continuous solver, but there are mapping strategies, such as S-shaped or V-shaped transfer functions, for updating binary individuals (particles) for the next iteration.
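To illustrate, here is a minimal sketch of the S-shaped transfer step used in binary PSO variants: velocities stay continuous, and a sigmoid maps each velocity component to the probability that the corresponding bit becomes 1 (the velocity values below are made up):

    import math, random

    def s_shape(v):
        return 1.0 / (1.0 + math.exp(-v))   # sigmoid "S-shaped" transfer

    def update_position(velocity):
        # each dimension's bit is sampled from the transfer probability
        return [1 if random.random() < s_shape(v) else 0 for v in velocity]

    print(update_position([-2.0, 0.0, 3.0]))  # e.g. [0, 1, 1] most of the time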
My two cents: the choice of algorithm depends on the type of problem, and I personally wouldn't recommend a binary PSO as a first try. Maybe there are hidden benefits there, but they need investigation.
Please feel free to extend your question.
I have solved a single-objective convex optimization problem (related to interference reduction) using the cvx package with MATLAB. Now I want to extend the problem to a multi-objective one. What are the pros and cons of solving it using a genetic algorithm in comparison to the cvx package? I haven't read anything about genetic algorithms; I came across them while searching the net for multi-objective optimization.
Optimization algorithms based on derivatives (or gradients), including convex optimization algorithms, essentially try to find a local minimum. The pros and cons are as follows.
Pros:
1. It can be extremely fast, since it only follows the path given by the derivative.
2. Sometimes it achieves the global minimum (e.g., when the problem is convex).
Cons:
1. When the problem is highly nonlinear and non-convex, the solution depends on the initial point, so there is a high probability that the solution found is far from the global optimum.
2. It is not well suited to multi-objective optimization problems.
Because of the disadvantages described above, for multi-objective optimization we generally use evolutionary algorithms. Genetic algorithms are one kind of evolutionary algorithm.
Evolutionary algorithms developed for multi-objective optimization problems are fundamentally different from gradient-based algorithms. They are population-based, i.e., they maintain multiple solutions (hundreds or thousands of them), whereas the latter maintain only one solution.
NSGA-II is an example: https://ieeexplore.ieee.org/document/996017, https://mae.ufl.edu/haftka/stropt/Lectures/multi_objective_GA.pdf, https://web.njit.edu/~horacio/Math451H/download/Seshadri_NSGA-II.pdf
The purpose of multi-objective optimization is to find the Pareto surface (or optimal trade-off surface). Since the surface consists of multiple points, population-based evolutionary algorithms are well suited to it.
(You can also solve a series of scalarized single-objective problems using gradient-based algorithms, but unless the problem is convex, this approach cannot recover the Pareto-optimal points accurately.)
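To make the "set of nondominated solutions" idea concrete, here is a minimal sketch (the helper names are my own) of Pareto dominance and of filtering a population down to its nondominated front, assuming all objectives are minimized:

    def dominates(a, b):
        # a dominates b: no objective worse, at least one strictly better
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    pop = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0)]  # (f1, f2) values
    print(pareto_front(pop))  # (3.0, 4.0) is dominated by (2.0, 3.0)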
I'm not sure if my understanding of maximization and minimization is correct.
So let's say for some function f(x,y,z), I want to find what would give the highest value; that would be maximization, right? And if I wanted to find the lowest value, that would be minimization?
So if a genetic algorithm is a search algorithm trying to maximize some fitness function, would it by definition be a maximization algorithm?
So let's say for some function f(x,y,z), I want to find what would give the highest value; that would be maximization, right? And if I wanted to find the lowest value, that would be minimization?
Yes, that's by definition true.
So if a genetic algorithm is a search algorithm trying to maximize some fitness function, would it by definition be a maximization algorithm?
Pretty much yes, although I'm not sure "maximization algorithm" is a widely used term, and only if a genetic algorithm is defined as such, which I don't believe it strictly is.
Genetic algorithms can also try to minimize the distance to some goal function value, or minimize the function value itself, but then again, this can just be rephrased as maximization without loss of generality.
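A tiny sketch of that rephrasing (the function f is made up): any minimization target can be fed to a maximizing GA by negating it, or by maximizing closeness to a goal value:

    def f(x):
        return (x - 3) ** 2          # something we want to minimize

    fitness_neg  = lambda x: -f(x)              # maximize the negation
    goal = 0.0
    fitness_goal = lambda x: -abs(f(x) - goal)  # maximize closeness to a goal

    print(max(range(-10, 11), key=fitness_neg))  # 3, the minimizer of f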
Perhaps more significantly, there isn't a strict need to even have a function: the candidates just need to be comparable. If they have a total order, it's again possible to rephrase the problem as maximization. If they don't have a total order, it might be a bit more difficult to get candidates objectively better than all the others, although nothing's stopping you from running the GA on this type of data.
In conclusion - trying to maximize a function is the norm (and possibly in line with how you'll mostly see it defined), but don't be surprised if you come across a GA that doesn't do this.
Are all genetic algorithms maximization algorithms?
No, they aren't.
Genetic algorithms are popular approaches to multi-objective optimization (e.g. NSGA-II and SPEA2 are very well known genetic-algorithm-based approaches).
For multi-objective optimization you aren't trying to maximize a single function.
This is because scalarizing a multi-objective optimization problem is seldom viable (i.e. there isn't a single solution that simultaneously optimizes each objective); what you are looking for instead is a set of nondominated solutions (or a representative subset of the Pareto-optimal solutions).
There are also approaches to evolutionary algorithms which try to capture the open-endedness of natural evolution by searching for behavioral novelty. Even in an objective-based problem, such novelty search ignores the objective (see Abandoning Objectives: Evolution through the Search for Novelty Alone by Joel Lehman and Kenneth O. Stanley for details).
Is it possible to warm-start any of the well-known algorithms (Dijkstra, Floyd-Warshall, etc.) for the APSP problem, so as to reduce the time complexity, and potentially the computation time?
Let's say the graph is represented by an NxN matrix. I am only considering changes in one or more matrix entries (≪ N), i.e. the distances between the corresponding vertices, between any two calls to the algorithm. Can we use the solution from the first call, plus just the incremental changes to the matrix, to speed up the calculation on the second call? I am primarily looking at dense matrices, but if there are known methods for sparse matrices, please feel free to share. Thanks.
I'm not aware of an incremental algorithm for APSP. However, there is an incremental version of A* for solving SSSP called Lifelong Planning A* (LPA*, rarely also called Incremental A*), which seems to be what you're asking about in the second paragraph.
Here is a link to the original paper. You can find more information about it in this post about A* variations.
An interesting experimental study is Experimental Analysis of Dynamic All Pairs Shortest Path Algorithms [Demetrescu, Emiliozzi, Italiano]:
We present the results of an extensive computational study on dynamic algorithms for all pairs shortest path problems. We describe our implementations of the recent dynamic algorithms of King [18] and of Demetrescu and Italiano [7], and compare them to the dynamic algorithm of Ramalingam and Reps [25] and to static algorithms on random, real-world and hard instances. Our experimental data suggest that some of the dynamic algorithms and their algorithmic techniques can be really of practical value in many situations.
Another interesting distributed algorithm is Engineering a New Algorithm for Distributed Shortest Paths on Dynamic Networks [Cicerone, D’Angelo, Di Stefano, Frigioni, Maurizio]:
We study the problem of dynamically updating all-pairs shortest paths in a distributed network while edge update operations occur to the network. We consider the practical case of a dynamic network in which an edge update can occur while one or more other edge updates are under processing.
You can find more resources searching for All Pairs Shortest Paths on Dynamic Networks.
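For intuition about why the papers above focus on the general dynamic case: a single edge-weight decrease is the easy part and can be absorbed in O(N^2) from an existing distance matrix. Here is a minimal sketch, assuming a directed graph and a precomputed matrix d with d[i][i] = 0 (weight increases are the genuinely hard case and need the machinery from those papers):

    def apply_edge_decrease(d, u, v, w):
        # If edge (u, v) drops to weight w, any improved i -> j path must
        # pass through the cheapened edge, so one O(N^2) sweep suffices.
        n = len(d)
        for i in range(n):
            for j in range(n):
                via = d[i][u] + w + d[v][j]   # best path forced through (u, v)
                if via < d[i][j]:
                    d[i][j] = via
        return d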
Currently, I'm studying genetic algorithms (personal interest, not required) and I've come across some topics I'm unfamiliar with, or only basically familiar with, namely:
Search Space
The "extreme" of a Function
I understand that a search space is a collection of all possible solutions, but I would also like to know how one decides the range of their search space. Furthermore, I would like to know what an extreme is in relation to functions and how it is calculated.
I know I should probably understand what these are, but so far I've only taken Algebra 2 and Geometry. I have ventured into physics, matrix/vector math, and data structures on my own, so please excuse me if I seem naive.
Generally, all algorithms that look for a specific item in a collection of items are called search algorithms. When the collection of items is defined by a mathematical function (as opposed to existing in a database), it is called a search space.
One of the most famous problems of this kind is the travelling salesman problem (TSP), where we seek an algorithm that, given a list of cities and their distances, finds the shortest route visiting each city exactly once. For this problem, the exact solution can be guaranteed only by examining all possible routes (the entire search space) and keeping the shortest one (the route with the minimum distance, which is the extreme value in the search space). Such an exhaustive search has factorial time complexity, and even the best known exact algorithms are still exponential, meaning that the worst-case running time explodes as the number of cities increases.
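For instance, here is a minimal brute-force TSP sketch (the four-city distance table is made up) that literally walks the entire search space and keeps its extreme value:

    from itertools import permutations

    dist = {('A', 'B'): 2, ('A', 'C'): 9, ('A', 'D'): 10,
            ('B', 'C'): 6, ('B', 'D'): 4, ('C', 'D'): 8}

    def d(a, b):                          # symmetric distance lookup
        return dist.get((a, b)) or dist[(b, a)]

    def tour_length(tour):                # closed tour, back to the start
        return sum(d(tour[i], tour[(i + 1) % len(tour)])
                   for i in range(len(tour)))

    cities = ['A', 'B', 'C', 'D']
    # fix the first city to avoid counting rotations of the same tour
    best = min((('A',) + p for p in permutations(cities[1:])),
               key=tour_length)
    print(best, tour_length(best))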
This is where genetic algorithms come into play. Like other heuristic algorithms, genetic algorithms try to get close to the optimal solution by iteratively improving candidate solutions, with no guarantee that the optimal solution will actually be found.
This iterative approach has the problem that the algorithm can easily get "stuck" at a local extreme while trying to improve a solution, not knowing that a potentially better solution exists somewhere further away. Picture a curve with a shallow local minimum separated from the global minimum by a large peak: to get to the actual optimal solution (the global minimum), an algorithm currently examining solutions around the local minimum needs to "jump over" that peak in the search space. A genetic algorithm will rapidly locate such local optima, but it will usually fail to "sacrifice" this short-term gain to reach a potentially better solution.
So, a summary would be:
exhaustive search:
- examines the entire search space (long time)
- finds global extremes
heuristic (e.g. genetic algorithms):
- examines a part of the search space (short time)
- finds local extremes
Genetic algorithms are not good at fine-tuning towards a local optimum. If you want to find a global optimum, you should at least be able to approach, or have a strategy for approaching, the local optima. Recently, some improvements have been developed to better find local optima; see, for example:
"GENETIC ALGORITHM FOR INFORMATIVE BASIS FUNCTION SELECTION
FROM THE WAVELET PACKET DECOMPOSITION WITH APPLICATION TO
CORROSION IDENTIFICATION USING ACOUSTIC EMISSION"
http://gbiomed.kuleuven.be/english/research/50000666/50000669/50488669/neuro_research/neuro_research_mvanhulle/comp_pdf/Chemometrics.pdf
In general, "search space" means, what type of answers are you looking for. For example, if you are writing a genetic algorithm which builds bridges, tests them out, and then builds more, the answers you are looking for are bridge models (in some form). As another example, if you're trying to find a function which agrees with a set of sample inputs on some number of points, you might try to find a polynomial which has this property. In this instance your search space might be polynomials. You might make this simpler by putting a bound on the number of terms, maximum degree of the polynomial, etc... So you could specify that you wanted to search for polynomials with integer exponents in the range [-4, 4]. In genetic algorithms, the search space is the set of possible solutions you could generate. In genetic algorithms you need to carefully limit your search space so you avoid answers which are completely dumb. At my former university, a physics student wrote a program which was a GA to calculate the best configuration of atoms in a molecule to have low energy properties: they found a great solution having almost no energy. Unfortunately, their solution put all the atoms at the exact center of the molecule, which is physically impossible :-). GAs really hone in on good solutions to your fitness functions, so it's important to choose your search space so that it doesn't produce solutions with good fitness but are in reality "impossible answers."
As for the "extreme" of a function. This is simply the point at which the function takes its maximum value. With respect to genetic algorithms, you want the best solution to the problem you're trying to solve. If you're building a bridge, you're looking for the best bridge. In this scenario, you have a fitness function that can tell you "this bridge can take 80 pounds of weight" and "that bridge can take 120 pounds of weight" then you look around for solutions which have higher fitness values than others. Some functions have simple extremes: you can find the extreme of a polynomial using simple high school calculus. Other functions don't have a simple way to calculate their extremes. Notably, highly nonlinear functions have extremes which might be difficult to find. Genetic algorithms excel at finding these solutions using a clever search technique which looks around for high points and then finds others. It's worth noting that there are other algorithms that do this as well, hill climbers in particular. The things that make GAs different is that if you find a local maximum, other types of algorithms can get "stuck," blinded by a locally good solution, so that they never see a possibly much better solution farther away in the search space. There are other ways to adapt hill climbers to this as well, simulated annealing, for one.
Choosing the range usually requires some intuitive understanding of the problem you're trying to solve--some expertise in the problem's domain. There's really no guaranteed method for picking it.
The extremes are just the minimum and maximum values of the function.
So, for instance, if you're coding up a GA just for practice, to find the minimum of, say, f(x) = x^2, you know pretty well that your range should be +/- something, because you already know you're going to find the answer at x = 0. But then of course you wouldn't use a GA for that, because you already have the answer, and even if you didn't, you could use calculus to find it.
One of the tricks in genetic algorithms is to take some real-world problem (often an engineering or scientific problem) and translate it, so to speak, into a mathematical function that can be minimized or maximized. But if you're doing that, you probably already have some basic notion of where the solutions might lie, so it's not as hopeless as it sounds.
The term "search space" does not restrict to genetic algorithms. I actually just means the set of solutions to your optimization problem. An "extremum" is one solution that minimizes or maximizes the target function with respect to the search space.
Simply put, the search space is the space of all possible solutions. If you're looking for a shortest tour, the search space consists of all tours that can be formed. However, beware that it's not the space of all feasible solutions! It depends only on your encoding. If your encoding is e.g. a permutation, then the search space is that of permutations, which is n! (factorial) in size. If you're looking to minimize a certain function with real-valued inputs, the search space is bounded by the hypercube of those real-valued inputs. It's basically infinite, but of course limited by the precision of the computer.
If you're interested in genetic algorithms, maybe you'd like to experiment with our software. We use it to teach heuristic optimization in classes. It's GUI-driven and Windows-based, so you can start right away. We have included a number of problems, such as real-valued test functions, the traveling salesman problem, vehicle routing, etc. This allows you to, e.g., look at how the best solution of a certain TSP improves over the generations. It also exposes the problem of parameterizing metaheuristics and lets you find better parameters that will solve the problems more effectively. You can get it at http://dev.heuristiclab.com.
Several of my peers have mentioned that "linear algebra" is very important when studying algorithms. I've studied a variety of algorithms and taken a few linear algebra courses and I don't see the connection. So how is linear algebra used in algorithms?
For example, what interesting things can one do with a connectivity matrix for a graph?
Three concrete examples:
Linear algebra is the foundation of modern 3D graphics. This is essentially the same thing you learned in school: the data is kept in a 3D space that is projected onto a 2D surface, which is what you see on your screen.
Most search engines are based on linear algebra. The idea is to represent each document as a vector in a high-dimensional space and see how the vectors relate to each other in this space. This is used by the Lucene project, among others. See the vector space model (VSM).
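As a toy illustration of the vector space model (not Lucene's actual implementation), documents can be reduced to term-count vectors and compared by cosine similarity:

    import math
    from collections import Counter

    def cosine(a, b):
        # cosine of the angle between two sparse term-count vectors
        terms = set(a) | set(b)
        dot = sum(a[t] * b[t] for t in terms)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb)

    d1 = Counter("the quick brown fox".split())
    d2 = Counter("the lazy brown dog".split())
    print(cosine(d1, d2))  # shared terms "the" and "brown" give a score > 0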
Some modern lossy compression algorithms, such as the one used by the Ogg Vorbis format, are based on linear algebra, or more specifically a method called vector quantization.
Basically, it comes down to the fact that linear algebra is a very powerful tool when dealing with multiple variables, and there are enormous benefits to using it as a theoretical foundation when designing algorithms. In many cases this foundation isn't as apparent as you might think, but that doesn't mean it isn't there. It's quite possible that you've already implemented algorithms which would have been incredibly hard to derive without linalg.
A cryptographer would probably tell you that a grasp of number theory is very important when studying algorithms. And he'd be right--for his particular field. Statistics has its uses too--skip lists, hash tables, etc. The usefulness of graph theory is even more obvious.
There's no inherent link between linear algebra and algorithms; there's an inherent link between mathematics and algorithms.
Linear algebra is a field with many applications, and the algorithms that draw on it therefore have many applications as well. You've not wasted your time studying it.
Ha, I can't resist putting this here (even though the other answers are good):
The $25 billion dollar eigenvector.
I'm not going to lie... I never even read the whole thing... maybe I will now :-).
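For context, that paper is about the linear algebra behind Google's PageRank: the ranking is the dominant eigenvector of a link matrix, found by power iteration. A minimal sketch with a made-up three-page link graph and the usual 0.85 damping factor:

    import numpy as np

    # column j holds where page j's links point (column-stochastic matrix)
    links = np.array([[0.0, 0.5, 0.5],
                      [0.5, 0.0, 0.5],
                      [0.5, 0.5, 0.0]])
    damping = 0.85
    n = links.shape[0]
    G = damping * links + (1 - damping) / n * np.ones((n, n))

    rank = np.ones(n) / n
    for _ in range(100):        # power iteration converges to the eigenvector
        rank = G @ rank         # G is column-stochastic, so the sum stays 1
    print(rank)                 # symmetric graph -> equal ranks, ~1/3 each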
I don't know if I'd phrase it as "linear algebra is very important when studying algorithms". I'd almost put it the other way around: many, many real-world problems end up requiring you to solve a set of linear equations, and if you have to tackle one of those problems, you'll need to know about some of the many algorithms for dealing with them. Many of those algorithms were developed when "computer" was a job title, not a machine. Consider Gaussian elimination and the various matrix decomposition algorithms, for example. There is a lot of very sophisticated theory on how to solve those problems for very large matrices.
Most common methods in machine learning end up having an optimization step which requires solving a set of simultaneous equations. If you don't know linear algebra, you'll be completely lost.
Many signal processing algorithms are based on matrix operations, e.g. Fourier transform, Laplace transform, ...
Optimization problems can often be reduced to solving linear equation systems.
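As a small illustration of that reduction (toy numbers of my own choosing): minimizing the unconstrained quadratic f(x) = 0.5 x^T A x - b^T x, with A symmetric positive definite, amounts to solving the linear system A x = b:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])     # symmetric positive definite
    b = np.array([1.0, 2.0])

    x = np.linalg.solve(A, b)      # the minimizer satisfies A x = b
    print(x)
    print(A @ x - b)               # gradient at x: ~[0, 0]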
Linear algebra is also important in many algorithms in computer algebra, as you might have guessed. For example, if you can reduce a problem to saying that a polynomial is zero, where the coefficients of the polynomial are linear in the variables x1, …, xn, then you can solve for the values of x1, …, xn that make the polynomial identically zero by equating the coefficient of each power of x to 0 and solving the resulting linear system. This is called the method of undetermined coefficients, and is used for example in computing partial fraction decompositions or in integrating rational functions.
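For instance, a sketch of the standard partial-fraction computation, which reduces to a 2x2 linear system:

    \frac{1}{x^2 - 1} = \frac{A}{x - 1} + \frac{B}{x + 1}
    \implies 1 = A(x + 1) + B(x - 1) = (A + B)\,x + (A - B)
    \implies A + B = 0, \quad A - B = 1
    \implies A = \tfrac{1}{2}, \quad B = -\tfrac{1}{2}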
As for graph theory: the coolest thing about an adjacency matrix is that if you take the nth power M^n of the adjacency matrix of an unweighted graph (each entry is either 0 or 1), then each entry (i, j) is the number of walks of length n from vertex i to vertex j. And if that isn't just cool, then I don't know what is.
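A quick sketch with numpy, using the path graph 1-2-3:

    import numpy as np

    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])      # unweighted graph: edges 1-2 and 2-3

    A2 = np.linalg.matrix_power(A, 2)
    print(A2)
    # [[1 0 1]
    #  [0 2 0]
    #  [1 0 1]]  -> entry (0, 2) = 1: one walk of length 2 from vertex 1 to 3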
All of the answers here are good examples of linear algebra in algorithms.
As a meta answer, I will add that you might be using linear algebra in your algorithms without knowing it. Compilers that optimize with SSE(2) typically vectorize your code by manipulating many data values in parallel. This is, in essence, elementary linear algebra.
It depends on what type of "algorithms" you mean.
Some examples:
Machine-learning/statistics algorithms: linear regression (least squares, ridge, lasso); see the sketch after this list.
Lossy compression of signals and other processing (face recognition, etc). See Eigenfaces
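As a tiny illustration of the least-squares case (with made-up data), the fit is literally a linear algebra solve:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 2.9, 5.2, 6.8])          # roughly y = 2x + 1

    X = np.column_stack([x, np.ones_like(x)])   # design matrix [x, 1]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    slope, intercept = coeffs
    print(slope, intercept)                     # close to 2 and 1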
For example, what interesting things can one do with a connectivity matrix for a graph?
Many algebraic properties of the matrix are invariant under permutations of vertices (for example abs(determinant)), so if two graphs are isomorphic, these values will be equal.
This is a source of good heuristics for determining whether two graphs are not isomorphic, since of course equality of such invariants does not guarantee the existence of an isomorphism.
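A minimal sketch of that heuristic (the eigenvalue-spectrum check is an extra invariant I've added alongside abs(det); both are preserved when vertices are relabelled):

    import numpy as np

    def maybe_isomorphic(A, B):
        # compare permutation-invariant quantities; if any differ,
        # the graphs cannot be isomorphic (necessary, not sufficient)
        if not np.isclose(abs(np.linalg.det(A)), abs(np.linalg.det(B))):
            return False
        return np.allclose(np.sort(np.linalg.eigvalsh(A)),
                           np.sort(np.linalg.eigvalsh(B)))

    path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    print(maybe_isomorphic(path, tri))   # False: invariants differ
    print(maybe_isomorphic(path, path))  # True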
Check algebraic graph theory for a lot of other interesting techniques.