Hi, I want to know the main similarities and differences between the Ant Colony Optimization algorithm, Particle Swarm Optimization, and the Genetic Algorithm.
Particle Swarm Optimization is a population-based stochastic optimization technique. It is motivated by the behavior of flocks of birds or schools of fish searching for a good food source. Each particle tracks the coordinates in the search space associated with the best solution it has found so far, and the search is driven by randomness.
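To make that description concrete, here is a minimal PSO sketch in Python (the sphere objective, swarm size, and coefficient values are illustrative assumptions, not taken from any particular library):

```python
import random

def sphere(x):
    # Toy objective to minimize: f(x) = sum of squares, optimum at the origin.
    return sum(v * v for v in x)

def pso(f, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=0):
    rng = random.Random(seed)
    # Random initial positions, zero initial velocities.
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best-so-far position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + pull toward personal and global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(sphere)
print(best_val)  # typically a value very close to 0
```

The velocity update is the core of PSO: each particle blends its previous motion with random pulls toward its own best position and the swarm's global best.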
Ant Colony Optimization, on the other hand, is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. The main requirement for designing an ACO algorithm is a constructive method, such as the one an ant uses to build different solutions through a sequence of decisions.
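As a toy illustration of that constructive method, here is an ACO sketch on a tiny travelling-salesman instance (the city coordinates, parameter values, and evaporation rate are all made-up assumptions for demonstration):

```python
import math
import random

# Tiny symmetric TSP instance (made-up coordinates).
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
n = len(cities)
dist = [[math.dist(cities[i], cities[j]) or 1e-9 for j in range(n)]
        for i in range(n)]
pher = [[1.0] * n for _ in range(n)]         # pheromone on each edge
alpha, beta, rho = 1.0, 2.0, 0.5             # pheromone weight, heuristic weight, evaporation
rng = random.Random(1)

def build_tour():
    # Constructive method: each ant extends a partial tour one decision at a time,
    # choosing the next city with probability proportional to pheromone and closeness.
    tour = [rng.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = [pher[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                   for j in choices]
        tour.append(rng.choices(choices, weights)[0])
    return tour

def tour_len(t):
    return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

best, best_len = None, float("inf")
for _ in range(50):                          # 50 iterations of 10 ants each
    tours = [build_tour() for _ in range(10)]
    pher = [[p * (1 - rho) for p in row] for row in pher]   # evaporation
    for t in tours:
        length = tour_len(t)
        if length < best_len:
            best, best_len = t, length
        for k in range(n):                   # deposit pheromone proportional to 1/length
            a, b = t[k], t[(k + 1) % n]
            pher[a][b] += 1.0 / length
            pher[b][a] += 1.0 / length
print(best_len)
```

Note how the solution is never mutated directly, as a GA would do; instead each ant rebuilds a complete tour from scratch, guided by the shared pheromone trails.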
Is it possible to add practical materials and theoretical materials using the genetic algorithm only, or are other algorithms needed?
It was suggested that I create a GA to solve the problem of identifying intersections of line segments in 2D space. I have been thinking of modifying the sweep line algorithm or a similar one (like Bentley–Ottmann) and building a GA around it.
I have been studying the literature, and I also came across the "Overlay of Subdivisions" problem, which is quite similar but applies to multiple layers of data. There is not a lot of information on that problem, and I am struggling to understand the algorithm.
I now think that GA optimization is impossible in the first case, but that it may be possible for the overlay-of-subdivisions problem.
Does anyone with more experience think this will work, or am I attempting the impossible?
I am wondering whether there are any algorithms that use the Akl-Toussaint throw-away heuristic to compute the convex hull in 3D (not just as a simple pre-processing, but as the algorithmic principle or building block). And if so, what would their expected time complexity be?
Also, I am interested in experimental comparisons of such algorithms with the more traditional algorithms in 3D (e.g., Clarkson-Shor).
I would appreciate it very much if you could point me to papers or web pages that shed some light on my questions. (Or answer them directly :-) )
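While I cannot point to specific papers, the throw-away idea does have a natural 3D analogue: find a few extreme points, then discard every point strictly inside the small polytope they span before running the main hull algorithm. Here is a rough sketch of that filter (purely illustrative, not any published algorithm's implementation; it uses the six axis-extreme points and brute-forces the faces of their polytope):

```python
import itertools

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def throw_away_filter(points):
    # Collect the extreme point in each of the six axis directions.
    extremes = set()
    for axis in range(3):
        extremes.add(min(points, key=lambda p: p[axis]))
        extremes.add(max(points, key=lambda p: p[axis]))
    ext = list(extremes)
    # Brute-force the faces of the small polytope: a triple of extreme points
    # spans a face plane if every extreme point lies on one side of it.
    faces = []
    for a, b, c in itertools.combinations(ext, 3):
        nrm = cross(sub(b, a), sub(c, a))
        if nrm == (0, 0, 0):
            continue
        sides = [dot(nrm, sub(p, a)) for p in ext]
        if all(s <= 1e-9 for s in sides):
            faces.append((a, nrm))                       # nrm already points outward
        elif all(s >= -1e-9 for s in sides):
            faces.append((a, (-nrm[0], -nrm[1], -nrm[2])))
    def strictly_inside(p):
        return all(dot(n, sub(p, a)) < -1e-9 for a, n in faces)
    # Points strictly inside the polytope cannot be hull vertices: throw them away.
    return [p for p in points if not strictly_inside(p)]
```

Only points that survive the filter need to be passed to the actual 3D hull computation; how much this saves depends entirely on the point distribution, which is presumably what the experimental comparisons would have to measure.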
Can someone explain to me the difference between a Genetic Algorithm and a Cellular Genetic Algorithm? All I know is that in the cellular variant individuals cannot mate randomly; they interact only with their neighbors. What are the other differences between the two algorithms?
The difference lies in how mating pairs are chosen. There is quite a lot more that can be said but it mostly revolves around implementation.
The usual method is to select two individuals at random, weighted so that fitter individuals are more likely to be chosen for mating.
In the cellular implementation, the individuals are connected in some way and are more likely to mate with closer neighbors while also taking fitness into account. The connection could be implied by placing individuals in a grid or it could be explicit by placing them on a graph. This tends to produce localized optimizations.
So, another key difference is how the problem is approached. If local optimization makes sense in the context of the problem, then cellular algorithms are well suited. Otherwise, they can just waste time and, in extreme cases, may consistently fail.
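A minimal sketch of the grid-based mating scheme described above, on the OneMax problem (the torus grid, von Neumann neighbourhood, and elitist replacement rule are illustrative choices, not the only way to build a cellular GA):

```python
import random

rng = random.Random(0)
W, H, L = 8, 8, 20                           # grid width/height, genome length

def fitness(ind):
    # OneMax: maximise the number of 1-bits.
    return sum(ind)

# Population laid out on a 2D grid rather than in a flat pool.
grid = [[[rng.randint(0, 1) for _ in range(L)] for _ in range(W)]
        for _ in range(H)]

def neighbours(x, y):
    # Von Neumann neighbourhood on a torus: the defining feature of a
    # cellular GA is that mates come only from here, not the whole population.
    return [grid[(y - 1) % H][x], grid[(y + 1) % H][x],
            grid[y][(x - 1) % W], grid[y][(x + 1) % W]]

for gen in range(60):
    new_grid = [[None] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            parent = grid[y][x]
            mate = max(neighbours(x, y), key=fitness)   # fittest neighbour
            cut = rng.randrange(1, L)                   # one-point crossover
            child = parent[:cut] + mate[cut:]
            child[rng.randrange(L)] ^= 1                # point mutation
            # Elitist replacement: keep the child only if it is at least as fit.
            new_grid[y][x] = child if fitness(child) >= fitness(parent) else parent
    grid = new_grid                                     # synchronous update

best = max((ind for row in grid for ind in row), key=fitness)
print(fitness(best))
```

Because mates are drawn only from adjacent cells, good genes spread slowly across the grid, which is exactly the localized-optimization behaviour described above.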
I want to implement a robot path-planning program using the hill-climbing algorithm.
I understand the basics of hill climbing, but I cannot come up with any ideas!
I have also Googled the hill-climbing algorithm, but I cannot find any information about robot path planning with it.
I am finding it hard to implement the start function, the neighbor-choosing function, and checking/drawing the path using Bresenham's line algorithm.
It all depends on which pathfinding algorithm you are using, of course, but essentially you just add a multiplier to the 'cost' associated with climbing a hill. Something as simple as:
//Pseudo-code
MovementCost = FlatDistance + (HillClimbAltitude * 2)
//Where 2 is the 'effort' involved in climbing compared to covering a flat distance
would suffice. This also easily accommodates a cost reduction where a decline (downhill) is involved. You could fancy it up by making the cost increase with the angle of the incline, etc.
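A slightly fuller, runnable version of that cost rule (the function name and the effort/discount factors are assumptions, to be tuned for your robot):

```python
import math

CLIMB_EFFORT = 2.0      # extra cost per unit of altitude gained
DESCENT_BONUS = 0.5     # cost reduction per unit of altitude lost

def movement_cost(ax, ay, az, bx, by, bz):
    """Cost of moving between two grid cells that also carry an altitude."""
    flat = math.hypot(bx - ax, by - ay)     # horizontal ("flat") distance
    dz = bz - az                            # altitude change (positive = uphill)
    if dz > 0:
        return flat + CLIMB_EFFORT * dz     # uphill: extra effort
    # Downhill: cheaper, but clamped so a descent is never free.
    return max(flat + DESCENT_BONUS * dz, 0.1 * flat)
```

Plugging this in as the edge weight of whatever pathfinder you use (A*, Dijkstra, or a hill-climbing search over paths) makes the planner naturally prefer flatter routes.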