Practical applications of bin packing using genetic algorithms

I am doing research on genetic algorithms for solving the bin packing problem. I understand the process now, but since the final output is a set of solutions for one list of items, I cannot figure out why we need a set of solutions when one solution should be enough. What are the applications where a GA solution would be better than classical approaches?
It would be great if someone could point me to scholarly or non-scholarly links that explain practical applications of bin packing using genetic algorithms. I have visited the list of GA applications on Wikipedia, but it is not specific to bin packing.

Background
The classical version of bin packing is a well understood problem for which relatively large instances can be efficiently solved to optimality or near-optimality using methods such as integer programming with column generation.
However, these models may not be as effective for solving special cases of bin packing which exhibit complex constraints or objectives (e.g., bin packing with conflicts or profits, multidimensional bin packing, bin packing with fragile objects, bin packing with load balancing, etc.).
In your case
You don't need a set of solutions, it's just that the way the genetic algorithm (GA) is designed, you end up with a set of solutions (your current population) once you stop its execution. You just pick the best of those solutions.
One advantage of GAs over classical methods for bin packing is their ability to efficiently handle problems with complex constraints. For instance, here is a paper which uses GAs to solve the three-dimensional, single-container, arbitrary-sized, rectangular-prismatic bin packing optimization problem (isn't that a mouthful!). While classical methods are usually very efficient on convex problems such as traditional bin packing, once you add non-convex constraints the problems become much harder to solve. For such problems, other methods such as GAs (among others) tend to do very well.
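As an illustration (not taken from the paper above), here is a minimal sketch of one simple GA encoding often used in teaching for classical one-dimensional bin packing: a chromosome is a permutation of the items, a first-fit decoder turns it into bins, and the fitness is the number of bins used. All parameter values are arbitrary illustrative choices:

```python
import random

def first_fit(order, sizes, capacity):
    """Decode a permutation of item indices into bins via first-fit."""
    bins, space = [], []  # bins of item indices, remaining capacity per bin
    for i in order:
        for b, rem in enumerate(space):
            if sizes[i] <= rem:
                bins[b].append(i)
                space[b] -= sizes[i]
                break
        else:  # no existing bin fits: open a new one
            bins.append([i])
            space.append(capacity - sizes[i])
    return bins

def fitness(order, sizes, capacity):
    return len(first_fit(order, sizes, capacity))  # fewer bins is better

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    tail = [g for g in p2 if g not in hole]
    return tail[:a] + p1[a:b] + tail[a:]

def solve(sizes, capacity, pop_size=50, generations=200):
    pop = [random.sample(range(len(sizes)), len(sizes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, sizes, capacity))
        survivors = pop[:pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:  # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda o: fitness(o, sizes, capacity))
    return first_fit(best, sizes, capacity)  # return the single best solution

sizes = [7, 5, 4, 4, 3, 3, 2, 2, 2, 1]
print(solve(sizes, capacity=10))
```

Note how the last line answers the original question: the whole final population exists only as the algorithm's working state, and you simply decode and return its best member.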

Related

Is applying AI OK and/or practical for finding the optimal solution to an algorithmic problem?

Both in learning environments and in practice, I have from time to time used different algorithms to solve problems. But the more I use them, the more it seems that AI could be deployed to try to find the optimal solution, especially for NP-complete problems, since the AI's "progression" is easily tracked.
If, for example, we never knew how to solve the knapsack problem efficiently, I wonder: is applying AI to finding the optimal solution to a given problem practical and/or ever OK?
AI algorithms in general can find an approximation to basically any function. They are so powerful because this is true even for extremely complex functions with many input parameters and/or many output parameters and/or a very complicated internal structure.
On the other hand, there is no known way to solve NP-complete problems "quickly". In practice, you would often have to search through a huge solution space to find the optimal solution. This is why people use heuristic methods and approximation algorithms to efficiently find a "sufficiently good" solution instead.
So yes, you can use AI to find a good approximate solution (and possibly even a better one than with traditional heuristics) to a computationally hard problem.
But no, if the problem is NP-complete, you still cannot know that you have found the optimal solution.

Multi-objective convex optimization using a genetic algorithm or the cvx tool

I have solved a single-objective convex optimization problem (related to interference reduction) using the cvx package with MATLAB. Now I want to extend the problem to a multi-objective one. What are the pros and cons of solving it using a genetic algorithm compared to the cvx package? I haven't read anything about genetic algorithms; I came across them by searching the net for multi-objective optimization.
Optimization algorithms based on derivatives (or gradients), including convex optimization algorithms, essentially try to find a local minimum. The pros and cons are as follows.
Pros:
1. It can be extremely fast, since it only follows the path given by the derivative.
2. It sometimes achieves the global minimum (e.g., when the problem is convex).
Cons:
1. When the problem is highly nonlinear and non-convex, the solution depends on the initial point, so there is a high probability that the solution found is far from the global optimum.
2. It is not well suited to multi-objective optimization problems.
Because of the disadvantages described above, evolutionary algorithms are generally used for multi-objective optimization; genetic algorithms are one family of evolutionary algorithms.
Evolutionary algorithms developed for multi-objective optimization problems are fundamentally different from gradient-based algorithms: they are population-based, i.e., they maintain many solutions (hundreds or thousands of them), whereas gradient-based methods maintain only one.
NSGA-II is an example: https://ieeexplore.ieee.org/document/996017, https://mae.ufl.edu/haftka/stropt/Lectures/multi_objective_GA.pdf, https://web.njit.edu/~horacio/Math451H/download/Seshadri_NSGA-II.pdf
The purpose of multi-objective optimization is to find the Pareto surface (the optimal trade-off surface). Since this surface consists of multiple points, population-based evolutionary algorithms are a natural fit.
(You can instead solve a series of single-objective problems, e.g., weighted sums of the objectives, using gradient-based algorithms, but unless the problem is convex this approach can miss parts of the Pareto front.)
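To make the "population approximates the Pareto surface" idea concrete, here is a minimal sketch (the function names and the two sample objectives are illustrative choices, not from NSGA-II itself) of extracting the non-dominated set from a population of objective vectors, which is the core bookkeeping step in NSGA-II-style algorithms:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a population of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two conflicting objectives: f1(x) = x^2 and f2(x) = (x - 2)^2.
population = [(x * x, (x - 2) ** 2) for x in [i / 10 for i in range(-20, 41)]]
front = pareto_front(population)
print(len(front), "non-dominated points out of", len(population))
```

Because every member of the population is a candidate trade-off, one run of a population-based method can return the whole front, whereas a gradient method returns one point per run.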

Can you propose two programs to make as projects for my evolutionary computing class?

My teacher wants us to do two projects, but we haven't covered many topics and I'm not clear on what evolutionary computing is used for. Can you give me some ideas, please?
A good place to start is to identify something that can be improved with successive alterations. A great example is the design of a simple windmill propeller or turbine. Start off with a random design and arrangement of the blades. Input this geometry into the genetic algorithm and define the fitness as how fast the propeller spins under a fixed air flow (from a fan, for example). Even if you do not build the fan, this is an interesting one to write about and should get you some marks!
Some great problems for evolutionary programming are the Traveling Salesman Problem and the Knapsack Problem (a small knapsack GA sketch follows after this answer).
You may also want to consider another NP-complete problem like Sudoku. Sudoku is a good example of a problem that can be solved with stochastic optimization, but more efficient techniques exist; there are a number of Sudoku solutions here.
You could compare the difficulty of applying evolutionary programming to Sudoku versus the Traveling Salesman or Knapsack problems, and explain why the algorithm works well for the first two but not for Sudoku.
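As a starting point for the knapsack suggestion, here is a minimal, illustrative GA sketch (bit-string encoding, hard penalty for overweight solutions; the item data and all parameters are made up for the example):

```python
import random

values  = [60, 100, 120, 70, 30, 90]
weights = [10, 20, 30, 15, 5, 25]
CAPACITY = 60

def fitness(bits):
    v = sum(val for val, b in zip(values, bits) if b)
    w = sum(wt for wt, b in zip(weights, bits) if b)
    return v if w <= CAPACITY else 0  # infeasible picks score zero

def evolve(pop_size=40, generations=100, p_mut=0.05):
    n = len(values)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_gen = pop[:2]  # elitism: keep the two best
        while len(next_gen) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # select from top half
            cut = random.randrange(1, n)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - b if random.random() < p_mut else b for b in child]
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=fitness)

best = evolve()
print(best, "value:", fitness(best))
```

For a class project you could then study how the penalty scheme, mutation rate, or population size affects how often the GA finds the true optimum (which you can verify by brute force at this size).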

In regards to genetic algorithms

Currently, I'm studying genetic algorithms (personal, not required) and I've come across some topics I'm unfamiliar or just basically familiar with and they are:
Search Space
The "extreme" of a Function
I understand that one's search space is a collection of all possible solutions, but I also wish to know how one would decide the range of their search space. Furthermore, I would like to know what an extreme is in relation to functions and how it is calculated.
I know I should probably understand what these are but so far I've only taken Algebra 2 and Geometry but I have ventured into physics, matrix/vector math, and data structures on my own so please excuse me if I seem naive.
Generally, all algorithms which look for a specific item in a collection of items are called search algorithms. When the collection of items is defined by a mathematical function (as opposed to existing in a database), it is called a search space.
One of the most famous problems of this kind is the travelling salesman problem: given a list of cities and the distances between them, find the shortest route that visits each city exactly once. A guaranteed way to find the exact solution is to examine all possible routes (the entire search space) and keep the shortest one (the route with the minimum total distance, which is the extreme value in the search space). Such an exhaustive search takes factorial time in the number of cities, and even the best known exact algorithms still take exponential time, meaning the worst-case running time explodes as the number of cities increases.
This is where genetic algorithms come into play. Like other heuristic algorithms, genetic algorithms try to get close to the optimal solution by iteratively improving candidate solutions, with no guarantee that an optimal solution will actually be found.
This iterative approach has the problem that the algorithm can easily get "stuck" in a local extreme while trying to improve a solution, not knowing that there is a potentially better solution somewhere further away. Picture a function with a local minimum separated from the global minimum by a large maximum: to get to the actual optimal solution, an algorithm currently examining solutions around the local minimum needs to "jump over" that maximum in the search space. A genetic algorithm will rapidly locate such local optima, but it will usually fail to "sacrifice" this short-term gain to get a potentially better solution.
So, a summary would be:
Exhaustive search:
- examines the entire search space (long running time)
- finds global extremes
Heuristics (e.g., genetic algorithms):
- examine only part of the search space (short running time)
- find local extremes
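To see why exhaustive search is the slow option, here is a tiny illustrative brute-force TSP (with a hypothetical random distance matrix) that enumerates every route; with n cities it checks (n-1)! routes, so each added city multiplies the work:

```python
import itertools, math, random

n = 9
random.seed(1)
# Hypothetical random distances, just for illustration.
dist = [[0 if i == j else random.randint(1, 100) for j in range(n)] for i in range(n)]

def route_length(route):
    """Total length of a closed tour."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

# Exhaustive search: fix city 0 as the start and try all (n-1)! orderings.
best = min(itertools.permutations(range(1, n)),
           key=lambda p: route_length([0, *p]))
print(f"checked {math.factorial(n - 1)} routes; best length = {route_length([0, *best])}")
```

At n = 9 this is 40,320 routes and runs instantly; at n = 20 it would be about 1.2 x 10^17 routes, which is exactly the gap heuristics are meant to bridge.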
Genetic algorithms are not good at fine-tuning into a local optimum. If you want to find a global optimum, you should at least be able to approach, or have a strategy for approaching, local optima. Recently, some improvements have been developed to better find the local optima; see, for example:
"GENETIC ALGORITHM FOR INFORMATIVE BASIS FUNCTION SELECTION
FROM THE WAVELET PACKET DECOMPOSITION WITH APPLICATION TO
CORROSION IDENTIFICATION USING ACOUSTIC EMISSION"
http://gbiomed.kuleuven.be/english/research/50000666/50000669/50488669/neuro_research/neuro_research_mvanhulle/comp_pdf/Chemometrics.pdf
In general, "search space" means, what type of answers are you looking for. For example, if you are writing a genetic algorithm which builds bridges, tests them out, and then builds more, the answers you are looking for are bridge models (in some form). As another example, if you're trying to find a function which agrees with a set of sample inputs on some number of points, you might try to find a polynomial which has this property. In this instance your search space might be polynomials. You might make this simpler by putting a bound on the number of terms, maximum degree of the polynomial, etc... So you could specify that you wanted to search for polynomials with integer exponents in the range [-4, 4]. In genetic algorithms, the search space is the set of possible solutions you could generate. In genetic algorithms you need to carefully limit your search space so you avoid answers which are completely dumb. At my former university, a physics student wrote a program which was a GA to calculate the best configuration of atoms in a molecule to have low energy properties: they found a great solution having almost no energy. Unfortunately, their solution put all the atoms at the exact center of the molecule, which is physically impossible :-). GAs really hone in on good solutions to your fitness functions, so it's important to choose your search space so that it doesn't produce solutions with good fitness but are in reality "impossible answers."
As for the "extreme" of a function. This is simply the point at which the function takes its maximum value. With respect to genetic algorithms, you want the best solution to the problem you're trying to solve. If you're building a bridge, you're looking for the best bridge. In this scenario, you have a fitness function that can tell you "this bridge can take 80 pounds of weight" and "that bridge can take 120 pounds of weight" then you look around for solutions which have higher fitness values than others. Some functions have simple extremes: you can find the extreme of a polynomial using simple high school calculus. Other functions don't have a simple way to calculate their extremes. Notably, highly nonlinear functions have extremes which might be difficult to find. Genetic algorithms excel at finding these solutions using a clever search technique which looks around for high points and then finds others. It's worth noting that there are other algorithms that do this as well, hill climbers in particular. The things that make GAs different is that if you find a local maximum, other types of algorithms can get "stuck," blinded by a locally good solution, so that they never see a possibly much better solution farther away in the search space. There are other ways to adapt hill climbers to this as well, simulated annealing, for one.
Choosing the range usually requires some intuitive understanding of the problem you're trying to solve, that is, some expertise in the problem domain. There's really no guaranteed method for picking the range.
The extremes are just the minimum and maximum values of the function.
So for instance, if you're coding up a GA just for practice, to find the minimum of, say, f(x) = x^2, you know pretty well that your range should be ± something, because you already know that you're going to find the answer at x = 0. But then of course, you wouldn't use a GA for that, because you already have the answer, and even if you didn't, you could use calculus to find it. (A toy sketch of this practice exercise follows after this answer.)
One of the tricks in genetic algorithms is to take some real-world problem (often an engineering or scientific problem) and translate it, so to speak, into a mathematical function that can be minimized or maximized. But if you're doing that, you probably already have some basic notion of where the solutions might lie, so it's not as hopeless as it sounds.
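For what it's worth, here is a toy sketch of that f(x) = x^2 practice exercise, with an arbitrary range of ±10 and real-valued individuals; every parameter here is an illustrative choice:

```python
import random

def f(x):
    return x * x  # function to minimize; the known answer is x = 0

LO, HI = -10.0, 10.0  # the chosen range of the search space

pop = [random.uniform(LO, HI) for _ in range(30)]
for generation in range(100):
    pop.sort(key=f)                      # lower f(x) = fitter
    parents = pop[:10]                   # truncation selection
    pop = parents[:]
    while len(pop) < 30:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2              # "crossover": blend two parents
        child += random.gauss(0, 0.5)    # mutation: small random nudge
        pop.append(max(LO, min(HI, child)))  # clamp back into the range
print("best x:", min(pop, key=f))
```

Note how the clamp is where the chosen range actually enters the algorithm: the GA simply cannot propose solutions outside it.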
The term "search space" does not restrict to genetic algorithms. I actually just means the set of solutions to your optimization problem. An "extremum" is one solution that minimizes or maximizes the target function with respect to the search space.
Simply put, the search space is the space of all possible solutions. If you're looking for a shortest tour, the search space consists of all tours that can be formed. Beware, though, that it's not the space of all feasible solutions! It depends only on your encoding: if your encoding is, e.g., a permutation, then the search space is the set of permutations, which is n! (factorial) in size. If you're minimizing a function of real-valued inputs, the search space is the hypercube bounded by the ranges of those inputs; it's essentially infinite, though of course limited by the precision of the computer.
If you're interested in genetic algorithms, maybe you're interested in experimenting with our software; we use it to teach heuristic optimization in classes. It's GUI-driven and Windows-based, so you can start right away. We have included a number of problems such as real-valued test functions, the traveling salesman problem, vehicle routing, etc. This allows you, for example, to watch how the best solution to a certain TSP improves over the generations. It also exposes the problem of parameterizing metaheuristics and lets you find better parameters that solve the problems more effectively. You can get it at http://dev.heuristiclab.com.

When do locally optimal solutions equal the global optimum? Thinking about greedy algorithms

Recently I've been looking at some greedy algorithm problems, and I am confused about local optimality. As you know, greedy algorithms are composed of locally optimal choices. But combining locally optimal decisions doesn't necessarily yield a globally optimal one, right?
Take making change as an example: use the fewest coins to make 15¢. If we have 10¢, 5¢, and 1¢ coins, the greedy algorithm achieves this with one 10¢ and one 5¢ coin. But if we add a 12¢ coin, the greedy algorithm fails, as (1×12¢ + 3×1¢) uses four coins where (1×10¢ + 1×5¢) uses only two.
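To make that failure concrete, here is a small, self-contained demo (illustrative code, not from any particular source) comparing the greedy rule with exact dynamic programming on the 12¢ system:

```python
def greedy_change(amount, coins):
    """Repeatedly take the largest coin that still fits."""
    result = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            result.append(c)
    return result

def optimal_change(amount, coins):
    """Classic DP: fewest coins for every value up to `amount`."""
    best = [[]] + [None] * amount
    for v in range(1, amount + 1):
        options = [best[v - c] + [c] for c in coins if c <= v and best[v - c] is not None]
        best[v] = min(options, key=len) if options else None
    return best[amount]

print(greedy_change(15, [1, 5, 10, 12]))   # [12, 1, 1, 1] -> 4 coins
print(optimal_change(15, [1, 5, 10, 12]))  # e.g. [10, 5]  -> 2 coins
```

With the standard {1, 5, 10} system both functions agree, which is exactly the "no degenerate cases" property asked about below.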
Consider some classic greedy algorithms, e.g., Huffman coding or Dijkstra's algorithm. In my opinion, these algorithms are successful because they have no degenerate cases, meaning a combination of locally optimal steps always equals the global optimum. Do I understand this right?
If my understanding is correct, is there a general method for checking if a greedy algorithm is optimal?
I found some discussion of greedy algorithms elsewhere on the site, but it doesn't go into much detail.
Generally speaking, a locally optimal solution is always a global optimum whenever the problem is convex. This includes linear programming; quadratic programming with a positive definite objective; and non-linear programming with a convex objective function. (However, NLP problems tend to have a non-convex objective function.)
Heuristic search will give you a global optimum from locally optimal decisions if the heuristic function has certain properties (admissibility, in the case of A* search). Consult an AI book for details on this.
In general, though, if the problem is not convex, I don't know of any methods for proving global optimality of a locally optimal solution.
There are theorems that characterize problems for which greedy algorithms are optimal in terms of matroids (and, more generally, greedoids). See this Wikipedia section for details: http://en.wikipedia.org/wiki/Matroid#Greedy_algorithms
A greedy algorithm almost never succeeds in finding the optimal solution; in the cases where it does, this is highly dependent on the problem itself. As Ted Hopp explained, with a convex objective the global optimum can be found when minimizing (conversely, concave objectives also work if you are maximizing). Otherwise, you will almost certainly get stuck in a local optimum. This assumes that you already know the objective function.
Another factor I can think of is the neighbourhood function. Certain neighbourhoods, if large enough, will encompass both the global and local maxima, so that you can escape the local maxima. However, you can't make the neighbourhood too large, or the search will be slow.
In other words, whether you find a global optimum with a greedy algorithm is problem-specific; in most cases, you will not.
You need to design a witness example for which your premise that the algorithm finds a global optimum fails. Design it according to the algorithm and the problem.
Your coin change example was not quite a valid one: real coin denominations are purposely designed so that the greedy choice works, not to add confusion, so your addition of a 12¢ coin is not warranted there.
With that addition, the problem is no longer standard coin change but a different one (even though the subject is still coins, you could change the example to whatever you want). For this modified problem, you yourself gave a witness example showing that the greedy algorithm gets stuck in a local optimum.
