Library for solving the knapsack problem (integer programming)

I am trying to solve the knapsack problem, which is also an integer-programming problem. I have looked at several solution techniques, such as dynamic programming, the greedy algorithm, the branch-and-bound algorithm, and genetic algorithms. Can you tell me about a library (in any language) that helps implement any or all of these algorithms?
Thanks in advance.

Here are a few implementations of the Knapsack Problem (KP):
CPLEX: If you are familiar with CPLEX (IBM), they have a page for the knapsack problem (among many other IP formulations) here.
Java: They also have a Java implementation of the knapsack problem here (look specifically at javaknapsack.mod).
Python: Here's one example of multiple solution techniques for the knapsack problem (by Dave Eppstein).
C++: Here's a genetic-algorithm implementation of the KP.
A simple web search should get you many more examples, because the knapsack problem is straightforward to implement (and a popular teaching example) with several of the techniques you mention.
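If a library turns out to be overkill, the standard dynamic-programming solution for the 0/1 knapsack is short enough to write directly. A minimal sketch (the item values, weights, and capacity below are illustrative, not from any of the linked pages):

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming.

    dp[c] holds the best total value achievable with capacity c
    using the items considered so far. Runs in O(n * capacity).
    """
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

# Capacity 7: the best choice is the items with weights 3 and 4.
print(knapsack([10, 12, 7], [3, 4, 2], 7))  # → 22
```

The downward capacity loop is the one detail that trips people up: iterating upward would let the same item be packed twice, which solves the unbounded knapsack instead.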

Related

Scheme genetic algorithm

I have started to use Scheme for many different things; right now I have been assigned the task of implementing a genetic algorithm as a training exercise. I took the knapsack problem as the one I have to develop, so is there a way to start figuring out how I should implement it?
If there is an example of this algorithm in Scheme somewhere, I would be very grateful for the link.
Thanks.
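A genetic algorithm for the knapsack problem needs only four pieces: a bit-string encoding, a fitness function, selection, and crossover/mutation. Here is a minimal sketch of that loop (in Python rather than Scheme, with made-up item data, purely as a structure you could translate):

```python
import random

# Illustrative problem data, not from any particular source.
VALUES   = [10, 5, 15, 7, 6, 18, 3]
WEIGHTS  = [ 2, 3,  5, 7, 1,  4, 1]
CAPACITY = 15

def fitness(genome):
    """Total value of the selected items; infeasible genomes score 0."""
    weight = sum(w for w, bit in zip(WEIGHTS, genome) if bit)
    value  = sum(v for v, bit in zip(VALUES,  genome) if bit)
    return value if weight <= CAPACITY else 0

def evolve(pop_size=30, generations=100, mutation_rate=0.05):
    n = len(VALUES)
    population = [[random.randint(0, 1) for _ in range(n)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random genomes breeds.
            a, b = random.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        next_gen = []
        for _ in range(pop_size):
            mother, father = pick(), pick()
            cut = random.randrange(1, n)        # single-point crossover
            child = mother[:cut] + father[cut:]
            for i in range(n):                  # point mutation
                if random.random() < mutation_rate:
                    child[i] ^= 1
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

The translation to Scheme is mostly mechanical: genomes become lists of 0/1, and the selection/crossover/mutation steps become small recursive functions over those lists.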

Can you propose to me two programs to make as project for my evolutionary computing class?

My teacher wants us to do two projects, but we haven't covered many topics and I'm not very clear on what evolutionary computing is used for. Can you give me some ideas, please?
A good place to start is to identify something that can be improved with successive alterations. A great example is the design of a simple windmill propeller or turbine. Start off with a random design and arrangement of the blades. Input this geometry into the genetic algorithm, and define the fitness as how fast the propeller spins given a fixed air flow (from a fan, for example). Even if you do not build the fan, this is an interesting one to write about and should get you some marks!
Some great problems for Evolutionary programming are the Traveling Salesman Problem and Knapsack Problem.
You may also want to consider another NP-complete problem like Sudoku. Sudoku is a good example of a problem that is possible to solve with stochastic optimization, but for which more efficient techniques exist. There are a number of Sudoku solutions here.
You could compare the difficulty of using evolutionary programming on Sudoku with using it on either the traveling salesman or the knapsack problem, and explain why the algorithm works well for the first two problems but not for Sudoku.

Packing by weight

If I have a number of items, each weighing less than one pound, and I want to efficiently pack them into one-pound containers, should I do that by brute force? (Figure out all the various combinations, pack, and see which combination leads to the smallest number of packages?)
Is there a name for this sort of algorithm?
In my case, I don't have a large number of packages.
You might want to look at the knapsack problem.
You can also look at 1-D or 2-D bin-packing algorithms. If you don't have too many bins, I suggest a brute-force algorithm, but in general it is a very hard problem.
You can look in The Algorithm Design Manual for descriptions of your problem:
Bin Packing
http://www.cs.sunysb.edu/~algorith/files/bin-packing.shtml
Knapsack problem
http://www.cs.sunysb.edu/~algorith/files/knapsack.shtml
You can probably make it easier on yourself if you can settle for a fitting solution (good enough) rather than insisting on the best solution.
It's an NP-complete problem, so you don't have many better options; the best would probably be a dynamic-programming algorithm with pseudopolynomial complexity (polynomial in the capacity value, but exponential in the size of its binary encoding).
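If exact search gets too slow as the item count grows, a simple heuristic like first-fit decreasing usually gets close to the optimum for this kind of packing. A minimal sketch (weights in ounces against a 16-ounce container, all values illustrative):

```python
def first_fit_decreasing(weights, capacity):
    """Pack weights into bins with a greedy heuristic.

    Sort items heaviest-first, then drop each into the first bin
    that has room, opening a new bin only when none fits.
    """
    bins = []  # each bin is a list of item weights
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:
            bins.append([w])
    return bins

packed = first_fit_decreasing([10, 8, 6, 5, 3], capacity=16)
print(len(packed))  # number of one-pound containers used → 2
```

First-fit decreasing is not optimal in general (it can use up to roughly 11/9 of the optimal number of bins), but for a small number of packages it is a reasonable baseline before reaching for brute force.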
I wrote the following Ruby program to solve this problem; it seems to work well.
https://gist.github.com/1398026

Solutions to problems using dynamic programming or greedy methods?

What properties should a problem have so that I can decide which method to use: dynamic programming or the greedy method?
Dynamic programming problems exhibit optimal substructure. This means that the solution to the problem can be expressed as a function of solutions to subproblems that are strictly smaller.
One example of such a problem is matrix chain multiplication.
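To make the optimal-substructure idea concrete, here is a minimal sketch of the classic matrix-chain DP, where the best cost of a range of matrices is built from the best costs of strictly smaller ranges (the matrix dimensions below are illustrative):

```python
from functools import lru_cache

def matrix_chain_order(dims):
    """Minimum scalar multiplications to multiply a chain of matrices.

    Matrix i has dimensions dims[i] x dims[i+1], so a chain of n
    matrices is described by n + 1 numbers.
    """
    @lru_cache(maxsize=None)
    def cost(i, j):
        if i == j:
            return 0  # a single matrix needs no multiplication
        # Try every split point k; each subrange is a strictly
        # smaller subproblem — this is the optimal substructure.
        return min(cost(i, k) + cost(k + 1, j)
                   + dims[i] * dims[k + 1] * dims[j + 1]
                   for k in range(i, j))
    return cost(0, len(dims) - 2)

# Chain of three matrices: 10x30, 30x5, 5x60.
print(matrix_chain_order([10, 30, 5, 60]))  # → 4500
```

For this chain, computing (A·B) first costs 10·30·5 + 10·5·60 = 4500 multiplications, while (B·C) first would cost 27000; the DP finds the cheaper parenthesization automatically.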
Greedy algorithms can be used only when a locally optimal choice leads to a globally optimal solution. This can be harder to see right away, but it is generally easier to implement because you only have one thing to consider (the greedy choice) instead of many (the solutions to all smaller subproblems).
One famous greedy algorithm is Kruskal's algorithm for finding a minimum spanning tree.
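Kruskal's algorithm shows the greedy pattern cleanly: it commits to the single cheapest safe edge at each step and never reconsiders. A minimal sketch using a union-find structure to detect cycles (the example graph is illustrative):

```python
def kruskal(num_vertices, edges):
    """Kruskal's MST: greedily take the lightest edge that joins
    two different components. Edges are (weight, u, v) tuples."""
    parent = list(range(num_vertices))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: keep it
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3)]
tree = kruskal(4, edges)
print(sum(w for w, _, _ in tree))  # total MST weight → 6
```

Note the single consideration per step the answer describes: each edge is either taken or discarded once, with no table of subproblem solutions to maintain.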
The second edition of Cormen, Leiserson, Rivest, and Stein's Introduction to Algorithms has a section (16.4) titled "Theoretical foundations for greedy methods" that discusses when greedy methods yield an optimal solution. It covers many cases of practical interest, but not all greedy algorithms that yield optimal results can be understood in terms of this theory.
I also came across a paper titled "From Dynamic Programming to Greedy Algorithms," linked here, which discusses how certain greedy algorithms can be seen as refinements of dynamic programming. From a quick scan, it may be of interest to you.
There's no really strict rule for deciding. As someone already said, there are some things that should turn the red light on, but in the end, only experience will be able to tell you.
We apply the greedy method when a decision can be made using the local information available at each stage, and we are sure that following this sequence of decisions will lead to the optimal solution.
In the dynamic-programming approach, however, we may not be sure about making a decision at one stage, so we carry along a set of probable decisions, one of which may lead to a solution.

Information about the complexity of recursive algorithms

Does anyone know of some good sources on computing the complexity of recursive algorithms?
Somehow "recurrent equation" isn't a popular title for a web page, or something; I just couldn't google anything reasonable...
This is a complex topic that is not so well documented in the free literature on the internet.
I just took a similar exam, and I can point you to the handbook written by my teacher: PDF Handbook
This handbook mostly covers another tool, called generating functions, which is useful for solving any kind of recurrence without worrying too much about its particular form.
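As a flavor of the generating-function technique (using the Fibonacci recurrence as the standard textbook illustration, not an example taken from that handbook): package the sequence into a formal power series, use the recurrence to get a closed form for the series, then read the coefficients back off by partial fractions.

```latex
% Solve F_n = F_{n-1} + F_{n-2}, with F_0 = 0, F_1 = 1:
F(x) = \sum_{n \ge 0} F_n x^n = \frac{x}{1 - x - x^2}
\quad\Longrightarrow\quad
F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}},
\qquad \varphi = \frac{1+\sqrt{5}}{2},\;\; \psi = \frac{1-\sqrt{5}}{2}
```

The same mechanics apply to divide-and-conquer cost recurrences, which is why the tool is attractive: one method covers recurrences that cookbook theorems don't.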
There is a good book on the analysis of algorithms, An Introduction to the Analysis of Algorithms (Amazon link) by Sedgewick and Philippe Flajolet, but you won't find it online (I had to scan parts of it).
By the way, I've searched the internet a lot, but I haven't found any complete reference with examples useful for learning the techniques.
I think you would have had more luck with "recurrence equation".
You can also check out the Master theorem.
In the analysis of algorithms, the master theorem, which is a specific case of the Akra-Bazzi theorem, provides a cookbook solution in asymptotic terms for recurrence relations of types that occur in practice. It was popularized by the canonical algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein, which introduces and proves it in sections 4.3 and 4.4, respectively. Nevertheless, not all recurrence relations can be solved with the use of the master theorem.
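For reference, the statement that quote summarizes: for a recurrence of the divide-and-conquer form with constants a ≥ 1, b > 1, some ε > 0, and some c < 1,

```latex
T(n) = a\,T(n/b) + f(n)
\quad\Longrightarrow\quad
T(n) =
\begin{cases}
\Theta\!\left(n^{\log_b a}\right)
  & \text{if } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right),\\[2pt]
\Theta\!\left(n^{\log_b a} \log n\right)
  & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right),\\[2pt]
\Theta\!\left(f(n)\right)
  & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right)
    \text{ and } a\,f(n/b) \le c\,f(n).
\end{cases}
```

For example, merge sort's recurrence T(n) = 2T(n/2) + Θ(n) falls in the middle case (a = b = 2, so n^{log_b a} = n), giving the familiar Θ(n log n).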

Resources