A greedy or dynamic algorithm for subset selection

I have a simple algorithmic question. I would be grateful if you could help me.
We have some two-dimensional points, each with a positive weight (a sample problem is attached). We want to select a subset of them that maximizes the total weight, such that no two selected points overlap each other (for example, in the attached file we cannot select both A and C because they are in the same row, and likewise we cannot select both A and B because they are in the same column). Is there any greedy (or dynamic programming) approach I can use? I'm aware of the non-overlapping interval selection algorithm, but I cannot use it here because my problem is two-dimensional.
Any reference or note is appreciated.
Regards
Attachment:
A simple sample of the problem:
A (30$) -------- B (10$)
|
|
|
|
C (8$)

If you are OK with a good solution and do not demand the optimal one, you can use heuristic algorithms to solve this.
Let S be the set of points, and w(s) the weighting function.
Create a weight function W: 2^S -> R (from the subsets of S to the real numbers):
W(U) = -INFINITY if the subset U is not feasible
W(U) = Sigma(w(u)) over all u in U, otherwise
Also create a function next: 2^S -> 2^(2^S) (a function that takes a subset of S and returns a set of subsets of S):
next(U) = { V : V can be obtained from U by adding/removing one element to/from U }
Now, given that data, you can invoke any optimization algorithm from the Artificial Intelligence book, such as a Genetic Algorithm or Hill Climbing.
For example, Hill Climbing with random restarts will look something like this:
1. best<- -INFINITY
2. while there is more time
3. choose a random subset s
4. NEXT <- next(s)
5. if max{ W(v) | for each v in NEXT} < W(s): //s is a local maximum
5.1. if W(s) > best: best <- W(s) //if s is better than the previous result - store it.
5.2. go to 2. //restart the hill climbing from a different random point.
6. else:
6.1. s <- argmax { W(v) | v in NEXT }
6.2. goto 4.
7. return best //when out of time, return the best solution found so far.
The above algorithm is anytime - meaning it will produce better results if given more time.
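Here is a rough Python sketch of the hill climbing with random restarts described above, applied to the point-selection problem. The point data, the feasibility test, and the restart budget are all invented for illustration; treat it as a sketch rather than a tuned implementation.

import random

# Hypothetical data: point name -> (row, column, weight), mirroring the A/B/C example.
points = {"A": (0, 0, 30), "B": (0, 1, 10), "C": (1, 0, 8)}

def feasible(subset):
    """No two selected points may share a row or a column."""
    rows = [points[p][0] for p in subset]
    cols = [points[p][1] for p in subset]
    return len(rows) == len(set(rows)) and len(cols) == len(set(cols))

def W(subset):
    """Total weight, or -infinity for infeasible subsets."""
    return sum(points[p][2] for p in subset) if feasible(subset) else float("-inf")

def neighbours(subset):
    """All subsets reachable by adding or removing exactly one point."""
    return [subset ^ {p} for p in points]   # symmetric difference toggles p

def hill_climb(restarts=1000):
    best_value, best_set = float("-inf"), set()
    for _ in range(restarts):                              # "while there is more time"
        s = {p for p in points if random.random() < 0.5}   # random restart
        while True:
            t = max(neighbours(s), key=W)
            if W(t) <= W(s):                               # s is a local maximum
                if W(s) > best_value:
                    best_value, best_set = W(s), s
                break
            s = t
    return best_value, best_set

print(hill_climb())   # on this tiny instance it should find {'A'} with value 30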

This can be treated as a linear assignment problem, which can be solved using an algorithm like the Hungarian algorithm. The algorithm tries to minimize the sum of costs, so just negate your weights and use them as the costs. The assignment of rows to columns will give you the subset of points that you need. There are sparse variants for cases where not every (row, column) pair has an associated point, but you can also just fill those cells with a cost of zero, so that leaving a row or column effectively unmatched is free (a large positive penalty there would push the solver toward matching as many real points as possible, which is not what you want when a single heavy point beats several light ones).
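For example, here is a sketch using SciPy's linear_sum_assignment (a Hungarian-style solver). The point data is invented, and empty (row, column) cells get a cost of zero so that leaving a row or column unmatched costs nothing:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical points: (row, column) -> weight, mirroring the A/B/C example.
weights = {(0, 0): 30, (0, 1): 10, (1, 0): 8}

n_rows = 1 + max(r for r, _ in weights)
n_cols = 1 + max(c for _, c in weights)

# Negate weights so that minimizing total cost maximizes total weight.
# Cells with no point keep cost 0: assigning them is the same as selecting nothing there.
cost = np.zeros((n_rows, n_cols))
for (r, c), w in weights.items():
    cost[r, c] = -w

rows, cols = linear_sum_assignment(cost)
selected = [(int(r), int(c)) for r, c in zip(rows, cols) if (int(r), int(c)) in weights]
total = sum(weights[p] for p in selected)
print(selected, total)   # on this data: [(0, 0)] with total 30

linear_sum_assignment also accepts rectangular cost matrices, in which case only the smaller dimension is fully matched.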

Well, you can think of this as a binary constraint optimization problem, and there are various algorithms for it. The easiest approach for this problem is backtracking with arc propagation. However, it takes exponential time in the worst case. I am not sure whether there are any specific algorithms that take advantage of the geometric nature of the problem.

This can be solved by a pretty straightforward dynamic programming approach with an exponential time complexity:
s = {A, B, C ...}
getMaxSum(s) = max( A.value + getMaxSum(compatibleSubSet(s, A)),
B.value + getMaxSum(compatibleSubSet(s, B)),
...)
where compatibleSubSet(s, A) gets the subset of s that does not overlap with A
To optimize it, you can memoize the result for each subset.
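A rough Python sketch of this memoized recursion, caching on frozensets; the point data mirrors the A/B/C example and is otherwise made up:

from functools import lru_cache

# Hypothetical points: name -> (row, column, value).
points = {"A": (0, 0, 30), "B": (0, 1, 10), "C": (1, 0, 8)}

def compatible_subset(s, p):
    """Points of s that share neither a row nor a column with p (excluding p itself)."""
    pr, pc, _ = points[p]
    return frozenset(q for q in s
                     if q != p and points[q][0] != pr and points[q][1] != pc)

@lru_cache(maxsize=None)          # memoize on the (frozen) subset
def get_max_sum(s):
    if not s:
        return 0
    return max(points[p][2] + get_max_sum(compatible_subset(s, p)) for p in s)

print(get_max_sum(frozenset(points)))   # 30 for the sample data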

One way to do it:
Write a function that generates subsets ordered from the subset of maximum weight to the subset of minimum weight while ignoring the constraints.
Then call this function repeatedly until a subset that honors the constraints pops up.
In order to improve the performance, you can write a not-so-dumb generator function that, for instance, honors the not-on-the-same-row constraint but ignores the not-on-the-same-column one.

Related

Knapsack with unique elements

I'm trying to solve the following:
The knapsack problem is as follows: given a set of integers S={s1,s2,…,sn}, and a given target number T, find a subset of S that adds up exactly to T. For example, within S={1,2,5,9,10} there is a subset that adds up to T=22 but not T=23. Give a correct programming algorithm for knapsack that runs in O(nT) time.
but the only algorithm I could come up with is generating all the 1 to N combinations and trying the sums out (exponential time).
I can't devise a dynamic programming solution, since the fact that I can't reuse an element makes this problem different from a coin change problem and from the general knapsack problem.
Can somebody help me out with this or at least give me a hint?
The O(nT) running time gives you the hint: do dynamic programming on two axes. That is, let f(a,b) denote the maximum sum <= b which can be achieved with the first a integers.
f satisfies the recurrence
f(a,b) = max( f(a-1,b), f(a-1,b-s_a)+s_a )
since the first value is the maximum without using s_a and the second is the maximum including s_a. From here the DP algorithm should be straightforward, as should outputting the correct subset of S.
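A short bottom-up Python sketch of this recurrence, using the numbers from the question:

def max_subset_sum(s, t):
    """f[a][b] = largest sum <= b achievable using the first a elements of s."""
    n = len(s)
    f = [[0] * (t + 1) for _ in range(n + 1)]
    for a in range(1, n + 1):
        for b in range(t + 1):
            f[a][b] = f[a - 1][b]                                   # skip s[a-1]
            if s[a - 1] <= b:                                       # or take it
                f[a][b] = max(f[a][b], f[a - 1][b - s[a - 1]] + s[a - 1])
    return f[n][t]

s = [1, 2, 5, 9, 10]
print(max_subset_sum(s, 22) == 22)   # True: some subset sums to exactly 22
print(max_subset_sum(s, 23) == 23)   # False: no subset sums to exactly 23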
I did find a solution, but with O(T·n²) time complexity: build the table from the bottom up. In other words, sort the array, start with the greatest number available, and make a table whose columns are the target values and whose rows are the given numbers. We then need to consider the sum of all possible ways of making i - cost[j] + j, which takes O(n²) time, and this is multiplied by the target.

Is dynamic programming backtracking with cache

I've always wondered about this. And no books state this explicitly.
Backtracking is exploring all possibilities until we figure out one possibility cannot lead us to a possible solution, in that case we drop it.
Dynamic programming as I understand it is characterized by overlapping sub-problems. So, can dynamic programming be stated as backtracking with a cache (for previously explored paths)?
Thanks
This is one face of dynamic programming, but there's more to it.
For a trivial example, take Fibonacci numbers:
F (n) =
n = 0: 0
n = 1: 1
else: F (n - 2) + F (n - 1)
We can call the above code "backtracking" or "recursion".
Let us transform it into "backtracking with cache" or "recursion with memoization":
F (n) =
n in Fcache: Fcache[n]
n = 0: 0, and cache it as Fcache[0]
n = 1: 1, and cache it as Fcache[1]
else: F (n - 2) + F (n - 1), and cache it as Fcache[n]
Still, there is more to it.
If a problem can be solved by dynamic programming, there is a directed acyclic graph of states and dependencies between them.
There is a state that interests us.
There are also base states for which we know the answer right away.
We can traverse that graph from the vertex that interests us to all its dependencies, from them to all their dependencies in turn, etc., stopping to branch further at the base states.
This can be done via recursion.
A directed acyclic graph can be viewed as a partial order on vertices. We can topologically sort that graph and visit the vertices in sorted order.
Additionally, you can find some simple total order which is consistent with your partial order.
Also note that we can often observe some structure on states.
For example, the states can be often expressed as integers or tuples of integers.
So, instead of using generic caching techniques (e.g., associative arrays to store state->value pairs), we may be able to preallocate a regular array which is easier and faster to use.
Back to our Fibonacci example, the partial order relation is just that state n >= 2 depends on states n - 1 and n - 2.
The base states are n = 0 and n = 1.
A simple total order consistent with this order relation is the natural order: 0, 1, 2, ....
Here is what we start with:
Preallocate array F with indices 0 to n, inclusive
F[0] = 0
F[1] = 1
Fine, we have the order in which to visit the states.
Now, what's a "visit"?
There are again two possibilities:
(1) "Backward DP": When we visit a state u, we look at all its dependencies v and calculate the answer for that state u:
for u = 2, 3, ..., n:
F[u] = F[u - 1] + F[u - 2]
(2) "Forward DP": When we visit a state u, we look at all states v that depend on it and account for u in each of these states v:
for u = 1, 2, 3, ..., n - 1:
add F[u] to F[u + 1]
add F[u] to F[u + 2]
Note that in the former case, we still use the formula for Fibonacci numbers directly.
However, in the latter case, the imperative code cannot be readily expressed by a mathematical formula.
Still, in some problems, the "forward DP" approach is more intuitive (no good example for now; anyone willing to contribute it?).
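To make the two visiting orders concrete, here is a small Python sketch of the backward and forward Fibonacci DP described above (the function names are mine):

def fib_backward(n):
    """Backward DP: each state pulls its value from its dependencies."""
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = 1
    for u in range(2, n + 1):
        F[u] = F[u - 1] + F[u - 2]
    return F[n]

def fib_forward(n):
    """Forward DP: each state pushes its value into the states that depend on it."""
    F = [0] * (n + 2)          # a little slack so pushing past n is harmless
    if n >= 1:
        F[1] = 1
    for u in range(1, n):
        F[u + 1] += F[u]
        F[u + 2] += F[u]
    return F[n]

print(fib_backward(10), fib_forward(10))   # both print 55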
One more use of dynamic programming which is hard to express as backtracking is the following: Dijkstra's algorithm can be considered DP, too.
In the algorithm, we construct the optimal paths tree by adding vertices to it.
When we add a vertex, we use the fact that the whole path to it - except the very last edge in the path - is already known to be optimal.
So, we actually use an optimal solution to a subproblem - which is exactly the thing we do in DP.
Still, the order in which we add vertices to the tree is not known in advance.
No. Or rather sort of.
In backtracking, you go down and then back up each path. However, dynamic programming works bottom-up, so you only get the going-back-up part not the original going-down part. Furthermore, the order in dynamic programming is more breadth first, whereas backtracking is usually depth first.
On the other hand, memoization (dynamic programming's very close cousin) does very often work as backtracking with a cache, as you described.
Yes and no.
Dynamic Programming is basically an efficient way to implement a recursive formula, and top-down DP is many times actually done with recursion + cache:
def f(x):
if x is in cache:
return cache[x]
else:
res <- .. do something with f(x-k)
cache[x] <- res
return res
Note that bottom-up DP is implemented completely differently, however - but it still pretty much follows the basic principles of the recursive approach, and at each step 'calculates' the recursive formula on the smaller (already known) sub-problems.
However, in order to be able to use DP - you need to have some characteristics for the problem, mainly - an optimal solution to the problem consists of optimal solutions to its sub-problems. An example where it holds is shortest-path problem (An optimal path from s to t that goes through u must consist of an optimal path from s to u).
This property does not hold for some other problems, such as Vertex Cover or the Boolean Satisfiability Problem, and thus you cannot replace the backtracking solution for those with DP.
No. What you call backtracking with cache is basically memoization.
In dynamic programming, you go bottom-up. That is, you start from a place where you don't need any subproblems. In particular, when you need to calculate the nth step, all the n-1 steps are already calculated.
This is not the case for memoization. Here, you start off from the kth step (the step you want) and go on solving the previous steps wherever required. And obviously keep these values stored somewhere so that you may access these later.
All that being said, there is typically no difference in asymptotic running time between memoization and dynamic programming.

Knapsack with continuous (non distinct) constraint

I watched Dynamic Programming - Knapsack Problem (YouTube). However, I am solving a slightly different problem where the constraint is a budget (a price, a double), not an integer. So I am wondering how I can modify that? A double is "continuous", unlike an integer where I can have 1, 2, 3, .... I don't suppose I should do 0.0, 0.1, 0.2, ...?
UPDATE 1
I thought of converting double to int by multiply by 100. Money is only 2 decimal places. But that will mean the range of values will be very large?
UPDATE 2
The problem I need to solve is:
Items have a price (double) & satisfaction (integer) value. I have a budget as a constraint and I need to maximize satisfaction value.
In the YouTube video, the author created two 2D arrays like int[numItems][possibleCapacity(weight)]. Here I can't, as the budget is a double, not an integer.
If you want to use floating point numbers with arbitrary precision (i.e., don't have a fixed number of decimals), and these are not fractions, dynamic programming won't work.
The basis of dynamic programming is to store previous results of a calculation for specific inputs. Therefore, if you used floating point numbers with arbitrary precision, you would need practically infinite memory for each of the possible floating point numbers and, of course, do infinite calculations, something that is impossible and non-optimal.
However, if these numbers have a fixed precision (as with the money, which only have two decimal numbers), you can convert these into integers by multiplying them (as you've mentioned), and then solve the knapsack problem as usual.
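As a concrete illustration of that conversion, here is a rough Python sketch: prices are turned into integer cents and the usual 0/1 knapsack DP runs over the cents capacity. The item data is invented:

def knapsack_cents(items, budget):
    """items: list of (price_in_dollars, satisfaction); budget in dollars.
    Prices are converted to integer cents so the standard DP applies."""
    cap = round(budget * 100)
    best = [0] * (cap + 1)                    # best[c] = max satisfaction using c cents
    for price, sat in items:
        cost = round(price * 100)
        for c in range(cap, cost - 1, -1):    # iterate downwards for the 0/1 variant
            best[c] = max(best[c], best[c - cost] + sat)
    return best[cap]

# Hypothetical data: (price, satisfaction)
items = [(20.10, 201), (1.40, 14), (22.10, 221)]
print(knapsack_cents(items, 30.00))   # 235: the $22.10 and $1.40 items fit the $30 budget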
You will have to do what you said in UPDATE 1: express the budget and item prices in cents (assuming we are talking about dollars). Then we're not talking about arbitrary precision or continuous numbers. Every price (and the budget) will be an integer, it's just that that integer will represent cents.
To make things easier let's assume the budget is $10. The problem is that the Knapsack Capacity will have to take all the values in:
[0.00, 0.01, 0.02, 0.03, ..., 9.99, 10.00]
That is too many values. Each line of the SOLUTION MATRIX and the KEEP MATRIX will have 1001 columns, so you won't be able to solve the problem by hand (if the budget is millions of dollars, even a computer might have a hard time), but that is inherent to the original problem (you can't do anything about it).
Your best bet is to use some existing code for KNAPSACK, or maybe write your own (I don't advise that).
If you can't find existing code about KNAPSACK and are familiar with Linux/Mac I suggest you install the GNU Linear Programming Kit (GLPK) and express the problem as an Integer Linear Program or a Binary Linear Program (if you're trying to solve the 0-1 Knapsack). It will solve the problem for you (plus you can use it through C, C++, Python and maybe Java if you need to). For help using GLPK check this awesome article (you'll probably need part 2, where it talks about integer variables). If you need more help with GLPK please leave a comment.
EDIT:
Basically, what I'm trying to say is that your constraint is not continuous, it's discrete (cents), your problem is that the budget might be too many cents so you won't be able to solve it by hand.
Don't get intimidated because your budget might be several dollars -> several hundreds of cents. If your budget is just 18 cents your problem's size will be comparable to the one in the YouTube video. The guy in the video wouldn't be able to solve his problem either (by hand) if his knapsack size was 1800 (or even 180).
This is not an answer to your question, but might as well be what you are looking for:
Linear Programming
I've used Microsoft's Solver Foundation 3 to make a simple code that solves the problem you described. It doesn't use the knapsack algorithm, but a simplex method.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SolverFoundation.Common;
using Microsoft.SolverFoundation.Services;

namespace LPOptimizer
{
    class Item
    {
        public String ItemName { get; set; }
        public double Price { get; set; }
        public double Satisfaction { get; set; }

        static void Main(string[] args)
        {
            //Our data, budget and items with respective satisfaction and price values
            double budget = 100.00;
            List<Item> items = new List<Item>()
            {
                new Item(){
                    ItemName="Product_1",
                    Price=20.1,
                    Satisfaction=2.01
                },
                new Item(){
                    ItemName="Product_2",
                    Price=1.4,
                    Satisfaction=0.14
                },
                new Item(){
                    ItemName="Product_3",
                    Price=22.1,
                    Satisfaction=2.21
                }
            };

            //variables for solving the problem.
            SolverContext context = SolverContext.GetContext();
            Model model = context.CreateModel();
            Term goal = 0;
            Term constraint = 0;
            foreach (Item i in items)
            {
                Decision decision = new Decision(Domain.IntegerNonnegative, i.ItemName);
                model.AddDecision(decision); //each item is a decision - should the algorithm increase this item or not?
                goal += i.Satisfaction * decision; //decision will contain quantity.
                constraint += i.Price * decision;
            }
            constraint = constraint <= budget; //this now says: "item_1_price * item_1_quantity + ... + item_n_price * item_n_quantity <= budget";
            model.AddConstraints("Budget", constraint);
            model.AddGoals("Satisfaction", GoalKind.Maximize, goal); //goal says: "Maximize: item_1_satisfaction * item_1_quantity + ... + item_n_satisfaction * item_n_quantity"

            Solution solution = context.Solve(new SimplexDirective());
            Report report = solution.GetReport();
            Console.Write("{0}", report);
            Console.ReadLine();
        }
    }
}
This finds the optimal quantities of items (integers) with prices (doubles), under a budget constraint (a double).
From the code, it is clear that you could also allow some item quantities to be real values (doubles). This will probably also be faster than a knapsack DP over a large range (if you decide to use the ×100 conversion you mentioned).
You can easily specify additional constraints (such as number of certain items, etc...). The code above is adapted from this MSDN How-to, where it shows how you can easily define constraints.
Edit
It has occurred to me that you may not be using C#, in this case I believe there are a number of libraries for linear programming in many languages, and are all relatively simple to use: You specify constraints and a goal.
Edit2
According to your Update 2, I've updated this code to include satisfaction.
Have you looked at this?
Sorry, I don't have comment privileges.
Edit 1
Are you saying the constraint is the budget instead of the knapsack weight?
This still remains a knapsack problem.
Or are you saying that instead of integer item values (0-1 knapsack problem) you have fractions? Then a greedy approach should do fine.
Edit 2
If I understand your problem correctly, it states:
We have n kinds of items, 1 through n. Each kind of item i has a value vi and a price pi. We usually assume that all values and prices are nonnegative. The budget is B.
The most common formulation of the problem is the 0-1 knapsack problem, which restricts the number xi of copies of each kind of item to zero or one. Mathematically the 0-1-knapsack problem can be formulated as:
$$\text{maximize } \sum_{i=1}^{n} v_i x_i \quad \text{subject to } \sum_{i=1}^{n} p_i x_i \le B,\quad x_i \in \{0, 1\}$$
Neo Adonis's answer is spot on here: dynamic programming won't work for arbitrary precision in practice.
But if you are willing to limit the precision, say to 2 decimal places, then carry on as explained in the video; your table should look something like this:
+------+--------+--------+--------+--------+--------+--------+
| Vi,Pi| 0.00 | 0.01 | 0.02 | 0.03 | 0.04 ... B |
+------+--------+--------+--------+--------+--------+--------+
|4,0.23| | | | | | |
|2,2.93| | | | | | |
|7,9.11| | | | | | |
| ... | | | | | | |
| Vn,Pn| | | | | | answer |
+------+--------+--------+--------+--------+--------+--------+
You can even convert the real numbers to int as you have mentioned.
Yes, the range of values is very large, and you also have to understand that knapsack is NP-complete, i.e., no polynomial-time algorithm is known for it; there is only a pseudo-polynomial solution using DP. See this and this.
A question recently posted to sci.op-research offered me a welcome respite from some tedious work that I’d rather not think about and you’d rather not hear about. We know that the greedy heuristic solves the continuous knapsack problem
$$\text{maximize } c'x \quad \text{s.t. } a'x \le b,\ x \le u,\ x \in \Re^n_+ \tag{1}$$
to optimality. (The proof, using duality theory, is quite easy.) Suppose that we add what I’ll call a count constraint, yielding
$$\text{maximize } c'x \quad \text{s.t. } a'x \le b,\ e'x = \tilde{b},\ x \le u,\ x \in \Re^n_+ \tag{2}$$
where e=(1,…,1) . Can it be solved by something other than the simplex method, such as a variant of the greedy heuristic?
The answer is yes, although I’m not at all sure that what I came up with is any easier to program or more efficient than the simplex method. Personally, I would link to a library with a linear programming solver and use simplex, but it was amusing to find an alternative even if the alternative may not be an improvement over simplex.
The method I’ll present relies on duality, specifically a well known result that if a feasible solution to a linear program and a feasible solution to its dual satisfy the complementary slackness condition, then both are optimal in their respective problems. I will denote the dual variables for the knapsack and count constraints λ and μ respectively. Note that λ≥0 but μ is unrestricted in sign. Essentially the same method stated below would work with an inequality count constraint (e′x≤b˜ ), and would in fact be slightly easier, since we would know a priori the sign of μ (nonnegative). The poster of the original question specified an equality count constraint, so that’s what I’ll use. There are also dual variables (ρ≥0 ) for the upper bounds. The dual problem is
$$\text{minimize } b\lambda + \tilde{b}\mu + u'\rho \quad \text{s.t. } \lambda a + \mu e + \rho \ge c,\quad \lambda, \rho \ge 0. \tag{3}$$
This being a blog post and not a dissertation, I'll assume that (2) is feasible, that all parameters are strictly positive, and that the optimal solution is unique and not degenerate. Dropping the uniqueness and non-degeneracy assumptions would not invalidate the algorithm, but it would complicate the presentation. In an optimal basic feasible solution to (2), there will be either one or two basic variables — one if the knapsack constraint is nonbinding, two if it is binding — with every other variable nonbasic at either its lower or upper bound. Suppose that (λ, μ, ρ) is an optimal solution to the dual of (2). The reduced cost of any variable xi is ri = ci − λai − μ. If the knapsack constraint is nonbinding, then λ = 0 and the optimal solution is
$$x_i = \begin{cases} u_i & r_i > 0 \\ \tilde{b} - \sum_{r_j > 0} u_j & r_i = 0 \\ 0 & r_i < 0. \end{cases} \tag{4}$$
If the knapsack constraint is binding, there will be two items (j , k ) whose variables are basic, with rj=rk=0 . (By assuming away degeneracy, I’ve assumed away the possibility of the slack variable in the knapsack constraint being basic with value 0). Set
$$x_i = \begin{cases} u_i & r_i > 0 \\ 0 & r_i < 0 \end{cases} \tag{5}$$
and let $b' = b - \sum_{i \notin \{j,k\}} a_i x_i$ and $\tilde{b}' = \tilde{b} - \sum_{i \notin \{j,k\}} x_i$. The two basic variables are given by
$$x_j = \frac{b' - a_k \tilde{b}'}{a_j - a_k}, \qquad x_k = \frac{b' - a_j \tilde{b}'}{a_k - a_j}. \tag{6}$$
The algorithm will proceed in two stages, first looking for a solution with the knapsack nonbinding (one basic x variable) and then looking for a solution with the knapsack binding (two basic x variables). Note that the first time we find feasible primal and dual solutions obeying complementary slackness, both must be optimal, so we are done. Also note that, given any μ and any λ≥0 , we can complete it to obtain a feasible solution to (3) by setting ρi=ci−λai−μ+ . So we will always be dealing with a feasible dual solution, and the algorithm will construct primal solutions that satisfy complementary slackness. The stopping criterion therefore reduces to the constructed primal solution being feasible.
For the first phase, we sort the variables so that c1 ≥ ⋯ ≥ cn. Since λ = 0 and there is a single basic variable (xh), whose reduced cost must be zero, obviously μ = ch. That means the reduced cost ri = ci − λai − μ = ci − ch of xi is nonnegative for i < h and nonpositive for i > h. If the solution given by (4) is feasible — that is, if ∑i<h ui ≤ b̃ ≤ ∑i≤h ui and the knapsack constraint holds — we are done; otherwise we must try a different h, and since ∑i<h ui is nondecreasing in h, we can use a bisection search to complete this phase. If we assume a large value of n, the initial sort can be done in O(n log n) time and the remainder of the phase requires O(log n) iterations, each of which uses O(n) time.
Unfortunately, I don’t see a way to apply the bisection search to the second phase, in which we look for solutions where the knapsack constraint is binding and λ>0 . We will again search on the value of μ , but this time monotonically. First apply the greedy heuristic to problem (1), retaining the knapsack constraint but ignoring the count constraint. If the solutions happens by chance to satisfy the count constraint, we are done. In most cases, though, the count constraint will be violated. If the count exceeds b˜ , then we can deduce that the optimal value of μ in (4) is positive; if the count falls short of b˜ , the optimal value of μ is negative. We start the second phase with μ=0 and move in the direction of the optimal value.
Since the greedy heuristic sorts items so that c1/a1≥⋯≥cn/an , and since we are starting with μ=0 , our current sort order has (c1−μ)/a1≥⋯≥(cn−μ)/an . We will preserve that ordering (resorting as needed) as we search for the optimal value of μ . To avoid confusion (I hope), let me assume that the optimal value of μ is positive, so that we will be increasing μ as we go. We are looking for values of (λ,μ) where two of the x variables are basic, which means two have reduced cost 0. Suppose that occurs for xi and xj ; then
$$r_i = 0 = r_j \implies c_i - \lambda a_i - \mu = 0 = c_j - \lambda a_j - \mu \implies \frac{c_i - \mu}{a_i} = \lambda = \frac{c_j - \mu}{a_j}. \tag{7}$$
It is easy to show (left to the reader as an exercise) that if (c1 − μ)/a1 ≥ ⋯ ≥ (cn − μ)/an for the current value of μ, then the next higher (lower) value of μ which creates a tie in (7) must involve a consecutive pair of items (j = i + 1). Moreover, again waving off degeneracy (in this case meaning more than two items with reduced cost 0), if we nudge μ slightly beyond the value at which items i and i+1 have reduced cost 0, the only change to the sort order is that items i and i+1 swap places. No further movement in that direction will cause i and i+1 to tie again, but of course either of them may end up tied with their new neighbor down the road.
The second phase, starting from μ = 0, proceeds as follows. For each pair (i, i+1) compute the value μi of μ at which (ci − μ)/ai = (ci+1 − μ)/ai+1; replace that value with ∞ if it is less than the current value of μ (indicating the tie occurs in the wrong direction). Update μ to min_i μi, compute λ from (7), and compute x from (5) and (6). If x is primal feasible (which reduces to 0 ≤ xi ≤ ui and 0 ≤ xi+1 ≤ ui+1), stop: x is optimal. Otherwise swap i and i+1 in the sort order, set μi = ∞ (the reindexed items i and i+1 will not tie again) and recompute μi−1 and μi+1 (no other μj are affected by the swap).
If the first phase did not find an optimum (and if the greedy heuristic at the start of the second phase did not get lucky), the second phase must terminate with an optimum before it runs out of values of μ to check (all μj=∞ ). Degeneracy can be handled either with a little extra effort in coding (for instance, checking multiple combinations of i and j in the second phase when three-way or higher ties occur) or by making small perturbations to break the degeneracy.
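For reference, the plain greedy heuristic for the continuous knapsack problem (1), which the post takes as its starting point, looks roughly like this in Python (the data at the bottom is invented):

def continuous_knapsack(c, a, u, b):
    """Greedy solution of max c'x s.t. a'x <= b, 0 <= x <= u (problem (1)).
    Items are taken in decreasing order of c_i / a_i, fractionally if needed."""
    order = sorted(range(len(c)), key=lambda i: c[i] / a[i], reverse=True)
    x = [0.0] * len(c)
    for i in order:
        if b <= 0:
            break
        x[i] = min(u[i], b / a[i])     # take as much of item i as still fits
        b -= a[i] * x[i]
    return x

# Hypothetical data: values c, weights a, upper bounds u, capacity b.
print(continuous_knapsack(c=[6, 10, 12], a=[1, 2, 3], u=[1, 1, 1], b=5))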
The answers are not quite correct.
You can implement a dynamic program that solves the knapsack problem with integer satisfaction values and arbitrary real-number prices such as doubles.
First, the standard solution of the problem with integer prices:
Define K[0..M, 0..n] where K[j, i] = optimal value of items in knapsack of size j, using only items 1, ..., i
for j = 0 to M do K[j,0] = 0
for i = 1 to n do
    for j = 0 to M do
        //Default case: do not take item i
        K[j, i] = K[j, i-1]
        if j >= w_i and v_i + K[j - w_i, i-1] > K[j, i] then
            //Take item i
            K[j, i] = v_i + K[j - w_i, i-1]
This creates a table where the solution can be found by following the recursion for entry K[M, n].
Now the solution for the problem with real number weight:
Define L[0..S, 0..N] where L[j, i] = minimal weight of items in knapsack of total value >= j, using only items 1, ..., i
and S = total value of all items
L[0, 0] = 0
for j = 1 to S do L[j, 0] = infinity
for i = 1 to n do
    for j = 0 to S do
        //Default case: do not take item i
        L[j, i] = L[j, i-1]
        if L[max(j - v_i, 0), i-1] + w_i < L[j, i] then
            //Take item i
            L[j, i] = L[max(j - v_i, 0), i-1] + w_i
The solution can now be found similarly to the other version: instead of using the weight as the first dimension, we use the total value of the items that leads to the minimal weight, and the answer is the largest j for which L[j, n] does not exceed the weight capacity.
The code has more or less the same runtime O(S * N) whereas the other has O(M * N).
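A rough Python sketch of this value-indexed formulation, assuming integer satisfaction values and arbitrary double prices (the data is invented):

def knapsack_real_prices(items, budget):
    """items: list of (price, satisfaction) with real prices and integer satisfaction.
    L[j] = minimal total price achieving satisfaction >= j; the answer is the
    largest j whose minimal price still fits within the budget."""
    total = sum(sat for _, sat in items)
    INF = float("inf")
    L = [0.0] + [INF] * total
    for price, sat in items:
        for j in range(total, 0, -1):              # 0/1 variant: iterate downwards
            L[j] = min(L[j], L[max(j - sat, 0)] + price)
    return max(j for j in range(total + 1) if L[j] <= budget)

# Hypothetical data: (price, satisfaction)
items = [(20.10, 2), (1.40, 1), (22.10, 3)]
print(knapsack_real_prices(items, 30.00))   # 4: the 22.10 and 1.40 items fit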
The answer to your question depends on several factors:
How large the value of the constraint is (if scaled to cents and converted to an integer).
How many items there are.
What kind of knapsack problem is to be solved.
What precision is required.
If you have very large constraint value (much more than millions) and very many items (much more than thousands)
Then the only option is a greedy approximation algorithm. Sort the items in decreasing order of value per unit of weight and pack them in this order.
If you want to use a simple algorithm and do not require high precision
Then again, try to use a greedy algorithm. The "satisfaction value" itself may be a very rough approximation, so why bother inventing complex solutions when a simple approximation may be enough.
If you have very large (or even continuous) constraint value but pretty small number of items (less than thousands)
Then use a branch and bound approach. You don't need to implement it from scratch. Try GNU GLPK. Its branch-and-cut solver is not perfect, but may be enough to solve small problems.
If both constraint value and number of items are small
Use any approach (DP, branch and bound, or just brute-force).
If constraint value is pretty small (less than millions) but there are too many (like millions) items
Then DP algorithms are possible.
Simplest case is the unbounded knapsack problem when there is no upper bound on the number of copies of each kind of item. This wikipedia article contains a good description how to simplify the problem: Dominance relations in the UKP and how to solve it: Unbounded knapsack problem.
More difficult is the 0-1 knapsack problem, where you can pack each kind of item only zero times or one time. The bounded knapsack problem, which allows packing each kind of item up to some integer limit, is even more difficult. The internet offers lots of implementations for these problems; there are several suggestions in the same article, but I don't know which ones are good or bad.

an algorithm to find the minimum size set cover for the Set-cover problem

In the Set Covering problem, we are given a universe U, such that |U|=n, and sets S1,……,Sk are subsets of U. A set cover is a collection C of some of the sets from S1,……,Sk whose union is the entire universe U.
I'm trying to come up with an algorithm that will find a minimum-size set cover, so that I can show that the greedy algorithm for set covering sometimes uses more sets.
Following is what I came up with:
Repeat for each set:
1. Cover <- Set_i (i = 1, ..., n)
2. If a set is not a subset of any other set, then take that set into the cover.
but it's not working for some instances.
Please help me figure out an algorithm to find the minimum set cover.
I'm still having problem find this algorithm online. Anyone has any suggestion?
Set cover is NP-hard, so it's unlikely that there'll be an algorithm much more efficient than looking at all possible combinations of sets, and checking if each combination is a cover.
Basically, look at all combinations of 1 set, then 2 sets, etc. until they form a cover.
EDIT
This is an example pseudocode. Note that I do not claim that this is efficient. I simply claim that there isn't a much more efficient algorithm (algorithms will be worse than polynomial time unless something really cool is discovered)
for size in 1..|S|:
for C in combination(S, size):
if (union(C) == U) return C
where combination(K, n) returns all possible sets of size n whose elements come from K.
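For what it's worth, a brute-force version of this is only a few lines of Python with itertools. The instance below is a made-up one where the greedy heuristic picks three sets although two suffice:

from itertools import combinations

def min_set_cover(universe, sets):
    """Smallest sub-collection of `sets` whose union is `universe` (brute force)."""
    for size in range(1, len(sets) + 1):
        for combo in combinations(sets, size):
            if set().union(*combo) == universe:
                return combo
    return None                       # no cover exists at all

U = set(range(1, 15))
S = [frozenset(range(1, 8)),          # one "half" of U
     frozenset(range(8, 15)),         # the other "half"
     frozenset({1, 8}),
     frozenset({2, 3, 9, 10}),
     frozenset({4, 5, 6, 7, 11, 12, 13, 14})]
print(min_set_cover(U, S))            # the two halves: an optimal cover of size 2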
EDIT
However, I'm not too sure why you need an algorithm to find the minimum. In the question you state that you want to show that the greedy algorithm for set covering sometimes finds more sets. But this is easily achieved via a counterexample (and a counterexample is shown in the wikipedia entry for set cover). So I am quite puzzled.
EDIT
A possible implementation of combination(K, n) is:
if n == 0: return [{}] //a list containing an empty set
r = []
for k in K:
K = K \ {k} // remove k from K.
for s in combination(K, n-1):
r.append(union({k}, s))
return r
But in combination with the cover problem, one probably wants to perform the coverage test at the base case n == 0 instead.
Try Donald E. Knuth's Algorithm X for exact set cover, using a sparse matrix. It must be adapted a little to also solve minimum set cover problems.

A packing algorithm ... kind of

Given an array of items, each of which has a value and a cost, what's the best algorithm to determine the items required to reach a minimum value at the minimum cost? E.g.:
Item: Value -> Cost
-------------------
A 20 -> 11
B 7 -> 5
C 1 -> 2
MinValue = 30
naive solution: A + B + C + C + C. Value: 30, Cost 22
best option: A + B + B. Value: 34, Cost 21
Note that the overall value:cost ratio at the end is irrelevant (A + A would give you the best value for money, but A + B + B is a cheaper option which hits the minimum value).
This is the knapsack problem. (That is, the decision version of this problem is the same as the decision version of the knapsack problem, although the optimization version of the knapsack problem is usually stated differently.) It is NP-hard (which means no algorithm is known that is polynomial in the "size" -- number of bits -- in the input). But if your numbers are small (the largest "value" in the input, say; the costs don't matter), then there is a simple dynamic programming solution.
Let best[v] be the minimum cost to get a value of (exactly) v. Then you can calculate the values best[] for all v, by (initializing all best[v] to infinity and):
best[0] = 0
best[v] = min_(items i){cost[i] + best[v-value[i]]}
Then look at best[v] for v from the minimum value you want up to that minimum plus the largest item value; the smallest of those will give you the cost.
If you want the actual items (and not just the minimum cost), you can either maintain some extra data, or just look through the array of best[]s and infer from it.
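A quick Python sketch of this DP, using the A/B/C data from the question (the variable names are mine):

def min_cost_for_value(items, min_value):
    """items: list of (value, cost); an unlimited number of copies of each may be used.
    best[v] = minimum cost to reach a total value of exactly v."""
    max_item_value = max(v for v, _ in items)
    top = min_value + max_item_value           # the optimum never overshoots further
    INF = float("inf")
    best = [0] + [INF] * top
    for v in range(1, top + 1):
        for value, cost in items:
            if value <= v and best[v - value] + cost < best[v]:
                best[v] = best[v - value] + cost
    return min(best[min_value:])               # cheapest way to reach at least min_value

items = [(20, 11), (7, 5), (1, 2)]             # A, B, C from the question
print(min_cost_for_value(items, 30))           # 21, i.e. A + B + B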
This problem is known as integer linear programming. It's NP-hard.
However, for small problems like your example, it's trivial to make a quick few lines of code to simply brute force all the low combinations of purchase choices.
NP-hard doesn't mean impossible or even expensive; it means your problem rapidly becomes slower to solve at larger scales. In your case with just three items, you can solve this in mere microseconds.
For the exact question of what's the best algorithm in general.. there are entire textbooks on it. A good start is good old Wikipedia.
Edit This answer is redacted on account of being factually incorrect. Following the advice in this will only cause you harm.
This is not actually the knapsack problem, because it assumes that you cannot pack more items than there is space for in some container. In your case you want to find the cheapest combination that will fill up the space, allowing for the fact that overflow may occur.
My solution, which I don't know to be optimal but should be pretty close, would be to compute for each item the cost-benefit ratio, find the item with the best ratio, and fill the structure with this item until there isn't space for one more. Then I would test whether there is a combination of the other available items that could fill the remaining slot for less than the cost of one more of the cheapest item; if such a combination exists, use it, otherwise use one more of the cheapest item.
Addendum: This may actually also be NP-complete, but I am not sure yet. Anyway, for all practical purposes this variation should be much faster than the naive solution.
