If there is more than one constraint (for example, both a volume limit and a weight limit, where the volume and weight of each item are not related), we get the multiply-constrained knapsack problem, multi-dimensional knapsack problem, or m-dimensional knapsack problem.
How do I code this in the most optimized fashion? Well, one can develop a brute-force recursive solution, or maybe branch and bound, but essentially it's exponential most of the time until you apply memoization or dynamic programming, which in turn takes a huge amount of memory if not done carefully.
The problem I am facing is this
I have my knapsack function
KnapSack(Capacity, Value, i) instead of the common
KnapSack(Capacity, i), since I have upper limits on both of those. Can anyone guide me with this, or provide suitable resources for solving these problems for reasonably large n?
Or is this NP-complete?
Thanks
Merge the constraints. Look at http://www.diku.dk/~pisinger/95-1.pdf, section 1.3.1, "Merging the Constraints".
For example, say you have
variable , constraint1 , constraint2
1 , 43 , 66
2 , 65 , 54
3 , 34 , 49
4 , 99 , 32
5 , 2 , 88
Multiply the first constraint by some big number then add it to the second constraint.
So you have
variable , merged constraint
1 , 430066
2 , 650054
3 , 340049
4 , 990032
5 , 20088
From there, run whatever single-constraint algorithm you wanted to use. The main limitation that comes to mind is how many digits your variable type can hold.
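For illustration, here is a minimal Python sketch of the merge, using the rows above (the multiplier 10**4 is an assumption; it works because the second constraint never needs more than four digits):

# Merge two constraints into one, per Pisinger sec. 1.3.1.
items = [(43, 66), (65, 54), (34, 49), (99, 32), (2, 88)]  # (constraint1, constraint2)

MULTIPLIER = 10**4  # assumption: larger than any value constraint2 can take

merged = [c1 * MULTIPLIER + c2 for c1, c2 in items]
print(merged)  # [430066, 650054, 340049, 990032, 20088]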
The following problem serves as a good example:
Given an undirected graph G having positive weights and N vertices.
You start with a sum of M money. For passing through a vertex i, you must pay S[i] money; if you don't have enough money, you can't pass through that vertex. Find the shortest path from vertex 1 to vertex N respecting the above conditions, or state that such a path doesn't exist. If more than one path has the same length, output the cheapest one.
Pseudocode:
Set states(i,j) as unvisited for all (i,j)
Set Min[i][j] to Infinity for all (i,j)
Min[0][M]=0
While(TRUE)
Among all unvisited states(i,j) find the one for which Min[i][j]
is the smallest. Let this state be (k,l).
If no state (k,l) with Min[k][l] less than Infinity was
found - exit While loop.
Mark state(k,l) as visited
For All Neighbors p of Vertex k.
If (l-S[p]>=0 AND
Min[p][l-S[p]]>Min[k][l]+Dist[k][p])
Then Min[p][l-S[p]]=Min[k][l]+Dist[k][p]
i.e.
If in state (k,l) there is enough money left for
going to vertex p (l-S[p] represents the money that
will remain after passing to vertex p), and the
shortest path found so far for state (p,l-S[p]) is bigger
than [the shortest path found for
state (k,l)] + [distance from vertex k to vertex p],
then set the shortest path for state (p,l-S[p]) to be
equal to this sum.
End For
End While
Find the smallest number among Min[N-1][j] (for all j, 0<=j<=M);
if there is more than one such state, take the one with the greater
j. If there is no state (N-1,j) with value less than Infinity, then
such a path doesn't exist.
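For concreteness, here is a minimal Python translation of the pseudocode, assuming 0-indexed vertices, an adjacency list adj[k] of (neighbor, distance) pairs, and a heap in place of the linear scan over unvisited states (as in the pseudocode, no toll is charged at the start vertex):

import heapq

def shortest_path_with_money(adj, S, M, n):
    INF = float('inf')
    # Min[k][l] = length of the shortest path to vertex k with l money left
    Min = [[INF] * (M + 1) for _ in range(n)]
    Min[0][M] = 0
    pq = [(0, 0, M)]                  # (distance, vertex, money left)
    while pq:
        d, k, l = heapq.heappop(pq)
        if d > Min[k][l]:
            continue                  # stale heap entry, state already improved
        for p, w in adj[k]:
            l2 = l - S[p]             # money left after paying vertex p's toll
            if l2 >= 0 and Min[p][l2] > d + w:
                Min[p][l2] = d + w
                heapq.heappush(pq, (d + w, p, l2))
    best = min(Min[n - 1])
    if best == INF:
        return None                   # no such path exists
    money = max(j for j in range(M + 1) if Min[n - 1][j] == best)
    return best, money                # most money left among equal-length paths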
Knapsack with multiple constraints is a packing problem. Read up. http://en.wikipedia.org/wiki/Packing_problem
There are greedy-like heuristics that calculate an "efficiency" for each item; they run quickly and yield approximate solutions.
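A minimal sketch of such a heuristic, assuming one common definition of efficiency (value divided by the sum of the item's constraint usages, each normalized by its capacity; other definitions exist):

def greedy_mkp(values, weights, capacities):
    # weights[i][d] = consumption of item i in dimension d
    m = len(capacities)
    def efficiency(i):
        used = sum(weights[i][d] / capacities[d] for d in range(m))
        return values[i] / used if used > 0 else float('inf')
    order = sorted(range(len(values)), key=efficiency, reverse=True)
    remaining = list(capacities)
    chosen, total = [], 0
    for i in order:                   # take items greedily while they fit
        if all(weights[i][d] <= remaining[d] for d in range(m)):
            for d in range(m):
                remaining[d] -= weights[i][d]
            chosen.append(i)
            total += values[i]
    return total, chosen              # approximate solution, also a lower bound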
You can use a branch and bound algorithm. You can get an initial lower bound using a greedy-like heuristic, which can be used to initialize the incumbent solution. You can calculate upper bounds for various sub-problems by considering each of the m constraints one at a time (relaxing the other constraints in the problem), then use the lowest of these bounds as an upper bound for the original problem. This technique is due to Shih. However, this technique probably won't work well if no particular constraint tends to dominate the solution, or if the initial solution from the greedy-like heuristic is not close to the optimum.
There are better, more modern algorithms that are harder to implement; see the "multidimensional knapsack problem" papers by J. Puchinger.
As you said, volume and weight are both positive quantities; try to use the fact that weight always decreases:
knap[position][vol][t]
Now t=0 when wt is positive, t=1 when wt is negative.
Consider the problem definition of a knapsack problem. Given a set S of objects, each having a profit and a weight associated with it, I have to find a subset T of S which gives me the maximum profit but has a total weight less than or equal to a constant W. Now consider an extra constraint. In the above problem the profit of one object is independent of another. Suppose I say they're interdependent: say I have a factor 0 <= S_ij <= 1 for two objects i and j. This factor diminishes the effect of the item with the minimum profit. Effectively
profit({i,j})=max(profit(i),profit(j))+S_ij * min(profit(i),profit(j))
This keeps the effective sum between max(profit(i),profit(j)) and profit(i)+profit(j) -> "at least as good as the best one, but not as good as using both simultaneously". Now I'm trying to extend it for n>2. Is this a standard problem or some variation of knapsack? Can I formulate an LP (?) or NLP for this?
UPDATE:
The set T is a strict subset of S, so you can only use objects in S (duplicates only if they exist in S).
As for the objective function, I'm still not sure how to go about it. Above I've calculated the score for a 2-object sack considering the interactions between them. Now I want to extend it to more than 2 objects, and I'm not sure how to do it. The letter 'n' is the size of the sack. For n=2 I've defined a way of calculating the total profit of the sack, but for n>2 I'm not quite clear.
I'm trying to solve the following:
The knapsack problem is as follows: given a set of integers S={s1,s2,…,sn}, and a given target number T, find a subset of S that adds up exactly to T. For example, within S={1,2,5,9,10} there is a subset that adds up to T=22 but not T=23. Give a correct programming algorithm for knapsack that runs in O(nT) time.
but the only algorithm I could come up with generates all the combinations of 1 to n elements and tries their sums out (exponential time).
I can't devise a dynamic programming solution, since the fact that I can't reuse an object makes this problem different from the coin change problem and from the general knapsack problem.
Can somebody help me out with this or at least give me a hint?
The O(nT) running time gives you the hint: do dynamic programming on two axes. That is, let f(a,b) denote the maximum sum <= b which can be achieved with the first a integers.
f satisfies the recurrence
f(a,b) = max( f(a-1,b), f(a-1,b-s_a)+s_a )
since the first value is the maximum without using s_a and the second is the maximum including s_a. From here the DP algorithm should be straightforward, as should outputting the correct subset of S.
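A direct implementation of this recurrence in Python, O(nT) time and space, checked against the instance from the question:

def subset_sum(S, T):
    n = len(S)
    # f[a][b] = maximum sum <= b achievable using the first a integers
    f = [[0] * (T + 1) for _ in range(n + 1)]
    for a in range(1, n + 1):
        s = S[a - 1]
        for b in range(T + 1):
            f[a][b] = f[a - 1][b]                            # max without s
            if s <= b:
                f[a][b] = max(f[a][b], f[a - 1][b - s] + s)  # max including s
    return f[n][T] == T   # True iff some subset sums to exactly T

print(subset_sum([1, 2, 5, 9, 10], 22))  # True
print(subset_sum([1, 2, 5, 9, 10], 23))  # False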
I did find a solution, but with O(T·n^2) time complexity: make a table from the bottom up. In other words, sort the array and start with the greatest number available, building a table where the columns are the target values and the rows are the provided numbers. We need to consider the sum of all possible ways of making i - cost[j] + j, which takes O(n^2) time, and this is multiplied by the target.
I've stumbled upon a curious problem.
I've got an unbounded chessboard, N knights' starting locations and N target locations.
The task is to find the minimal total number of moves for all knights to reach all target locations.
I know that shortest path problem for a single knight can be solved using breadth-first search, but how can it be solved for multiple knights?
Sorry for my English, I seldom use it.
You can compute the cost matrix as suggested by Ricky using breadth-first search, so that cost[i][j] denotes the cost of choosing knight i to go to end location j. Then you can use the Hungarian algorithm to find the final answer, which can be computed in O(N^3) complexity.
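A sketch of this approach in Python, assuming SciPy is available (scipy.optimize.linear_sum_assignment solves the assignment optimally, Hungarian-style). Since the board is unbounded, the BFS is confined to a padded bounding box around the two squares; the padding value is an assumption that a generous box never cuts off an optimal knight path:

from collections import deque
from scipy.optimize import linear_sum_assignment

MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]

def knight_distance(src, dst, pad=4):
    # Plain BFS inside a box padded by `pad` squares around src and dst.
    lo = (min(src[0], dst[0]) - pad, min(src[1], dst[1]) - pad)
    hi = (max(src[0], dst[0]) + pad, max(src[1], dst[1]) + pad)
    seen, q = {src}, deque([(src, 0)])
    while q:
        (x, y), d = q.popleft()
        if (x, y) == dst:
            return d
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if (lo[0] <= nxt[0] <= hi[0] and lo[1] <= nxt[1] <= hi[1]
                    and nxt not in seen):
                seen.add(nxt)
                q.append((nxt, d + 1))

def min_total_moves(starts, targets):
    cost = [[knight_distance(s, t) for t in targets] for s in starts]
    rows, cols = linear_sum_assignment(cost)   # optimal assignment, O(N^3)
    return sum(cost[r][c] for r, c in zip(rows, cols))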
I assume you know how to do it for one knight.
You can reformulate your problem as a linear program:
I will use the following notations :
We have N knights and N end locations.
xij = 1 if you choose knight i to go to location j, and 0 otherwise.
cij is the minimal number of moves for knight i to reach location j.
Then you have the following linear program :
variables:
xij for i j in [0,N]
Cost function :
C= SUM(cij.xij for (i,j) in [0,N]x[0,N])
constraints:
SUM(xij for j in [1,N]) = 1 // each knight i is assigned exactly one location
SUM(xij for i in [1,N]) = 1 // each location j receives exactly one knight
(The matrix (xij) is a permutation matrix)
If X is the matrix of (xij), there are n! possible matrices. There is no easy solution to this system at first glance; solving it looks pretty similar to testing all possible solutions (but see the edit below).
EDIT:
This problem is called the assignment problem and there exist multiple algorithms to solve it in polynomial time (check out @Purav's answer for an example).
As mentioned by @Purav, even though this kind of problem can look NP-hard, in this case it can be solved in O(n^3).
About the problem @j_random_hacker raised:
Problem
If a knight is at an endpoint, the next knights should not be able to
go through this endpoint. So the cij might need to be updated after
each knight is moved.
Remarks:
1. Multiple optimal paths:
As there is no constraint on the size of the chessboard (it is unlimited), the order in which you make the moves of a shortest path is not relevant, so there are always many different shortest paths (I won't do the combinatorics here).
Example with 2 knights
Say you have 2 knights K and 2 endpoints ('x'); the optimal paths are drawn.
-x
|
|
x
|
K-- --K
If you move the right K to the first point (1 move), the second knight cannot use its optimal path.
-x
|
|
K
|
K-- --:
But I can easily create a new optimal path: instead of moving 2 right 1 up then 2 up 1 right,
it can move 2 up 1 right then 1 up 2 right (just the inverse).
--K
|
-
| K
| |
: --:
and any combination of paths works:
1 U 2 R then 2 U 1 R, etc., as long as I keep the same number of
UP, LEFT, DOWN and RIGHT moves and they are valid.
2. Order in which knights are moved:
The second thing is that I can choose the order of my moves.
Example:
With the previous example, if I choose to start with the left knight and send it to the upper endpoint, I no longer have an endpoint constraint.
-K
|
|
x
|
:-- --K
-K
|
|
K
|
:-- --:
With these 2 remarks it might be possible to prove that there is no situation in which the calculated lower bound is not optimal.
BFS can still work here. You need to adjust your states a bit, but it will still work:
let S be the set of possible states:
S={((x1,y1),(x2,y2),...,(xn,yn))|knight i is in (xi,yi)}
For each s in S, define:
Successors(s)={all possible states, moving 1 knight on the board}
Your target states are of course all permutations of your target points [you don't actually need to develop these permutations, just check if you reached a state where all the squares are "filled", which is simple to check]
start=(start_1,start_2,...,start_n) where start_i is the start location of knight i.
A run of BFS, from start [the initial position of each knight], is guaranteed to find a solution if one exists [because BFS is complete]. It is also guaranteed to be the shortest possible solution.
(*) Note that the case for single knight is a private instance of this solution, with n=1.
Though BFS will work, it will take a lot of time! The branch factor here is 8n (each of the n knights has up to 8 moves), so the algorithm will need to develop O((8n)^d) vertices, where n is the number of knights and d is the number of steps needed for a solution.
Possible optimizations:
Space: Note that because BFS uses a lot of memory [O((8n)^d)], you might
want to use Iterative Deepening DFS, which behaves like BFS
but consumes much less memory [O(nd)] at the cost of more running time.
Time: To accelerate the search, you can use A* with an
admissible heuristic function. It is also guaranteed to find a
solution if one exists, ensures the solution found is
optimal, and will probably [with a good heuristic] need to develop
fewer vertices than BFS.
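For completeness, a minimal sketch of the joint-state BFS described above; as just noted it is impractical beyond a few knights, and it assumes all squares involved are distinct:

from collections import deque

MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]

def joint_bfs(starts, targets):
    # A state is the tuple of all knight positions; a goal state is reached
    # when the set of occupied squares equals the set of target squares.
    goal = frozenset(targets)
    start = tuple(starts)
    seen, q = {start}, deque([(start, 0)])
    while q:
        state, d = q.popleft()
        if frozenset(state) == goal:
            return d
        for i, (x, y) in enumerate(state):        # move one knight at a time
            for dx, dy in MOVES:
                nxt = state[:i] + ((x + dx, y + dy),) + state[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    q.append((nxt, d + 1))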
So, I've found the solution.
BFS won't work well on an unbounded chessboard. There is no point in using any shortest-path algorithm: the number of knight moves from location a to location b can be computed in O(1) time -- M. Deza, Dictionary of Distances, p. 251
http://www.scribd.com/doc/53001767/Dictionary-of-Distances-M-Deza-E-Deza-Elsevier-2006-WW
The assignment problem can be solved using a min-cost max-flow algorithm (e.g. Edmonds-Karp):
http://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
I ran into the following algorithmic problem while experimenting with classification algorithms. Elements are classified into a polyhierarchy, which I understand to be a poset with a single root. I have to solve the following problem, which looks a lot like the set cover problem.
I uploaded my Latex-ed problem description here.
Devising an approximation algorithm that satisfies 1 & 2 is quite easy: just start at the vertices of G and "walk up", or start at the root and "walk down". Say you start at the root: iteratively expand vertices and then remove unnecessary ones until you have at least k sub-lattices. The approximation bound depends on the number of children of a vertex, which is OK for my application.
Does anyone know if this problem has a proper name, or maybe the tree version of the problem? I would be interested to find out if this problem is NP-hard; maybe someone has ideas for a good NP-hard problem to reduce from, or has a polynomial algorithm to solve the problem. If you have both, collect your million-dollar prize. ;)
The DAG version is hard by (drum roll) a reduction from set cover. Set k = 2 and do the obvious: condition (2) prevents us from taking the root. (Note that (3) doesn't actually imply (2) because of the lower bound k.)
The tree version is a special case of the series-parallel poset version, which can be solved exactly in polynomial time. Here's a recursive formula that gives a polynomial p(x) where the coefficient of x^n is the number of covers of cardinality n.
Single vertex to be covered: p(x) = x.
Other vertex: p(x) = 1 + x.
Parallel composition, where q and r are the polynomials for the two posets: q(x) r(x).
Series composition, where q is the polynomial for the top poset and r, for the bottom: If the top poset contains no vertices to be covered, then p(x) = (q(x) - 1) + r(x); otherwise, p(x) = q(x).
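A short Python sketch of this recursion, representing a polynomial by its list of coefficients (the entry at index n counts the covers of cardinality n):

def poly_mul(q, r):
    out = [0] * (len(q) + len(r) - 1)
    for i, a in enumerate(q):
        for j, b in enumerate(r):
            out[i + j] += a * b
    return out

def poly_add(q, r):
    out = [0] * max(len(q), len(r))
    for i, a in enumerate(q): out[i] += a
    for j, b in enumerate(r): out[j] += b
    return out

def covered_vertex():  return [0, 1]   # p(x) = x
def other_vertex():    return [1, 1]   # p(x) = 1 + x

def parallel(q, r):    return poly_mul(q, r)          # q(x) * r(x)

def series(q, r, top_has_covered_vertex):
    if top_has_covered_vertex:
        return q                                      # p(x) = q(x)
    return poly_add(poly_add(q, [-1]), r)             # p(x) = (q(x) - 1) + r(x)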
Given an array of items, each of which has a value and a cost, what's the best algorithm to determine the items required to reach a minimum value at the minimum cost? E.g.:
Item: Value -> Cost
-------------------
A 20 -> 11
B 7 -> 5
C 1 -> 2
MinValue = 30
naive solution: A + B + C + C + C. Value: 30, Cost 22
best option: A + B + B. Value: 34, Cost 21
Note that the overall value:cost ratio at the end is irrelevant (A + A would give you the best value for money, but A + B + B is a cheaper option which hits the minimum value).
This is the knapsack problem. (That is, the decision version of this problem is the same as the decision version of the knapsack problem, although the optimization version of the knapsack problem is usually stated differently.) It is NP-hard (which means no algorithm is known that is polynomial in the "size" -- number of bits -- in the input). But if your numbers are small (the largest "value" in the input, say; the costs don't matter), then there is a simple dynamic programming solution.
Let best[v] be the minimum cost to get a value of (exactly) v. Then you can calculate the values best[] for all v, by (initializing all best[v] to infinity and):
best[0] = 0
best[v] = min_(items i){cost[i] + best[v-value[i]]}
Then look at best[v] for values from the minimum you want up to that minimum plus the largest value; the smallest of those will give you the cost.
If you want the actual items (and not just the minimum cost), you can either maintain some extra data, or just look through the array of best[]s and infer from it.
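A minimal Python sketch of this DP, run on the example instance (items may be reused, matching the A + B + B solution):

def min_cost_for_value(items, min_value):
    # items: list of (value, cost) pairs; items may be used repeatedly
    INF = float('inf')
    max_v = min_value + max(v for v, _ in items)   # useful overshoot is bounded
    best = [INF] * (max_v + 1)
    best[0] = 0
    for v in range(1, max_v + 1):
        for value, cost in items:
            if value <= v and best[v - value] + cost < best[v]:
                best[v] = best[v - value] + cost
    return min(best[min_value:])   # cheapest cost reaching at least min_value

items = [(20, 11), (7, 5), (1, 2)]     # A, B, C from the example
print(min_cost_for_value(items, 30))   # 21, i.e. A + B + B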
This problem is known as integer linear programming. It's NP-hard.
However, for small problems like your example, it's trivial to make a quick few lines of code to simply brute force all the low combinations of purchase choices.
NP-hard doesn't mean impossible or even expensive; it means your problem becomes rapidly slower to solve as it scales up. In your case, with just three items, you can solve this in mere microseconds.
For the exact question of what's the best algorithm in general... there are entire textbooks on it. A good start is good old Wikipedia.
Edit: This answer is redacted on account of being factually incorrect. Following the advice in it will only cause you harm.
This is not actually the knapsack problem, because it assumes that you cannot pack more items than there is space for in some container. In your case you want to find the cheapest combination that will fill up the space, allowing for the fact that overflow may occur.
My solution, which I don't know to be optimal but should be pretty close, would be to compute the cost-benefit ratio for each item, find the item with the highest ratio, and fill the structure with this item until there isn't space for one more. Then I would test whether some combination of the other available items could fill the remaining slot for less than the cost of one of the cheapest items; if such a combination exists, use it, otherwise use one more of the cheapest items.
Addendum: This may actually also be NP-complete, but I am not sure yet. Anyway, for all practical purposes this variation should be much faster than the naive solution.