MiniZinc: discrete knapsack problem - an incomprehensible solution

Solve (from https://www.minizinc.org/doc-2.5.5/en/modelling2.html#set-constraints):
enum ITEM = { I1, I2, I3, I4, I5 };
int: capacity = 5;
array[ITEM] of int: profits = [1,2,3,4,5];
array[ITEM] of int: weights = [1,2,3,4,5];
int: maxProfit = sum (profits);
var set of ITEM: knapsack;
var int: weight = sum ([weights[i] | i in knapsack]);
var int: profit = sum ([profits[i] | i in knapsack]);
constraint weight <= capacity;
solve maximize profit;
output ["knapsack = \(knapsack)\n",
"weight = \(weight)/\(capacity)\n",
"profit = \(profit)"]
Output:
knapsack = {I1, I2}
weight = 3/5
profit = 3
----------
knapsack = {I1, I3}
weight = 4/5
profit = 4
----------
knapsack = {I1, I4}
weight = 5/5
profit = 5
----------
==========
Can you please tell me why the output is like this?
I expected an answer of the form:
% profit 5 for all
{ I1, I4 }
{ I2, I3 }
{ I5 }
Solver: Gecode 6.3.0

If I understand your question correctly, you are expecting the output to show all optimal solutions. Is that correct?
However, this is an optimization problem, and the solver only prints one optimal solution. The first two "solutions" are intermediate solutions with increasing profit (3 and 4). The last solution (profit = 5) is the optimal solution: {I1, I4}.
If you want all optimal solutions (with profit = 5), you have to add that as a constraint and change solve maximize profit to solve satisfy:
constraint profit = 5;
% solve maximize profit;
solve satisfy;
Then the output will be:
knapsack = {I1, I4}
weight = 5/5
profit = 5
----------
knapsack = {I2, I3}
weight = 5/5
profit = 5
----------
knapsack = {I5}
weight = 5/5
profit = 5
----------
==========
I am not aware of any flag to MiniZinc (or the FlatZinc solver) that will print all the optimal solutions directly (i.e. without the manual handling as above). However, this would be possible using the Python interface MiniZinc Python (https://minizinc-python.readthedocs.io/en/latest/ )
Update
Here is a Python model (using MiniZinc-Python) for showing all optimal solutions. The MiniZinc model is the same as stated except that there is no solve line. This is added by the Python program and is the key to getting all optimal solutions.
from minizinc import Instance, Model, Solver

gecode = Solver.lookup("gecode")
model = Model("./discrete_knapsack.mzn")
instance = Instance(gecode, model)

with instance.branch() as opt:
    opt.add_string("solve maximize profit;\n")
    res = opt.solve()
    obj = res["objective"]

instance.add_string(f"constraint sum ([profits[i] | i in knapsack]) = {obj};\n")
result = instance.solve(all_solutions=True)
for sol in result.solution:
    print(sol)
    print()
The output is:
knapsack = {I1, I4}
weight = 5/5
profit = 5
knapsack = {I2, I3}
weight = 5/5
profit = 5
knapsack = {I5}
weight = 5/5
profit = 5
(This program was much inspired by the MiniZinc-Python example https://minizinc-python.readthedocs.io/en/latest/basic_usage.html#finding-all-optimal-solutions )

Minimum time to reach from city 1 to city N

There are a lot of graph problems that require some modification of the BFS algorithm. I just came across this problem, and I thought it could be solved with an extension of the standard BFS algorithm.
The question states:
We are given a country with N cities and M bidirectional roads. Each city has a traffic light showing only 2 colors, i.e. Green and Red. All the traffic lights switch their color from Green to Red or vice versa every T seconds. We can cross a city only when its traffic light is green. Initially, all traffic lights are green. In any city, if the traffic light is red then we have to wait for it to turn green. The time taken to travel any road is C. We have to find the minimum time to reach city N from city 1.
Note: the graph doesn't contain self-loops or multiple edges.
For example:
N=5,M=5,T=3,C=5
Edges are:
1 2,
1 3,
2 4,
1 4,
2 5.
Here the minimum time to go from 1 to 5 is 11, through the path 1 -> 2 -> 5. We can reach city 2 in 5 seconds, then wait 1 second for the light to turn green, and then take 5 more seconds to go to city 5.
Can anyone share their approach to this problem? Is it a BFS problem, or is some other graph algorithm required? Pseudocode along with the algorithm would make it easier to understand.
Because all the cities start in the same initial state, switch their lights with the same frequency, and all the roads have the same duration, the traffic lights delay all routes equally.
Since all roads have the same duration, BFS will be an efficient way to solve the problem. The only adjustment to the standard algorithm is to adjust the arrival time at each node to account for any delay due to the traffic lights.
(If the roads had different durations, or the lights switched irregularly, then a more advanced algorithm such as Dijkstra would be required.)
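As a minimal sketch of that idea (assuming lights are green during [0, T), red during [T, 2T), and so on, as in the problem statement; the function name and input format here are just for illustration):
from collections import deque

def min_travel_time(n, edges, T, C):
    # All lights start green and toggle every T seconds, and every road
    # takes C seconds, so every route is delayed identically: the
    # fewest-roads route (found by plain BFS) is also the fastest route.
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    dist = [-1] * (n + 1)          # BFS for the minimum number of roads
    dist[1] = 0
    q = deque([1])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    if dist[n] == -1:
        return -1                   # city N is unreachable

    # Replay the route: C seconds per road, and wait at intermediate
    # cities whenever we arrive during a red phase ([T, 2T), [3T, 4T), ...).
    time = 0
    for step in range(dist[n]):
        time += C
        if step < dist[n] - 1 and (time // T) % 2 == 1:
            time = (time // T + 1) * T   # wait for the next green phase
    return time

# Example from the question: expected answer is 11 (path 1 -> 2 -> 5).
print(min_travel_time(5, [(1, 2), (1, 3), (2, 4), (1, 4), (2, 5)], 3, 5))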
I'm assuming all edge weights have an integer number of seconds.
Note that the period of a traffic light is 2T. Take your original graph G and duplicate its nodes 2T times: G_0, G_1, ..., G_{2T-1}. If there is an edge in the original graph G from node a to b with weight w, then add an edge with weight w from each node a in G_t to b in G_{(t + w) mod 2T}, for each t where the light in a is green. Also add an edge with weight 1 between each node in G_t and its copy in G_{(t+1) mod 2T}, representing the possibility of waiting at a city.
Finally, add one more copy of the nodes of G to your graph, D, that will be used for the destination nodes. Add an edge from each node in G_t to its respective node in D with weight 0.
Then the shortest path between node s in G_0 and node t in D corresponds exactly to your problem.
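A rough Python sketch of that construction, specialized to this problem's uniform road time C (so w = C for every edge) and using Dijkstra on the expanded graph; instead of an explicit D layer it simply stops when the first copy of the destination is settled, which amounts to the same thing. The function name is made up for the example:
import heapq

def time_expanded_shortest(n, edges, T, C, src=1, dst=None):
    # One copy of each city per phase p in [0, 2T); a city's light is
    # green while (p // T) % 2 == 0 and red otherwise.
    if dst is None:
        dst = n
    period = 2 * T
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    INF = float("inf")
    dist = [[INF] * period for _ in range(n + 1)]
    dist[src][0] = 0
    pq = [(0, src, 0)]             # (elapsed time, city, phase = time mod 2T)
    while pq:
        d, u, p = heapq.heappop(pq)
        if d > dist[u][p]:
            continue
        if u == dst:
            return d                # first settled copy of dst is optimal
        # Weight-1 "wait" edge to the next phase of the same city.
        q = (p + 1) % period
        if d + 1 < dist[u][q]:
            dist[u][q] = d + 1
            heapq.heappush(pq, (d + 1, u, q))
        # Road edges exist only in phases where u's light is green.
        if (p // T) % 2 == 0:
            q = (p + C) % period
            for v in adj[u]:
                if d + C < dist[v][q]:
                    dist[v][q] = d + C
                    heapq.heappush(pq, (d + C, v, q))
    return -1                       # dst unreachable

# Example from the question: expected answer is 11.
print(time_expanded_shortest(5, [(1, 2), (1, 3), (2, 4), (1, 4), (2, 5)], 3, 5))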
#include <bits/stdc++.h>
using namespace std;
#define ll long long int

vector<int> g[1001];
vector<pair<ll, vector<ll>>> pt;

void dfs(ll st, ll e, ll vis[], vector<ll> rs, ll w) {
    rs.push_back(st);
    if (st == e) {
        pt.push_back({w * (ll)(rs.size() - 1), rs});
        return;
    }
    for (auto u : g[st]) {
        if (vis[u] == 0) {
            vis[st] = 1;
            dfs(u, e, vis, rs, w);
            vis[st] = 0;
        }
    }
}

int main()
{
    ll n, m, t, c, u, v;
    cin >> n >> m >> t >> c;
    while (m--) {
        cin >> u >> v;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    if (n == 1)
        cout << 0 << endl;
    else {
        vector<ll> rs;
        ll w = c;
        ll vis[1001] = {0};
        dfs(1, n, vis, rs, w);
        if (pt.size() == 0)
            cout << -1 << endl;
        else {
            sort(pt.begin(), pt.end());     // path with the fewest roads first
            ll te = 0;
            for (size_t i = 1; i < pt[0].second.size(); i++) {
                if ((te / t) % 2 == 1)       // red phase: wait for the next green
                    te = (te / t + 1) * t;
                te += c;                     // travel the next road
            }
            cout << te << endl;
        }
    }
    return 0;
}
This question was asked in a coding round of a company that I attended; I could come up with a BFS solution with O(n) time complexity.
(The coding round is over now, so here's my solution) - Edited
My solution (Python 3):
from collections import defaultdict

def getPresentTrafficColorAndWaitTime(curr_color, traffic, cost_travel):
    # color
    val = (cost_travel // traffic) % 2
    if val == 1:
        curr_color = 0 if curr_color == 1 else 1
    # waitTime
    waitTime = traffic - (cost_travel % traffic)
    return curr_color, waitTime

def bfs(adj, visited, curr, final, traffic, cost_travel):
    queue = []
    queue.append(1)
    visited[1] = 1
    dist = 0
    # Red: 1 , Green: 0
    curr_color = 0
    temp_cost_travel = cost_travel
    while queue:
        length = len(queue)
        for _ in range(length):
            curr = queue[0]
            queue.pop(0)
            for i in adj[curr]:
                if i == final:
                    return dist + cost_travel
                elif visited[i] == 0:
                    visited[i] = 1
                    queue.append(i)
        curr_color, waitTime = getPresentTrafficColorAndWaitTime(curr_color, traffic, temp_cost_travel)
        if curr_color == 1:
            add_dist = waitTime
            temp_cost_travel = cost_travel
            curr_color = 0
        else:
            add_dist = 0
            temp_cost_travel += cost_travel
        dist += cost_travel + add_dist
    return -1

# Taking input and creating the adjacency list using defaultdict
n, m, t, c = map(int, input().split())
adj_list = defaultdict(list)
for _ in range(m):
    u, v = map(int, input().split())
    adj_list[u] = adj_list.get(u, []) + [v]
    adj_list[v] = adj_list.get(v, []) + [u]
visited = [0] * (n + 1)
answer = bfs(adj_list, visited, 1, n, t, c)
print(answer)

Two knapsacks with smallest delta in sum of values

This question is a rephrased version of a problem I encountered while implementing some system at work. I thought it was a bit similar to the knapsack problem and was curious to explore how it can be solved, since I wasn't able to come up with a solution.
Problem statement: Given a set of items, each with a weight and a value, and two knapsacks, determine which items to include in these two knapsacks so that each knapsack has a total weight of exactly K and the delta between the sums of values of the two knapsacks is as small as possible. If it's not possible to satisfy the weight constraint for both knapsacks, the algorithm should return nothing.
I think some sort of greedy algorithm might be a satisfying solution, but I'm not sure how to write it.
This can be solved with a dynamic programming approach. Here is an approach with linked lists.
from collections import namedtuple

ListEntry = namedtuple('ListEntry', 'id weight value prev')
Thing = namedtuple('Thing', 'weight value')

def add_entry_to_list(i, e, l):
    return ListEntry(i, l.weight + e.weight, l.value + e.value, l)

def split_entries(entries, target_weight):
    empty_list = ListEntry(None, 0, 0, None)
    dp_soln = {(0, 0): (empty_list, empty_list)}
    for i in range(len(entries)):
        dp_soln_new = {}
        e = entries[i]
        for k, v in dp_soln.items():
            (weight_l, weight_r) = k
            (l_left, l_right) = v
            this_options = {k: v}
            this_options[(weight_l + e.weight, weight_r)] = (add_entry_to_list(i, e, l_left), l_right)
            this_options[(weight_l, weight_r + e.weight)] = (l_left, add_entry_to_list(i, e, l_right))
            for o_k, o_v in this_options.items():
                if target_weight < max(o_k):
                    pass  # Can't lead to (target_weight, target_weight)
                elif o_k not in dp_soln_new:
                    dp_soln_new[o_k] = o_v
                else:
                    diff = o_v[0].value - o_v[1].value
                    existing_diff = dp_soln_new[o_k][0].value - dp_soln_new[o_k][1].value
                    if existing_diff < diff:
                        dp_soln_new[o_k] = o_v
        dp_soln = dp_soln_new
    final_key = (target_weight, target_weight)
    if final_key in dp_soln:
        return dp_soln[final_key]
    else:
        return None

print(split_entries([
    Thing(1, 3),
    Thing(1, 4),
    Thing(2, 1),
    Thing(2, 5),
], 3))

An efficient algorithm to count the number of integer grids

Consider a square 3 by 3 grid of non-negative integers. For each row i the sum of the integers is set to be r_i. Similarly for each column j the sum of integers in that column is set to be c_j. An instance of the problem is therefore described by 6 non-negative integers.
Is there an efficient algorithm to count how many different
assignments of integers to the grid there are given the row and column
sum constraints?
Clearly one could enumerate all possible matrices of non-negative integers with values up to sum r_i and check the constraints for each, but that would be insanely slow.
Example
Say the row constraints are 1 2 3 and the column constraints are 3 2 1. The possible integer grids are:
┌─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┐
│0 0 1│0 0 1│0 0 1│0 1 0│0 1 0│0 1 0│0 1 0│1 0 0│1 0 0│1 0 0│1 0 0│1 0 0│
│0 2 0│1 1 0│2 0 0│0 1 1│1 0 1│1 1 0│2 0 0│0 1 1│0 2 0│1 0 1│1 1 0│2 0 0│
│3 0 0│2 1 0│1 2 0│3 0 0│2 1 0│2 0 1│1 1 1│2 1 0│2 0 1│1 2 0│1 1 1│0 2 1│
└─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┘
In practice my main interest is when the total sum of the grid will be at most 100 but a more general solution would be very interesting.
Is there an efficient algorithm to count how many different assignments of integers to the grid there are given the row and column sum constraints?
Update: my answer is wrong for this particular problem, where N is fixed (i.e. becomes the constant 3). In this case it is polynomial. Sorry for the misleading information.
TL;DR: I think it's at least NP-hard. There is no polynomial algorithm, but maybe there are some heuristic speedups.
For an N-by-N grid you have N equations for row sums, N equations for column sums, and N^2 non-negativity constraints:
sum_j x_ij = r_i for each row i, sum_i x_ij = c_j for each column j, and x_ij >= 0 for all i, j.
For N > 2 this system has more than one possible solution in general, because there are N^2 unknown variables x_ij and just 2N equations, and for N > 2 we have N^2 > 2N.
You can eliminate 2N - 1 variables to be left with just one equation in K = N^2 - (2N - 1) variables summing to S. Then you'll have to deal with the integer partition problem to find all possible combinations of K terms that give S. This problem is NP-complete, and the number of combinations depends not only on the number of terms K, but also on the magnitude of the value S.
This problem reminded me of the simplex method. My first thought was to find just one solution using something like that method and then traverse the edges of the polytope to find all the possible solutions. And I was hoping that there's an efficient algorithm for that. But no, the integer simplex method, which is related to integer linear programming, is NP-hard :(
I hope there are some kinds of heuristics for related problems that you can use to speed up the naive brute-force solution.
I don't know of a matching algorithm, but I don't think it would be that difficult to work one out. Given any one solution, you can derive another solution by selecting the four corners of a rectangular region of your grid, increasing two diagonal corners by some value and decreasing the other two by that same value. The range for that value is constrained by the lower value of each diagonal pair. If you determine the size of all such ranges, you should be able to multiply them together to determine the total number of possible solutions.
Assuming you describe your grid like a familiar spreadsheet, alphabetically for columns and numerically for rows, you can describe all possible regions with the following list:
A1:B2, A1:B3, A1:C2, A1:C3, B1:C2, B1:C3, A2:B3, A2:C3, B2:C3
For each region, we tabulate a range based on the lowest value in each diagonal corner pair. You can incrementally reduce either pair until a member reaches zero, because there's no upper bound for the other pair.
Selecting the first solution of your example, we can derive all other possible solutions using this technique.
A B C
┌─────┐
1 │0 0 1│ sum=1
2 │0 2 0│ sum=2
3 │3 0 0│ sum=3
└─────┘
3 2 1 = sums
A1:B2 - 1 solution  (0,0,0,2)
A1:C2 - 1 solution  (0,1,0,0)
A1:B3 - 1 solution  (0,0,3,0)
A1:C3 - 2 solutions (0,1,3,0), (1,0,2,1)
B1:C2 - 2 solutions (0,1,2,0), (1,0,1,1)
B1:C3 - 1 solution  (0,1,0,0)
A2:B3 - 3 solutions (0,2,3,0), (1,1,2,1), (2,0,1,2)
A2:C3 - 1 solution  (0,0,3,0)
B2:C3 - 1 solution  (2,0,0,0)
Multiply all solution counts together and you get 2*2*3=12 solutions.
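A small Python sketch that reproduces the tally above for the example, taking each region's count as the minimum of one diagonal pair plus the minimum of the other plus one:
from itertools import combinations

# First solution from the example, rows 1..3 and columns A..C.
grid = [[0, 0, 1],
        [0, 2, 0],
        [3, 0, 0]]

total = 1
for r1, r2 in combinations(range(3), 2):        # pairs of rows
    for c1, c2 in combinations(range(3), 2):    # pairs of columns
        a, b = grid[r1][c1], grid[r1][c2]       # top corners of the region
        c, d = grid[r2][c1], grid[r2][c2]       # bottom corners
        # One diagonal pair (a, d) can be decreased min(a, d) times, the
        # other pair (b, c) can be decreased min(b, c) times, plus the
        # original assignment itself.
        total *= min(a, d) + min(b, c) + 1

print(total)  # 12 for this example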
Maybe a simple 4-nested-loop solution is fast enough, if the total sum is small?
function solve(rowsum, colsum) {
    var count = 0;
    for (var a = 0; a <= rowsum[0] && a <= colsum[0]; a++) {
        for (var b = 0; b <= rowsum[0] - a && b <= colsum[1]; b++) {
            var c = rowsum[0] - a - b;
            for (var d = 0; d <= rowsum[1] && d <= colsum[0] - a; d++) {
                var g = colsum[0] - a - d;
                for (var e = 0; e <= rowsum[1] - d && e <= colsum[1] - b; e++) {
                    var f = rowsum[1] - d - e;
                    var h = colsum[1] - b - e;
                    var i = rowsum[2] - g - h;
                    if (i >= 0 && i == colsum[2] - c - f) ++count;
                }
            }
        }
    }
    return count;
}
document.write(solve([1,2,3],[3,2,1]) + "<br>");
document.write(solve([22,33,44],[30,40,29]) + "<br>");
It won't help with the problem being #P-hard (if you allow matrices of any size; see the reference in the comment below), but there is a solution which doesn't amount to enumerating all the matrices, but rather a smaller set of objects called semi-standard Young tableaux. Depending on your input it could go faster, while still being of exponential complexity. Since it's an entire chapter in several algebraic combinatorics books, and in Knuth's AOCP 3, I won't go into details here, only pointing to the relevant Wikipedia pages.
The idea is that, using the Robinson–Schensted–Knuth correspondence, each of these matrices is in bijection with a pair of tableaux of the same shape, where one tableau is filled with integers counted by the row sums and the other by the column sums. The number of tableaux of shape U filled with numbers counted by V is called the Kostka number K(U,V). As a consequence, you end up with a formula such as
#Mat(RowSum, ColSum) = \sum_shape K(shape, RowSum)*K(shape, ColSum)
Of course if RowSum == ColSum == Sum:
#Mat(Sum, Sum) = \sum_shape K(shape, Sum)^2
Here is your example in the SageMath system:
sage: sum(SemistandardTableaux(p, [3,2,1]).cardinality()^2 for p in Partitions(6))
12
Here are some larger examples:
sage: sums = [6,5,4,3,2,1]
sage: %time sum(SemistandardTableaux(p, sums).cardinality()^2 for p in Partitions(sum(sums)))
CPU times: user 228 ms, sys: 4.77 ms, total: 233 ms
Wall time: 224 ms
8264346
sage: sums = [7,6,5,4,3,2,1]
sage: %time sum(SemistandardTableaux(p, sums).cardinality()^2 for p in Partitions(sum(sums)))
CPU times: user 1.95 s, sys: 205 µs, total: 1.95 s
Wall time: 1.94 s
13150070522
sage: sums = [5,4,4,4,4,3,2,1]
sage: %time sum(SemistandardTableaux(p, sums).cardinality()^2 for p in Partitions(sum(sums)))
CPU times: user 1.62 s, sys: 221 µs, total: 1.62 s
Wall time: 1.61 s
1769107201498
It's clear that you won't get that fast enumerating matrices.
As requested by גלעד ברקן, here is a solution with different row and column sums:
sage: rsums = [5,4,3,2,1]; colsums = [5,4,3,3]
sage: %time sum(SemistandardTableaux(p, rsums).cardinality() * SemistandardTableaux(p, colsums).cardinality() for p in Partitions(sum(rsums)))
CPU times: user 88.3 ms, sys: 8.04 ms, total: 96.3 ms
Wall time: 92.4 ms
10233
I've tried to optimize the slow option. I took the code that gets all the combinations and changed it to only compute the total count. This is the fastest I could get:
private static int count(int[] rowSums, int[] colSums)
{
    int count = 0;
    int[] row0 = new int[3];
    int sum = rowSums[0];
    for (int r0 = 0; r0 <= sum; r0++)
        for (int r1 = 0, max1 = sum - r0; r1 <= max1; r1++)
        {
            row0[0] = r0;
            row0[1] = r1;
            row0[2] = sum - r0 - r1;
            count += getCombinations(rowSums[1], row0, colSums);
        }
    return count;
}

private static int getCombinations(int sum, int[] row0, int[] colSums)
{
    int count = 0;
    int max1 = Math.Min(colSums[1] - row0[1], sum);
    int max2 = Math.Min(colSums[2] - row0[2], sum);
    for (int r0 = 0, max0 = Math.Min(colSums[0] - row0[0], sum); r0 <= max0; r0++)
        for (int r1 = 0; r1 <= max1; r1++)
        {
            int r01 = r0 + r1;
            if (r01 <= sum)
                if ((r01 + max2) >= sum)
                    count++;
        }
    return count;
}
Stopwatch w2 = Stopwatch.StartNew();
int res = count(new int[] { 1, 2, 3 }, new int[] { 3, 2, 1 });//12
int res1 = count(new int[] { 22, 33, 44 }, new int[] { 30, 40, 29 });//117276
int res2 = count(new int[] { 98, 99, 100}, new int[] { 100, 99, 98});//12743775
int res3 = count(new int[] { 198, 199, 200 }, new int[] { 200, 199, 198 });//201975050
w2.Stop();
Console.WriteLine("w2:" + w2.ElapsedMilliseconds);//322 - 370 on my computer
Aside from my other answer using the Robinson-Schensted-Knuth bijection, here is another solution which doesn't need advanced combinatorics, just some programming tricks, and which solves this problem for arbitrarily larger matrices. The first idea for solving this kind of problem is to use recursion, avoiding recomputing things thanks to memoization or, better, dynamic programming. Specifically, once you have chosen a candidate for the first row, you subtract this first row from the column sums and you are left with the same problem, only with one less row. To avoid recomputing things you store the result. You can do this either basically in a big table (memoization) or in a more tricky way by storing all the solutions for matrices with n rows and deducing the number of solutions for matrices with n+1 rows (dynamic programming).
Here is a recursive method using memoization in Python:
# Generator for the rows of sum s which are smaller than maxrow
def choose_one_row(s, maxrow):
    if not maxrow:
        if s == 0: yield []
        else: return
    else:
        for i in range(0, maxrow[0]+1):
            for res in choose_one_row(s-i, maxrow[1:]):
                yield [i]+res

memo = dict()
def nmat(rsum, colsum):
    # sanity check: sums by row and by column must match
    if sum(rsum) != sum(colsum): return 0
    # base case: rsum is empty
    if not rsum: return 1
    # convert to immutable tuples for memoization
    rsum = tuple(rsum)
    colsum = tuple(colsum)
    # return the result if it is already computed
    try:
        return memo[rsum, colsum]
    except KeyError:
        pass
    # apply the recursive formula
    res = 0
    for row in choose_one_row(rsum[0], colsum):
        res += nmat(rsum[1:], tuple(a - b for a, b in zip(colsum, row)))
    # memoize the result
    memo[rsum, colsum] = res
    return res
Then after that:
sage: nmat([3,2,1], [3,2,1])
12
sage: %time nmat([6,5,4,3,2,1], [6,5,4,3,2,1])
CPU times: user 1.49 s, sys: 7.16 ms, total: 1.5 s
Wall time: 1.48 s
8264346

Finding all the Combination to sum set of coins to a certain number

I am given an array and I have to find all the ways to reach a targeted sum.
For Example:
A[] ={1,2,3};
S = 5;
Total combinations = {1,1,1,1,1}, {2,3}, {3,2}, {1,1,3}, {1,3,1}, {3,1,1} and other possible sequences.
I know it sounds like the coin change problem, but here the order matters when counting the combinations, i.e. {2,3} and {3,2} are 2 different solutions.
In the original coin change problem, you "choose" an arbitrary coin and "guess" whether it is or is not in the solution; this is done because the order is not important.
Here, you will have to iterate over all possibilities for "which coin is first", until you are done:
D(0) = 1
D(x) = 0 | x < 0
D(x) = sum { D(x-coins[0]), D(x-coins[1]), ..., D(x-coins[n-1]) }
Note that at each step, you consider all possibilities for choosing the next coin and move on. At the end, you sum up all the solutions, over all possibilities of placing each coin at the head of the solution.
The complexity of this solution using DP is O(n*S), where n is the number of coins and S is the desired sum.
Matlab code (I wrote it in imperative style; this is my currently open IDE, sorry it's Matlab and not a more common language like Java or C):
function [ n ] = make_change( coins, x )
    D = zeros(x,1);
    for k = 1:x
        for t = 1:length(coins)
            curr = k - coins(t);
            if curr > 0
                D(k) = D(k) + D(curr);
            elseif curr == 0
                D(k) = D(k) + 1;
            end
        end
    end
    n = D(x);
end
Invoking will yield:
>> make_change([1,2,3],5)
ans =
13
Which is correct, since all possibilities are [1,1,1,1,1],[1,1,1,2]*4, [1,1,3]*3,[1,2,2]*3,[2,3]*2 = 13
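For reference, a minimal Python sketch of the same recurrence (the function name is just for illustration):
def make_change(coins, s):
    # D[x] = number of ordered sequences of coins summing to x
    D = [0] * (s + 1)
    D[0] = 1
    for x in range(1, s + 1):
        for coin in coins:
            if x - coin >= 0:
                D[x] += D[x - coin]
    return D[s]

print(make_change([1, 2, 3], 5))  # 13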

Printing Items that are in sack in knapsack

Suppose you are a thief and you invaded a house. Inside you found the following items:
A vase that weighs 3 pounds and is worth 50 dollars.
A silver nugget that weighs 6 pounds and is worth 30 dollars.
A painting that weighs 4 pounds and is worth 40 dollars.
A mirror that weighs 5 pounds and is worth 10 dollars.
The solution to this knapsack problem of size 10 pounds is 90 dollars.
Now I want to know which elements I put in my sack, using the table made from dynamic programming. How do I backtrack?
From your DP table we know f[i][w] = the maximum total value of a subset of items 1..i that has total weight less than or equal to w.
We can use the table itself to restore the optimal packing:
def reconstruct(i, w): # reconstruct subset of items 1..i with weight <= w
                       # and value f[i][w]
    if i == 0:
        # base case
        return {}
    if f[i][w] > f[i-1][w]:
        # we have to take item i
        return {i} UNION reconstruct(i-1, w - weight_of_item(i))
    else:
        # we don't need item i
        return reconstruct(i-1, w)
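A runnable Python version of that pseudocode, filled in with the items from the question (weights 3, 6, 4, 5; values 50, 30, 40, 10; capacity 10); the function name and the bottom-up table construction are assumptions for the sketch:
def knapsack_with_items(weights, values, W):
    n = len(weights)
    # f[i][w] = best value using items 1..i with weight limit w
    f = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            f[i][w] = f[i - 1][w]
            if weights[i - 1] <= w:
                f[i][w] = max(f[i][w], f[i - 1][w - weights[i - 1]] + values[i - 1])
    # Backtrack exactly as in the pseudocode: item i is in the sack
    # whenever f[i][w] differs from f[i-1][w].
    chosen, w = [], W
    for i in range(n, 0, -1):
        if f[i][w] != f[i - 1][w]:
            chosen.append(i)              # 1-based item index
            w -= weights[i - 1]
    return f[n][W], chosen[::-1]

# Vase, silver nugget, painting, mirror from the question:
print(knapsack_with_items([3, 6, 4, 5], [50, 30, 40, 10], 10))
# -> (90, [1, 3]): take the vase and the painting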
I have an iterative algorithm inspired by @NiklasB.'s that works when a recursive algorithm would hit some kind of recursion limit.
def reconstruct(i, w, kp_soln, weight_of_item):
    """
    Reconstruct subset of items i with weights w. The two inputs
    i and w are taken at the point of optimality in the knapsack soln
    In this case I just assume that i is some number from a range
    0,1,2,...n
    """
    recon = set()
    # assuming our kp soln converged, we stopped at the ith item, so
    # start here and work our way backwards through all the items in
    # the list of kp solns. If an item was deemed optimal by kp, then
    # put it in our bag, otherwise skip it.
    for j in range(0, i+1)[::-1]:
        cur_val = kp_soln[j][w]
        prev_val = kp_soln[j-1][w]
        if cur_val > prev_val:
            recon.add(j)
            w = w - weight_of_item[j]
    return recon
Using a loop:
for (int n = N, w = W; n > 0; n--)
{
    if (sol[n][w] != 0)
    {
        selected[n] = 1;
        w = w - wt[n];
    }
    else
        selected[n] = 0;
}

System.out.print("\nItems with weight ");
for (int i = 1; i < N + 1; i++)
    if (selected[i] == 1)
        System.out.print(val[i] + " ");
