Knapsack variation in Dynamic Programming - algorithm

I'm trying to solve this exercise: we are given n items, each with a given nonnegative weight w1, w2, ..., wn and value v1, v2, ..., vn, and a knapsack with maximum weight capacity W. I have to find a subset S of maximum value, subject to two restrictions: 1) the total weight of the set must not exceed W; 2) I can't take objects with consecutive indices.
For example, with n = 10, possible solutions are {1, 4, 6, 9}, {2, 4, 10} or {1, 10}.
How can I build a correct recurrence?

Recall that the knapsack recursive formula used for the DP solution is:
D(i,w) = max { D(i-1,w) , D(i-1,w-weight[i]) + value[i] }
In your modified problem, if you choose to take item i, you cannot take item i-1, which results in the modified recurrence:
D(i,w) = max { D(i-1,w) , D(i-2,w-weight[i]) + value[i] }
(note the i-2 instead of i-1 in the second term)
Like the classic knapsack, this is still an exhaustive search, and thus it provides an optimal solution for the same reasons.
The idea is that, given that you have decided to take item i, you cannot take item i-1, so find the optimal solution that uses at most the items up to i-2. (Nothing changes from the original recurrence if you decide to exclude item i.)
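For concreteness, here is a minimal bottom-up sketch of that modified recurrence (the function name and the example data are mine, purely for illustration):

def knapsack_no_adjacent(weights, values, W):
    # D[i][w] = best value using the first i items within capacity w,
    # never taking two items with consecutive indices
    n = len(weights)
    D = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            best = D[i - 1][w]                                   # skip item i
            if weights[i - 1] <= w:
                # take item i: item i-1 is then forbidden, so fall back to row i-2
                prev = D[i - 2][w - weights[i - 1]] if i >= 2 else 0
                best = max(best, prev + values[i - 1])
            D[i][w] = best
    return D[n][W]

# items 2 and 4 are not adjacent, so both can be taken: value 4 + 5 = 9
print(knapsack_no_adjacent([2, 3, 5, 4], [3, 4, 6, 5], 8))       # -> 9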

Related

Dynamic programming function in O(nk) time

Given two integer arrays, A of size n and B of size k, where all items in array B are unique, I want to find an algorithm that finds indices j' < j'' such that all elements of B belong to A[j' : j''] and the value |j'' - j'| is minimized, or returns zero if there are no such indices at all. I also note that A can contain duplicates.
To provide more clarity, consider the array A = {1, 2, 9, 6, 7, 8, 1, 0, 0, 6} and B = {1, 8, 6}. You can see that B ⊆ A[1 : 6] and B ⊆ A[4 : 7], but 7 − 4 < 6 − 1, so the algorithm should output j' = 4 and j'' = 7.
I want to find an algorithm that runs in O(nk) time.
My work so far: for each j' ∈ [n], I want to compute the minimum j'' ≥ j' such that B ⊆ A[j' : j'']. Writing B = {b_1, ..., b_k}, let Next[j'][i] denote the smallest index t ≥ j' such that a_t = b_i, i.e. the index of the first element at or after position j' that equals b_i.
In particular, if no such t exists, simply let Next[j'][i] = ∞. If I can show that the minimum j'' is
j'' = max_{i ∈ [k]} Next[j'][i],
then I think I will be able to design a dynamic programming algorithm that computes Next in O(nk) time. Any help on this dynamic programming problem would be much appreciated!
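The Next table described above can indeed be filled in from right to left: row j inherits row j+1 and overwrites the single entry that A[j] matches. A rough 0-based sketch (the function name is mine; the example reproduces the (4, 7) answer above as (3, 6) in 0-based indices):

import math

def smallest_window_containing(A, B):
    n, k = len(A), len(B)
    pos = {b: i for i, b in enumerate(B)}           # value -> index in B
    INF = math.inf
    Next = [[INF] * k for _ in range(n + 1)]        # row n: no such t exists
    for j in range(n - 1, -1, -1):
        Next[j] = Next[j + 1][:]                    # inherit the answers for j+1
        if A[j] in pos:
            Next[j][pos[A[j]]] = j                  # A[j] itself is the nearest match
    best = None
    for j1 in range(n):
        j2 = max(Next[j1])                          # window [j1, j2] covers all of B
        if j2 < INF and (best is None or j2 - j1 < best[1] - best[0]):
            best = (j1, j2)
    return best                                     # None if B is not contained in A

print(smallest_window_containing([1, 2, 9, 6, 7, 8, 1, 0, 0, 6], [1, 8, 6]))   # -> (3, 6)

Each of the n rows copies k entries, so the table is built in O(nk) time and space.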
Just run a sliding window: extend the right end until the window contains all elements of B, then shrink it from the left while it still does. With a hash map of counts that's O(n).
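A rough sketch of that sliding-window approach (the function name is mine; counting multiplicities also covers the stated case where B's elements are unique):

from collections import Counter

def min_window_covering(A, B):
    need = Counter(B)            # how many of each value the window is still missing
    missing = len(need)          # distinct values of B not yet covered
    best = None
    left = 0
    for right, x in enumerate(A):
        if x in need:
            need[x] -= 1
            if need[x] == 0:
                missing -= 1
        while missing == 0:      # window [left, right] covers B; try to shrink it
            if best is None or right - left < best[1] - best[0]:
                best = (left, right)
            y = A[left]
            if y in need:
                need[y] += 1
                if need[y] > 0:
                    missing += 1
            left += 1
    return best

print(min_window_covering([1, 2, 9, 6, 7, 8, 1, 0, 0, 6], [1, 8, 6]))   # -> (3, 6), i.e. (4, 7) 1-based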

Maximum Sum for Subarray with fixed cutoff

I have a list of integers, and I need to find a way to get the maximum possible total by adding elements to a running sum until that sum is equal to (or greater than) a fixed cutoff. I know this seems similar to the knapsack, but I was unsure whether it was equivalent.
Sorting the array and greedily adding the largest remaining element until the sum reaches the cutoff does not work. Observe the following list:
list = [6, 5, 4, 4, 4, 3, 2, 2, 1]
cutoff = 15
For this list, doing it the naive way results in a sum of 15, which is very sub-optimal. As far as I can see, the maximum you could arrive at using this list is 20, by adding 4 + 4 + 4 + 2 + 6. If this is just a different version of knapsack, I can just implement a knapsack solution, as I probably have small enough lists to get away with this, but I'd prefer to do something more efficient.
First of all, in any valid sum you won't have produced a worse result by adding the largest element last. So there is no harm in assuming, as a first step, that the elements are sorted from smallest to largest.
And now you use a dynamic programming approach similar to the usual subset sum.
def best_cutoff_sum(cutoff, elements):
    elements = sorted(elements)
    # sums maps each reachable total to the path (nested list) that produced it
    sums = {0: None}
    for e in elements:
        next_sums = {}
        for v, path in sums.items():
            next_sums[v] = path                  # skip e
            if v < cutoff:                       # e may only be added while still below the cutoff
                next_sums[v + e] = [e, path]     # take e
        sums = next_sums
    best = max(sums.keys())
    return (best, sums[best])

print(best_cutoff_sum(15, [6, 5, 4, 4, 4, 3, 2, 2, 1]))
With a little work you can turn the path from the nested list it currently is into whatever format you want.
If your list of non-negative elements has n elements, your cutoff is c and the maximum element value is v, then this algorithm will take time O(n * (c + v)).

Subset with smallest sum greater or equal to k

I am trying to write a python algorithm to do the following.
Given a set of positive integers S, find the subset with the smallest sum, greater or equal to k.
For example:
S = [50, 103, 85, 21, 30]
k = 140
subset = [103, 50] (with sum = 153)
The numbers in the initial set are all integers, and k can be arbitrarily large. Usually there will be about 100 numbers in the set.
Of course there's the brute force solution of going through all possible subsets, but that runs in O(2^n), which is infeasible. I have been told that this problem is NP-complete, but that there should be a dynamic programming approach that allows it to run in pseudo-polynomial time, like the knapsack problem. So far, attempting to use DP still leads me to solutions that are O(2^n).
Is there a way to apply DP to this problem? If so, how? I find DP hard to understand, so I might have missed something.
Any help is much appreciated.
Well, seeing that the numbers are reals rather than integers, the best I can think of is O(2^(n/2) log(2^(n/2))).
It might look worse at first glance but notice that 2^(n/2) == sqrt(2^n)
So to achieve this complexity we will use the technique known as meet in the middle (a sketch follows these steps):
Split the set into 2 parts of sizes n/2 and n - n/2.
Use brute force to generate all subset sums of each part (including the empty subset) and store them in arrays, call them A and B.
Sort array B.
Now, for each element a in A, if a + B[-1] >= k, use binary search to find the smallest element b in B that satisfies a + b >= k.
Out of all such a + b pairs found, choose the smallest.
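A compact sketch of those steps (the function names are mine; bisect does the binary search, and the same code works whether the numbers are reals or integers):

from bisect import bisect_left
from itertools import combinations

def all_subset_sums(items):
    # brute force over one half: 2^len(items) sums, including the empty subset
    return [sum(c) for r in range(len(items) + 1) for c in combinations(items, r)]

def smallest_sum_at_least(S, k):
    half = len(S) // 2
    A = all_subset_sums(S[:half])
    B = sorted(all_subset_sums(S[half:]))
    best = None
    for a in A:
        if a + B[-1] < k:
            continue                          # even the largest b cannot reach k
        b = B[bisect_left(B, k - a)]          # smallest b with a + b >= k
        if best is None or a + b < best:
            best = a + b
    return best                               # None if no subset reaches k

print(smallest_sum_at_least([50, 103, 85, 21, 30], 140))   # -> 153, i.e. {103, 50}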
The OP changed the question a little; the values are now integers, so here is a dynamic programming solution:
Well, not much to say: this is classical knapsack.
For each i in [1, n] we have 2 options for item i:
1. Include it in the subset; the state changes from (i, w) to (i+1, w + S[i]).
2. Skip it; the state changes from (i, w) to (i+1, w).
Every time we reach some w that's >= k, we update the answer.
Pseudo-code:
visited = Set()        // set/hashtable storing visited states
S = [...]              // the integers from the input, 1-indexed
int ats = -1;          // smallest sum >= k found so far

// there are at most n*k distinct states with w < k, so the complexity is O(n*k)
void solve(int i, int w)
{
    if (w >= k)
    {
        if (ats == -1) ats = w;
        else ats = min(ats, w);
        return;
    }
    if (i > n) return;
    if (visited.count((i, w))) return;   // we already visited this state, skip
    visited.insert((i, w));
    solve(i + 1, w + S[i]);              // take item i
    solve(i + 1, w);                     // skip item i
}

solve(1, 0);
print(ats);
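For completeness, here is a rough bottom-up translation of the same idea into Python (my own sketch, not the answerer's code): mark which sums strictly below k are reachable, and record the smallest overshoot at or above k.

def smallest_subset_sum_at_least(S, k):
    reachable = [False] * k      # reachable[w] = can some subset sum to exactly w (< k)?
    reachable[0] = True          # the empty subset
    best = None
    for x in S:
        for w in range(k - 1, -1, -1):       # iterate downwards so each item is used once
            if reachable[w]:
                t = w + x
                if t >= k:
                    best = t if best is None else min(best, t)
                else:
                    reachable[t] = True
    return best                               # None if no subset reaches k

print(smallest_subset_sum_at_least([50, 103, 85, 21, 30], 140))   # -> 153

Sums that reach or exceed k are never extended further (adding positive numbers could only make them worse), which is what keeps the table at size k and the running time at O(n*k).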

divisibility algorithm on an array of integers

You are given an array of positive integers A. You need to find a subset of A with the maximum number of elements such that for any two numbers x and y taken from the subset, gcd(x, y) is greater than 1. Print the elements of the subset.
For example, if we have n = 4 and the array is {15, 7, 10, 6}, the output needs to be {15, 10, 6}.
Is there any faster solution than backtracking?
Yes, I think there is a better way to look at it. Transform this into a graph problem: each integer is a node, and two nodes i and j are connected by an edge iff gcd(i, j) > 1.
Now you need to find the largest complete subgraph (a.k.a. a clique). A little research will show you how to implement that. It's not efficient, but it's more tractable and reliable than ad-hoc backtracking.
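A small sketch of that transformation (names are mine); the clique search below is a plain recursive enumeration, so it is still exponential in the worst case, as the next answer points out:

from math import gcd

def largest_pairwise_gcd_subset(nums):
    n = len(nums)
    # compatibility graph: adj[i][j] is True iff nums[i] and nums[j] may share a subset
    adj = [[gcd(nums[i], nums[j]) > 1 for j in range(n)] for i in range(n)]

    best = []

    def extend(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = clique[:]
        for idx, c in enumerate(candidates):
            if all(adj[c][m] for m in clique):          # c is connected to the whole clique
                extend(clique + [c], candidates[idx + 1:])

    extend([], list(range(n)))
    return [nums[i] for i in best]

print(largest_pairwise_gcd_subset([15, 7, 10, 6]))   # -> [15, 10, 6]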
This is equivalent to the Clique problem. So no, there is no efficient solution for this (unless P = NP).

Coin Change (Dynamic Programming)

We usually use the following recurrence relation for the coin change problem (P is the total amount for which we need change and d_i is an available coin denomination):
C[p] = 1 + min over all i with d_i <= p of C[p - d_i], with C[0] = 0
But can't we make it like this:
(V is the given sorted set of coins available, i and j are its subscripts with Vj being the highest value coin given)
C[p, Vi,j] = C[p, Vi,j-1]          if Vj > p
           = C[p - Vj, Vi,j] + 1   if Vj <= p
Is there anything wrong with what I wrote? Although this version is not dynamic programming, isn't it more efficient?
Consider P = 6, V = {4, 3, 1}. You would pick 4, 1, 1 instead of 3, 3, so 3 coins instead of the optimal 2.
What you've written is essentially the greedy algorithm, which works only under certain conditions (see: How to tell if greedy algorithm suffices for finding minimum coin change?).
Also, in your version you aren't actually using Vi within the recurrence, so it's just a waste of memory.
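For reference, here is a minimal sketch of the standard recurrence from the top of the question, which handles the P = 6, V = {4, 3, 1} counterexample correctly (the variable and function names are mine):

import math

def min_coins(P, coins):
    # C[p] = minimum number of coins summing to p, C[0] = 0
    C = [0] + [math.inf] * P
    for p in range(1, P + 1):
        for d in coins:
            if d <= p and C[p - d] + 1 < C[p]:
                C[p] = C[p - d] + 1
    return C[P]                   # math.inf if no combination reaches P

print(min_coins(6, [4, 3, 1]))    # -> 2 (3 + 3), where the greedy-style recurrence gives 3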

Resources