Multiple Knapsack, weight = profit - complexity-theory

I am doing research on knapsack problems, and I am currently stuck on a special type of the Multiple Knapsack Problem in which the weight of each item is equal to its profit.
I can't find any paper that says anything about the complexity of this problem. Is it NP-complete or not?
Any help will be appreciated.

I found a problem that can be reduced to mine: the Multiple Subset Sum Problem. The Multiple Subset Sum Problem (MSSP) is the selection of items from a given ground set and their packing into a given number of identical bins, such that the sum of the item weights in every bin does not exceed the bin capacity and the total weight of the packed items is as large as possible. It can easily be reduced to my problem, which proves that my problem is NP-hard.

This problem is NP-hard via a reduction from the set partition problem. In that problem, you are given a set of integers and asked whether it is possible to split it into two sets with the same sum. You can reduce it to your problem as follows: if the set has sum 2k, create two knapsacks of capacity k each and, for each number in the set, create one item whose weight and profit both equal that number. Then any way of perfectly filling both knapsacks corresponds to a partition of the original set, and vice versa. (If the sum of the numbers isn't even, just map the instance to an unsolvable instance of your knapsack problem.)
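To make the reduction concrete, here is a small sketch (illustrative, not from the original answer; the function names are mine) that builds the knapsack instance from a partition instance and brute-forces it to confirm that a partition exists exactly when both knapsacks can be filled perfectly.

    from itertools import product

    def partition_to_knapsack(numbers):
        """Map a set-partition instance to a two-knapsack instance in which
        every item's weight equals its profit."""
        total = sum(numbers)
        if total % 2:                    # odd sum: map to an unsolvable instance
            return [1], [0, 0]           # one unit item, two zero-capacity sacks
        k = total // 2
        return list(numbers), [k, k]     # items (weight = profit), capacities

    def best_profit(items, caps):
        """Brute force: assign each item to sack 0, sack 1, or leave it out."""
        best = 0
        for assign in product((None, 0, 1), repeat=len(items)):
            load = [0, 0]
            for item, sack in zip(items, assign):
                if sack is not None:
                    load[sack] += item
            if load[0] <= caps[0] and load[1] <= caps[1]:
                best = max(best, sum(load))   # profit packed = weight packed
        return best

    nums = [3, 1, 1, 2, 2, 1]            # sum 10, partitions into 5 + 5
    items, caps = partition_to_knapsack(nums)
    # A partition exists iff both sacks can be filled exactly, i.e. profit = sum.
    print(best_profit(items, caps) == sum(nums))   # True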
Hope this helps!

Related

Multiple knapsack problem with equal profit and different weight

I am researching the load-balancing problem in a 5G system, but I am not sure whether my problem is NP-complete.
The problem is:
1. we are given a set of n items and a set of m knapsacks;
2. the capacities of all knapsacks are equal;
3. the weight of item j in knapsack i is w[i][j], i.e. the weight of an item can differ from knapsack to knapsack;
4. the profits of all items are equal.
I am not trying to put all items into the fewest possible knapsacks, as in the bin packing problem.
I have seen some similar questions answered, but none is identical to this case.
In this case, the goal is to pack as many items as possible into the m knapsacks.
Is this problem NP-complete?
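For concreteness, here is a tiny brute-force sketch of the stated objective (my own illustration, with a made-up example): item j weighs w[i][j] if it is placed in knapsack i, every knapsack has the same capacity C, each item goes into at most one knapsack, and we maximize the number of items placed.

    from itertools import product

    def max_items_packed(w, C):
        """w[i][j] = weight of item j in knapsack i; all capacities equal C."""
        m, n = len(w), len(w[0])
        best = 0
        # choice[j] is None (item j unplaced) or the index of its knapsack
        for choice in product([None] + list(range(m)), repeat=n):
            load = [0] * m
            for j, i in enumerate(choice):
                if i is not None:
                    load[i] += w[i][j]
            if all(load_i <= C for load_i in load):
                best = max(best, sum(1 for i in choice if i is not None))
        return best

    # Hypothetical example: m = 2 knapsacks, n = 3 items, capacity C = 5.
    print(max_items_packed([[2, 4, 3], [3, 2, 6]], 5))   # 3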

Is there an algorithm that can solve this variation of the Knapsack Problem?

I want to solve a variation of the 0-1 knapsack problem. Just like the original problem, each item has a weight and a value, and the goal is to maximize value subject to a weight capacity. However, each item also belongs to a set, e.g. Book1(value, weight, set).
I need an algorithm that can select items given an upper limit on each individual set. For instance, set A might be (Book1, Book2, Book3), and I could only have 3 items from set A in the entire knapsack.
There would be no limits on individual items, so I could take as many copies of Book1 as I wished, but I couldn't take more items than set A allows. The knapsack would contain multiple sets, each with multiple items.
I've tried solving each set individually and then solving the problem as a whole from the per-set solutions, but that gives obviously suboptimal results.
How should I design my code so that it produces the best result?
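One way to get an exact answer is a dynamic program that treats each set as a layer with its own count limit, instead of solving the sets independently. Below is a minimal sketch of that idea (my own illustration, not a reference implementation); it assumes we maximize value under a single weight capacity, that the same item may be taken repeatedly, and that each set allows at most a fixed total number of picks.

    def grouped_knapsack(sets, capacity):
        """sets: list of (items, limit) pairs, where items is a list of
        (value, weight) tuples and limit caps the number of picks from that
        set (repeats of the same item are allowed).
        Returns the best total value achievable within the weight capacity."""
        NEG = float("-inf")
        # dp[c] = best value using weight at most c over the sets seen so far
        dp = [0] * (capacity + 1)
        for items, limit in sets:
            new_dp = dp[:]                       # taking 0 items from this set
            layer = dp                           # best value with k picks so far
            for _ in range(limit):               # allow one more pick from this set
                nxt = [NEG] * (capacity + 1)
                for c in range(capacity + 1):
                    for value, weight in items:
                        if weight <= c and layer[c - weight] != NEG:
                            nxt[c] = max(nxt[c], layer[c - weight] + value)
                for c in range(capacity + 1):
                    new_dp[c] = max(new_dp[c], nxt[c])
                layer = nxt
            dp = new_dp
        return dp[capacity]

    # Hypothetical example: at most 3 picks from set A, at most 2 from set B.
    set_a = ([(15, 4), (10, 3), (6, 2)], 3)      # (value, weight) pairs
    set_b = ([(8, 5), (3, 1)], 2)
    print(grouped_knapsack([set_a, set_b], 10))  # 36

Solving each set in isolation fails because the sets compete for the same capacity; the split of capacity between sets has to be decided jointly, which the shared dp array above does.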

Counter examples for 0-1 knapsack problem with two knapsacks

I came across the following question in this course:
Consider a variation of the Knapsack problem where we have two knapsacks, with integer capacities 𝑊1 and 𝑊2. As usual, we are given 𝑛 items with positive values and positive integer weights. We want to pick subsets 𝑆1, 𝑆2 with maximum total value such that the total weights of 𝑆1 and 𝑆2 are at most 𝑊1 and 𝑊2, respectively. Assume that every item fits in either knapsack. Consider the following two algorithmic approaches.
(1) Use the algorithm from lecture to pick a max-value feasible solution 𝑆1 for the first knapsack, and then run it again on the remaining items to pick a max-value feasible solution 𝑆2 for the second knapsack.
(2) Use the algorithm from lecture to pick a max-value feasible solution for a knapsack with capacity 𝑊1+𝑊2, and then split the chosen items into two sets 𝑆1, 𝑆2 whose total weights are at most 𝑊1 and 𝑊2, respectively.
Which of the following statements is true?
1. Algorithm (1) is guaranteed to produce an optimal feasible solution to the original problem provided 𝑊1 = 𝑊2.
2. Algorithm (1) is guaranteed to produce an optimal feasible solution to the original problem, but algorithm (2) is not.
3. Algorithm (2) is guaranteed to produce an optimal feasible solution to the original problem, but algorithm (1) is not.
4. Neither algorithm is guaranteed to produce an optimal feasible solution to the original problem.
The "algorithm from lecture" is on YouTube. https://www.youtube.com/watch?v=KX_6OF8X6HQ, which is 0-1 knapsack problem for one bag.
The correct answer to this question is option 4. This, this and this post present solutions to the problem. However, I'm having a hard time finding counterexamples showing that options 1 through 3 are incorrect. Can you cite any?
Edit:
The accepted answer doesn't provide a counterexample for option 1; see 2 knapsacks with same capacity - Why can't we just find the max-value twice for that.
(Weight; Value): (3;10), (3;10), (4;2)
capacities 7, 3
The first method puts 3+3 into the first sack; the remaining item does not fit into the second one.
(Weight; Value): (4;10), (4;10), (4;10), (2;1)
capacities 6, 6
The second method chooses (4+4+4), but this set cannot be split between the two sacks without dropping an item, while (4+2) and (4) is better.
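To double-check these, here is a small brute-force sketch (my own, not part of the original answer) that computes the true optimum for two knapsacks by trying every assignment of each item to sack 1, sack 2, or neither.

    from itertools import product

    def optimal_two_knapsacks(items, cap1, cap2):
        """items: list of (weight, value) pairs; returns the best total value."""
        best = 0
        for assign in product((None, 1, 2), repeat=len(items)):
            w1 = sum(w for (w, _), s in zip(items, assign) if s == 1)
            w2 = sum(w for (w, _), s in zip(items, assign) if s == 2)
            if w1 <= cap1 and w2 <= cap2:
                best = max(best, sum(v for (_, v), s in zip(items, assign) if s))
        return best

    # First counterexample: method (1) gets 20 (3+3 into the first sack),
    # but the optimum is 22: (3;10)+(4;2) in sack 1 and (3;10) in sack 2.
    print(optimal_two_knapsacks([(3, 10), (3, 10), (4, 2)], 7, 3))           # 22

    # Second counterexample: the combined sack of capacity 12 picks the three
    # weight-4 items (value 30), which cannot be split; the true optimum is 21.
    print(optimal_two_knapsacks([(4, 10), (4, 10), (4, 10), (2, 1)], 6, 6))  # 21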

Knapsack problem modified to unlimited elements

In the Knapsack problem, there is a list of elements, each of which has a weight and a cost.
I would like to write a dynamic-programming algorithm for the knapsack problem in which any element can be chosen more than once.
I think the following solution from GeeksForGeeks demonstrates what you want to do, with the algorithm, examples, and an implementation.
It is the min-cost knapsack, and you can add an item more than once.
NOTE: here the object weights are the indices in the array, starting at 1, i.e. w[] = {1, 2, 3, 4, 5}.
cost[] is the cost incurred when you add a particular object.
So if cost[1] = 20 then w[1] = 1 kg, if cost[2] = 10 then w[2] = 2 kg, and so on.
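Here is a minimal sketch of that min-cost unbounded knapsack (illustrative, following the note above rather than the exact GeeksForGeeks code): the item of weight i kg costs cost[i], any item can be taken repeatedly, and we want the cheapest way to reach exactly W kg.

    INF = float("inf")

    def min_cost_fill(cost, W):
        """cost[i-1] = cost of the item weighing i kg (weights are the array
        indices, starting at 1). Returns the minimum cost to reach exactly
        W kg, or INF if W cannot be reached."""
        dp = [0] + [INF] * W                       # dp[w] = cheapest way to reach w kg
        for w in range(1, W + 1):
            for i, c in enumerate(cost, start=1):  # item of weight i, cost c
                if i <= w and dp[w - i] + c < dp[w]:
                    dp[w] = dp[w - i] + c          # the same item may be reused
        return dp[W]

    # Example matching the note: cost[1] = 20 buys 1 kg, cost[2] = 10 buys 2 kg, ...
    print(min_cost_fill([20, 10, 4, 50, 100], 5))  # 14 (one 3 kg item + one 2 kg item)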
Hope this helps.

The smallest number of containers that contain all given elements

Suppose C refers to a set of containers {c1, c2, c3, ..., cn}, where each of these containers contains a finite set of integers {i1, i2, i3, ..., im}. Further, suppose that it is possible for an integer to exist in more than one container. Given a finite set of integers S = {s1, s2, s3, ..., sz}, find the size of the smallest subset of C that contains all integers in S.
Note that there could be thousands of containers each with hundreds of integers. Therefore, brute force is slow for solving this problem.
I tried to solve the problem using a greedy algorithm: each time, I select the container with the largest number of integers in the set S. But it failed!
Can anyone suggest a fast algorithm for this problem?
This is the well-known set cover problem. It is NP-hard (its decision version was one of the canonical NP-complete problems, among the 21 problems in Karp's 1972 paper), so no efficient algorithm is known. Unless you can identify some special extra structure in your instances, you will have to be satisfied with an approximate result: that is, a subset of C whose union contains S, but which is not necessarily the smallest such subset of C.
The greedy algorithm is probably your best bet: it finds a collection of containers that is at most about ln|S| + 1 times the size of the smallest such collection.
You say that you were unable to get the greedy algorithm to work. I think this is probably because you failed to implement it correctly. You describe your algorithm like this:
each time I select the container with the largest number of integers in the set S
but the rule in the usual greedy algorithm is to select at each stage the container with the largest number of integers in the set S that are not in any container selected so far.
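A minimal sketch of that corrected greedy rule (illustrative, with made-up data):

    def greedy_cover(containers, S):
        """containers: dict mapping a container name to its set of integers.
        Returns a list of container names whose union covers S; this is the
        greedy approximation, not necessarily the smallest such subset."""
        uncovered = set(S)
        chosen = []
        while uncovered:
            # Pick the container covering the most still-uncovered integers.
            best = max(containers, key=lambda c: len(containers[c] & uncovered))
            if not containers[best] & uncovered:
                raise ValueError("S cannot be covered by these containers")
            chosen.append(best)
            uncovered -= containers[best]
        return chosen

    # Hypothetical example:
    containers = {"c1": {1, 2, 3}, "c2": {3, 4}, "c3": {4, 5, 6}}
    print(greedy_cover(containers, {1, 2, 4, 5}))   # ['c1', 'c3']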
