Sort arrays of weights and values on the basis of the value/weight ratio - c++14

While solving the knapsack problem with a greedy algorithm, I need to sort the weight and value arrays on the basis of the value/weight ratio.
Suppose weight = (5, 6, 12, 9)
values = (5, 3, 4, 4)
Each weight[i] has the corresponding value value[i].
We have to sort the weights by their value/weight ratio, and reorder the values so that each value stays with its respective weight.
How can I solve this problem in the best possible way?
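One common way to do this in C++14 is to sort an index array with std::sort and a comparator on the value/weight ratio, then rebuild both arrays in that order. The sketch below uses the example data from the question and is only an illustration; the comparator cross-multiplies to avoid floating-point division.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Example data from the question.
    std::vector<int> weight = {5, 6, 12, 9};
    std::vector<int> value  = {5, 3, 4, 4};

    // Sort indices by descending value/weight ratio, then reorder both
    // arrays using that index order so every pair stays together.
    std::vector<std::size_t> idx(weight.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](std::size_t a, std::size_t b) {
        // value[a]/weight[a] > value[b]/weight[b], cross-multiplied
        // to avoid floating-point division.
        return value[a] * weight[b] > value[b] * weight[a];
    });

    std::vector<int> sortedWeight, sortedValue;
    for (std::size_t i : idx) {
        sortedWeight.push_back(weight[i]);
        sortedValue.push_back(value[i]);
    }

    for (std::size_t i = 0; i < idx.size(); ++i)
        std::cout << sortedWeight[i] << ' ' << sortedValue[i] << '\n';
}
```

Alternatively, zip the two arrays into a std::vector<std::pair<int,int>> and sort that directly with the same comparator.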

Related

Algorithm for the unbounded knapsack problem with possible negative weights?

I have an unbounded knapsack problem with possible negative weights: there are k items, with weights x1, x2, ..., xk (each xi can be positive or negative), and each item may be used any number of times. The bag must hold exactly weight W > 0. How do I reach weight W exactly using as few items as possible? If there is no solution, return -1.
What is the algorithm to solve this problem?
First, we cannot simply drop the negative weights. For example, with x_1 = 3, x_2 = -1 and W = 2, dropping the negative item leaves no solution, yet n_1 = 1, n_2 = 1 gives exactly W = 2.
The naive idea of dynamic programming / recursion with memoization cannot handle negative weights with unlimited copies:
dp[i][w] = minimum number of items to fill weight w by using item 1, 2, ..., i
dp[i][w] = min(dp[i-1][w], dp[i][w - xi] + 1)
Since xi can be negative and each item can be used arbitrarily often, there can be infinitely many states dp[i][w].
You can do a breadth-first search on the graph of achievable total weights, where there exists an edge from weight w to weight v if there is an item with weight v-w. Start at 0 and find the shortest path to W.
The trick is that you don't need to consider achievable weights less than -max(|xi|) or greater than W + max(|xi|). You don't need to consider anything else, because for every sequence of item weights that adds up to W, there is an order in which you can perform the additions so that the intermediate sum never goes outside those bounds.
Assuming that the weights are integers, this makes the graph finite.
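A sketch of that BFS in C++, assuming integer weights: every achievable sum is offset by M = max|x_i| so it can index a distance array, and the search stays inside [-M, W + M] as argued above. The function and variable names are illustrative.

```cpp
#include <algorithm>
#include <cstdlib>
#include <queue>
#include <vector>

// Minimum number of items (unlimited copies of each) whose weights sum to
// exactly W, or -1 if impossible. BFS over achievable intermediate sums,
// restricted to [-M, W + M] with M = max |x_i|.
int minItemsExactWeight(const std::vector<int>& x, int W) {
    int M = 0;
    for (int xi : x) M = std::max(M, std::abs(xi));
    if (M == 0) return (W == 0) ? 0 : -1;     // only zero-weight items

    const int lo = -M, hi = W + M;            // allowed range of sums
    std::vector<int> dist(hi - lo + 1, -1);   // #items needed per sum
    std::queue<int> q;

    dist[0 - lo] = 0;                         // start from the empty sum
    q.push(0);
    while (!q.empty()) {
        int w = q.front(); q.pop();
        if (w == W) return dist[w - lo];      // shortest path found
        for (int xi : x) {
            int v = w + xi;
            if (v < lo || v > hi || dist[v - lo] != -1) continue;
            dist[v - lo] = dist[w - lo] + 1;
            q.push(v);
        }
    }
    return -1;                                // W is not reachable
}
```

Each sum is visited at most once, so the running time is O((W + 2*max|x_i|) * k).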

Is this a non-polynomial problem? If not, how can it be solved in polynomial time?

The problem:
Write a function that has 2 input parameters:
items: a list/vector of 2-tuples of positive integers (i.e. each component >= 1)
target: a positive integer (i.e. >= 1)
Find a subset of tuples such that:
The sum of the first tuple elements of the subset is greater than or equal to the input target.
The sum of the second tuple elements of the subset is minimal; let's call that minimal sum best.
The function should just return best. Also, it is guaranteed that there is a solution. In other words, the sum of all the first tuple elements is always greater than or equal to the target.
Here is the pseudocode signature of such a function:
(items: List<(int, int)>, target: int) -> int
And here are some examples...
Example A:
items = [(25,50), (49,51), (25,50), (1,100)]
target = 50
answer = 100
Example B:
items = [(25,50), (49,51), (25,50), (1,5)]
target = 50
answer = 56
Here's my naive exponential-time solution:
Go through all possible subsets (hence exponential time)
For each subset, compute the sum of the first tuple elements
If that sum is greater than or equal to the target, then compute the sum of the second tuple elements
If that new sum is the smallest found so far, update the minimum
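A C++ sketch of that naive enumeration, assuming the item count is small enough for a 32-bit mask (the function name is illustrative):

```cpp
#include <climits>
#include <cstddef>
#include <utility>
#include <vector>

// Exponential-time baseline: try every subset via a bitmask.
// Assumes items.size() < 32 so the mask fits in an unsigned int.
int bestByBruteForce(const std::vector<std::pair<int, int>>& items, int target) {
    int best = INT_MAX;
    for (unsigned mask = 0; mask < (1u << items.size()); ++mask) {
        long long firstSum = 0, secondSum = 0;
        for (std::size_t i = 0; i < items.size(); ++i) {
            if (mask & (1u << i)) {
                firstSum  += items[i].first;
                secondSum += items[i].second;
            }
        }
        if (firstSum >= target && secondSum < best)
            best = static_cast<int>(secondSum);
    }
    return best;   // a solution is guaranteed to exist per the problem statement
}
```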
I also tried to determine whether there is a mathematical property of the problem that allows a shortcut, e.g. going through the items by the biggest "first element divided by second element" ratio (best bang for your buck). However, as Example A demonstrates, this is not valid in all cases.
Is this a non-polynomial problem? If not, how can it be solved in polynomial time?
This is a 0-1 knapsack problem: https://en.wikipedia.org/wiki/Knapsack_problem
The tuples are items, the first tuple elements are item values, the second tuple elements are item weights. The classic knapsack decision problem asks "is best less than some particular limit?"
As such, this problem is NP-hard and has no known polynomial-time solution.
The normal dynamic programming solution can be adapted to run in O(items.length * best). The easiest way is to use the normal DP method, first with a small limit on best, and then doubling that limit until the target value is achievable.
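A pseudo-polynomial C++ sketch of that adaptation (names are illustrative). For simplicity it bounds the cost axis by the total of all second elements instead of using the doubling trick; dp[c] holds the maximum sum of first elements achievable with second elements summing to at most c, and the answer is the smallest c whose dp value reaches the target.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Returns the minimal sum of second elements over subsets whose first
// elements sum to at least target (a solution is assumed to exist).
int minSecondSum(const std::vector<std::pair<int, int>>& items, int target) {
    int totalCost = 0;
    for (const auto& it : items) totalCost += it.second;

    // Classic 0/1 knapsack: dp[c] = best first-element sum with cost <= c.
    std::vector<int> dp(totalCost + 1, 0);
    for (const auto& it : items)
        for (int c = totalCost; c >= it.second; --c)
            dp[c] = std::max(dp[c], dp[c - it.second] + it.first);

    for (int c = 0; c <= totalCost; ++c)
        if (dp[c] >= target) return c;
    return totalCost;   // unreachable given the problem's guarantee
}
```

On Example B this returns 56, matching the expected answer.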

Algorithm For Partition An Array into Subsets With Minimum Total Variance

I have an array of floating point numbers and I would like to partition the array into two subsets, such that their total variance is minimized.
The total variance is defined in the following:
var = (var_1 * n_1 + var_2 * n_2)/(n_1 + n_2)
where n_1 and n_2 are number of elements on the left/right respectively, and var_1 and var_2 are variance on the left/right respectively.
My question is: is there any efficient algorithm for finding the global minimum of the total variance? The algorithm should output the two subsets, each containing the elements of the corresponding group.
Moreover, suppose each element is a tuple (x, y), and instead of the variance I want to minimize the total covariance of the left and right groups, defined in a similar way as above. Is there a general algorithm for dealing with such partition problems? I guess this should be harder, because all the algorithms I can think of require sorting the array, and there is no obvious comparator for sorting tuples here.
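No answer is recorded for this question, but for the scalar case one observation may help: since var = (SSE_1 + SSE_2) / (n_1 + n_2), minimizing the total variance is the same as minimizing the two within-group sums of squared deviations, and an optimal two-group split of scalars can be taken to be contiguous once the array is sorted (the standard 1-D two-cluster argument). Under that assumption, the following sketch sorts the data and scans every split point with prefix sums; the names are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch: split the sorted data at every prefix/suffix boundary and keep
// the split that minimizes SSE_left + SSE_right, which is equivalent to
// minimizing the total variance defined above. Assumes an optimal
// two-group partition is contiguous after sorting, and a.size() >= 2.
std::pair<std::vector<double>, std::vector<double>>
minVariancePartition(std::vector<double> a) {
    std::sort(a.begin(), a.end());
    const std::size_t n = a.size();
    if (n < 2) return {a, {}};

    // Prefix sums of x and x^2 give each group's SSE in O(1).
    std::vector<double> s(n + 1, 0.0), s2(n + 1, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        s[i + 1]  = s[i]  + a[i];
        s2[i + 1] = s2[i] + a[i] * a[i];
    }
    auto sse = [&](std::size_t l, std::size_t r) {   // SSE of a[l..r)
        double cnt = static_cast<double>(r - l);
        double sum = s[r] - s[l];
        return (s2[r] - s2[l]) - sum * sum / cnt;    // sum x^2 - (sum x)^2 / n
    };

    std::size_t bestSplit = 1;
    double bestCost = sse(0, 1) + sse(1, n);
    for (std::size_t k = 2; k < n; ++k) {            // k = size of the left group
        double cost = sse(0, k) + sse(k, n);
        if (cost < bestCost) { bestCost = cost; bestSplit = k; }
    }
    std::vector<double> left(a.begin(), a.begin() + bestSplit);
    std::vector<double> right(a.begin() + bestSplit, a.end());
    return {left, right};
}
```

The tuple/covariance variant is harder precisely because this contiguity argument relies on the natural ordering of scalars, as the question already notes.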

length arrangement with probability and cost

Consider a set of lengths, each associated with a probability, e.g.
X={a1=(100,1/4),a2=(500,1/4),a3=(200,1/2)}
Obviously, the sum of all the probabilities = 1.
Arrange the lengths together on a line one after the other from a starting point.
For example: {a2,a1,a3} in that order from start to finish.
Define the cost of an element a_i as the total length from the starting point to the end of that element, multiplied by its probability.
So from the previous arrangement:
cost(a2) = (500)*(1/4)
cost(a1) = (500+100)*(1/4)
cost(a3) = (500+100+200)*(1/2)
Define the total cost as the sum of all the individual costs, e.g. cost(X) = cost(a2) + cost(a1) + cost(a3). Give an algorithm that finds an arrangement that minimizes cost(X).
My thoughts:
This looks like a greedy-algorithm problem, since the last element in the arrangement always contributes the full total length multiplied by its probability, but I can't think of a heuristic that accomplishes this. It goes without saying that sorting by probability alone or by length alone will not work.
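No answer is recorded here either, but the cost of any arrangement is simple to compute, and the setting matches the classical "minimize weighted completion time" scheduling problem, whose standard exchange argument places items with the largest probability/length ratio first. The sketch below is offered under that reading; the struct and names are illustrative.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

struct Item { double length, prob; };   // illustrative names

// Total cost of a left-to-right arrangement: each item contributes
// (cumulative length up to and including it) * its probability.
double totalCost(const std::vector<Item>& order) {
    double cum = 0.0, cost = 0.0;
    for (const Item& it : order) {
        cum += it.length;
        cost += cum * it.prob;
    }
    return cost;
}

int main() {
    std::vector<Item> x = {{100, 0.25}, {500, 0.25}, {200, 0.5}};

    // Candidate rule from the weighted-completion-time exchange argument:
    // larger prob/length ratio goes earlier; cross-multiplied to avoid division.
    std::sort(x.begin(), x.end(), [](const Item& a, const Item& b) {
        return a.prob * b.length > b.prob * a.length;
    });

    std::cout << totalCost(x) << '\n';   // 375 for the example above
}
```

For the example, this beats the arrangement {a2, a1, a3} shown above, whose cost is 675.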

Variation on knapsack - minimum total value exceeding 'W'

Given the usual n sets of items (each unlimited, say), with weights and values:
w1, v1
w2, v2
...
wn, vn
and a target weight W, I need to choose items such that the total
weight is at least W and the total value is minimized.
This looks to me like a variation (or in some sense converse) of the
integer/unbounded knapsack problem. Any help in formulating the DP algorithm
would be much appreciated!
Let TOT = w1 + w2 + ... + wn.
In this answer I will use a second bag: I'll refer to the original as the 'bag' and to the additional one as the 'knapsack'.
Fill the bag with all the elements, then start excluding elements from it, 'filling' up the knapsack of capacity at most TOT - W with the highest possible value. You've got yourself a regular knapsack problem, with the same elements and a capacity of TOT - W.
Proof:
Assume you have a best solution with k elements e_i1, e_i2, ..., e_ik. Then the bag weighs at least W, which means the excluded items fit in a knapsack of capacity at most TOT - W. Also, since the bag's value is minimized subject to its weight being at least W, the value of the excluded items is maximized subject to fitting in TOT - W; if it were not maximized, there would be a better bag of weight at least W with a smaller value.
The other direction [assuming you have a maximal excluded knapsack] is almost identical.
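A C++ sketch of that exclusion argument, assuming a single copy of each item (which is what the TOT = w1 + ... + wn construction uses); the function name is illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Maximize the value of the items we exclude, subject to their weight
// fitting in TOT - W; the items kept in the bag then weigh at least W
// and have the minimum possible total value.
int minValueWithWeightAtLeastW(const std::vector<int>& w,
                               const std::vector<int>& v, int W) {
    int TOT = 0, totalValue = 0;
    for (std::size_t i = 0; i < w.size(); ++i) {
        TOT += w[i];
        totalValue += v[i];
    }
    const int cap = TOT - W;              // capacity of the "excluded" knapsack
    if (cap < 0) return -1;               // even taking everything is too light

    std::vector<int> dp(cap + 1, 0);      // classic 0/1 knapsack on the excluded items
    for (std::size_t i = 0; i < w.size(); ++i)
        for (int c = cap; c >= w[i]; --c)
            dp[c] = std::max(dp[c], dp[c - w[i]] + v[i]);

    return totalValue - dp[cap];          // value of the items kept in the bag
}
```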
Not too sure, but this might work: consider the values to be the negatives of the values you have. The DP formulation would then try to find the maximum value for that weight, which in this case is the least negative value. Once you have that value, negate it again for the final answer.
