Printing Items that are in sack in knapsack - algorithm

Suppose you are a thief and you have invaded a house. Inside you found the following items:
A vase that weighs 3 pounds and is worth 50 dollars.
A silver nugget that weighs 6 pounds and is worth 30 dollars.
A painting that weighs 4 pounds and is worth 40 dollars.
A mirror that weighs 5 pounds and is worth 10 dollars.
The solution to this knapsack problem with a capacity of 10 pounds is 90 dollars.
The table built by dynamic programming is:
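(The table itself did not survive here; reconstructed from the four items above, with rows f[i] = best value using the first i items and columns for capacities w = 0..10, it would be:)

               w:  0   1   2   3   4   5   6   7   8   9  10
i=0 (no items)     0   0   0   0   0   0   0   0   0   0   0
i=1 vase           0   0   0  50  50  50  50  50  50  50  50
i=2 nugget         0   0   0  50  50  50  50  50  50  80  80
i=3 painting       0   0   0  50  50  50  50  90  90  90  90
i=4 mirror         0   0   0  50  50  50  50  90  90  90  90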
Now I want to know which elements I put in my sack using this table. How do I backtrack?

From your DP table we know f[i][w] = the maximum total value of a subset of items 1..i that has total weight less than or equal to w.
We can use the table itself to restore the optimal packing:
def reconstruct(i, w):  # reconstruct the subset of items 1..i with weight <= w
                        # and value f[i][w]
    if i == 0:
        # base case
        return set()
    if f[i][w] > f[i-1][w]:
        # we have to take item i
        return {i} | reconstruct(i-1, w - weight_of_item(i))
    else:
        # we don't need item i
        return reconstruct(i-1, w)

I have an iterative algorithm inspired by @NiklasB. that works when a recursive algorithm would hit a recursion limit.
def reconstruct(i, w, kp_soln, weight_of_item):
    """
    Reconstruct subset of items i with weights w. The two inputs
    i and w are taken at the point of optimality in the knapsack soln
    In this case I just assume that i is some number from a range
    0,1,2,...n
    """
    recon = set()
    # assuming our kp soln converged, we stopped at the ith item, so
    # start here and work our way backwards through all the items in
    # the list of kp solns. If an item was deemed optimal by kp, then
    # put it in our bag, otherwise skip it.
    for j in range(0, i+1)[::-1]:
        cur_val = kp_soln[j][w]
        prev_val = kp_soln[j-1][w]
        if cur_val > prev_val:
            recon.add(j)
            w = w - weight_of_item[j]
    return recon

Using a loop:
for (int n = N, w = W; n > 0; n--)
{
    if (sol[n][w] != sol[n - 1][w])   // the value changed, so item n was taken
    {
        selected[n] = 1;
        w = w - wt[n];
    }
    else
        selected[n] = 0;
}
System.out.print("\nItems with value ");
for (int i = 1; i < N + 1; i++)
    if (selected[i] == 1)
        System.out.print(val[i] + " ");

Related

Pseudo code Algorithm for Knapsack Problem with Two Constraints

I'm trying to solve the following knapsack problem with two constraints.
What we know:
- Total number of items
- Weight of each item
- Value of each item
- Whether the item is fragile or not (true/false)
Constraints:
- Maximal weight of the knapsack
- Maximum number of fragile items the knapsack can contain
Can anyone give me some advice about the algorithm I should use, pseudocode, or a good article?
UPDATE:
An important thing I forgot to mention is that I also need to know which items I put in the bag.
It looks like a modification to knapsack would solve it.
Let's say we have N items, the maximum knapsack weight is W, and the maximum number of fragile items is F.
Let's define our DP table as a 3-dimensional array dp[N+1][W+1][F+1].
Now dp[n][w][f] stores the maximum value we can get if we fill the knapsack with some subset of the first
n items, having weight exactly w and exactly f fragile items.
From dp[n][w][f] we can move to these states:
dp[n+1][w][f] if we skip the (n+1)-th item
dp[n+1][w + weight(n+1)][f + isFragile(n+1)] if we take the (n+1)-th item
So the pseudocode:
dp[N+1][W+1][F+1] // memo table, initially filled with -1
int solve(n, w, f)
{
    if (n > N) return 0;
    if (dp[n][w][f] != -1) return dp[n][w][f];
    dp[n][w][f] = solve(n+1, w, f); // skip item
    if (w + weight(n) <= W && f + isFragile(n) <= F)
        dp[n][w][f] = max(dp[n][w][f], value(n) + solve(n+1, w + weight(n), f + isFragile(n)));
    return dp[n][w][f];
}
print(solve(1, 0, 0))
Getting the actual subset is also not difficult:
vector<int> getSolution(n, w, f)
{
    int optimalValue = solve(n, w, f);
    vector<int> answer; // just some dynamic array / arrayList
    while (n <= N)
    {
        if (solve(n+1, w, f) == optimalValue) n++; // if after skipping the item we can still get the optimal answer, we just skip it
        else // otherwise we can't, so the current item must be taken
        {
            int new_w = w + weight(n);
            int new_f = f + isFragile(n);
            answer.push_back(n); // so we just save its index, and update all values
            optimalValue -= value(n);
            n++;
            w = new_w;
            f = new_f;
        }
    }
    return answer;
}
Non-recursive approach
# N: number of items, W: max weight of knapsack, F: max amount of fragile items
# if item doesn't fit, then skip
# if it fits, check if it's worth to take it
dp[N+1][W+1][F+1] # filled with zeros
benefit, weight, fragility = [] # arrays filled with respective item properties
for n in range(1, N + 1):
    for w in range(1, W + 1):
        for f in range(0, F + 1):
            if weight[n-1] > w or fragility[n-1] > f:
                dp[n][w][f] = dp[n-1][w][f]
            else:
                dp[n][w][f] = max(dp[n-1][w][f],
                                  dp[n-1][w-weight[n-1]][f-fragility[n-1]] + benefit[n-1])
print(dp[N][W][F]) # prints optimal knapsack value
Listing items
knapsack = [] # indexes of picked items
n = N
w = W
f = F
while n > 0:
    if dp[n][w][f] != dp[n-1][w][f]:
        knapsack.append(n-1)
        w -= weight[n-1]
        f -= fragility[n-1]
    n -= 1
print(knapsack) # prints item indexes

Finding all the combinations to sum a set of coins to a certain number

I am given an array and I have to find the combinations that reach a targeted sum.
For example:
A[] = {1,2,3};
S = 5;
Total combinations = {1,1,1,1,1}, {2,3}, {3,2}, {1,1,3}, {1,3,1}, {3,1,1} and the other possible ones.
I know it sounds like the coin change problem, but the catch is how to count ordered combinations, i.e. {2,3} and {3,2} are 2 different solutions.
In the original coin change problem, you "choose" an arbitrary coin - and "guess" if it is or is not in the solution, this is done because the order is not important.
Here, you will have to iterate all possibilities for "which coin is first", until you are done:
D(0) = 1
D(x) = 0 | x < 0
D(x) = sum { D(x-coins[0]), D(x-coins[1]), ..., D(x-coins[n-1]) }
Note that for each step, you are giving all possibilities for the choosing the next coin, and moving on. At the end, you sum up all the solutions, for all possibilities to place each coin at the head of the solution.
Complexity of this solution using DP is O(n*S), where n is the number of coins and S is the desired sum.
Matlab code (written in imperative style; it's what I had open, sorry it's MATLAB and not a more common language like Java or C):
function [ n ] = make_change( coins, x )
    D = zeros(x,1);
    for k = 1:x
        for t = 1:length(coins)
            curr = k - coins(t);
            if curr > 0
                D(k) = D(k) + D(curr);
            elseif curr == 0
                D(k) = D(k) + 1;
            end
        end
    end
    n = D(x);
end
Invoking will yield:
>> make_change([1,2,3],5)
ans =
13
Which is correct, since all possibilities are [1,1,1,1,1],[1,1,1,2]*4, [1,1,3]*3,[1,2,2]*3,[2,3]*2 = 13
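For readers who prefer Python, the same counting DP might look like this (my own translation of the MATLAB above, not part of the original answer):
def make_change(coins, x):
    # D[k] = number of ordered ways to write k as a sum of the given coins
    D = [1] + [0] * x              # D[0] = 1: the empty sum
    for k in range(1, x + 1):
        for c in coins:
            if k - c >= 0:
                D[k] += D[k - c]
    return D[x]

print(make_change([1, 2, 3], 5))   # -> 13, matching the MATLAB result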

Find a location in a matrix so that the cost of everyone moving to that location is smallest

There is an m×n matrix. Several groups of people are located at certain spots. In the following example, there are three groups, and the number 4 indicates there are four people in that group. Now we want to find a meeting point in the matrix so that the total cost of all groups moving to that point is minimal. As for how to compute the cost of moving one group to another point, please see the following example.
Group1: (0, 1), 4
Group2: (1, 3), 3
Group3: (2, 0), 5
. 4 . .
. . . 3
5 . . .
If all three of these groups move to (1, 1), the cost is:
4*((1-0)+(1-1)) + 5*((2-1)+(1-0))+3*((1-1)+(3-1))
My idea is:
Firstly, this two-dimensional problem can be reduced to two one-dimensional problems.
In the one-dimensional problem, I can prove that the best spot must be at one of the groups.
In this way, I can give an O(G^2) algorithm (G is the number of groups).
Using iterator's example for illustration:
{(-100,0,100),(100,0,100),(0,1,1)},(x,y,population)
for x, {(-100,100),(100,100),(0,1)}, 0 is the best.
for y, {(0,100),(0,100),(1,1)}, 0 is the best.
So it's (0, 0)
Is there any better solution for this problem?
I like the idea of noticing that the objective function can be decomposed to give the sum of two one-dimensional problems. The remaining problems look a lot like the weighted median to me (see the optimization problem it solves at http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/R.basic/html/weighted.median.html, or consider what happens to the objective function as you move away from the weighted median).
The URL above seems to say the weighted median takes time n log n, which I guess means that you could attain their claim by sorting the data and then doing a linear pass to work out the weighted median. The numbers you have to sort are in the range [0, m] and [0, n], so you could in theory do better if m and n are small, or - of course - if you are given the data pre-sorted.
Come to think of it, I don't see why you shouldn't be able to find the weighted median with a linear-time randomized algorithm similar to the one used to find the median (http://en.wikibooks.org/wiki/Algorithms/Randomization#find-median): repeatedly pick a random element, use it to partition the remaining items, and work out which half the weighted median should be in. That gives you expected linear time.
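A minimal sketch of that sort-then-scan weighted median in Python (my own illustration; each axis is handled independently, with groups given as (coordinate, weight) pairs):
def weighted_median(points):
    # points: list of (coordinate, weight) pairs for one axis
    points = sorted(points)                  # O(G log G)
    total = sum(w for _, w in points)
    running = 0
    for coord, w in points:
        running += w
        if 2 * running >= total:             # first coordinate covering half the weight
            return coord

# x axis of the example in the question: groups of 4, 3 and 5 people at x = 0, 1, 2
print(weighted_median([(0, 4), (1, 3), (2, 5)]))   # -> 1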
I think this can be solved in O(max(n, m)) time and O(max(n, m)) space.
We have to find the median of the x coordinates and the median of the y coordinates of the k points, and the answer will be (x_median, y_median).
The assumption is that this function takes the following inputs:
Total number of points: int k = 4+3+5 = 12;
An array of coordinates:
struct coord_t c[12] = {(0,1),(0,1),(0,1),(0,1),(1,3),(1,3),(1,3),(2,0),(2,0),(2,0),(2,0),(2,0)};
int size = n > m ? n : m;
Let the input of the coordinates be an array of coordinates, coord_t c[k]:
struct coord_t {
    int x;
    int y;
};
1. My idea is to create an array of size = n>m?n:m;
2. int array[size] = {0} ; //initialize all the elements in the array to zero
for(i=0;i<k;i++)
{
array[c[i].x] += 1;
count++;
}
int tempCount =0;
for(i=0;i<k;i++)
{
if(array[i]!=0)
{
tempCount += array[i];
}
if(tempCount >= count/2)
{
break;
}
}
int x_median = i;
//similarly with y coordinate.
int array[size] = {0} ; //initialize all the elements in the array to zero
for(i=0;i<k;i++)
{
array[c[i].y] += 1;
count++;
}
int tempCount =0;
for(i=0;i<k;i++)
{
if(array[i]!=0)
{
tempCount += array[i];
}
if(tempCount >= count/2)
{
break;
}
}
int y_median = i;
coord_t temp;
temp.x = x_median;
temp.y= y_median;
return temp;
Sample Working code for MxM matrix with k points:
/*
Problem:
Given an MxM grid and N people placed at random positions on the grid,
find the optimal meeting point of all the people.

Answer:
Find the median of all the x coordinates of the positions of the people.
Find the median of all the y coordinates of the positions of the people.
*/
#include<stdio.h>
#include<stdlib.h>
typedef struct coord_struct {
int x;
int y;
}coord_struct;
typedef struct distance {
int count;
}distance;
coord_struct toFindTheOptimalDistance (int N, int M, coord_struct input[])
{
coord_struct z ;
z.x=0;
z.y=0;
int i,j;
distance * array_dist;
array_dist = (distance*)(malloc(sizeof(distance)*M));
for(i=0;i<M;i++)
{
array_dist[i].count =0;
}
for(i=0;i<N;i++)
{
array_dist[input[i].x].count +=1;
printf("%d and %d\n",input[i].x,array_dist[input[i].x].count);
}
j=0;
for(i=0;i<=N/2;)
{
printf("%d\n",i);
if(array_dist[j].count !=0)
i+=array_dist[j].count;
j++;
}
printf("x coordinate = %d",j-1);
int x= j-1;
for(i=0;i<M;i++)
array_dist[i].count =0;
for(i=0;i<N;i++)
{
array_dist[input[i].y].count +=1;
}
j=0;
for(i=0;i<N/2;)
{
if(array_dist[j].count !=0)
i+=array_dist[j].count;
j++;
}
int y =j-1;
printf("y coordinate = %d",j-1);
z.x=x;
z.y =y;
return z;
}
int main()
{
coord_struct input[5];
input[0].x =1;
input[0].y =2;
input[1].x =1;
input[1].y =2;
input[2].x =4;
input[2].y =1;
input[3].x = 5;
input[3].y = 2;
input[4].x = 5;
input[4].y = 2;
int size = 6; /* grid dimension; m and n were undefined here, so a value larger than the biggest coordinate (5) is assumed */
coord_struct x = toFindTheOptimalDistance(5,size,input);
}
Your algorithm is fine: divide the problem into two one-dimensional problems. The time complexity is O(n log n).
You only need to split every group of people into single people, so every move to the left, right, up or down costs 1 for each person. We only need to find where the ((n + 1) / 2)-th person stands, for the row and the column respectively.
Consider your sample: {(-100,0,100),(100,0,100),(0,1,1)}.
Let's take the row numbers out. It's {(-100,100),(100,100),(0,1)}, which means 100 people stand at -100, 100 people stand at 100, and 1 person stands at 0.
Sort it by x, and it's {(-100,100),(0,1),(100,100)}. There are 201 people in total, so we only need to set the location where the 101st person stands. That's 0, and that's the answer.
The column number is found with the same algorithm. {(0,100),(0,100),(1,1)} is already sorted. The 101st person is at 0, so the answer for the column is also 0.
The answer is (0,0).
I can think of an O(n) solution for the one-dimensional problem, which in turn means you can solve the original problem in O(n+m+G).
Suppose people are standing like this: a_0, a_1, ..., a_(n-1), with a_0 people at spot 0, a_1 at spot 1, and so on. Then the solution in pseudocode is:
cur_result = sum(i*a_i, i = 1..n-1)
cur_r = sum(a_i, i = 1..n-1)
cur_l = a_0
for i = 1:n-1
    cur_result = cur_result - cur_r + cur_l
    cur_r = cur_r - a_i
    cur_l = cur_l + a_i
end
You need to find the point where cur_result is minimal.
So you need O(n) + O(m) for solving the 1d problems plus O(G) to build them, meaning the total complexity is O(n+m+G).
Alternatively, you can solve the 1d problem in O(G*log G) (or O(G) if the data is sorted) using the same idea. Choose the variant based on the expected number of groups.
You can solve this in O(G log G) time by reducing it to two one-dimensional problems, as you mentioned.
As to how to solve it in one dimension: just sort the groups, then go through them one by one and calculate the cost of moving to that point. This calculation can be done in O(1) time for each point (see the sketch below).
You can also avoid the log G component if your x and y coordinates are small enough for you to use bucket/radix sort.
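A sketch of that one-dimensional scan (my own Python illustration, not the answerer's code; groups are (position, count) pairs):
def best_meeting_point_1d(groups):
    groups = sorted(groups)                          # sort by position
    total = sum(c for _, c in groups)
    # cost of everyone walking to the first group's position
    cost = sum((pos - groups[0][0]) * c for pos, c in groups)
    best_pos, best_cost = groups[0][0], cost
    left = groups[0][1]                              # people at or left of the current spot
    right = total - left                             # people strictly to the right
    for (prev, _), (pos, cnt) in zip(groups, groups[1:]):
        gap = pos - prev
        cost += gap * left - gap * right             # shift the meeting point right by `gap`
        left += cnt
        right -= cnt
        if cost < best_cost:
            best_pos, best_cost = pos, cost
    return best_pos, best_cost

print(best_meeting_point_1d([(-100, 100), (100, 100), (0, 1)]))   # -> (0, 20000)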
Inspired by kilotaras's idea, it seems that there is an O(G) solution for this problem.
Since everyone agrees that the two-dimensional problem can be reduced to two one-dimensional problems, I will not repeat that. I will just focus on how to solve the one-dimensional problem in O(G).
Suppose people are standing like this: a[0], a[1], ..., a[n-1], with a[i] people standing at spot i. There are G spots having people (G <= n). Assume these G spots are g[1], g[2], ..., g[G], where each g[i] is in [0, ..., n-1]. Without loss of generality, we can also assume that g[1] < g[2] < ... < g[G].
It's not hard to prove that the optimal spot must be one of these G spots. I will skip the proof here and leave it as an exercise if you are interested.
Given the above observation, we can just compute the cost of moving to the spot of every group and then choose the minimal one. There is an obvious O(G^2) algorithm to do this.
But using kilotaras's idea, we can do it in O(G) (no sorting).
cost[1] = sum((g[i]-g[1])*a[g[i]], i = 2,...,G) // the cost of moving to the spot of the first group. This step is O(G).
cur_r = sum(a[g[i]], i = 2,...,G) // how many people are on the right side of the second group, including the second group. This step is O(G).
cur_l = a[g[1]] // how many people are on the left side of the second group, not including the second group.
for i = 2:G
    gap = g[i] - g[i-1];
    cost[i] = cost[i-1] - cur_r*gap + cur_l*gap;
    if i != G
        cur_r = cur_r - a[g[i]];
        cur_l = cur_l + a[g[i]];
    end
end
The minimum of cost[i] is the answer.
Using the example 5 1 0 3 to illustrate the algorithm.
In this example,
n = 4, G = 3.
g[1] = 0, g[2] = 1, g[3] = 3.
a[0] = 5, a[1] = 1, a[2] = 0, a[3] = 3.
(1) cost[1] = 1*1+3*3 = 10, cur_r = 4, cur_l = 5.
(2) cost[2] = 10 - 4*1 + 5*1 = 11, gap = g[2] - g[1] = 1, cur_r = 4 - a[g[2]] = 3, cur_l = 6.
(3) cost[3] = 11 - 3*2 + 6*2 = 17, gap = g[3] - g[2] = 2.

How would you look at developing an algorithm for this hotel problem?

There is a problem I am working on for a programming course and I am having trouble developing an algorithm to suit the problem. Here it is:
You are going on a long trip. You start on the road at mile post 0. Along the way there are n
hotels, at mile posts a1 < a2 < ... < an, where each ai is measured from the starting point. The
only places you are allowed to stop are at these hotels, but you can choose which of the hotels
you stop at. You must stop at the final hotel (at distance an), which is your destination.
You'd ideally like to travel 200 miles a day, but this may not be possible (depending on the spacing
of the hotels). If you travel x miles during a day, the penalty for that day is (200 - x)^2. You want
to plan your trip so as to minimize the total penalty, that is, the sum, over all travel days, of the
daily penalties.
Give an efficient algorithm that determines the optimal sequence of hotels at which to stop.
So, my intuition tells me to start from the back, checking penalty values, then somehow match them going back in the forward direction (resulting in an O(n^2) runtime, which is efficient enough for the situation).
Does anyone see a way to make this idea work, or have any ideas on possible implementations?
If x is a marker number, a_x is the mileage to that marker, and p_x is the minimum penalty to get to that marker, then you can calculate p_n for marker n if you know p_m for all markers m before n.
To calculate p_n, find the minimum of p_m + (200 - (a_n - a_m))^2 over all markers m where a_m < a_n and (200 - (a_n - a_m))^2 is less than your current best for p_n (the last part is an optimization).
For the starting marker 0, a_0 = 0 and p_0 = 0; for marker 1, p_1 = (200 - a_1)^2. With that starting information you can calculate p_2, then p_3, and so on up to p_n.
edit: Switched to Java code, using the example from OP's comment. Note that this does not have the optimization check described in the second paragraph.
public static void printPath(int path[], int i) {
    if (i == 0) return;
    printPath(path, path[i]);
    System.out.print(i + " ");
}

public static void main(String args[]) {
    int hotelList[] = {0, 200, 400, 600, 601};
    int penalties[] = {0, (int) Math.pow(200 - hotelList[1], 2), -1, -1, -1};
    int path[] = {0, 0, -1, -1, -1};
    for (int i = 2; i <= hotelList.length - 1; i++) {
        for (int j = 0; j < i; j++) {
            int tempPen = (int) (penalties[j] + Math.pow(200 - (hotelList[i] - hotelList[j]), 2));
            if (penalties[i] == -1 || tempPen < penalties[i]) {
                penalties[i] = tempPen;
                path[i] = j;
            }
        }
    }
    for (int i = 1; i < hotelList.length; i++) {
        System.out.print("Hotel: " + hotelList[i] + ", penalty: " + penalties[i] + ", path: ");
        printPath(path, i);
        System.out.println();
    }
}
Output is:
Hotel: 200, penalty: 0, path: 1
Hotel: 400, penalty: 0, path: 1 2
Hotel: 600, penalty: 0, path: 1 2 3
Hotel: 601, penalty: 1, path: 1 2 4
It looks like you can solve this problem with dynamic programming. The subproblem is the following:
d(i) : The minimum penalty possible when travelling from the start to hotel i.
The recursive formula is as follows:
d(0) = 0, where 0 is the starting position.
d(i) = min_{j=0, 1, ..., i-1} ( d(j) + (200 - (a_i - a_j))^2 )
The minimum penalty for reaching hotel i is found by trying all stopping places for the previous day, adding today's penalty and taking the minimum of those.
In order to find the path, we store in a separate array (path[]) which hotel we had to travel from in order to achieve the minimum penalty for that particular hotel. By traversing the array backwards (from path[n]) we obtain the path.
The runtime is O(n^2).
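A compact Python sketch of this recurrence, including the path[] bookkeeping (variable names are mine; the last hotel is the destination):
def plan_trip(a):
    # a: sorted mile posts of the hotels; the trip starts at mile 0
    n = len(a)
    posts = [0] + list(a)              # posts[0] is the starting point
    d = [0.0] * (n + 1)                # d[i] = minimum penalty to reach hotel i (1-based)
    path = [0] * (n + 1)               # path[i] = previous stop on an optimal route to i
    for i in range(1, n + 1):
        d[i] = float("inf")
        for j in range(i):
            cand = d[j] + (200 - (posts[i] - posts[j])) ** 2
            if cand < d[i]:
                d[i], path[i] = cand, j
    stops, i = [], n                   # walk path[] backwards to recover the stops
    while i > 0:
        stops.append(i)
        i = path[i]
    return d[n], stops[::-1]

print(plan_trip([200, 400, 600, 601]))   # -> (1.0, [1, 2, 4])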
This is equivalent to finding the shortest path between two nodes in a directional acyclic graph. Dijkstra's algorithm will run in O(n^2) time.
Your intuition is better, though. Starting at the back, calculate the minimum penalty of stopping at that hotel. The first hotel's penalty is just (200 - a_1)^2. Then, for each of the other hotels (in reverse order), scan forward to find the lowest-penalty hotel. A simple optimization is to stop as soon as the penalty costs start increasing, since that means you've overshot the global minimum.
I don't think you can do it as easily as sysrqb states.
On a side note, there is really no difference between starting from the start or the end; the goal is to find the minimum number of stops each way, where each stop is as close to 200 miles as possible.
The question as stated seems to allow travelling more than 200 miles per day, and the penalty is equally valid for over or under (since it is squared). This prefers going slightly over the ideal mileage per day rather than under, since the penalty is equal but the goal is closer.
However, given this layout
A ----- B----C-------D------N
0 190 210 390 590
It is not always true. It is better to go to B->D->N for a total penalty of only (200-190)^2 = 100. Going further via C->D->N gives a penalty of 100+400=500.
The answer looks like a full breadth first search with active pruning if you already have an optimal solution to reach point P, removing all solutions thus far where
sum(penalty-x) > sum(penalty-p) AND distance-to-x <= distance-to-p - 200
This would be an O(n^2) algorithm
Something like...
Quicksort all hotels by distance from start (discard any that have distance > hotelN)
Create an array/list of solutions, each containing (ListOfHotels, I, DistanceSoFar, Penalty)
Inspect each hotel in order, for each hotel_I
    Calculate penalty to I, starting from each prior solution
    Pruning
        For each prior solution that is beyond 200 distanceSoFar from current,
        and Penalty > current.penalty, remove it from list
loop
Following is the MATLAB code for the hotel problem.
clc
clear all
% Data
% a = [0;50;180;150;50;40];
% a = [0, 200, 400, 600, 601];
a = [0,10,180,350,450,600];
% a = [0,1,2,3,201,202,203,403];
n = length(a);
opt(1) = 0;
prev(1) = 1;
for i = 2:n
    opt(i) = Inf;
    for j = 1:i-1
        if (opt(i) > (opt(j) + (200 - a(i) + a(j))^2))
            opt(i) = opt(j) + (200 - a(i) + a(j))^2;
            prev(i) = j;
        end
    end
    S(i) = opt(i);
end
k = 1;
i = n;
sol(1) = n;
while (i > 1)
    k = k + 1;
    sol(k) = prev(i);
    i = prev(i);
end
for i = k:-1:1
    stops(i) = sol(i);
end
stops
Step 1 of 2
Sub-problem:
In this scenario, the sub-problem C(j) is the minimum penalty incurred to reach hotel a_j, for 0 <= j <= n. The required value for the problem is C(n).
Algorithm to find the minimum total penalty:
If the trip stops at location a_j, then the previous stop is some a_i with i < j. Considering all the possibilities for a_i:
C(j) = min{ C(i) + (200 - (a_j - a_i))^2 }, 0 <= i < j.
Initialize C(0) = 0 and a_0 = 0 to compute the remaining values.
To find the optimal route, increase the values of j and i on each iteration and use this information to backtrack from C(n).
Here, C(n) refers to the penalty at the last hotel (the value of i ranges between 0 and n).
Pseudocode:
// Function definition
Procedure min_tot()
    // Outer loop over the stops
    for j = 1 to n:
        // Penalty if hotel j is the first stop of the trip
        C(j) = (200 - a_j)^2
        // Inner loop over the possible previous stops
        for i = 1 to j-1:
            // Compute the total penalty and keep the minimum for C(j)
            C(j) = min(C(j), C(i) + (200 - (a_j - a_i))^2)
    // Return the total penalty of the last hotel
    return C(n)
Step 2 of 2
Explanation:
The above algorithm finds the minimum total penalty from the starting point to the end point.
It uses min() to compute the total penalty for each stop in the trip and keeps the minimum
penalty value.
Running time of the algorithm:
The algorithm contains n sub-problems and each sub-problem takes O(n) time to resolve,
since only a minimum over O(n) candidates needs to be computed.
The backtracking process takes O(n) time.
The total running time of the algorithm is n x n = n^2 = O(n^2).
Therefore, this algorithm takes O(n^2) time to solve the whole problem.
I came across this problem recently and wanted to share my solution, written in JavaScript.
Not dissimilar to most of the above solutions, I have used a dynamic programming approach. To calculate penalties[i], we need to search for the stopping place on the previous day such that the penalty is minimal.
penalties(i) = min_{j=0, 1, ..., i-1} ( penalties(j) + (200-(hotelList[i]-hotelList[j]))^2 ). The solution does not assume that the first penalty is Math.pow(200 - hotelList[1], 2); we don't know whether or not it is optimal to stop at the first stop, so this assumption should not be made.
In order to find the optimal path and store all the stops along the way, the helper array path is used. Finally, the array is traversed backwards to calculate the finalPath.
function calculateOptimalRoute(hotelList) {
    const path = [];
    const penalties = [];
    for (let i = 0; i < hotelList.length; i++) {
        penalties[i] = Math.pow(200 - hotelList[i], 2)
        path[i] = 0
        for (let j = 0; j < i; j++) {
            const temp = penalties[j] + Math.pow((200 - (hotelList[i] - hotelList[j])), 2)
            if (temp < penalties[i]) {
                penalties[i] = temp;
                path[i] = (j + 1);
            }
        }
    }
    const finalPath = [];
    let index = path.length - 1
    while (index >= 0) {
        finalPath.unshift(index + 1);
        index = path[index] - 1;
    }
    console.log('min penalty is ', penalties[hotelList.length - 1])
    console.log('final path is ', finalPath)
    return finalPath;
}
// calculateOptimalRoute([20, 40, 60, 940, 1500])
// Outputs [3, 4, 5]
// calculateOptimalRoute([190, 420, 550, 660, 670])
// Outputs [1, 2, 5]
// calculateOptimalRoute([200, 400, 600, 601])
// Outputs [1, 2, 4]
// calculateOptimalRoute([])
// Outputs []
To answer your question concisely, a PSPACE-complete algorithm is usually considered "efficient" for most Constraint Satisfaction Problems, so if you have an O(n^2) algorithm, that's "efficient".
I think the simplest method, given N total miles and 200 miles per day, would be to divide N by 200 to get X, the number of days you will travel. Round that to the nearest whole number of days X', then divide N by X' to get Y, the optimal number of miles to travel in a day. This is effectively a constant-time operation. If there were a hotel every Y miles, stopping at those hotels would produce the lowest possible score, by minimizing the effect of squaring each day's penalty. For instance, if the total trip is 605 miles, the penalty for travelling 201, 202 and 202 miles over three days is 1+4+4 = 9, far less than the 0+0+25 = 25 (200+200+205) you would get by minimizing each individual day's travel penalty as you went.
Now, you can traverse the list of hotels. The fastest method would be to simply pick the hotel that is the closest to each multiple of Y miles. It's linear-time and will produce a "good" result. However, I do not think this will produce the "best" result in all cases.
The more complex but foolproof method is to get the two closest hotels to each multiple of Y; the one immediately before and the one immediately after. This produces an array of X' pairs, which can be traversed in all possible permutations in 2^X' time. You can shorten this by applying Dijkstra to a map of these pairs, which will determine the least costly path for each day's travel, and will execute in roughly (2X')^2 time. This will probably be the most efficient algorithm that is guaranteed to produce the optimal result.
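A rough Python sketch of the linear pass described above (my own illustration of the heuristic; as noted, it is not guaranteed to find the optimum):
import bisect

def greedy_stops(a):
    # a: sorted hotel mile posts; returns one hotel index per travel day
    n_days = max(1, round(a[-1] / 200))        # X': number of travel days
    y = a[-1] / n_days                         # Y: ideal daily distance
    stops = []
    for day in range(1, n_days + 1):
        target = day * y
        i = bisect.bisect_left(a, target)      # hotel closest to this multiple of Y
        if i == len(a) or (i > 0 and target - a[i - 1] <= a[i] - target):
            i -= 1
        stops.append(i)
    stops[-1] = len(a) - 1                     # the trip must end at the final hotel
    return sorted(set(stops))

print(greedy_stops([200, 400, 600, 601]))      # -> [0, 1, 3]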
As @rmmh mentioned, you are finding the minimum-distance path, where the distance here is the penalty (200-x)^2.
So you will try to find a stopping plan by finding the minimum penalty.
Let's say D(a_i) gives the distance of a_i from the starting point. Then
P(i) = min { P(j) + (200 - (D(a_i) - D(a_j)))^2 } where 0 <= j < i
From a casual analysis it looks to be an
O(n^2) algorithm (1 + 2 + 3 + 4 + ... + n = O(n^2)).
As a proof of concept, here is my JavaScript solution in Dynamic Programming without nested loops.
We start at zero miles.
We find the next stop by keeping the penalty as low as we can by comparing the penalty of a current hotel in the loop to the previous hotel's penalty.
Once we have our current minimum, we have found our stop for the day. We assign this point as our next starting point.
Optionally, we could keep the total of the penalties:
let hotels = [40, 80, 90, 200, 250, 450, 680, 710, 720, 950, 1000, 1080, 1200, 1480]
function findOptimalPath(arr) {
    let start = 0
    let stops = []
    for (let i = 0; i < arr.length; i++) {
        if (Math.pow((start + 200) - arr[i-1], 2) < Math.pow((start + 200) - arr[i], 2)) {
            stops.push(arr[i-1])
            start = arr[i-1]
        }
    }
    console.log(stops)
}
findOptimalPath(hotels)
findOptimalPath(hotels)
Here is my Python solution using Dynamic Programming:
distance = [150,180,250,340]
def hotelStop(distance):
    n = len(distance)
    DP = [0 for _ in distance]
    for i in range(n-2, -1, -1):
        min_penalty = float("inf")
        for j in range(i+1, n):
            # going from hotel i to hotel j in one day
            x = distance[j] - distance[i]
            penalty = (200-x)**2
            total_penalty = penalty + DP[j]
            min_penalty = min(min_penalty, total_penalty)
        DP[i] = min_penalty
    return DP[0]
hotelStop(distance)
hotelStop(distance)

Select k random elements from a list whose elements have weights

Selecting without any weights (equal probabilities) is beautifully described here.
I was wondering if there is a way to convert this approach to a weighted one.
I am also interested in other approaches as well.
Update: Sampling without replacement
If the sampling is with replacement, you can use this algorithm (implemented here in Python):
import random

items = [(10, "low"),
         (100, "mid"),
         (890, "large")]

def weighted_sample(items, n):
    total = float(sum(w for w, v in items))
    i = 0
    w, v = items[0]
    while n:
        x = total * (1 - random.random() ** (1.0 / n))
        total -= x
        while x > w:
            x -= w
            i += 1
            w, v = items[i]
        w -= x
        yield v
        n -= 1
This is O(n + m) where m is the number of items.
Why does this work? It is based on the following algorithm:
def n_random_numbers_decreasing(v, n):
    """Like reversed(sorted(v * random() for i in range(n))),
    but faster because we avoid sorting."""
    while n:
        v *= random.random() ** (1.0 / n)
        yield v
        n -= 1
The function weighted_sample is just this algorithm fused with a walk of the items list to pick out the items selected by those random numbers.
This in turn works because the probability that n random numbers 0..v will all happen to be less than z is P = (z/v)^n. Solve for z, and you get z = v * P^(1/n). Substituting a random number for P picks the largest number with the correct distribution; and we can just repeat the process to select all the other numbers.
If the sampling is without replacement, you can put all the items into a binary heap, where each node caches the total of the weights of all items in that subheap. Building the heap is O(m). Selecting a random item from the heap, respecting the weights, is O(log m). Removing that item and updating the cached totals is also O(log m). So you can pick n items in O(m + n log m) time.
(Note: "weight" here means that every time an element is selected, the remaining possibilities are chosen with probability proportional to their weights. It does not mean that elements appear in the output with a likelihood proportional to their weights.)
Here's an implementation of that, plentifully commented:
import random

class Node:
    # Each node in the heap has a weight, value, and total weight.
    # The total weight, self.tw, is self.w plus the weight of any children.
    __slots__ = ['w', 'v', 'tw']
    def __init__(self, w, v, tw):
        self.w, self.v, self.tw = w, v, tw

def rws_heap(items):
    # h is the heap. It's like a binary tree that lives in an array.
    # It has a Node for each pair in `items`. h[1] is the root. Each
    # other Node h[i] has a parent at h[i>>1]. Each node has up to 2
    # children, h[i<<1] and h[(i<<1)+1]. To get this nice simple
    # arithmetic, we have to leave h[0] vacant.
    h = [None]                          # leave h[0] vacant
    for w, v in items:
        h.append(Node(w, v, w))
    for i in range(len(h) - 1, 1, -1):  # total up the tws
        h[i>>1].tw += h[i].tw           # add h[i]'s total to its parent
    return h

def rws_heap_pop(h):
    gas = h[1].tw * random.random()     # start with a random amount of gas
    i = 1                               # start driving at the root
    while gas >= h[i].w:                # while we have enough gas to get past node i:
        gas -= h[i].w                   #   drive past node i
        i <<= 1                         #   move to first child
        if gas >= h[i].tw:              #   if we have enough gas:
            gas -= h[i].tw              #     drive past first child and descendants
            i += 1                      #     move to second child
    w = h[i].w                          # out of gas! h[i] is the selected node.
    v = h[i].v
    h[i].w = 0                          # make sure this node isn't chosen again
    while i:                            # fix up total weights
        h[i].tw -= w
        i >>= 1
    return v

def random_weighted_sample_no_replacement(items, n):
    heap = rws_heap(items)              # just make a heap...
    for i in range(n):
        yield rws_heap_pop(heap)        # and pop n items off it.
If the sampling is with replacement, use the roulette-wheel selection technique (often used in genetic algorithms):
sort the weights
compute the cumulative weights
pick a random number in [0,1]*totalWeight
find the interval in which this number falls into
select the elements with the corresponding interval
repeat k times
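For the with-replacement case, that recipe could look like this in Python (my own sketch; bisect performs the "find the interval" step):
import bisect, itertools, random

def roulette_pick(items, k):
    # items: list of (weight, value) pairs; returns k values, with replacement
    weights, values = zip(*items)
    cumulative = list(itertools.accumulate(weights))   # cumulative weights
    total = cumulative[-1]
    picks = []
    for _ in range(k):
        r = random.uniform(0, total)                   # random number in [0, totalWeight]
        picks.append(values[bisect.bisect_left(cumulative, r)])
    return picks

print(roulette_pick([(10, "low"), (100, "mid"), (890, "large")], 3))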
If the sampling is without replacement, you can adapt the above technique by removing the selected element from the list after each iteration, then re-normalizing the weights so that their sum is 1 (valid probability distribution function)
I know this is a very old question, but I think there's a neat trick to do this in O(n) time if you apply a little math!
The exponential distribution has two very useful properties.
Given n samples from different exponential distributions with different rate parameters, the probability that a given sample is the minimum is equal to its rate parameter divided by the sum of all rate parameters.
It is "memoryless". So if you already know the minimum, then the probability that any of the remaining elements is the 2nd-to-min is the same as the probability that if the true min were removed (and never generated), that element would have been the new min. This seems obvious, but I think because of some conditional probability issues, it might not be true of other distributions.
Using fact 1, we know that choosing a single element can be done by generating these exponential distribution samples with rate parameter equal to the weight, and then choosing the one with minimum value.
Using fact 2, we know that we don't have to re-generate the exponential samples. Instead, just generate one for each element, and take the k elements with lowest samples.
Finding the lowest k can be done in O(n). Use the Quickselect algorithm to find the k-th element, then simply take another pass through all elements and output all lower than the k-th.
A useful note: if you don't have immediate access to a library to generate exponential distribution samples, it can be easily done by: -ln(rand())/weight
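Putting those pieces together, a short Python sketch (my own code; heapq.nsmallest stands in for Quickselect here, so this variant runs in O(n log k) rather than strictly O(n)):
import heapq, math, random

def weighted_sample_no_replacement(items, k):
    # items: list of (weight, value) pairs; returns k values without replacement
    keyed = ((-math.log(1.0 - random.random()) / w, v) for w, v in items)  # one exponential sample per item
    return [v for _, v in heapq.nsmallest(k, keyed)]                       # the k smallest keys win

print(weighted_sample_no_replacement([(10, "low"), (100, "mid"), (890, "large")], 2))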
I've done this in Ruby
https://github.com/fl00r/pickup
require 'pickup'
pond = {
  "selmon" => 1,
  "carp" => 4,
  "crucian" => 3,
  "herring" => 6,
  "sturgeon" => 8,
  "gudgeon" => 10,
  "minnow" => 20
}
pickup = Pickup.new(pond, uniq: true)
pickup.pick(3)
#=> [ "gudgeon", "herring", "minnow" ]
pickup.pick
#=> "herring"
pickup.pick
#=> "gudgeon"
pickup.pick
#=> "sturgeon"
If you want to generate large arrays of random integers with replacement, you can use piecewise linear interpolation. For example, using NumPy/SciPy:
import numpy
import scipy.interpolate
def weighted_randint(weights, size=None):
    """Given an n-element vector of weights, randomly sample
    integers up to n with probabilities proportional to weights"""
    n = weights.size
    # normalize so that the weights sum to unity
    weights = weights / numpy.linalg.norm(weights, 1)
    # cumulative sum of weights
    cumulative_weights = weights.cumsum()
    # piecewise-linear interpolating function whose domain is
    # the unit interval and whose range is the integers up to n
    f = scipy.interpolate.interp1d(
        numpy.hstack((0.0, cumulative_weights)),
        numpy.arange(n + 1), kind='linear')
    return f(numpy.random.random(size=size)).astype(int)
This is not effective if you want to sample without replacement.
Here's a Go implementation from geodns:
package foo
import (
"log"
"math/rand"
)
type server struct {
Weight int
data interface{}
}
func foo(servers []server) []server {
// servers list is already sorted by the Weight attribute
// number of items to pick
max := 4
result := make([]server, max)
sum := 0
for _, r := range servers {
sum += r.Weight
}
for si := 0; si < max; si++ {
n := rand.Intn(sum + 1)
s := 0
for i := range servers {
s += int(servers[i].Weight)
if s >= n {
log.Println("Picked record", i, servers[i])
sum -= servers[i].Weight
result[si] = servers[i]
// remove the server from the list
servers = append(servers[:i], servers[i+1:]...)
break
}
}
}
return result
}
If you want to pick x elements from a weighted set without replacement such that elements are chosen with a probability proportional to their weights:
import random
def weighted_choose_subset(weighted_set, count):
"""Return a random sample of count elements from a weighted set.
weighted_set should be a sequence of tuples of the form
(item, weight), for example: [('a', 1), ('b', 2), ('c', 3)]
Each element from weighted_set shows up at most once in the
result, and the relative likelihood of two particular elements
showing up is equal to the ratio of their weights.
This works as follows:
1.) Line up the items along the number line from [0, the sum
of all weights) such that each item occupies a segment of
length equal to its weight.
2.) Randomly pick a number "start" in the range [0, total
weight / count).
3.) Find all the points "start + n/count" (for all integers n
such that the point is within our segments) and yield the set
containing the items marked by those points.
Note that this implementation may not return each possible
subset. For example, with the input ([('a': 1), ('b': 1),
('c': 1), ('d': 1)], 2), it may only produce the sets ['a',
'c'] and ['b', 'd'], but it will do so such that the weights
are respected.
This implementation only works for nonnegative integral
weights. The highest weight in the input set must be less
than the total weight divided by the count; otherwise it would
be impossible to respect the weights while never returning
that element more than once per invocation.
"""
if count == 0:
return []
total_weight = 0
max_weight = 0
borders = []
for item, weight in weighted_set:
if weight < 0:
raise RuntimeError("All weights must be positive integers")
# Scale up weights so dividing total_weight / count doesn't truncate:
weight *= count
total_weight += weight
borders.append(total_weight)
max_weight = max(max_weight, weight)
step = int(total_weight / count)
if max_weight > step:
raise RuntimeError(
"Each weight must be less than total weight / count")
next_stop = random.randint(0, step - 1)
results = []
current = 0
for i in range(count):
while borders[current] <= next_stop:
current += 1
results.append(weighted_set[current][0])
next_stop += step
return results
In the question you linked to, Kyle's solution would work with a trivial generalization.
Scan the list and sum the total weights. Then the probability to choose an element should be:
1 - (1 - (#needed / (weight left))) / (weight at n). After visiting a node, subtract its weight from the total. Also, if you need n and have n left, you have to stop explicitly.
You can check that with everything having weight 1, this simplifies to Kyle's solution.
Edited: (had to rethink what "twice as likely" meant)
This one does exactly that with O(n) and no excess memory usage. I believe this is a clever and efficient solution easy to port to any language. The first two lines are just to populate sample data in Drupal.
function getNrandomGuysWithWeight($numitems){
$q = db_query('SELECT id, weight FROM theTableWithTheData');
$q = $q->fetchAll();
$accum = 0;
foreach($q as $r){
$accum += $r->weight;
$r->weight = $accum;
}
$out = array();
while(count($out) < $numitems && count($q)){
$n = rand(0,$accum);
$lessaccum = NULL;
$prevaccum = 0;
$idxrm = 0;
foreach($q as $i=>$r){
if(($lessaccum == NULL) && ($n <= $r->weight)){
$out[] = $r->id;
$lessaccum = $r->weight- $prevaccum;
$accum -= $lessaccum;
$idxrm = $i;
}else if($lessaccum){
$r->weight -= $lessaccum;
}
$prevaccum = $r->weight;
}
unset($q[$idxrm]);
}
return $out;
}
I'm putting here a simple solution for picking 1 item; you can easily expand it for k items (Java style):
double random = Math.random();
double sum = 0;
for (int i = 0; i < items.length; i++) {
    val = items[i];
    sum += val.getValue();
    if (sum > random) {
        selected = val;
        break;
    }
}
I have implemented an algorithm similar to Jason Orendorff's idea in Rust here. My version additionally supports bulk operations: insert and remove (when you want to remove a bunch of items given by their ids, not through the weighted selection path) from the data structure in O(m + log n) time, where m is the number of items to remove and n is the number of items stored.
Sampling without replacement with recursion - an elegant and very short solution in C#
//how many ways we can choose 4 out of 60 students, so that every time we choose different 4
class Program
{
    static void Main(string[] args)
    {
        int group = 60;
        int studentsToChoose = 4;
        Console.WriteLine(FindNumberOfStudents(studentsToChoose, group));
    }

    private static int FindNumberOfStudents(int studentsToChoose, int group)
    {
        if (studentsToChoose == group || studentsToChoose == 0)
            return 1;
        return FindNumberOfStudents(studentsToChoose, group - 1) + FindNumberOfStudents(studentsToChoose - 1, group - 1);
    }
}
I just spent a few hours trying to understand the algorithms underlying sampling without replacement, and this topic is more complex than I initially thought. That's exciting! For the benefit of future readers (have a good day!) I document my insights here, including a ready-to-use function which respects the given inclusion probabilities further below. A nice and quick mathematical overview of the various methods can be found here: Tillé: Algorithms of sampling with equal or unequal probabilities. For example, Jason's method can be found on page 46. The caveat with his method is that the weights are not proportional to the inclusion probabilities, as also noted in the document. Actually, the i-th inclusion probabilities can be recursively computed as follows:
def inclusion_probability(i, weights, k):
    """
    Computes the inclusion probability of the i-th element
    in a randomly sampled k-tuple using Jason's algorithm
    (see https://stackoverflow.com/a/2149533/7729124)
    """
    if k <= 0: return 0
    cum_p = 0
    for j, weight in enumerate(weights):
        # compute the probability of j being selected considering the weights
        p = weight / sum(weights)
        if i == j:
            # if this is the target element, we don't have to go deeper,
            # since we know that i is included
            cum_p += p
        else:
            # if this is not the target element, then we compute the conditional
            # inclusion probability of i under the constraint that j is included
            cond_i = i if i < j else i-1
            cond_weights = weights[:j] + weights[j+1:]
            cond_p = inclusion_probability(cond_i, cond_weights, k-1)
            cum_p += p * cond_p
    return cum_p
And we can check the validity of the function above by comparing
In : for i in range(3): print(i, inclusion_probability(i, [1,2,3], 2))
0 0.41666666666666663
1 0.7333333333333333
2 0.85
to
In : import collections, itertools
In : sample_tester = lambda f: collections.Counter(itertools.chain(*(f() for _ in range(10000))))
In : sample_tester(lambda: random_weighted_sample_no_replacement([(1,'a'),(2,'b'),(3,'c')],2))
Out: Counter({'a': 4198, 'b': 7268, 'c': 8534})
One way - also suggested in the document above - to specify the inclusion probabilities is to compute the weights from them. The whole complexity of the question at hand stems from the fact that one cannot do that directly, since one basically has to invert the recursion formula; symbolically, I claim this is impossible. Numerically it can be done using all kinds of methods, e.g. Newton's method. However, the complexity of inverting the Jacobian using plain Python quickly becomes unbearable; I really recommend looking into numpy.random.choice in this case.
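For reference, the numpy.random.choice call mentioned above can be used like this (just a sketch; p must be normalized, and note that its without-replacement mode draws sequentially, so the inclusion probabilities are again not simply k*w/sum(w)):
import numpy as np

weights = np.array([1.0, 2.0, 3.0])
p = weights / weights.sum()               # probabilities must sum to 1
sample = np.random.choice(len(weights), size=2, replace=False, p=p)
print(sample)                             # e.g. [2 1]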
Luckily there is method using plain Python which might or might not be sufficiently performant for your purposes, it works great if there aren't that many different weights. You can find the algorithm on page 75&76. It works by splitting up the sampling process into parts with the same inclusion probabilities, i.e. we can use random.sample again! I am not going to explain the principle here since the basics are nicely presented on page 69. Here is the code with hopefully a sufficient amount of comments:
def sample_no_replacement_exact(items, k, best_effort=False, random_=None, ε=1e-9):
"""
Returns a random sample of k elements from items, where items is a list of
tuples (weight, element). The inclusion probability of an element in the
final sample is given by
k * weight / sum(weights).
Note that the function raises if a inclusion probability cannot be
satisfied, e.g the following call is obviously illegal:
sample_no_replacement_exact([(1,'a'),(2,'b')],2)
Since selecting two elements means selecting both all the time,
'b' cannot be selected twice as often as 'a'. In general it can be hard to
spot if the weights are illegal and the function does *not* always raise
an exception in that case. To remedy the situation you can pass
best_effort=True which redistributes the inclusion probability mass
if necessary. Note that the inclusion probabilities will change
if deemed necessary.
The algorithm is based on the splitting procedure on page 75/76 in:
http://www.eustat.eus/productosServicios/52.1_Unequal_prob_sampling.pdf
Additional information can be found here:
https://stackoverflow.com/questions/2140787/
:param items: list of tuples of type weight,element
:param k: length of resulting sample
:param best_effort: fix inclusion probabilities if necessary,
(optional, defaults to False)
:param random_: random module to use (optional, defaults to the
standard random module)
:param ε: fuzziness parameter when testing for zero in the context
of floating point arithmetic (optional, defaults to 1e-9)
:return: random sample set of size k
:exception: throws ValueError in case of bad parameters,
throws AssertionError in case of algorithmic impossibilities
"""
# random_ defaults to the random submodule
if not random_:
random_ = random
# special case empty return set
if k <= 0:
return set()
if k > len(items):
raise ValueError("resulting tuple length exceeds number of elements (k > n)")
# sort items by weight
items = sorted(items, key=lambda item: item[0])
# extract the weights and elements
weights, elements = list(zip(*items))
# compute the inclusion probabilities (short: π) of the elements
scaling_factor = k / sum(weights)
π = [scaling_factor * weight for weight in weights]
# in case of best_effort: if a inclusion probability exceeds 1,
# try to rebalance the probabilities such that:
# a) no probability exceeds 1,
# b) the probabilities still sum to k, and
# c) the probability masses flow from top to bottom:
# [0.2, 0.3, 1.5] -> [0.2, 0.8, 1]
# (remember that π is sorted)
if best_effort and π[-1] > 1 + ε:
# probability mass we still we have to distribute
debt = 0.
for i in reversed(range(len(π))):
if π[i] > 1.:
# an 'offender', take away excess
debt += π[i] - 1.
π[i] = 1.
else:
# case π[i] < 1, i.e. 'save' element
# maximum we can transfer from debt to π[i] and still not
# exceed 1 is computed by the minimum of:
# a) 1 - π[i], and
# b) debt
max_transfer = min(debt, 1. - π[i])
debt -= max_transfer
π[i] += max_transfer
assert debt < ε, "best effort rebalancing failed (impossible)"
# make sure we are talking about probabilities
if any(not (0 - ε <= π_i <= 1 + ε) for π_i in π):
raise ValueError("inclusion probabilities not satisfiable: {}" \
.format(list(zip(π, elements))))
# special case equal probabilities
# (up to fuzziness parameter, remember that π is sorted)
if π[-1] < π[0] + ε:
return set(random_.sample(elements, k))
# compute the two possible lambda values, see formula 7 on page 75
# (remember that π is sorted)
λ1 = π[0] * len(π) / k
λ2 = (1 - π[-1]) * len(π) / (len(π) - k)
λ = min(λ1, λ2)
# there are two cases now, see also page 69
# CASE 1
# with probability λ we are in the equal probability case
# where all elements have the same inclusion probability
if random_.random() < λ:
return set(random_.sample(elements, k))
# CASE 2:
# with probability 1-λ we are in the case of a new sample without
# replacement problem which is strictly simpler,
# it has the following new probabilities (see page 75, π^{(2)}):
new_π = [
(π_i - λ * k / len(π))
/
(1 - λ)
for π_i in π
]
new_items = list(zip(new_π, elements))
# the first few probabilities might be 0, remove them
# NOTE: we make sure that floating point issues do not arise
# by using the fuzziness parameter
while new_items and new_items[0][0] < ε:
new_items = new_items[1:]
# the last few probabilities might be 1, remove them and mark them as selected
# NOTE: we make sure that floating point issues do not arise
# by using the fuzziness parameter
selected_elements = set()
while new_items and new_items[-1][0] > 1 - ε:
selected_elements.add(new_items[-1][1])
new_items = new_items[:-1]
# the algorithm reduces the length of the sample problem,
# it is guaranteed that:
# if λ = λ1: the first item has probability 0
# if λ = λ2: the last item has probability 1
assert len(new_items) < len(items), "problem was not simplified (impossible)"
# recursive call with the simpler sample problem
# NOTE: we have to make sure that the selected elements are included
return sample_no_replacement_exact(
new_items,
k - len(selected_elements),
best_effort=best_effort,
random_=random_,
ε=ε
) | selected_elements
Example:
In : sample_no_replacement_exact([(1,'a'),(2,'b'),(3,'c')],2)
Out: {'b', 'c'}
In : import collections, itertools
In : sample_tester = lambda f: collections.Counter(itertools.chain(*(f() for _ in range(10000))))
In : sample_tester(lambda: sample_no_replacement_exact([(1,'a'),(2,'b'),(3,'c'),(4,'d')],2))
Out: Counter({'a': 2048, 'b': 4051, 'c': 5979, 'd': 7922})
The weights sum up to 10, hence the inclusion probabilities compute to: a → 20%, b → 40%, c → 60%, d → 80%. (Sum: 200% = k.) It works!
Just one word of caution for production use of this function: it can be very hard to spot illegal inputs for the weights. An obvious illegal example is
In: sample_no_replacement_exact([(1,'a'),(2,'b')],2)
ValueError: inclusion probabilities not satisfiable: [(0.6666666666666666, 'a'), (1.3333333333333333, 'b')]
b cannot appear twice as often as a, since both always have to be selected. There are more subtle examples. To avoid an exception in production, just use best_effort=True, which rebalances the inclusion probability mass such that we always have a valid distribution. Obviously this might change the inclusion probabilities.
I used an associative map (weight, object), for example:
{
(10,"low"),
(100,"mid"),
(10000,"large")
}
total=10110
Pick a random number between 0 and 'total' and iterate over the keys until this number fits in a given range.
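In Python, that lookup can be a simple linear scan over a running total (my own sketch of the described approach, for a single pick):
import random

weighted = {10: "low", 100: "mid", 10000: "large"}
total = sum(weighted)                     # 10110: the keys are the weights
r = random.uniform(0, total)              # random number between 0 and total
running = 0
for w, obj in weighted.items():
    running += w
    if r <= running:                      # r falls in this key's range
        print(obj)
        break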
