DP algorithm for bounded Knapsack?

The Wikipedia article about the Knapsack problem lists three kinds of it:
1. 0-1 (one item of a type)
2. Bounded (several items of a type)
3. Unbounded (unlimited number of items of a type)
The article gives DP approaches for types 1 and 3, but no solution for type 2.
How can a dynamic programming algorithm for solving type 2 be described?

Use the 0-1 variant, but allow repetition of an item in the solution up to the number of times specified in its bound. You would need to maintain a vector stating how many copies of each item you already included in the partial solution.
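For illustration, here is a minimal Java sketch of that idea (not from the original answer; the arrays w, v and cnt for weights, values and per-item bounds are assumed). Instead of carrying an explicit copy-count vector, each item's inner loop simply tries every feasible number of copies, which expresses the same bounded transition:

import java.util.Arrays;

public class BoundedKnapsackNaive {
    // dp[j] = best value achievable with the items processed so far and capacity j
    static long solve(int[] w, int[] v, int[] cnt, int capacity) {
        long[] dp = new long[capacity + 1];
        for (int i = 0; i < w.length; i++) {
            long[] next = Arrays.copyOf(dp, dp.length);
            for (int j = 0; j <= capacity; j++) {
                // try taking k copies of item i, limited by its bound and by the capacity
                for (int k = 1; k <= cnt[i] && (long) k * w[i] <= j; k++) {
                    next[j] = Math.max(next[j], dp[j - k * w[i]] + (long) k * v[i]);
                }
            }
            dp = next;
        }
        return dp[capacity];
    }
}

This is the straightforward O(number of items * maximum weight * item bound) formulation; the answers below discuss how to do better.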

The other DP solutions mentioned are all suboptimal as they require you to directly simulate the problem, resulting in an O(number of items * maximum weight * total count of items) runtime complexity.
There are many ways to optimize this, and I'll mention a few of them here:
One solution applies a technique similar to Sqrt Decomposition; it is described here: https://codeforces.com/blog/entry/59606. This algorithm runs in O(number of items * maximum weight * sqrt(maximum weight)).
However, Dorijan Lendvaj describes a much faster algorithm that runs in O(number of items * maximum weight * log(maximum weight)) here: https://codeforces.com/blog/entry/65202?#comment-492168
Another way to think of the above approach is the following:
For each type of item, let's define the following values:
w, the weight/cost of the current type of item
v, the value of the current type of item
n, the number of copies of the current type of item available to use
Phase 1
First, let us consider 2^k, the largest power of 2 less than or equal to n. We insert the following items (each inserted item is in the format (weight, value)): (w, v), (2 * w, 2 * v), (2^2 * w, 2^2 * v), ..., (2^(k-1) * w, 2^(k-1) * v). Note that the items inserted each represent 2^0, 2^1, ..., 2^(k-1) copies of the current type of item respectively.
Observe that this is the same as inserting 2^k - 1 copies of the current type of item. This is because we can simulate the taking of any number of items (represented as n') by taking the combination of the above items that corresponds to the binary representation of n' (For all whole numbers k', if the bit representing 2^k' is set, take the item that represents 2^k' copies of the current type of item).
Phase 2
Lastly, we just insert the items that correspond to the set bits of n - (2^k - 1). (For all whole numbers k', if the bit representing 2^k' is set, insert (2^k' * w, 2^k' * v)).
Now, we can simulate the taking of up to n items of the current type simply by taking a combination of the above inserted items.
I don't currently have an exact proof of this solution, but after playing around with it for a while it seems correct. If I can figure one out I may update this post later on.
Proof
First, a proposition: All we have to prove is that inserting the above items allows us to simulate the taking of any number of items of the current type up to n.
With that in mind, let's define some variables:
Let n be the number of items of the current type available
Let x be the number of items of the current type we want to take
Let k be the greatest integer such that 2^k <= n
If x < 2^k, we can easily take x items using the method described in phase 1 of the algorithm:
... we can simulate the taking of any number of items (represented as n') by taking the combination of the above items that corresponds to the binary representation of n' (For all whole numbers k', if the bit representing 2^k' is set, take the item that represents 2^k' copies of the current type of item).
Otherwise, we do the following:
Take n - (2^k - 1) items. This is done by taking all the items inserted in phase 2. Now only the items inserted in phase 1 are available for use.
Take x - (n - (2^k - 1)) items. Since this value is always less than 2^k, we can just use the method used for the first case.
Finally, how do we know that x - (n - (2^k - 1)) < 2^k?
If we simplify the left side, we get:
x - (n - (2^k - 1))
x - n + 2^k - 1
x - (n + 1) + 2^k
If the above value was >= 2^k, then x - (n + 1) >= 0 would be true, meaning that x > n. That would be impossible as that's not a valid value of x.
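As a concrete (hedged) illustration of this decomposition, here is a short Java sketch. It uses the common variant in which the leftover count is kept as one final bundle instead of being split into its set bits; either way every count from 0 to n can be represented, and an ordinary 0-1 knapsack is then run over the bundles. All names are illustrative:

import java.util.ArrayList;
import java.util.List;

public class BoundedKnapsackBinarySplit {
    static long solve(int[] w, int[] v, int[] n, int capacity) {
        List<int[]> bundles = new ArrayList<>(); // each bundle = {weight, value}
        for (int i = 0; i < w.length; i++) {
            int remaining = n[i];
            // peel off 1, 2, 4, ... copies, then whatever remains,
            // so any count 0..n[i] is a sum of a subset of the bundles
            for (int take = 1; remaining > 0; take *= 2) {
                int k = Math.min(take, remaining);
                bundles.add(new int[]{k * w[i], k * v[i]});
                remaining -= k;
            }
        }
        long[] dp = new long[capacity + 1];
        for (int[] b : bundles) {                // standard 0-1 knapsack over the bundles
            for (int j = capacity; j >= b[0]; j--) {
                dp[j] = Math.max(dp[j], dp[j - b[0]] + b[1]);
            }
        }
        return dp[capacity];
    }
}

This runs in O(number of items * maximum weight * log(maximum count per item)).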
Finally, there is even an approach mentioned here that runs in O(number of items * maximum weight) time.
The algorithm is similar to the brute force method ic3b3rg proposed and just uses simple DP optimizations and a sliding-window deque to bring down the run time.
My code was tested on this problem (classical bounded knapsack problem): https://dmoj.ca/problem/knapsack
My code: https://pastebin.com/acezMrMY
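For the curious, here is a rough Java sketch of the sliding-window idea (it is not the linked code, just an illustration under the assumption of positive weights, with arrays w, v, c for weight, value and count). Within each residue class modulo w[i], the transition is a maximum over a window of at most c[i]+1 earlier entries, and a monotone deque maintains that maximum in amortized O(1), giving O(number of items * maximum weight) overall:

import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedKnapsackDeque {
    static long solve(int[] w, int[] v, int[] c, int capacity) {
        long[] dp = new long[capacity + 1];
        for (int i = 0; i < w.length; i++) {
            long[] next = new long[capacity + 1];
            for (int r = 0; r < w[i] && r <= capacity; r++) {   // each residue class mod w[i]
                Deque<Integer> window = new ArrayDeque<>();     // indices t, best candidate at the front
                for (int t = 0, j = r; j <= capacity; t++, j += w[i]) {
                    long cand = dp[j] - (long) t * v[i];
                    // drop dominated candidates from the back
                    while (!window.isEmpty()) {
                        int last = window.peekLast();
                        if (dp[r + last * w[i]] - (long) last * v[i] <= cand) window.pollLast();
                        else break;
                    }
                    window.addLast(t);
                    // keep at most c[i]+1 candidates (at most c[i] copies of item i)
                    while (window.peekFirst() < t - c[i]) window.pollFirst();
                    int best = window.peekFirst();
                    next[j] = dp[r + best * w[i]] + (long) (t - best) * v[i]; // take t-best copies
                }
            }
            dp = next;
        }
        return dp[capacity];
    }
}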

I posted an article on Code Project which discusses a more efficient solution to the bounded knapsack algorithm.
From the article:
In the dynamic programming solution, each position of the m array is a sub-problem of capacity j. In the 0/1 algorithm, for each sub-problem we consider the value of adding one copy of each item to the knapsack. In the following algorithm, for each sub-problem we consider the value of adding the lesser of the quantity that will fit, or the quantity available of each item.
I've also enhanced the code so that we can determine what's in the optimized knapsack (as opposed to just the optimized value).
ItemCollection[] ic = new ItemCollection[capacity + 1];
for (int i = 0; i <= capacity; i++) ic[i] = new ItemCollection();
for (int i = 0; i < items.Count; i++)
  for (int j = capacity; j >= 0; j--)
    if (j >= items[i].Weight) {
      int quantity = Math.Min(items[i].Quantity, j / items[i].Weight);
      for (int k = 1; k <= quantity; k++) {
        ItemCollection lighterCollection = ic[j - k * items[i].Weight];
        int testValue = lighterCollection.TotalValue + k * items[i].Value;
        if (testValue > ic[j].TotalValue) (ic[j] = lighterCollection.Copy()).AddItem(items[i], k);
      }
    }

private class Item {
  public string Description;
  public int Weight;
  public int Value;
  public int Quantity;
  public Item(string description, int weight, int value, int quantity) {
    Description = description;
    Weight = weight;
    Value = value;
    Quantity = quantity;
  }
}

private class ItemCollection {
  public Dictionary<string, int> Contents = new Dictionary<string, int>();
  public int TotalValue;
  public int TotalWeight;
  public void AddItem(Item item, int quantity) {
    if (Contents.ContainsKey(item.Description)) Contents[item.Description] += quantity;
    else Contents[item.Description] = quantity;
    TotalValue += quantity * item.Value;
    TotalWeight += quantity * item.Weight;
  }
  public ItemCollection Copy() {
    var ic = new ItemCollection();
    ic.Contents = new Dictionary<string, int>(this.Contents);
    ic.TotalValue = this.TotalValue;
    ic.TotalWeight = this.TotalWeight;
    return ic;
  }
}
The download in the Code Project article includes a test case.

First, store all your data in a single array (with repetition).
Then use the first (0-1) method mentioned in the Wikipedia article.
For example, a bounded knapsack with { 2 (2 times), 4 (3 times), ... } is equivalent to a 0-1 knapsack with { 2, 2, 4, 4, 4, ... }.
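A tiny Java sketch of this expansion (names assumed); note that the expanded list can become very large when the bounds are big, which is why the decomposition techniques above are usually preferred:

import java.util.ArrayList;
import java.util.List;

public class ExpandToZeroOne {
    static long solve(int[] w, int[] v, int[] cnt, int capacity) {
        List<int[]> items = new ArrayList<>();
        for (int i = 0; i < w.length; i++) {
            for (int k = 0; k < cnt[i]; k++) {
                items.add(new int[]{w[i], v[i]});   // one copy per allowed repetition
            }
        }
        long[] dp = new long[capacity + 1];
        for (int[] it : items) {                    // ordinary 0-1 knapsack
            for (int j = capacity; j >= it[0]; j--) {
                dp[j] = Math.max(dp[j], dp[j - it[0]] + it[1]);
            }
        }
        return dp[capacity];
    }
}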

I suggest using the greedy method for the fractional knapsack. Its complexity is O(n log n) and it is one of the best-known algorithms for that variant.
Below is its code in C#.
private static void Knapsack()
{
Console.WriteLine("************Kanpsack***************");
Console.WriteLine("Enter no of items");
int _noOfItems = Convert.ToInt32(Console.ReadLine());
int[] itemArray = new int[_noOfItems];
int[] weightArray = new int[_noOfItems];
int[] priceArray = new int[_noOfItems];
int[] fractionArray=new int[_noOfItems];
for(int i=0;i<_noOfItems;i++)
{
Console.WriteLine("[Item"+" "+(i+1)+"]");
Console.WriteLine("");
Console.WriteLine("Enter the Weight");
weightArray[i] = Convert.ToInt32(Console.ReadLine());
Console.WriteLine("Enter the Price");
priceArray[i] = Convert.ToInt32(Console.ReadLine());
Console.WriteLine("");
itemArray[i] = i+1 ;
}//for loop
int temp;
Console.WriteLine(" ");
Console.WriteLine("ITEM" + " " + "WEIGHT" + " "+"PRICE");
Console.WriteLine(" ");
for(int i=0;i<_noOfItems;i++)
{
Console.WriteLine("Item"+" "+(i+1)+" "+weightArray[i]+" "+priceArray[i]);
Console.WriteLine(" ");
}//For Loop For Printing the value.......
//Calculating the fraction (value per weight) for each item............
for(int i=0;i<_noOfItems;i++)
{
fractionArray[i] = (priceArray[i] / weightArray[i]);
}
Console.WriteLine("Testing.............");
//sorting the Item on the basis of fraction value..........
//Bubble Sort To Sort the Process Priority
for (int i = 0; i < _noOfItems; i++)
{
for (int j = i + 1; j < _noOfItems; j++)
{
if (fractionArray[j] > fractionArray[i])
{
//item Array
temp = itemArray[j];
itemArray[j] = itemArray[i];
itemArray[i] = temp;
//Weight Array
temp = weightArray[j];
weightArray[j] = weightArray[i];
weightArray[i] = temp;
//Price Array
temp = priceArray[j];
priceArray[j] = priceArray[i];
priceArray[i] = temp;
//Fraction Array
temp = fractionArray[j];
fractionArray[j] = fractionArray[i];
fractionArray[i] = temp;
}//if
}//Inner for
}//outer For
// Printing its value..............After Sorting..............
Console.WriteLine(" ");
Console.WriteLine("ITEM" + " " + "WEIGHT" + " " + "PRICE" + " "+"Fraction");
Console.WriteLine(" ");
for (int i = 0; i < _noOfItems; i++)
{
Console.WriteLine("Item" + " " + (itemArray[i]) + " " + weightArray[i] + " " + priceArray[i] + " "+fractionArray[i]);
Console.WriteLine(" ");
}//For Loop For Printing the value.......
Console.WriteLine("");
Console.WriteLine("Enter the Capacity of Knapsack");
int _capacityKnapsack = Convert.ToInt32(Console.ReadLine());
// Creating the values for the solution
int k=0;
int fractionvalue = 0;
int[] _takingItemArray=new int[100];
int sum = 0,_totalPrice=0;
int l = 0;
int _capacity = _capacityKnapsack;
do
{
if(k>=_noOfItems)
{
k = 0;
}
if (_capacityKnapsack >= weightArray[k])
{
_takingItemArray[l] = weightArray[k];
_capacityKnapsack = _capacityKnapsack - weightArray[k];
_totalPrice += priceArray[k];
k++;
l++;
}
else
{
fractionvalue = fractionArray[k];
_takingItemArray[l] = _capacityKnapsack;
_totalPrice += _capacityKnapsack * fractionArray[k];
k++;
l++;
}
sum += _takingItemArray[l-1];
} while (sum != _capacity);
Console.WriteLine("");
Console.WriteLine("Value in Kg Are............");
Console.WriteLine("");
for (int i = 0; i < _takingItemArray.Length; i++)
{
if(_takingItemArray[i]!=0)
{
Console.WriteLine(_takingItemArray[i]);
Console.WriteLine("");
}
else
{
break;
}
}//for loop
Console.WriteLine("Toatl Value is "+_totalPrice);
}//Method
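Keep in mind this is the greedy algorithm for the fractional knapsack (items may be split), so in general it does not give the optimal answer to the bounded 0/1 problem asked about. For reference, a compact sketch of the same greedy idea in Java (all names assumed) could look like this:

import java.util.Arrays;

public class FractionalKnapsackGreedy {
    // weights[i], values[i] describe item i; returns the best total value
    static double solve(int[] weights, int[] values, int capacity) {
        Integer[] order = new Integer[weights.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // sort items by value-per-weight ratio, best first
        Arrays.sort(order, (a, b) ->
                Double.compare((double) values[b] / weights[b], (double) values[a] / weights[a]));
        double total = 0;
        int remaining = capacity;
        for (int i : order) {
            if (remaining == 0) break;
            if (weights[i] <= remaining) {          // take the whole item
                total += values[i];
                remaining -= weights[i];
            } else {                                // take a fraction of it
                total += (double) values[i] * remaining / weights[i];
                remaining = 0;
            }
        }
        return total;
    }
}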

We can use the 0/1 knapsack algorithm while tracking the number of copies left for each item type;
we could do the same with the unbounded knapsack algorithm to solve the bounded knapsack problem as well.

Related

Finding bounded nearest neighbour in a 1-dimensional array

Let's say we have some array of boolean values:
A = [0 1 1 1 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 1 1 1 1 0 0 0 ... 0]
The array is constructed by performing classification on a stream of data. Each element in the array corresponds to the output of a classification algorithm given a small "chunk" of the data. An answer may include restructuring the array to make parsing more efficient.
The array is pseudo random in the sense that groups of 1's and 0's tend to exist in bunches (but not necessarily always).
Given some index, i, what is the most efficient way to find the group of at least n zeros closest to A[i]? For the easy case, take n = 1.
EDIT: Groups should have AT LEAST n zeros. Again, for the easy case, that means at least 1 zero.
EDIT2: This search will be performed O(n) times, where n is the size of the array. (Specifically, it is n/c times, where c is some fixed duration.)
In this solution I organize the data so that you can use a binary search O(log n) to find the nearest group of at least a certain size.
I first create groups of zeros from the array, then I put each group of zeros into lists containing all groups of size s or larger, so that when you want to find the nearest group of size s or more you just run a binary search in the list that has all groups with a size of s or greater.
The downside is in the pre-processing of putting the groups into the lists, with O(n * m) (I think, please check me) time and space efficiency where n is the number of groups of zeros, and m is the max size of the groups, though in reality the efficiency is probably better.
Here is the code:
public static class Group {
final public int x1;
final public int x2;
final public int size;
public Group(int x1, int x2) {
assert x1 <= x2;
this.x1 = x1;
this.x2 = x2;
this.size = x2 - x1 + 1;
}
public static final List<Group> getGroupsOfZeros(byte[] arr) {
List<Group> listOfGroups = new ArrayList<>();
for (int i = 0; i < arr.length; i++) {
if (arr[i] == 0) {
int x1 = i;
for (++i; i < arr.length; i++)
if (arr[i] != 0)
break;
int x2 = i - 1;
listOfGroups.add(new Group(x1, x2));
}
}
return Collections.unmodifiableList(listOfGroups);
}
public static final Group binarySearchNearest(int i, List<Group> list) {
{ // edge cases
Group firstGroup = list.get(0);
if (i <= firstGroup.x2)
return firstGroup;
Group lastGroup = list.get(list.size() - 1);
if (i >= lastGroup.x1)
return lastGroup;
}
int lo = 0;
int hi = list.size() - 1;
while (lo <= hi) {
int mid = (hi + lo) / 2;
Group currGroup = list.get(mid);
if (i < currGroup.x1) {
hi = mid - 1;
} else if (i > currGroup.x2) {
lo = mid + 1;
} else {
// x1 <= i <= x2
return currGroup;
}
}
// intentionally swapped because: lo == hi + 1
Group lowGroup = list.get(hi);
Group highGroup = list.get(lo);
return (i - lowGroup.x2) < (highGroup.x1 - i) ? lowGroup : highGroup;
}
}
NOTE: GroupsBySize can be improved, as described by @maraca, to only contain a list of Groups for each distinct group size. I'll update tomorrow.
public static class GroupsBySize {
private List<List<Group>> listOfGroupsBySize = new ArrayList<>();
public GroupsBySize(List<Group> groups) {
for (Group group : groups) {
// ensure the internal list can hold groups up to this size
while (listOfGroupsBySize.size() < group.size) {
listOfGroupsBySize.add(new ArrayList<Group>());
}
// add group to all lists up to its size
for (int i = 0; i < group.size; i++) {
listOfGroupsBySize.get(i).add(group);
}
}
}
public final Group getNearestGroupOfAtLeastSize(int index, int atLeastSize) {
if (atLeastSize < 1)
throw new IllegalArgumentException("group size must be greater than 0");
List<Group> groupsOfAtLeastSize = listOfGroupsBySize.get(atLeastSize - 1);
return Group.binarySearchNearest(index, groupsOfAtLeastSize);
}
}
public static void main(String[] args) {
byte[] byteArray = {0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0}; // example array from the question
List<Group> groups = Group.getGroupsOfZeros(byteArray);
GroupsBySize groupsBySize = new GroupsBySize(groups);
int index = 12;
int atLeastSize = 5;
Group g = groupsBySize.getNearestGroupOfAtLeastSize(index, atLeastSize);
System.out.println("nearest group is (" + g.x1 + ":" + g.x2 + ") of size " + g.size);
}
If you have n queries on an array of size n, then the naive approach would take O(n^2) time.
You can optimize this by observing that the number of distinct group sizes is on the order of sqrt(n): we get the most distinct group sizes if there is one group of size 1, one of size 2, one of size 3 and so on; since 1 + 2 + 3 + ... + k is k * (k + 1) / 2, i.e. on the order of k^2, and the array only has size n, the number of distinct group sizes is on the order of sqrt(n).
1. create an integer array of size n to denote which group sizes are present how many times
2. create a list for the 0-groups, each element should contain the group size and starting index
3. scan the array, add the 0-groups to the list and update the present group sizes
4. create an array for the different group sizes, each entry should contain the group size and an array with the start indices of the groups
5. create an integer array or a map which tells you which group size is at which index by scanning the array of the present group sizes
6. go through the list of 0-groups and fill the start index arrays created at 4.
We end up with an array which takes O(n) space, takes O(n) time to create and contains all present group sizes in order, additionally each entry has an array with the start indices of the groups of that size.
To answer a query we can do a binary search on the start indices of all groups greater or equal than the given minimum group size. This takes O(log(n)*sqrt(n)) and we do it n times, so over all it would take O(n*log(n)*sqrt(n)) = O(n^1.5*log(n)) which is better than O(n^2).
I think you can get it down to O(n^1.5) by creating a structure which has all distinct group sizes but contains not only the groups of that size, but also the groups that are bigger than that size. This would be the time complexity to create the structure and answering all the n queries would be faster O(n*log(sqrt(n))*log(n)) I think, so it doesn't matter.
example:
[0 1 1 1 1 0 0 0 0 0 0 0 1 1 1 0 0 1 0 0] -- 0-indexed array
hashmap = {1:[0], 2:[15, 18], 7:[5]}
search(i = 7, n = 2) {
binary search in {2:[15, 18], 7:[5]}
return min(15, 5)
}
what is the most efficient way to find the group of at least n zeros closest to A[i]
If we are not limited in preprocessing time and resources, the most efficient way would seem to be O(1) time and O(n * sqrt n) space, storing the answers to all possible queries. (To accomplish that, run the algorithm below with a list of all possible queries, that is each distinct zero-group size in the array paired with each index.)
If we are provided with all the n / c queries at once, we can produce the complete result set in O(n log n) time.
Traverse once from the left and once from the right. For each traversal, start with a balanced binary tree with our queries, sorted by zero-group-size (the n in the query), where each node has a sorted list of the query indexes (all i's with this particular n).
At each iteration, when a zero-group is registered, update all queries with n equal and lower than this zero-group size, removing all equal and lower indexes from the node and keeping the records for them (since the index list is sorted, we just remove the head of the list while it's equal or lower than the current index), and storing the current index of the zero-group in the node as well (the "last seen" zero-group-index). If no i's are left in the node, remove it.
After the traversal, assign each node's "last seen" zero-group-index to any remaining i's in that node. Now we have all the answers for this traversal. (Any queries left in the tree have no answer.) In the opposite traversal, if any query comes up with a better (closer) answer, update it in the final record.

Reconstructing the list of items from a space optimized 0/1 knapsack implementation

A space optimization for the 0/1 knapsack dynamic programming algorithm is to use a 1-d array (say, A) of size equal to the knapsack capacity, and simply overwrite A[w] (if required) at each iteration i, where A[w] denotes the total value if the first i items are considered and knapsack capacity is w.
If this optimization is used, is there a way to reconstruct the list of items picked, perhaps by saving some extra information at each iteration of the DP algorithm? For example, in the Bellman Ford Algorithm a similar space optimization can be implemented, and the shortest path can still be reconstructed as long as we keep a list of the predecessor pointers, ie the last hop (or first, depending on if a source/destination oriented algorithm is being written).
For reference, here is my C++ function for the 0/1 knapsack problem using dynamic programming where I construct a 2-d vector ans such that ans[i][j] denotes the total value considering the first i items and knapsack capacity j. I reconstruct the items picked by reverse traversing this vector ans:
void knapsack(vector<int> v, vector<int> w, int cap) {
    //v[i]=value of item i-1
    //w[i]=weight of item i-1, cap=knapsack capacity
    //ans[i][j]=total value if considering 1st i items and capacity j
    vector<vector<int> > ans(v.size() + 1, vector<int>(cap + 1));
    //value with 0 items is 0
    ans[0] = vector<int>(cap + 1, 0);
    //value with 0 capacity is 0
    for (unsigned i = 1; i < v.size() + 1; i++) {
        ans[i][0] = 0;
    }
    //dp (check x < w[i-1] first so we never index ans with a negative capacity)
    for (unsigned i = 1; i < v.size() + 1; i++) {
        for (int x = 1; x < cap + 1; x++) {
            if (x < w[i-1] || ans[i-1][x] >= ans[i-1][x-w[i-1]] + v[i-1])
                ans[i][x] = ans[i-1][x];
            else {
                ans[i][x] = ans[i-1][x-w[i-1]] + v[i-1];
            }
        }
    }
    cout << "Total value: " << ans[v.size()][cap] << endl;
    //reconstruction
    cout << "Items to carry: \n";
    for (unsigned i = v.size(); i > 0; i--) {
        for (int x = cap; x > 0; x--) {
            if (ans[i][x] == ans[i-1][x]) //item i not in knapsack
                break;
            else if (ans[i][x] == ans[i-1][x-w[i-1]] + v[i-1]) { //item i in knapsack
                cap -= w[i-1];
                cout << i << "(" << v[i-1] << "), ";
                break;
            }
        }
    }
    cout << endl;
}
The following is a C++ implementation of yildizkabaran's answer. It adapts Hirschberg's clever divide & conquer idea to compute the answer to a knapsack instance with n items and capacity c in O(nc) time and just O(c) space:
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
using namespace std;

// Returns a vector of (cost, elem) pairs.
vector<pair<int, int>> optimal_cost(vector<int> const& v, vector<int> const& w, int cap) {
    vector<pair<int, int>> dp(cap + 1, { 0, -1 });
    for (int i = 0; i < size(v); ++i) {
        for (int j = cap; j >= 0; --j) {
            if (w[i] <= j && dp[j].first < dp[j - w[i]].first + v[i]) {
                dp[j] = { dp[j - w[i]].first + v[i], i };
            }
        }
    }
    return dp;
}

// Returns a vector of item labels corresponding to an optimal solution, in increasing order.
vector<int> knapsack_hirschberg(vector<int> const& v, vector<int> const& w, int cap, int offset = 0) {
    if (empty(v)) {
        return {};
    }
    int mid = size(v) / 2;
    auto subSol1 = optimal_cost(vector<int>(begin(v), begin(v) + mid), vector<int>(begin(w), begin(w) + mid), cap);
    auto subSol2 = optimal_cost(vector<int>(begin(v) + mid, end(v)), vector<int>(begin(w) + mid, end(w)), cap);
    pair<int, int> best = { -1, -1 };
    for (int i = 0; i <= cap; ++i) {
        best = max(best, { subSol1[i].first + subSol2[cap - i].first, i });
    }
    vector<int> solution;
    if (subSol1[best.second].second != -1) {
        int iChosen = subSol1[best.second].second;
        solution = knapsack_hirschberg(vector<int>(begin(v), begin(v) + iChosen), vector<int>(begin(w), begin(w) + iChosen), best.second - w[iChosen], offset);
        solution.push_back(subSol1[best.second].second + offset);
    }
    if (subSol2[cap - best.second].second != -1) {
        int iChosen = mid + subSol2[cap - best.second].second;
        auto subSolution = knapsack_hirschberg(vector<int>(begin(v) + mid, begin(v) + iChosen), vector<int>(begin(w) + mid, begin(w) + iChosen), cap - best.second - w[iChosen], offset + mid);
        copy(begin(subSolution), end(subSolution), back_inserter(solution));
        solution.push_back(iChosen + offset);
    }
    return solution;
}
Even though this is an old question I recently ran into the same problem so I figured I would write my solution here. What you need is Hirschberg's algorithm. Although this algorithm is written for reconstructing edit distances, the same principle applies here. The idea is that when searching for n items in capacity c, the knapsack state at the (n/2)th item corresponding to the final maximum value is determined in the first scan. Let's call this state weight_m and value_m. This can be done by keeping track of an additional 1d array of size c, so the memory is still O(c). Then the problem is divided into two parts: items 0 to n/2 with a capacity of weight_m, and items n/2 to n with a capacity of c-weight_m. The reduced problems in total are of size nc/2. Continuing this approach we can determine the knapsack state (occupied weight and current value) after each item, after which we can simply check to see which items were included. This algorithm completes in O(2nc) while using O(c) memory, so in terms of big-O nothing is changed even though the algorithm is at least twice as slow. I hope this helps anyone who is facing a similar problem.
To my understanding, with the proposed solution, it is effectively impossible to obtain the set of involved items for a certain objective value. The set of items can be obtained by either generating the discarded rows again or maintaining a suitable auxiliary data structure. This could be done by associating each entry in A with the list of items from which it was generated. However, this would require more memory than the initially proposed solution. Approaches for backtracking in knapsack problems are also briefly discussed in this journal paper.
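As a rough sketch of that auxiliary-data idea (mine, not from the paper): keep a bitset of chosen items next to each entry of the 1-d array, so reconstruction stays possible at the cost of roughly O(capacity * number of items / wordsize) extra bits. All names below are assumed:

import java.util.BitSet;

public class KnapsackWithItemSets {
    static BitSet solve(int[] w, int[] v, int capacity) {
        long[] best = new long[capacity + 1];
        BitSet[] chosen = new BitSet[capacity + 1];   // chosen[j] = items behind best[j]
        for (int j = 0; j <= capacity; j++) chosen[j] = new BitSet();
        for (int i = 0; i < w.length; i++) {
            for (int j = capacity; j >= w[i]; j--) {  // usual reversed capacity loop
                if (best[j - w[i]] + v[i] > best[j]) {
                    best[j] = best[j - w[i]] + v[i];
                    chosen[j] = (BitSet) chosen[j - w[i]].clone();
                    chosen[j].set(i);                 // remember that item i was taken
                }
            }
        }
        return chosen[capacity];                      // indices of the items in an optimal solution
    }
}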

What algorithm is this? Best way to distribute limited resources

I recently saw this question on a programming challenge, and I'd like to know which well-known CS algorithm this resembles. I implemented a crude solution. I know there must be a better way to do it, but I'm not sure of the terms to search for. It seems like a variation of the knapsack problem... but there are enough differences that I'm a bit stumped.
Problem:
There are 3 cities (A, B, C) with populations (100, 100, 200). You can build 4 hospitals. Build the hospitals so that the maximum number of people visiting any one hospital is minimized.
In this example, the answer would be: build 1 in A, 1 in B, and 2 in C. This means that each hospital serves 100 people (optimal solution).
If you were to distribute the hospitals as 1 in A, 2 in B, and 1 in C, for example, you would average (100, 50, 200), which gives you a worst case of 200 (not the optimal solution).
Thanks.
Addendum:
To simplify the problem, the # of hospitals will always be >= the # of cities. Every city should have at least 1 hospital.
Assign a hospital to each city
While hospitals left
    Work out the population-to-hospital ratio for each city
    Assign a hospital to the one with the highest ratio
Loop
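A small Java sketch of this greedy loop (assuming a population array pop and a total hospital count, with at least one hospital per city as in the question's addendum), using a priority queue keyed on the current population-to-hospital ratio:

import java.util.PriorityQueue;

public class GreedyHospitals {
    // returns the largest number of people any single hospital has to serve
    static double assign(int[] pop, int hospitals) {
        // each entry = {population, hospitals assigned}; worst ratio first
        PriorityQueue<int[]> pq = new PriorityQueue<>(
                (a, b) -> Double.compare((double) b[0] / b[1], (double) a[0] / a[1]));
        for (int p : pop) pq.add(new int[]{p, 1});          // every city gets one hospital
        for (int extra = hospitals - pop.length; extra > 0; extra--) {
            int[] worst = pq.poll();
            worst[1]++;                                      // next hospital goes to the worst-off city
            pq.add(worst);
        }
        double worstRatio = 0;
        for (int[] city : pq) worstRatio = Math.max(worstRatio, (double) city[0] / city[1]);
        return worstRatio;
    }
}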
This problem can be solved by using binary search: we search for the minimum achievable value of the maximum number of people served by any single hospital.
Pseudo code:
int start = 1;
int end = total population
while (start <= end)
    int mid = (start + end) / 2;
    for (each city)
        calculate the number of hospitals needed to achieve mid: ceil(population / mid)
    if (total number of hospitals needed <= number of available hospitals)
        end = mid - 1;   // mid is feasible, try a smaller maximum
    else
        start = mid + 1;
Time complexity will be O(n log m), where n is the number of cities and m is the total population.
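A short Java sketch of that binary search on the answer (the mid being tested is the allowed maximum number of people per hospital, and each city needs ceil(population / mid) hospitals to stay under it; names are assumed):

public class BinarySearchHospitals {
    static int minMaxPeoplePerHospital(int[] pop, int hospitals) {
        int lo = 1, hi = 0;
        for (int p : pop) hi = Math.max(hi, p);              // answer lies in [1, max population]
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            long needed = 0;
            for (int p : pop) needed += (p + mid - 1) / mid; // ceil(p / mid) hospitals for this city
            if (needed <= hospitals) hi = mid;               // feasible: try a smaller maximum
            else lo = mid + 1;
        }
        return lo;
    }
}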
This is an example of a problem which can be solved using dynamic programming. The following working Java code solves this problem in O(M * N^2) time, where M = number of cities and N = total number of hospitals.
static int[] arr = new int[3]; // city populations (assumed declaration, not shown in the original post)
public void run(){
arr[0] = 100;
arr[1] = 100;
arr[2] = 200;
System.out.println(minCost(0, 4));
printBestAllocation(0, 4, minCost(0, 4));
}
static HashMap<String, Integer> map = new HashMap<String, Integer>();
// prints the allocation of hospitals from the ith city onwards when there are n hospitals and the answer for this subproblem is 'ans'
static void printBestAllocation(int i, int n, int ans){
if(i>=arr.length){
return;
}
if(n<=0){
throw new RuntimeException();
}
int remainingCities = arr.length - i - 1;
for(int place=1; place<=n-remainingCities; place++){
if(arr[i] % place == 0){
int ppl = Math.max(arr[i] / place, minCost(i+1, n-place));
if(ppl == ans){
System.out.print(place + " ");
printBestAllocation(i+1, n-place, minCost(i+1, n-place));
return;
}
}else{
int ppl = Math.max(arr[i] / place + 1, minCost(i+1, n-place));
if(ppl==ans){
System.out.print(place + " ");
printBestAllocation(i+1, n-place, minCost(i+1, n-place));
return;
}
}
}
throw new RuntimeException("Buggy code. If this exception is raised");
}
// Returns the maximum number of people that will be visiting a hospital for the best allocation of n hospitals from the ith city onwards.
static int minCost(int i, int n){
if(i>=arr.length){
return 0;
}
if(n<=0){
throw new RuntimeException();
}
String s = i + " " + n;
if(map.containsKey(s)){
return map.get(s);
}
int remainingCities = arr.length - i - 1;
int best = Integer.MAX_VALUE;
for(int place=1; place<=n-remainingCities; place++){
int ppl;
if(arr[i] % place==0){
ppl = Math.max(arr[i] / place, minCost(i+1, n-place));
}else{
ppl = Math.max(arr[i] / place + 1, minCost(i+1, n-place));
}
best = Math.min(best, ppl);
}
map.put(s, best);
return best;
}
States will be (i, n) where i represents the i th city and n represents the number of hospitals available. It represents the maximum number of people that would be visiting any hospital for the best allocation of n hospitals from the i th city onwards to the end. So the answer would be for the state (0, 4) for the example you have in your question.
So now at each city you can place a maximum of
maxHospitals = n-remainingCities hospitals, where
remainingCities = totalCities-i-1.
So start by placing at least 1 hospital at that city, up to maxHospitals, and then recur for the remaining smaller subproblems.
Number of states = O(M * N)
Time per state = O(N)
Therefore, time complexity = O(M * N^2)

Return the number of elements of an array that is the most "expensive"

I recently stumbled upon an interesting problem, and I am wondering if my solution is optimal.
You are given an array of zeros and ones. The goal is to return the amount of zeros and the amount of ones in the most expensive sub-array. The cost of an array is the amount of 1s divided by the amount of 0s. In case there are no zeros in the sub-array, the cost is zero.
At first I tried brute-forcing, but for an array of 10,000 elements it was far too slow and I ran out of memory.
My second idea was, instead of creating those sub-arrays, to remember the start and the end of the sub-array. That way I saved a lot of memory, but the complexity was still O(n^2).
The final solution I came up with is, I think, O(n). It goes like this:
Start at the beginning of the array and, for each element, calculate the cost of the sub-array starting at the first element and ending at the current index. So we would start with a sub-array consisting of the first element, then the first and second, etc. Since the only thing we need to calculate the cost is the amount of 1s and 0s in the sub-array, I could find the optimal end of the sub-array.
The second step was to start from the end of the sub-array from step one, and repeat the same to find the optimal beginning. That way I am sure that there is no better combination in the whole array.
Is this solution correct? If not, is there a counter-example that will show that this solution is incorrect?
Edit
For clarity:
Let's say our input array is 0101.
There are 10 subarrays:
0,1,0,1,01,10,01,010,101 and 0101.
The cost of the most expensive subarray would be 2 since 101 is the most expensive subarray. So the algorithm should return 1,2
Edit 2
There is one more thing that I forgot, if 2 sub-arrays have the same cost, the longer one is "more expensive".
Let me sketch a proof for my assumption:
(a = whole array, *=zero or more, +=one or more, {n}=exactly n)
Cases a=0* and a=1+ : c=0
Cases a=01+ and a=1+0 : conforms to 1*0{1,2}1*, a is optimum
For the normal case, a contains one or more 0s and 1s.
This means there is some optimum sub-array of non-zero cost.
(S) Assume s is an optimum sub-array of a.
It contains one or more zeros. (Otherwise its cost would be zero).
(T) Let t be the longest `1*0{1,2}1*` sequence within s
(and among the equally long the one with with most 1s).
(Note: There is always one such, e.g. `10` or `01`.)
Let N be the number of 1s in t.
Now, we prove that always t = s.
By showing it is not possible to add adjacent parts of s to t if (S).
(E) Assume t shorter than s.
We cannot add 1s at either side, otherwise not (T).
For each 0 we add from s, we have to add at least N more 1s
later to get at least the same cost as our `1*0+1*`.
This means: We have to add at least one run of N 1s.
If we add some run of N+1, N+2 ... somewhere than not (T).
If we add consecutive zeros, we need to compensate
with longer runs of 1s, thus not (T).
This leaves us with the only option of adding single zeros and runs of N 1s each.
This would give (symmetry) `1{n}*0{1,2}1{m}01{n+m}...`
If m>0 then `1{m}01{n+m}` is longer than `1{n}0{1,2}1{m}`, thus not (T).
If m=0 then we get `1{n}001{n}`, thus not (T).
So assumption (E) must be wrong.
Conclusion: The optimum sub-array must conform to 1*0{1,2}1*.
Here is my O(n) impl in Java according to the assumption in my last comment (1*01* or 1*001*):
public class Q19596345 {
public static void main(String[] args) {
try {
String array = "0101001110111100111111001111110";
System.out.println("array=" + array);
SubArray current = new SubArray();
current.array = array;
SubArray best = (SubArray) current.clone();
for (int i = 0; i < array.length(); i++) {
current.accept(array.charAt(i));
SubArray candidate = (SubArray) current.clone();
candidate.trim();
if (candidate.cost() > best.cost()) {
best = candidate;
System.out.println("better: " + candidate);
}
}
System.out.println("best: " + best);
} catch (Exception ex) { ex.printStackTrace(System.err); }
}
static class SubArray implements Cloneable {
String array;
int start, leftOnes, zeros, rightOnes;
// optimize 1*0*1* by cutting
void trim() {
if (zeros > 1) {
if (leftOnes < rightOnes) {
start += leftOnes + (zeros - 1);
leftOnes = 0;
zeros = 1;
} else if (leftOnes > rightOnes) {
zeros = 1;
rightOnes = 0;
}
}
}
double cost() {
if (zeros == 0) return 0;
else return (leftOnes + rightOnes) / (double) zeros +
(leftOnes + zeros + rightOnes) * 0.00001;
}
void accept(char c) {
if (c == '1') {
if (zeros == 0) leftOnes++;
else rightOnes++;
} else {
if (rightOnes > 0) {
start += leftOnes + zeros;
leftOnes = rightOnes;
zeros = 0;
rightOnes = 0;
}
zeros++;
}
}
public Object clone() throws CloneNotSupportedException { return super.clone(); }
public String toString() { return String.format("%s at %d with cost %.3f with zeros,ones=%d,%d",
array.substring(start, start + leftOnes + zeros + rightOnes), start, cost(), zeros, leftOnes + rightOnes);
}
}
}
If we can show the max array is always 1+0+1+, 1+0, or 01+ (regular expression notation), then we can calculate the number of runs.
So for the array (010011), we have (always starting with a run of 1s)
0,1,1,2,2
so the ratios are (0, 1, 0.3, 1.5, 1), which leads to an array of 10011 as the final result, ignoring the one runs
Cost of the left edge is 0
Cost of the right edge is 2
So in this case, the right edge is the correct answer -- 011
I haven't yet been able to come up with a counterexample, but the proof isn't obvious either. Hopefully we can crowd source one :)
The degenerate cases are simpler
All 1's and 0's are obvious, as they all have the same cost.
A string of just 1+,0+ or vice versa is all the 1's and a single 0.
How about this? As a C# programmer, I am thinking we can use something like Dictionary of <int,int,int>.
The first int would be used as the key, the second as the sub-array number, and the third for the elements of the sub-array.
For your example
key|Sub-array number|elements
1|1|0
2|2|1
3|3|0
4|4|1
5|5|0
6|5|1
7|6|1
8|6|0
9|7|0
10|7|1
11|8|0
12|8|1
13|8|0
14|9|1
15|9|0
16|9|1
17|10|0
18|10|1
19|10|0
20|10|1
Then you can run through the dictionary and store the highest in a variable.
var maxcost=0
var arrnumber=1;
var zeros=0;
var ones=0;
var cost=0;
for (var i = 1; i <= 20; i++)
{
    if (dictionary.arraynumber[i] != dictionary.arraynumber[i-1])
    {
        // compare before resetting, otherwise the finished sub-array's cost is lost
        if (cost > maxcost)
        {
            maxcost = cost;
        }
        zeros = 0;
        ones = 0;
        cost = 0;
    }
else
{
if (dictionary.values[i]==0)
{
zeros++;
}
else
{
ones++;
}
cost=ones/zeros;
}
}
This will be log(n^2), I hope, and you just need about 3n memory for the array?
I think we can modify the maximal subarray problem to fit to this question. Here's my attempt at it:
void FindMaxRatio(int[] array, out int maxNumOnes, out int maxNumZeros)
{
maxNumOnes = 0;
maxNumZeros = 0;
int numOnes = 0;
int numZeros = 0;
double maxSoFar = 0;
double maxEndingHere = 0;
for(int i = 0; i < array.Length; i++){
if(array[i] == 0) numZeros++;
if(array[i] == 1) numOnes++;
if(numZeros == 0) maxEndingHere = 0;
else maxEndingHere = numOnes/(double)numZeros;
if(maxEndingHere < 1 && maxEndingHere > 0) {
numZeros = 0;
numOnes = 0;
}
if(maxSoFar < maxEndingHere){
maxSoFar = maxEndingHere;
maxNumOnes = numOnes;
maxNumZeros = numZeros;
}
}
}
I think the key is that if the ratio is less than 1, we can disregard that subsequence because
there will always be a subsequence 01 or 10 whose ratio is 1. This seemed to work for 010011.
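Since several of the answers here are heuristics, a small brute-force reference implementation (quadratic, fine for short inputs) can be useful for cross-checking them; this is only a testing aid, using the tie-break from the question (the longer sub-array wins on equal cost):

public class MostExpensiveSubarrayBruteForce {
    // returns {zeros, ones} of the most expensive sub-array of a 0/1 string
    static int[] bruteForce(String s) {
        double bestCost = -1;
        int bestLen = -1;
        int[] best = {0, 0};
        for (int i = 0; i < s.length(); i++) {
            int ones = 0, zeros = 0;
            for (int j = i; j < s.length(); j++) {
                if (s.charAt(j) == '1') ones++; else zeros++;
                double cost = zeros == 0 ? 0 : (double) ones / zeros;
                int len = j - i + 1;
                if (cost > bestCost || (cost == bestCost && len > bestLen)) {
                    bestCost = cost;
                    bestLen = len;
                    best = new int[]{zeros, ones};
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] r = bruteForce("0101");
        System.out.println(r[0] + "," + r[1]);   // expect 1,2 as in the question's example
    }
}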

Shuffle list, ensuring that no item remains in same position

I want to shuffle a list of unique items, but not do an entirely random shuffle. I need to be sure that no element in the shuffled list is at the same position as in the original list. Thus, if the original list is (A, B, C, D, E), this result would be OK: (C, D, B, E, A), but this one would not: (C, E, A, D, B) because "D" is still the fourth item. The list will have at most seven items. Extreme efficiency is not a consideration. I think this modification to Fisher/Yates does the trick, but I can't prove it mathematically:
function shuffle(data) {
    for (var i = 0; i < data.length - 1; i++) {
        var j = i + 1 + Math.floor(Math.random() * (data.length - i - 1));
        var temp = data[j];
        data[j] = data[i];
        data[i] = temp;
    }
}
You are looking for a derangement of your entries.
First of all, your algorithm works in the sense that it outputs a random derangement, ie a permutation with no fixed point. However it has an enormous flaw (which you might not mind, but is worth keeping in mind): some derangements cannot be obtained with your algorithm. In other words, it gives probability zero to some possible derangements, so the resulting distribution is definitely not uniformly random.
One possible solution, as suggested in the comments, would be to use a rejection algorithm:
pick a permutation uniformly at random
if it has no fixed points, return it
otherwise retry
Asymptotically, the probability of obtaining a derangement is close to 1/e ≈ 0.3679 (as seen in the Wikipedia article), which means that to obtain a derangement you will need to generate an average of e ≈ 2.718 permutations, which is quite costly.
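A minimal Java sketch of this rejection approach (shuffle uniformly with Fisher-Yates, retry while any element is still in its original position); it assumes the elements are distinct and that a derangement exists, i.e. the list has more than one element:

import java.util.Random;

public class DerangeByRejection {
    private static final Random RND = new Random();

    static <T> void derange(T[] a) {
        T[] original = a.clone();               // remember the starting positions
        do {
            // standard Fisher-Yates shuffle
            for (int i = a.length - 1; i > 0; i--) {
                int j = RND.nextInt(i + 1);
                T tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            }
        } while (hasFixedPoint(a, original));   // retry until no element stayed put
    }

    private static <T> boolean hasFixedPoint(T[] a, T[] original) {
        for (int i = 0; i < a.length; i++) {
            if (a[i].equals(original[i])) return true;
        }
        return false;
    }
}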
A better way to do that would be to reject at each step of the algorithm. In pseudocode, something like this (assuming the original array contains i at position i, ie a[i]==i):
for (i = 1 to n-1) {
    do {
        j = rand(i, n) // random integer from i to n inclusive
    } while a[j] == i // rejection part: placing a[j] at position i would create a fixed point
    swap a[i] a[j]
}
The main difference from your algorithm is that we allow j to be equal to i, but only if it does not produce a fixed point. It is slightly longer to execute (due to the rejection part), and demands that you be able to check if an entry is at its original place or not, but it has the advantage that it can produce every possible derangement (uniformly, for that matter).
I am guessing non-rejection algorithms should exist, but I would believe them to be less straight-forward.
Edit:
My algorithm is actually bad: you still have a chance of ending with the last point unshuffled, and the distribution is not random at all; see the marginal distributions of a simulation (the plot is not reproduced here).
An algorithm that produces uniformly distributed derangements can be found here, with some context on the problem, thorough explanations and analysis.
Second Edit:
Actually your algorithm is known as Sattolo's algorithm, and is known to produce all cycles with equal probability. So any derangement which is not a cycle but a product of several disjoint cycles cannot be obtained with the algorithm. For example, with four elements, the permutation that exchanges 1 and 2, and 3 and 4 is a derangement but not a cycle.
If you don't mind obtaining only cycles, then Sattolo's algorithm is the way to go, it's actually much faster than any uniform derangement algorithm, since no rejection is needed.
As @FelixCQ has mentioned, the shuffles you are looking for are called derangements. Constructing uniformly randomly distributed derangements is not a trivial problem, but some results are known in the literature. The most obvious way to construct derangements is by the rejection method: you generate uniformly randomly distributed permutations using an algorithm like Fisher-Yates and then reject permutations with fixed points. The average running time of that procedure is e*n + o(n), where e is Euler's number 2.71828... That would probably work in your case.
The other major approach for generating derangements is to use a recursive algorithm. However, unlike Fisher-Yates, we have two branches to the algorithm: the last item in the list can be swapped with another item (i.e., part of a two-cycle), or can be part of a larger cycle. So at each step, the recursive algorithm has to branch in order to generate all possible derangements. Furthermore, the decision of whether to take one branch or the other has to be made with the correct probabilities.
Let D(n) be the number of derangements of n items. At each stage, the number of branches taking the last item to two-cycles is (n-1)D(n-2), and the number of branches taking the last item to larger cycles is (n-1)D(n-1). This gives us a recursive way of calculating the number of derangements, namely D(n)=(n-1)(D(n-2)+D(n-1)), and gives us the probability of branching to a two-cycle at any stage, namely (n-1)D(n-2)/D(n-1).
Now we can construct derangements by deciding to which type of cycle the last element belongs, swapping the last element to one of the n-1 other positions, and repeating. It can be complicated to keep track of all the branching, however, so in 2008 some researchers developed a streamlined algorithm using those ideas. You can see a walkthrough at http://www.cs.upc.edu/~conrado/research/talks/analco08.pdf . The running time of the algorithm is proportional to 2n + O(log^2 n), a 36% improvement in speed over the rejection method.
I have implemented their algorithm in Java. Using longs works for n up to 22 or so. Using BigIntegers extends the algorithm to n=170 or so. Using BigIntegers and BigDecimals extends the algorithm to n=40000 or so (the limit depends on memory usage in the rest of the program).
package io.github.edoolittle.combinatorics;
import java.math.BigInteger;
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;
import java.util.HashMap;
import java.util.TreeMap;
public final class Derangements {
// cache calculated values to speed up recursive algorithm
private static HashMap<Integer,BigInteger> numberOfDerangementsMap
= new HashMap<Integer,BigInteger>();
private static int greatestNCached = -1;
// load numberOfDerangementsMap with initial values D(0)=1 and D(1)=0
static {
numberOfDerangementsMap.put(0,BigInteger.valueOf(1));
numberOfDerangementsMap.put(1,BigInteger.valueOf(0));
greatestNCached = 1;
}
private static Random rand = new Random();
// private default constructor so class isn't accidentally instantiated
private Derangements() { }
public static BigInteger numberOfDerangements(int n)
throws IllegalArgumentException {
if (numberOfDerangementsMap.containsKey(n)) {
return numberOfDerangementsMap.get(n);
} else if (n>=2) {
// pre-load the cache to avoid stack overflow (occurs near n=5000)
for (int i=greatestNCached+1; i<n; i++) numberOfDerangements(i);
greatestNCached = n-1;
// recursion for derangements: D(n) = (n-1)*(D(n-1) + D(n-2))
BigInteger Dn_1 = numberOfDerangements(n-1);
BigInteger Dn_2 = numberOfDerangements(n-2);
BigInteger Dn = (Dn_1.add(Dn_2)).multiply(BigInteger.valueOf(n-1));
numberOfDerangementsMap.put(n,Dn);
greatestNCached = n;
return Dn;
} else {
throw new IllegalArgumentException("argument must be >= 0 but was " + n);
}
}
public static int[] randomDerangement(int n)
throws IllegalArgumentException {
if (n<2)
throw new IllegalArgumentException("argument must be >= 2 but was " + n);
int[] result = new int[n];
boolean[] mark = new boolean[n];
for (int i=0; i<n; i++) {
result[i] = i;
mark[i] = false;
}
int unmarked = n;
for (int i=n-1; i>=0; i--) {
if (unmarked<2) break; // can't move anything else
if (mark[i]) continue; // can't move item at i if marked
// use the rejection method to generate random unmarked index j < i;
// this could be replaced by more straightforward technique
int j;
while (mark[j=rand.nextInt(i)]);
// swap two elements of the array
int temp = result[i];
result[i] = result[j];
result[j] = temp;
// mark position j as end of cycle with probability (u-1)D(u-2)/D(u)
double probability
= (new BigDecimal(numberOfDerangements(unmarked-2))).
multiply(new BigDecimal(unmarked-1)).
divide(new BigDecimal(numberOfDerangements(unmarked)),
MathContext.DECIMAL64).doubleValue();
if (rand.nextDouble() < probability) {
mark[j] = true;
unmarked--;
}
// position i now becomes out of play so we could mark it
//mark[i] = true;
// but we don't need to because loop won't touch it from now on
// however we do have to decrement unmarked
unmarked--;
}
return result;
}
// unit tests
public static void main(String[] args) {
// test derangement numbers D(i)
for (int i=0; i<100; i++) {
System.out.println("D(" + i + ") = " + numberOfDerangements(i));
}
System.out.println();
// test quantity (u-1)D_(u-2)/D_u for overflow, inaccuracy
for (int u=2; u<100; u++) {
double d = numberOfDerangements(u-2).doubleValue() * (u-1) /
numberOfDerangements(u).doubleValue();
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
System.out.println();
// test derangements for correctness, uniform distribution
int size = 5;
long reps = 10000000;
TreeMap<String,Integer> countMap = new TreeMap<String,Integer>();
System.out.println("Derangement\tCount");
System.out.println("-----------\t-----");
for (long rep = 0; rep < reps; rep++) {
int[] d = randomDerangement(size);
String s = "";
String sep = "";
if (size > 10) sep = " ";
for (int i=0; i<d.length; i++) {
s += d[i] + sep;
}
if (countMap.containsKey(s)) {
countMap.put(s,countMap.get(s)+1);
} else {
countMap.put(s,1);
}
}
for (String key : countMap.keySet()) {
System.out.println(key + "\t\t" + countMap.get(key));
}
System.out.println();
// large random derangement
int size1 = 1000;
System.out.println("Random derangement of " + size1 + " elements:");
int[] d1 = randomDerangement(size1);
for (int i=0; i<d1.length; i++) {
System.out.print(d1[i] + " ");
}
System.out.println();
System.out.println();
System.out.println("We start to run into memory issues around u=40000:");
{
// increase this number from 40000 to around 50000 to trigger
// out of memory-type exceptions
int u = 40003;
BigDecimal d = (new BigDecimal(numberOfDerangements(u-2))).
multiply(new BigDecimal(u-1)).
divide(new BigDecimal(numberOfDerangements(u)),MathContext.DECIMAL64);
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
}
}
In C++:
template <class T> void shuffle(std::vector<T>& arr)
{
    int size = arr.size();
    for (auto i = 1; i < size; i++)
    {
        int n = rand() % (size - i) + i;
        std::swap(arr[i-1], arr[n]);
    }
}
