I was asked this question in an interview, but couldn't figure it out and would like to know the answer.
Suppose we have a list like this:
1 7 8 6 1 1 5 0
I need to find an algorithm that pairs up adjacent numbers (each number can belong to at most one pair). The goal is to maximize the benefit, but only the first number of each pair is counted.
E.g., in the above, the optimal solution is:
{7,8} {6,1} {5,0}
so taking only the first number of each pair:
7 + 6 + 5 = 18.
I tried various greedy solutions, but they often pick {8,6}, which leads to a non-optimal solution.
Thoughts?
First, observe that it never makes sense to skip more than one number in a row*. Then, observe that the answer to this problem can be constructed by comparing two numbers:
The answer to the subproblem where you skip the first number, and
The answer to the subproblem where you keep the first number
Finally, observe that the answer for a sequence of only one number is zero, and the answer for a sequence of exactly two numbers is the first of the two.
With this information in hand, you can construct a recursive memoized solution to the problem, or a dynamic programming solution that starts at the back of the sequence and works toward the front, deciding at each position whether to pair the current number or skip it.
* Proof: assume you have a selection of pairs that produces the maximum sum and that it skips two adjacent numbers of the original sequence. Then you can add the pair formed by those two skipped numbers and improve on the answer.
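For concreteness, here is a minimal bottom-up sketch of the dynamic-programming variant described above (Python; the function name and dp layout are my own choices, not from the original answer):

def max_pair_sum(data):
    n = len(data)
    dp = [0] * (n + 1)  # dp[i] = best sum obtainable from the suffix data[i:]
    for i in range(n - 2, -1, -1):
        # either pair (data[i], data[i+1]) and count data[i], or skip data[i]
        dp[i] = max(data[i] + dp[i + 2], dp[i + 1])
    return dp[0]

print(max_pair_sum([1, 7, 8, 6, 1, 1, 5, 0]))  # prints 18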
A simple dynamic programming problem. Starting from a given index, we can either make a pair at the current index or skip to the next index:
int[] dp;        // dp[i] stores the result of the sub-problem starting at index i
boolean[] check; // check[i] marks whether sub-problem i has already been solved

public int solve(int index, int[] data) {
    if (index + 1 >= data.length) { // fewer than two elements left: no pair possible
        return 0;
    }
    if (check[index]) { // sub-problem already solved: return the memoized value
        return dp[index];
    }
    check[index] = true;
    // Either make a pair at this index, or skip to the next index
    int result = Math.max(data[index] + solve(index + 2, data), solve(index + 1, data));
    return dp[index] = result;
}
// Note: dp and check must be allocated with length data.length before the first call.
It's a dynamic programming problem, and the table can be optimised away.
def best_pairs(xs):
    b0, b1 = 0, max(0, xs[0])
    for i in xrange(2, len(xs)):
        b0, b1 = b1, max(b1, xs[i-1] + b0)
    return b1

print best_pairs(map(int, '1 7 8 6 1 1 5 0'.split()))
After each iteration, b1 is the best solution using elements up to and including i, and b0 is the best solution using elements up to and including i-1.
This is my solution in Java, hope it helps.
public static int getBestSolution(int[] a, int offset) {
    if (a.length - offset <= 1) // fewer than two elements left: no pair possible
        return 0;
    if (a.length - offset == 2) // exactly one pair left: take its first number
        return a[offset];
    // Either pair at this offset or skip it; note that without memoization
    // this plain recursion takes exponential time.
    return Math.max(a[offset] + getBestSolution(a, offset + 2),
                    getBestSolution(a, offset + 1));
}
Here is a DP formulation for an O(N) solution:
MaxPairSum(i) = max(arr[i] + MaxPairSum(i+2), MaxPairSum(i+1))
with MaxPairSum(i) = 0 whenever fewer than two elements remain (i >= N-1). MaxPairSum(i) is the max sum for the subarray (i, N).
So here is the question:
In a party there are n different-flavoured cakes, of volumes V1, V2, V3, ..., Vn. They need to be divided among the K people present at the party such that:
1. All members of the party get an equal volume of cake (say V, which is the value we are looking for).
2. Each member gets cake of a single flavour only (you cannot give parts of different-flavoured cakes to one member).
3. Some volume of cake will be wasted after distribution; we want to minimize the waste, or, equivalently, we are after a maximum distribution policy.
It is a given, known condition that if V is an optimal solution, then at least one cake X is divided by V without any volume left over, i.e., Vx mod V == 0.
I am trying to look for a solution with best time complexity (brute force will do it, but I need a quicker way).
Any suggestion would be appreciated.
Thanks
PS: It is not an assignment, it is an interview question. Here is the pseudocode for brute force:
int return_Max_volume(List volumeList)
{
    maxVolume = 0;
    minimalLeft = Integer.MAX_VALUE;
    for (Volume v : volumeList)
        for i = 1 to K people
            targetVolume = v / i;
            numberOfPeopleWhoCanGetCake = v1/targetVolume +
                v2/targetVolume + ... + vn/targetVolume
            if (numberOfPeopleWhoCanGetCake >= K)
                remainVolume = (v1 mod targetVolume) + (v2 mod targetVolume)
                    + (v3 mod targetVolume) + ... + (vn mod targetVolume)
                if (remainVolume < minimalLeft)
                    update maxVolume to be targetVolume;
                    update minimalLeft to be remainVolume
    return maxVolume
}
This is a somewhat classic programming-contest problem.
The answer is simple: do a basic binary search on volume V (the final answer).
(Note the title says M people, yet the problem description says K. I'll be using M)
Given a volume V during the search, you iterate through all of the cakes, calculating how many people each cake can "feed" with single-flavor slices (fed += floor(Vi/V)). If you reach M (or 'K') people "fed" before you're out of cakes, this means you can obviously also feed M people with any volume < V with whole single-flavor slices, by simply consuming the same amount of (smaller) slices from each cake. If you run out of cakes before reaching M slices, it means you cannot feed the people with any volume > V either, as that would consume even more cake than what you've already failed with. This satisfies the condition for a binary search, which will lead you to the highest volume V of single-flavor slices that can be given to M people.
The complexity is O(n * log((sum(Vi)/m)/eps) ). Breakdown: the binary search takes log((sum(Vi)/m)/eps) iterations, considering the upper bound of sum(Vi)/m cake for each person (when all the cakes get consumed perfectly). At each iteration, you have to pass through at most all N cakes. eps is the precision of your search and should be set low enough, no higher than the minimum non-zero difference between the volume of two cakes, divided by M*2, so as to guarantee a correct answer. Usually you can just set it to an absolute precision such as 1e-6 or 1e-9.
To speed things up for the average case, you should sort the cakes in decreasing order, so that when you are trying a large volume you can instantly discard all the trailing cakes whose volume is < V (e.g. you have one cake of volume 10^6 followed by a bunch of cakes of volume 1.0; if you're testing a slice volume of 2.0, as soon as you reach the first cake of volume 1.0 you can already return that this run failed to provide M slices).
Edit:
The search is actually done with floating point numbers, e.g.:
double mid, lo = 0, hi = sum(Vi)/people;
while (hi - lo > eps) {
    mid = (lo + hi)/2;
    if (works(mid)) lo = mid;
    else hi = mid;
}
final_V = lo;
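The works predicate is not spelled out above; here is a minimal Python sketch of one plausible version (my own, not from the original answer), using the decreasing-order early exit described earlier:

def works(volumes, m, v):
    # volumes must be sorted in decreasing order; v > 0
    fed = 0
    for vi in volumes:
        if vi < v:           # every remaining cake is smaller than one slice
            break
        fed += int(vi // v)  # whole single-flavour slices this cake yields
        if fed >= m:
            return True
    return False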
By the end, if you really need more precision than your chosen eps, you can simply take an extra O(n) step:
// (this step is exclusively to retrieve an exact answer from the final
// answer above, if a precision of 'eps' is not acceptable)
// 'best' starts at +infinity
foreach (cake_volume vi) {
    int slices = round(vi / final_V);
    double difference = abs(vi - (final_V * slices));
    if (difference < best) {
        best = difference;
        volume = vi;
        denominator = slices;
    }
}
// exact answer is volume/denominator
Here's the approach I would consider:
Let's assume that all of our cakes are sorted in the order of non-decreasing size, meaning that Vn is the largest cake and V1 is the smallest cake.
1. Generate the first intermediate solution by dividing only the largest cake between all k people. I.e. V = Vn / k.
2. Immediately discard all cakes that are smaller than V - any intermediate solution that involves these cakes is guaranteed to be worse than our intermediate solution from step 1. Now we are left with cakes Vb, ..., Vn, where b is greater than or equal to 1.
3. If all cakes got discarded except the biggest one, then we are done. V is the solution. END.
4. Since we have more than one cake left, let's improve our intermediate solution by redistributing some of the slices to the second biggest cake Vn-1, i.e. find the biggest value of V so that floor(Vn / V) + floor(Vn-1 / V) = k. This can be done by performing a binary search between the current value of V and the upper limit (Vn + Vn-1) / k, or by something more clever.
5. Again, just like we did in step 2, immediately discard all cakes that are smaller than V - any intermediate solution that involves these cakes is guaranteed to be worse than our intermediate solution from step 4.
6. If all cakes got discarded except the two biggest ones, then we are done. V is the solution. END.
7. Continue to involve the new "big" cakes in right-to-left direction, improve the intermediate solution, and continue to discard "small" cakes in left-to-right direction until all remaining cakes get used up.
P.S. The complexity of step 4 seems to be equivalent to the complexity of the entire problem, meaning that the above can be seen as an optimization approach, but not a real solution. Oh well, for what it is worth... :)
Here's one approach to a more efficient solution. Your brute force solution in essence generates an implicit list of possible volumes, filters it by feasibility, and returns the largest feasible one. We can modify it slightly to materialize the list and sort it, so that the first feasible volume found is the largest.
First task for you: find a way to produce the sorted list on demand. In other words, we should do O(n + m log n) work to generate the first m items.
Now, let's assume that the volumes appearing in the list are pairwise distinct. (We can remove this assumption later.) There's an interesting fact about how many people are served by the volume at position k. For example, with volumes 11, 13, 17 and 7 people, the list is 17, 13, 11, 17/2, 13/2, 17/3, 11/2, 13/3, 17/4, 11/3, 17/5, 13/4, 17/6, 11/4, 13/5, 17/7, 11/5, 13/6, 13/7, 11/6, 11/7.
Second task for you: simulate the brute force algorithm on this list. Exploit what you notice.
So here is the algorithm I thought would work:
1. Sort the volumes from largest to smallest.
2. Divide the largest cake among 1..k people, i.e., target = volume[0]/i, where i = 1,2,3,4,...,k.
3. If target would lead to a total number of pieces greater than k, decrease the number i and try again.
4. Find the first number i that results in a total number of pieces greater than or equal to k, while (i-1) leads to a total number of pieces less than k. Record this volume as baseVolume.
5. For each remaining cake, find the smallest fraction of its remaining volume divided by the number of people it serves, i.e., division = (V_cake - baseVolume*floor(V_cake/baseVolume)) / floor(V_cake/baseVolume).
6. Add this amount to the baseVolume (baseVolume += division) and recalculate the total number of pieces all volumes can provide. If the new volume results in fewer pieces, return the previous value; otherwise repeat from step 5.
Here is the Java code:
public static int getKOnLargestCake(Integer[] sortedVolumesList, int k) {
    int result = 0;
    for (int i = k; i >= 1; i--) {
        double volumeDividedByLargestCake = (double) sortedVolumesList[0] / i;
        int totalNumber = totalNumberOfCakesWithGivenVolume(sortedVolumesList, volumeDividedByLargestCake);
        if (totalNumber < k) {
            result = i + 1;
            break;
        }
    }
    return result;
}

public static int totalNumberOfCakesWithGivenVolume(Integer[] sortedVolumesList, double givenVolume) {
    int totalNumber = 0;
    for (int volume : sortedVolumesList) {
        totalNumber += (int) (volume / givenVolume);
    }
    return totalNumber;
}

public static double getMaxVolume(int[] volumesList, int k) {
    List<Integer> list = new ArrayList<Integer>();
    for (int i : volumesList) {
        list.add(i);
    }
    Collections.sort(list, Collections.reverseOrder());
    Integer[] sortedVolumesList = new Integer[list.size()];
    list.toArray(sortedVolumesList);
    int previousValidK = getKOnLargestCake(sortedVolumesList, k);
    double baseVolume = (double) sortedVolumesList[0] / (double) previousValidK;
    int totalNumberOfCakesAvailable = totalNumberOfCakesWithGivenVolume(sortedVolumesList, baseVolume);
    if (totalNumberOfCakesAvailable == k) {
        return baseVolume;
    }
    do {
        double minimumAmountAdded = minimumAmountAdded(sortedVolumesList, baseVolume);
        if (minimumAmountAdded == 0) {
            return baseVolume;
        }
        baseVolume += minimumAmountAdded;
        int newTotalNumber = totalNumberOfCakesWithGivenVolume(sortedVolumesList, baseVolume);
        if (newTotalNumber == k) {
            return baseVolume;
        } else if (newTotalNumber < k) {
            return (baseVolume - minimumAmountAdded);
        }
    } while (true);
}

public static double minimumAmountAdded(Integer[] sortedVolumesList, double volume) {
    double minimumAdded = Double.MAX_VALUE;
    for (Integer i : sortedVolumesList) {
        int assignedPeople = (int) (i / volume);
        if (assignedPeople == 0) {
            continue;
        }
        double leftPiece = (double) i - assignedPeople * volume;
        if (leftPiece == 0) {
            continue;
        }
        double division = leftPiece / (double) assignedPeople;
        if (division < minimumAdded) {
            minimumAdded = division;
        }
    }
    if (minimumAdded == Double.MAX_VALUE) {
        return 0;
    }
    return minimumAdded;
}
Any comments would be appreciated.
Thanks
I was asked a problem in an interview; this is a similar problem I found, so I thought of asking it here. The problem is:
There is a robot situated at (1,1) in an N x N grid. The robot can move in any direction: left, right, up, and down. I have also been given an integer k, which denotes the maximum number of steps in the path. I have to calculate the number of possible ways to move from (1,1) to (N,N) in k or fewer steps.
I know how to solve the simplified version of this problem, the one where moves are possible only in the right and down directions; that can be solved with dynamic programming. I tried applying the same technique here, but I don't think it can be solved using a 2-dimensional matrix. I tried a similar approach, counting the possible number of ways from the left, up, and right and summing them up in the down direction, but the problem is that I don't know the number of ways from the down direction, which should also be added, so I end up in a loop. I was able to solve this problem using recursion, recursing on (N,N,k) with calls for up, left, and k-1 and summing them up, but I think that is also not correct, and even if it were, it has exponential complexity. I found problems similar to this, so I wanted to know what a good approach for solving these types of problems would be.
Suppose you have an NxN matrix, where each cell gives you the number of ways to move from (1,1) to (i,j) in exactly k steps (some entries will be zero). You can now create an NxN matrix, where each cell gives you the number of ways to move from (1,1) to (i,j) in exactly k+1 steps - start off with the all-zero matrix, and then add in cell (i,j) of the previous matrix to cells (i+1, j), (i, j+1),... and so on.
The (N,N) entry in each of these matrices (for 0, 1, ..., k steps) gives you the number of ways to move from (1,1) to (N,N) in exactly that number of steps - all you have to do now is add them all together.
Here is an example for the 2x2 case, where steps outside the matrix are not allowed, and (1,1) is at the top left.

In 0 steps, you can only get to the (1,1) cell:

1 0
0 0

There is one path to (1,1). From here you can go down or right, so there are two different paths of length 1:

0 1
1 0

From the top right you can go left or down, and from the bottom left you can go right or up, so both cells have paths that can be extended in two ways, and they end up in the same two cells. We add two copies of the following, one from each non-zero cell:

1 0
0 1

giving us these totals for paths of length two:

2 0
0 2

There are two choices from each of the non-empty cells again, so we have much the same as before for paths of length three:

0 4
4 0

Two features of this are easy checks:

1) For each length of path, only two cells are non-zero, corresponding to the length of the path being odd or even.

2) The number of paths at each stage is a power of two, because each path corresponds to a choice at each step as to whether to go horizontally or vertically. (This only holds for this simple 2x2 case.)
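Here is a minimal Python sketch of this exact-step propagation (my own transcription of the idea above; note it uses 0-based indices, so the start is (0,0) and the goal is (n-1,n-1)):

def count_paths(n, k):
    # exact[i][j] = number of walks from the start to (i,j) in exactly `step` moves
    exact = [[0] * n for _ in range(n)]
    exact[0][0] = 1
    total = exact[n - 1][n - 1]  # counts the 0-step walk when n == 1
    for step in range(k):
        nxt = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if exact[i][j]:
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            nxt[i + di][j + dj] += exact[i][j]
        exact = nxt
        total += exact[n - 1][n - 1]
    return total

print(count_paths(2, 3))  # 2, matching the 2x2 example above (0 + 0 + 2 + 0)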
Update: This algorithm is incorrect. See the comments and mcdowella's answer. However, the corrected algorithm does not make a difference to the time complexity.
It can be done in O(k * N^2) time, at least. Pseudocode:
# grid[i,j] contains the number of ways we can get to i,j in at most n steps,
# where n is initially 0
grid := N by N array of 0s
grid[1,1] := 1
for n from 1 to k:
    old := grid
    for each cell i,j in grid:
        # cells outside the grid considered 0 here
        grid[i,j] := old[i,j] + old[i-1,j] + old[i+1,j] + old[i,j-1] + old[i,j+1]
return grid[N,N]
There might be an O(log k * (N*log N)^2) solution which is way more complex. Each iteration through the outer for loop is nothing but a convolution with a fixed kernel. So we can convolve the kernel with itself to get bigger kernels that fuse multiple iterations into one, and use FFT to compute the convolution.
Basically:

uniquePaths(row, column) = 0, if row > N or column > N
                         = 1, if row == N and column == N
                         = uniquePaths(row+1, column) + uniquePaths(row, column+1), otherwise

i.e., the solution has optimal substructure and overlapping subproblems, so it can be solved using dynamic programming. Below is a memoization (lazy/on-demand) version of it (a related question that basically returns the paths as well: Algorithm for finding all paths in a NxN grid; you may refer to my blog for more details: http://codingworkout.blogspot.com/2014/08/robot-in-grid-unique-paths.html)
private int GetUniquePaths_DP_Memoization_Lazy(int?[][] DP_Memoization_Lazy_Cache, int row, int column)
{
    int N = DP_Memoization_Lazy_Cache.Length - 1;
    if (row > N)
    {
        return 0;
    }
    if (column > N)
    {
        return 0;
    }
    if (DP_Memoization_Lazy_Cache[row][column] != null)
    {
        return DP_Memoization_Lazy_Cache[row][column].Value;
    }
    if ((row == N) && (column == N))
    {
        DP_Memoization_Lazy_Cache[N][N] = 1;
        return 1;
    }
    int pathsWhenMovedDown = this.GetUniquePaths_DP_Memoization_Lazy(DP_Memoization_Lazy_Cache,
        row + 1, column);
    int pathsWhenMovedRight = this.GetUniquePaths_DP_Memoization_Lazy(DP_Memoization_Lazy_Cache,
        row, column + 1);
    DP_Memoization_Lazy_Cache[row][column] = pathsWhenMovedDown + pathsWhenMovedRight;
    return DP_Memoization_Lazy_Cache[row][column].Value;
}

where the caller is

int GetUniquePaths_DP_Memoization_Lazy(int N)
{
    int?[][] DP_Memoization_Lazy_Cache = new int?[N + 1][];
    for (int i = 0; i <= N; i++)
    {
        DP_Memoization_Lazy_Cache[i] = new int?[N + 1];
        for (int j = 0; j <= N; j++)
        {
            DP_Memoization_Lazy_Cache[i][j] = null;
        }
    }
    this.GetUniquePaths_DP_Memoization_Lazy(DP_Memoization_Lazy_Cache, row: 1, column: 1);
    return DP_Memoization_Lazy_Cache[1][1].Value;
}
Unit Tests
[TestCategory(Constants.DynamicProgramming)]
public void RobotInGridTests()
{
    int p = this.GetNumberOfUniquePaths(3);
    Assert.AreEqual(p, 6);
    int p1 = this.GetUniquePaths_DP_Memoization_Lazy(3);
    Assert.AreEqual(p, p1);
    var p2 = this.GetUniquePaths(3);
    Assert.AreEqual(p1, p2.Length);
    foreach (var path in p2)
    {
        Debug.WriteLine("===================================================================");
        foreach (Tuple<int, int> t in path)
        {
            Debug.Write(string.Format("({0}, {1}), ", t.Item1, t.Item2));
        }
    }
    p = this.GetNumberOfUniquePaths(4);
    Assert.AreEqual(p, 20);
    p1 = this.GetUniquePaths_DP_Memoization_Lazy(4);
    Assert.AreEqual(p, p1);
    p2 = this.GetUniquePaths(4);
    Assert.AreEqual(p1, p2.Length);
    foreach (var path in p2)
    {
        Debug.WriteLine("===================================================================");
        foreach (Tuple<int, int> t in path)
        {
            Debug.Write(string.Format("({0}, {1}), ", t.Item1, t.Item2));
        }
    }
}
If there were no limit on the number of steps, there would be an infinite number of ways. This is because you can form a loop of positions, and thus infinite possibilities. For example, you can move from (0,0) to (0,1), then to (1,1), then (1,0), and back again to (0,0). Anyone can go round and round loops like this, so it is the bound of k steps that keeps the count finite.
For finding the position of a fraction in the Farey sequence, I tried to implement the algorithm given here http://www.math.harvard.edu/~corina/publications/farey.pdf under "initial algorithm", but I can't understand where I'm going wrong; I am not getting the correct answers. Could someone please point out my mistake?
E.g., for order n = 7 and the fractions 1/7 and 1/6 I get the same answer.
Here's what I've tried for a given order (n) and a fraction a/b:
sum = 0;
int A[100000];
A[1] = a;
for (i = 2; i <= n; i++)
    A[i] = i*a - a;
for (i = 2; i <= n; i++)
{
    for (j = i+i; j <= n; j += i)
        A[j] -= A[i];
}
for (i = 1; i <= n; i++)
    sum += A[i];
ans = sum/b;
Thanks.
Your algorithm doesn't use any particular properties of a and b. In the first part, every relevant entry of the array A is a multiple of a, but the factor is independent of a, b and n. Setting up the array ignoring the factor a, i.e. starting with A[1] = 1, A[i] = i-1 for 2 <= i <= n, after the nested loops, the array contains the totients, i.e. A[i] = phi(i), no matter what a, b, n are. The sum of the totients from 1 to n is the number of elements of the Farey sequence of order n (plus or minus 1, depending on which of 0/1 and 1/1 are included in the definition you use). So your answer is always the approximation (a*number of terms)/b, which is close but not exact.
I've not yet looked at how yours relates to the algorithm in the paper, check back for updates later.
Addendum: Finally had time to look at the paper. Your initialisation is not what they give. In their algorithm, A[q] is initialised to floor(x*q); for the rational x = a/b, the correct initialisation is
for (i = 1; i <= n; ++i) {
    A[i] = (a*i)/b;
}
In the remainder of your code, only ans = sum/b; has to be changed, to ans = sum;.
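Putting the corrected initialisation together with the rest of the algorithm, a short Python sketch (my own transcription, not from the paper) is:

def farey_rank(a, b, n):
    # position of a/b in the Farey sequence of order n, counting 1/n as position 1
    A = [0] * (n + 1)
    for q in range(1, n + 1):
        A[q] = (a * q) // b           # counts all p/q <= a/b, reduced or not
    for q in range(1, n + 1):
        for m in range(2 * q, n + 1, q):
            A[m] -= A[q]              # sieve away the non-reduced fractions
    return sum(A)

print(farey_rank(1, 7, 7))  # 1
print(farey_rank(1, 6, 7))  # 2, now distinct from 1/7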
A non-algorithmic way of finding the position t of a fraction in the Farey sequence of order n > 1 is shown in Remark 7.10(ii)(a) of the paper, under m := n - 1, where mu-bar stands for the number-theoretic Möbius function on positive integers, taking values from the set {-1, 0, 1}.
Here's my Java solution that works. Add head (0/1) and tail (1/1) nodes to a singly linked list.
Then start by passing headNode and tailNode, and set the required orderLevel.
public void generateSequence(Node leftNode, Node rightNode) {
    Fraction left = (Fraction) leftNode.getData();
    Fraction right = (Fraction) rightNode.getData();
    FractionNode midNode = null;
    // Mediant of the two neighbours: (a+c)/(b+d)
    int midNum = left.getNum() + right.getNum();
    int midDenom = left.getDenom() + right.getDenom();
    if (midDenom <= getMaxLevel()) { // mediant still belongs to this order
        Fraction middle = new Fraction(midNum, midDenom);
        midNode = new FractionNode(middle);
    }
    if (midNode != null) {
        // Insert the mediant between left and right, then refine the left half
        leftNode.setNext(midNode);
        midNode.setNext(rightNode);
        generateSequence(leftNode, midNode);
        count++;
    } else if (rightNode.next() != null) {
        // Interval cannot be refined further: move on to the next interval
        generateSequence(rightNode, rightNode.next());
    }
}
I have the following problem:
Given N objects (N < 30) of different values, each a multiple of a constant k (i.e. k, 2k, 3k, 4k, 6k, 8k, 12k, 16k, 24k and 32k), I need an algorithm that distributes all the items among M players (M <= 6) in such a way that the total value of the objects each player gets is as even as possible (in other words, I want to distribute all objects among all players in the fairest way possible).
EDIT: By fairest distribution I mean that the difference between the value of the objects any two players get is minimal.
Another similar case would be: I have N coins of different values and I need to divide them equally among M players; sometimes they don't divide exactly and I need to find the next best case of distribution (where no player is angry because another one got too much money).
I don't need (pseudo)code to solve this (also, this is not a homework :) ), but I'll appreciate any ideas or links to algorithms that could solve this.
Thanks!
The problem is strongly NP-complete (see the 3-partition problem; thanks Paul). This means there is no known way to guarantee an optimal solution in reasonable time.
Instead you'll want to go for a good approximate solution generator. These can often get very close to the optimal answer in very short time. I can recommend the simulated annealing technique, which you will also be able to use for a ton of other NP-complete problems.
The idea is this:
Distribute the items randomly.
Continually make random swaps between two random players, as long as it makes the system more fair, or only a little less fair (see the wiki for details).
Stop when you have something fair enough, or you have run out of time.
This solution is much stronger than the 'greedy' algorithms many suggest. The greedy algorithm is the one where you continuously add the largest remaining item to the 'poorest' player. An example of a test case where greedy fails (with three players) is [10,9,8,7,7,5,5].
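To see the failure concretely, here is a quick Python check (my own snippet, assuming three players): greedy ends with sums (15, 16, 20), while the split {10,7}, {9,8}, {7,5,5} achieves a perfectly fair (17, 17, 17):

def greedy(values, m):
    bins = [[] for _ in range(m)]
    for v in sorted(values, reverse=True):
        min(bins, key=sum).append(v)  # largest remaining item to the poorest player
    return sorted(map(sum, bins))

print(greedy([10, 9, 8, 7, 7, 5, 5], 3))  # [15, 16, 20]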
I did an implementation of SA for you. It follows the wiki article strictly, for educational purposes. If you optimize it, I would say a 100x improvement wouldn't be unrealistic.
from __future__ import division
import random, math

values = [10,9,8,7,7,5,5]
M = 3
kmax = 1000
emax = 0

def s0():
    s = [[] for i in xrange(M)]
    for v in values:
        random.choice(s).append(v)
    return s

def E(s):
    avg = sum(values)/M
    return sum(abs(avg-sum(p))**2 for p in s)

def neighbour(s):
    snew = [p[:] for p in s]
    while True:
        p1, p2 = random.sample(xrange(M), 2)
        if s[p1]: break
    item = random.randrange(len(s[p1]))
    snew[p2].append(snew[p1].pop(item))
    return snew

def P(e, enew, T):
    if enew < e: return 1
    return math.exp((e - enew) / T)

def temp(r):
    return (1-r)*100

s = s0()
e = E(s)
sbest = s
ebest = e
k = 0
while k < kmax and e > emax:
    snew = neighbour(s)
    enew = E(snew)
    if enew < ebest:
        sbest = snew; ebest = enew
    if P(e, enew, temp(k/kmax)) > random.random():
        s = snew; e = enew
    k += 1
print sbest
Update: After playing around with Branch'n'Bound, I now believe this method to be superior, as it gives perfect results for the N=30, M=6 case within a second. However I guess you could play around with the simulated annealing approach just as much.
The greedy solution suggested by a few people seems like the best option. I ran it a bunch of times with some random values, and it seems to get it right every time.
If it's not optimal, it's at the very least very close, and it runs in O(nm) or so (I can't be bothered to do the math right now)
C# Implementation:
static List<List<int>> Dist(int n, IList<int> values)
{
    var result = new List<List<int>>();
    for (int i = 1; i <= n; i++)
        result.Add(new List<int>());
    var sortedValues = values.OrderByDescending(val => val);
    foreach (int val in sortedValues)
    {
        var lowest = result.OrderBy(a => a.Sum()).First();
        lowest.Add(val);
    }
    return result;
}
How about this:
1. Order the k values.
2. Order the players.
3. Loop over the k values, giving the next one to the next player.
4. When you get to the end of the players, turn around and continue giving the k values to the players in the opposite direction.
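A quick Python sketch of that back-and-forth ("snake draft") idea, with names of my own choosing (note it yields (20, 16, 15) on [10,9,8,7,7,5,5] with three players, so it shares greedy's weakness on that input):

def snake_distribute(values, m):
    players = [[] for _ in range(m)]
    order = list(range(m)) + list(range(m - 1, -1, -1))  # 0..m-1, then back down
    for idx, v in enumerate(sorted(values, reverse=True)):
        players[order[idx % len(order)]].append(v)
    return players

print(snake_distribute([10, 9, 8, 7, 7, 5, 5], 3))  # [[10, 5, 5], [9, 7], [8, 7]]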
Repeatedly give the available object with the largest value to the player who has the least total value of objects assigned to him.
This is a straight-forward implementation of Justin Peel's answer:
M = 3
players = [[] for i in xrange(M)]
values = [10,4,3,1,1,1]
values.sort()
values.reverse()
for v in values:
    lowest = sorted(players, key=lambda x: sum(x))[0]
    lowest.append(v)
print players
print [sum(p) for p in players]
I am a beginner with Python, but it seems to work okay. This example will print
[[10], [4, 1], [3, 1, 1]]
[10, 5, 5]
Note that the number of possible allocations is M^N, which here is 6^30 (roughly 2*10^23), not 30^6, so going through every possible allocation and picking the one that's fairest by whatever measure you define is only realistic for much smaller instances.
EDIT:
The purpose was to use the greedy solution, with a small improvement in the implementation, which is perhaps clearest in C#:
static List<List<int>> Dist(int n, IList<int> values)
{
    var result = new List<List<int>>();
    for (int i = 1; i <= n; i++)
        result.Add(new List<int>());
    var sortedValues = values.OrderByDescending(val => val); // sort the values once - O(M * log(M))
    foreach (int val in sortedValues)
    {
        // With the players kept sorted by sum, this step can be done in
        // O(log(n)) per value [M - size of sortedValues, n - size of result]
        var lowest = result.OrderBy(a => a.Sum()).First();
        lowest.Add(val);
    }
    return result;
}
Regarding this stage:
var lowest = result.OrderBy(a => a.Sum()).First();
The idea is that the list of players is always kept sorted by sum (in this code it is redone with OrderBy each time). Maintaining the order won't take more than O(log(n)) per value, because we only need to INSERT at most one item into an already-sorted list, which costs the same as a binary search.
Because we need to repeat this phase sortedValues.Length times, the whole algorithm runs in O(M * log(n)).
So, in words, it can be rephrased as:
Repeat the steps below until you run out of values:
1. Give the biggest remaining value to the player with the smallest sum.
2. Check whether this player still has the smallest sum.
3. If yes, go to step 1.
4. Otherwise, insert the player that just received a value back into the sorted players list.
Step 4 is the O(log(n)) step, as the list is always sorted.
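One way to get that O(log(n)) insertion in practice is a min-heap keyed on each player's current sum; here is a short Python sketch of the same greedy strategy (names are my own):

import heapq

def greedy_distribute(values, n_players):
    # heap of (current_sum, player_index): the root is always the poorest player
    heap = [(0, i) for i in range(n_players)]
    heapq.heapify(heap)
    result = [[] for _ in range(n_players)]
    for v in sorted(values, reverse=True):    # biggest value first
        total, i = heapq.heappop(heap)        # take the poorest player, O(log n)
        result[i].append(v)
        heapq.heappush(heap, (total + v, i))  # re-insert with the updated sum
    return result

print(greedy_distribute([10, 4, 3, 1, 1, 1], 3))  # [[10], [4, 1], [3, 1, 1]]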