There are K points on a circle that represent the locations of the treasures. N people want to share the treasures. You want to divide the treasure fairly among all of them, such that the difference between the person with the maximum value and the person with the minimum value is as small as possible.
Each person takes a contiguous set of points on the circle; that is, nobody can own segmented treasures.
All the treasures must be allocated, and each treasure can belong to only one person.
For example, if there are 4 treasures and 2 people as shown in the figure, then the optimal way of dividing is (6, 10) and (11, 3), with a difference of |16 - 14| = 2.
1 <= N <= 25
1 <= K <= 50
How do I approach solving this problem? I planned to calculate the mean of all the points and, for each person, keep adding treasures while their total stays below the mean. But, obviously, that will not work in all cases.
I'd be glad if someone could throw some light on this.
So say we fix x and y as the minimum and maximum values allowed per person.
I need to figure out if we can get a solution in these constraints.
For that I need to traverse the circle and create exactly N segments with sums between x and y.
This I can solve via dynamic programming: a[i][j][l] = 1 if I can split the elements between i and j into l segments whose sums are all between x and y (see above). To compute it we can evaluate a[i][j][l] = is_there_a_q_such_that(a[i][q - 1][l-1] == 1 and sum(q -> j) is between x and y).
To handle the circle, look for N-1 segments that cover enough of the elements so that the sum of the remaining elements is also between x and y.
So the naive solution is O(total_sum^2) to select x and y, times O(K^3) to iterate over i, j, l, another O(K) to find a q, and another O(K) to get the sum. That's a total of O(total_sum^2 * K^5), which is likely too slow.
We need to compute sums a lot, so let's precompute a partial-sums array: sums[w] = sum of the elements between position 0 and position w. To get the sum between q and j you then only need to compute sums[j] - sums[q-1]. This takes care of one O(K) factor.
Now, to compute a[i][j][l]: since the treasures are always positive, if a partial sum is too small we need to grow the interval, and if the sum is too high we need to shrink the interval. Since we fixed one side of the interval (at j), we can only move q. We can use binary search to find the closest and the furthest q that allow the sum to be between x and y. Let's call them low_q (the closest to j, lowest sum) and high_q (far from j, largest sum). If low_q < i then the interval is too small, so the value is 0. Otherwise we need to check whether there's a 1 between max(high_q, i) and low_q; the max makes sure we don't look outside the interval. To do the check we can again precompute partial sums that count how many 1s are in our range. We only need to do this once per level, so it is amortized O(1). So, if we did everything right, this will be O(K^3 log K).
We still have the total_sum^2 factor in front. Let's say we fix x. If we have a solution for a given y, we might also be able to find a smaller y that still has a solution; if we can't find a solution for a given y, then no smaller value will work either. So we can now do a binary search on y.
This is now O(total_sum * log(total_sum) * K^3 * log K).
Another optimization would be to not raise i while sum(0 -> i-1) > x.
You also don't need to check values of x > total_sum/N, since the minimum share can never exceed that ideal value. This should cancel out one factor in the complexity.
There might be other things that you can do, but I think this will be fast enough for your constraints.
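For reference, here is a minimal, unoptimized sketch of that feasibility check in Python (my function names, not from the original; it skips the binary-search and count-of-1s optimizations, assumes positive integer treasures, and handles the circle by trying every rotation):

from itertools import accumulate

def feasible_linear(vals, n, x, y):
    # Can vals be cut into exactly n contiguous segments, each with sum in [x, y]?
    k = len(vals)
    prefix = [0] + list(accumulate(vals))      # prefix[j] = sum(vals[:j])
    # dp[l][j] = True if vals[:j] splits into l valid segments
    dp = [[False] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = True
    for l in range(1, n + 1):
        for j in range(1, k + 1):
            for q in range(j):                 # last segment is vals[q:j]
                if x <= prefix[j] - prefix[q] <= y and dp[l - 1][q]:
                    dp[l][j] = True
                    break
    return dp[n][k]

def feasible_circle(vals, n, x, y):
    # A cut must fall somewhere, so some rotation reduces the circle to a line.
    return any(feasible_linear(vals[r:] + vals[:r], n, x, y)
               for r in range(len(vals)))

def smallest_y(vals, n, x):
    # Binary search on y for a fixed x, as described above.
    lo, hi, best = x, sum(vals), None
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible_circle(vals, n, x, mid):
            best, hi = mid, mid - 1
        else:
            lo = mid + 1
    return best

An outer loop over x (from 0 up to total_sum/N) then keeps the smallest y - x among the x values for which a solution exists.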
You can do brute force in O(n^k), or DP in O(k * n * MAXSUM^(n-1)).
dp[i][val1][val2]...[val_{n-1}] says whether it is possible to distribute the first i items so that the first person has val1, the second has val2, and so on. There are k * MAXSUM^(n-1) states and you need O(n) per step: you simply choose who takes the i-th item.
I don't think it's possible to solve it faster.
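To illustrate, here is a small runnable sketch of this DP in Python (my code, under this answer's relaxation where any person may take any item; states are tuples of the first n-1 people's totals, and the last person implicitly gets the rest):

def min_difference(items, n):
    states = {(0,) * (n - 1)}
    for v in items:
        nxt = set()
        for st in states:
            nxt.add(st)                    # the last person takes the item
            for p in range(n - 1):         # or one of the first n-1 does
                t = list(st)
                t[p] += v
                nxt.add(tuple(t))
        states = nxt
    total = sum(items)
    best = float('inf')
    for st in states:
        shares = st + (total - sum(st),)
        best = min(best, max(shares) - min(shares))
    return best

print(min_difference([6, 3, 11, 10], 2))   # -> 2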
No standard type of algorithm (greedy, divide and conquer, etc.) exists for this problem.
You would have to check each and every combination of (treasure, person) and pick the best answer. Once you have solved the problem using recursion, you can throw DP at it to optimize the solution.
The crux of the solution is:
Recurse through all the treasures
If the current treasure is not the last,
    set minimum difference to Infinity
    for each user
        assign the current treasure to the current user
        ans = recurse further by going to the next treasure
        update minimumDifference if necessary
else
    find the maximum and minimum amounts of treasure assigned
    and return the difference
Here is the JavaScript version of the answer.
I have commented it to explain the logic as well:
// value of the treasure
const K = [6, 3, 11, 10];
// number of users
const N = 2;
// Array which tracks the amount of treasure with each user
const U = new Array(N).fill(0);
// 2D array to save the whole solution
const bitset = [...new Array(N)].map(() => new Array(K.length).fill(0));
const solve = index => {
/**
* The base case:
* We are out of treasure.
* So far, the assigned treasures will be in U array
*/
if (index >= K.length) {
/**
* Take the maximum and minimum and return the difference along with the bitset
*/
const max = Math.max(...U);
const min = Math.min(...U);
const answer = { min: max - min, set: bitset };
return answer;
}
/**
* We have treasures to check
*/
let answer = { min: Infinity, set: undefined };
for (let i = 0; i < N; i++) {
// Let ith user take the treasure
U[i] += K[index];
bitset[i][index] = 1;
/**
* Let us recurse and see what will be the answer if ith user has treasure at `index`
* Note that ith user might also have other treasures for indices > `index`
*/
const nextAnswer = solve(index + 1);
/**
* Did we do better?
* Was the difference between the distributions of treasure reduced?
* If so, let us update the current answer
* If not, we can assign treasure at `index` to next user (i + 1) and see if we did any better or not
*/
if (nextAnswer.min <= answer.min) {
answer = JSON.parse(JSON.stringify(nextAnswer));
}
/**
* Had we done any better, the changes might already be recorded in the answer.
* Because we are going to try and assign this treasure to the next user,
* Let us remove it from the current user before iterating further
*/
U[i] -= K[index];
bitset[i][index] = 0;
}
return answer;
};
const ans = solve(0);
console.log("Difference: ", ans.min);
console.log("Treasure: [", K.join(", "), "]");
console.log();
ans.set.forEach((x, i) => console.log("User: ", i + 1, " [", x.join(", "), "]"));
Each problem at index i creates exactly N copies of itself and we have K indices in total, so the time complexity is O(N^K).
We can definitely do better by throwing memoization.
Here comes the tricky part:
If we have a distribution of treasure for one user, the minimum
difference of the distributions of treasure among users will be
the same.
In our case, bitset[i] represents the distribution for the ith user.
Thus, we can memoize the results for the bitset of the user.
Once you realize that, coding it is easy:
// value of the treasure
const K = [6, 3, 11, 10, 1];
// number of users
const N = 2;
// Array which tracks the amount of treasure with each user
const U = new Array(N).fill(0);
// 2D array to save the whole solution
const bitset = [...new Array(N)].map(() => [...new Array(K.length).fill(0)]);
const cache = {};
const solve = (index, userIndex) => {
/**
* Do we have cached answer?
*/
if (cache[bitset[userIndex]]) {
return cache[bitset[userIndex]];
}
/**
* The base case:
* We are out of treasure.
* So far, the assigned treasures will be in U array
*/
if (index >= K.length) {
/**
* Take the maximum and minimum and return the difference along with the bitset
*/
const max = Math.max(...U);
const min = Math.min(...U);
const answer = { min: max - min, set: bitset };
// cache the answer
cache[bitset[userIndex]] = answer;
return answer;
}
/**
* We have treasures to check
*/
let answer = { min: Infinity, set: undefined };
// Help us track the index of the user with optimal answer
let minIndex = undefined;
for (let i = 0; i < N; i++) {
// Let ith user take the treasure
U[i] += K[index];
bitset[i][index] = 1;
/**
* Let us recurse and see what will be the answer if ith user has treasure at `index`
* Note that ith user might also have other treasures for indices > `index`
*/
const nextAnswer = solve(index + 1, i);
/**
* Did we do better?
* Was the difference between the distributions of treasure reduced?
* If so, let us update the current answer
* If not, we can assign treasure at `index` to next user (i + 1) and see if we did any better or not
*/
if (nextAnswer.min <= answer.min) {
answer = JSON.parse(JSON.stringify(nextAnswer));
minIndex = i;
}
/**
* Had we done any better, the changes might already be recorded in the answer.
* Because we are going to try and assign this treasure to the next user,
* Let us remove it from the current user before iterating further
*/
U[i] -= K[index];
bitset[i][index] = 0;
}
cache[answer.set[minIndex]] = answer;
return answer;
};
const ans = solve(0);
console.log("Difference: ", ans.min);
console.log("Treasure: [", K.join(", "), "]");
console.log();
ans.set.forEach((x, i) => console.log("User: ", i + 1, " [", x.join(", "), "]"));
// console.log("Cache:\n", cache);
We can definitely improve the space used by not caching the whole bitset. Removing the bitset from the cache is trivial.
Consider that for each k, we can pair a sum growing from A[i] to the left (the sum A[i-j..i]) with all available intervals recorded for f(k-1, i-j-1) and update them: for each interval (low, high), if the sum is greater than high then new_interval = (low, sum); if the sum is lower than low then new_interval = (sum, high); otherwise the interval stays the same. For example,
i: 0 1 2 3 4 5
A: [5 1 1 1 3 2]
k = 3
i = 3, j = 0
The ordered intervals available for f(3-1, 3-0-1) = f(2,2) are:
(2,5), (1,6) // These were the sums, (A[1..2], A[0]) and (A[2], A[0..1])
Sum = A[3..3] = 1
Update intervals: (2,5) -> (1,5)
(1,6) -> (1,6) no change
Now, we can make this iteration much more efficient by recognizing and pruning intervals during the previous k round.
Watch:
A: [5 1 1 1 3 2]
K = 1:
N = 0..5; Intervals: (5,5), (6,6), (7,7), (8,8), (11,11), (13,13)
K = 2:
N = 0: Intervals: N/A
N = 1: Intervals: (1,5)
N = 2: (1,6), (2,5)
Prune: remove (1,6) since any sum <= 1 would be better paired with (2,5)
and any sum >= 6 would be better paired with (2,5)
N = 3: (1,7), (2,6), (3,5)
Prune: remove (2,6) and (1,7)
N = 4: (3,8), (4,7), (5,6), (5,6)
Prune: remove (3,8) and (4,7)
N = 5: (2,11), (5,8), (6,7)
Prune: remove (2,11) and (5,8)
For k = 2, we are now left with the following pruned record:
{
k: 2,
n: {
1: (1,5),
2: (2,5),
3: (3,5),
4: (5,6),
5: (6,7)
}
}
We've cut down the iteration of k = 3 from a list of n choose 2 possible splits to n relevant splits!
The general algorithm applied to k = 3:
for k' = 1 to k
for sum A[i-j..i], for i <- [k'-1..n], j <- [0..i-k'+1]:
for interval in record[k'-1][i-j-1]: // records are for [k'][n']
update interval
prune intervals in k'
k' = 3
i = 2
sum = 1, record[2][1] = (1,5) -> no change
i = 3
// sums are accumulating right to left starting from A[i]
sum = 1, record[2][2] = (2,5) -> (1,5)
sum = 2, record[2][1] = (1,5) -> no change
i = 4
sum = 3, record[2][3] = (3,5) -> no change
sum = 4, record[2][2] = (2,5) -> no change
sum = 5, record[2][1] = (1,5) -> no change
i = 5
sum = 2, record[2][4] = (5,6) -> (2,6)
sum = 5, record[2][3] = (3,5) -> no change
sum = 6, record[2][2] = (2,5) -> (2,6)
sum = 7, record[2][1] = (1,5) -> (1,7)
The answer is 5 paired with record[2][3] = (3,5), yielding the updated interval, (3,5). I'll leave the pruning logic for the reader to work out. If we wanted to continue, here's the pruned list for k = 3
{
k: 3
n: {
2: (1,5),
3: (1,5),
4: (3,5),
5: (3,5)
}
}
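For reference, here is a hedged, runnable Python sketch of the record-and-prune idea for the linear version of the problem (my naming; it uses a simple containment-based prune rather than the exact pruning logic above):

def min_difference(A, k):
    n = len(A)
    # record[kk][i] = set of (low, high) candidates: low/high are the min/max
    # segment sums over some split of A[0..i] into kk segments
    record = [[set() for _ in range(n)] for _ in range(k + 1)]
    for i in range(n):
        s = sum(A[:i + 1])
        record[1][i] = {(s, s)}
    for kk in range(2, k + 1):
        for i in range(kk - 1, n):
            cands = set()
            s = 0
            for j in range(i, kk - 2, -1):      # last segment is A[j..i]
                s += A[j]
                for low, high in record[kk - 1][j - 1]:
                    cands.add((min(low, s), max(high, s)))
            # prune: drop any interval that strictly contains another candidate
            record[kk][i] = {c for c in cands
                             if not any(c != d and c[0] <= d[0] and d[1] <= c[1]
                                        for d in cands)}
    return min(high - low for low, high in record[k][n - 1])

print(min_difference([5, 1, 1, 1, 3, 2], 3))   # -> 2, matching (3,5) above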
I'm stuck on this problem.
Given an array of numbers, at each step we can pick a number N in the array and add N to another number in the array. We continue this process until all numbers in the array equal zero. What is the minimum number of steps required? (We are guaranteed that initially the sum of the numbers in the array is zero.)
Example: -20,-15,1,3,7,9,15
Step 1: pick -15 and sum with 15 -> -20,0,1,3,7,9,0
Step 2: pick 9 and sum with -20 -> -11,0,1,3,7,0,0
Step 3: pick 7 and sum with -11 -> -4,0,1,3,0,0,0
Step 4: pick 3 and sum with -4 -> -1,0,1,0,0,0,0
Step 5: pick 1 and sum with -1 -> 0,0,0,0,0,0,0
So the answer of this example is 5.
I've tried a greedy algorithm that works like this:
at each step, pick the maximum and the minimum numbers currently available in the array and sum those two numbers, until all numbers in the array equal zero.
But it doesn't work and gives me the wrong answer. Can anyone help me solve this problem?
#include <bits/stdc++.h>
using namespace std;
int a[] = {-20,-15,1,3,7,9,15};
int bruteforce() {
    bool isEqualToZero = 1;
    for (int i = 0; i < (sizeof(a) / sizeof(int)); i++)
        if (a[i] != 0) {
            isEqualToZero = 0;
            break;
        }
    if (isEqualToZero)
        return 0;
    int tmp = 0, m = 1e9;
    for (int i = 0; i < (sizeof(a) / sizeof(int)); i++) {
        for (int j = i + 1; j < (sizeof(a) / sizeof(int)); j++) {
            if (a[i] * a[j] >= 0) continue;
            tmp = a[j];
            a[i] += a[j];
            a[j] = 0;
            m = min(m, bruteforce());
            a[j] = tmp;
            a[i] -= tmp;
        }
    }
    return m + 1;
}
int main() {
    cout << bruteforce();
}
This is the brute force approach that I've written for this problem. Is there any algorithm to solve this problem faster?
This has an NP-complete feel, but the following does an A* search through all possible normalized partial sums on the way to the all-zero state. This solves your problem, and it also means that you don't get into an infinite loop if the sum is not zero.
If greedy works, this will explore the greedy path first, verify that you can't do better, and return fairly quickly. If greedy doesn't work, this may... take a lot longer.
Implementation in Python because that is easy for me. Translation into another language is an exercise for the reader.
import heapq
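# Note: min_steps_remaining is used below but was not defined in the original
# post. A reconstructed admissible heuristic (an assumption, not the author's
# code): each step zeroes at most two entries, so at least ceil(nonzero / 2)
# steps remain.
def min_steps_remaining(numbers):
    nonzero = sum(1 for x in numbers if x != 0)
    return (nonzero + 1) // 2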
def find_minimal_steps(numbers):
    normalized = tuple(sorted(numbers))
    seen = set([normalized])
    todo = [(min_steps_remaining(normalized), 0, normalized, None)]
    while todo[0][0] < len(numbers):
        step_limit, steps_taken, prev, path = heapq.heappop(todo)
        steps_taken = -1 * steps_taken  # We store the negative for sort order
        if min_steps_remaining(prev) == 0:
            decoded_path = []
            while path is not None:
                decoded_path.append((path[0], path[1]))
                path = path[2]
            return steps_taken, list(reversed(decoded_path))
        prev_numbers = list(prev)
        for i in range(len(prev_numbers)):
            for j in range(len(prev_numbers)):
                if i != j:
                    # Track what they were
                    num_i = prev_numbers[i]
                    num_j = prev_numbers[j]
                    # Sum them
                    prev_numbers[i] += num_j
                    prev_numbers[j] = 0
                    normalized = tuple(sorted(prev_numbers))
                    if normalized not in seen:
                        seen.add(normalized)
                        heapq.heappush(todo, (
                            min_steps_remaining(normalized) + steps_taken + 1,
                            -steps_taken - 1,  # More steps is smaller, is looked at first
                            normalized,
                            (num_i, num_j, path)))
                    # Set them back.
                    prev_numbers[i] = num_i
                    prev_numbers[j] = num_j

print(find_minimal_steps([-20, -15, 1, 3, 7, 9, 15]))
For fun I also added a linked-list path tracker, so it doesn't just tell you the minimal number of steps, but also which ones it found. In this case its steps were (-15, 15), (7, 9), (3, 16), (1, 19), (-20, 20), meaning add 15 to -15, 9 to 7, 16 to 3, 19 to 1, and 20 to -20.
Here 5 different sets are shown. S1 contains 1. The next set, S2, is calculated from S1 using the following logic:
suppose Sn contains {a1, a2, a3, ..., an} and the middle element of Sn is b.
Then the set Sn+1 contains the elements {b, b+a1, b+a2, ..., b+an}, (n+1) elements in total. If a set contains an even number of elements, the middle element is the (n/2 + 1)-th one.
Now, if n is given as input, we have to display all the elements of set Sn.
Clearly it is possible to solve the problem in O(n) time.
We can compute each middle element as (2^(n-1) - middle element of the previous set + 1), with S1 = {1} as the base case. In this way, in O(n) time, we get all the middle elements up to the (n-1)-th set. The middle element of the (n-1)-th set is the first element of the n-th set, and (that element + the middle element of the (n-2)-th set) is the second element of the n-th set. Continuing this way we get all the elements of the n-th set.
So it needs O(n) time.
Here is the complete Java code I have written:
import java.util.Scanner;

public class SpecialSubset {
    private static Scanner inp;

    public static void main(String[] args) {
        inp = new Scanner(System.in);
        int N = inp.nextInt();
        int[] setarr = new int[N];
        int[] midarr = new int[N];
        midarr[0] = 1;
        for (int i = 1; i < N; i++) {
            midarr[i] = (int) (Math.pow(2, i) - midarr[i - 1] + 1);
        }
        setarr[0] = midarr[N - 2];
        System.out.print(setarr[0]);
        System.out.print(" ");
        for (int i = 1, j = N - 3; i < N - 1; i++, j--) {
            setarr[i] = setarr[i - 1] + midarr[j];
            System.out.print(setarr[i]);
            System.out.print(" ");
        }
        setarr[N - 1] = setarr[N - 2] + 1;
        System.out.print(setarr[N - 1]);
    }
}
Here is the link to the question:
https://www.hackerrank.com/contests/projecteuler/challenges/euler103
Is it possible to solve the problem in less than O(n) time?
@Paul Boddington has given an answer that relies on the sequence of first numbers of these sets being the Narayana-Zidek-Capell numbers, and has checked it for some small-ish values. However, no proof of the conjecture was given. This answer is in addition to the one above, to make it complete.
Notation:
Let a[i][n] be the i-th number in the n-th set.
I'll also define b[n] as the first number of the (n-1)-th set. This is the sequence the proof is about; the shift of one aligns b with the first and second 1 in the Narayana-Zidek-Capell sequence.
Generating rules:
The problem statement didn't clarify what the "middle number" is for an even-length set (a list really, but whatever), but it seems they meant the "center right" one. I refer to the rule numbers below.
(1) a[1][1] = 1
(2) a[1][n] = a[ceil(n/2)][n-1]
(3) a[i][n] = a[1][n] + a[i-1][n-1]
(4) b[n] = a[1][n-1]
Proof:
The first step is to make a slightly more involved formula for a[i][n] by unwinding recursion (3) a bit more and substituting b via (4):
(5) a[i][n] = sum of a[1][n-j] for j in [0 .. i-1] = sum of b[n-j+1] for j in [0 .. i-1]
Next, we consider two cases for b[n]: one where n is even, one where n is odd.
Even case:
b[2n+2] = a[1][2n+1]                                  by (4)
        = a[ceil((2n+1)/2)][2n] = a[n+1][2n]          by (2)
        = a[1][2n] + a[n][2n-1]                       by (3)
        = b[2n+1] + a[n][2n-1]                        by (4)
        = b[2n+1] + a[1][2n]                          by (2), since a[1][2n] = a[n][2n-1]
        = 2 * b[2n+1]                                 by (4)
Odd case:
b[2n+1] = a[1][2n]                                    by (4)
        = a[ceil(2n/2)][2n-1] = a[n][2n-1]            by (2)
        = a[1][2n-1] + a[n-1][2n-2]                   by (3)
        = 2 * b[2n] + (a[n-1][2n-2] - a[1][2n-1])     by (4), adding and subtracting b[2n] = a[1][2n-1]
        = 2 * b[2n] + (a[n-1][2n-2] - a[n][2n-2])     by (2)
        = 2 * b[2n] - b[n]                            by (5), since the two sums differ only in the last term b[n]
These rules are exactly the sequence definition, and they provide a way to generate the n-th set in linear time (as opposed to quadratic when generating each set in turn).
The smallest numbers in the sets appear to be the Narayana-Zidek-Capell numbers
1, 1, 2, 3, 6, 11, 22, ...
The other numbers are obtained from the first number by repeatedly adding these numbers in reverse.
For example,
S6 = {11, 17, 20, 22, 23, 24}
+6 +3 +2 +1 +1
Using a recurrence for the Narayana-Zidek-Capell sequence found in that link, I have managed to produce a solution for this problem that runs in O(n) time. Here is a solution in Java. It only works for n <= 32 due to int overflow, but it could be written using BigInteger to work for higher values.
static Set<Integer> set(int n) {
int[] a = new int[n + 2];
for (int i = 1; i < n + 2; i++) {
if (i <= 2)
a[i] = 1;
else if (i % 2 == 0)
a[i] = 2 * a[i - 1];
else
a[i] = 2 * a[i - 1] - a[i / 2];
}
Set<Integer> set = new HashSet<>();
int sum = 0;
for (int i = n + 1; i >= 2; i--) {
sum += a[i];
set.add(sum);
}
return set;
}
I'm not able to justify right now why this is the same as the set in the question, but I'm working on it. However I have checked for all n <= 32 that this algorithm gives the same set as the "obvious" algorithm, so I'm reasonably sure it's correct.
I was given this interview question, and I totally blanked out. How would you guys solve this:
Go from the start of an array to the end in a way that you minimize the sum of elements that you land on.
You can move to the next element, i.e go from index 1 to index 2.
Or you can hop one element over. i.e go from index 1 to index 3.
Assuming that you only move from left to right, and you want to find a way to get from index 0 to index n - 1 of an array of n elements, so that the sum of the path you take is minimum. From index i, you can only move ahead to index i + 1 or index i + 2.
Observe that the minimum path sum to get from index 0 to index k is the minimum of the path sum to index k - 1 and the path sum to index k - 2, plus A[k]. There is simply no other path to take.
Therefore, we can have a dynamic programming solution:
DP[0] = A[0]
DP[1] = A[0] + A[1]
DP[k] = min(DP[k - 1], DP[k - 2]) + A[k]
A is the array of elements.
The DP array stores the minimum sum needed to land on the element at index i coming from index 0.
The result is min(DP[n - 2], DP[n - 1]), since from index n - 2 you can hop past the end (this matches the code below).
Java:
static int getMinSum(int elements[])
{
if (elements == null || elements.length == 0)
{
throw new IllegalArgumentException("No elements");
}
if (elements.length == 1)
{
return elements[0];
}
int minSum[] = new int[elements.length];
minSum[0] = elements[0];
minSum[1] = elements[0] + elements[1];
for (int i = 2; i < elements.length; i++)
{
minSum[i] = Math.min(minSum[i - 1] + elements[i], minSum[i - 2] + elements[i]);
}
return Math.min(minSum[elements.length - 2], minSum[elements.length - 1]);
}
Input:
int elements[] = { 1, -2, 3 };
System.out.println(getMinSum(elements));
Output:
-1
Case description:
We start from the index 0. We must take 1. Now we can go to index 1 or 2. Since -2 is attractive, we choose it. Now we can go to index 2 or hop it. Better hop and our sum is minimal 1 + (-2) = -1.
Another examples (pseudocode):
getMinSum({1, 1, 10, 1}) == 3
getMinSum({1, 1, 10, 100, 1000}) == 102
Algorithm:
O(n) complexity. Dynamic programming. We go from left to right filling up minSum array. Invariant: minSum[i] = min(minSum[i - 1] + elements[i] /* move by 1 */ , minSum[i - 2] + elements[i] /* hop */ ).
This seems like the perfect place for a dynamic programming solution, keeping track of two values, Even and Odd.
We will take Even to mean we used the previous value, and Odd to mean we haven't.
int Even = 0; int Odd = 0;
int length = arr.length;
Start at the back. We can either take the number or not. Therefore:
Even = arr[length - 1];
Odd = 0;
And now we move to the next element with two cases. Either we were even, in which case we have the choice to take the element or skip it. Or we were odd and had to take the element.
int current = arr[length - 2];
int prevEven = Even;
Even = Min(Even + current, Odd + current);
Odd = prevEven; // skipping `current` forces having taken the element after it
We can make a loop out of this and achieve an O(n) solution!
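For concreteness, here is a minimal Python sketch of this rolling two-variable idea (my translation, assuming the same movement rules as the answer above, where hopping out from index n - 2 also finishes the path):

def min_path_sum(arr):
    # best1 = best cost starting at i + 1, best2 = best cost starting at i + 2
    # (best2 starts at 0: hopping out past the end is allowed)
    n = len(arr)
    best1, best2 = arr[n - 1], 0
    for i in range(n - 2, -1, -1):
        best1, best2 = arr[i] + min(best1, best2), best1
    return best1

print(min_path_sum([1, -2, 3]))             # -> -1
print(min_path_sum([1, 1, 10, 1]))          # -> 3
print(min_path_sum([1, 1, 10, 100, 1000]))  # -> 102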
There is a matrix, m×n. Several groups of people are located at certain spots. In the following example there are three groups, and the number 4 indicates that there are four people in that group. Now we want to find a meeting point in the matrix such that the total cost of all groups moving to that point is minimal. As for how to compute the cost of moving one group to another point, please see the following example.
Group1: (0, 1), 4
Group2: (1, 3), 3
Group3: (2, 0), 5
. 4 . .
. . . 3
5 . . .
If all of these three groups move to (1, 1), the cost is:
4*((1-0)+(1-1)) + 5*((2-1)+(1-0)) + 3*((1-1)+(3-1)) = 4 + 10 + 6 = 20
My idea is:
Firstly, this two-dimensional problem can be reduced to two one-dimensional problems.
In the one-dimensional problem, I can prove that the best spot must be one of the group spots.
In this way, I can give an O(G^2) algorithm (G is the number of groups).
Use iterator's example for illustration:
{(-100,0,100),(100,0,100),(0,1,1)},(x,y,population)
for x, {(-100,100),(100,100),(0,1)}, 0 is the best.
for y, {(0,100),(0,100),(1,1)}, 0 is the best.
So it's (0, 0)
Is there any better solution for this problem?
I like the idea of noticing that the objective function can be decomposed into the sum of two one-dimensional problems. The remaining problems look a lot like the weighted median to me (note that it "solves the following optimization problem" in http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/R.basic/html/weighted.median.html, or consider what happens to the objective function as you move away from the weighted median).
The URL above seems to say the weighted median takes time n log n, which I guess means that you could attain their claim by sorting the data and then doing a linear pass to work out the weighted median. The numbers you have to sort are in the range [0, m] and [0, n] so you could in theory do better if m and n are small, or - of course - if you are given the data pre-sorted.
Come to think of it, I don't see why you shouldn't be able to find the weighted median with a linear time randomized algorithm similar to that used to find the median (http://en.wikibooks.org/wiki/Algorithms/Randomization#find-median) - repeatedly pick a random element, use it to partition the items remaining, and work out which half the weighted median should be in. That gives you expected linear time.
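To make the sort-then-scan weighted median concrete, here is a small Python sketch (my function names, assuming the input comes as (coordinate, weight) pairs):

def weighted_median(points):
    # Sort by coordinate, then walk until the accumulated weight
    # reaches half of the total weight.
    points = sorted(points)
    total = sum(w for _, w in points)
    acc = 0
    for coord, w in points:
        acc += w
        if 2 * acc >= total:
            return coord

# The example from the question, x and y handled independently:
groups = [(-100, 0, 100), (100, 0, 100), (0, 1, 1)]     # (x, y, population)
print(weighted_median([(x, p) for x, y, p in groups]))  # -> 0
print(weighted_median([(y, p) for x, y, p in groups]))  # -> 0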
I think this can be solved in O(max(n, m)) time and O(max(n, m)) space.
We have to find the median of all the x coordinates and the median of all the y coordinates over the k points, and the answer will be (x_median, y_median).
The assumption is that this function takes the following inputs:
the total number of points: int k = 4 + 3 + 5 = 12;
An array of coordinates:
struct coord_t c[12] = {(0,1),(0,1),(0,1), (0,1), (1,3), (1,3),(1,3),(2,0),(2,0),(2,0),(2,0),(2,0)};
int size = n > m ? n : m;
Let the input of the coordinates be an array of coordinates. coord_t c[k]
struct coord_t {
int x;
int y;
};
1. My idea is to create an array of size = n > m ? n : m.
2. Bucket the x coordinates and count the points:
int array[size] = {0}; // initialize all the elements in the array to zero
int count = 0;
for (i = 0; i < k; i++)
{
    array[c[i].x] += 1;
    count++;
}
int tempCount = 0;
for (i = 0; i < size; i++)
{
    if (array[i] != 0)
    {
        tempCount += array[i];
    }
    if (tempCount >= count / 2)
    {
        break;
    }
}
int x_median = i;
// Similarly with the y coordinate (count is already the total number of points):
int array2[size] = {0}; // initialize all the elements in the array to zero
for (i = 0; i < k; i++)
{
    array2[c[i].y] += 1;
}
tempCount = 0;
for (i = 0; i < size; i++)
{
    if (array2[i] != 0)
    {
        tempCount += array2[i];
    }
    if (tempCount >= count / 2)
    {
        break;
    }
}
int y_median = i;
coord_t temp;
temp.x = x_median;
temp.y= y_median;
return temp;
Sample working code for an MxM matrix with k points:
/*
Problem:
Given an MxM grid and N people placed at random positions on the grid,
find the optimal meeting point of all the people.

Answer:
Find the median of all the x coordinates of the positions of the people.
Find the median of all the y coordinates of the positions of the people.
*/
#include<stdio.h>
#include<stdlib.h>
typedef struct coord_struct {
int x;
int y;
}coord_struct;
typedef struct distance {
int count;
}distance;
coord_struct toFindTheOptimalDistance (int N, int M, coord_struct input[])
{
coord_struct z ;
z.x=0;
z.y=0;
int i,j;
distance * array_dist;
array_dist = (distance*)(malloc(sizeof(distance)*M));
for(i=0;i<M;i++)
{
array_dist[i].count =0;
}
for(i=0;i<N;i++)
{
array_dist[input[i].x].count +=1;
printf("%d and %d\n",input[i].x,array_dist[input[i].x].count);
}
j=0;
for(i=0;i<=N/2;)
{
printf("%d\n",i);
if(array_dist[j].count !=0)
i+=array_dist[j].count;
j++;
}
printf("x coordinate = %d",j-1);
int x= j-1;
for(i=0;i<M;i++)
array_dist[i].count =0;
for(i=0;i<N;i++)
{
array_dist[input[i].y].count +=1;
}
j=0;
for(i=0;i<=N/2;) /* <=, to match the x scan above */
{
if(array_dist[j].count !=0)
i+=array_dist[j].count;
j++;
}
int y =j-1;
printf("y coordinate = %d",j-1);
z.x=x;
z.y =y;
return z;
}
int main()
{
coord_struct input[5];
input[0].x =1;
input[0].y =2;
input[1].x =1;
input[1].y =2;
input[2].x =4;
input[2].y =1;
input[3].x = 5;
input[3].y = 2;
input[4].x = 5;
input[4].y = 2;
int size = 6; /* grid dimension; must be larger than any coordinate used */
coord_struct x = toFindTheOptimalDistance(5,size,input);
return 0;
}
Your algorithm is fine: divide the problem into two one-dimensional problems; the time complexity is O(n log n).
You only need to split every group into single people, so every move left, right, up or down costs 1 per person. We only need to find where the ((n + 1) / 2)-th person stands, for the row and the column respectively.
Consider your sample. {(-100,0,100),(100,0,100),(0,1,1)}.
Let's take just the x coordinates. It's {(-100,100),(100,100),(0,1)}, which means 100 people stand at -100, 100 people stand at 100, and 1 person stands at 0.
Sort it by x, and it's {(-100,100),(0,1),(100,100)}. There are 201 people in total, so we only need to pick the location where the 101st person stands. That's 0, and that's the answer for the row.
The column number uses the same algorithm: {(0,100),(0,100),(1,1)}, already sorted. The 101st person is at 0, so the answer for the column is also 0.
The answer is (0,0).
I can think of an O(n) solution for the one-dimensional problem, which in turn means you can solve the original problem in O(n + m + G).
Suppose people are standing like this: a_0, a_1, ..., a_{n-1}, with a_0 people at spot 0, a_1 at spot 1, and so on. Then the solution in pseudocode is
cur_result = sum(i*a_i, i = 1..n-1)
cur_r = sum(a_i, i = 1..n-1)
cur_l = a_0
for i = 1:n-1
cur_result = cur_result - cur_r + cur_l
cur_r = cur_r - a_i
cur_l = cur_l + a_i
end
You need to find the point where cur_result is minimal.
So you need O(n) + O(m) for solving the 1d problems plus O(G) to build them, meaning the total complexity is O(n + m + G).
Alternatively, you can solve the 1d problem in O(G log G) (or O(G) if the data is sorted) using the same idea. Choose between them based on the expected number of groups.
You can solve this in O(G log G) time by reducing it to two one-dimensional problems, as you mentioned.
As for how to solve it in one dimension: just sort the groups, then go through them one by one and calculate the cost of moving to each point. This calculation can be done in O(1) time per point.
You can also avoid the log G component if your x and y coordinates are small enough to use bucket/radix sort.
Inspired by kilotaras's idea, it seems that there is an O(G) solution for this problem.
Since everyone agrees that the two-dimensional problem can be reduced to two one-dimensional problems, I will not repeat that. I'll just focus on how to solve the one-dimensional problem in O(G).
Suppose people are standing like this: a[0], a[1], ..., a[n-1], with a[i] people standing at spot i. There are G spots that have people (G <= n). Assume these G spots are g[1], g[2], ..., g[G], where each g[i] is in [0, ..., n-1]. Without loss of generality, we can also assume that g[1] < g[2] < ... < g[G].
It's not hard to prove that the optimal spot must be one of these G spots; I will skip the proof here and leave it as an exercise if you are interested.
Given this observation, we can just compute the cost of moving to the spot of every group and then choose the minimal one. There is an obvious O(G^2) algorithm to do this, but using kilotaras's idea we can do it in O(G) (no sorting).
cost[1] = sum((g[i]-g[1])*a[g[i]], i = 2,...,G) // the cost of moving everyone to the
spot of the first group. This step is O(G).
cur_r = sum(a[g[i]], i = 2,...,G) // how many people are on the right side of the
first spot, i.e. at the second group or beyond. This step is O(G).
cur_l = a[g[1]] // how many people are on the left side of the second group, not
including the second group.
for i = 2:G
gap = g[i] - g[i-1];
cost[i] = cost[i-1] - cur_r*gap + cur_l*gap;
if i != G
cur_r = cur_r - a[g[i]];
cur_l = cur_l + a[g[i]];
end
end
The minimum of cost[i] is the answer.
Let's use the example 5 1 0 3 to illustrate the algorithm.
In this example,
n = 4, G = 3.
g[1] = 0, g[2] = 1, g[3] = 3.
a[0] = 5, a[1] = 1, a[2] = 0, a[3] = 3.
(1) cost[1] = 1*1+3*3 = 10, cur_r = 4, cur_l = 5.
(2) cost[2] = 10 - 4*1 + 5*1 = 11, gap = g[2] - g[1] = 1, cur_r = 4 - a[g[2]] = 3, cur_l = 6.
(3) cost[3] = 11 - 3*2 + 6*2 = 17, gap = g[3] - g[2] = 2.
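For reference, here is a runnable Python sketch of this scan (my naming, not the poster's; it assumes the groups arrive as (spot, people) pairs already sorted by spot):

def best_meeting_cost_1d(groups):
    g = [s for s, _ in groups]     # spots
    a = [p for _, p in groups]     # people per spot
    G = len(groups)
    cost = sum((g[i] - g[0]) * a[i] for i in range(1, G))  # move everyone to g[0]
    cur_r = sum(a[1:])   # people strictly to the right of the current spot
    cur_l = a[0]         # people at or to the left of the current spot
    best = cost
    for i in range(1, G):
        gap = g[i] - g[i - 1]
        cost = cost - cur_r * gap + cur_l * gap
        best = min(best, cost)
        cur_r -= a[i]
        cur_l += a[i]
    return best

print(best_meeting_cost_1d([(0, 5), (1, 1), (3, 3)]))   # -> 10, matching cost[1] above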
I was asked this question in an interview with Google a couple of weeks ago. I didn't quite get the answer, and I was wondering if anyone here could help me out.
You have an array with n elements. The elements are either 0 or 1.
You want to split the array into k contiguous subarrays. The size of each subarray can vary between ceil(n/2k) and floor(3n/2k). You can assume that k << n.
After you split the array into k subarrays. One element of each subarray will be randomly selected.
Devise an algorithm for maximizing the sum of the randomly selected elements from the k subarrays.
Basically, this means that we want to split the array in such a way that the sum of the expected values of the elements selected from the subarrays is maximum.
You can assume that n is a power of 2.
Example:
Array: [0,0,1,1,0,0,1,1,0,1,1,0]
n = 12
k = 3
Size of subarrays can be: 2,3,4,5,6
Possible subarrays [0,0,1] [1,0,0,1] [1,0,1,1,0]
Expected Value of the sum of the elements randomly selected from the subarrays: 1/3 + 2/4 + 3/5 = 43/30 ~ 1.4333333
Optimal split: [0,0,1,1,0,0][1,1][0,1,1,0]
Expected value of optimal split: 1/3 + 1 + 1/2 = 11/6 ~ 1.83333333
I think we can solve this problem using dynamic programming.
Basically, we have:
f(i,j) is defined as the maximum sum of all expected values chosen from an array of size i and split into j subarrays. Therefore the solution should be f(n,k).
The recursive equation is:
f(i,j) = max over x of [ f(i-x, j-1) + sum(i-x+1, i)/x ], where ceil(n/(2k)) <= x <= floor(3n/(2k))
I don't know if this is still an open question or not, but it seems like the OP has managed to add enough clarifications that this should be straightforward to solve. At any rate, if I am understanding what you are saying this seems like a fair thing to ask in an interview environment for a software development position.
Here is the basic O(n^2 * k) solution, which should be adequate for small k (as the interviewer specified):
from math import ceil, floor

def best_val(arr, K):
    n = len(arr)
    psum = [0.0]
    for x in arr:
        psum.append(psum[-1] + x)
    tab = [-100000 for i in range(n)]
    tab.append(0)
    for k in range(K):
        for s in range(n - (k + 1) * ceil(n / (2 * K))):
            terms = range(s + ceil(n / (2 * K)),
                          min(s + floor((3 * n) / (2 * K)) + 1, n + 1))
            tab[s] = max([(psum[t] - psum[s]) / (t - s) + tab[t] for t in terms])
    return tab[0]
I used the math ceil/floor functions, but you basically get the idea. The only 'tricks' in this version are that it does windowing to reduce the memory overhead to just O(n) instead of O(n * k), and that it precalculates the partial sums to make computing the expected value of a box a constant-time operation (thus saving a factor of O(n) in the inner loop).
I don't know if anyone is still interested in seeing the solution for this problem. I just stumbled upon this question half an hour ago and thought of posting my solution (Java). The complexity of this is O(n*K^log10). The proof is a little convoluted, so I would rather provide runtime numbers:
n k time(ms)
48 4 25
48 8 265
24 4 20
24 8 33
96 4 51
192 4 143
192 8 343919
The solution is the same old recursive one: given an array, choose the first partition of size ceil(n/2k) and find the best solution recursively for the rest with number of partitions = k - 1; then take a first partition of size ceil(n/2k) + 1, and so on.
Code:
import java.util.Date;

public class PartitionOptimization {
public static void main(String[] args) {
PartitionOptimization p = new PartitionOptimization();
int[] input = { 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0};
int splitNum = 3;
int lowerLim = (int) Math.ceil(input.length / (2.0 * splitNum));
int upperLim = (int) Math.floor((3.0 * input.length) / (2.0 * splitNum));
System.out.println(input.length + " " + lowerLim + " " + upperLim + " " +
splitNum);
Date currDate = new Date();
System.out.println(currDate);
System.out.println(p.getMaxPartExpt(input, lowerLim, upperLim,
splitNum, 0));
System.out.println(new Date().getTime() - currDate.getTime());
}
public double getMaxPartExpt(int[] input, int lowerLim, int upperLim,
int splitNum, int startIndex) {
if (splitNum <= 1 && startIndex<=(input.length -lowerLim+1)){
double expt = findExpectation(input, startIndex, input.length-1);
return expt;
}
if (!((input.length - startIndex) / lowerLim >= splitNum))
return -1;
double maxExpt = 0;
double curMax = 0;
int bestI=0;
for (int i = startIndex + lowerLim - 1; i < Math.min(startIndex
+ upperLim, input.length); i++) {
double curExpect = findExpectation(input, startIndex, i);
double splitExpect = getMaxPartExpt(input, lowerLim, upperLim,
splitNum - 1, i + 1);
if (splitExpect>=0 && (curExpect + splitExpect > maxExpt)){
bestI = i;
curMax = curExpect;
maxExpt = curExpect + splitExpect;
}
}
return maxExpt;
}
public double findExpectation(int[] input, int startIndex, int endIndex) {
double expectation = 0;
for (int i = startIndex; i <= endIndex; i++) {
expectation = expectation + input[i];
}
expectation = (expectation / (endIndex - startIndex + 1));
return expectation;
}
}
Not sure I understand; the algorithm is to split the array into groups, right? The maximum value the sum can have is the number of ones, so split the array into n groups of one element each and the sum will be the maximum value possible. But it must be something else; I did not understand the problem, since that seems too silly.
I think this can be solved with dynamic programming. At each possible split location, get the maximum sum if you split at that location and if you don't split at that point. A recursive function and a table to store history might be useful.
sum_i = max{ (NumOnesNewPart / NumZerosNewPart) * sum(NewPart) + sum(A_{i+1}, A_end),
             sum(A_0, A_{i+1}) + sum(A_{i+1}, A_end) }
This might lead to something...
I think it's a bad interview question, but it is also an easy problem to solve.
Every integer contributes to the expected value with weight 1/s where s is the size of the set where it has been placed. Therefore, if you guess the sizes of the sets in your partition, you just need to fill the sets with ones starting from the smallest set, and then fill the remaining largest set with zeroes.
You can easily see then that if you have a partition, filled as above, where the sizes of the sets are S_1, ..., S_k and you do a transformation where you remove one item from set S_i and move it to set S_i+1, you have the following cases:
Both S_i and S_i+1 were filled with ones; then the expected value does not change
Both of them were filled with zeroes; then the expected value does not change
S_i contained both 1's and 0's and S_i+1 contains only zeroes; moving 0 to S_i+1 increases the expected value because the expected value of S_i increases
S_i contained 1's and S_i+1 contains both 1's and 0's; moving 1 to S_i+1 increases the expected value because the expected value of S_i+1 increases and S_i remains intact
In all these cases, you can shift an element from S_i to S_i+1 while maintaining the filling rule of placing 1's in the smallest sets, without ever decreasing the expected value. This leads to the simple algorithm:
Create a partitioning where there is a maximal number of maximum-size arrays and maximal number of minimum-size arrays
Fill the arrays starting from smallest one with 1's
Fill the remaining slots with 0's
How about a recursive function:
int BestValue(Array A, int numSplits)
// Returns the best value that would be obtained by splitting
// into numSplits partitions.
This in turn uses a helper:
// The additional argument is an array of the valid split sizes which
// is the same for each call.
int BestValueHelper(Array A, int numSplits, Array splitSizes)
{
    int result = 0;
    for splitSize in splitSizes
        int splitResult = ExpectedValue(A, 0, splitSize) +
                          BestValueHelper(A + splitSize, numSplits - 1, splitSizes);
        if splitResult > result
            result = splitResult;
    return result;
}
ExpectedValue(Array A, int l, int m) computes the expected value of a split of A that goes from l to m i.e. (A[l] + A[l+1] + ... A[m]) / (m-l+1).
BestValue calls BestValueHelper after computing the array of valid split sizes between ceil(n/2k) and floor(3n/2k).
I have omitted error handling and some end conditions but those should not be too difficult to add.
Let
a[] = given array of length n
from = inclusive index of array a
k = number of required splits
minSize = minimum size of a split
maxSize = maximum size of a split
d = maxSize - minSize
expectation(a, from, to) = average of all element of array a from "from" to "to"
Optimal(a[], from, k) = MAX over j in [minSize-1 .. maxSize-1] of { expectation(a, from, from+j) + Optimal(a, from+j+1, k-1) }
Runtime (assuming memoization or dp) = O(n*k*d)
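For completeness, here is a hedged memoized sketch of this recurrence in Python (my code, not the poster's; minSize and maxSize follow the bounds in the problem statement):

from functools import lru_cache
from math import ceil, floor

def optimal(a, k):
    n = len(a)
    min_size = ceil(n / (2 * k))
    max_size = floor(3 * n / (2 * k))
    prefix = [0]
    for v in a:
        prefix.append(prefix[-1] + v)

    @lru_cache(maxsize=None)
    def go(frm, parts):
        if parts == 0:
            # valid only if the whole array has been consumed
            return 0.0 if frm == n else float('-inf')
        best = float('-inf')
        for size in range(min_size, max_size + 1):
            if frm + size > n:
                break
            ev = (prefix[frm + size] - prefix[frm]) / size  # expectation of one pick
            best = max(best, ev + go(frm + size, parts - 1))
        return best

    return go(0, k)

print(optimal([0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0], 3))   # -> 1.8333... (= 11/6)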