How to repack multiple knapsacks which were at maximum capacity, had their items dumped onto a pile, shuffled, and had some items removed? - knapsack-problem

In this variant of the Multiple Knapsack Problem, only the weight of the items is considered, so I guess it's more like a Multiple Subset Sum Problem, but it's easier to explain with knapsacks.
There are n knapsacks, each filled with items to its individual maximum weight capacity C[j], where 0 <= j < n.
The knapsacks are emptied onto a pile, with a total of m items, each with a weight W[i], where 0 <= i < m. The items in the pile are shuffled and k items are removed from the pile, where 0 <= k <= m.
n, m, C[j] and W[i] are integers larger than zero; i, j and k are non-negative integers.
This state is the initial input to the packing algorithm.
How to repack all the remaining m - k items so that the individual capacity of each knapsack C[j] is not exceeded?
The packer has no knowledge of how the knapsacks were previously packed
Knapsacks were previously packed, so there exists a valid solution
The number of knapsacks used does not need to be optimised, there can be empty knapsacks and/or under-packed ones as well
Items cannot be broken down into lighter pieces, even if the resulting weights are integers as well
Items and knapsacks can be sorted if necessary
My biggest concern is correctness, and time is more important than memory usage
From the sample inputs I've been provided, usually m <= 10 and k ~= 7, but there are cases where m = 20, k = 0, or k = m
I don't know if first-fit or full-bin packing algorithms are guaranteed to reach a correct result when k approaches zero. For example: an algorithm might pack as many small items as possible into a large knapsack, but then a large item needs to be packed and the only knapsack big enough for it is already full.
Here's a simple example in JavaScript of what I want to accomplish:
let knapsacks = [
  { capacity: 13 },
  { capacity: 9 },
  { capacity: 60 },
  { capacity: 81 }
];
let items = [ 52, 81, 13 ];
// all items packed
let aSolution = [
  {
    capacity: 13,
    items: [ 13 ]
  },
  { capacity: 9 },
  {
    capacity: 60,
    items: [ 52 ]
  },
  {
    capacity: 81,
    items: [ 81 ]
  }
];
// item 81 not packed
let notASolution = [
  { capacity: 13 },
  { capacity: 9 },
  { capacity: 60 },
  {
    capacity: 81,
    items: [ 52, 13 ]
  }
];

Is it known what items were removed from the packing list? And, is it known what algorithm successfully packed the original list? If those are known, then packing of the subset list reverts to the problem of packing the original list: Pack the original list using the previously successful packing algorithm, then remove the items from the packed knapsacks to obtain a packing of the subset list.
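When neither the removed items nor the original packing algorithm is known, an exact search is still practical at the stated sizes (m <= 20). Below is a minimal backtracking sketch, written in Python rather than JavaScript; the function and variable names are my own and not part of the question or the answer above:

# Sketch of an exact backtracking packer. Assumptions: capacities and weights are
# plain lists of positive integers. Returns one list of item weights per knapsack,
# or None if no assignment fits. Sorting items by decreasing weight and skipping
# knapsacks with identical remaining room keeps the search small for m <= 20.
def repack(capacities, weights):
    items = sorted(weights, reverse=True)
    remaining = list(capacities)
    bins = [[] for _ in capacities]

    def place(idx):
        if idx == len(items):
            return True                       # every item has been placed
        w = items[idx]
        tried = set()                         # remaining capacities already tried for this item
        for j, room in enumerate(remaining):
            if room >= w and room not in tried:
                tried.add(room)
                remaining[j] -= w
                bins[j].append(w)
                if place(idx + 1):
                    return True
                bins[j].pop()                 # undo and try the next knapsack
                remaining[j] += w
        return False

    return bins if place(0) else None

# Example from the question: prints [[13], [], [52], [81]]
print(repack([13, 9, 60, 81], [52, 81, 13]))

Because a valid packing of the remaining items is guaranteed to exist, the search always terminates with a solution; the pruning only skips branches that are symmetric to ones already tried.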

Related

Subset with smallest sum greater or equal to k

I am trying to write a python algorithm to do the following.
Given a set of positive integers S, find the subset with the smallest sum, greater or equal to k.
For example:
S = [50, 103, 85, 21, 30]
k = 140
subset = [103, 50] (with sum = 153)
The numbers in the initial set are all integers, and k can be arbitrarily large. Usually there will be about 100 numbers in the set.
Of course there's the brute force solution of going through all possible subsets, but that runs in O(2^n), which is infeasible. I have been told that this problem is NP-Complete, but that there should be a Dynamic Programming approach that allows it to run in pseudo-polynomial time, like the knapsack problem; so far, attempting to use DP still leads me to solutions that are O(2^n).
Is there such a way to apply DP to this problem? If so, how? I find DP hard to understand so I might have missed something.
Any help is much appreciated.
Well, seeing that the numbers are not integers but reals, the best I can think of is O(2^(n/2) log(2^(n/2))).
It might look worse at first glance but notice that 2^(n/2) == sqrt(2^n)
So to achieve such complexity we will use a technique known as meet in the middle:
Split the set into 2 parts of sizes n/2 and n - n/2
Use brute force to generate all subsets of each part (including the empty one) and store their sums in arrays, let's call them A and B
Sort array B
Now for each element a in A, if B[-1] + a >= k, we can use binary search to find the smallest element b in B that satisfies a + b >= k
Out of all such a + b pairs found, choose the smallest
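A small Python sketch of this meet-in-the-middle idea (my own illustration, not from the answer; the brute-force subset generation is only feasible for moderate n):

from bisect import bisect_left
from itertools import combinations

# Assumed names: S is the input list, k the target lower bound.
def smallest_sum_at_least_mitm(S, k):
    half = len(S) // 2
    def subset_sums(part):
        return [sum(c) for r in range(len(part) + 1) for c in combinations(part, r)]
    A = subset_sums(S[:half])
    B = sorted(subset_sums(S[half:]))        # sort one half so we can binary search it
    best = None
    for a in A:
        if B[-1] + a >= k:                   # some b in B can complete a to reach k
            b = B[bisect_left(B, k - a)]     # smallest such b
            if best is None or a + b < best:
                best = a + b
    return best                              # None if even the full set sums to less than k

print(smallest_sum_at_least_mitm([50, 103, 85, 21, 30], 140))   # prints 153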
The OP changed the question a little, it's integers now, so here goes the dynamic programming solution:
Well, not much to say: classical knapsack.
For each i in [1, n] we have 2 options for set item i:
1. Include it in the subset: the state changes from (i, w) to (i+1, w + S[i])
2. Skip it: the state changes from (i, w) to (i+1, w)
Every time we reach some w that's >= k, we update the answer.
Pseudo-code:
visited = Set() // some set/hashtable object to store visited states
S = [...]       // the input set of integers (1-based here)
int ats = -1;
void solve(int i, int w) // there are at most about n*k distinct states, so complexity is O(n*k)
{
    if(w >= k)
    {
        if(ats == -1) ats = w;
        else ats = min(ats, w);
        return;
    }
    if(i > n) return;
    if(visited.count(i, w)) return; // we already visited this state, so we can skip it
    visited.insert(i, w);
    solve(i + 1, w + S[i]); // take item i
    solve(i + 1, w);        // skip item i
}
solve(1, 0);
print(ats);
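For reference, here is a compact bottom-up Python version of the same pruned state space (my own sketch, not part of the answer above): reachable sums below k are tracked in a boolean array, and any sum that reaches k immediately updates the answer.

# Assumption: S is a list of positive integers, k > 0.
def smallest_sum_at_least(S, k):
    best = -1                                # mirrors ats in the pseudo-code above
    reachable = [False] * k                  # reachable[w]: some subset sums to exactly w (w < k)
    reachable[0] = True                      # the empty subset
    for x in S:
        for w in range(k - 1, -1, -1):       # iterate downwards so each item is used at most once
            if reachable[w]:
                s = w + x
                if s >= k:
                    if best == -1 or s < best:
                        best = s
                else:
                    reachable[s] = True
    return best                              # -1 if no subset reaches k

print(smallest_sum_at_least([50, 103, 85, 21, 30], 140))   # prints 153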

Minimum sum that can't be obtained from a set

Given a set S of positive integers (whose elements need not be distinct), I need to find the minimal non-negative sum that can't be obtained from any subset of the given set.
Example: if S = {1, 1, 3, 7}, we can get 0 as (S' = {}), 1 as (S' = {1}), 2 as (S' = {1, 1}), 3 as (S' = {3}), 4 as (S' = {1, 3}), 5 as (S' = {1, 1, 3}), but we can't get 6.
Now we are given an array A consisting of N positive integers. There are M queries, each consisting of two integers Li and Ri describing the i'th query: we need to find this sum that can't be obtained from the array elements {A[Li], A[Li+1], ..., A[Ri-1], A[Ri]}.
I know how to find it with a brute force approach in O(2^n). But given 1 ≤ N, M ≤ 100,000, this can't be done.
So is there any effective approach to do it?
Concept
Suppose we had an array of bool representing which numbers have been found so far (by way of summing).
For each number n we encounter in the ordered (increasing values) subset of S, we do the following:
For each existing True value at position i in numbers, we set numbers[i + n] to True
We set numbers[n] to True
With this sort of a sieve, we would mark all the found numbers as True, and iterating through the array when the algorithm finishes would find us the minimum unobtainable sum.
Refinement
Obviously, we can't have a solution like this because the array would have to be infinite in order to work for all sets of numbers.
The concept could be improved by making a few observations. With an input of 1, 1, 3, the array becomes (in sequence): {1}, then {1, 2}, then {1, 2, 3, 4, 5}
(numbers represent true values)
An important observation can be made:
(3) For each next number, if the previous numbers had already been found it will be added to all those numbers. This implies that if there were no gaps before a number, there will be no gaps after that number has been processed.
For the next input of 7 we can assert that:
(4) Since the input set is ordered, there will be no number less than 7
(5) If there is no number less than 7, then 6 cannot be obtained
We can come to a conclusion that:
(6) the first gap represents the minimum unobtainable number.
Algorithm
Because of (3) and (6), we don't actually need the numbers array, we only need a single value, max to represent the maximum number found so far.
This way, if the next number n is greater than max + 1, then a gap would have been made, and max + 1 is the minimum unobtainable number.
Otherwise, max becomes max + n. If we've run through the entire S, the result is max + 1.
Actual code (C#, easily converted to C):
static int Calculate(int[] S)
{
int max = 0;
for (int i = 0; i < S.Length; i++)
{
if (S[i] <= max + 1)
max = max + S[i];
else
return max + 1;
}
return max + 1;
}
This should run pretty fast, since it's obviously linear time (O(n)). Since the input to the function should be sorted, with quicksort this becomes O(n log n). I've managed to get results for M = N = 100000 on 8 cores in just under 5 minutes.
With a numbers upper limit of 10^9, a radix sort could be used to approximate O(n) time for the sorting; however, this would still be way over 2 seconds because of the sheer number of sorts required.
But we can use the statistical improbability of a 1 appearing to eliminate subsets before sorting. At the start, check if 1 exists in S; if not, then every query's result is 1 because it cannot be obtained.
Statistically, if we draw from 10^9 numbers 10^5 times, we have a 99.9% chance of not getting a single 1.
Before each sort, check if that subset contains 1; if not, then its result is 1.
With this modification, the code runs in 2 milliseconds on my machine. Here's that code: http://pastebin.com/rF6VddTx
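Since the linked pastebin is external, here is a rough Python sketch of the per-query flow described above (my own reconstruction; the function and variable names are assumptions):

# Assumption: A is the full array, queries is a list of 0-based inclusive (L, R) pairs.
def min_unobtainable(A, queries):
    results = []
    for L, R in queries:
        sub = A[L:R + 1]
        if 1 not in sub:              # without a 1, the sum 1 can never be formed
            results.append(1)
            continue
        max_reach = 0                 # every sum in [0, max_reach] is obtainable so far
        for x in sorted(sub):
            if x <= max_reach + 1:
                max_reach += x
            else:
                break                 # first gap found
        results.append(max_reach + 1)
    return results

# Example from the question: S = {1, 1, 3, 7} -> 6
print(min_unobtainable([1, 1, 3, 7], [(0, 3)]))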
This is a variation of the subset-sum problem, which is NP-Complete, but there is a pseudo-polynomial Dynamic Programming solution you can adopt here, based on the recursive formula:
f(S,i) = f(S-arr[i],i-1) OR f(S,i-1)
f(-n,i) = false
f(_,-n) = false
f(0,i) = true
The recursive formula is basically an exhaustive search, each sum can be achieved if you can get it with element i OR without element i.
The dynamic programming is achieved by building a SUM+1 x n+1 table (where SUM is the sum of all elements, and n is the number of elements), and building it bottom-up.
Something like:
table <- (SUM+1) x (n+1) table
//init:
for each j from 0 to n:
    table[0][j] = true      // a sum of 0 is always achievable (empty subset)
for each i from 1 to SUM:
    table[i][0] = false     // a positive sum cannot be made from no elements
//fill the table:
for each i from 1 to SUM:
    for each j from 1 to n:
        if i < arr[j]:
            table[i][j] = table[i][j-1]
        else:
            table[i][j] = table[i-arr[j]][j-1] OR table[i][j-1]
Once you have the table, you need the smallest i such that for all j: table[i][j] = false
The complexity of the solution is O(n*SUM), where SUM is the sum of all elements, but note that the algorithm can be stopped as soon as the required number is found, without computing the next rows, which are not needed for the solution.
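A compact Python sketch of this DP (my own, not from the answer): the table is collapsed to a single boolean array over sums, which keeps the same O(n*SUM) bound.

# Assumption: arr is a list of positive integers.
def min_unobtainable_dp(arr):
    total = sum(arr)
    reachable = [True] + [False] * total        # reachable[s]: some subset sums to s
    for x in arr:
        for s in range(total, x - 1, -1):       # go downwards so each element is used at most once
            if reachable[s - x]:
                reachable[s] = True
    for s, ok in enumerate(reachable):
        if not ok:
            return s                            # smallest sum that cannot be obtained
    return total + 1                            # every sum up to total is obtainable

print(min_unobtainable_dp([1, 1, 3, 7]))        # prints 6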

Strange but practical 2D bin packing optimization

I am trying to write an application that generates drawing for compartmentalized Panel.
I have N cubicles (2D rectangles) (N <= 40). For each cubicle there is a minimum height (minHeight[i]) and minimum width (minWidth[i]) associated. The panel itself also has a MAXIMUM_HEIGHT constraint.
These N cubicles have to be arranged in a column-wise grid such that the above constraints are met for each cubicle.
Also, the width of each column is decided by the maximum of minWidths of each cubicle in that column.
Also, the height of each column should be the same. This decides the height of the panel
We can add spare cubicles in the empty space left in any column or we can increase the height/width of any cubicle beyond the specified minimum. However we cannot rotate any of the cubicles.
OBJECTIVE: TO MINIMIZE TOTAL PANEL WIDTH.
At present I have implemented it simply by ignoring the widths of the cubicles in my optimization. I just choose the cubicle with the largest minHeight and try to fit it in my panel. However, this does not guarantee an optimal solution.
Can I get any better than this?
EDIT 1: MAXIMUM_HEIGHT of panel = 2100mm, minwidth range (350mm to 800mm), minheight range (225mm to 2100mm)
EDIT 2: PROBLEM OBJECTIVE: TO MINIMIZE PANEL WIDTH (not panel area).
Formulation
Given:
for each cell i = 1, ..., M, the (min) width W_i and (min) height H_i
the maximum allowed height of any stack, T
We can formulate the mixed integer program as follows:
minimize sum { CW_k | k = 1, ..., N }
with respect to
C_i in { 1, ..., N }, i = 1, ..., M
CW_k >= 0, k = 1, ..., N
and subject to
[1] sum { H_i | C_i = k } <= T, k = 1, ..., N
[2] CW_k = max { W_i | C_i = k }, k = 1, ..., N
(or 0 when set is empty)
You can pick N to be any sufficiently large integer (for example, N = M).
Algorithm
Plug this mixed integer program into an existing mixed integer program solver to determine the cell-to-column mapping given by the optimal C_i, i = 1, ..., M values.
This is the part you do not want to reinvent yourself. Use an existing solver!
Note
Depending on the expressive power of your mixed integer program solver package, you may or may not be able to directly apply the formulation I described above. If the constraints [1] and [2] cannot be specified because of the "set based" nature of them or the max, you can manually transform the formulation to an equivalent less-declarative but more-canonical one that does not need this expressive power:
minimize sum { CW_k | k = 1, ..., N }
with respect to
C_i_k in { 0, 1 }, i = 1, ..., M; k = 1, ..., N
CW_k >= 0, k = 1, ..., N
and subject to
[1] sum { H_i * C_i_k | i = 1, ..., M } <= T, k = 1, ..., N
[2] CW_k >= W_i * C_i_k, i = 1, ..., M; k = 1, ..., N
[3] sum { C_i_k | k = 1, ..., N } = 1, i = 1, ..., M
Here the C_i variables from before (taking values in { 1, ..., N }) have been replaced with C_i_k variables (taking values in { 0, 1 }) under the relationship C_i = sum { C_i_k | k = 1, ..., N }.
The final cell-to-column mapping is described by the C_i_k: cell i belongs in column k if and only if C_i_k = 1.
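For illustration, here is a sketch of the 0/1 formulation above using the PuLP modelling library (my choice of library is an assumption; any MIP solver package would do). W and H hold the minimum widths and heights, T is the maximum column height, and N is the number of columns allowed:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpStatus

def pack_columns(W, H, T, N=None):
    M = len(W)
    N = N if N is not None else M                 # N = M columns is always sufficient
    prob = LpProblem("panel_width", LpMinimize)
    C = [[LpVariable(f"c_{i}_{k}", cat=LpBinary) for k in range(N)] for i in range(M)]
    CW = [LpVariable(f"cw_{k}", lowBound=0) for k in range(N)]
    prob += lpSum(CW)                             # objective: total panel width
    for k in range(N):
        prob += lpSum(H[i] * C[i][k] for i in range(M)) <= T      # [1] column height limit
        for i in range(M):
            prob += CW[k] >= W[i] * C[i][k]                       # [2] column width >= widest cell
    for i in range(M):
        prob += lpSum(C[i][k] for k in range(N)) == 1             # [3] each cell in exactly one column
    prob.solve()
    columns = [[i for i in range(M) if C[i][k].value() > 0.5] for k in range(N)]
    return LpStatus[prob.status], columns, sum(v.value() for v in CW)

# Made-up example data: three cells, panel height limit 2100 mm
print(pack_columns([350, 800, 500], [2100, 1000, 900], 2100))

Empty columns simply end up with CW_k = 0, which matches the "(or 0 when set is empty)" remark in the first formulation.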
One solution is to divide the width of the cubicle row by the minimum width. This gives you the maximum number of cubicles that can fit in a row.
Divide the remainder of the first division by the number of cubicles. This gives you the extra width to add to the minimum width to make all of the cubicle widths even.
Example: You have a cubicle row of 63 meters. Each cubicle has a minimum width of 2 meters. I'm assuming that the thickness of one of the cubicle walls is included in the 2 meters. I'm also assuming that one end cubicle will be against a wall.
Doing the math, we get 63 / 2 = 31.5 or 31 cubicles.
Now we divide the leftover 1 meter (63 - 31 × 2) by 31 cubicles and get about 32 millimeters. So, the cubicle widths are about 2.032 meters.
You can look into VM packing, especially share-aware algorithms for virtual machine collocation: http://dl.acm.org/citation.cfm?id=1989554. You can also read about bin packing: http://en.m.wikipedia.org/wiki/Bin_packing_problem. The problem is already difficult, but here the cubicles can share width or height, so the search space gets bigger.

Algorithm for shortest path from multiple Sets

This is an interview question (I saw it on a forum and am not able to figure out the best solution). The problem is: from the given sets of numbers, find the shortest path.
eg.
Set A - [2, 14, 34]
Set B - [9, 13]
Set C - [15, 22, 62, 78]
Set D - [16, 24, 54]
Set Z - [17, 38, 41]
1) There can be any number of sets
2) The numbers inside the set will never repeat.
3) The numbers can range from any start to any end (they are not between 0 - n, i.e. It can start from 1091 to 1890 etc)
4) All the sets are sorted.
in the above example the path will be:
B[13] -> A[14] -> C[15] -> D[16] -> Z[17]
The shortest path is defined as the difference between MAX number (17) - MIN Number (13) = 4;
Any ideas ?
Make a list of pairs [number, name_of_set]. Sort it.
For a given length of path, D, scan the sorted list, keeping 2 pointers. Always increase the first pointer, and increase the second while the spread is larger than D. While scanning, keep counts of the elements between the pointers belonging to each set. If there is an element from each set, bingo, you found a path with difference at most D.
Now, binary search for D.
Overall complexity: O(N log N).
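Here is a Python sketch of that idea (my own illustration; the feasibility check is the two-pointer scan, and the binary search here runs over the numeric range of values rather than over the sorted pairwise differences):

from collections import defaultdict

def min_spread(sets_):
    # Flatten to (value, set_index) pairs and sort by value.
    pairs = sorted((v, si) for si, s in enumerate(sets_) for v in s)
    m = len(sets_)

    def feasible(D):
        # Is there a window of spread <= D containing an element from every set?
        count = defaultdict(int)
        covered = 0
        left = 0
        for right, (v, si) in enumerate(pairs):
            count[si] += 1
            if count[si] == 1:
                covered += 1
            while pairs[right][0] - pairs[left][0] > D:
                lsi = pairs[left][1]
                count[lsi] -= 1
                if count[lsi] == 0:
                    covered -= 1
                left += 1
            if covered == m:
                return True
        return False

    lo, hi = 0, pairs[-1][0] - pairs[0][0]
    while lo < hi:                        # binary search for the smallest feasible D
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Example from the question: prints 4 (the window 13..17)
print(min_spread([[2, 14, 34], [9, 13], [15, 22, 62, 78], [16, 24, 54], [17, 38, 41]]))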
A heap (priority queue) might help.
Merge-sort all data into an array N, also keeping the original set id; assume there are m sets in total.
int shortest = MAX(N) - MIN(N); // that is N[N.length - 1] - N[0]
Init a heap h.
Loop through N with index i: if h does not contain an element from the same set as N[i], add N[i] to the heap; if h already contains an element from the same set, say h[k], increase the key of h[k] to N[i]. If h.size() == m, shortest = N[i] - h[0] < shortest ? N[i] - h[0] : shortest.
here is code:
mergesort(all_sets[], N, S); // N holds all data, sorted; S holds the corresponding set ids.
Heap<key, setid> H = new Heap<key, setid>();
int shortest = N[N.length - 1] - N[0];
for(int i = 0; i < N.length; i++)
{
int data = N[i];
int setID = S[i];
int hindex = H.elementFromSet(setID);
if(hindex < 0)
{ // H does not have any element from set with setID;
H.add(data, setID);
} else {
H.increase(data, hindex);
}
if(H.size() == m)
{
shortest = shortest > N[i] - H[0]? N[i] - H[0] : shortest;
}
}
Maybe I can use a hashtable to keep track of the set-id-to-heap-index mapping.
The runtime, I believe, is O(n lg m).
Take Set A and Set B. Find the shortest path in this pair of sets.
This will be 14-13. Now sort it, so that it becomes 13-14.
Now the short set is short = {13, 14}.
Take the short set {13, 14} and set C {15, 22, 62, 78}.
Now the start node is 13 and the end node is 14 in the short set.
Beginning from the end node 14, the shortest reachable value is 15.
So add 15 to the short set.
Now the short set becomes {13, 14, 15}; sort it so that it remains {13, 14, 15}.
Now take the short set {13, 14, 15} and set D {16, 24, 54}.
The end node in the short set is 15. So we begin from there.
Now the shortest path from 15 to set D is 16. So add 16 to the short set.
Now the short set becomes {13, 14, 15, 16}. Sort it. It remains {13, 14, 15, 16}.
We can repeat this for the remaining sets to get the resultant short set.
You can apply essentially the same idea as the algorithm I described in this question.
Let's look for the center of the final subset. It must minimize the maximum distance to each of the sets. As usual, the distance of a point to a set is defined as the minimum distance between the point and an element of the set.
For each set i, the function fi describing the distance to the set is piecewise linear. If a,b are two consecutive numbers, the relations fi(a) = 0, fi((a+b)/2) = (b-a)/2, fi(b) = 0 let us build a description of all the fi in linear time.
But we can also compute the maximum of two piecewise functions fi and fj in linear time, by considering the consecutive intervals [a,b] where they are linear: either the result is linear, or it is piecewise linear by adding the unique intersection point of the functions to the partition. Since the slopes of our functions are always +1 or -1, the intersection point is a half-integer so it can be represented exactly in floating-point (or fixed-point) arithmetic.
A convexity argument shows that the maximum g of all the fi can only have at most twice as many points as the fi, so we don't have to worry about the maximum having a number of points that would be exponential in the number of sets.
So we just:
Compute the piecewise linear distance function fi for i = 1..p.
Compute the maximum g of all the fi by repeatedly computing the maximum.
The location of any minimum point of g is the desired center.
For each set, pick the closest point to the center.
The width of the set of points we picked is exactly twice the minimum of g :-)
Complexity is O(N) if the number of sets is bounded, or O(N p) if the number of sets p is variable. By being smart about how you compute the maximum (divide-and-conquer), I think you can even reduce it to O(N log p).
Here's an alternative formulation of the problem.
Q: Find the smallest interval which contains an element from all the sets.
A:
Put all the elements in a single bucket and sort them. Complexity O(N*K)
We will find the largest number such that there is at least one element from each set higher than this number by binary search. This will be MIN.
Similarly, find the smallest number such that there is at least one element from each set smaller than this number by binary search. This will be MAX.
As an optimization, you can store each set as an interval, and use an interval tree for queries in steps 2 and 3. That way the query complexity changes from O(K) to O(log K)

array median transformation minimum steps

Given an array A with n integers. In one turn one can apply the following operation to any consecutive subarray A[l..r]: assign to all A[i] (l <= i <= r) the median of subarray A[l..r].
Let max be the maximum integer of A. We want to know the minimum number of operations needed to change A to an array of n integers each with value max.
For example, let A = [1, 2, 3]. We want to change it to [3, 3, 3]. We can do this in two operations: first for subarray A[2..3] (after that A equals [1, 3, 3]), then the operation on A[1..3].
Also, the median is defined for some array A as follows. Let B be the same array A, but sorted in non-decreasing order. The median of A is B[m] (1-based indexing), where m equals (n div 2) + 1. Here 'div' is integer division. So, for a sorted array with 5 elements, the median is the 3rd element, and for a sorted array with 6 elements, it is the 4th element.
Since the maximum value of N is 30, I thought of brute-forcing the result. Could there be a better solution?
You can double the size of the subarray containing the maximum element in each iteration. After the first iteration, there is a subarray of size 2 containing the maximum. Then apply your operation to a subarray of size 4, containing those 2 elements, giving you a subarray of size 4 containing the maximum. Then apply to a size 8 subarray and so on. You fill the array in log2(N) operations, which is optimal. If N is 30, five operations is enough.
This is optimal in the worst case (i.e. when only one element is the maximum), since it sets the highest possible number of elements in each iteration.
Update 1: I noticed I messed up the 4s and 8s a bit. Corrected.
Update 2: here's an example. Array size 10, start state:
[6 1 5 9 3 2 0 7 4 8]
To get two nines, run op on subarray of size two containing the nine. For instance A[4…5] gets you:
[6 1 5 9 9 2 0 7 4 8]
Now run on size four subarray that contains 4…5, for instance on A[2…5] to get:
[6 9 9 9 9 2 0 7 4 8]
Now on subarray of size 8, for instance A[1…8], get:
[9 9 9 9 9 9 9 9 4 8]
Doubling now would get us 16 nines, but we have only 10 positions, so round off with A[1…10] to get:
[9 9 9 9 9 9 9 9 9 9]
Update 3: since this is only optimal in the worst case, it is actually not an answer to the original question, which asks for a way of finding the minimal number of operations for all inputs. I misinterpreted the sentence about brute forcing to be about brute forcing with the median operations, rather than in finding the minimum sequence of operations.
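For illustration, here is a small Python simulation of this doubling strategy (my own sketch; as Update 3 notes, it gives a worst-case upper bound of ceil(log2(n)) operations, not necessarily the minimum for every input):

def median_op(A, l, r):                   # apply the problem's operation to A[l..r], 0-based inclusive
    B = sorted(A[l:r + 1])
    med = B[len(B) // 2]                  # B[m] with m = (len div 2) + 1 in 1-based terms
    A[l:r + 1] = [med] * (r - l + 1)

def fill_with_max(A):
    A = list(A)
    mx = max(A)
    pos, size, ops = A.index(mx), 1, 0    # a block of maximums starts at pos with this size
    while A.count(mx) < len(A):
        new_size = min(2 * size, len(A))  # double the window, clipped to the array length
        l = min(pos, len(A) - new_size)   # keep the window inside the array and over the block
        median_op(A, l, l + new_size - 1)
        pos, size, ops = l, new_size, ops + 1
    return ops, A

# Example from the answer above: prints (4, [9, 9, 9, 9, 9, 9, 9, 9, 9, 9])
print(fill_with_max([6, 1, 5, 9, 3, 2, 0, 7, 4, 8]))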
This is a problem from the CodeChef Long Contest. Since the contest is already over, awkwardiom, I am pasting the problem setter's approach (Source: CC Contest Editorial Page):
"Any state of the array can be represented as a binary mask with each bit 1 meaning that the corresponding number is equal to the max and 0 otherwise. You can run a DP with state R[mask] and O(n) transitions. You can prove (or just believe) that the number of states will not be big, of course if you run a good DP. The state of our DP will be the mask of numbers that are equal to max. Of course, it makes sense to use the operation only for a subarray [l; r] in which the number of 1-bits is at least as large as the number of 0-bits, because otherwise nothing will change. Also you should notice that if the left bound of your operation is l, it is good to make the operation only with the maximal possible r (this gives a number of transitions equal to O(n)). It was also useful for C++ coders to use a map structure to represent all states."
The C++ code is:
#include <cstdio>
#include <iostream>
using namespace std;
int bc[1<<15];
const int M = (1<<15) - 1;
void setMin(int& ret, int c)
{
if(c < ret) ret = c;
}
void doit(int n, int mask, int currentSteps, int& currentBest)
{
int numMax = bc[mask>>15] + bc[mask&M];
if(numMax == n) {
setMin(currentBest, currentSteps);
return;
}
if(currentSteps + 1 >= currentBest)
return;
if(currentSteps + 2 >= currentBest)
{
if(numMax * 2 >= n) {
setMin(currentBest, 1 + currentSteps);
}
return;
}
if(numMax < (1<<currentSteps)) return;
for(int i=0;i<n;i++)
{
int a = 0, b = 0;
int c = mask;
for(int j=i;j<n;j++)
{
c |= (1<<j);
if(mask&(1<<j)) b++;
else a++;
if(b >= a) {
doit(n, c, currentSteps + 1, currentBest);
}
}
}
}
int v[32];
void solveCase() {
int n;
scanf(" %d", &n);
int maxElement = 0;
for(int i=0;i<n;i++) {
scanf(" %d", v+i);
if(v[i] > maxElement) maxElement = v[i];
}
int mask = 0;
for(int i=0;i<n;i++) if(v[i] == maxElement) mask |= (1<<i);
int ret = 0, p = 1;
while(p < n) {
ret++;
p *= 2;
}
doit(n, mask, 0, ret);
printf("%d\n",ret);
}
int main() {
for(int i=0;i<(1<<15);i++) {
bc[i] = bc[i>>1] + (i&1);
}
int cases;
scanf(" %d",&cases);
while(cases--) solveCase();
}
The problem setter's approach has exponential complexity. It is pretty good for N = 30, but not for larger sizes. I think it's more interesting to find a polynomial-time solution, and I found one with O(N^4) complexity.
This approach uses the fact that the optimal solution starts with some group of consecutive maximal elements and extends only this single group until the whole array is filled with maximal values.
To prove this fact, take 2 starting groups of consecutive maximal elements and extend each of them in an optimal way until they merge into one group. Suppose that group 1 needs X turns to grow to size M, group 2 needs Y turns to grow to the same size M, and on turn X + Y + 1 these groups merge. The result is a group of size at least M * 4. Now, instead of turn Y for group 2, make an additional turn X + 1 for group 1. In this case the group sizes are at least M * 2 and at most M / 2, respectively (even if we count initially maximal elements that might be included in step Y). After this change, on turn X + Y + 1 the merged group size is at least M * 4 from the first group's extension alone, plus at least one element from the second group. So extending a single group here produces a larger group in the same number of steps (and if Y > 1, it even requires fewer steps). Since this works for equal group sizes (M), it works even better for non-equal groups. This proof may be extended to the case of several groups (more than two).
To work with single group of consecutive maximal elements, we need to keep track of only two values: starting and ending positions of the group. Which means it is possible to use a triangular matrix to store all possible groups, allowing to use a dynamic programming algorithm.
Pseudo-code:
For each group of consecutive maximal elements in original array:
Mark corresponding element in the matrix and clear other elements
For each matrix diagonal, starting with one, containing this element:
For each marked element in this diagonal:
Retrieve current number of turns from this matrix element
(use indexes of this matrix element to initialize p1 and p2)
p2 = end of the group
p1 = start of the group
Decrease p1 while it is possible to keep median at maximum value
(now all values between p1 and p2 are assumed as maximal)
While p2 < N:
Check if number of maximal elements in the array is >= N/2
If this is true, compare current number of turns with the best result \
and update it if necessary
(additional matrix with number of maximal values between each pair of
points may be used to count elements to the left of p1 and to the
right of p2)
Look at position [p1, p2] in the matrix. Mark it and if it contains \
larger number of turns, update it
Repeat:
Increase p1 while it points to maximal value
Increment p1 (to skip one non-maximum value)
Increase p2 while it is possible to keep median at maximum value
while median is not at maximum value
To keep algorithm simple, I didn't mention special cases when group starts at position 0 or ends at position N, skipped initialization and didn't make any optimizations.
