Suppose we have sacks of gold and a thief who wants to get the maximum amount of gold. The thief can take the gold subject to two rules:
1) He takes gold from contiguous sacks.
2) He takes the same amount of gold from every sack he steals from.
N sacks, 1 <= N <= 1000
M, the quantity of gold in a sack, 0 <= M <= 100
Sample Input1:
3 0 5 4 4 4
Output:
16
Explanation:
Taking 4 (the minimum of sacks 3 to 6) from each of those four sacks gives the maximum value of 16.
Sample Input2:
2 4 3 2 1
Output:
8
Explanation:
Taking 2 (the minimum of sacks 1 to 4) from each of those four sacks gives the maximum value of 8.
I approached the problem by subtracting values in the array and taking the transition point from negative to positive, but this doesn't solve the problem.
EDIT: code provided by OP to find the index:
int temp[6];
for (i = 1; i < 6; i++) {
    for (j = i-1; j >= 0; j--) {
        temp[j] = a[j] - a[i];
    }
}
for (i = 0; i < 6; i++) {
    if (temp[i] >= 0) {
        index = i;
        break;
    }
}
The best amount of gold (TBAG) taken from every sack is equal to the weight of some sack. Let's push the indexes of candidate sacks onto a stack, in order.
When we meet a heavier weight (than the stack top), it definitely continues a "good sequence", so we just push its index onto the stack.
When we meet a lighter weight (than the stack top), it breaks some "good sequences", and we can remove the heavier candidates from the stack: they will have no chance to be TBAG later. Pop the stack top until a lighter weight is met, calculating the potentially stolen sum during this process.
Note that the stack always contains indexes of a strictly increasing sequence of weights, so we don't need to consider items before the index at the stack top (an intermediate AG) when calculating the stolen sum (they will be considered later with another AG value).
for idx in Range(Sacks):
    while (not Stack.Empty) and (Sacks[Stack.Peek] >= Sacks[idx]):  // smaller sack is met
        AG = Sacks[Stack.Pop]
        if Stack.Empty then
            firstidx = 0
        else
            firstidx = Stack.Peek + 1
        // range_length * smallest_weight_in_range
        BestSUM = MaxValue(BestSUM, AG * (idx - firstidx))
    Stack.Push(idx)

Now check the rest: repeat the while loop once more, without the >= condition.
Every item is pushed and popped at most once, so time and space complexity are linear.
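A runnable Python sketch of the pseudocode above (the function name `max_gold` is mine; a sentinel weight of 0 appended past the end plays the role of the final "repeat the while loop without the >= condition" pass):

```python
def max_gold(sacks):
    # Stack holds indexes of a strictly increasing sequence of weights.
    stack = []
    best = 0
    for idx in range(len(sacks) + 1):
        # Sentinel weight 0 past the end drains the stack at the finish.
        cur = sacks[idx] if idx < len(sacks) else 0
        while stack and sacks[stack[-1]] >= cur:     # smaller sack is met
            ag = sacks[stack.pop()]                  # candidate amount per sack (AG)
            firstidx = stack[-1] + 1 if stack else 0
            best = max(best, ag * (idx - firstidx))  # range_length * smallest_weight
        stack.append(idx)
    return best

print(max_gold([3, 0, 5, 4, 4, 4]))  # -> 16
print(max_gold([2, 4, 3, 2, 1]))     # -> 8
```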
P.S. I have a feeling I've seen this problem before in another formulation...
I see two different approaches for the moment:
Naive approach: for each pair of indices (i,j) in the array, compute the minimum value m(i,j) of the array in the interval (i,j), and then compute score(i,j) = |j-i+1| * m(i,j). Then take the maximum score over all pairs (i,j).
-> Complexity of O(n^3).
Less naive approach:
Compute the set of values of the array
For each value, compute the maximum score it can get. For that, you just have to iterate once over all the values of the array. For example, when your sample input is [3 0 5 4 4 4] and the current value you are looking at is 3, it will give you a score of 12. (You first find a score of 3 thanks to the first index, and then a score of 12 due to the run of indices 2 to 5.)
Take the maximum over all values found at step 2.
-> Complexity is here O(n*m), since step 2 is done at most m times, and each run of step 2 takes O(n).
Maybe there is a better complexity, but I don't have a clue yet.
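A sketch of the less naive approach in Python (the name `best_score` is mine): for each distinct value, one scan tracks the length of the current run of elements at least that large.

```python
def best_score(a):
    best = 0
    for v in set(a):          # at most m + 1 distinct values
        run = 0               # length of the current run of elements >= v
        for x in a:
            run = run + 1 if x >= v else 0
            best = max(best, v * run)
    return best

print(best_score([3, 0, 5, 4, 4, 4]))  # -> 16
```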
Related
Please help me with the below problem statement:
Bounce is a fast bunny. This time she faces the challenging task of completing all the trades on a number line.
Initially, Bounce is at position 0, and all trades to be performed are on the right side (position > 0).
She has two lists of equal length: one containing the value v[i] and the other the position p[i] of each trade she needs to perform.
The given list 'pos' is in strictly increasing order, that is, pos[i] < pos[i+1] for 1 <= i <= n-1 (1-based indexing), where n is the size of the list.
The trade values can be positive, negative, or zero.
During the process she can never have a resource count strictly less than zero, and after finishing all the trades she must end at the rightmost trade position (even if that trade's value is zero).
It is guaranteed that the sum of all trades is greater than or equal to zero.
Bounce can jump from any position to any other position. If she jumps from pos1 to pos2, the distance covered is |pos1-pos2|, and that distance is added to the total distance covered.
Find the minimum total distance Bounce has to cover to complete all the trades and end at the last (rightmost) trade position.
Constraints
1<=n<=10^5
-1000<=v[i]<=1000
1 <= pos[i] <= 10^8
Sample I/O: 1
Input:
n = 4
v = [2, -3, 1, 2]
p = [1, 2, 3, 4]
Output:
6
Explanation:
Number of trades = 4, v = {2,-3,1,2}, position = {1,2,3,4}.
At x=1 we gain 2 resources; the resource count is 2.
At x=2 we can't trade, as we have only 2 resources.
At x=3 we gain 1 more resource and the count becomes 3 (now go back to 2, finish the pending trade, and come back); distance covered so far = 3+1+1 = 5.
At x=4 we gain 2 more resources and exit.
Hence, total distance covered = 6.
Sample I/O: 2
Input:
n = 4
v = [2, -3, -1, 2]
p = [1, 2, 3, 4]
Output:
8
I was asked this question in an interview and wasn't able to answer it, and I'm still unable to solve it. I tried to relate it to many concepts, like DAGs, maximum sum, and Kadane's algorithm, but none of them helped.
How should I approach this question, and which existing algorithm can it be related to?
It is a past interview question for which I don't have a link. I just want to know what I could have done at the time to solve it.
A greedy algorithm works here: as you walk forward, if the accumulated result would become negative, you know that you'll have to come back to this position some time later. This means that every following step counts three times (forward, backward, and forward again). Since the conflicting negative trade amount will eventually have to be absorbed, you might as well account for it immediately, knowing that you will have to triple the distance of the following steps until the accumulated amount is positive again.
So here is how that algorithm can be implemented in JavaScript. The two examples are run:
function minDistance(v, p) {
    let distance = 0;
    let position = 0;
    let resources = 0;
    for (let i = 0; i < v.length; i++) {
        let step = p[i] - position;
        if (resources < 0) distance += step * 3; // need to get back & forth here
        else distance += step;
        resources += v[i]; // all trades have to be performed anyway
        position = p[i];
    }
    return distance;
}
console.log(minDistance([2,-3,1,2], [1,2,3,4])); // 6
console.log(minDistance([2,-3,-1,2], [1,2,3,4])); // 8
We're learning about skip lists at my university and we have to find the k-th element in a skip list. I haven't found anything about this on the internet, since the skip list is not a very popular data structure. W. Pugh wrote in his original article:
Each element x has an index pos(x). We use this value in our invariants but do not store it. The index of the header is zero, the index of the first element is one and so on. Associated with each forward pointer is a measurement, fDistance, of the distance traversed by that pointer:
x→fDistance[i] = pos(x→forward[i]) – pos(x).
Note that the distance traversed by a level 1 pointer is always 1, so some storage economy is possible here at the cost of a slight increase in the complexity of the algorithms.
SearchByPosition(list, k)
    if k < 1 or k > size(list) then return bad-index
    x := list→header
    pos := 0
    -- loop invariant: pos = pos(x)
    for i := list→level downto 1 do
        while pos + x→fDistance[i] ≤ k do
            pos := pos + x→fDistance[i]
            x := x→forward[i]
    return x→value
The problem is, I still don't get what is going on here. How do we know the positions of elements without storing them? How do we calculate fDistance from pos(x) if we don't store it? If we go down from the highest level of the skip list, how do we know how many nodes on level 0 (or 1, whichever is the lowest) we skip this way?
I'm going to assume you're referring to how to find the k-th smallest (or largest) element in a skip list. This is a rather standard assumption I think, otherwise you have to clarify what you mean.
I'll refer to the GIF on wikipedia in this answer: https://en.wikipedia.org/wiki/Skip_list
Let's say you want to find the k = 5 smallest element.
You start from the highest level (4 in the figure). How many elements would you skip from 30 to NIL? 6 (we also count the 30). That's too much.
Go down a level. How many skipped from 30 to 50? 2: 30 and 40.
So we reduced the problem to finding the k = 5 - 2 = 3 smallest element starting at 50 on level 3.
How many skipped from 50 to NIL? 4, that's one too many.
Go down a level. How many skipped from 50 to 70? 2. Now find the 3 - 2 = 1 smallest element starting from 70 on level 2.
How many skipped from 70 to NIL? 2, one too many.
From 70 to 90 on level 1? 1 (itself). So the answer is 70.
So you need to store how many nodes are skipped for each node at each level and use that extra information in order to get an efficient solution. That seems to be what fDistance[i] does in your code.
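To make the fDistance bookkeeping concrete, here is a small hand-built example in Python following Pugh's scheme (the values, the chosen levels, and all names are my own invention for illustration; a real skip list picks node levels randomly and maintains fdist during insertion):

```python
class Node:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * level   # forward[i] = next node on level i
        self.fdist = [0] * level        # fdist[i] = positions skipped by forward[i]

def search_by_position(header, size, k):
    # Pugh's SearchByPosition: the variable pos always equals pos(x).
    if k < 1 or k > size:
        return None                     # bad-index
    x, pos = header, 0
    for i in range(len(header.forward) - 1, -1, -1):
        while x.forward[i] is not None and pos + x.fdist[i] <= k:
            pos += x.fdist[i]
            x = x.forward[i]
    return x.value

# Hand-built 3-level list over the values 10..50.
header = Node(None, 3)
n10, n20, n30 = Node(10, 1), Node(20, 3), Node(30, 1)
n40, n50 = Node(40, 2), Node(50, 1)
# Level 0: every hop skips exactly 1 position.
for a, b in [(header, n10), (n10, n20), (n20, n30), (n30, n40), (n40, n50)]:
    a.forward[0], a.fdist[0] = b, 1
# Level 1: header -2-> 20 -2-> 40.
header.forward[1], header.fdist[1] = n20, 2
n20.forward[1], n20.fdist[1] = n40, 2
# Level 2: header -2-> 20.
header.forward[2], header.fdist[2] = n20, 2

print(search_by_position(header, 5, 4))  # -> 40
```

The search never reads positions directly; it only accumulates the stored fdist values, which is exactly how the invariant pos = pos(x) is maintained without storing pos(x) anywhere.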
Let's say you are given an array A of N integers and another integer M. For any given index i where 0 <= i < N, hide the ith index of A and return the product of all other elements of A modulo M.
For example, say A = {1, 2, 3, 4, 5} and M=100 then for i=1, the result would be (1x3x4x5) mod 100. Hence the result is 60.
Assume that all integers are 32 bit unsigned integers.
Now an obvious approach to do this is to calculate the result for any given value of i. That would mean N-1 multiplications for every given value of i. Is there a more optimal way to do this?
P.S.
First idea would be to store the product of all numbers in A (let's call this total). Now for every given value of i, we could just divide total by A[i] and return the result after taking the modulo. However, total would cause an overflow (and division doesn't carry over cleanly under a modulus anyway, since A[i] need not be invertible mod M), so this cannot be done.
Easy...:)
left[0] = a[0];
for (int i = 1; i <= n-1; i++)
    left[i] = (left[i-1] * a[i]) % M;

right[n-1] = a[n-1];
for (int i = n-2; i >= 0; i--)
    right[i] = (right[i+1] * a[i]) % M;

for query q:
    if (q == 0)
        return right[1] % M;
    if (q == n-1)
        return left[n-2] % M;
    return (left[q-1] * right[q+1]) % M;
Suppose there is an array of 5 elements:

index: 1   2   3   4   5
value: 1   5   2  10   4

Now for query q=3
the answer is ((1*5) * (10*4)) % M
and for query q=4
the answer is ((1*5*2) * 4) % M
We are basically precomputing all the left and right products (note the example uses 1-based indexing, while the code above is 0-based):

index: 1   2   3   4   5
value: 1   5   2  10   4
left:  1   5  10 100 400
right: 400 400 80 40   4

For q=3 the answer is left[2]*right[4] = (5*40) % M = 200 % M.
For q=4 the answer is left[3]*right[5] = (10*4) % M = 40 % M.
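The same idea in runnable (0-based) Python, reducing mod M at every step so that no intermediate product grows (the name `product_except` is mine):

```python
def product_except(a, M):
    n = len(a)
    left = [0] * n            # left[i]  = (a[0] * ... * a[i])   % M
    right = [0] * n           # right[i] = (a[i] * ... * a[n-1]) % M
    left[0] = a[0] % M
    for i in range(1, n):
        left[i] = (left[i-1] * a[i]) % M
    right[n-1] = a[n-1] % M
    for i in range(n-2, -1, -1):
        right[i] = (right[i+1] * a[i]) % M

    def query(q):             # product of all elements except a[q], mod M
        if q == 0:
            return right[1]
        if q == n - 1:
            return left[n-2]
        return (left[q-1] * right[q+1]) % M
    return query

q = product_except([1, 2, 3, 4, 5], 100)
print(q(1))  # -> (1*3*4*5) % 100 = 60
```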
For this answer, I'm assuming that this is not a ONE-TIME calculation, but it is something that can take place many times with different values of i.
First, define a non-volatile array to hold calculated products.
Then, whenever the function is invoked with a given pair of parameters (M and i):
Check in the array (from above) whether the product for this i was already calculated.
If yes, simply use the stored value, compute the MOD, and return the result.
If not, calculate the product, store it, compute the MOD, and return the value.
This method spares you from having a (potentially long) initialization which might calculate products that would not be needed.
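A minimal sketch of that lazy-caching idea in Python (naming is mine; here the product is reduced mod M as it is built, which also sidesteps the overflow worry):

```python
from functools import lru_cache

def make_query(a, M):
    @lru_cache(maxsize=None)            # remembers results across repeated queries
    def query(i):
        prod = 1
        for j, x in enumerate(a):
            if j != i:
                prod = (prod * x) % M   # reduce as we go
        return prod
    return query

q = make_query([1, 2, 3, 4, 5], 100)
print(q(1))  # -> 60
```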
I'm following the algorithm from here:
http://cgm.cs.mcgill.ca/~godfried/teaching/dm-reading-assignments/Maximum-Gap-Problem.pdf
I don't understand steps 2 and 3:
Divide the interval [xmin, xmax] into (n−1) "buckets" of equal size δ = (xmax − xmin)/(n−1).
For each of the remaining (n−2) numbers, determine in which bucket it falls using the floor function. The number xi belongs to the kth bucket Bk if, and only if, floor((xi − xmin)/δ) = k−1.
Let's say
a = [13, 4, 7, 2, 9, 17, 18]
Min: 2, Max: 18, n-1: 6.
So my number of buckets will be 6, and delta = (18-2)/6 = 2. That is, 6 buckets
having 2 elements each (a total of 12 elements I can have).
Step 2 question:
If there are only 12 elements, where would my max 18 be?
Step 3:
For element 18,
as per the algorithm it should be in math.floor((17-2)/float(2)) = 7.
So 18 should be in the 8th bucket, BUT we have only (n-1) = 6 buckets.
A mystery to me!
EDIT1:
Sorry, Step 3 had wrong math:
math.floor((17-2)/float(2)) = 5
I still need to figure out where the minimum and maximum go.
EDIT2:
As per answer by Miljen Mikic:
He was right; my question is "What do we do with the minimum and maximum?"
And in step 6:
In L find the maximum distance between a pair of consecutive maximum and minimum (xi_max, xj_min), where j > i.
How come j > i? i.e. max from next bucket and min from current bucket.
In the algorithm you cited, you don't put the minimum and maximum in the buckets. Pay attention to the note after Step 5:
Note: Since there are n-1 buckets and only n-2 numbers..
If you put the minimum and maximum in some buckets, then you would have n numbers, not n-2. The real question now is: what do we do with the minimum and maximum? Actually, step 6 of the algorithm should be clarified a little more. When you examine the list L, you should start by comparing x_min with x1_min, and you should end by comparing x(n-1)_max with x_max, because the maximum gap can actually involve the minimum or the maximum, as it does e.g. in this example: [1,7,3,2]. Of course, these two additional comparisons still give you linear time complexity.
Note that you can change the algorithm slightly by putting the minimum and maximum in buckets as well (by the exact same formula!) and then you would have n numbers and n buckets. Why? The minimum always goes in the first bucket (see the formula), and the maximum needs to go in the n-th bucket, which didn't exist previously, so this change gives us one extra bucket. This means that in this case you cannot always apply the pigeonhole principle; however, it still holds that the maximum distance between a pair of consecutive elements must be at least the length of the bucket. How come? If at least one bucket contains two elements, then some bucket must be empty and the claim is clear. Otherwise, all buckets contain exactly one element; this means that the first bucket contains the minimum, and the second bucket contains an element whose value is at least x_min + δ, so the difference between this element and x_min is at least δ, the length of the bucket. Why does the element in the second bucket have to be at least x_min + δ? If it were smaller, say x_min + δ - k where k > 0, then it would also belong to the first bucket, because floor(((x_min + δ - k) - x_min) / δ) = floor((δ - k) / δ) = 0, i.e. not to the second, as we assumed!
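Putting this answer together, here is a runnable Python version of the cited algorithm that keeps the minimum and maximum out of the buckets and performs the two extra comparisons at the ends of the scan (naming is mine):

```python
def maximum_gap(a):
    # Linear-time maximum gap via (n-1) buckets for the n-2 middle numbers.
    n = len(a)
    if n < 2:
        return 0
    lo, hi = min(a), max(a)
    if lo == hi:
        return 0
    delta = (hi - lo) / (n - 1)          # bucket width
    bmin = [None] * (n - 1)              # per-bucket minimum
    bmax = [None] * (n - 1)              # per-bucket maximum
    for x in a:
        if x == lo or x == hi:
            continue                     # only the n-2 remaining numbers
        k = int((x - lo) / delta)        # bucket index, 0-based
        if k == n - 1:                   # guard against float rounding at the edge
            k = n - 2
        bmin[k] = x if bmin[k] is None else min(bmin[k], x)
        bmax[k] = x if bmax[k] is None else max(bmax[k], x)
    best, prev = 0, lo                   # start the scan from the minimum...
    for k in range(n - 1):
        if bmin[k] is None:
            continue                     # empty buckets are skipped
        best = max(best, bmin[k] - prev)
        prev = bmax[k]
    return max(best, hi - prev)          # ...and close it with the maximum

print(maximum_gap([13, 4, 7, 2, 9, 17, 18]))  # -> 4
```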
I have a sorted array A of n integers (the values are not necessarily unique). What I need to do is: given a bound B, find an i < A[n] such that the sum of |A[j] - i| over a contiguous run of indices is less than B, with the largest possible number of A[j]s contributing to that sum. I have some ideas, but I can't seem to find anything better than the naive O(n*B) and O(n^2) algorithms. Any ideas for O(n log n) or O(n)?
For example, imagine
A = 1 2 10 10 12 14 and B = 7; then the best i is 12, because I get 4 A[j]s contributing to my sum. 10 and 11 are equally good i's, because for i = 10: (10-10) + (10-10) + (12-10) + (14-10) = 6 < 7.
A solution in O(n): start from the end and compute a[n] - a[n-1]:
let d = 14 - 12 => d = 2 and r = B - d => r = 5,
then repeat the operation, but multiplying d by the number of elements already included:
d = 12 - 10 => d = 2 and r = r - 2*d => r = 1,
r = 1; the algorithm stops when r runs out, because the sum must stay less than B:
With an array indexed 0..n-1:

i = 1
r = B
while (r > 0 && n - i > 1) {
    d = a[n-i] - a[n-i-1];
    r -= i * d;
    i++;
}
return a[n-i+1];
Maybe a drawing explains it better:
14 x
13 x -> 2
12 xx
11 xx -> 2*2
10 xxxx -> 3*0
9 xxxx
8 xxxx
7 xxxx
6 xxxx
5 xxxx
4 xxxxx
3 xxxxx
2 xxxxxx
1 xxxxxxx
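A direct Python port of the pseudocode above (naming is mine); note that it only considers ranges that end at the largest element:

```python
def best_value(a, B):
    # Scan suffix gaps from the right, charging each gap once per
    # element already included in the range.
    n = len(a)
    i, r = 1, B
    while r > 0 and n - i > 1:
        d = a[n - i] - a[n - i - 1]  # gap to the next element on the left
        r -= i * d                   # every element so far must climb that gap
        i += 1
    return a[n - i + 1]              # the value where the budget ran out

print(best_value([1, 2, 10, 10, 12, 14], 7))  # -> 10
```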
I think you can do it in O(n) using these three tricks:
CUMULATIVE SUM
Precompute an array C[k] that stores sum(A[0:k]).
This can be done recursively via C[k]=C[k-1]+A[k] in time O(n).
The benefit of this array is that you can then compute sum(A[a:b]) via C[b]-C[a-1].
BEST MIDPOINT
Because your elements are sorted, then it is easy to compute the best i to minimise the sum of absolute values. In fact, the best i will always be given by the middle entry.
If the length of the list is even, then all values of i between the two central elements will always give the minimum absolute value.
e.g. for your list 10,10,12,14 the central elements are 10 and 12, so any value for i between 10 and 12 will minimise the sum.
ITERATIVE SEARCH
You can now scan over the elements a single time to find the best value.
1. Init s=0,e=0
2. if the score for A[s:e] is less than B increase e by 1
3. else increase s by 1
4. if e<n return to step 2
Keep track of the largest value for e-s seen which has a score < B and this is your answer.
This loop can go around at most 2n times so it is O(n).
The score for A[s:e] is given by sum |A[s:e]-A[(s+e)/2]|.
Let m=(s+e)/2.
score = sum |A[s:e]-A[(s+e)/2]|
= sum |A[s:e]-A[m]|
= sum (A[m]-A[s:m]) + sum (A[m+1:e]-A[m])
= (m-s+1)*A[m]-sum(A[s:m]) + sum(A[m+1:e])-(e-m)*A[m]
and we can compute the sums in this expression using the precomputed array C[k].
EDIT
If the endpoint must always be n, then you can use this alternative algorithm:
1. Init s=0,e=n
2. while the score for A[s:e] is greater than B, increase s by 1
PYTHON CODE
Here is a python implementation of the algorithm:
def fast(A, B):
    # prefix sums: C[k] = sum(A[0:k+1])
    C = []
    t = 0
    for a in A:
        t += a
        C.append(t)

    def fastsum(s, e):          # sum(A[s:e+1]) via the prefix sums
        if s == 0:
            return C[e]
        else:
            return C[e] - C[s-1]

    def fastscore(s, e):        # sum |A[s:e+1] - A[m]| with m the midpoint
        m = (s + e) // 2
        return (m-s+1)*A[m] - fastsum(s, m) + fastsum(m+1, e) - (e-m)*A[m]

    s = 0
    e = 0
    best = -1
    while e < len(A):
        if fastscore(s, e) < B:
            best = max(best, e - s + 1)
            e += 1
        elif s == e:
            e += 1
        else:
            s += 1
    return best

print(fast([1, 2, 10, 10, 12, 14], 7))
# this returns 4, as the 4 elements 10,10,12,14 can be chosen
Try it this way for an O(N) approach (N = size of the array):

minpos = position of the value closest to B in the array (binary search, O(log N))
min = array[minpos]
if (min >= B) EXIT, no solution
// now, we just add the smallest elements from the left or the right
// until we are greater than B
leftindex = minpos - 1
rightindex = minpos + 1
while we have a valid leftindex or a valid rightindex:
    add = min(abs(array[leftindex (if valid)] - B), abs(array[rightindex (if valid)] - B))
    if (min + add >= B)
        break
    min += add
    decrease leftindex or increase rightindex according to which was used

min is now our sum, rightindex the requested i (leftindex the start)
(Some indices may not be exactly right; this is the idea, not an implementation.)
(It could happen that some indices are not correct, this is just the idea, not the implementation)
I would guess the average case for small B is O(log N). The linear case only happens if we can use the whole array.
I'm not sure, but perhaps this can be done in O(log(N)*k), with N the size of the array and k < N, too. We would have to use binary search in a clever way to find leftindex and rightindex in every iteration, such that the possible result range gets smaller each iteration. This could be done easily, but we have to take care of duplicates, because they could destroy our binary-search reductions.