Find continuous subarrays that have at least 1 pair adding up to target sum

I took this assessment that had this prompt, and I was able to pass 18/20 tests, but not the last 2 due to hitting the execution time limit. Unfortunately, the input values were not displayed for these tests.
Prompt:
// Given an array of integers **a**, find how many of its continuous subarrays of length **m** contain at least 1 pair of integers with a sum equal to **k**
Example:
const a = [1,2,3,4,5,6,7];
const m = 5, k = 5;
solution(a, m, k) will yield 2, because there are 2 subarrays in a that have at least 1 pair that adds up to k:
a[0]...a[4] - [1,2,3,4,5] - 2 + 3 = k ✓
a[1]...a[5] - [2,3,4,5,6] - 2 + 3 = k ✓
a[2]...a[6] - [3,4,5,6,7] - no two elements add up to k ✕
Here was my solution:
// strategy: check each subarray if it contains a two sum pair
// time complexity: O(n * m), where n is the size of a and m is the subarray length
// space complexity: O(m), where m is the subarray length
function solution(a, m, k) {
    let count = 0;
    for (let i = 0; i <= a.length - m; i++) {
        let set = new Set();
        for (let j = i; j < i + m; j++) {
            if (set.has(k - a[j])) {
                count++;
                break;
            } else {
                set.add(a[j]);
            }
        }
    }
    return count;
}
I thought of ways to optimize this algo, but failed to come up with any. Is there any way this can be optimized further for time complexity - perhaps for any edge cases?
Any feedback would be much appreciated!

Maintain a map from value to the highest position among the last m values (add/remove/query is O(1)), plus the highest position of the first value of a complementary pair.
For each array element, check whether its complementary element is in the map and update that highest position if necessary.
If at least m elements were processed and the highest position falls inside the current window, increase the counter.
O(n) overall. Python:
def solution(a, m, k):
    count = 0
    last_pos = {}  # value: last (1-based) position observed
    max_complement_pos = -1
    for head, num in enumerate(a, 1):  # advance head by one
        tail = head - m
        # The deletion below only serves to keep space complexity at O(m).
        # If that is not a concern (likely), it is safe to omit.
        if tail > 0 and last_pos.get(a[tail - 1]) == tail:  # pop the element leaving the window
            del last_pos[a[tail - 1]]
        max_complement_pos = max(max_complement_pos, last_pos.get(k - num, -1))
        count += head >= m and max_complement_pos > tail
        last_pos[num] = head  # add element at head
    return count
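
A quick sanity check against the example from the prompt:

print(solution([1, 2, 3, 4, 5, 6, 7], 5, 5))  # 2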

Create a counting hash: elt -> count.
When the window moves:
- add/increment the new element
- decrement the departing element
- check if (k - new_elt) is in your hash with a count >= 1. If it is, you've found a good subarray.
One caveat: that check only detects pairs that involve the newest element, while a window can inherit a pair from the previous window; you also need to remember where the most recent pair starts and test whether it is still inside the window (see the sketch below).
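
A minimal Python sketch of this idea (with the addition just mentioned: the count map alone only detects pairs involving the incoming element, so the sketch also records where the most recent pair started, as in the answer above; names are illustrative):

from collections import defaultdict

def count_windows(a, m, k):
    counts = defaultdict(int)  # multiset of the current window's values
    last_at = {}               # value -> most recent index seen
    pair_start = -1            # rightmost start index of any pair seen so far
    total = 0
    for j, x in enumerate(a):
        if j >= m:             # decrement the departing element
            old = a[j - m]
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
        if counts.get(k - x, 0) > 0:  # (k - x) is still inside the window
            pair_start = max(pair_start, last_at[k - x])
        counts[x] += 1
        last_at[x] = j
        if j >= m - 1 and pair_start >= j - m + 1:
            total += 1
    return total

print(count_windows([1, 2, 3, 4, 5, 6, 7], 5, 5))  # 2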

Related

Find scalar interval containing maximum elements from population A and zero elements from population B

Given two large sets A and B of scalar (floating point) values, what algorithm would you use to find the (scalar) range [x0,x1] containing zero elements from B and the maximum number of elements from A?
Is sorting complexity (O(n log n)) unavoidable?
1. Create a single list with all values, where each value is marked with two counts: one count that relates to set A, and another that relates to set B. Initially these counts are 1 and 0 when the value comes from set A, and 0 and 1 when it comes from set B. So entries in this list are tuples (value, countA, countB). This operation is O(n).
2. Sort these tuples. O(n log n)
3. Merge tuples with duplicate values into one tuple, accumulating the counts, so that each tuple tells us how many times its value occurs in set A and how many times in set B. O(n)
4. Traverse this list in sorted order and maintain the largest sum of countA over a run of adjacent tuples where countB is always 0, together with the minimum and maximum value of that run. O(n)
The sorting is the determining factor of the time complexity: O(n log n).
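
A sketch of these four steps in Python, with the tagging and merging folded into one dictionary pass (names are illustrative; with floating-point values you may want to merge within an epsilon rather than on exact equality):

def best_interval(A, B):
    counts = {}                      # value -> (countA, countB)
    for v in A:
        a, b = counts.get(v, (0, 0))
        counts[v] = (a + 1, b)
    for v in B:
        a, b = counts.get(v, (0, 0))
        counts[v] = (a, b + 1)
    merged = sorted((v, a, b) for v, (a, b) in counts.items())
    # scan runs of adjacent tuples with countB == 0, keeping the best countA sum
    best_sum, best_range = 0, None
    run_sum, run_start = 0, None
    for v, a, b in merged + [(None, 0, 1)]:  # sentinel flushes the last run
        if b == 0:
            if run_start is None:
                run_start = v
            run_sum += a
            run_end = v
        else:
            if run_start is not None and run_sum > best_sum:
                best_sum, best_range = run_sum, (run_start, run_end)
            run_sum, run_start = 0, None
    return best_range, best_sum

print(best_interval([1.0, 2.0, 2.0, 5.0], [3.0]))  # ((1.0, 2.0), 3)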
Sort both A and B in O(|A| log |A| + |B| log |B|). Then apply the following algorithm, which has complexity O(|A| + |B|):
i = j = k = 0
best_interval = (0, 1)
while i < len(B) - 1:
    lo = B[i]
    hi = B[i + 1]
    j = k  # We can skip ahead from last iteration.
    while j < len(A) and A[j] <= lo:
        j += 1
    k = j  # We can skip ahead from the above loop.
    while k < len(A) and A[k] < hi:
        k += 1
    if k - j > best_interval[1] - best_interval[0]:
        best_interval = (j, k)
    i += 1
x0 = A[best_interval[0]]
x1 = A[best_interval[1] - 1]
It may look quadratic at first inspection, but note that we never decrease j and k - it really is just a linear scan with three pointers. One caveat: as written, only the gaps between consecutive elements of B are considered, so ranges below min(B) or above max(B) are never candidates; padding B with -inf and +inf sentinels before the loop covers those two ends.

Length of Longest Subarray with all same elements

I have this problem:
You are given an array of integers A and an integer k.
You can decrement elements of A up to k times, with the goal of producing a consecutive subarray whose elements are all equal. Return the length of the longest possible consecutive subarray that you can produce in this way.
For example, if A is [1,7,3,4,6,5] and k is 6, then you can produce [1,7,3,4-1,6-1-1-1,5-1-1] = [1,7,3,3,3,3], so you will return 4.
What is the optimal solution?
The subarray must be made equal to its lowest member since the only allowed operation is reduction (and reducing the lowest member would add unnecessary cost). Given:
a1, a2, a3...an
the cost to reduce is:
sum(a1..an) - n * min(a1..an)
For example,
3, 4, 6, 5
sum = 18
min = 3
cost = 18 - 4 * 3 = 6
One way to reduce the complexity from O(n^2) to O(n log n) is: for each element as the rightmost (or leftmost) element of the candidate best subarray, binary search the longest length within cost. To do that, we only need the sum, which we can get from a prefix sum in O(1); the length (which we are searching on already); and a minimum range query, which is well-studied.
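
A hedged sketch of that O(n log n) recipe in Python: prefix sums give O(1) range sums, a sparse table gives O(1) range minima after O(n log n) preprocessing, and the binary search relies on the monotonicity demonstrated just below (function names are illustrative):

def longest_equalizable(A, k):
    n = len(A)
    prefix = [0] * (n + 1)  # prefix[i] = sum of A[:i]
    for i, x in enumerate(A):
        prefix[i + 1] = prefix[i] + x
    table = [A[:]]          # sparse table: table[p][i] = min(A[i : i + 2**p])
    p = 1
    while (1 << p) <= n:
        prev, half = table[-1], 1 << (p - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << p) + 1)])
        p += 1
    def range_min(i, j):    # min of A[i..j], inclusive
        span = (j - i + 1).bit_length() - 1
        return min(table[span][i], table[span][j - (1 << span) + 1])
    def cost(i, j):         # sum(A[i..j]) - length * min(A[i..j])
        return prefix[j + 1] - prefix[i] - (j - i + 1) * range_min(i, j)
    best = 0
    for j in range(n):      # binary search the leftmost feasible start
        lo, hi = 0, j       # cost(j, j) == 0, so hi is always feasible
        while lo < hi:
            mid = (lo + hi) // 2
            if cost(mid, j) <= k:
                hi = mid
            else:
                lo = mid + 1
        best = max(best, j - lo + 1)
    return best

print(longest_equalizable([1, 7, 3, 4, 6, 5], 6))  # 4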
In response to comments below this post, here is a demonstration that the sequence of costs as we extend a subarray from each element as rightmost increases monotonically and can therefore be queried with binary search.
JavaScript code:
function cost(A, i, j) {
    const n = j - i + 1;
    let sum = 0;
    let min = Infinity;
    for (let k = i; k <= j; k++) {
        sum += A[k];
        min = Math.min(min, A[k]);
    }
    return sum - n * min;
}

function f(A) {
    for (let j = 0; j < A.length; j++) {
        const rightmost = A[j];
        const sequence = [];
        for (let i = j; i >= 0; i--)
            sequence.push(cost(A, i, j));
        console.log(rightmost + ': ' + sequence);
    }
}

var A = [1,7,3,1,4,6,5,100,1,4,6,5,3];
f(A);
def cost(a, i, j):
    # cost to level the half-open window a[i:j] down to its minimum
    n = j - i
    s = 0
    m = a[i]
    for k in range(i, j):
        s += a[k]
        m = min(m, a[k])
    return s - n * m

def solve(n, k, a):
    m = 1
    for i in range(n):
        for j in range(i, n + 1):
            if cost(a, i, j) <= k:
                x = j - i
                if x > m:
                    m = x
    return m
This is my python3 solution as per your specifications.
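
A quick check of this brute force against the example at the top of the question (note that it re-scans every window, so it is cubic overall; the O(n log n) approach discussed above cuts this down):

print(solve(6, 6, [1, 7, 3, 4, 6, 5]))  # 4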

Counting bounded slice codility

I recently attended a programming test on Codility, and the question was to find the number of bounded slices in an array.
I'll just give you a brief explanation of the question.
A slice of an array is said to be a bounded slice if Max(SliceArray) - Min(SliceArray) <= K.
If the array [3,5,6,7,3] and K=2 are provided, the number of bounded slices is 9:
first slice (0,0): Min(0,0)=3, Max(0,0)=3, Max-Min <= K, result 0 <= 2, so it is a bounded slice
second slice (0,1): Min(0,1)=3, Max(0,1)=5, Max-Min <= K, result 2 <= 2, so it is a bounded slice
third slice (0,2): Min(0,2)=3, Max(0,2)=6, Max-Min > K, result 3 > 2, so it is not a bounded slice
In this way you can find that there are nine bounded slices:
(0, 0), (0, 1), (1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3), (4, 4).
Following is the solution i have provided
private int FindBoundSlice(int K, int[] A)
{
    int BoundSlice = 0;
    Stack<int> MinStack = new Stack<int>();
    Stack<int> MaxStack = new Stack<int>();

    for (int p = 0; p < A.Length; p++)
    {
        MinStack.Push(A[p]);
        MaxStack.Push(A[p]);

        for (int q = p; q < A.Length; q++)
        {
            if (IsPairBoundedSlice(K, A[p], A[q], MinStack, MaxStack))
                BoundSlice++;
            else
                break;
        }
    }

    return BoundSlice;
}

private bool IsPairBoundedSlice(int K, int P, int Q, Stack<int> Min, Stack<int> Max)
{
    if (Min.Peek() > P)
    {
        Min.Pop();
        Min.Push(P);
    }

    if (Min.Peek() > Q)
    {
        Min.Pop();
        Min.Push(Q);
    }

    if (Max.Peek() < P)
    {
        Max.Pop();
        Max.Push(P);
    }

    if (Max.Peek() < Q)
    {
        Max.Pop();
        Max.Push(Q);
    }

    if (Max.Peek() - Min.Peek() <= K)
        return true;
    else
        return false;
}
But as per the Codility review, the above solution runs in O(N^2); can anybody help me find a solution which runs in O(N)?
Maximum Time Complexity allowed O(N).
Maximum Space Complexity allowed O(N).
Disclaimer
It is possible, and I demonstrate it here, to write an algorithm that solves the problem you described in linear time in the worst case, visiting each element of the input sequence at most twice.
This answer is an attempt to deduce and describe the only such algorithm I could find, and then gives a quick tour through an implementation written in Clojure. I will probably write a Java implementation as well and update this answer, but as of now that task is left as an exercise to the reader.
EDIT: I have now added a working Java implementation. Please scroll down to the end.
EDIT: Notice that PeterDeRivaz provided a sequence ([0 1 2 3 4], k=2) making the algorithm visit certain elements three times and probably falsifying it. I will update the answer at a later time regarding that issue.
Unless I have overlooked something trivial, I can hardly imagine significant further simplification. Feedback is highly welcome.
(I found your question here when googling for Codility-like exercises as preparation for a job test there myself. I set myself aside half an hour to solve it and didn't come up with a solution, so I was unhappy and spent some dedicated hammock time - now that I have taken the test I must say I found the presented exercises significantly less difficult than this problem.)
Observations
For any valid bounded slice of size n, we can say that it is divisible into the triangular number of n (that is, n(n+1)/2) bounded sub-slices, with their individual bounds lying within the slice's bounds (including the slice itself).
Ex. 1: [3 1 2] is a bounded slice for k=2, has a size of 3 and thus can be divided into (3*4)/2=6 sub-slices:
[3 1 2] ;; slice 1
[3 1] [1 2] ;; slices 2-3
[3] [1] [2] ;; slices 4-6
Naturally, all those slices are bounded slices for k.
When you have two overlapping slices that are both bounded slices for k but differ in their bounds, the amount of possible bounded sub-slices in the array can be calculated as the sum of the triangular numbers of those slices minus the triangular number of the count of elements they share.
Ex. 2: The bounded slices [4 3 1] and [3 1 2] for k=2 differ in bounds and overlap in the array [4 3 1 2]. They share the bounded slice [3 1] (notice that overlapping bounded slices always share a bounded slice, otherwise they could not overlap). For both slices the triangular number is 6, the triangular number of the shared slice is (2*3)/2=3. Thus the array can be divided into 6+6-3=9 slices:
[4 3 1] [3 1 2] ;; 1-2 the overlapping slices
[4 3] [3 1] [1 2] ;; 3-5 two slices and the overlapping slice
[4] [3] [1] [2] ;; 6-9 single-element slices
As observable, the triangle of the overlapping bounded slice is part of both triangles' element counts, which is why it must be subtracted from the sum of the two triangles, as it would otherwise be counted twice. Again, all counted slices are bounded slices for k=2.
Approach
The approach is to find the largest possible bounded slices within the input sequence until all elements have been visited, then to sum them up using the technique described above.
A slice qualifies as one of the largest possible bounded slices (in the following text often referred to as one largest possible bounded slice, which shall not mean the largest one, only one of them) if the following conditions are fulfilled:
It is bounded
It may share elements with two other slices to its left and right
It cannot grow to the left or to the right without becoming unbounded - meaning: if it is possible, it has to contain so many elements that its maximum - minimum = k
By implication, a bounded slice does not qualify as one of the largest possible bounded slices if there is a bounded slice with more elements that entirely encloses this slice
Our algorithm must be capable of starting at any element in the array and determining one largest possible bounded slice that contains that element and is the only one to contain it. It is then guaranteed that the next slice, constructed from a starting point outside of the previous one, will not share the starting element of the previous slice: otherwise the two together would form one largest possible bounded slice (which, by definition, is impossible). Once that algorithm has been found, it can be applied sequentially from the beginning, building such largest possible slices until no more elements are left. This guarantees that each element is traversed at most twice in the worst case.
Algorithm
Start at the first element and find the largest possible bounded slice that includes said first element. Add the triangular number of its size to the counter.
Continue exactly one element after the found slice and repeat: subtract the triangular number of the count of elements shared with the previous slice (found searching backwards), and add the triangular number of its total size (found searching forwards and backwards). Repeat until no more elements can be found after a found slice, then return the result.
Ex. 3: For the input sequence [4 3 1 2 0] with k=2 find the count of bounded slices.
Start at the first element, find the largest possible bounded slice:
[4 3], size=2, overlap=0, result=3
Continue after that slice, find the largest possible bounded slice:
[3 1 2], size=3, overlap=1, result=3-1+6=8
...
[1 2 0], size=3, overlap=2, result=8-3+6=11
result=11
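
A quick brute-force cross-check of these numbers (quadratic on purpose; it stops extending a slice as soon as the bound is exceeded, since max - min can only grow):

def bounded_slices_brute(a, k):
    total = 0
    for i in range(len(a)):
        lo = hi = a[i]
        for j in range(i, len(a)):
            lo, hi = min(lo, a[j]), max(hi, a[j])
            if hi - lo <= k:
                total += 1
            else:
                break
    return total

print(bounded_slices_brute([4, 3, 1, 2, 0], 2))  # 11, as computed above
print(bounded_slices_brute([3, 5, 6, 7, 3], 2))  # 9, the example from the question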
Process behavior
In the worst case the process grows linearly in time and space. As shown above, elements are traversed at most twice, and per search for a largest possible bounded slice only some locals need to be stored.
However, the process becomes dramatically faster when the array contains fewer largest possible bounded slices. For example, the array [4 4 4 4] with k >= 0 has only one largest possible bounded slice (the array itself). The array will be traversed once, and the triangular number of the count of its elements is returned as the correct result. Notice how this is complementary to solutions of worst-case growth O((n * (n+1)) / 2): while they reach their worst case with only one largest possible bounded slice, for this algorithm such input is the best case (one visit per element in one pass from start to end).
Implementation
The most difficult part of the implementation is to find a largest bounded slice from one element scanning in two directions. When we search in one direction, we track the minimum and maximum bounds of our search and see how they compare to k. Once an element has been found that stretches the bounds so that maximum-minimum <= k does not hold anymore, we are done in that direction. Then we search into the other direction but use the last valid bounds of the backwards scan as starting bounds.
Ex. 4: We start in the array [4 3 1 2 0] at the third element (1) after we have successfully found the largest bounded slice [4 3]. At this point we only know that our starting value 1 is the minimum, the maximum (of the searched largest bounded slice), or between those two. We scan backwards (exclusive) and stop after the second element (as 4 - 1 > k=2). The last valid bounds were 1 and 3. When we now scan forwards, we use the same algorithm but with 1 and 3 as starting bounds. Notice that even though in this example our starting element is one of the bounds, that is not always the case: consider the same scenario with a 2 instead of the 3: neither the 2 nor the 1 could be determined to be a bound, as we could find a 0 but also a 3 while scanning forwards - only then could it be decided which of 2 or 3 is the lower or upper bound.
To solve that problem here is a special counting algorithm. Don't worry if you don't understand Clojure yet, it does just what it says.
(defn scan-while-around
  "Count numbers in `coll` until a number doesn't pass an (inclusive)
   interval filter where said interval is guaranteed to contain
   `around` and grows with each number to a maximum size of `size`.
   Return count and the lower and upper bounds (inclusive) that were not
   passed as [count lower upper]."
  ([around size coll]
     (scan-while-around around around size coll))
  ([lower upper size coll]
     (letfn [(step [[count lower upper :as result] elem]
               (let [lower (min lower elem)
                     upper (max upper elem)]
                 (if (<= (- upper lower) size)
                   [(inc count) lower upper]
                   (reduced result))))]
       (reduce step [0 lower upper] coll))))
Using this function we can search backwards, from before the starting element passing it our starting element as around and using k as the size.
Then we start a forward scan from the starting element with the same function, by passing it the previously returned bounds lower and upper.
We add their returned counts to the total count of the found largest possible slice, use the count of the backwards scan as the length of the overlap, and subtract its triangular number.
Notice that in any case the forward scan is guaranteed to return a count of at least one. This is important for the algorithm for two reasons:
We use the resulting count of the forward scan to determine the starting point of the next search (and would loop infinitely with it being 0)
The algorithm would not be correct otherwise, since for any starting element the smallest possible largest possible bounded slice always exists: an array of size 1 containing the starting element.
Assuming that triangular is a function returning the triangular number, here is the final algorithm:
(defn bounded-slice-linear
  "Linear implementation"
  [s k]
  (loop [start-index 0
         acc 0]
    (if (< start-index (count s))
      (let [start-elem (nth s start-index)
            [backw lower upper] (scan-while-around start-elem
                                                   k
                                                   (rseq (subvec s 0 start-index)))
            [forw _ _] (scan-while-around lower upper k
                                          (subvec s start-index))]
        (recur (+ start-index forw)
               (-> acc
                   (+ (triangular (+ forw backw)))
                   (- (triangular backw)))))
      acc)))
(Notice that the creation of subvectors and their reverse sequences happens in constant time and that the resulting vectors share structure with the input vector so no "rest-size" depending allocation is happening (although it may look like it). This is one of the beautiful aspects of Clojure, that you can avoid tons of index-fiddling and usually work with elements directly.)
Here is a triangular implementation for comparison:
(defn bounded-slice-triangular
  "O(n*(n+1)/2) implementation for testing."
  [s k]
  (reduce (fn [c [elem :as elems]]
            (+ c (first (scan-while-around elem k elems))))
          0
          (take-while seq
                      (iterate #(subvec % 1) s))))
Both functions only accept vectors as input.
I have extensively tested their behavior for correctness using various strategies. Please try to prove them wrong anyway. Here is a link to a full file to hack on: https://www.refheap.com/32229
Here is the algorithm implemented in Java (not tested as extensively, but it seems to work; Java is not my first language and I'd be happy about feedback to learn):
public class BoundedSlices {
    private static int triangular(int i) {
        return ((i * (i + 1)) / 2);
    }

    public static int solve(int[] a, int k) {
        int i = 0;
        int result = 0;

        while (i < a.length) {
            int lower = a[i];
            int upper = a[i];
            int countBackw = 0;
            int countForw = 0;

            for (int j = (i - 1); j >= 0; --j) {
                if (a[j] < lower) {
                    if (upper - a[j] > k)
                        break;
                    else
                        lower = a[j];
                }
                else if (a[j] > upper) {
                    if (a[j] - lower > k)
                        break;
                    else
                        upper = a[j];
                }
                countBackw++;
            }

            for (int j = i; j < a.length; j++) {
                if (a[j] < lower) {
                    if (upper - a[j] > k)
                        break;
                    else
                        lower = a[j];
                }
                else if (a[j] > upper) {
                    if (a[j] - lower > k)
                        break;
                    else
                        upper = a[j];
                }
                countForw++;
            }

            result -= triangular(countBackw);
            result += triangular(countForw + countBackw);
            i += countForw;
        }
        return result;
    }
}
Codility has now released their golden solution with O(N) time and space:
https://codility.com/media/train/solution-count-bounded-slices.pdf
If you are still confused after reading the PDF, like me, here is a
very nice explanation.
The solution from the PDF:
def boundedSlicesGolden(K, A):
    N = len(A)

    maxQ = [0] * (N + 1)
    posmaxQ = [0] * (N + 1)
    minQ = [0] * (N + 1)
    posminQ = [0] * (N + 1)

    firstMax, lastMax = 0, -1
    firstMin, lastMin = 0, -1
    j, result = 0, 0

    for i in xrange(N):
        while (j < N):
            # added new maximum element
            while (lastMax >= firstMax and maxQ[lastMax] <= A[j]):
                lastMax -= 1
            lastMax += 1
            maxQ[lastMax] = A[j]
            posmaxQ[lastMax] = j

            # added new minimum element
            while (lastMin >= firstMin and minQ[lastMin] >= A[j]):
                lastMin -= 1
            lastMin += 1
            minQ[lastMin] = A[j]
            posminQ[lastMin] = j

            if (maxQ[firstMax] - minQ[firstMin] <= K):
                j += 1
            else:
                break

        result += (j - i)
        if result >= maxINT:
            return maxINT

        if posminQ[firstMin] == i:
            firstMin += 1
        if posmaxQ[firstMax] == i:
            firstMax += 1

    return result
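
A note if you try to run this verbatim: it is Python 2 code (xrange), and maxINT is not defined in the excerpt; the Codility task caps the returned count, so supply the cap yourself, e.g. (the exact constant is an assumption here, taken from the usual task statement):

maxINT = 1000000000
print boundedSlicesGolden(2, [3, 5, 6, 7, 3])  # 9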
HINTS
Others have explained the basic algorithm which is to keep 2 pointers and advance the start or the end depending on the current difference between maximum and minimum.
It is easy to update the maximum and minimum when moving the end.
However, the main challenge of this problem is how to update when moving the start. Most heap or balanced tree structures will cost O(logn) to update, and will result in an overall O(nlogn) complexity which is too high.
To do this in time O(n):
1. Advance the end until you exceed the allowed threshold.
2. Then loop backwards from this critical position, storing in arrays a cumulative minimum and maximum for every location between the current end and the current start.
3. You can now advance the start pointer and immediately look up the updated min/max values from the arrays.
4. Carry on using these arrays to update start until start reaches the critical position. At this point return to step 1 and generate a new set of lookup values.
Overall this procedure will work backwards over every element exactly once, and so the total complexity is O(n).
EXAMPLE
For the sequence with K of 4:
4,1,2,3,4,5,6,10,12
Step 1 advances the end until we exceed the bound
start,4,1,2,3,4,5,end,6,10,12
Step 2 works backwards from end to start computing arrays MAX and MIN, where
MAX[i] is the maximum of all elements from i to end, and MIN[i] is the minimum:
Data = start,4,1,2,3,4,5,end,6,10,12
MAX  = start,5,5,5,5,5,5,critical point=end
MIN  = start,1,1,2,3,4,5,critical point=end
Step 3 can now advance start and immediately lookup the smallest values of max and min in the range start to critical point.
These can be combined with the max/min in the range critical point to end to find the overall max/min for the range start to end.
PYTHON CODE
def count_bounded_slices(A, k):
    if len(A) == 0:
        return 0
    t = 0
    inf = max(abs(a) for a in A)
    left = 0
    right = 0
    left_lows = [inf] * len(A)
    left_highs = [-inf] * len(A)
    critical = 0
    right_low = inf
    right_high = -inf
    # Loop invariant:
    # t counts the number of bounded slices A[a:b] with a < left
    # left_lows[i] is defined for values in range(left, critical)
    #   and contains the min of A[left:critical]
    # left_highs[i] contains the max of A[left:critical]
    # right_low is the minimum of A[critical:right]
    # right_high is the maximum of A[critical:right]
    while left < len(A):
        # Extend right as far as possible
        while right < len(A) and max(left_highs[left], max(right_high, A[right])) - min(left_lows[left], min(right_low, A[right])) <= k:
            right_low = min(right_low, A[right])
            right_high = max(right_high, A[right])
            right += 1
        # Now we know that any slice starting at left and ending before right will satisfy the constraints
        t += right - left
        # If we are at the critical position we need to extend our left arrays
        if left == critical:
            critical = right
            left_low = inf
            left_high = -inf
            for x in range(critical - 1, left, -1):
                left_low = min(left_low, A[x])
                left_high = max(left_high, A[x])
                left_lows[x] = left_low
                left_highs[x] = left_high
            right_low = inf
            right_high = -inf
        left += 1
    return t

A = [3,5,6,7,3]
print(count_bounded_slices(A, 2))
Here is my attempt at solving this problem:
- you start with p and q from position 0, min = max = A[0];
- loop until p = q = N-1
- as long as max-min <= k, advance q and increment the number of bounded slices
- if max-min > k, advance p
- you need to keep track of 2x min/max values, because when you advance p you might remove one or both of the current min/max values
- each time you advance p or q, update min/max
I can write the code if you want, but I think the idea is explicit enough...
Hope it helps.
Finally, here is code that works according to the idea mentioned below. It outputs 9.
(The code is in C++. You can change it for Java)
#include <iostream>
using namespace std;

int main()
{
    int A[] = {3,5,6,7,3};
    int K = 2;
    int i = 0;
    int j = 0;
    int minValue = A[0];
    int maxValue = A[0];
    int minIndex = 0;
    int maxIndex = 0;
    int length = sizeof(A)/sizeof(int);
    int count = 0;
    bool stop = false;
    int prevJ = 0;

    while ( (i < length || j < length) && !stop ) {
        if ( maxValue - minValue <= K ) {
            if ( j < length-1 ) {
                j++;
                if ( A[j] > maxValue ) {
                    maxValue = A[j];
                    maxIndex = j;
                }
                if ( A[j] < minValue ) {
                    minValue = A[j];
                    minIndex = j;
                }
            } else {
                count += j - i + 1;
                stop = true;
            }
        } else {
            if ( j > 0 ) {
                int range = j - i;
                int count1 = range * (range + 1) / 2; // Choose 2 from range with repetition.
                int rangeRep = prevJ - i;             // We have to subtract already counted ones.
                int count2 = rangeRep * (rangeRep + 1) / 2;
                count += count1 - count2;
                prevJ = j;
            }
            if ( A[j] == minValue ) {
                // first reach the first maxima
                while ( A[i] - minValue <= K )
                    i++;
                // then come down to correct level.
                while ( A[i] - minValue > K )
                    i++;
                maxValue = A[i];
            } else { //if ( A[j] == maxValue ) {
                while ( maxValue - A[i] <= K )
                    i++;
                while ( maxValue - A[i] > K )
                    i++;
                minValue = A[i];
            }
        }
    }
    cout << count << endl;
    return 0;
}
Algorithm (minor tweaking done in code):
Keep two pointers i and j and maintain two values minValue and maxValue.
1. Initialize i = 0, j = 0, and minValue = maxValue = A[0];
2. If maxValue - minValue <= K,
- Increment count.
- Increment j.
- if the new A[j] > maxValue, maxValue = A[j].
- if the new A[j] < minValue, minValue = A[j].
3. If maxValue - minValue > K, this can only happen if
- the new A[j] is either maxValue or minValue.
- Hence keep incrementing i until abs(A[j] - A[i]) <= K.
- Then update minValue and maxValue and proceed accordingly.
4. Go to step 2 if ( i < length-1 || j < length-1 )
I have provided an answer to the same question in a different SO question.
(1) For an input A[n], you will always have n single-element slices, so add n first.
For example, for {3,5,4,7,6,3} you will always have (0,0), (1,1), (2,2), (3,3), (4,4), (5,5).
(2) Then find P and Q based on min/max comparison.
(3) Apply the arithmetic series formula to the distance X = Q - P: it would be X(X+1)/2, but we have already counted the n single-element slices, so the formula becomes X(X+1)/2 - X, which is X(X-1)/2 after basic arithmetic.
For example, in the above example, if P is 0 (3) and Q is 3 (7), then Q - P is 3. Applying the formula gives 3(3-1)/2 = 3. Now add the 6 (length) + 3. Then take care of the Q-min or Q-max records.
Then check the min and max indexes. In this case min is at 0 and max is at 3 (obviously one of them will match the current index, whichever is used to loop). Here we took care of (0,1), (0,2), (1,2) but not of (1,3), (2,3). Rather than starting the whole process from index 1, save this number (positions 2,3 = 2), then start the same process from the current index (taking min and max as A[currentIndex], as we did at the start). Finally multiply with the preserved number: in our case 2 * 2 (A[7], A[6]).
It runs in O(N) time with O(N) space.
I came up with a solution in Scala:
package test

import scala.collection.mutable.Queue

object BoundedSlice {
  def apply(k: Int, a: Array[Int]): Int = {
    var c = 0
    var q: Queue[Int] = Queue()
    a.map(i => {
      if (!q.isEmpty && Math.abs(i - q.last) > k)
        q.clear
      else
        q = q.dropWhile(j => (Math.abs(i - j) > k)).toQueue
      q += i
      c += q.length
    })
    c
  }

  def main(args: Array[String]): Unit = {
    val a = Array[Int](3, 5, 6, 7, 3)
    println(BoundedSlice(2, a))
  }
}

Checking if 2 numbers of an array add up to I

I saw an interview question as follows:
Given an unsorted array of integers A and an integer I, find out if any two members of A add up to I.
Any clues?
Time complexity should be low.
Insert the elements into a hashtable.
While inserting x, check if I-x already exists. O(n) expected time.
Otherwise, sort the array ascending (from index 0 to n-1). Keep two pointers, one at the max end and one at the min end (call them M and m respectively).
If a[M] + a[m] > I then M--
If a[M] + a[m] < I then m++
If a[M] + a[m] == I you have found it
If m >= M, no such numbers exist.
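
A minimal sketch of this two-pointer variant in Python (name is illustrative):

def has_pair_with_sum(a, target):
    a = sorted(a)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        s = a[lo] + a[hi]
        if s == target:
            return a[lo], a[hi]  # found a pair
        if s > target:
            hi -= 1
        else:
            lo += 1
    return None

print(has_pair_with_sum([11, 3, 2, 9, 12, 15], 14))  # (2, 12)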
If you know the range the integers are within, you can use a counting-sort-like solution where you scan over the array and count up an array. E.g. you have the integers
input = [0,1,5,2,6,4,2]
And you create an array like this:
count = int[7]
which (in Java, C#, etc.) is suited for counting integers between 0 and 6.
foreach integer x in input
    count[x] = count[x] + 1
This will give you the array [1,1,2,0,1,1,1]. Now you can scan over this array (half of it) and check whether there are integers which add up to i, like
for j = 0 to count.length - 1
    if count[j] != 0 and count[i - j] != 0 then // check for array out-of-bounds here, and require count[j] >= 2 when j == i - j
        WUHUU! the integers j and i - j add up
Overall this algorithm gives you O(n + k) where n is from the scan over the input of length n and k is from the scan over the count array of length k (integers between 0 and k - 1). This means that if n > k you have a guaranteed O(n) solution.
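
A runnable version of this counting idea (names are illustrative; note the extra care needed when j == i - j, which requires a count of at least 2):

def has_pair_counting(a, i, max_value):
    # assumes every element x satisfies 0 <= x <= max_value
    count = [0] * (max_value + 1)
    for x in a:
        count[x] += 1
    for j in range(min(i // 2, max_value) + 1):
        other = i - j
        if other < 0 or other > max_value:
            continue
        if j == other:
            if count[j] >= 2:
                return True
        elif count[j] and count[other]:
            return True
    return False

print(has_pair_counting([0, 1, 5, 2, 6, 4, 2], 4, 6))  # True (0 + 4, and also 2 + 2)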
For example, loop over the array adding each needed complement to a set or hash; if the current number is already in the set, you have found a pair.
>>> A = [11,3,2,9,12,15]
>>> I = 14
>>> S = set()
>>> for x in A:
...     if x in S:
...         print I-x, x
...     S.add(I-x)
...
11 3
2 12
>>>
Sort the array.
For each element X in A, perform a binary search for I-X. If I-X is in A, we have a solution.
This is O(n log n).
If A contains integers in a given (small enough) range, we can use a trick to make it O(n):
we have an array V. For each element X in A, we increment V[X].
When we increment V[X] we also check if V[I-X] > 0. If it is, we have a solution.
public static boolean findSum2(int[] a, int sum) {
    if (a.length == 0) {
        return false;
    }
    Arrays.sort(a);
    int i = 0;
    int j = a.length - 1;
    while (i < j) {
        int tmp = a[i] + a[j];
        if (tmp == sum) {
            System.out.println(a[i] + "+" + a[j] + "=" + sum);
            return true;
        } else if (tmp > sum) {
            j--;
        } else {
            i++;
        }
    }
    return false;
}
O(n log n) time (dominated by the sort) and O(1) extra space
If the array is sorted there is a solution in O(n) time complexity.
Suppose our array is
array = {0, 1, 3, 5, 8, 10, 14}
And our x1 + x2 = k = 13, so the output should be 5, 8.
Take two pointers, one at the start of the array, one at the end of the array
Add the elements at ptr1 and ptr2:
array[ptr1] + array[ptr2]
If the sum > k then decrement ptr2, else increment ptr1
Repeat steps 2 and 3 until the pointers meet
Same thing explained in detail here. Seems like an Amazon interview Question
http://inder-gnu.blogspot.com/2007/10/find-two-nos-in-array-whose-sum-x.html
For O(n log n): sort the array and, for each element A[j] (0 <= j < len(A)), compute I - A[j] and do a binary search for this value in the sorted array.
A hashmap (number -> index) should work in O(n).
for each ele in the array
    if (sum - ele) is hashed and its hashed value is not equal to the index of ele
        print ele, sum-ele
    end-if
    hash ele as key and index as value
end-for
PERL implementation to detect if a sorted array contains two integers that sum up to Number:
my @a = (11,3,2,9,12,15);
my @b = sort {$a <=> $b} @a;
my %hash;
my $sum = 14;
my $index = 0;
foreach my $ele (@b) {
    my $sum_minus_ele = $sum - $ele;
    print "Trace: $ele :: $index :: $sum_minus_ele\n";
    if (exists($hash{$sum_minus_ele}) && $hash{$sum_minus_ele} != $index) {
        print "\tElement: ".$ele." :: Sum-ele: ".$sum_minus_ele."\n";
    }
    $hash{$ele} = $index;
    $index++;
}
This might be possible in the following way: before putting the elements into the hashmap, you can check if the element is greater than the required sum. If it is, you can simply skip that element; else you can proceed with putting it into the hashmap. (Note this assumes all elements are non-negative.) It's a slight improvement on your algorithm, although the overall time complexity remains the same.
This can be solved using the UNION-FIND algorithm, which can check in constant time whether an element is in a set.
So the algorithm would be (shown for the sum-0 case; use find(I - x) for a general target I):
foundsum0 = false;
foreach (x : array) {
    if find(-x): foundsum0 = true;
    else union(x);
}
FIND and UNION are (effectively) constant, O(1).
Here is an O(n) solution in Java using O(n) extra space. It uses a HashSet:
http://www.dsalgo.com/UnsortedTwoSumToK.php
Here is a solution which takes into account duplicate entries. It is written in JavaScript and assumes the array is sorted. The solution runs in O(n) time and does not use any extra memory aside from variables. Choose a sorting algorithm of choice (radix, O(kn)!) and then run the array through this baby.
var count_pairs = function(_arr, x) {
    if (!x) x = 0;
    var pairs = 0;
    var i = 0;
    var k = _arr.length - 1;
    if ((k + 1) < 2) return pairs;
    var halfX = x / 2;
    while (i < k) {
        var curK = _arr[k];
        var curI = _arr[i];
        var pairsThisLoop = 0;
        if (curK + curI == x) {
            // if midpoint and equal find combinations
            if (curK == curI) {
                var comb = 1;
                while (--k >= i) pairs += (comb++);
                break;
            }
            // count pair and k duplicates
            pairsThisLoop++;
            while (_arr[--k] == curK) pairsThisLoop++;
            // add k side pairs to running total for every i side pair found
            pairs += pairsThisLoop;
            while (_arr[++i] == curI) pairs += pairsThisLoop;
        } else {
            // if we are at a mid point
            if (curK == curI) break;
            var distK = Math.abs(halfX - curK);
            var distI = Math.abs(halfX - curI);
            if (distI > distK) while (_arr[++i] == curI);
            else while (_arr[--k] == curK);
        }
    }
    return pairs;
}
I solved this during an interview for a large corporation. They took it but not me.
So here it is for everyone.
Start at both sides of the array and slowly work your way inwards, making sure to count duplicates if they exist.
It only counts pairs but can be reworked to
- find the pairs
- find pairs < x
- find pairs > x
Enjoy, and don't forget to bump if it's the best solution!
Split the array into two groups <= I/2 and > I/2. Then split those into <= I/4, > I/4 and <= 3I/4, > 3I/4.
Repeat for log(I) steps and check the pairs joining from the outside, e.g. <= I/8 and > 7I/8: if both contain at least one element, they may add to I.
This will take n*log(I) + n/2 steps.
An implementation in Python:
def func(list, k):
    temp = {}  # temporary dictionary
    for i in range(len(list)):
        if list[i] in temp:  # if temp already has the key, just increment its value
            temp[list[i]] += 1
        else:  # else initialize the key in temp with count 0
            temp[list[i]] = 0
        # the pair exists if the complementary value is in the dictionary and it is
        # either not k/2, or the count for that number is at least 1 (a duplicate)
        if k - list[i] in temp and ((k / 2 != list[i]) or temp[list[i]] >= 1):
            return True
    return False
Input:
list is a list of numbers (A in the question above)...
k is the sum (I in the question above)....
The function outputs True if there exists a pair in the list whose sum is equal to k, and False otherwise.
I am using a dictionary whose key is the element in the array(list) and value is the count of that element(number of times that element is present in that list).
Average running time complexity is O(n).
This implementation also takes care of two important edge cases:
repeated numbers in the list and
not adding the same number twice.

Minimum number of swaps needed to change Array 1 to Array 2?

For example, input is
Array 1 = [2, 3, 4, 5]
Array 2 = [3, 2, 5, 4]
Minimum number of swaps needed are 2.
The swaps need not be with adjacent cells, any two elements can be swapped.
https://www.spoj.com/problems/YODANESS/
As @IVlad noted in the comment to your question, the Yodaness problem asks you to count the number of inversions and not the minimal number of swaps.
For example:
L1 = [2,3,4,5]
L2 = [2,5,4,3]
The minimal number of swaps is one (swap 5 and 3 in L2 to get L1), but the number of inversions is three: the (5 4), (5 3), and (4 3) pairs are in the wrong order.
The simplest way to count number of inversions follows from the definition:
A pair of elements (pi,pj) is called an inversion in a permutation p if i < j and pi > pj.
In Python:
def count_inversions_brute_force(permutation):
    """Count number of inversions in the permutation in O(N**2)."""
    return sum(pi > permutation[j]
               for i, pi in enumerate(permutation)
               for j in xrange(i+1, len(permutation)))
You could count inversions in O(N*log(N)) using a divide & conquer strategy (similar to how a merge sort algorithm works). Here's pseudo-code from Counting Inversions translated to Python code:
def merge_and_count(a, b):
    assert a == sorted(a) and b == sorted(b)
    c = []
    count = 0
    i, j = 0, 0
    while i < len(a) and j < len(b):
        c.append(min(b[j], a[i]))
        if b[j] < a[i]:
            count += len(a) - i  # number of elements remaining in `a`
            j += 1
        else:
            i += 1
    # now we have reached the end of one of the lists
    c += a[i:] + b[j:]  # append the remainder of that list to c
    return count, c

def sort_and_count(L):
    if len(L) == 1: return 0, L
    n = len(L) // 2
    a, b = L[:n], L[n:]
    ra, a = sort_and_count(a)
    rb, b = sort_and_count(b)
    r, L = merge_and_count(a, b)
    return ra + rb + r, L
Example:
>>> sort_and_count([5, 4, 2, 3])
(5, [2, 3, 4, 5])
Here's solution in Python for the example from the problem:
yoda_words = "in the force strong you are".split()
normal_words = "you are strong in the force".split()
perm = get_permutation(normal_words, yoda_words)
print "number of inversions:", sort_and_count(perm)[0]
print "number of swaps:", number_of_swaps(perm)
Output:
number of inversions: 11
number of swaps: 5
Definitions of get_permutation() and number_of_swaps() are:
def get_permutation(L1, L2):
    """Find permutation that converts L1 into L2.

    See http://en.wikipedia.org/wiki/Cycle_representation#Notation
    """
    if sorted(L1) != sorted(L2):
        raise ValueError("L2 must be permutation of L1 (%s, %s)" % (L1, L2))

    permutation = map(dict((v, i) for i, v in enumerate(L1)).get, L2)
    assert [L1[p] for p in permutation] == L2
    return permutation

def number_of_swaps(permutation):
    """Find number of swaps required to convert the permutation into
    identity one.
    """
    # decompose the permutation into disjoint cycles
    nswaps = 0
    seen = set()
    for i in xrange(len(permutation)):
        if i not in seen:
            j = i  # begin new cycle that starts with `i`
            while permutation[j] != i:  # (i σ(i) σ(σ(i)) ...)
                j = permutation[j]
                seen.add(j)
                nswaps += 1
    return nswaps
As implied by Sebastian's solution, the algorithm you are looking for can be based on inspecting the permutation's cycles.
We should consider array #2 to be a permutation transformation on array #1. In your example, the permutation can be represented as P = [2,1,4,3].
Every permutation can be expressed as a set of disjoint cycles, representing cyclic position changes of the items. The permutation P for example has 2 cycles: (2,1) and (4,3). Therefore two swaps are enough. In the general case, you should simply subtract the number of cycles from the permutation length, and you get the minimum number of required swaps. This follows from the observation that in order to "fix" a cycle of N elements, N-1 swaps are enough.
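
A minimal sketch of this cycle-counting rule (assuming the two arrays hold the same distinct elements; names are illustrative):

def min_swaps(arr1, arr2):
    pos = {v: i for i, v in enumerate(arr1)}  # where each value sits in array 1
    perm = [pos[v] for v in arr2]             # array 2 as a permutation of array 1
    seen = [False] * len(perm)
    cycles = 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1                       # each unvisited index opens a new cycle
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return len(perm) - cycles                 # a cycle of length L needs L-1 swaps

print(min_swaps([2, 3, 4, 5], [3, 2, 5, 4]))  # 2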
This problem has a clean, greedy, trivial solution:
1. Find any swap operation which gets both swapped elements in Array1 closer to their destination in Array2. Perform the swap operation on Array1 if one exists.
2. Repeat step 1 until no more such swap operations exist.
3. Find any swap operation which gets one swapped element in Array1 closer to its destination in Array2. If such an operation exists, perform it on Array1.
4. Go back to step 1 until Array1 == Array2.
The correctness of the algorithm can be proved by defining a potential for the problem as the sum of distances of all elements in Array1 from their destination in Array2.
This can be easily converted to another type of problem, which can be solved more efficiently. All that is needed is to convert the arrays into permutations, i.e. change the values to their ids. So your arrays:
L1 = [2,3,4,5]
L2 = [2,5,4,3]
would become
P1 = [0,1,2,3]
P2 = [0,3,2,1]
with the assignment 2->0, 3->1, 4->2, 5->3. This can only be done if there are no repeated items though. If there are, then this becomes harder to solve.
Converting one permutation into another can be reduced to a similar problem (Number of swaps in a permutation) by inverting the target permutation in O(n), composing the permutations in O(n), and then finding the number of swaps from there to the identity permutation in O(m).
Given:
int P1[] = {0, 1, 2, 3}; // 2345
int P2[] = {0, 3, 2, 1}; // 2543
// we can follow a simple algebraic modification
// (see http://en.wikipedia.org/wiki/Permutation#Product_and_inverse):
//     P1 * P = P2            | premultiply P1^-1 *
//     P1^-1 * P1 * P = P1^-1 * P2
//     I * P = P1^-1 * P2
//     P = P1^-1 * P2
// where P is a permutation that makes P1 into P2.
// also, the number of steps from P to identity equals
// the number of steps from P1 to P2.

int P1_inv[4];
for(int i = 0; i < 4; ++ i)
    P1_inv[P1[i]] = i;
// invert the first permutation in O(n)

int P[4];
for(int i = 0; i < 4; ++ i)
    P[i] = P2[P1_inv[i]];
// chain the permutations in O(n)

int num_steps = NumSteps(P, 4); // will return 2
// now we just need to count the steps in O(num_steps)
To count the steps, a simple algorithm can be devised, such as:
int NumSteps(int *P, int n)
{
    int count = 0;
    for(int i = 0; i < n; ++ i) {
        for(; P[i] != i; ++ count) // could be permuted multiple times
            swap(P[P[i]], P[i]);   // look where the number at hand should be
    }
    // count is the number of swaps performed
    return count;
}
This always swaps an item for a place where it should be in the identity permutation, therefore at every step it undoes and counts one swap. Now, provided that the number of swaps it returns is indeed minimum, the runtime of the algorithm is bounded by it and is guaranteed to finish (instead of getting stuck in an infinite loop). It will run in O(m) swaps or O(m + n) loop iterations where m is number of swaps (the count returned) and n is number of items in the sequence (4). Note that m < n is always true. Therefore, this should be superior to O(n log n) solutions, as the upper bound is O(n - 1) of swaps or O(n + n - 1) of loop iterations here, which is both practically O(n) (constant factor of 2 omitted in the latter case).
The algorithm will only work for valid permutations; it will loop infinitely for sequences with duplicate values and will do out-of-bounds array access (and crash) for sequences with values outside [0, n). A complete test case can be found here (builds with Visual Studio 2008; the algorithm itself should be fairly portable). It generates all possible permutations of lengths 1 to 32 and checks against solutions generated with breadth-first search (BFS); it seems to work for all permutations of lengths 1 to 12, then it becomes fairly slow, but I assume it will just continue working.
Algorithm:
1. Check whether the elements at the same position in the two lists are equal. If yes, no swap is required; if not, swap the element of the second list with the position where the matching element occurs.
2. Iterate this process over the entire list.
Code:
def nswaps(l1, l2):
    cnt = 0
    for i in range(len(l1)):
        if l1[i] != l2[i]:
            ind = l2.index(l1[i])
            l2[i], l2[ind] = l2[ind], l2[i]
            cnt += 1
    return cnt
Since we already know that arr2 has the correct index for each element present in arr1, we can simply compare the arr1 elements with arr2 and swap them into the correct indexes whenever they are at a wrong index.
def minimum_swaps(arr1, arr2):
    swaps = 0
    for i in range(len(arr1)):
        if arr1[i] != arr2[i]:
            swaps += 1
            element = arr1[i]
            index = arr1.index(arr2[i])  # find index of correct element
            arr1[index] = element        # swap
            arr1[i] = arr2[i]
    return swaps
@J.F. Sebastian's and @Eyal Schneider's answers are pretty cool.
I got inspired to solve a similar problem: calculate the minimum swaps needed to sort an array, e.g. to sort {2,1,3,0} you need a minimum of 2 swaps.
Here is the Java Code:
// 0 1 2 3
// 3 2 1 0 (0,3) (1,2)
public static int sortWithSwap(int[] a) {
    Integer[] A = new Integer[a.length];
    for (int i = 0; i < a.length; i++) A[i] = a[i];
    Integer[] B = Arrays.copyOf(mapping(A), A.length, Integer[].class);

    int cycles = 0;
    HashSet<Integer> set = new HashSet<>();
    boolean newCycle = true;
    for (int i = 0; i < B.length; ) {
        if (!set.contains(B[i])) {
            if (newCycle) {
                newCycle = false;
                cycles++;
            }
            set.add(B[i]);
            i = B[i];
        }
        else if (set.contains(B[i])) { // duplicate in existing cycles
            newCycle = true;
            i++;
        }
    }

    // suppose the sequence has n cycles; each cycle needs len(cycle)-1 swaps,
    // and the sum of the lengths of all cycles is the length of the sequence, so
    // swaps = sequence length - cycles
    return a.length - cycles;
}

// a b b c
// c a b b
// 3 0 1 1
private static Object[] mapping(Object[] A) {
    Object[] B = new Object[A.length];
    Object[] ret = new Object[A.length];
    System.arraycopy(A, 0, B, 0, A.length);
    Arrays.sort(A);

    HashMap<Object, Integer> map = new HashMap<>();
    for (int i = 0; i < A.length; i++) {
        map.put(A[i], i);
    }
    for (int i = 0; i < B.length; i++) {
        ret[i] = map.get(B[i]);
    }
    return ret;
}
This seems like an edit distance problem, except that only transpositions are allowed.
Check out Damerau–Levenshtein distance pseudo code. I believe you can adjust it to count only the transpositions.
