What is the difference between BFS and DFS algorithms?

I got a timeout when solving an algorithm problem with BFS, but the same problem can be solved with DFS in time. Why does this difference occur?
The problem is to count the number of ways to get from (1, 1) to (N, N) by moving horizontally, vertically, or diagonally.
It took 1331.0 ms when the problem was solved by BFS (worst case) and 62.0 ms when it was solved by DFS. (I created and tested 16 * 16 arrays.)
I have attached the problem URL (but please understand that it is in Korean).
URL> https://www.acmicpc.net/problem/17070
The answer I want to hear is...
I thought the BFS algorithm would be faster, but why is DFS faster?
Is BFS slower because there are many elements in the queue? I want to know the exact reason.
Implementation code>
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayDeque;
import java.util.StringTokenizer;

class Location {
int x;
int y;
int dir;
public Location(int x, int y, int dir) {
super();
this.x = x;
this.y = y;
this.dir = dir;
}
}
public class Main {
static int[][] map;
static int Answer;
static int N;
public static void main(String args[]) throws IOException {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer st;
N = Integer.parseInt(br.readLine());
map = new int[N + 1][N + 1];
for (int i = 1; i <= N; i++) {
st = new StringTokenizer(br.readLine());
for (int j = 1; j <= N; j++)
map[i][j] = Integer.parseInt(st.nextToken());
}
DFS(1, 2, 0);
System.out.println(Answer);
Answer = 0;
BFS(1, 2, 0);
System.out.println(Answer);
br.close();
}
static void DFS(int x, int y, int pre) {
if (x == N && y == N) {
Answer++;
return;
}
if (pre == 0) {
if (y + 1 <= N && map[x][y + 1] == 0)
DFS(x, y + 1, 0);
if (y + 1 <= N && map[x][y + 1] == 0 && x + 1 <= N && map[x + 1][y] == 0 && map[x + 1][y + 1] == 0)
DFS(x + 1, y + 1, 1);
} else if (pre == 1) {
if (y + 1 <= N && map[x][y + 1] == 0)
DFS(x, y + 1, 0);
if (x + 1 <= N && map[x + 1][y] == 0)
DFS(x + 1, y, 2);
if (y + 1 <= N && map[x][y + 1] == 0 && x + 1 <= N && map[x + 1][y] == 0 && map[x + 1][y + 1] == 0)
DFS(x + 1, y + 1, 1);
} else {
if (x + 1 <= N && map[x + 1][y] == 0)
DFS(x + 1, y, 2);
if (y + 1 <= N && map[x][y + 1] == 0 && x + 1 <= N && map[x + 1][y] == 0 && map[x + 1][y + 1] == 0)
DFS(x + 1, y + 1, 1);
}
}
static void BFS(int startX, int startY, int dir) {
ArrayDeque<Location> arrayDeque = new ArrayDeque<>();
arrayDeque.add(new Location(startX, startY, dir));
Location location;
int x, y, pre;
while(!arrayDeque.isEmpty()) {
location = arrayDeque.remove();
x = location.x;
y = location.y;
pre = location.dir;
if(x == N-1 && y == N-1) {
Answer++; continue;
}
if (pre == 0) {
if (y + 1 <= N && map[x][y + 1] == 0)
arrayDeque.add(new Location(x, y + 1, 0));
if (y + 1 <= N && map[x][y + 1] == 0 && x + 1 <= N && map[x + 1][y] == 0 && map[x + 1][y + 1] == 0)
arrayDeque.add(new Location(x + 1, y + 1, 1));
} else if (pre == 1) {
if (y + 1 <= N && map[x][y + 1] == 0)
arrayDeque.add(new Location(x, y + 1, 0));
if (x + 1 <= N && map[x + 1][y] == 0)
arrayDeque.add(new Location(x + 1, y, 2));
if (y + 1 <= N && map[x][y + 1] == 0 && x + 1 <= N && map[x + 1][y] == 0 && map[x + 1][y + 1] == 0)
arrayDeque.add(new Location(x + 1, y + 1, 1));
} else {
if (x + 1 <= N && map[x + 1][y] == 0)
arrayDeque.add(new Location(x + 1, y, 2));
if (y + 1 <= N && map[x][y + 1] == 0 && x + 1 <= N && map[x + 1][y] == 0 && map[x + 1][y + 1] == 0)
arrayDeque.add(new Location(x + 1, y + 1, 1));
}
}
}
}

Both BFS and DFS have O(|V| + |E|) time complexity, and the time difference you are experiencing is most probably rooted in a mistake in the BFS implementation that breaks the loop invariant.
One of the more common mistakes made while implementing BFS is adding the same element to the queue multiple times. You should only add a vertex v to the queue once, so that you can ensure it is removed a single time. If you don't, the asymptotic runtime (i.e. its complexity) will no longer be linear. You can see the relevant CLRS chapter for the proof of that, based on the loop invariant concept they introduce.
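To make that invariant concrete, here is a minimal generic sketch (my own, and deliberately not a fix for your path-counting problem, for the reason explained next): a grid BFS that marks each cell the moment it is enqueued, so no cell ever enters the queue twice. The (n+1) x (n+1) 1-indexed blocked grid and the two example moves are placeholder assumptions.

import java.util.ArrayDeque;

class BfsEnqueueOnce {
    // Each cell is marked as visited at enqueue time, so every vertex enters
    // the queue at most once and the traversal stays O(V + E).
    static void bfs(boolean[][] blocked, int n) {
        boolean[][] visited = new boolean[n + 1][n + 1];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        visited[1][1] = true;                 // mark BEFORE enqueueing
        queue.add(new int[]{1, 1});
        int[] dx = {0, 1}, dy = {1, 0};       // example moves: right, down
        while (!queue.isEmpty()) {
            int[] cur = queue.remove();
            for (int d = 0; d < dx.length; d++) {
                int nx = cur[0] + dx[d], ny = cur[1] + dy[d];
                if (nx <= n && ny <= n && !blocked[nx][ny] && !visited[nx][ny]) {
                    visited[nx][ny] = true;   // guarantees a single enqueue per cell
                    queue.add(new int[]{nx, ny});
                }
            }
        }
    }
}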
In other words, BFS is a traversal algorithm. It finds out which vertices are reachable, not the number of routes to reach every vertex v. If you try to compute the number of ways Kv to reach each v from (1, 1) through BFS, the complexity becomes larger than linear. If the problem asks you to find Kv, then your approach should be memoization and dynamic programming, not BFS.
Specifically, based on the code you provided, your algorithm does not keep track of whether a vertex (i.e. a cell in the grid) was previously explored or not. This causes vertices to be explored multiple times, which defeats the purpose of using a graph traversal algorithm like BFS or DFS. Using the terminology mentioned above, you are going against the loop invariant of BFS, which states that each vertex is popped from the queue only once. This causes the complexity of your code to grow much higher than linear.
You should look into the term memoization and figure out a way to find the solution for (N, N) using the solutions you compute only once for (N-1, N-1), (N-1, N) and (N, N-1).
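As a concrete illustration of that suggestion, here is a sketch (my own code, not part of your program, reusing the map / N conventions from the question) of memoization over the states (x, y, direction), mirroring the transitions of your DFS:

class PathCounting {
    static int N;
    static int[][] map;        // same meaning as in the question's code
    static long[][][] ways;    // memo table, -1 = not computed yet

    // Number of ways to reach (N, N) from (x, y) when the last move had
    // direction dir (0 = horizontal, 1 = diagonal, 2 = vertical).
    static long count(int x, int y, int dir) {
        if (x == N && y == N) return 1;
        if (ways[x][y][dir] != -1) return ways[x][y][dir];
        boolean right = y + 1 <= N && map[x][y + 1] == 0;
        boolean down  = x + 1 <= N && map[x + 1][y] == 0;
        boolean diag  = right && down && map[x + 1][y + 1] == 0;
        long total = 0;
        if (dir != 2 && right) total += count(x, y + 1, 0);       // horizontal move
        if (dir != 0 && down)  total += count(x + 1, y, 2);       // vertical move
        if (diag)              total += count(x + 1, y + 1, 1);   // diagonal move
        return ways[x][y][dir] = total;
    }
}

Assuming the same setup as your Main class, ways would be sized [N + 1][N + 1][3] and filled with -1 after reading the input, and count(1, 2, 0) would replace DFS(1, 2, 0); each of the roughly 3 * N * N states is then solved exactly once.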

Your BFS implementation uses dynamic memory allocation and an ArrayDeque; your DFS avoids that. That will increase the cost per node for your BFS, although it is odd that it would account for that much.
You could try allocating a new Location in each call of DFS (and possibly adding it to and removing it from an ArrayDeque) and see whether that causes the same performance loss.
Additionally, your BFS doesn't stop immediately when x = y = N, but I don't see that significantly increasing the run time.

Related

Find index j in a simple convex vector

A vector V[0, n − 1] is a simple convex vector if an index j exists such that V[0] = V[1] = V[2] = ··· = V[j] and, for each i > j, V[i] < V[i + 1]. Can you write pseudocode to solve this in O(log(n)) time complexity?
That's my solution, but I don't know if it works. I think it should, but only under the assumption that for each i >= j, V[i] < V[i + 1]. I don't know if I have the wrong definition.
convex(A, l, r) {
m = floor((l + r) / 2);
if (A[m] == A[0] && A[m] != A[m + 1]) {
return m;
}
if (A[m] == A[0]) {
return convex(A, m + 1, r);
}
else {
return convex(A, l, m - 1);
}
}
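For what it's worth, here is a sketch in Java of the same idea (my own reading of the pseudocode above, not a verified answer to the exercise): binary-search for the last index whose value still equals V[0], which is exactly the j being asked for.

class SimpleConvex {
    // Last index whose value equals V[0]; by the definition above everything
    // after that index is strictly increasing. Runs in O(log n).
    static int convex(int[] v) {
        int lo = 0, hi = v.length - 1;        // invariant: v[lo] == v[0]
        while (lo < hi) {
            int mid = lo + (hi - lo + 1) / 2; // upper middle, so lo always advances
            if (v[mid] == v[0]) lo = mid;     // j is at mid or to its right
            else hi = mid - 1;                // j is strictly left of mid
        }
        return lo;                            // the index j
    }
}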

Optimize: Divide an array into continuous subsequences of length no greater than k such that sum of maximum value of each subsequence is minimum

Optimize O(n^2) algorithm to O(n log n).
Problem Statement
Given an array A of n positive integers, divide the array into continuous subsequences of length no greater than k such that the sum of the maximum values of the subsequences is minimized. Here's an example.
If n = 8 and k = 5 and elements of the array are 1 4 1 3 4 7 2 2, best solution is 1 | 4 1 3 4 7 | 2 2. The sum would be max{1} + max{4, 1, 3, 4, 7} + max{2, 2} = 1 + 7 + 2 = 10.
O(n^2) solution
Let dp[i] be the minimum sum as in problem statement for subproblem array A[0] ... A[i]. dp[0] = A[0] and, for 0 < i < n (dp[-1] = 0),
// A, n, k, - defined
// dp - all initialized to INF
dp[0] = A[0];
for (auto i = 1; i < n; i++) {
auto max = -INF;
for (auto j = i; j >= 0 && j >= i-k+1; j--) {
if (A[j] > max)
max = A[j];
auto sum = max + (j > 0 ? dp[j-1] : 0);
if (sum < dp[i])
dp[i] = sum;
}
}
// answer: dp[n-1]
O(n log n) ?
The problem author claimed that it was possible to solve this in O(n log n) time, and there are some people who were able to pass the test cases. How can this be optimized?
NOTE: I'm going to change your dynamic programming relation slightly, so that there is no special case for j = 0. Now dp[j] is the answer for the first j terms A[0], ..., A[j-1], and:
dp[i] = min(dp[j] + max(A[j], ..., A[i-1]), i-k <= j < i)
The answer of the problem is now dp[n].
Notice that if j < i and dp[j] >= dp[i], you won't need dp[j] in the following transitions, because max(A[j], ..., A[l]) >= max(A[i], ..., A[l]) (so it will always be better to cut at i instead of j).
Furthermore let C[j] = max(A[j+1], ..., A[l]) (where l is our current index in the dynamic programming step, i.e. i in your C++ program).
Then you can keep in memory some set of indices x1 < ... < xm (the "interesting" indices for the transitions of your dynamic programming relation) such that: dp[x1] < ... < dp[xm] (1). Then automatically C[x1] >= ... >= C[xm] (2).
To store {x1, ..., xm}, we need some data structure that supports the following operations:
Pop back (when we move from i to i+1, we must say that i-k is now unreachable) or front (cf. insertion).
Push front x (when we have computed dp[i], we insert it while preserving (1), by deleting the corresponding elements).
Compute min(dp[xj] + C[xj], 1 <= j <= m).
Thus a queue to store x1, ..., xm together with a set to store all dp[xi] + C[xi] will be enough.
How do we both preserve (1) and update C when we insert an element i?
Before computing dp[i], we update C with A[i-1]. For that we find the smallest element xj in the set x s.t. C[xj] <= A[i-1]. Then (1) and (2) imply dp[j'] + C[j'] >= dp[j] + C[j] for all j' >= j, so we update C[xj] to A[i-1] and we delete x(j+1), ..., xm from the set (*).
When we insert dp[i], we just delete all elements s.t. dp[j] >= dp[i] by popping front.
When we remove i-k, it may be possible that some element destroyed in (*) is now becoming best. So if necessary we update C and insert the last element.
Complexity : O(n log n) (there could be at most 2n insertions in the set).
This code sums up the main ideas:
template<class T> void relaxmax(T& r, T v) { r = max(r, v); }
vector<int> dp(n + 1);
vector<int> C(n + 1, -INF);
vector<int> q(n + 1);
vector<int> ne(n + 1, -INF);
int qback = 0, qfront = 0;
auto cmp = [&](const int& x, const int& y) {
int vx = dp[x] + C[x], vy = dp[y] + C[y];
return vx != vy ? vx < vy : x < y;
};
set<int, decltype(cmp)> s(cmp);
dp[0] = 0;
s.insert(0);
q[qfront++] = 0;
for (int i = 1; i <= n; ++i) {
C[i] = A[i - 1];
auto it_last = lower_bound(q.begin() + qback, q.begin() + qfront, i, [=](const int& x, const int& y) {
return C[x] > C[y];
});
for (auto it = it_last; it != q.begin() + qfront; ++it) {
s.erase(*it);
C[*it] = A[i - 1];
ne[*it] = i;
if (it == it_last) s.insert(*it);
}
dp[i] = dp[*s.begin()] + C[*s.begin()];
while (qback < qfront && dp[q[qfront]] >= dp[i]) {
s.erase(q[qfront]);
qfront--;
}
q[qfront++] = i;
C[i] = -INF;
s.insert(i);
if (q[qback] == i - k) {
s.erase(i - k);
if (qback + 1 != qfront && ne[q[qback]] > q[qback + 1]) {
s.erase(q[qback + 1]);
relaxmax(C[q[qback + 1]], C[i - k]);
s.insert(q[qback + 1]);
}
qback++;
}
}
// answer: dp[n]
This time I stress-tested it against your algorithm: see here.
Please let me know if it's still unclear.

depth first search recursion in matrix

http://codeforces.com/contest/540/problem/C
This problem is solved using depth-first recursion on a matrix, and I got this code for the problem. Can anyone explain why/where the recursion is working on this matrix?
#include <bits/stdc++.h>
using namespace std;
int a, b, used[600][600];
char c;
void dfs( int x, int y )
{
used[x][y]++;
if( used[x][y] >= 3 || x < 1 || y < 1 || x > a || y > b )
return;
dfs(x + 1, y);
dfs(x - 1, y);
dfs(x, y + 1);
dfs(x, y - 1);
}
int main()
{
int i, j;
cin >> a >> b;
for( i = 1; i <= a; i++ ){
for( j = 1; j <= b; j++ ){
cin >> c;
if( c == '.' ) used[i][j] = 1;
else used[i][j] = 2;
}
}
int x1, y1, x2, y2;
cin >> x1 >> y1 >> x2 >> y2;
used[x1][y1] = 1;
dfs(x1, y1);
if( used[x2][y2] >= 3 ) cout << "YES";
else cout << "NO";
}
This problem can be thought of as a graph with edges connecting cells (nodes) which are adjacent, i.e. share a boundary.
When DFS is implemented using an explicit stack structure it is easier to see the recursion. Here the recursion is implemented in the form of function calls. We call dfs only for those nodes which are directly reachable from the node we are currently examining (which are all the cells adjacent to (x, y)).
A call to
dfs(x, y);
results in calls to
dfs(x + i, y + j);
for (i, j) = {(-1, 0), (0, 1), (1, 0), (0, -1)}. As a consequence, the program state (variables, instruction pointer etc.) at the time
dfs(x + 1, y) is called is pushed implicitly onto the stack memory used by the program. These calls increase the count of the cells adjacent to (x, y) by 1.
The recursion unwinds either when the condition
if( used[x][y] >= 3 || x < 1 || y < 1 || x > a || y > b )
is met, resulting in a return to the coordinates (node) that called dfs on (x, y), or when all the calls to
dfs(x + i, y + j) for (i, j) = {(-1, 0), (0, 1), (1, 0), (0, -1)} have returned control to dfs(x, y), i.e. after the line dfs(x, y - 1).
So the recursion is all around you when you call dfs(x - 1, y), dfs(x, y - 1), etc. within dfs(x, y).
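Since the recursion is easier to see with an explicit stack, here is a generic Java sketch (my own, using a plain visited flag rather than the >= 3 counting trick of the code above, and assuming a 1-indexed boolean grid free of walkable cells): each push corresponds to one of the recursive dfs(x + i, y + j) calls, and each pop corresponds to entering that call.

import java.util.ArrayDeque;

class IterativeDfs {
    static void dfs(boolean[][] free, int rows, int cols, int sx, int sy) {
        boolean[][] seen = new boolean[rows + 2][cols + 2];
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{sx, sy});
        int[] dx = {1, -1, 0, 0};
        int[] dy = {0, 0, 1, -1};
        while (!stack.isEmpty()) {
            int[] cell = stack.pop();
            int x = cell[0], y = cell[1];
            if (x < 1 || y < 1 || x > rows || y > cols || seen[x][y] || !free[x][y])
                continue;                     // plays the role of the base-case return
            seen[x][y] = true;
            for (int d = 0; d < 4; d++)
                stack.push(new int[]{x + dx[d], y + dy[d]});   // the four "recursive calls"
        }
    }
}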

Find order statistic in union of 2 sorted lists on logarithmic time [duplicate]

This is a homework question; binary search has already been introduced:
Given two arrays with N and M elements respectively, in ascending order, not necessarily unique:
What is a time-efficient algorithm to find the kth smallest element in the union of both arrays?
They say it takes O(log N + log M), where N and M are the arrays' lengths.
Let's name the arrays a and b. Obviously we can ignore all a[i] and b[i] where i > k.
First let's compare a[k/2] and b[k/2]. Suppose b[k/2] > a[k/2]. Then we can also discard all b[i] where i > k/2.
Now we are left with all a[i] where i < k and all b[i] where i < k/2 to find the answer.
What is the next step?
I hope I am not answering your homework, as it has been over a year since this question was asked. Here is a tail-recursive solution that takes O(log(len(a) + len(b))) time.
Assumption: the inputs are correct, i.e., k is in the range [0, len(a) + len(b)).
Base cases:
If the length of one of the arrays is 0, the answer is the kth element of the other array.
Reduction steps:
If the mid index of a plus the mid index of b is less than k:
If the mid element of a is greater than the mid element of b, we can ignore the first half of b and adjust k.
Otherwise, ignore the first half of a and adjust k.
Otherwise (k is not larger than the sum of the mid indices):
If the mid element of a is greater than the mid element of b, we can safely ignore the second half of a.
Otherwise, we can ignore the second half of b.
Code:
def kthlargest(arr1, arr2, k):
    if len(arr1) == 0:
        return arr2[k]
    elif len(arr2) == 0:
        return arr1[k]
    mida1 = len(arr1) // 2 # integer division
    mida2 = len(arr2) // 2
    if mida1 + mida2 < k:
        if arr1[mida1] > arr2[mida2]:
            return kthlargest(arr1, arr2[mida2+1:], k - mida2 - 1)
        else:
            return kthlargest(arr1[mida1+1:], arr2, k - mida1 - 1)
    else:
        if arr1[mida1] > arr2[mida2]:
            return kthlargest(arr1[:mida1], arr2, k)
        else:
            return kthlargest(arr1, arr2[:mida2], k)
Please note that my solution creates new copies of smaller arrays in every call; this can easily be eliminated by passing only start and end indices into the original arrays.
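For illustration, here is a sketch of that index-based variant (my own, written in Java; k stays zero-based as in the Python code above and the ranges are half-open):

class KthOfTwoSorted {
    // Same recursion as the Python version, but on index ranges [loA, hiA) and
    // [loB, hiB) instead of sliced copies.
    static int kth(int[] a, int loA, int hiA, int[] b, int loB, int hiB, int k) {
        if (loA == hiA) return b[loB + k];
        if (loB == hiB) return a[loA + k];
        int midA = (hiA - loA) / 2;           // offset of the middle of the a-range
        int midB = (hiB - loB) / 2;
        if (midA + midB < k) {
            if (a[loA + midA] > b[loB + midB])
                return kth(a, loA, hiA, b, loB + midB + 1, hiB, k - midB - 1);
            else
                return kth(a, loA + midA + 1, hiA, b, loB, hiB, k - midA - 1);
        } else {
            if (a[loA + midA] > b[loB + midB])
                return kth(a, loA, loA + midA, b, loB, hiB, k);
            else
                return kth(a, loA, hiA, b, loB, loB + midB, k);
        }
    }
}

The initial call would be kth(arr1, 0, arr1.length, arr2, 0, arr2.length, k).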
You've got it, just keep going! And be careful with the indexes...
To simplify a bit I'll assume that N and M are > k, so the complexity here is O(log k), which is O(log N + log M).
Pseudo-code:
i = k/2
j = k - i
step = k/4
while step > 0
if a[i-1] > b[j-1]
i -= step
j += step
else
i += step
j -= step
step /= 2
if a[i-1] > b[j-1]
return a[i-1]
else
return b[j-1]
For the demonstration you can use the loop invariant i + j = k, but I won't do all your homework :)
Many people have answered this "kth smallest element from two sorted arrays" question, but usually only with general ideas, not clear working code or an analysis of the boundary conditions.
Here I'd like to elaborate on it carefully, the way I worked through it, to help novices understand, with my correct working Java code. A1 and A2 are two sorted ascending arrays, with size1 and size2 as their lengths respectively. We need to find the k-th smallest element from the union of those two arrays. Here we reasonably assume that (k > 0 && k <= size1 + size2), which implies that A1 and A2 can't both be empty.
First, let's approach this question with a slow O(k) algorithm. The method is to compare the first elements of both arrays, A1[0] and A2[0]. Take the smaller one, say A1[0], and put it away into our pocket. Then compare A1[1] with A2[0], and so on. Repeat this action until our pocket has k elements. Very important: in the first step, we can only commit A1[0] to our pocket. We can NOT include or exclude A2[0]!!!
The following O(k) code gives you the element one before the correct answer. Here I use it to show my idea and to analyze the boundary conditions. I have correct code after this one:
private E kthSmallestSlowWithFault(int k) {
int size1 = A1.length, size2 = A2.length;
int index1 = 0, index2 = 0;
// base case, k == 1
if (k == 1) {
if (size1 == 0) {
return A2[index2];
} else if (size2 == 0) {
return A1[index1];
} else if (A1[index1].compareTo(A2[index2]) < 0) {
return A1[index1];
} else {
return A2[index2];
}
}
/* in the next loop, we always assume there is one next element to compare with, so we can
* commit to the smaller one. What if the last element is the kth one?
*/
if (k == size1 + size2) {
if (size1 == 0) {
return A2[size2 - 1];
} else if (size2 == 0) {
return A1[size1 - 1];
} else if (A1[size1 - 1].compareTo(A2[size2 - 1]) < 0) {
return A1[size1 - 1];
} else {
return A2[size2 - 1];
}
}
/*
* only when k > 1, below loop will execute. In each loop, we commit to one element, till we
* reach (index1 + index2 == k - 1) case. But the answer is not correct, always one element
* ahead, because we didn't merge base case function into this loop yet.
*/
int lastElementFromArray = 0;
while (index1 + index2 < k - 1) {
if (A1[index1].compareTo(A2[index2]) < 0) {
index1++;
lastElementFromArray = 1;
// commit to one element from array A1, but that element is at (index1 - 1)!!!
} else {
index2++;
lastElementFromArray = 2;
}
}
if (lastElementFromArray == 1) {
return A1[index1 - 1];
} else {
return A2[index2 - 1];
}
}
The most powerful idea is that in each loop, we always use the base case approach. After committed to the current smallest element, we get one step closer to the target: the k-th smallest element. Never jump into the middle and make yourself confused and lost!
By observing the base cases k == 1 and k == size1 + size2 in the above code, and combining them with the fact that A1 and A2 can't both be empty, we can turn the logic into the more concise style below.
Here is a slow but correct working code:
private E kthSmallestSlow(int k) {
// System.out.println("this is an O(k) speed algorithm, very concise");
int size1 = A1.length, size2 = A2.length;
int index1 = 0, index2 = 0;
while (index1 + index2 < k - 1) {
if (size1 > index1 && (size2 <= index2 || A1[index1].compareTo(A2[index2]) < 0)) {
index1++; // here we commit to original index1 element, not the increment one!!!
} else {
index2++;
}
}
// below is the (index1 + index2 == k - 1) base case
// also eliminate the risk of referring to an element outside of index boundary
if (size1 > index1 && (size2 <= index2 || A1[index1].compareTo(A2[index2]) < 0)) {
return A1[index1];
} else {
return A2[index2];
}
}
Now we can try a faster algorithm that runs in O(log k). Similarly, compare A1[k/2] with A2[k/2]; if A1[k/2] is smaller, then all the elements from A1[0] to A1[k/2] should be in our pocket. The idea is to commit not just one element in each loop; the first step commits k/2 elements. Again, we can NOT include or exclude anything from A2[0] to A2[k/2] yet. So in the first step, we can't advance more than k/2 elements. In the second step, we can't advance more than k/4 elements...
After each step, we get much closer to the k-th element. At the same time each step gets smaller and smaller, until we reach (step == 1), which is (k - 1 == index1 + index2). Then we can refer to the simple and powerful base case again.
Here is the working correct code:
private E kthSmallestFast(int k) {
// System.out.println("this is an O(log k) speed algorithm with meaningful variables name");
int size1 = A1.length, size2 = A2.length;
int index1 = 0, index2 = 0, step = 0;
while (index1 + index2 < k - 1) {
step = (k - index1 - index2) / 2;
int step1 = index1 + step;
int step2 = index2 + step;
if (size1 > step1 - 1
&& (size2 <= step2 - 1 || A1[step1 - 1].compareTo(A2[step2 - 1]) < 0)) {
index1 = step1; // commit to element at index = step1 - 1
} else {
index2 = step2;
}
}
// the base case of (index1 + index2 == k - 1)
if (size1 > index1 && (size2 <= index2 || A1[index1].compareTo(A2[index2]) < 0)) {
return A1[index1];
} else {
return A2[index2];
}
}
Some people may worry: what if (index1 + index2) jumps over k - 1? Could we miss the base case (k - 1 == index1 + index2)? That's impossible: the step is (k - index1 - index2) / 2 with integer division, which never exceeds the remaining distance (k - 1) - (index1 + index2) while that distance is at least 1. Intuitively, you can add up 0.5 + 0.25 + 0.125... and you will never go beyond 1.
Of course, it is very easy to turn the above code into recursive algorithm:
private E kthSmallestFastRecur(int k, int index1, int index2, int size1, int size2) {
// System.out.println("this is an O(log k) speed algorithm with meaningful variables name");
// the base case of (index1 + index2 == k - 1)
if (index1 + index2 == k - 1) {
if (size1 > index1 && (size2 <= index2 || A1[index1].compareTo(A2[index2]) < 0)) {
return A1[index1];
} else {
return A2[index2];
}
}
int step = (k - index1 - index2) / 2;
int step1 = index1 + step;
int step2 = index2 + step;
if (size1 > step1 - 1 && (size2 <= step2 - 1 || A1[step1 - 1].compareTo(A2[step2 - 1]) < 0)) {
index1 = step1;
} else {
index2 = step2;
}
return kthSmallestFastRecur(k, index1, index2, size1, size2);
}
Hope the above analysis and Java code could help you to understand. But never copy my code as your homework! Cheers ;)
Here's a C++ iterative version of #lambdapilgrim's solution (see the explanation of the algorithm there):
#include <algorithm>  // std::is_sorted
#include <cassert>
#include <iterator>
template<class RandomAccessIterator, class Compare>
typename std::iterator_traits<RandomAccessIterator>::value_type
nsmallest_iter(RandomAccessIterator firsta, RandomAccessIterator lasta,
RandomAccessIterator firstb, RandomAccessIterator lastb,
size_t n,
Compare less) {
assert(std::is_sorted(firsta, lasta, less) && std::is_sorted(firstb, lastb, less));
for ( ; ; ) {
assert(n < static_cast<size_t>((lasta - firsta) + (lastb - firstb)));
if (firsta == lasta) return *(firstb + n);
if (firstb == lastb) return *(firsta + n);
size_t mida = (lasta - firsta) / 2;
size_t midb = (lastb - firstb) / 2;
if ((mida + midb) < n) {
if (less(*(firstb + midb), *(firsta + mida))) {
firstb += (midb + 1);
n -= (midb + 1);
}
else {
firsta += (mida + 1);
n -= (mida + 1);
}
}
else {
if (less(*(firstb + midb), *(firsta + mida)))
lasta = (firsta + mida);
else
lastb = (firstb + midb);
}
}
}
It works for all 0 <= n < (size(a) + size(b)) indexes and has O(log(size(a)) + log(size(b))) complexity.
Example
#include <functional> // greater<>
#include <iostream>
#define SIZE(a) (sizeof(a) / sizeof(*a))
int main() {
int a[] = {5,4,3};
int b[] = {2,1,0};
int k = 1; // find minimum value, the 1st smallest value in a,b
int i = k - 1; // convert to zero-based indexing
int v = nsmallest_iter(a, a + SIZE(a), b, b + SIZE(b),
SIZE(a)+SIZE(b)-1-i, std::greater<int>());
std::cout << v << std::endl; // -> 0
return v;
}
My attempt for first k numbers, kth number in 2 sorted arrays, and in n sorted arrays:
// require() is recognizable by node.js but not by browser;
// for running/debugging in browser, put utils.js and this file in <script> elements,
if (typeof require === "function") require("./utils.js");
// Find K largest numbers in two sorted arrays.
function k_largest(a, b, c, k) {
var sa = a.length;
var sb = b.length;
if (sa + sb < k) return -1;
var i = 0;
var j = sa - 1;
var m = sb - 1;
while (i < k && j >= 0 && m >= 0) {
if (a[j] > b[m]) {
c[i] = a[j];
i++;
j--;
} else {
c[i] = b[m];
i++;
m--;
}
}
debug.log(2, "i: "+ i + ", j: " + j + ", m: " + m);
if (i === k) {
return 0;
} else if (j < 0) {
while (i < k) {
c[i++] = b[m--];
}
} else {
while (i < k) c[i++] = a[j--];
}
return 0;
}
// find k-th largest or smallest number in 2 sorted arrays.
function kth(a, b, kd, dir){
sa = a.length; sb = b.length;
if (kd<1 || sa+sb < kd){
throw "Mission Impossible! I quit!";
}
var k;
//finding the kd_th largest == finding the smallest k_th;
if (dir === 1){ k = kd;
} else if (dir === -1){ k = sa + sb - kd + 1;}
else throw "Direction has to be 1 (smallest) or -1 (largest).";
return find_kth(a, b, k, sa-1, 0, sb-1, 0);
}
// find k-th smallest number in 2 sorted arrays;
function find_kth(c, d, k, cmax, cmin, dmax, dmin){
sc = cmax-cmin+1; sd = dmax-dmin+1; k0 = k; cmin0 = cmin; dmin0 = dmin;
debug.log(2, "=k: " + k +", sc: " + sc + ", cmax: " + cmax +", cmin: " + cmin + ", sd: " + sd +", dmax: " + dmax + ", dmin: " + dmin);
c_comp = k0-sc;
if (c_comp <= 0){
cmax = cmin0 + k0-1;
} else {
dmin = dmin0 + c_comp-1;
k -= c_comp-1;
}
d_comp = k0-sd;
if (d_comp <= 0){
dmax = dmin0 + k0-1;
} else {
cmin = cmin0 + d_comp-1;
k -= d_comp-1;
}
sc = cmax-cmin+1; sd = dmax-dmin+1;
debug.log(2, "#k: " + k +", sc: " + sc + ", cmax: " + cmax +", cmin: " + cmin + ", sd: " + sd +", dmax: " + dmax + ", dmin: " + dmin + ", c_comp: " + c_comp + ", d_comp: " + d_comp);
if (k===1) return (c[cmin]<d[dmin] ? c[cmin] : d[dmin]);
if (k === sc+sd) return (c[cmax]>d[dmax] ? c[cmax] : d[dmax]);
m = Math.floor((cmax+cmin)/2);
n = Math.floor((dmax+dmin)/2);
debug.log(2, "m: " + m + ", n: "+n+", c[m]: "+c[m]+", d[n]: "+d[n]);
if (c[m]<d[n]){
if (m === cmax){ // only 1 element in c;
return d[dmin+k-1];
}
k_next = k-(m-cmin+1);
return find_kth(c, d, k_next, cmax, m+1, dmax, dmin);
} else {
if (n === dmax){
return c[cmin+k-1];
}
k_next = k-(n-dmin+1);
return find_kth(c, d, k_next, cmax, cmin, dmax, n+1);
}
}
function traverse_at(a, ae, h, l, k, at, worker, wp){
var n = ae ? ae.length : 0;
var get_node;
switch (at){
case "k": get_node = function(idx){
var node = {};
var pos = l[idx] + Math.floor(k/n) - 1;
if (pos<l[idx]){ node.pos = l[idx]; }
else if (pos > h[idx]){ node.pos = h[idx];}
else{ node.pos = pos; }
node.idx = idx;
node.val = a[idx][node.pos];
debug.log(6, "pos: "+pos+"\nnode =");
debug.log(6, node);
return node;
};
break;
case "l": get_node = function(idx){
debug.log(6, "a["+idx+"][l["+idx+"]]: "+a[idx][l[idx]]);
return a[idx][l[idx]];
};
break;
case "h": get_node = function(idx){
debug.log(6, "a["+idx+"][h["+idx+"]]: "+a[idx][h[idx]]);
return a[idx][h[idx]];
};
break;
case "s": get_node = function(idx){
debug.log(6, "h["+idx+"]-l["+idx+"]+1: "+(h[idx] - l[idx] + 1));
return h[idx] - l[idx] + 1;
};
break;
default: get_node = function(){
debug.log(1, "!!! Exception: get_node() returns null.");
return null;
};
break;
}
worker.init();
debug.log(6, "--* traverse_at() *--");
var i;
if (!wp){
for (i=0; i<n; i++){
worker.work(get_node(ae[i]));
}
} else {
for (i=0; i<n; i++){
worker.work(get_node(ae[i]), wp);
}
}
return worker.getResult();
}
sumKeeper = function(){
var res = 0;
return {
init : function(){ res = 0;},
getResult: function(){
debug.log(5, "## sumKeeper.getResult: returning: "+res);
return res;
},
work : function(node){ if (node!==null) res += node;}
};
}();
maxPicker = function(){
var res = null;
return {
init : function(){ res = null;},
getResult: function(){
debug.log(5, "## maxPicker.getResult: returning: "+res);
return res;
},
work : function(node){
if (res === null){ res = node;}
else if (node!==null && node > res){ res = node;}
}
};
}();
minPicker = function(){
var res = null;
return {
init : function(){ res = null;},
getResult: function(){
debug.log(5, "## minPicker.getResult: returning: ");
debug.log(5, res);
return res;
},
work : function(node){
if (res === null && node !== null){ res = node;}
else if (node!==null &&
node.val !==undefined &&
node.val < res.val){ res = node; }
else if (node!==null && node < res){ res = node;}
}
};
}();
// find k-th smallest number in n sorted arrays;
// need to consider the case where some of the subarrays are taken out of the selection;
function kth_n(a, ae, k, h, l){
var n = ae.length;
debug.log(2, "------** kth_n() **-------");
debug.log(2, "n: " +n+", k: " + k);
debug.log(2, "ae: ["+ae+"], len: "+ae.length);
debug.log(2, "h: [" + h + "]");
debug.log(2, "l: [" + l + "]");
for (var i=0; i<n; i++){
if (h[ae[i]]-l[ae[i]]+1>k) h[ae[i]]=l[ae[i]]+k-1;
}
debug.log(3, "--after reduction --");
debug.log(3, "h: [" + h + "]");
debug.log(3, "l: [" + l + "]");
if (n === 1)
return a[ae[0]][k-1];
if (k === 1)
return traverse_at(a, ae, h, l, k, "l", minPicker);
if (k === traverse_at(a, ae, h, l, k, "s", sumKeeper))
return traverse_at(a, ae, h, l, k, "h", maxPicker);
var kn = traverse_at(a, ae, h, l, k, "k", minPicker);
debug.log(3, "kn: ");
debug.log(3, kn);
var idx = kn.idx;
debug.log(3, "last: k: "+k+", l["+kn.idx+"]: "+l[idx]);
k -= kn.pos - l[idx] + 1;
l[idx] = kn.pos + 1;
debug.log(3, "next: "+"k: "+k+", l["+kn.idx+"]: "+l[idx]);
if (h[idx]<l[idx]){ // all elements in a[idx] selected;
//remove a[idx] from the arrays.
debug.log(4, "All elements selected in a["+idx+"].");
debug.log(5, "last ae: ["+ae+"]");
ae.splice(ae.indexOf(idx), 1);
h[idx] = l[idx] = "_"; // For display purpose only.
debug.log(5, "next ae: ["+ae+"]");
}
return kth_n(a, ae, k, h, l);
}
function find_kth_in_arrays(a, k){
if (!a || a.length<1 || k<1) throw "Mission Impossible!";
var ae=[], h=[], l=[], n=0, s, ts=0;
for (var i=0; i<a.length; i++){
s = a[i] && a[i].length;
if (s>0){
ae.push(i); h.push(s-1); l.push(0);
ts+=s;
}
}
if (k>ts) throw "Too few elements to choose from!";
return kth_n(a, ae, k, h, l);
}
/////////////////////////////////////////////////////
// tests
// To show everything: use 6.
debug.setLevel(1);
var a = [2, 3, 5, 7, 89, 223, 225, 667];
var b = [323, 555, 655, 673];
//var b = [99];
var c = [];
debug.log(1, "a = (len: " + a.length + ")");
debug.log(1, a);
debug.log(1, "b = (len: " + b.length + ")");
debug.log(1, b);
for (var k=1; k<a.length+b.length+1; k++){
debug.log(1, "================== k: " + k + "=====================");
if (k_largest(a, b, c, k) === 0 ){
debug.log(1, "c = (len: "+c.length+")");
debug.log(1, c);
}
try{
result = kth(a, b, k, -1);
debug.log(1, "===== The " + k + "-th largest number: " + result);
} catch (e) {
debug.log(0, "Error message from kth(): " + e);
}
debug.log("==================================================");
}
debug.log(1, "################# Now for the n sorted arrays ######################");
debug.log(1, "####################################################################");
x = [[1, 3, 5, 7, 9],
[-2, 4, 6, 8, 10, 12],
[8, 20, 33, 212, 310, 311, 623],
[8],
[0, 100, 700],
[300],
[],
null];
debug.log(1, "x = (len: "+x.length+")");
debug.log(1, x);
for (var i=0, num=0; i<x.length; i++){
if (x[i]!== null) num += x[i].length;
}
debug.log(1, "totoal number of elements: "+num);
// to test k in specific ranges:
var start = 0, end = 25;
for (k=start; k<end; k++){
debug.log(1, "=========================== k: " + k + "===========================");
try{
result = find_kth_in_arrays(x, k);
debug.log(1, "====== The " + k + "-th smallest number: " + result);
} catch (e) {
debug.log(1, "Error message from find_kth_in_arrays: " + e);
}
debug.log(1, "=================================================================");
}
debug.log(1, "x = (len: "+x.length+")");
debug.log(1, x);
debug.log(1, "totoal number of elements: "+num);
The complete code with debug utils can be found at: https://github.com/brainclone/teasers/tree/master/kth
Most of the answers I found here focus on both arrays. While that works, it is harder to implement, as there are a lot of edge cases that we need to take care of. Besides that, most of the implementations are recursive, which adds the space complexity of the recursion stack. So instead of focusing on both arrays, I decided to focus only on the smaller array, do the binary search on just the smaller array, and adjust the pointer into the second array based on the value of the pointer into the first array. With the following implementation, we have a complexity of O(log(min(n, m))) with O(1) space complexity.
public static int kth_two_sorted(int []a, int b[],int k){
if(a.length > b.length){
return kth_two_sorted(b,a,k);
}
if(a.length + b.length < k){
throw new RuntimeException("wrong argument");
}
int low = 0;
int high = k;
if(a.length <= k){
high = a.length-1;
}
while(low <= high){
int sizeA = low+(high - low)/2;
int sizeB = k - sizeA;
boolean shrinkLeft = false;
boolean extendRight = false;
if(sizeA != 0){
if(sizeB !=b.length){
if(a[sizeA-1] > b[sizeB]){
shrinkLeft = true;
high = sizeA-1;
}
}
}
if(sizeA!=a.length){
if(sizeB!=0){
if(a[sizeA] < b[sizeB-1]){
extendRight = true;
low = sizeA;
}
}
}
if(!shrinkLeft && !extendRight){
return Math.max(a[sizeA-1],b[sizeB-1]) ;
}
}
throw new IllegalArgumentException("we can't be here");
}
We have a range [low, high] for array a, and we narrow this range as we go through the algorithm. sizeA shows how many of the k items come from array a, and it is derived from the values of low and high. sizeB has the same definition, except we calculate its value in such a way that sizeA + sizeB = k. Then, based on the values at those two borders, we conclude whether we have to extend to the right side in array a or shrink to the left side. If we get stuck in the same position, it means that we have found the solution, and we return the max of the values at position sizeA - 1 in a and sizeB - 1 in b.
Here's my code based on Jules Olleon's solution:
#include <iostream>
#include <vector>
using namespace std;

int getNth(vector<int>& v1, vector<int>& v2, int n)
{
int step = n / 4;
int i1 = n / 2;
int i2 = n - i1;
while(!(v2[i2] >= v1[i1 - 1] && v1[i1] > v2[i2 - 1]))
{
if (v1[i1 - 1] >= v2[i2 - 1])
{
i1 -= step;
i2 += step;
}
else
{
i1 += step;
i2 -= step;
}
step /= 2;
if (!step) step = 1;
}
if (v1[i1 - 1] >= v2[i2 - 1])
return v1[i1 - 1];
else
return v2[i2 - 1];
}
int main()
{
int a1[] = {1,2,3,4,5,6,7,8,9};
int a2[] = {4,6,8,10,12};
//int a1[] = {1,2,3,4,5,6,7,8,9};
//int a2[] = {4,6,8,10,12};
//int a1[] = {1,7,9,10,30};
//int a2[] = {3,5,8,11};
vector<int> v1(a1, a1+9);
vector<int> v2(a2, a2+5);
cout << getNth(v1, v2, 5);
return 0;
}
Here is my implementation in C; you can refer to #Jules Olléon's explanation of the algorithm. The idea behind the algorithm is that we maintain i + j = k and find i and j such that a[i-1] < b[j-1] < a[i] (or the other way round). Now, since there are i elements in 'a' smaller than b[j-1], and j-1 elements in 'b' smaller than b[j-1], b[j-1] is the (i + (j-1) + 1) = kth smallest element. To find such i, j the algorithm does a dichotomic (binary) search on the arrays.
#include <limits.h> /* INT_MIN, INT_MAX */

int find_k(int A[], int m, int B[], int n, int k) {
if (m <= 0 )return B[k-1];
else if (n <= 0) return A[k-1];
int i = ( m/double (m + n)) * (k-1);
if (i < m-1 && i<k-1) ++i;
int j = k - 1 - i;
int Ai_1 = (i > 0) ? A[i-1] : INT_MIN, Ai = (i<m)?A[i]:INT_MAX;
int Bj_1 = (j > 0) ? B[j-1] : INT_MIN, Bj = (j<n)?B[j]:INT_MAX;
if (Ai >= Bj_1 && Ai <= Bj) {
return Ai;
} else if (Bj >= Ai_1 && Bj <= Ai) {
return Bj;
}
if (Ai < Bj_1) { // the answer can't be within A[0,...,i]
return find_k(A+i+1, m-i-1, B, n, j);
} else { // the answer can't be within B[0,...,j]
return find_k(A, m, B+j+1, n-j-1, i);
}
}
Here's my solution. The C++ code prints the kth smallest value as well as the number of loop iterations needed to get it, which in my opinion is on the order of log(k). The code, however, requires k to be no larger than the length of the first array, which is a limitation.
#include <iostream>
#include <vector>
#include<math.h>
using namespace std;
template<typename comparable>
comparable kthSmallest(vector<comparable> & a, vector<comparable> & b, int k){
int idx1; // Index in the first array a
int idx2; // Index in the second array b
comparable maxVal, minValPlus;
float iter = k;
int numIterations = 0;
if(k > a.size()){ // Checks if k is larger than the size of first array
cout << " k is larger than the first array" << endl;
return -1;
}
else{ // If all conditions are satisfied, initialize the indexes
idx1 = k - 1;
idx2 = -1;
}
for ( ; ; ){
numIterations ++;
if(idx2 == -1 || b[idx2] <= a[idx1] ){
maxVal = a[idx1];
minValPlus = b[idx2 + 1];
idx1 = idx1 - ceil(iter/2); // Binary search
idx2 = k - idx1 - 2; // Ensures sum of indices = k - 2
}
else{
maxVal = b[idx2];
minValPlus = a[idx1 + 1];
idx2 = idx2 - ceil(iter/2); // Binary search
idx1 = k - idx2 - 2; // Ensures sum of indices = k - 2
}
if(minValPlus >= maxVal){ // Check if kth smallest value has been found
cout << "The number of iterations to find the " << k << "(th) smallest value is " << numIterations << endl;
return maxVal;
}
else
iter/=2; // Reduce search space of binary search
}
}
int main(){
//Test Cases
vector<int> a = {2, 4, 9, 15, 22, 34, 45, 55, 62, 67, 78, 85};
vector<int> b = {1, 3, 6, 8, 11, 13, 15, 20, 56, 67, 89};
// Input k < a.size()
int kthSmallestVal;
for (int k = 1; k <= a.size() ; k++){
kthSmallestVal = kthSmallest<int>( a ,b ,k );
cout << k <<" (th) smallest Value is " << kthSmallestVal << endl << endl << endl;
}
}
Basically, with this approach you can discard k/2 elements at each step.
k will recursively change from k => k/2 => k/4 => ... till it reaches 1.
So, the time complexity is O(log k).
At k = 1, we return the smaller of the two arrays' front elements.
The following code is in Java. Please note that we subtract 1 from the indices in the code because Java array indices start from 0 and not 1; e.g. k = 3 is represented by the element at index 2 of an array.
private int kthElement(int[] arr1, int[] arr2, int k) {
if (k < 1 || k > (arr1.length + arr2.length))
return -1;
return helper(arr1, 0, arr1.length - 1, arr2, 0, arr2.length - 1, k);
}
private int helper(int[] arr1, int low1, int high1, int[] arr2, int low2, int high2, int k) {
if (low1 > high1) {
return arr2[low2 + k - 1];
} else if (low2 > high2) {
return arr1[low1 + k - 1];
}
if (k == 1) {
return Math.min(arr1[low1], arr2[low2]);
}
int i = Math.min(low1 + k / 2, high1 + 1);
int j = Math.min(low2 + k / 2, high2 + 1);
if (arr1[i - 1] > arr2[j - 1]) {
return helper(arr1, low1, high1, arr2, j, high2, k - (j - low2));
} else {
return helper(arr1, i, high1, arr2, low2, high2, k - (i - low1));
}
}
The first pseudo code provided above does not work for many values. For example,
here are two arrays.
int[] a = { 1, 5, 6, 8, 9, 11, 15, 17, 19 };
int[] b = { 4, 7, 8, 13, 15, 18, 20, 24, 26 };
It did not work for k=3 and k=9. I have another solution; it is given below.
private static void traverse(int pt, int len) {
int temp = 0;
if (len == 1) {
int val = 0;
while (k - (pt + 1) - 1 > -1 && M[pt] < N[k - (pt + 1) - 1]) {
if (val == 0)
val = M[pt] < N[k - (pt + 1) - 1] ? N[k - (pt + 1) - 1]
: M[pt];
else {
int t = M[pt] < N[k - (pt + 1) - 1] ? N[k - (pt + 1) - 1]
: M[pt];
val = val < t ? val : t;
}
++pt;
}
if (val == 0)
val = M[pt] < N[k - (pt + 1) - 1] ? N[k - (pt + 1) - 1] : M[pt];
System.out.println(val);
return;
}
temp = len / 2;
if (M[pt + temp - 1] < N[k - (pt + temp) - 1]) {
traverse(pt + temp, temp);
} else {
traverse(pt, temp);
}
}
But... it is also not working for k=5. There is an even/odd catch with k which keeps it from being simple.
public class KthSmallestInSortedArray {
public static void main(String[] args) {
int a1[] = {2, 3, 10, 11, 43, 56},
a2[] = {12, 13, 14, 24, 34, 36},
k = 4;
System.out.println(findKthElement(a1, a2, k));
}
private static int findKthElement(int a1[], int a2[], int k) {
/** Checking k must less than sum of length of both array **/
if (a1.length + a2.length < k) {
throw new IllegalArgumentException();
}
/** K must be greater than zero **/
if (k <= 0) {
throw new IllegalArgumentException();
}
/**
* Finding begin, l and end such that
* begin <= l < end
* a1[0].....a1[l-1] and
* a2[0]....a2[k-l-1] are the smallest k numbers
*/
int begin = Math.max(0, k - a2.length);
int end = Math.min(a1.length, k);
while (begin < end) {
int l = begin + (end - begin) / 2;
/** Can we include a1[l] in the k smallest numbers */
if ((l < a1.length) &&
(k - l > 0) &&
(a1[l] < a2[k - l - 1])) {
begin = l + 1;
} else if ((l > 0) &&
(k - l < a2.length) &&
(a1[l - 1] > a2[k - l])) {
/**
* This is the case where we can discard
* a[l-1] from the set of k smallest numbers
*/
end = l;
} else {
/**
* We found our answer since both inequalities were
* false
*/
begin = l;
break;
}
}
if (begin == 0) {
return a2[k - 1];
} else if (begin == k) {
return a1[k - 1];
} else {
return Math.max(a1[begin - 1], a2[k - begin - 1]);
}
}
}
Here is my solution in Java. I will try to optimize it further.
public class FindKLargestTwoSortedArray {
public static void main(String[] args) {
int[] arr1 = { 10, 20, 40, 80 };
int[] arr2 = { 15, 35, 50, 75 };
FindKLargestTwoSortedArray(arr1, 0, arr1.length - 1, arr2, 0,
arr2.length - 1, 6);
}
public static void FindKLargestTwoSortedArray(int[] arr1, int start1,
int end1, int[] arr2, int start2, int end2, int k) {
if ((start1 <= end1 && start1 >= 0 && end1 < arr1.length)
&& (start2 <= end2 && start2 >= 0 && end2 < arr2.length)) {
int midIndex1 = (start1 + (k - 1) / 2);
midIndex1 = midIndex1 >= arr1.length ? arr1.length - 1 : midIndex1;
int midIndex2 = (start2 + (k - 1) / 2);
midIndex2 = midIndex2 >= arr2.length ? arr2.length - 1 : midIndex2;
if (arr1[midIndex1] == arr2[midIndex2]) {
System.out.println("element is " + arr1[midIndex1]);
} else if (arr1[midIndex1] < arr2[midIndex2]) {
if (k == 1) {
System.out.println("element is " + arr1[midIndex1]);
return;
} else if (k == 2) {
System.out.println("element is " + arr2[midIndex2]);
return;
}else if (midIndex1 == arr1.length-1 || midIndex2 == arr2.length-1 ) {
if(k==(arr1.length+arr2.length)){
System.out.println("element is " + arr2[midIndex2]);
return;
}else if(k==(arr1.length+arr2.length)-1){
System.out.println("element is " + arr1[midIndex1]);
return;
}
}
int remainingElementToSearch = k - (midIndex1-start1);
FindKLargestTwoSortedArray(
arr1,
midIndex1,
(midIndex1 + remainingElementToSearch) >= arr1.length ? arr1.length-1
: (midIndex1 + remainingElementToSearch), arr2,
start2, midIndex2, remainingElementToSearch);
} else if (arr1[midIndex1] > arr2[midIndex2]) {
FindKLargestTwoSortedArray(arr2, start2, end2, arr1, start1,
end1, k);
}
} else {
return;
}
}
}
This is inspired by the algorithm in a wonderful YouTube video.
Link to code complexity (log(n)+log(m))
Link to Code (log(n)*log(m))
Implementation of (log(n)+log(m)) solution
I would like to add my explanation to the problem.
This is a classic problem where we have to use the fact that the two arrays are sorted.
We have been given two sorted arrays, arr1 of size sz1 and arr2 of size sz2.
a) Checking if k is valid
If k > (sz1 + sz2), then we cannot find the kth smallest element in the union of both sorted arrays, so we return invalid data.
b) Managing edge cases
Now, if the above condition is false and we have a valid and feasible value of k, we pad both arrays with -infinity values at the front and +infinity values at the end, to cover the edge cases of k = 1, 2 and k = (sz1+sz2-1), (sz1+sz2), etc.
Now both arrays have size (sz1+2) and (sz2+2) respectively.
Main Algorithm
Now we will do a binary search on arr1, looking for an index i, startIndex <= i <= endIndex, such that if we find the corresponding index j in arr2 using the constraint {(i + j) = k}, then:
if (arr2[j-1] < arr1[i] < arr2[j]), then arr1[i] is the kth smallest (Case 1)
else if (arr1[i-1] < arr2[j] < arr1[i]), then arr2[j] is the kth smallest (Case 2)
else it signifies either arr1[i] < arr2[j-1] < arr2[j] (Case 3)
or arr2[j-1] < arr2[j] < arr1[i] (Case 4)
We know that the kth smallest element has (k-1) elements smaller than it in the union of both arrays. So:
In Case 1, we have ensured that there are a total of (k-1) elements smaller than arr1[i]: the number of elements smaller than arr1[i] in arr1 is i-1, and since (arr2[j-1] < arr1[i] < arr2[j]) the number of elements smaller than arr1[i] in arr2 is j-1, where j is found using (i-1) + (j-1) = (k-1). So the kth smallest element is arr1[i].
But the answer may not always come from the first array, i.e. arr1, so we also check Case 2, which is satisfied analogously to Case 1 because (i-1) + (j-1) = (k-1). Now, if we have (arr1[i-1] < arr2[j] < arr1[i]), there are a total of k-1 elements smaller than arr2[j] in the union of both arrays, so it is the kth smallest element.
In Case 3, to turn it into Case 1 or Case 2, we need to increment i (j is then found using the constraint {(i + j) = k}), i.e. in the binary search we move to the right part, i.e. make startIndex = middleIndex.
In Case 4, to turn it into Case 1 or Case 2, we need to decrement i (j is then found using the constraint {(i + j) = k}), i.e. in the binary search we move to the left part, i.e. make endIndex = middleIndex.
Now, how do we decide startIndex and endIndex at the beginning of the binary search over arr1? We take startIndex = 1, and for endIndex:
if k > sz1, endIndex = (sz1+1), else endIndex = k;
because if k is greater than the size of the first array we may have to do the binary search over the entire array arr1; otherwise we only need its first k elements, because the remaining sz1-k elements can never contribute to the kth smallest.
CODE Shown Below
// Complexity O(log(n)+log(m))
#include<bits/stdc++.h>
using namespace std;
#define f(i,x,y) for(int i = (x);i < (y);++i)
#define F(i,x,y) for(int i = (x);i > (y);--i)
int max(int a,int b){return (a > b?a:b);}
int min(int a,int b){return (a < b?a:b);}
int mod(int a){return (a > 0?a:((-1)*(a)));}
#define INF 1000000
int func(int *arr1,int *arr2,int sz1,int sz2,int k)
{
if((k <= (sz1+sz2))&&(k > 0))
{
int s = 1,e,i,j;
if(k > sz1)e = sz1+1;
else e = k;
while((e-s)>1)
{
i = (e+s)/2;
j = ((k-1)-(i-1));
j++;
if(j > (sz2+1)){s = i;}
else if((arr1[i] >= arr2[j-1])&&(arr1[i] <= arr2[j]))return arr1[i];
else if((arr2[j] >= arr1[i-1])&&(arr2[j] <= arr1[i]))return arr2[j];
else if(arr1[i] < arr2[j-1]){s = i;}
else if(arr1[i] > arr2[j]){e = i;}
else {;}
}
i = e,j = ((k-1)-(i-1));j++;
if((arr1[i] >= arr2[j-1])&&(arr1[i] <= arr2[j]))return arr1[i];
else if((arr2[j] >= arr1[i-1])&&(arr2[j] <= arr1[i]))return arr2[j];
else
{
i = s,j = ((k-1)-(i-1));j++;
if((arr1[i] >= arr2[j-1])&&(arr1[i] <= arr2[j]))return arr1[i];
else return arr2[j];
}
}
else
{
cout << "Data Invalid" << endl;
return -INF;
}
}
int main()
{
int n,m,k;
cin >> n >> m >> k;
int arr1[n+2];
int arr2[m+2];
f(i,1,n+1)
cin >> arr1[i];
f(i,1,m+1)
cin >> arr2[i];
arr1[0] = -INF;
arr2[0] = -INF;
arr1[n+1] = +INF;
arr2[m+1] = +INF;
int val = func(arr1,arr2,n,m,k);
if(val != -INF)cout << val << endl;
return 0;
}
For Solution of complexity (log(n)*log(m))
I just missed taking advantage of the fact that for each i, j can be found directly using the constraint {(i-1)+(j-1) = (k-1)}. So for each i I was applying another binary search on the second array to find j such that arr2[j] <= arr1[i]; this solution can therefore be optimized further.
#include <bits/stdc++.h>
using namespace std;
int findKthElement(int a[],int start1,int end1,int b[],int start2,int end2,int k){
if(start1 >= end1)return b[start2+k-1];
if(start2 >= end2)return a[start1+k-1];
if(k==1)return min(a[start1],b[start2]);
int aMax = INT_MAX;
int bMax = INT_MAX;
if(start1+k/2-1 < end1) aMax = a[start1 + k/2 - 1];
if(start2+k/2-1 < end2) bMax = b[start2 + k/2 - 1];
if(aMax > bMax){
return findKthElement(a,start1,end1,b,start2+k/2,end2,k-k/2);
}
else{
return findKthElement(a,start1 + k/2,end1,b,start2,end2,k-k/2);
}
}
int main(void){
int t;
scanf("%d",&t);
while(t--){
int n,m,k;
cout<<"Enter the size of 1st Array"<<endl;
cin>>n;
int arr[n];
cout<<"Enter the Element of 1st Array"<<endl;
for(int i = 0;i<n;i++){
cin>>arr[i];
}
cout<<"Enter the size of 2nd Array"<<endl;
cin>>m;
int arr1[m];
cout<<"Enter the Element of 2nd Array"<<endl;
for(int i = 0;i<m;i++){
cin>>arr1[i];
}
cout<<"Enter The Value of K";
cin>>k;
sort(arr,arr+n);
sort(arr1,arr1+m);
cout<<findKthElement(arr,0,n,arr1,0,m,k)<<endl;
}
return 0;
}
Time complexity is O(log(min(n, m))).
Below is C# code to find the k-th smallest element in the union of two sorted arrays. Time complexity: O(log k).
public static int findKthSmallestElement1(int[] A, int startA, int endA, int[] B, int startB, int endB, int k)
{
int n = endA - startA;
int m = endB - startB;
if (n <= 0)
return B[startB + k - 1];
if (m <= 0)
return A[startA + k - 1];
if (k == 1)
return A[startA] < B[startB] ? A[startA] : B[startB];
int midA = (startA + endA) / 2;
int midB = (startB + endB) / 2;
if (A[midA] <= B[midB])
{
if (n / 2 + m / 2 + 1 >= k)
return findKthSmallestElement1(A, startA, endA, B, startB, midB, k);
else
return findKthSmallestElement1(A, midA + 1, endA, B, startB, endB, k - n / 2 - 1);
}
else
{
if (n / 2 + m / 2 + 1 >= k)
return findKthSmallestElement1(A, startA, midA, B, startB, endB, k);
else
return findKthSmallestElement1(A, startA, endA, B, midB + 1, endB, k - m / 2 - 1);
}
}
Check this code.
import math
def findkthsmallest():
    A=[1,5,10,22,30,35,75,125,150,175,200]
    B=[15,16,20,22,25,30,100,155,160,170]
    lM=0
    lN=0
    hM=len(A)-1
    hN=len(B)-1
    k=17
    while True:
        if k==1:
            return min(A[lM],B[lN])
        cM=hM-lM+1
        cN=hN-lN+1
        tmp = cM/float(cM+cN)
        iM=int(math.ceil(tmp*k))
        iN=k-iM
        iM=lM+iM-1
        iN=lN+iN-1
        if A[iM] >= B[iN]:
            if iN == hN or A[iM] < B[iN+1]:
                return A[iM]
            else:
                k = k - (iN-lN+1)
                lN=iN+1
                hM=iM-1
        if B[iN] >= A[iM]:
            if iM == hM or B[iN] < A[iM+1]:
                return B[iN]
            else:
                k = k - (iM-lM+1)
                lM=iM+1
                hN=iN-1
        if hM < lM:
            return B[lN+k-1]
        if hN < lN:
            return A[lM+k-1]
if __name__ == '__main__':
    print findkthsmallest();

Maximum span in two arrays with equal sum

This is a programming puzzle. We have two arrays A and B; both contain 0's and 1's only.
We have to find two indices i, j such that
a[i] + a[i+1] + ... + a[j] = b[i] + b[i+1] + ... + b[j].
Also, we have to maximize the difference between i and j. I am looking for an O(n) solution.
I found an O(n^2) solution but am not getting O(n).
The best solution is O(n).
First let c[i] = a[i] - b[i]; then the question becomes: find i, j for which sum(c[i], c[i+1], ..., c[j]) = 0 and j - i is maximal.
Second, let d[0] = 0 and d[i + 1] = d[i] + c[i] for i >= 0; then the question becomes: find i, j for which d[j + 1] == d[i] and j - i is maximal.
The value of d is in the range [-n, n], so we can use the following code to find the answer:
answer = 0, answer_i = 0, answer_j = 0
sumHash[2n + 1] set to -1
for (x <- 0 to n) {
if (sumHash[d[x]] == -1) {
sumHash[d[x]] = x
} else {
y = sumHash[d[x]]
// find one answer (y, x), compare to current best
if (x - y > answer) {
answer = x - y
answer_i = y
answer_j = x
}
}
}
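Here is a minimal runnable sketch of that pseudocode (my own, in Java, assuming two 0/1 int arrays of equal length n); the difference is shifted by n so that negative values can be used as array indices.

class MaxEqualSumSpan {
    // firstSeen[v] remembers the smallest prefix length at which the running
    // difference v (offset by n) occurred; two equal differences bound a span
    // whose a-sum and b-sum are equal.
    static int longestSpan(int[] a, int[] b) {
        int n = a.length;
        int[] firstSeen = new int[2 * n + 1];
        java.util.Arrays.fill(firstSeen, -1);
        firstSeen[n] = 0;                     // the empty prefix has difference 0
        int diff = 0, best = 0;
        for (int x = 1; x <= n; x++) {
            diff += a[x - 1] - b[x - 1];      // d[x] in the notation above
            int idx = diff + n;
            if (firstSeen[idx] == -1) {
                firstSeen[idx] = x;           // first time this difference appears
            } else if (x - firstSeen[idx] > best) {
                best = x - firstSeen[idx];    // span covers indices firstSeen[idx] .. x-1
            }
        }
        return best;                          // length of the longest equal-sum span
    }
}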
Here is an O(n) solution.
I use the fact that sum[i..j] = sum[j] - sum[i - 1].
I keep the leftmost position of each found sum.
int convertToPositiveIndex(int index) {
return index + N;
}
int mostLeft[2 * N + 1];
memset(mostLeft, -1, sizeof(mostLeft));
int bestLen = 0, bestStart = -1, bestEnd = -1;
int sumA = 0, sumB = 0;
for (int i = 0; i < N; i++) {
sumA += A[i];
sumB += B[i];
int diff = sumA - sumB;
int diffIndex = convertToPositiveIndex(diff);
if (mostLeft[diffIndex] != -1) {
//we have found the sequence mostLeft[diffIndex] + 1 ... i
//now just compare it with the best one found so far
int currentLen = i - mostLeft[diffIndex];
if (currentLen > bestLen) {
bestLen = currentLen;
bestStart = mostLeft[diffIndex] + 1;
bestEnd = i;
}
}
if (mostLeft[diffIndex] == -1) {
mostLeft[diffIndex] = i;
}
}
cout << bestStart << " " << bestEnd << " " << bestLen << endl;
P.S. mostLeft array is 2 * N + 1, because of the negatives.
This is a fairly straightforward O(N) solution:
let sa = [s1, s2, s3.. sn] where si = sum(a[0:i]) and similar for sb
then sum(a[i:j]) = sa[j]-sa[i]
and sum(b[i:j]) = sb[j] - sb[i]
Note that because the sums only increase by 1 each time, we know 0 <= sb[N], sa[N] <=N
difference_array = [d1, d2, .. dn] where di = sb[i] - sa[i] <= N
note if di = dj, then sb[i] - sa[i] = sb[j] - sa[j] which means they have the same sum (rearrange to get sum(b[i:j]) and sum(a[i:j]) from above).
Now for each difference we need its max position occurrence and min position occurrence
Now for each difference di, the gap between its max and min positions gives an i..j section of equal sum. Find the maximum such max - min value and you're done.
sample code that should work:
a = []
b = []
sa = [0]
sb = [0]
for i in a:
    sa.append(sa[-1] + i)
for i in b:
    sb.append(sb[-1] + i)
diff = [sai-sbi for sai, sbi in zip(sa, sb)]
min_diff_pos = {}
max_diff_pos = {}
for pos, d in enumerate(diff):
    if d in min_diff_pos:
        max_diff_pos[d] = pos
    else:
        min_diff_pos[d] = pos
        max_diff_pos[d] = pos
ans = max(max_diff_pos[d] - min_diff_pos[d] for d in diff)
Basically, my solution goes like this.
Keep a variable that tracks the running difference from the beginning.
int current = 0;
for index from 0 to length
if a[i] == 0 && b[i] == 1
current--;
else if a[i] == 1 && b[i] == 0
current++;
else
// nothing;
Find the positions where the variable has the same value, which indicates that there are equal 1s and 0s in between.
Pseudo Code:
Here is my primary solution:
int length = min (a.length, b.length);
int start[] = {-1 ... -1}; // from -length to length
start[0] = -1;
int count[] = {0 ... 0}; // from -length to length
int current = 0;
for (int i = 0; i < length; i++) {
if (a[i] == 0 && b[i] == 1)
current--;
else if (a[i] == 1 && b[i] == 0)
current++;
else
; // nothing
if (start[current] == -1) // index can go negative here, take care
start[current] = i;
else
count[current] = i - start[current];
}
return max_in(count[]);
