I have been asked to write an algorithm for this problem: given an array A and a value K, determine whether there are two elements U and L in the array such that U + L = K.
I wrote my algorithm like this:
while (first < last)
{
    if (arr[first] + arr[last] == k)
        return true;
    else if (arr[first] + arr[last] < k)
        last = last - 1;
    else
        first++;
}
return false;
But what is the running time of this algorithm? Is it O(n log n)? If yes, why? And if not, how can I implement it in O(n log n)?
The running time of your algorithm is O(N), since in the worst case you iterate over the whole array once: every iteration moves either first or last by one step, so there are at most N iterations.
However, your algorithm as written does not solve the problem. For example, consider {9, 1, 3, 4, 2}: if k were 12, it would return false even though 9 + 3 = 12. The two-pointer approach requires the input array to be sorted first before it is passed to the algorithm, and sorting takes O(N log N) in the worst case.
A much faster approach, however, is to use something like a HashMap to solve the problem in O(N) time, without sorting.
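A minimal sketch of that idea in Python (find_pair_with_sum is a name of my choosing; a plain set suffices when only the values are needed):

def find_pair_with_sum(arr, k):
    seen = set()                  # values encountered so far
    for x in arr:
        if k - x in seen:         # some earlier U with U + x == k
            return (k - x, x)     # the pair (U, L)
        seen.add(x)
    return None

For {9, 1, 3, 4, 2} and k = 12 this returns (9, 3) after a single pass.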
Here is a small example of the algorithm in Python where the result is False even though there are two elements in the list that fulfil U + L = k:
def testArray(a, k):
    first = 0
    last = len(a) - 1
    while first < last:
        print(first, last)
        if a[first] + a[last] == k:
            return True
        elif a[first] + a[last] < k:
            last = last - 1
        else:
            first = first + 1
    return False

a = [3, 1, 5, 3, 6]
print(testArray(a, 6))
I was trying to develop a solution that decreases the time complexity of an O(n^2) or O(n*m) algorithm to O(n) or O(n+m). For example:
let arr = [[1, 2], [1, 2, 3], [1, 2, 3, 4, 5, 6, 7, 8]];
let x = 0;
let len = getArrayMaxLength(arr); // maximum length of the inner arrays, which in this example is 8
for (let i = 0; i < len && x < arr.length; ++i) {
    console.log(arr[x][i % arr[x].length]);
    if ((i + 1) % arr[x].length == 0) {
        ++x;
        if (x != arr.length) i = -1; // finished an inner array: restart i for the next one
    }
}
I'm having trouble determining the Big-O of this algorithm, as I have rarely dealt with loops with multiple conditions. I've read this and this and still don't quite get it. From what I understand, the time complexity will be O(n+m), where n is arr.length and m is len, the output of the function getArrayMaxLength described above.
So, to sum things up: what is the time complexity of this algorithm?
Thank you.
If the body of your loop contains lots of conditionals, but none of them adds to the number of repetitions of the outer for-loop and none performs computation whose time varies with some input, then you should treat the body as constant-time, so it does not influence the final big-O complexity.
Your assumption of O(m + n) is correct.
Note that every time the for loop reaches the end of an inner array, the counter (variable i) resets and you increase x, passing on to the next inner array. That means you go through every single element of your two-dimensional array, which the program output confirms.
Although O(n+m) may seem plausible, it is actually a bad bound. The loop visits every element of every inner array, so the total work is the sum of the inner lengths (2 + 3 + 8 = 13 iterations in the example above), which can be much larger than n + m. Imagine that all the subarrays have the same length, so that n = m: you then visit n elements in each of the n inner arrays, and the total complexity is quadratic (n*n), not linear. When you're working with big arrays this difference becomes very obvious.
In conclusion, the time complexity is O(n*m).
It is trivial to write an O(n!) algorithm with recursion, but can someone give me an example of an O(n!) algorithm using just iteration, without recursion?
A trivial way is to count from 1 to n!, where you first compute n! as a product of the numbers 1 through n.
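A minimal sketch in Python (count_to_factorial is a hypothetical name; the counting loop stands in for any constant-time work):

def count_to_factorial(n):
    # compute n! iteratively: 1 * 2 * ... * n
    fact = 1
    for i in range(2, n + 1):
        fact *= i
    # now do constant-time work n! times: O(n!) iterations, no recursion
    count = 0
    for _ in range(fact):
        count += 1
    return count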
The problem of the N-Queens, solved by a brute-force approach, takes O(N!) time.
The problem is basically to position N queens on an N x N chessboard such that no queen can capture another.
The brute-force solution observes that, in the first step, you can try N slots in the first column, then N-1 slots in the second, ..., until you test the only remaining position in the N-th column; hence you have O(N!) candidate placements.
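Here is a sketch of that brute force in Python (n_queens_brute_force is my name for it): it enumerates one queen per column with all rows distinct, i.e., exactly the N! candidates described above, and checks the diagonals. itertools.permutations enumerates them iteratively.

from itertools import permutations

def n_queens_brute_force(n):
    # perm[c] is the row of the queen in column c; rows are distinct by
    # construction, so only diagonal attacks need to be checked
    solutions = []
    for perm in permutations(range(n)):      # n! candidate placements
        if all(abs(perm[a] - perm[b]) != a - b
               for a in range(n) for b in range(a)):
            solutions.append(perm)
    return solutions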
Could you please show how you would generate all possible permutations without recursion?
Sure. Here's a Python program that generates all possible permutations with recursion:
N = 4
arr = [0] * N

def permute(n):
    if n == N + 1:
        print(arr)
    else:
        for i in range(N):
            if arr[i] == 0:
                arr[i] = n
                permute(n + 1)
                arr[i] = 0

permute(1)
And here's one that does so without recursion:
N = 4
arr = [0] * N
stack = [(1, 0, "do-it")]

while stack:
    n, i, state = stack.pop()
    if state == "do-it":
        if n == N + 1:
            print(arr)
        else:
            if arr[i] == 0:
                arr[i] = n
                stack.append((n, i, "cleanup"))
                stack.append((n + 1, 0, "do-it"))
            else:
                stack.append((n, i, "no-cleanup"))
    if state == "cleanup":
        arr[i] = 0
    if state in ["cleanup", "no-cleanup"]:
        if i + 1 < N:
            stack.append((n, i + 1, "do-it"))
The trick is to notice that recursion uses the call stack, so if you want to avoid recursion, you roll your own stack, each element of which captures the essence of your program state (here: the value n being placed, the slot index i, and a state tag telling what remains to be done).
I am not sure how to do this. Given a list of numbers and a number k, return all pairs of numbers from the list that add up to k, passing through the list only once.
For example, given [10, 15, 3, 7] and k = 17, the program should return 10 + 7.
How do you find and return every such pair while only going through the list once?
Use a set to keep track of what you've seen. Runtime: O(N), space: O(N).
def twoAddToK(nums, k):
    seen = set()
    N = len(nums)
    for i in range(N):
        if k - nums[i] in seen:
            return True
        seen.add(nums[i])
    return False
As an alternative to Shawn's code, which uses a set, there is also the option of sorting the list in O(N log N) time (and possibly no extra space, if you are allowed to overwrite the original input), and then applying an O(N) two-pointer pass over the sorted list.
While asymptotic complexity slightly favors hash sets in terms of time, since O(N) beats O(N log N), I am ready to bet that sorting plus a single two-pointer pass is considerably faster in practice.
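A minimal sketch of that sort-then-scan approach (two_add_to_k_sorted is my name for it; it sorts a copy, but you can sort in place if overwriting the input is allowed):

def two_add_to_k_sorted(nums, k):
    nums = sorted(nums)              # O(N log N)
    lo, hi = 0, len(nums) - 1
    while lo < hi:                   # O(N): each step moves one pointer
        s = nums[lo] + nums[hi]
        if s == k:
            return True
        if s < k:
            lo += 1                  # sum too small: take a larger element
        else:
            hi -= 1                  # sum too large: take a smaller element
    return False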
I had a job interview a few weeks ago and I was asked to design a divide and conquer algorithm. I could not solve the problem, but they just called me for a second interview! Here is the question:
We are given as input two n-element arrays A[0..n-1] and B[0..n-1] (which are not necessarily sorted) of integers, and an integer value. Give an O(n log n) divide and conquer algorithm that determines if there exist distinct values i, j (that is, i != j) such that A[i] + B[j] = value. Your algorithm should return True if such i, j exist, and return False otherwise. You may assume that the elements in A are distinct, and the elements in B are distinct.
Can anybody solve the problem? Thanks!
My approach:
Sort one of the arrays, say A, with Merge Sort, which is a divide and conquer algorithm.
Then, for each element of B, search for (value - element of B) in A by binary search; again, this is a divide and conquer algorithm.
If you find (value - element of B) in A, the two elements form a pair such that (element of A) + (element of B) = value.
As for time complexity: A has N elements, so Merge Sort takes O(N log N), and we do a binary search for each of the N elements of B, which takes O(N log N) in total. So the total time complexity is O(N log N).
Since you are required to check i != j when A[i] + B[j] = value, you can use an N x 2 array in which each element is paired with its original index, and sort by the stored value. Then, when you find a matching element, compare the original indexes and return the result accordingly.
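A sketch of this approach in Python using the standard bisect module (pair_sum_exists is a name of my choosing); since the elements of A are distinct, at most one index of A can match a given element of B:

from bisect import bisect_left

def pair_sum_exists(A, B, value):
    # pair each element of A with its original index, then sort by value
    pairs = sorted((a, i) for i, a in enumerate(A))
    keys = [a for a, _ in pairs]                # values only, for bisect
    for j, b in enumerate(B):
        pos = bisect_left(keys, value - b)      # O(log N) binary search
        if pos < len(keys) and keys[pos] == value - b:
            if pairs[pos][1] != j:              # enforce i != j
                return True
            # A's elements are distinct, so no other i matches this j
    return False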
The following algorithm does not use Divide and Conquer, but it is another solution.
You need to sort both arrays while keeping track of each element's original index, e.g., by sorting arrays of (element, index) pairs. This takes O(n log n) time.
Then you can apply a merge-style scan to check whether there are two elements such that A[i] + B[j] = value. This takes O(n).
The overall time complexity is O(n log n).
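One possible version of that scan in Python (merge_check is a hypothetical name): sort both arrays ascending together with their original indexes, then walk A upward and B downward:

def merge_check(A, B, value):
    As = sorted((a, i) for i, a in enumerate(A))   # ascending by value
    Bs = sorted((b, j) for j, b in enumerate(B))
    i, j = 0, len(Bs) - 1
    while i < len(As) and j >= 0:
        s = As[i][0] + Bs[j][0]
        if s == value and As[i][1] != Bs[j][1]:
            return True
        if s < value:
            i += 1        # need a larger sum: advance in A
        else:
            j -= 1        # sum too large (or indexes collide): retreat in B
    return False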
I suggest using hashing. Even if it's not the way you are supposed to solve the problem, it's worth mentioning, since hashing has a better time complexity, O(n) vs. O(n log n), and is therefore more efficient:
Turn A into a hash set (or dictionary, if we want the index i): O(n).
Scan B and check whether value - B[j] is in the hash set (dictionary): O(n).
So you have an O(n) + O(n) = O(n) algorithm, which is better than the required O(n log n); however, the solution is NOT Divide and Conquer.
Sample C# implementation
int[] A = new int[] { 7, 9, 5, 3, 47, 89, 1 };
int[] B = new int[] { 5, 7, 3, 4, 21, 59, 0 };

int value = 106; // 47 + 59 == A[4] + B[5]

// Turn A into a dictionary: key = item's value; value = item's index
var dict = A
    .Select((val, index) => new { v = val, i = index })
    .ToDictionary(item => item.v, item => item.i);

int i = -1;
int j = -1;

// Scan B array
for (int k = 0; k < B.Length; ++k) {
    if (dict.TryGetValue(value - B[k], out i)) {
        // Solution found: {i, j}
        j = k;
        // break if any one solution is enough;
        // comment out "break" and scan further if you want all pairs
        break;
    }
}

Console.Write(j >= 0 ? $"{i} {j}" : "No solution");
Seems hard to achieve without sorting.
If you leave the arrays unsorted, checking for the existence of A[i] + B[j] = value takes Ω(n) time for fixed i, so checking all i takes Θ(n²), unless you find a trick to put some order in B.
Balanced Divide & Conquer on the unsorted arrays doesn't seem any better: if you divide A and B into two halves, the solution can lie in any of Al/Bl, Al/Br, Ar/Bl, Ar/Br, and this yields the recurrence T(n) = 4T(n/2), which has a quadratic solution.
If sorting is allowed, the solution by Sanket Makani is a possibility, but you can do better in terms of time complexity for the search phase.
Indeed, assume A and B are now sorted and consider the 2D function A[i] + B[j], which is monotonic in both directions i and j. Then the domain A[i] + B[j] ≤ value is limited by a monotonic curve j = f(i), or equivalently i = g(j). But strict equality A[i] + B[j] = value must be checked exhaustively for all points of the curve, and one cannot avoid evaluating f everywhere in the worst case.
Starting from i = 0, you obtain f(0) by dichotomic search. Then you can follow the border curve incrementally: you will perform at most n steps in the i direction and at most n steps in the j direction, so the complexity of the search phase remains bounded by O(n), which is optimal.
[Figure: an example showing the areas with a sum below and above the target value; there are two matches.]
This optimal solution has little to do with Divide & Conquer. It might be possible to design a variant based on evaluating the sum at a central point, which would allow discarding a whole quadrant, but that would be pretty artificial.
If I had a simple recursive algorithm, such as:
numberOfMatches(A, x, i):  // A is an array of values, x is a single value;
                           // the algorithm searches A[1] through A[i]
    if i == 0:
        return 0
    if A[i] == x:
        count = numberOfMatches(A, x, i-1) + 1
    else:
        count = numberOfMatches(A, x, i-1)
    return count
How would I go about finding the running time (which I know from common sense is O(n)) using recurrences?
I have got T(n) = T(n-1), because the list to be searched decreases by 1 each time; however, I don't think this is right.
I also need to solve the recurrence by expanding it, and I don't even know where to start with that.
T(n) = T(n-1) + 1: each call does a constant amount of work (one comparison) plus a recursive call on input of size n-1.
By induction you can easily prove that this solves to O(n).
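Alternatively, solving the recurrence by expansion, assuming a constant base case T(0) = c (which is what the i == 0 branch contributes):

T(n) = T(n-1) + 1
     = (T(n-2) + 1) + 1 = T(n-2) + 2
     = T(n-3) + 3
     ...
     = T(n-k) + k    (after k expansions)
     = T(0) + n      (setting k = n)
     = c + n = O(n)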