Determining recurrence relation from recursive algorithm - algorithm

I got the following recursive algorithm and was asked to find its recurrence relation.
int search(int A[], int key, int min, int max)
{
    if (max < min) // base case
        return KEY_NOT_FOUND;
    else
    {
        int mid = midpoint(min, max);
        if (A[mid] > key)
            return search(A, key, min, mid - 1);
        else if (A[mid] < key)
            return search(A, key, mid + 1, max);
        else
            return mid; // key found
    }
}
The solution is T(n) = T(n/2) + 1, but I am not sure why it is T(n/2) and why it is + 1. Is the + 1 because the non-recursive part of each call takes constant time, or something else? Could anyone explain the solution?

Your code is an implementation of binary search. At each recursive call you split the sorted array in half: you search the left part if the key is smaller than the middle element, search the right part if the key is bigger than the middle element, and stop if the middle element is exactly what you are looking for.
Now if n denotes the number of elements in your sorted array, each split produces two halves of (almost) equal size, so the problem size decreases to n/2. Since either way you call the search function only once, on an array of size n/2, you can say that:
T(n) = T(n/2) + O(1)
The O(1) term accounts for the constant amount of work done outside the recursive call: computing the midpoint and the comparisons that decide which branch to take.
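Unrolling the recurrence makes the bound explicit: T(n) = T(n/2) + 1 = T(n/4) + 2 = ... = T(n/2^k) + k. The base case is reached when n/2^k = 1, i.e. after k = log2(n) halvings, so T(n) = T(1) + log2(n) = O(log n), which is the familiar running time of binary search.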

Related

How to find which position has prefix sum M in a BIT?

Suppose I have created a Binary Indexed Tree holding prefix sums over an array of length N. The main array contains only 0s and 1s. Now I want to find which index has a prefix sum of M (that means it contains exactly M 1s up to that index).
For example, my array is a[] = {1, 0, 0, 1, 1};
the prefix sums would look like {1, 1, 1, 2, 3};
so index 3 (0-based) has a prefix sum of 2.
How can I find this index with a BIT?
Thanks in advance.
Why can't you do a binary search for that index? It will take O(log n * log n) time. Here is a simple implementation:
int findIndex(int sum) {
    int l = 1, r = n;
    while (l <= r) {
        int mid = (l + r) >> 1;
        int This = read(mid);
        if (This == sum) return mid;
        else if (This < sum) l = mid + 1;
        else r = mid - 1;
    }
    return -1;
}
I used the read(x) function, which should return the sum of the interval [1, x] in O(log n) time. The overall complexity will be O(log^2 n).
Hope it helps.
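For reference, the answer doesn't show read(x); a standard Fenwick-tree version would look like the sketch below (my own code, assuming a 1-indexed int[] tree field holding the BIT):

static int[] tree; // 1-indexed Fenwick tree, assumed to be built elsewhere

// Prefix sum of [1, x]: drop the lowest set bit of x at each step, so O(log n) steps.
static int read(int x) {
    int sum = 0;
    for (; x > 0; x -= x & (-x))
        sum += tree[x];
    return sum;
}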
If the elements of a[n] are non-negative (so the prefix-sum array p[n] is non-decreasing), you can locate an index by its prefix sum directly inside the BIT, in the same O(log n) time it takes to query a prefix sum by index. The only difference is that at each level you compare the sum stored in the left part of the current range with the amount you still have to cover, to decide which subtree to search next: if that sum is smaller than the remaining target, subtract it and descend into the right subtree; otherwise descend into the left subtree. Repeat until you reach a node that completes the desired prefix sum, and return its index. The idea is analogous to binary search because the prefix sums are naturally sorted in the BIT. If there are negative values in a[n], this method won't work, since the prefix sums are no longer sorted.
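A sketch of that descent (my own code, not from the answer): it assumes a standard 1-indexed Fenwick tree tree[] built over non-negative values and returns the smallest index whose prefix sum reaches sum, walking down one level of the implicit tree per step.

// Finds the smallest idx with prefix sum >= sum in O(log n); returns -1 if none exists.
// The caller can verify read(idx) == sum if an exact match (exactly M ones) is required.
static int lowerBoundByPrefixSum(int[] tree, int n, int sum) {
    int pos = 0;
    for (int pw = Integer.highestOneBit(n); pw > 0; pw >>= 1) {
        // tree[pos + pw] covers the range (pos, pos + pw]; take it only while the
        // accumulated sum stays strictly below the target
        if (pos + pw <= n && tree[pos + pw] < sum) {
            pos += pw;
            sum -= tree[pos];
        }
    }
    return (pos + 1 <= n) ? pos + 1 : -1; // pos is the last index with prefix sum < target
}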

Divide and Conquer Algorithms- Binary search variant

This is a practice question for understanding divide-and-conquer algorithms.
You are given an array of N sorted integers. All the elements are distinct except one
element is repeated twice. Design an O(log N) algorithm to find that element.
I get that the array needs to be divided and then checked whether an equal counterpart appears at the next index, some variant of binary search, I believe. But I can't find any solution or guidance for that.
You cannot do it in O(log n) time in general, because at any step, even if you divide the array into two parts, you cannot decide which part to keep for further processing and which to discard.
On the other hand, if the array consists of consecutive numbers, then by looking at an index and the value stored there we can decide whether the duplicate is in the left half or the right half of the array.
A divide-and-conquer version would look something like this:
int Twice(int a[], int i, int j) {
    if (i >= j)
        return -1;
    int k = (i + j) / 2;
    if (a[k] == a[k + 1])
        return k;
    if (k > 0 && a[k] == a[k - 1])
        return k - 1;
    int m = Twice(a, i, k - 1);
    int n = Twice(a, k + 1, j);
    return m != -1 ? m : n;
}
int Twice(int a[], int n) {
    return Twice(a, 0, n - 1);
}
But it has complexity O(n). As noted above, it is not possible to find an O(lg n) algorithm for this problem in the general case.
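For the special case mentioned above, where the values are consecutive integers (so a[i] == a[0] + i before the duplicate and a[i] == a[0] + i - 1 after it), an O(log n) search does exist. A minimal sketch under exactly that assumption:

// Works only when the sorted values are consecutive apart from the single repeat.
static int findDuplicateConsecutive(int[] a) {
    int lo = 0, hi = a.length - 1;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == a[0] + mid)
            lo = mid + 1;  // prefix is still "on schedule": the duplicate lies to the right
        else
            hi = mid;      // value lags its index: the duplicate is at mid or to the left
    }
    return a[lo];          // first lagging position holds the repeated value
}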

Search a sorted integer array for an element equal to its index, where A may have duplicate entries

My question is very similar to Q1 and Q2, except that I want to deal with the case where the array may have duplicate entries.
Assume the array A consists of integers sorted in increasing order. If its entries are all distinct, you can do this easily in O(log n) with binary search. But if there are duplicate entries, it's more complicated. Here's my approach:
int binarySearchHelper(const vector<int>& A, int left, int right); // forward declaration

int search(const vector<int>& A) {
    int left = 0, right = A.size() - 1;
    return binarySearchHelper(A, left, right);
}

int binarySearchHelper(const vector<int>& A, int left, int right) {
    int indexFound = -1;
    if (left <= right) {
        int mid = left + (right - left) / 2;
        if (A[mid] == mid) {
            return mid;
        } else {
            if (A[mid] <= right) {
                indexFound = binarySearchHelper(A, mid + 1, right);
            }
            if (indexFound == -1 && A[left] <= mid) {
                indexFound = binarySearchHelper(A, left, mid - 1);
            }
        }
    }
    return indexFound;
}
In the worst case (A has no element equal to its index), binarySearchHelper makes 2 recursive calls with input size halved at each level of recursion, meaning it has a worst-case time complexity of O(n). That's the same as the O(n) approach where you just read through the array in order. Is this really the best you can do? Also, is there a way to measure the algorithm's average time complexity? If not, is there some heuristic for deciding when to use the basic O(n) read-through approach and when to try a recursive approach such as mine?
If A has negative integers, then it's necessary to check the condition if (left <= right) in binarySearchHelper. For example, if A = [-1], the algorithm would recurse from bsh(A, 0, 0) to bsh(A, 1, 0) and then to bsh(A, 0, -1). My intuition leads me to believe the check if (left <= right) is necessary if and only if A has some negative integers. Can anyone help me verify this?
I would take a different approach. First I would eliminate all negative numbers in O(log n), simply by binary searching for the first non-negative number; this is allowed because a negative number can never equal its (non-negative) index. Let's say the index of the first non-negative element is i.
Now I will keep doing the following until I find the element or find that it doesn't exist:
1. If i is not inside A, return false.
2. If i < A[i], set i = A[i]. It would take A[i] - i duplicates for i to 'catch up' to A[i], so we can increment i by A[i] - i, which is equivalent to setting i to A[i]. Go to 1.
3. If i == A[i], return true (and the index, if you want it).
4. Otherwise (A[i] < i), find the first index greater than i such that i <= A[i]. You can do this with a 'binary search from the left': increase i by 1, 2, 4, 8, etc., and then binary search in the last interval where you found such an element. If it doesn't exist, return false.
In the worst case the above is still O(n), but it has tricks that speed it up well beyond that in better cases.
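A minimal sketch of the jump-ahead part of this idea (my own code, not the answerer's; the 'binary search from the left' step is replaced by a plain increment for simplicity, so the worst case stays O(n)):

// Assumes A is sorted in increasing order (duplicates allowed).
// Returns an index i with A[i] == i, or -1 if no such index exists.
static int findFixedPoint(int[] A) {
    int i = 0;  // one could first binary-search to the first non-negative entry, as suggested
    while (i < A.length) {
        if (A[i] == i) return i;  // element equal to its index
        if (A[i] > i) i = A[i];   // no j in [i, A[i]) can satisfy A[j] == j, so jump ahead
        else i++;                 // A[i] < i: simplified fallback, just advance by one
    }
    return -1;
}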

Find second largest element in an array using recursion

As the title says, is there any efficient way to find the second largest element in an array using recursion?
The partition-based selection algorithm (quickselect) is recursive by nature, and it lets you select the k-th element in the array; using it, you can find the answer for any k, including k = n-1 (your case).
This is done in O(n) on average with fairly low constants.
If nothing is known about the array, you can't do better than O(n), whether it's recursive or iterative.
Just walk through the array recursively, carrying the two largest elements found so far and replacing them whenever you encounter larger values.
find_largest(array_begin, largest, secondLargest)
    if (array_begin == NULL)
        return secondLargest
    if (array_begin.value > largest)
        secondLargest = largest
        largest = array_begin.value
    else if (array_begin.value > secondLargest)
        secondLargest = array_begin.value
    return find_largest(array_begin + 1, largest, secondLargest)
largest and secondLargest can initially be set to the minimum you expect to find in the array.
You're right, sorting (at least full sorting) is overkill.
Something in O(n) like this:
int findSecondLargest(int[] arr, int index, int largest, int secondLargest) {
    if (index == arr.length) {
        return secondLargest;
    }
    int element = arr[index];
    if (element > secondLargest) {
        if (element > largest) {
            return findSecondLargest(arr, index + 1, element, largest);
        } else {
            return findSecondLargest(arr, index + 1, largest, element);
        }
    }
    return findSecondLargest(arr, index + 1, largest, secondLargest);
}
public int recurs(int[] data, int ind, int max1, int max2) {
    if (ind < data.length) {
        if (data[ind] > max1) {
            int temp = max1;
            max1 = data[ind];
            max2 = temp;
        } else if (data[ind] > max2) {
            max2 = data[ind];
        }
        return recurs(data, ind + 1, max1, max2);
    } else {
        return max2;
    }
}
To call it:
recurs(dataX, 0, Integer.MIN_VALUE, Integer.MIN_VALUE);
Instinctively, you can just scan the array, comparing each value against the two largest seen so far. Either way you need O(n) to solve the problem, which is fast enough.
Try to avoid recursion when it is not necessary, because it is not free.
If you do it by recursion with a straightforward pairwise scan, you need at most 3n/2 - 2 comparisons. For a better solution, think of this problem as a tournament: a binary tree with the n numbers at its leaves. Finding the largest then takes n - 1 comparisons, and the second largest needs only about log(n) - 1 more, because it must be one of the elements that lost directly to the winner; in total that is roughly n + log(n) - 2 comparisons.
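A rough sketch of that tournament idea (my own code, assuming at least two elements): each winner records the values it beat directly, so after the n - 1 matches the second largest only has to be picked out of the roughly log2(n) values that lost to the overall winner.

import java.util.ArrayList;
import java.util.List;

class TournamentSecondLargest {
    // Winner of a range plus the values that lost directly to it.
    static final class Result {
        int winner;
        List<Integer> beaten = new ArrayList<>();
        Result(int w) { winner = w; }
    }

    // Recursively play matches over a[lo..hi] (inclusive): n - 1 comparisons in total.
    static Result play(int[] a, int lo, int hi) {
        if (lo == hi) return new Result(a[lo]);
        int mid = lo + (hi - lo) / 2;
        Result left = play(a, lo, mid), right = play(a, mid + 1, hi);
        Result win = left.winner >= right.winner ? left : right;
        Result lose = (win == left) ? right : left;
        win.beaten.add(lose.winner);
        return win;
    }

    static int secondLargest(int[] a) {
        Result r = play(a, 0, a.length - 1);
        int second = Integer.MIN_VALUE;
        for (int v : r.beaten)                      // about ceil(log2 n) candidates
            second = Math.max(second, v);
        return second;
    }

    public static void main(String[] args) {
        System.out.println(secondLargest(new int[]{3, 7, 1, 9, 4, 8, 2}));  // prints 8
    }
}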
function findTwoLargestNumber(n, startIndex, largestNumber, secondLargestNumber) {
    if (startIndex == n.length - 1) {
        return [largestNumber, secondLargestNumber];
    }
    if (largestNumber < n[startIndex + 1]) {
        secondLargestNumber = largestNumber;
        largestNumber = n[startIndex + 1];
    } else if (secondLargestNumber < n[startIndex + 1]) {
        secondLargestNumber = n[startIndex + 1];
    }
    return findTwoLargestNumber(n, startIndex + 1, largestNumber, secondLargestNumber);
}

Algorithm to find the smallest non-negative integer that is not in a list

Given a list of integers, how can I best find an integer that is not in the list?
The list can potentially be very large, and the integers might be large (i.e. BigIntegers, not just 32-bit ints).
If it makes any difference, the list is "probably" sorted, i.e. 99% of the time it will be sorted, but I cannot rely on it always being sorted.
Edit -
To clarify, given the list {0, 1, 3, 4, 7}, examples of acceptable solutions would be -2, 2, 8 and 10012, but I would prefer to find the smallest, non-negative solution (i.e. 2) if there is an algorithm that can find it without needing to sort the entire list.
One easy way would be to iterate the list to get the highest value n, then you know that n+1 is not in the list.
Edit:
A method to find the smallest non-negative unused number would be to start from zero and scan the list for that number, starting over and increasing the candidate whenever you find it. To make it more efficient, and to make use of the high probability that the list is sorted, you can move numbers that are smaller than the current candidate to the front of the list, where later passes will skip them.
This method uses the beginning of the list as storage space for those lower numbers; the startIndex variable keeps track of where the relevant numbers start:
public static int GetSmallest(int[] items) {
    int startIndex = 0;
    int result = 0;
    int i = 0;
    while (i < items.Length) {
        if (items[i] == result) {
            result++;
            i = startIndex;
        } else {
            if (items[i] < result) {
                if (i != startIndex) {
                    int temp = items[startIndex];
                    items[startIndex] = items[i];
                    items[i] = temp;
                }
                startIndex++;
            }
            i++;
        }
    }
    return result;
}
I made a performance test where I created lists with 100000 random numbers from 0 to 19999, which makes the average smallest missing number around 150. Over test runs (with 1000 test lists each), the method found the smallest missing number in unsorted lists in 8.2 ms on average, and in sorted lists in 0.32 ms on average.
(I haven't checked what state the method leaves the list in, as it may swap some items. It at least leaves the list containing the same items, and as it moves smaller values toward the front I think it should actually become more sorted with each search.)
If the number doesn't have any restrictions, then you can do a linear search to find the maximum value in the list and return the number that is one larger.
If the number does have restrictions (e.g. max+1 and min-1 could overflow), then you can use a sorting algorithm that works well on partially sorted data. Then go through the list and find the first pair of numbers v_i and v_{i+1} that are not consecutive. Return v_i + 1.
To get the smallest non-negative integer (based on the edit in the question), you can either:
Sort the list using a partial sort as above. Binary search the list for 0. Iterate through the list from this value until you find a "gap" between two numbers. If you get to the end of the list, return the last value + 1.
Insert the values into a hash table. Then iterate from 0 upwards until you find an integer not in the list.
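A minimal sketch of the hash-table option (my own code): it returns the smallest non-negative integer not in the list.

static int smallestMissingNonNegative(int[] list) {
    java.util.Set<Integer> seen = new java.util.HashSet<>();
    for (int v : list)
        if (v >= 0) seen.add(v);        // negative values can never be the answer
    int candidate = 0;
    while (seen.contains(candidate))    // terminates after at most list.length + 1 probes
        candidate++;
    return candidate;
}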
Unless the list is sorted you will have to do a linear search, going item by item, until you find a match or reach the end of the list. If you can guarantee it is sorted, you could use the built-in array BinarySearch method or just roll your own binary search.
Or, like Jason mentioned, there is always the option of using a Hashtable.
"Probably sorted" means you have to treat it as completely unsorted. If, of course, you could guarantee it was sorted, this would be simple: just look at the first or last element and add or subtract 1.
I got 100% in both correctness and performance.
You should use quicksort, which is O(N log N) complexity.
Here you go...
public int solution(int[] A) {
    if (A != null && A.length > 0) {
        quickSort(A, 0, A.length - 1);
    }
    int result = 1;
    if (A.length == 1 && A[0] < 0) {
        return result;
    }
    for (int i = 0; i < A.length; i++) {
        if (A[i] <= 0) {
            continue;
        }
        if (A[i] == result) {
            result++;
        } else if (A[i] < result) {
            continue;
        } else if (A[i] > result) {
            return result;
        }
    }
    return result;
}

private void quickSort(int[] numbers, int low, int high) {
    int i = low, j = high;
    int pivot = numbers[low + (high - low) / 2];
    while (i <= j) {
        while (numbers[i] < pivot) {
            i++;
        }
        while (numbers[j] > pivot) {
            j--;
        }
        if (i <= j) {
            exchange(numbers, i, j);
            i++;
            j--;
        }
    }
    // Recursion
    if (low < j)
        quickSort(numbers, low, j);
    if (i < high)
        quickSort(numbers, i, high);
}

private void exchange(int[] numbers, int i, int j) {
    int temp = numbers[i];
    numbers[i] = numbers[j];
    numbers[j] = temp;
}
Theoretically, find the max and add 1. Assuming you're constrained by the max value of the BigInteger type, sort the list if unsorted, and look for gaps.
Are you looking for an on-line algorithm (since you say the input is arbitrarily large)? If so, take a look at Odds algorithm.
Otherwise, as already suggested, hash the input and mark the corresponding elements of a boolean set (the hash indexes into the set), then scan for the first element that is not marked.
There are several approaches:
Find the biggest int in the list and store it in x; x+1 will not be in the list. The same applies with min() and x-1.
When N is the size of the list, allocate an int array of size (N+31)/32. For each element in the list with value v, where 0 <= v < N, set bit v & 31 of the integer at array index v/32; ignore values outside that range. Now search for the first array item that is != 0xFFFFFFFF (for 32-bit integers); the first zero bit in it gives the smallest value not in the list.
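A minimal sketch of that bit-array idea (my own code): because the answer is always in the range [0, N], N bits are enough regardless of how large the values are.

static int smallestMissingViaBits(int[] list) {
    int n = list.length;
    int[] bits = new int[(n + 31) / 32];
    for (int v : list)
        if (v >= 0 && v < n)
            bits[v / 32] |= 1 << (v & 31);   // mark value v; ignore out-of-range values
    for (int word = 0; word < bits.length; word++) {
        if (bits[word] != 0xFFFFFFFF) {      // this word still has an unmarked value
            for (int b = 0; b < 32; b++) {
                int candidate = word * 32 + b;
                if (candidate > n) break;
                if ((bits[word] & (1 << b)) == 0) return candidate;
            }
        }
    }
    return n;                                // all of 0..n-1 are present
}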
If you can't guarantee it is sorted, then you have a best possible time efficiency of O(N) as you have to look at every element to make sure your final choice is not there. So the question is then:
Can it be done in O(N)?
What is the best space efficiency?
Chris Doggett's solution of finding the max and adding 1 is both O(N) and space-efficient (O(1) memory usage).
If you only want an answer that is probably correct, then it is a different question.
Unless you are 100% sure the list is sorted, the quickest algorithm still has to look at each number in the list at least once, to verify that the chosen number is not there.
Assuming this is the problem I'm thinking of:
You have a set of all ints in the range 1 to n, but one of those ints is missing. Tell me which int is missing.
This is a pretty easy problem to solve with some simple math knowledge. It's known that the sum of the range 1 .. n is equal to n(n+1) / 2. So, let W = n(n+1) / 2 and let Y = the sum of the numbers in your set. The integer that is missing from your set, X, would then be X = W - Y.
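For example, with n = 5 and the set {1, 2, 4, 5}: W = 5 * 6 / 2 = 15 and Y = 12, so the missing integer is X = 15 - 12 = 3.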
Note: SO needs to support MathML
If this isn't that problem, or if it's more general, then one of the other solutions is probably right. I just can't really tell from the question since it's kind of vague.
Edit: Well, since the edit to the question, I can see that my answer is absolutely wrong. Fun math, nonetheless.
I've solved this using LINQ and a binary search. I got 100% across the board. Here's my code:
using System.Collections.Generic;
using System.Linq;

class Solution {
    public int solution(int[] A) {
        if (A == null) {
            return 1;
        } else {
            if (A.Length == 0) {
                return 1;
            }
        }
        List<int> list_test = new List<int>(A);
        list_test = list_test.Distinct().ToList();
        list_test = list_test.Where(i => i > 0).ToList();
        list_test.Sort();
        if (list_test.Count == 0) {
            return 1;
        }
        int lastValue = list_test[list_test.Count - 1];
        if (lastValue <= 0) {
            return 1;
        }
        int firstValue = list_test[0];
        if (firstValue > 1) {
            return 1;
        }
        return BinarySearchList(list_test);
    }

    int BinarySearchList(List<int> list) {
        int returnable = 0;
        int tempIndex;
        int[] boundaries = new int[2] { 0, list.Count - 1 };
        int testCounter = 0;
        while (returnable == 0 && testCounter < 2000) {
            tempIndex = (boundaries[0] + boundaries[1]) / 2;
            if (tempIndex != boundaries[0]) {
                if (list[tempIndex] > tempIndex + 1) {
                    boundaries[1] = tempIndex;
                } else {
                    boundaries[0] = tempIndex;
                }
            } else {
                if (list[tempIndex] > tempIndex + 1) {
                    returnable = tempIndex + 1;
                } else {
                    returnable = tempIndex + 2;
                }
            }
            testCounter++;
        }
        if (returnable == list[list.Count - 1]) {
            returnable++;
        }
        return returnable;
    }
}
The longest execution time was 0.08s on the Large_2 test
You need the list to be sorted. That means either knowing it is sorted, or sorting it.
1. Sort the list. Skip this step if the list is known to be sorted. O(n lg n)
2. Remove any duplicate elements. Skip this step if elements are already guaranteed distinct. O(n)
3. Let B be the position of 1 in the list, found using a binary search. O(lg n)
4. If 1 isn't in the list, return 1. Note that if all elements from 1 to n are in the list, then the element at B+n must be n+1. O(1)
5. Now perform a sort of binary search starting with min = B and max = the end of the list. Call the position of the pivot P. If the element at P is greater than (P - B + 1), recurse on the range [min, pivot], otherwise recurse on the range (pivot, max]. Continue until min = pivot = max. O(lg n)
6. Your answer is (the element at pivot - 1) + 1, unless you are at the end of the list and (P - B + 1) = B, in which case it is the last element + 1. O(1)
This is very efficient if the list is already sorted and has distinct elements. You can do optimistic checks to make it faster when the list has only non-negative elements or when the list doesn't include the value 1.
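A compact sketch of steps 3-6 (my own code, assuming the list is already sorted with duplicates removed, and searching for the smallest missing value starting from 1):

static int firstMissingFromOne(int[] a) {
    int b = java.util.Arrays.binarySearch(a, 1);
    if (b < 0) return 1;                        // 1 itself is missing
    int lo = b, hi = a.length - 1;
    if (a[hi] == hi - b + 1) return a[hi] + 1;  // no gap at all: answer is last element + 1
    // invariant: a[b..lo] is gap-free (a[lo] == lo - b + 1); a gap exists somewhere in (lo, hi]
    while (lo + 1 < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == mid - b + 1) lo = mid;    // prefix up to mid is still gap-free
        else hi = mid;
    }
    return a[lo] + 1;                           // first value whose successor is missing
}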
I just gave an interview where they asked me this question. The answer can be found with a worst-case argument: the upper bound for the smallest natural number not present in the list is length(list). This is because the worst case, for a list of a given length, is the list 0, 1, 2, 3, 4, 5, ..., length(list) - 1.
Therefore, for all lists, the smallest number not present in the list is less than or equal to the length of the list. So initialize a list t with n = length(list) + 1 zeros. For every number i in the input list that is less than or equal to the length of the list, assign the value 1 to t[i]. The index of the first zero in t is the smallest number not present in the input list. And since at most length(list) of the n entries of t can be marked, at least one index j is guaranteed to remain zero.
