Can someone please explain how to find the time complexity of this algorithm?
I think it's O(log N) because of the binary splits, but I'm not sure.
int findRotationCount(int a[], int sizeOfArray) // O(logN)
{
    int start = 0;
    int endValue = sizeOfArray - 1;
    while (start < endValue)
    {
        if (a[start] < a[endValue])
            return endValue + 1;
        else
        {
            int mid = (start + endValue) / 2;
            if (a[start] <= a[mid] && a[mid+1] <= a[endValue])
                return mid + 1;
            else if (a[start] <= a[mid])
                start = mid + 1;
            else
                endValue = mid;
        }
    }
    return -1;
}
Thanks!
You are right: each iteration of the while loop reduces the range from start to endValue to (roughly) half its size, so it is O(log n). Look at the analysis of, e.g., binary search, which has a similar structure, if you need more details.
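To see the O(log n) behaviour concretely, here is a sketch of the same algorithm with a hypothetical `iterations` out-parameter added (the counter and the `std::vector` signature are mine, not part of the original code), so you can observe how few times the loop body runs even on a large rotated array:

```cpp
#include <cassert>
#include <vector>

// Same algorithm as above, with a hypothetical iteration counter added
// so the logarithmic behaviour can be observed directly.
int findRotationCountCounted(const std::vector<int>& a, int& iterations)
{
    int start = 0;
    int end = static_cast<int>(a.size()) - 1;
    iterations = 0;
    while (start < end)
    {
        ++iterations;
        if (a[start] < a[end])           // range already sorted ascending
            return end + 1;
        int mid = (start + end) / 2;
        if (a[start] <= a[mid] && a[mid + 1] <= a[end])
            return mid + 1;              // pivot lies between mid and mid+1
        else if (a[start] <= a[mid])
            start = mid + 1;             // pivot is in the right half
        else
            end = mid;                   // pivot is in the left half
    }
    return -1;
}
```

On a sorted array of 1024 elements rotated by 300, the loop runs at most about log2(1024) = 10 times, matching the O(log n) analysis.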
Related
I am aware that this is one of the most common coding questions when it comes to integer arrays. I am looking for a solution to the problem of finding the maximum contiguous subarray product within the array, but using a Divide and Conquer approach.
I split my input array into two halves: the left and right halves are solved recursively, in case the solution falls entirely within one half. Where I have a problem is the scenario where the subarray crosses the midpoint of the array. Here is a short snippet of my code for the function handling the crossing:
pair<int, pair<int, int>> maxMidCrossing(vector<int>& nums, int low, int mid, int high)
{
    int m = 1;
    int leftIndx = low;
    long long leftProduct = INT_MIN;
    for (int i = mid - 1; i >= low; --i)
    {
        m *= nums[i];
        if (m > leftProduct) {
            leftProduct = m;
            leftIndx = i;
        }
    }
    int mleft = m;

    m = 1;
    int rightIndx = high;
    long long rightProduct = INT_MIN;
    for (int i = mid; i <= high; ++i)
    {
        m *= nums[i];
        if (m > rightProduct) {
            rightProduct = m;
            rightIndx = i;
        }
    }
    int mright = m;

    cout << "\nRight product " << rightProduct;

    pair<int, int> tmp;
    int maximum = 0;
    // Check the product of both sides of the array to see if the combined
    // subarray satisfies the maximum product condition.
    if (mleft * mright < leftProduct * rightProduct) {
        tmp = pair(leftIndx, rightIndx);
        maximum = leftProduct * rightProduct;
    }
    else {
        tmp = pair(low, high);
        maximum = mleft * mright;
    }
    return pair(maximum, tmp);
}
The function handling the entire search contains the following:
auto leftIndx = indexProduct(left);
auto rightIndx = indexProduct(right);
auto midResult = maxMidCrossing(nums, 0, mid, nums.size() - 1); // middle crossing
// .....more code........
if (mLeft > midProduct && mLeft > mRight)
    tmp = leftIndx;
else if (mRight > midProduct && mRight > mLeft)
    tmp = pair(rightIndx.first + mid, rightIndx.second + mid);
else
    tmp = midIndx;
In the end, I just compute the maximum product across the 3 scenarios: left array, crossing array, right array.
I still have a few corner cases failing. My question is whether this problem admits a recursive Divide and Conquer solution, and if anyone can spot what I may be doing wrong in my code, I would appreciate any hints that could help me get unstuck.
Thanks,
Amine
Take a look at these from leetcode
C++ Divide and Conquer
https://leetcode.com/problems/maximum-product-subarray/discuss/48289/c++-divide-and-conquer-solution-8ms
Java
https://leetcode.com/problems/maximum-product-subarray/discuss/367839/java-divide-and-conquer-2ms
c#
https://leetcode.com/problems/maximum-product-subarray/discuss/367839/java-divide-and-conquer-2ms
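Not the divide-and-conquer formulation you are after, but as a reference for hunting down the failing corner cases, here is a minimal sketch of the standard O(n) approach, which tracks both the maximum and minimum product ending at each position (the minimum matters because multiplying by a negative number can turn it into the new maximum). The function name is mine:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Standard O(n) max-product-subarray: keep the max and min products ending
// at the current element, since multiplying by a negative value swaps them.
int maxProductSubarray(const std::vector<int>& nums)
{
    long long best = nums[0], curMax = nums[0], curMin = nums[0];
    for (std::size_t i = 1; i < nums.size(); ++i)
    {
        long long x = nums[i];
        long long hi = std::max({x, curMax * x, curMin * x});
        long long lo = std::min({x, curMax * x, curMin * x});
        curMax = hi;
        curMin = lo;
        best = std::max(best, curMax);
    }
    return static_cast<int>(best);
}
```

Comparing its output against the divide-and-conquer version on small random arrays is a quick way to isolate the corner cases. One thing worth checking in maxMidCrossing: the running product m is an int while it is compared against long long values, so overflow can silently corrupt the comparison.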
I came across this question on CodeFights/CodeSignal:
Given an array of points on a plane, find the maximum number of points that are visible from the origin within a viewing angle of 45 degrees.
int visiblePoints(std::vector<std::vector<int>> points) {
    const double pi = M_PI, pi_45 = M_PI_4, pi_360 = M_PI * 2.0;
    const double epsilon = 1e-10;
    int n = points.size(), result = 0;
    vector<double> angles(n);
    for (int i = 0; i < n; i++) {
        double angle = atan2(points[i][1], points[i][0]);
        angles[i] = angle;
        if (angle < pi_45 - pi) {
            angles.push_back(angle + pi_360);
        }
    }
    sort(angles.begin(), angles.end());
    for (auto it = angles.begin(); it != angles.begin() + n; ++it) {
        auto bound = upper_bound(it, angles.end(), *it + (pi_45 + epsilon));
        int curr = distance(it, bound);
        if (curr > result) {
            result = curr;
        }
    }
    return result;
}
So the code is fine; I can figure out what is happening here. I just wanted to check: is the time complexity O(N log N)?
The first for loop takes O(N). points is an array of 2D points, for example points = [[1,1],[3,1],...].
Then we have the sorting part. I am assuming that sort takes O(N log N). Of course, quicksort takes O(N^2) in the worst case, but for now I will ignore that fact.
And then the last loop runs O(N) iterations.
Also, will the space complexity in this scenario be O(1) or O(N) (due to the sorting)?
Thank you
You can use two pointers so that the scan is just O(N), not counting the sort.
int l = 0, r = 0, res = 0;
while (l < N) {
    while (r < N + l && angles[r] - angles[l] < M_PI_4 + eps)
        ++r;
    res = max(res, r - l);
    ++l;
}
return res;
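Putting the pieces together, here is a self-contained sketch of the two-pointer version (the function name is mine, and unlike the original I duplicate every angle shifted by 2π, which is what the `r < N + l` bound above assumes):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Two-pointer variant: every angle is duplicated shifted by 2*pi so a
// 45-degree window can wrap around the -pi/+pi seam.
int visiblePointsTwoPointer(const std::vector<std::vector<int>>& points)
{
    const double eps = 1e-10;
    const int n = static_cast<int>(points.size());
    std::vector<double> angles;
    angles.reserve(2 * n);
    for (const auto& p : points)
        angles.push_back(std::atan2(p[1], p[0]));
    std::sort(angles.begin(), angles.end());
    for (int i = 0; i < n; ++i)
        angles.push_back(angles[i] + 2.0 * M_PI);
    int l = 0, r = 0, res = 0;
    while (l < n)
    {
        // Advance r while angles[r] is still inside the 45-degree window.
        while (r < n + l && angles[r] - angles[l] < M_PI_4 + eps)
            ++r;
        res = std::max(res, r - l);
        ++l;
    }
    return res;
}
```

Since r never moves backwards, the scan after sorting is O(N); the total is still dominated by the O(N log N) sort, and the duplicated angle list makes the extra space O(N).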
As per problem statement:
Write a solution with O(n) time complexity and O(1) additional space
complexity. Given an array a that contains only numbers in the range
from 1 to a.length, find the first duplicate number for which the
second occurrence has the minimal index. In other words, if there are
more than 1 duplicated numbers, return the number for which the second
occurrence has a smaller index than the second occurrence of the other
number does. If there are no such elements, return -1
I wrote my code according to the constraints, and I'm still getting a time-limit error. Here's my solution:
int firstDuplicate(std::vector<int> a)
{
    long long int n = a.size();
    int cnt = 0;
    for (long long int i = 0; i < n; i++)
    {
        // cout << a[i] << " " << cnt << endl;
        if (a[i] == n || a[i] == -n)
        {
            cnt++;
            if (cnt > 1)
                return n;
        }
        else if (a[abs(a[i])] < 0)
            return -a[i];
        else
            a[a[i]] = -a[a[i]];
    }
    return -1;
}
Can anyone suggest a better algorithm, or point out what's wrong with this one?
The algorithm for this problem is as follows:
For each number in the array a, each time we see that number we make a[abs(a[i]) - 1] negative. While iterating through a, if at some point we find that a[abs(a[i]) - 1] is already negative, we return abs(a[i]). If we reach the end of the array without finding such an element, we return -1.
I feel like this is what you were trying to get at, but you might have overcomplicated things. In code this is:
int firstDuplicate(std::vector<int> a)
{
    for (int i = 0; i < a.size(); i += 1)
    {
        if (a[abs(a[i]) - 1] < 0)
            return abs(a[i]);
        else
            a[abs(a[i]) - 1] = -a[abs(a[i]) - 1];
    }
    return -1;
}
This runs in O(n) time, with O(1) space complexity.
You can use the indexes to mark whether an element has occurred before or not: if the value at idx is negative, then that element has already occurred. Note that abs is needed when computing the index, because earlier iterations may already have negated a[i]:
int firstDuplicate(std::vector<int> a)
{
    long long int n = a.size();
    for (long long int i = 0; i < n; i++)
    {
        int idx = abs(a[i]) - 1;   // abs: a[i] may already have been negated
        if (a[idx] < 0)
            return abs(a[i]);
        a[idx] *= -1;
    }
    return -1;
}
Hey guys, I need some help with this piece of code. Computing its complexity has become a problem because I don't know the exact method for analysing it. Any help would do.
int fib(int n)
{
    int prev = -1;
    int result = 1;
    int sum = 0;
    for (int i = 0; i <= n; ++i)
    {
        sum = result + prev;
        prev = result;
        result = sum;
    }
    return result;
}
I am not sure exactly what you are asking; maybe you can clarify.
The time complexity of this algorithm is O(n). The loop body executes n + 1 times: i starts at zero and increments by 1 on every iteration until it exceeds n.
I hope this helps
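One way to convince yourself of the O(n) bound is to count the loop iterations directly. Here is a sketch of the same function with a hypothetical iteration counter added (the counter parameter is mine, not part of the original code):

```cpp
#include <cassert>

// Same loop as the questioner's fib, with a hypothetical counter added.
// For input n the body runs exactly n + 1 times, i.e. O(n).
int fibCounted(int n, int& iterations)
{
    int prev = -1;
    int result = 1;
    int sum = 0;
    iterations = 0;
    for (int i = 0; i <= n; ++i)
    {
        ++iterations;
        sum = result + prev;
        prev = result;
        result = sum;
    }
    return result;
}
```

The counter grows linearly with n, which is exactly what O(n) means here.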
I wanted to analyse the complexity of recursive linear search (using a divide and conquer technique). Is it O(log n) or O(n)? If it is not O(log n), what is the actual complexity, and why?
int linear_search(int *a, int l, int h, int key) {
    if (h == l) {
        if (key == a[l])
            return l;
        else
            return -1;
    }
    int mid = (l + h) / 2;
    int i = linear_search(a, l, mid, key);
    if (i == -1)
        i = linear_search(a, mid + 1, h, key);
    return i;
}
Yes, it is O(n). But this algorithm doesn't make much sense: all you have to do is go through the array and check whether the element is present, which is what this algorithm does, only in an unnecessarily complex way.
Yes, it's O(n). What the recursive method does is just a loop, so you would be better off using a real loop:
int linear_search(int *a,int l,int h,int key){
for (int i = l; i <= h; i++) {
if (a[i] == key) return i;
}
return -1;
}
If you want to use recursion to avoid a loop, there is one worse way of doing it, sometimes found in (bad) examples showing recursion:
int linear_search(int *a,int l,int h,int key){
if (l > h) {
return -1;
} else if (a[l] == key) {
return l;
} else {
return linear_search(a, l + 1, h, key);
}
}
It's still O(n), but it's worse because it will fill the stack if the array is too large. The divide and conquer approach at least will never nest deeper than the number of bits in an integer.
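The stack-depth point can be checked with a sketch that instruments the divide-and-conquer version with a hypothetical depth parameter and records the deepest call (the extra parameters and the `std::vector` signature are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Divide-and-conquer linear search with a hypothetical depth tracker:
// the recursion halves the range each level, so maxDepth stays O(log n)
// even though the total work across all calls is still O(n).
int searchDepth(const std::vector<int>& a, int l, int h, int key,
                int depth, int& maxDepth)
{
    maxDepth = std::max(maxDepth, depth);
    if (h == l)
        return (key == a[l]) ? l : -1;
    int mid = (l + h) / 2;
    int i = searchDepth(a, l, mid, key, depth + 1, maxDepth);
    if (i == -1)
        i = searchDepth(a, mid + 1, h, key, depth + 1, maxDepth);
    return i;
}
```

For an array of about a million elements, the deepest call is only around 21 frames, whereas the one-element-at-a-time recursion would need a million frames and blow the stack.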
Yes, it searches values in the array until it finds the key, and its worst-case time complexity is Θ(n). It looks like it should be O(log n) because of the halving, but since the if(h == l) base case only compares one element and both halves are searched when the key is not in the left half, every element of the array can end up being visited.