Complexity in Big-Theta of three nested for loops

I have just solved a problem, but I don't have the official solution, so I kindly ask you to confirm whether my solution is correct or not.
int h = 1, cont = 0;
for (j = 2^N; j > 1; j = j/2) {
    h = h * 2;
    for (i = 1; i < j; i = i*2)
        for (k = 2; k < h; k++)
            cont++;
}
I must find the complexity of this portion of code in Big-Theta notation.
My analysis is the following. The third (innermost) loop grows like this:
k runs linearly up to h, and h grows like 2^w, so its complexity is log n.
For the second loop, i doubles until it reaches the bound j set by the first loop, so I think its complexity is log n.
For the first loop, j starts at 2^N and is halved each time, so its complexity is n.
The total complexity is n * log n.

You can proceed formally, step by step, using Sigma notation (I skipped some steps, but feel free to ask for more details if necessary):
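A minimal sketch of that derivation (with my own indexing: count the outer loop's passes as t = 0, ..., N-1, so that j = 2^(N-t) and h = 2^(t+1); the i loop then makes N - t passes and the k loop makes 2^(t+1) - 2 passes):

cont = \sum_{t=0}^{N-1} (N - t)\left(2^{t+1} - 2\right) = 2^{N+2} - N^2 - 3N - 4 = \Theta\!\left(2^N\right)

So, measured in N, the count is exponential, Theta(2^N), not N log N; it reads as linear only if you take the input size to be n = 2^N, the starting value of j.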

Better understanding of O(log n) time complexity

I'm a bit confused about O(log n). Given this code:
public static boolean IsPalindrome(String s) {
    char[] chars = s.toCharArray();
    for (int i = 0; i < (chars.length / 2); i++) {
        if (chars[i] != chars[chars.length - i - 1])
            return false;
    }
    return true;
}
I am looping n/2 times, so as the length n increases, my time grows as half of n. I thought that was exactly what log n is. But the person who wrote this code said it is still O(N).
In what kind of loop can something be O(log n)? For example, this code:
1. for (int i = 0; i < (n * .8); i++)
Is this log n? I'm looping over 80% of the length n.
What about this one?
2. for (int i = 1; i < n; i += (i * 1.2))
Is that log n? If so, why.
1. for (int i = 0; i < (n * .8); i++)
In the first case, you can basically replace 0.8 * n with another variable; let's call it m:
for (int i = 0; i < m; i++)
You're looping m times, increasing the value of i by one unit on each iteration. Since m is just n scaled by a constant factor, the Big-O complexity of the above loop is O(n).
2. for (int i = 0; i < n; i += (i * 1.2))
In the second scenario, with i starting at 0, you're not actually incrementing i: 0 + (0 * 1.2) is still 0, so i stays 0 forever. It is a classic case of an infinite loop.
What you're looking for is 2. for (int i = 1; i <= n; i += (i * 1.2)). Here, the value of i grows geometrically, so the number of iterations is logarithmic (though not to base 2).
Consider for (int i = 1; i <= n; i += i). The value of i doubles after every iteration: i is going to be 1, 2, 4, 8, 16, 32, 64, ... Let's say n is 64; your loop terminates after 7 iterations, which is (log(64) to the base 2) + 1 operations (+1 because we start the loop from 1). Hence it is a logarithmic operation.
2. for (int i = 1; i <= n; i += (i * 1.2)) In your case as well the loop is logarithmic, but not to base 2: each iteration multiplies i by 2.2, so the base of the logarithm is 2.2. In big-O notation it still boils down to O(log n).
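If you want to see this behavior concretely, here is a minimal sketch (my own code, not from the question or either answer; count_iters is a made-up helper) that counts the iterations as n doubles:

#include <stdio.h>

/* Count the iterations of: for (i = 1; i <= n; i += i * 1.2),
   i.e. i is multiplied by 2.2 on every pass. */
static int count_iters(double n) {
    int iters = 0;
    for (double i = 1; i <= n; i += i * 1.2)
        iters++;
    return iters;
}

int main(void) {
    /* Doubling n adds at most one iteration: the signature of O(log n). */
    for (double n = 1000; n <= 128000; n *= 2)
        printf("n = %8.0f -> %d iterations\n", n, count_iters(n));
    return 0;
}

Each time n doubles, the iteration count rises by at most one, which is exactly the logarithmic behavior described above.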
I think you're missing what time complexity is and how big-O notation works.
The big-O notation is used to describe the asymptotic behavior of an algorithm as the size of the problem grows (to infinity). Particular coefficients do not matter.
As a simple intuition: if, when you increase n by a factor of 2, the number of steps you need to perform also increases by about a factor of 2, you have linear time complexity, or what is called O(n).
So let's get back to your examples #1 and #2:
Yes, you do only chars.length / 2 loop iterations, but if the length of s is doubled, you also double the number of iterations. This is exactly linear time complexity.
Similarly to the previous case, you do 0.8 * n iterations, but if n is doubled, you do twice as many iterations. Again, this is linear.
The last example is different. The coefficient 1.2 doesn't really matter. What matters is that you add i to itself. Let's rewrite that statement a bit:
i += (i * 1.2)
is the same as
i = i + (i * 1.2)
which is the same as
i = 2.2 * i
Now you can clearly see that each iteration more than doubles i. So if you double n, you'll need only one more iteration (or even the same number). This is the sign of fundamentally sub-linear time complexity. And yes, this is an example of O(log(n)), because for a big n you need only about log(n, base=2.2) iterations, and it is true that
log(n, base=a) = log(n, base=b) / log(a, base=b) = constant * log(n, base=b)
where the constant is 1 / log(a, base=b).
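For instance, with a = 2.2 and b = 2:

\log_{2.2} n = \frac{\log_2 n}{\log_2 2.2} \approx \frac{\log_2 n}{1.138} \approx 0.88 \cdot \log_2 n

so the two differ only by a constant factor, which Big-O notation ignores.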

What is the time complexity of this code, in Big-O, and how do you find it?

int i, j, k = 0;
for (i = n/2; i <= n; i++) {
    for (j = 2; j <= n; j = j * 2) {
        k = k + n/2;
    }
}
I came across this question and this is what I think.
The outer loop will run N/2 times and the inner loop will run log N times, so it should be N/2 * log N. But this is not the correct answer.
The correct answer is O(N log N); can anybody tell me what I am missing?
Any help would be appreciated.
Let's take a look at this block of code.
First of all, you can notice that the inner loop doesn't depend on the outer one, so its complexity does not change from one iteration to the next.
for (j = 2; j <= n; j = j * 2) {
    k = k + n/2;
}
I think your knowledge is enough to see that the complexity of this loop is O(log n).
Now we need to understand how many times this loop is performed, so we should take a look at the outer loop:
for (i = n/2; i <= n; i++) {
and find out that there will be n/2 iterations, which is O(n) in Big-O notation.
Combine these complexities and you'll see that your O(log n) loop is performed O(n) times, so the total complexity is O(n) * O(log n) = O(n log n). That is also what you were missing: N/2 * log N is the same thing as O(N log N), because Big-O discards the constant factor 1/2.
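Written out in Sigma notation (a minimal sketch; for simplicity assume n is a power of two and ignore flooring):

\sum_{i=n/2}^{n} \; \sum_{m=1}^{\log_2 n} 1 = \left(\frac{n}{2} + 1\right) \log_2 n = \Theta(n \log n)

where m indexes the passes of the inner loop, since j = 2^m runs over 2, 4, ..., n.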

How can I find the time complexity of this code?

for (i = 0; i < n; i++)    // time complexity n+1
{
    k = 1;    // time complexity n
    while (k <= n)    // time complexity n*(n+1)
    {
        for (j = 0; j < k; j++)    // time complexity ??
            printf("the sum of %d and %d is: %d\n", j, k, j + k);    // time complexity ??
        k++;
    }
}
What is the time complexity of the above code? I am stuck on the second for loop and I don't know how to find its time complexity, because j is less than k and not less than n.
I always have problems related to time complexity. Do you have any good articles on it, especially about step counting and loops?
From the question:
because j is less than k and not less than n.
This is just plain wrong, and I guess that's the assumption that got you stuck. We know what values k can take: in your code, it ranges from 1 to n (inclusive). Thus, if j is less than k, it is also less than n.
From the comments:
I know the only input is n, but the second for depends on k and not on n.
If a variable depends on anything, it's ultimately on the input: j depends on k, which itself depends on n, so j depends on n.
However, this is not enough to deduce the complexity. In the end, what you need to know is how many times printf is called.
The outer for loop is executed n times no matter what. We can factor this out.
The number of executions of the inner for loop depends on k, which is modified within the while loop. We know k takes every value from 1 to n exactly once. That means the inner for loop will first be executed once, then twice, then three times and so on, up until n times.
Thus, discarding the outer for loop, printf is called 1+2+3+...+n times. That sum is very well known and easy to calculate: 1+2+3+...+n = n*(n+1)/2 = (n^2 + n)/2.
Finally, the total number of calls to printf is n * (n^2 + n)/2 = n^3/2 + n^2/2 = O(n^3). That's your time complexity.
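The same computation as a double sum (a sketch; i here simply counts the n passes of the outer loop):

\sum_{i=1}^{n} \sum_{k=1}^{n} k = n \cdot \frac{n(n+1)}{2} = \frac{n^3 + n^2}{2} = O(n^3)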
A final note about this kind of code. Once you have seen the same patterns a few times, you quickly start to recognize the kind of complexity involved. Then, when you see that kind of nested loops with dependent variables, you immediately know that each loop contributes a linear factor.
For instance, in the following, f is called n*(n+1)*(n+2)/6 = O(n^3) times.
for (i = 1; i <= n; ++i) {
    for (j = 1; j <= i; ++j) {
        for (k = 1; k <= j; ++k) {
            f();
        }
    }
}
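As a sanity check, here is a minimal sketch (my own code, not part of the answer) that replaces f() with a counter and compares the count against the formula:

#include <stdio.h>

int main(void) {
    for (long n = 1; n <= 6; n++) {
        long calls = 0;
        for (long i = 1; i <= n; ++i)
            for (long j = 1; j <= i; ++j)
                for (long k = 1; k <= j; ++k)
                    calls++;   /* stands in for f() */
        printf("n = %ld: %ld calls, formula gives %ld\n",
               n, calls, n * (n + 1) * (n + 2) / 6);
    }
    return 0;
}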
First, simplify the code to show the main loops. So, we have a structure of:
for (int i = 0; i < n; i++) {
    for (int k = 1; k <= n; k++) {
        for (int j = 0; j < k; j++) {
        }
    }
}
The outer loops run n * n times, but there's not much you can do with this information, because the complexity of the inner loop changes depending on which iteration of the outer loops you're on. So it's not as simple as counting how many times the outer loops run and multiplying by some other value.
Instead, I would find it easier to start with the inner-loop, and then add the outer-loops from the inner-most to outer-most.
The complexity of the inner-most loop is k.
With the middle loop, it's the sum of k (the complexity above) for k = 1 to n. So 1 + 2 + ... + n = (n^2 + n) / 2.
With the outer loop, it's done n times, so another multiplication by n. So n * (n^2 + n) / 2.
After simplifying, we get a total of O(n^3).
The time complexity of the above code is n x n x n = n^3 for the three loops, plus two constant-time statements, i.e. n^3 + 2. Since n^3 has the fastest-growing rate, the constant values can be ignored, so the time complexity is O(n^3).
Note: take each loop as (n) and, to obtain the total time, multiply the (n) values of the loops together.
Hope this helps!

Write an algorithm to efficiently find all i and j for any given N such that N=i^j

I am looking for an efficient algorithm for the following problem: for any N, find all i and j such that N = i^j.
I can solve it in O(N^2) as follows:
for i = 1 to N
{
    for j = 1 to N
    {
        if (Power(i, j) == N)
            print(i, j)
    }
}
I am looking for a better algorithm (or a program in any language) if possible.
Given that i^j = N, you can solve the equation for j by taking the log of both sides:
j log(i) = log(N), or j = log(N) / log(i). So the algorithm becomes
for i = 2 to N
{
    j = log(N) / log(i)
    if (Power(i, j) == N)
        print(i, j)
}
Note that due to rounding errors in the floating point calculations, you might want to check j-1 and j+1 as well; but even so, this is an O(N) solution.
Also, you need to skip i = 1, since log(1) = 0 and that would result in a divide-by-zero error. In other words, N = 1 needs to be treated as a special case, or disallowed, since for N = 1 the solution is i = 1 and j = any value.
As M Oehm pointed out in the comments, an even better solution is to iterate over j and compute i with pow(n, 1.0/j). That reduces the time complexity to O(log N), since the maximum value of j is log2(N).
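A sketch of that idea in C (my own code; the names findPowers and ipow are made up, and overflow is not handled for very large n):

#include <stdio.h>
#include <math.h>

/* Integer power by repeated multiplication (no overflow checks). */
static long long ipow(long long base, int exp) {
    long long r = 1;
    while (exp-- > 0)
        r *= base;
    return r;
}

/* For each exponent j = 2 .. log2(n), try i = round(n^(1/j)),
   also checking i-1 and i+1 to guard against rounding errors. */
static void findPowers(long long n) {
    for (int j = 2; (1LL << j) <= n; j++) {
        long long i = llround(pow((double)n, 1.0 / j));
        for (long long c = i - 1; c <= i + 1; c++)
            if (c > 1 && ipow(c, j) == n)
                printf("%lld^%d = %lld\n", c, j, n);
    }
}

int main(void) {
    findPowers(64);   /* prints 8^2 = 64, 4^3 = 64 and 2^6 = 64 */
    return 0;
}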
Here is a method you can use.
Let's say you have to solve the equation
a^b = n // b and n are known
You can find a using binary search. If you reach a state where
x^b < n and (x+1)^b > n
then no integer a exists such that a^b = n for that b.
If you apply this method for every b in the range 1..log(n), you get all possible pairs.
So the complexity of this method is O(log n * log n).
Follow these steps:
function ifPower(n, b)
    min = 1, max = n
    while (min <= max)
        mid = min + (max - min) / 2
        k = mid^b, l = (mid + 1)^b
        if (k == n)
            return mid
        if (l == n)
            return mid + 1
        if (k < n && l > n)
            return -1
        if (k > n)
            max = mid - 1
        else
            min = mid + 2   // +2 because we have already checked mid + 1
    return -1

function findAll(n)
    s = log2(n)
    for i in range 2 to s   // starting from 2 to ignore the trivial powers 0 and 1; handle them separately if required
        p = ifPower(n, i)
        if (p != -1)
            print(p, i)
In the algorithm above, a^b means a raised to the power b, not a XOR b (it's obvious, but just saying).

Complexity of nested for loops

I would like to know if my solution for the complexity of this code is correct:
for (j = 2^N; j > 1; j = j/2) {
    h = h * 2;
    for (i = 1; i < j; i = i*2)
        for (k = 2; k < log N; k++)
            cont++;
}
In my view, the last loop has complexity log n.
The first loop has complexity n.
The second loop has complexity log n.
So the total complexity is n log n.
Best Regards
You have three loops here:
The first is linear in N (logarithmic in 2^N).
The second is linear in N (logarithmic in 2^N).
The third is logarithmic in N.
So the whole code comes out as O(N^2 log N).
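Spelled out in Sigma notation (a minimal sketch: count the outer loop's passes as t = 0, ..., N-1, so that j = 2^(N-t), and treat log N as an integer):

\sum_{t=0}^{N-1} (N - t)(\log N - 2) = \frac{N(N+1)}{2} \, (\log N - 2) = \Theta\!\left(N^2 \log N\right)

where N - t counts the passes of the i loop and log N - 2 the passes of the k loop.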
