Simplifying Big O

// Assume n is some integer
int q = 1;
while (q <= Math.Sqrt(n))
{
    q++;
    int k = 1;
    while (k <= Math.Log(n, 2))
    {
        k++;
        if (Math.Pow(q, k) == n) // note: q^k would be bitwise XOR, not exponentiation
        {
            return true;
        }
    }
}
return false;
In the code above, I'm finding it very difficult to decide what the Big O would be for the worst case. Since the outer loop runs sqrt(N) times with a nested loop that runs log2(N) times, I know it should be O(sqrt(n)*log2(n)). However, I find it very confusing how it's supposed to be simplified. I understand that sqrt(n) grows faster than log2(n), but I'm unsure whether I can disregard log2(n), since it's being multiplied. If I'm not disregarding log2(n), I'm not sure whether it should be n^2, since it's two terms of n being multiplied, or whether I should leave it as it is.

Keep it simple: the outer while loop is executed sqrt(n) times, inside it there is another while loop that is executed log2(n) times, and inside that you can assume all operations take O(1) time to execute.
So we have a while loop executed sqrt(n) times, and an operation inside it that takes O(log2(n)) to execute (that is the other while loop; think of it as a black box of which you know the asymptotic running time). Therefore the complexity of the algorithm is O(sqrt(n)*log2(n)).
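As a rough sanity check (this counting harness is mine, not part of the question), you can count the inner-loop iterations and compare the total against sqrt(n)*log2(n):

```java
// Hypothetical harness: count how many times the inner loop body runs
// and compare the total with sqrt(n) * log2(n).
public class LoopCount {
    static long countIterations(int n) {
        long count = 0;
        for (int q = 1; q <= Math.sqrt(n); q++) {
            for (int k = 1; k <= Math.log(n) / Math.log(2); k++) {
                count++; // each inner iteration does O(1) work
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        double bound = Math.sqrt(n) * (Math.log(n) / Math.log(2));
        System.out.println(countIterations(n) + " iterations; sqrt(n)*log2(n) = " + bound);
    }
}
```

For n = 1000 the count comes out within a small constant factor of sqrt(n)*log2(n), which is exactly what O(sqrt(n)*log2(n)) predicts.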

Related

Time complexity for a for loop with a multi-variable condition statement

A. What is the complexity (big-O) of the following code fragment?
for (i = 0; (i < n) || (i < m); i++) {
    // sequence of statements
}
The best I could come up with is the following:
if n is less than m, O(n)
else O(m)
I have no clue how to write Big O in the case where there are two variables.
I know this is a very basic corner-case time complexity question, so I don't mind removing it after I get some clarification.
The time complexity is O(max(m, n)), assuming that the body of the loop is O(1).
You say in the question that it would be O(n) if n < m -- that would be the case if the condition on the for loop used an "and" clause, not an "or" clause. The way it is written, the loop will iterate as long as i is less than the bigger of m and n; e.g. if m is a million and n is 0, it is going to iterate a million times. The time complexity scales with the maximum of the two variables.
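A small experiment (the harness below is mine) makes the max(m, n) behavior concrete:

```java
// Hypothetical demonstration: with an "or" condition, the loop body
// runs max(m, n) times, not min(m, n) times.
public class OrLoop {
    static int iterations(int n, int m) {
        int count = 0;
        for (int i = 0; (i < n) || (i < m); i++) {
            count++; // stand-in for the "sequence of statements"
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(0, 1000000)); // 1000000: n = 0 does not stop the loop
        System.out.println(iterations(5, 3));       // 5: the larger of the two
    }
}
```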

Time and Space Complexity of an Algorithm - Big O Notation

I am trying to analyze the Big-O-Notation of a simple algorithm and it has been a while I've worked with it. So I've come with an analysis and trying to figure out if this is correct one according to rules for the following code:
public int Add()
{
    int total = 0; //Step 1
    foreach(var item in list) //Step 2
    {
        if(item.value == 1) //Step 3
        {
            total += 1; //Step 4
        }
    }
    return total;
}
Assigning a variable, as in step 1, is O(1) according to the rules of Big O. This means that whatever the input size is, this statement executes in the same time and memory space.
The second step is the foreach loop. One thing is clear about a loop: it runs according to the input. For example, for an input of 10 items the loop iterates 10 times, and for 20 items, 20 times; it depends entirely on the input. In accordance with the rules of Big O, the complexity would be O(n), where n is the number of inputs. So in the above code, the loop iterates depending on the number of items in the list.
In step 3, we perform a condition check (see Step 3 in the code). In that case, the complexity is O(1) according to the Big O rules.
In the same way, there is no change in step 4 (see Step 4 in the code). If the condition check is true, the total variable is incremented by 1. So we write: complexity O(1).
So if the above calculations are correct, the final complexity stands as one of the following:
O(1) + O(n) + O(1) + O(1) or O(1) + O(n) * (O(1) + O(1))
I am not sure if this is correct, but I would appreciate some clarification if it isn't. Thanks.
Big O notation describes the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4n^2 - 2n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows at the order of n^2" and write: T(n) = O(n^2)
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g
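To connect this definition with the example above: for T(n) = 4n^2 - 2n + 2, one valid choice of witnesses (my own worked instance, using the same notation) is C = 6 and N = 1, since

```
|T(n)| = 4n^2 - 2n + 2
       <= 4n^2 + 2          (since -2n <= 0 for n >= 1)
       <= 4n^2 + 2n^2       (since 2 <= 2n^2 for n >= 1)
       =  6n^2
```

which holds for all n >= 1 and therefore certifies T(n) = O(n^2).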
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
So for your case it would be
O(n), since |f(x)| <= C|g(x)| with g(x) = x for a suitable constant C
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
int total = 0;
for (int i = 0; i < n; i++) { // --> runs n times
    for (int j = 0; j < n; j++) { // --> runs n times
        total = total + 1; // --> O(1) work
    }
}
Big O notation describes the behavior when the value is very big: the outer loop runs n times and the inner loop runs n times, so the body runs n^2 times in total.
Assume n = 100; then the body runs n^2 = 10,000 times.
Your analysis is not exactly correct.
Step 1 indeed takes O(1) operations
Step 2 indeed takes O(n) operations
Step 3 takes O(1) operations, but it is executed n times, so its whole contribution to complexity is O(1*n)=O(n)
Step 4 takes O(1) operations, but it is executed up to n times, so its whole contribution to complexity is also O(1*n)=O(n)
The whole complexity is O(1)+O(n)+O(n)+O(n) = O(n).
Your calculations for steps 3 and 4 are incorrect, as both these steps are inside the for loop.
So the complexity of steps 2, 3, and 4 together will be O(n) * (O(1) + O(1)) = O(n),
and when combined with step 1 it will be O(1) + O(n) = O(n).
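To see the O(1) + n*O(1) = O(n) accounting concretely, here is a sketch (class and method names are mine, not from the question) that counts the constant-time steps the method executes:

```java
import java.util.Collections;
import java.util.List;

// Count the constant-time steps of the Add() method above to see that
// the total grows linearly with the list size: O(1) + n * O(1) = O(n).
public class StepCount {
    static long steps(List<Integer> list) {
        long steps = 1;           // step 1: int total = 0
        for (int item : list) {   // step 2: one pass over the list
            steps++;              // step 3: the condition check
            if (item == 1) {
                steps++;          // step 4: total += 1 (at most once per item)
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(steps(Collections.nCopies(10, 1)));  // 21 = 1 + 10 * 2
        System.out.println(steps(Collections.nCopies(100, 0))); // 101 = 1 + 100 * 1
    }
}
```

Doubling the list length roughly doubles the step count, which is exactly the linear growth the analysis predicts.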

theoretical analysis of comparisons

I'm first asked to develop a simple sorting algorithm that sorts an array of integers in ascending order and put it into code:
int i, j;
for (i = 0; i < n - 1; i++)
{
    if (A[i] > A[i+1])
        swap(A, i+1, i);
    for (j = n - 2; j > 0; j--)
        if (A[j] < A[j-1])
            swap(A, j-1, j);
}
Now that I have the sort function, I'm asked to do a theoretical analysis for the running time of the algorithm. It says that the answer is O(n^2) but I'm not quite sure how to prove that complexity.
What I know so far is that the 1st loop runs from i = 0 to n-2 (so n-1 times), and the 2nd loop from j = n-2 down to 1 (so n-2 times).
Doing the recurrence relation:
let C(n) = the number of comparisons
for C(2) = C(n-1) + C(n-2)
= C(1) + C(0)
C(2) = 0 comparisons?
C(n) in general would then be: C(n-1) + C(n-2) comparisons?
If anyone could guide me step by step, that would be greatly appreciated.
When doing a "real" big-O time complexity analysis, you select one operation to count, obviously the one that dominates the running time. In your case you could choose either the comparison or the swap, since in the worst case there will be a lot of swaps, right?
Then you calculate how many times this operation will be invoked as a function of the input size. So in your case you are quite right with your analysis; you simply do this:
C = O((n - 1)(n - 2)) = O(n^2 - 3n + 2) = O(n^2)
I come up with these numbers through reasoning about the flow of data in your code. You have one outer for-loop iterating right? Inside that for-loop you have another for-loop iterating. The first for-loop iterates n - 1 times, and the second one n - 2 times. Since they are nested, the actual number of iterations are actually the multiplication of these two, because for every iteration in the outer loop, the whole inner loop runs, doing n - 2 iterations.
As you might know you always remove all but the dominating term when doing time complexity analysis.
There is a lot to add about worst-case complexity and average case, lower bounds, but this will hopefully make you grasp how to reason about big O time complexity analysis.
I've seen a lot of different techniques for actually analyzing the expression, such as your recurrence relation. However I personally prefer to just reason about the code instead. There are few algorithms which have hard upper bounds to compute, lower bounds on the other hand are in general very hard to compute.
Your analysis is correct: the outer loop makes n-1 iterations. The inner loop makes n-2.
So, for each iteration of the outer loop, you have n-2 iterations on the internal loop. Thus, the total number of steps is (n-1)(n-2) = n^2-3n+2.
The dominating term (which is what matters in big-O analysis) is n^2, so you get O(n^2) runtime.
I personally wouldn't use the recurrence method in this case. Writing the recurrence equation is usually helpful in recursive functions, but in simpler algorithms like this, sometimes it's just easier to look at the code and do some simple math.
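If you want to check the (n-1)(n-2) count empirically, here is a hypothetical instrumented version of the sort above (the counter and harness are mine):

```java
// Count the inner-loop comparisons of the sort: the outer loop runs
// n-1 times and the inner loop n-2 times each, so the count should
// be exactly (n-1)(n-2) regardless of the input data.
public class CompareCount {
    static long sortAndCount(int[] A) {
        long comparisons = 0;
        int n = A.length;
        for (int i = 0; i < n - 1; i++) {
            if (A[i] > A[i + 1]) swap(A, i + 1, i);
            for (int j = n - 2; j > 0; j--) {
                comparisons++; // the inner-loop comparison
                if (A[j] < A[j - 1]) swap(A, j - 1, j);
            }
        }
        return comparisons;
    }

    static void swap(int[] A, int x, int y) {
        int t = A[x]; A[x] = A[y]; A[y] = t;
    }

    public static void main(String[] args) {
        int[] A = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1}; // n = 10, descending
        System.out.println(sortAndCount(A));       // 72 = (10-1)(10-2)
    }
}
```

For n = 10 the count is 9 * 8 = 72 = (n-1)(n-2), matching the closed form before dropping lower-order terms.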

Time complexity of this primality testing algorithm?

I have the following code which determines whether a number is prime:
public static boolean isPrime(int n){
    boolean answer = (n > 1) ? true : false;
    for (int i = 2; i*i <= n; ++i)
    {
        System.out.printf("%d\n", i);
        if (n % i == 0)
        {
            answer = false;
            break;
        }
    }
    return answer;
}
How can I determine the big-O time complexity of this function? What is the size of the input in this case?
Think about the worst-case runtime of this function, which happens if the number is indeed prime. In that case, the inner loop will execute as many times as possible. Since each iteration of the loop does a constant amount of work, the total work done will therefore be O(number of loop iterations).
So how many loop iterations will there be? Let's look at the loop bounds:
for(int i = 2; i*i <= n; ++i)
Notice that this loop will keep executing as long as i^2 ≤ n, that is, as long as i ≤ √n. Therefore, the loop terminates as soon as i exceeds √n. Consequently, the loop will end up running O(√n) times, so the worst-case time complexity of the function is O(√n).
As to your second question - what is the size of the input? - typically, when looking at primality testing algorithms (or other algorithms that work on large numbers), the size of the input is defined to be the number of bits required to write out the input. In your case, since you're given a number n, the number of bits required to write out n is Θ(log n). This means that "polynomial time" in this case would be something like O(log^k n). Your runtime, O(√n), is not considered polynomial time, because O(√n) = O(2^((log n)/2)), which is exponentially larger than the number of bits required to write out the input.
Hope this helps!
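A quick empirical check (the counting harness is mine, not part of the original function): strip the printing, count the trial divisions for a prime input, and compare against √n:

```java
// Count the loop iterations of the isPrime() loop above. For a prime
// input the loop never breaks early, so the count should track sqrt(n).
public class TrialDivision {
    static long iterations(int n) {
        long count = 0;
        for (int i = 2; i * i <= n; ++i) {
            count++;
            if (n % i == 0) break; // composite: early exit
        }
        return count;
    }

    public static void main(String[] args) {
        int p = 10007; // a prime, so this is the worst case
        System.out.println(iterations(p));       // 99: i runs from 2 to 100
        System.out.println((long) Math.sqrt(p)); // 100
    }
}
```

The iteration count for a prime input stays within one of √n, as the O(√n) bound predicts.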

Big Oh nested while

I am having some challenges with big-oh problems. These are NOT homework problems. I am writing these problems to better understand the concept here.
function func(n)
{
    int k, i = 0;
    while (k < n) {      // <-- guessing this outer loop is O(n/2)
        k = n + 2;
        i = 0;
        while (i < k) {  // <-- not sure what this is?
            i++;
            i = i * i;
        }
    }
}
I would really like it if you can explain to me what is going on in the inner loop and how your logic ends up at the big-o notation that you finally end up at.
The outer loop, with its test (k < n) and its step, k = n + 2, will run one time, providing an O(1) factor of complexity.
The inner loop has test (i < k) which is to say (i < n+2), and has steps i++; i=i*i; At the end,
i = (...(((1+1)^2+1)^2+1)^2+ ... )^2 > n+2
which makes the value of i super-exponential. That is, i grows at least as fast as exp(exp(p)) in p passes, so the inner loop finishes within O(log log n) passes. This is a tighter bound than the previously mentioned O(log n), which is also an upper bound, but not as tight.
While @alestanis has provided what looks to me like a much more accurate analysis of this problem than those in the comments, I still don't think it's quite right.
Let's create a small test program that prints out the values of i produced by the inner loop:
#include <iostream>

void inner(double k) {
    double i = 0.0;
    while (i < k) {
        i++;
        i = i * i;
        std::cout << i << "\n";
    }
}

int main() {
    inner(1e200);
    return 0;
}
When I run this, the result I get is:
1
4
25
676
458329
2.10066e+011
4.41279e+022
1.94727e+045
3.79186e+090
1.43782e+181
1.#INF
If the number of iterations were logarithmic, then the number of iterations to reach a particular number should be proportional to the number of digits in the limit. For example, if it were logarithmic, it should take around 180 iterations to reach 1e181, give or take some (fairly small) constant factor. That's clearly not the case here at all -- as is easily visible by looking at the exponents of the results in scientific notation, this is approximately doubling the number of digits every iteration, where logarithmic would mean it was roughly adding one digit every iteration.
I'm not absolutely certain, but I believe that puts the inner loop at something like O(log log N) instead of just O(log N). I think it's pretty easy to agree that the outer loop is probably intended to be O(N) (though it's currently written to execute only once), putting the overall complexity at O(N log log N).
I feel obliged to add that from a pragmatic viewpoint, O(log log N) can often be treated as essentially constant -- as shown above, the highest limit you can specify with a typical double precision floating point number is reached in only 11 iterations. As such, for most practical purposes, the overall complexity can be treated as O(N).
[Oops -- didn't notice he'd answered as I was writing this, but it looks like @jwpat7 has reached about the same conclusion I did. Kudos to him/her.]
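A sketch (mine, adapted from the C++ test program above) that just counts the inner-loop passes supports the doubly-logarithmic estimate: the pass count grows roughly like log2(log2 k), not like log2 k.

```java
// Count how many passes the inner loop (i++; i = i * i;) needs to
// reach a given limit k. Squaring every pass doubles the number of
// digits, so the pass count grows like log log k.
public class SquareCount {
    static int passes(double k) {
        int p = 0;
        double i = 0.0;
        while (i < k) {
            i++;
            i = i * i;
            p++;
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(passes(1e10));  // 6
        System.out.println(passes(1e100)); // 10
        System.out.println(passes(1e200)); // 11, matching the printout above
    }
}
```

Going from a 10-digit limit to a 200-digit limit not even doubles the pass count, which is the log log behavior described in the two answers.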
The second loop squares the value of i until it reaches k. If we ignore the constant term, this loop runs in O(log k) time.
Why? Because if you solve i^m = k you get m = constant * log(k).
The outer loop, as you said, runs in O(n) time.
As bigger values of k depend on n, you can say the inner loop runs in O(log n) which gives you an overall complexity of O(n log n).
