Calculate Big-O and Theta bounds for the running time

sum = 0;
for (int i = 0; i < N; i++)
    for (int j = i; j >= 0; j--)
        sum++;
I found the Big-O to be O(n^2), but I am not sure how to find the Theta bound for it. Can someone help?

You can make use of sigma notation to derive asymptotic bounds for your algorithm. The outer loop runs for i = 0 to N-1, and for each i the inner loop runs i+1 times (j = i down to 0), so
T(n) = Σ_{i=0}^{N-1} Σ_{j=0}^{i} 1 = Σ_{i=0}^{N-1} (i+1) = N(N+1)/2.
Since we can show that T(n) is in O(n^2) (upper asymptotic bound) as well as in Ω(n^2) (lower asymptotic bound), it follows that T(n) is in Θ(n^2) ("sandwiched" between upper and lower asymptotic bounds).
In case it's unclear: note that the inner and outer sums in the expressions above describe your inner and outer for loops, respectively.

Related

What is the asymptotic running time of the following piece of code?

if (N % 2 == 0) // N is even
    for (int i = 0; i < N; i = i+1)
        for (int j = 0; j < N; j = j+1)
            A[i] = j;
else // N is odd
    for (int i = 0; i < N; i = i+1)
        A[i] = i;
If N is even, the running time is O(n^2); when N is odd, the running time is O(n). But I can't determine what the overall asymptotic running time is.
The possible answers are:
~ O(n)
~ O(n^2)
~ O(N * sqrt(N))
~ O(n log n)
There isn't a simple function you can use to asymptotically tightly bound the runtime. As you noted, the runtime alternates between linear and quadratic as N steps between odd and even. You can say that the runtime is O(n^2) and Ω(n), but without defining a piecewise function you can't give a Θ bound here. Of the listed choices, O(n^2) is the only valid upper bound.

Time complexity of nested for loops

The following sample loops are said to have O(n^2) time complexity.
Can anyone explain to me why it is O(n^2)? It seems to depend on the value of c...
Loop 1:
for (int i = 1; i <= n; i += c)
{
    for (int j = 1; j <= n; j += c)
    {
        // some O(1) expressions
    }
}
Loop 2:
for (int i = n; i > 0; i -= c)
{
    for (int j = i+1; j <= n; j += c)
    {
        // some O(1) expressions
    }
}
If c = 0, the loops run an infinite number of times; similarly, as the value of c is increased, the number of times the inner loops run decreases.
Can anyone explain this to me?
Each of these pieces of code takes time O(n^2/c^2). Here c is presumably a strictly positive constant, and therefore O(n^2/c^2) = O(n^2). But it all depends on the context...
Big-O notation is a relative representation of the complexity of an algorithm.
Big-O does not say anything about how many iterations your algorithm will make in any particular case.
It says that in the worst case your algorithm will make on the order of n squared computations, which is useful when you have to compare two algorithms.
In your code, if we assume c is a constant, then it can be ignored in the Big-O notation, because Big-O is all about comparison and how things scale, and constants play no role there.
But when c is not a constant, the correct Big-O notation would be O(n^2/c^2).
Read this awesome explanation of Big-O by cletus.
For every FIXED c > 0, the time is O(n^2). The number of iterations is roughly n^2/c^2 (unless c = 0, in which case the loops never terminate), and n^2/c^2 is O(n^2) for every fixed c.
If you had code where c was changed during the loop, you might get a different answer.

What is the big-O complexity of this code

What's the big-O notation of this code?
for (int i = 1; i < 2*n; i++)
    x = x + 1;
My answer: O(2*n). Is this correct?
Consider this algorithm A:
for (int i = 1; i < 2*n; i++)
    x = x + 1;
Algorithm A’s run-time: T(n) = 2n-1
Eliminate lower-order terms: 2n-1 -> 2n
Drop all constant coefficients: 2n -> n
So the algorithm A’s time complexity is O(n).
It is O(n). Big O describes how the running time grows with the input size, and in this case the growth is linear, so it is O(n).
The big-O run time of this is O(2n), like you guessed, but that is usually just simplified to O(n).

big theta for quad nested loop with hash table lookup

for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 5; j++) {
        for (int k = 0; k < 5; k++) {
            for (int l = 0; l < 5; l++) {
                // look up in a perfect constant-time hash table
            }
        }
    }
}
What would the running time of this be in big Theta?
My best guess, a shot in the dark: I always see that nested for loops are O(n^k), where k is the number of loops, so these loops would be O(n^4). Then would I multiply by O(1) for the constant-time lookup? What would this all be in big Theta?
If you consider that accessing a hash table is really Θ(1), then this algorithm runs in Θ(1) too, because it makes only a constant number (5^4 = 625) of hash-table lookups.
However, if you change 5 to n, it will be Θ(n^4), because you'll do exactly n^4 constant-time operations.
The big-Theta running time would be Θ(n^4).
Big-O is an upper bound, whereas big-Theta is a tight bound. This means that saying the code is O(n^5) is also correct (but Θ(n^5) is not); whatever is inside the big-O just has to be asymptotically greater than or equal to n^4.
I'm assuming 5 can be substituted for another value (i.e. is n); if not, the loop runs in constant time (O(1) and Θ(1)), since 5^4 is constant.
Using sigma notation: Σ_{i=0}^{4} Σ_{j=0}^{4} Σ_{k=0}^{4} Σ_{l=0}^{4} 1 = 5^4 = 625.
Indeed, the instructions inside the innermost loop will execute 625 times.

Big-O and generic units of time?

I have these two questions that I think I understand how to answer (answers follow the questions). I just wanted to see if I am understanding time-complexity calculations and how to find the Big-O.
The generic form is just the product of the iteration counts of the nested loops.
The Big-O is the largest power in the resulting polynomial. Is this way of thinking correct?
int sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n * n; j++)
        for (int k = 0; k < 10; k++)
            sum += i;
How many generic time units does this code take? n(n^2)*10
What is the big-oh run time of this code? O(n^3)
Yes. Basically, the definition of big O states that the time units (as you call them) are bounded from above by a constant times your expression, from some (arbitrarily high) natural number onward. In more mathematical notation:
A function f(n) is O(g(n)) if there exist a constant C and a number N such that f(n) < C*g(n) for all n > N.
In your context, f(n) = n(n^2)*10 and g(n) = n^3.
You could, by the way, also say that the function is O(n^4), since big O only gives an upper bound. You can use big-Theta notation to indicate that n^3 is also a lower bound: f(n) is Θ(n^3).
See more on this here: https://en.wikipedia.org/wiki/Big_O_notation
Yes, your understanding is correct. But sometimes you also have to deal with logarithmic terms.
One way to handle a logarithmic factor is to note that log n grows more slowly than n^epsilon for any small epsilon > 0, so a term like n log n can be bounded above by n^(1+epsilon).
