Big theta for quadruple nested loop with hash table lookup

for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 5; j++) {
        for (int k = 0; k < 5; k++) {
            for (int l = 0; l < 5; l++) {
                // look up in a perfect constant-time hash table
            }
        }
    }
}
What would the running time of this be in big theta?
My best guess, a shot in the dark: I always see that nested for loops are O(n^k), where k is the number of loops, so these loops would be O(n^4). Would I then multiply by O(1) for the constant-time lookup? What would this all be in big theta?

If you consider that accessing a hash table is really Θ(1), then this algorithm runs in Θ(1) too, because it makes only a constant number (5^4 = 625) of lookups into the hash table.
However, if you change 5 to n, it will be Θ(n^4), because you'll do exactly n^4 constant-time operations.

The big-theta running time would be Θ(n^4).
Big-O is an upper bound, whereas big-theta is a tight bound. This means that saying the code is O(n^5) is also correct (though Θ(n^5) is not); whatever is inside the big-O just has to be asymptotically greater than or equal to n^4.
I'm assuming the 5 can be substituted with another value (i.e., it really is n); if not, the loops run in constant time (O(1) and Θ(1)), since 5^4 is a constant.
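To make that substitution concrete, here is a generalized sketch with the bound 5 replaced by a parameter n; lookup is a hypothetical stub standing in for the Θ(1) hash table access:

// hypothetical stub standing in for a perfect Θ(1) hash table lookup
static int lookup(int i, int j, int k, int l) { return (i + j + k + l) % 7; }

void run(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < n; k++) {
                for (int l = 0; l < n; l++) {
                    lookup(i, j, k, l); // assumed Θ(1) operation
                }
            }
        }
    }
}

The body executes exactly n^4 times, and each execution costs Θ(1), so the total running time is Θ(n^4).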

Using Sigma notation:
Σ_{i=0}^{4} Σ_{j=0}^{4} Σ_{k=0}^{4} Σ_{l=0}^{4} 1 = 5^4 = 625
Indeed, the instructions inside the innermost loop will execute 625 times.

Related

Find the Big O time complexity of the code

I am fairly familiar with simple time complexity analysis for constant, linear, and quadratic cases. In simple code segments like:
int i = 0;
i = i + 1;
This is constant. So O(1). And in:
for (i = 0; i < N; i++)
This is linear, since the loop test runs N+1 times, but for Big-O time complexities we remove the constant, so it is just O(N). In nested for loops:
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
I get how we multiply n+1 by n and reach a time complexity of O(N^2). My issue is with slightly more complex versions of this. So, for example:
S = 0;
for (i = 0; i < N; i++)
    for (j = 0; j < N*N; j++)
        S++;
In such a case, would I be multiplying n+1 by the inner for loop's time complexity, which I presume is n^2? So the time complexity would be O(n^3)?
Another example is:
S = 0;
for (i = 0; i < N; i++)
    for (j = 0; j < i*i; j++)
        for (k = 0; k < j; k++)
            S++;
In this case, I expanded it and wrote it out, and realized that the middle for loop seems to run with n*n complexity, and the innermost for loop at the pace of j, which is also at most n*n. So in that case, would I be multiplying n+1 x n^2 x n^2, which would give me O(n^5)?
Also, I am still struggling to understand what kind of code has logarithmic time complexity. If someone could give me an algorithm or segment of code that runs in log(n) or n log(n) time, and explain it, that would be much appreciated.
All of your answers are correct.
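For the last example, a quick check with a sum, counting how many times S++ executes:
Σ_{i=0}^{N-1} Σ_{j=0}^{i*i-1} j = Σ_{i=0}^{N-1} i^2(i^2-1)/2 ≈ N^5/10
which is Θ(N^5), so O(N^5) is right.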
Logarithmic time complexity typically occurs when you're reducing the size of the problem by a constant factor on every iteration.
Here's an example:
for (int i = N; i > 0; i /= 2) { ... do something ... }
In this for-loop, we're dividing the problem size by 2 on every iteration. We'll need approximately log_2(n) iterations prior to terminating. Hence, the algorithm runs in O(log(n)) time.
Another common example is the binary search algorithm, which searches a sorted interval for a value. In this procedure, we remove half of the values on each iteration (once again, we're reducing the size of the problem by a constant factor of 2). Hence, the runtime is O(log(n)).
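For reference, here is a minimal iterative binary search in C (a sketch, assuming a sorted int array):

int binary_search(const int *a, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // midpoint, written to avoid overflow
        if (a[mid] == target)
            return mid;               // found: return the index
        else if (a[mid] < target)
            lo = mid + 1;             // discard the lower half
        else
            hi = mid - 1;             // discard the upper half
    }
    return -1;                        // not found
}

Each iteration halves the remaining interval, so at most about log_2(n) iterations run, giving O(log n).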

Complexity of a nested for loop over n and an exact integer?

I am wondering how one would find the Big-Oh complexity of an algorithm if the for loops loop over both n and some specified integer. For example, what would be the complexity of a function such as this one:
for (int i = 0; i < n; i++) {
    for (int j = 0; j < 100; j++) {
        for (int k = 0; k < n; k++) {
            // Some O(1) operation here.
        }
    }
}
Now, I know that both the outermost and innermost for loops have complexity O(n), but what is the complexity of the middle loop? O(100)? Would that reduce to O(1)?
Yes, it will be O(1) for the middle one only. That means that no matter what the input n is, it will run exactly 100 times; you can't make it loop more.
But the outermost and innermost loops depend on n. If n is 100, the outermost runs 100 times; if it is 1000000, then yes, it runs 1000000 times.
What about the innermost loop? For each iteration of the outermost loop it runs 100*n times, so in total it runs 100*n*n times.
Now think about how much total work they do:
100n^2 + 100n + n = An^2 + Bn
O(n^2) will be the time complexity.
Are O(10), O(100), and O(100000) the same?
Okay, I will save you from writing more: for any constant C, O(C) is equivalent to O(1).
Here C was 10, 100, or 100000.
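A quick empirical check (a sketch; n = 50 is just an assumed sample size):

#include <stdio.h>

int main(void) {
    int n = 50;                  // assumed sample input size
    long long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < 100; j++)
            for (int k = 0; k < n; k++)
                count++;         // stands in for the O(1) operation
    printf("%lld iterations (expected %lld)\n", count, 100LL * n * n);
    return 0;
}

The count matches 100*n^2 exactly, which is Θ(n^2).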

Time complexity of nested for loops

The following sample loops have O(n^2) time complexity.
Can anyone explain to me why it is O(n^2)? It seems to depend on the value of c...
Loop 1:
for (int i = 1; i <= n; i += c)
{
    for (int j = 1; j <= n; j += c)
    {
        // some O(1) expressions
    }
}
Loop 2:
for (int i = n; i > 0; i -= c)
{
    for (int j = i+1; j <= n; j += c)
    {
        // some O(1) expressions
    }
}
If c = 0, they run an infinite number of times; similarly, as the value of c increases, the number of times the inner loops run decreases.
Can anyone explain this to me?
Each of these pieces of code takes time O(n^2/c^2). Here c is presumably considered a strictly positive constant, and therefore O(n^2/c^2) = O(n^2). But it all depends on the context...
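To see where n^2/c^2 comes from in loop 2 (a rough count): on the outer iteration where i = n - t*c, the inner loop runs about t times (it covers n - i = t*c values in steps of c). Summing over the roughly n/c outer iterations:
Σ_{t=0}^{n/c} t ≈ (n/c)^2 / 2 = n^2 / (2c^2)
Loop 1 is similar but without the triangular factor: (n/c) * (n/c) = n^2/c^2.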
Big-O notation is a relative representation of the complexity of an algorithm.
Big-O does not say exactly how many iterations your algorithm will make in any given case.
It says that in the worst case your algorithm will make on the order of n squared computations, which is useful if you have to compare two algorithms.
In your code, if we assume c is a constant, then it can be dropped from the Big-O notation, because Big-O is all about comparison and how things scale, where constant factors play no role.
But when c is not a constant, the correct Big-O notation would be O(n^2/c^2).
Read this awesome explanation of Big-O by cletus.
For every FIXED c > 0, the time is O(n^2). The number of iterations is roughly n^2/c^2 (infinite if c = 0), and n^2/c^2 is O(n^2) for every fixed c.
If you had code where c was changed during the loop, you might get a different answer.

Logarithmic complexity of an algorithm

It's difficult for me to understand the logarithmic complexity of an algorithm.
For example:
for (int j = 1; j <= n; j *= 2) {
    ...
}
Its complexity is O(log_2 N).
So what if it is j *= 3? Will the complexity then be O(log_3 N)?
You could say yes as long as the loop body is O(1).
However, note that log_3 N = log_2 N / log_2 3, so it is also O(log_2 N), since the constant factor does not matter.
Also note that, by the same argument, for any fixed constant k, O(log_k N) is also O(log_2 N), since you can substitute k for the 3.
Basically, yes.
Let's assume that your for loop looks like this:
for (int j = 1; j < n; j *= a) {...}
where a is some constant.
If the for loop executes k times, then in the last iteration j will be equal to a^k. Since N = O(j) and j = O(a^k), we have N = O(a^k). It follows that k = O(log_a N). Once again, the for loop executes k times, so the time complexity of this algorithm is O(k) = O(log_a N).
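A small sanity check of this relationship (a sketch; n and a are just assumed sample values):

#include <math.h>
#include <stdio.h>

int main(void) {
    int n = 1000000;             // assumed sample input size
    int a = 3;                   // the constant multiplier
    int k = 0;
    for (long long j = 1; j < n; j *= a)
        k++;                     // count the loop iterations
    // log_a(n) via the change-of-base formula
    printf("iterations: %d, log_a(n): %.2f\n", k, log(n) / log(a));
    return 0;
}

This prints 13 iterations against log_3(1000000) ≈ 12.58, matching k = O(log_a N).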

Big-O and generic units of time?

I have these two questions that I think I understand how to answer (answers after the questions). I just wanted to see if I am understanding time complexity calculations and how to find the Big-O.
The generic form is just the product of the loop bounds on the right side of the expression.
The Big-O is the largest power in the polynomial. Is this way of thinking correct?
int sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n * n; j++)
        for (int k = 0; k < 10; k++)
            sum += i;
How many generic time units does this code take? n * n^2 * 10 = 10n^3.
What is the big-oh run time of this code? O(n^3).
Yes. Basically, the definition of big-O states that the time units (as you call them) are bounded from above by a constant times your expression, from some (arbitrarily large) natural number onward. In more mathematical notation:
A function f(n) is O(g(n)) if there exist a constant C and a number N such that f(n) < C*g(n) for all n > N.
In your context, f(n) = n * n^2 * 10 and g(n) = n^3.
You could, by the way, also say that the function is O(n^4), since big-O is only an upper bound. You can use big-theta notation to indicate that n^3 is also a lower bound: f(n) is Θ(n^3).
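For instance, with f(n) = 10n^3 you can take C = 11 and N = 1: then 10n^3 < 11n^3 for all n > 1, so f(n) is O(n^3); and since 10n^3 > 9n^3 as well, it is Θ(n^3).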
See more on this here: https://en.wikipedia.org/wiki/Big_O_notation
Yes, your understanding is correct. But sometimes you also have to deal with logarithmic terms.
One way to look at a logarithmic term is that log(n) grows more slowly than n^epsilon for any epsilon > 0, so a term like n*log(n) is O(n^(1+epsilon)), where epsilon is a small quantity.
