Is O(3^n) still written as O(2^n)?

I'm wondering if an algorithm with an exponential worst-case time complexity should always have that stated as O(2^n). For example, if I had an algorithm whose operations triple for each addition to the input size, would I write its time complexity as O(3^n), or would it still be classified as O(2^n)?
Any formal explanation would be greatly appreciated.

3^n != O(2^n).
Assume, for contradiction, that it were true.
Then there exist constants c and n0 such that 3^n ≤ c * 2^n for all n ≥ n0.
The last requirement is equivalent to (3/2)^n ≤ c for all n ≥ n0.
However, (3/2)^n → ∞ as n → ∞, so (3/2)^n ≤ c cannot be true for all n ≥ n0 for any constant c.
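As a quick numerical illustration of that last step (a sketch, not part of the proof), a few lines of Python show (3/2)^n escaping past any candidate constant c:

    # Not a proof, just an illustration: (3/2)^n eventually exceeds any
    # candidate constant c, so 3^n <= c * 2^n cannot hold for all large n.
    for c in (10, 1_000, 1_000_000):
        n = 1
        while (3 / 2) ** n <= c:
            n += 1
        print(f"c = {c}: (3/2)^n first exceeds c at n = {n}")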

No, O(2^n) and O(3^n) are different. If 3^n were O(2^n), there'd be a constant k such that 3^n <= k * 2^n for all large n. There's no such k because 3^n / 2^n is (3/2)^n which grows arbitrarily large.

Related

Finding the values in Big Oh

I am going through the asymptotic notations. I am reading this: f(n) ≤ c g(n).
For example, if f(n) = 2n + 2, we can satisfy f(n) ≤ c g(n) in any number of ways by adjusting the values of c and n, making f(n) O(g(n)). Is there a specific rule or formula for selecting the values of c and n? Will n0 always be 1?
There is no formula per se. You can find the formal definition here:
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n. (big-O notation).
What I understood from your question is that you are not getting the essence of big-O notation. If your complexity is, for example, O(n^2), then you can guarantee that there is some value k beyond which f(n) will in no case exceed c g(n).
Let's try to prove f(n) = 2n + 2 is O(n):
As the function itself suggests, you cannot set the value of c equal to 2: if you plug in c = 2, you have to find k such that f(n) ≤ c g(n) for all n ≥ k, but clearly there is no n for which 2n ≥ 2n + 2. So, we move on to c = 3.
Now, let's find the value of k. So, we solve the equation 3n ≥ 2n + 2. Solving it:
3n ≥ 2n + 2
=> 3n - 2n ≥ 2
=> n ≥ 2
Therefore, for c = 3, we found value of k = 2 (n ≥ k).
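If it helps to see those witnesses in action, here is a small illustrative Python check (the script is an addition, not part of the original answer):

    # Verify the chosen witnesses: with c = 3 and k = 2,
    # f(n) = 2n + 2 stays below c * g(n) = 3n for every tested n >= k.
    f = lambda n: 2 * n + 2
    c, k = 3, 2
    assert all(f(n) <= c * n for n in range(k, 100_000))
    print("2n + 2 <= 3n for all tested n >= 2")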
You must also understand that your function isn't just O(n). It is also O(n^2), O(n^3), O(n^4), and so on, because corresponding values of c and k exist for g(n) = n^2, g(n) = n^3, and g(n) = n^4.
Hope it helps.

Big O vs Small omega

Why is ω(n) smaller than O(n)?
I know what is little omega (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Big Oh 'O' is an upper bound and little omega 'ω' is a strict lower bound (a lower bound that is not asymptotically tight).
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
ω(g(n)) = { f(n): for all constants c > 0, there exists a constant n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
Also: f(n) ∈ ω(g(n)) exactly when lim (n → ∞) f(n)/g(n) = ∞.
n ∈ O(n) and n ∉ ω(n).
Alternatively:
n ∈ ω(log(n)) and n ∉ O(log(n))
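A rough Python sketch of that limit characterization (the sample values are illustrative only):

    import math

    # n / log(n) grows without bound, so n ∈ ω(log n);
    # n / n is the constant 1, so n ∉ ω(n).
    for n in (10, 1_000, 1_000_000):
        print(n, n / math.log(n), n / n)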
ω(n) and O(n) are at opposite ends of the spectrum of asymptotic bounds: O gives upper bounds, ω strict lower bounds.
For more details, see CSc 345 — Analysis of Discrete Structures (McCann), which illustrates this spectrum graphically and also contains a compact representation of the definitions, making them easy to remember.
I can't comment, so first of all let me say that n ≠ Θ(log(n)). Big Theta means that for some positive constants c1, c2, and k, for all values of n greater than k, c1*log(n) ≤ n ≤ c2*log(n), which is not true. As n approaches infinity, it will always be larger than log(n), no matter log(n)'s coefficient.
jesse34212 was correct in saying that n = ω(log(n)). n = ω(log(n)) means that n ≠ Θ(log(n)) AND n = Ω(log(n)). In other words, little or small omega is a loose lower bound, whereas big omega can be loose or tight.
Big O notation signifies a loose or tight upper bound. For instance, 12n = O(n) (tight upper bound, because it's as precise as you can get), and 12n = O(n^2) (loose upper bound, because you could be more precise).
12n ≠ ω(n) because n is a tight bound on 12n, and ω only applies to loose bounds. That's why 12n = ω(log(n)), or even 12n = ω(1). I keep using 12n, but that value of the constant does not affect the equality.
Technically, O(n) is the set of all functions that grow asymptotically at most as fast as n, and the membership symbol '∈' is the most appropriate one, but most people write "= O(n)" (instead of "∈ O(n)") as an informal way of expressing it.
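To make the 12n examples concrete, here is a hypothetical Python illustration of the limit behaviour (the printed values are an addition, not the answerer's):

    import math

    # 12n / n stays at 12 (finite, positive limit: 12n = Θ(n)),
    # 12n / log(n) blows up (12n = ω(log n)),
    # 12n / n**2 tends to 0 (n^2 is a loose upper bound on 12n).
    for n in (10**2, 10**4, 10**6):
        print(n, 12 * n / n, 12 * n / math.log(n), 12 * n / n**2)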
Algorithmic complexity has a mathematical definition.
If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n such that f(x) < c * g(x) for every x > n.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x).
f = Θ(g) if there are three constants c, d and n such that c * g(x) < f(x) < d * g(x) for every x > n.
Then, O means your function is dominated, Θ means your function is equivalent to the other function, and Ω means your function has a lower limit.
So when you are using Θ, your approximation is better, for you are "wrapping" your function between two edges; whereas O only sets a maximum. Ditto for Ω (a minimum).
To sum up:
O(n): in the worst case, your algorithm has a complexity of n
Ω(n): in the best case, your algorithm has a complexity of n
Θ(n): in every case, your algorithm has a complexity of n
To conclude, your assumption is wrong: it is Θ, not Ω. As you may know, n > log(n) when n is large. It is then logical to say n = Θ(log(n)), according to the previous definitions.

Are 2^n and 4^n in the same Big-Θ complexity class?

Is 2^n = Θ(4^n)?
I'm pretty sure that 2^n is not in Ω(4^n) and thus not in Θ(4^n), but my university tutor says it is. This confused me a lot and I couldn't find a clear answer via Google.
2^n is NOT big-theta (Θ) of 4^n, because 2^n is NOT big-omega (Ω) of 4^n.
By definition, we have f(x) = Θ(g(x)) if and only if f(x) = O(g(x)) and f(x) = Ω(g(x)).
Claim
2^n is not Ω(4^n)
Proof
Suppose 2^n = Ω(4^n). Then by the definition of big-omega there exist constants c > 0 and n0 such that:
2^n ≥ c * 4^n for all n ≥ n0
By rearranging the inequality, we have:
(1/2)^n ≥ c for all n ≥ n0
But notice that as n → ∞, the left-hand side of the inequality tends to 0, whereas the right-hand side equals c > 0. Hence the inequality cannot hold for all n ≥ n0, and we have a contradiction! Therefore our assumption at the beginning must be wrong, and 2^n is not Ω(4^n).
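For intuition, here is the rearranged inequality numerically (an illustration, not a substitute for the proof):

    # 2^n / 4^n = (1/2)^n tends to 0, so it drops below any fixed c > 0,
    # which is exactly why the required bound must eventually fail.
    for n in (1, 10, 20, 40):
        print(n, 2**n / 4**n)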
Update
As mentioned by Ordous, your tutor may be referring to the complexity class EXPTIME; in that frame of reference, both 2^n and 4^n are in the same class. Also note that 2^n = 4^(Θ(n)), which may also be what your tutor meant.
Yes: one way to see this is to notice 4^n = 2^(2n). So 2^n is the same complexity as 4^n (exponential) because n and 2n are the same complexity (linear).
In conclusion, the bases don't affect the complexity here; it only matters that the exponents are of the same complexity.
Edit: this answer only shows that 4^n and 2^n are of the same complexity, not that 2^n is big-Theta of 4^n; you're correct that this is not the case, as there is no constant k such that k * 2^n >= 4^n for all n. At some point, 4^n will overtake k * 2^n. (Acknowledgements to @chiwangc / @Ordous for highlighting the distinction in their answer/comment.)
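A small Python sketch of that reading: it compares the exponents (which both grow linearly), not the functions themselves, so it shows sameness of kind rather than big-Θ equality:

    import math

    # 4^n = 2^(2n): the exponents n and 2n differ only by a constant
    # factor, even though 2^n and 4^n themselves are not Θ of each other.
    for n in (1, 10, 100):
        print(n, math.log2(2**n), math.log2(4**n))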
Yes. Both have exponential complexity.
Yes, theta is possible even though big omega is not satisfied; equality exists by using Stirling's approximation, hence 2^n = Θ(3^n).

What is the difference between O(1) and Θ(1)?

I know the definitions of both of them, but what is the reason sometimes I see O(1) and other times Θ(1) written in textbooks?
Thanks.
O(1) and Θ(1) aren't necessarily the same if you are talking about functions over real numbers. For example, consider the function f(n) = 1/n. This function is O(1) because for any n ≥ 1, f(n) ≤ 1. However, it is not Θ(1) for the following reason: one definition of f(n) = Θ(g(n)) is that the limit of |f(n) / g(n)| as n goes to infinity is some finite value L satisfying 0 < L. Plugging in f(n) = 1/n and g(n) = 1, we take the limit of |1/n| as n goes to infinity and get that it's 0. Therefore, f(n) ≠ Θ(1).
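To see this concretely, a throwaway Python sketch (an added example, for illustration only):

    # f(n) = 1/n never exceeds 1 (so f ∈ O(1)), but it sinks below every
    # positive constant, so no c > 0 gives f(n) >= c for all large n,
    # and therefore f ∉ Θ(1).
    f = lambda n: 1 / n
    print(max(f(n) for n in range(1, 10_000)))  # bounded above by 1
    print(f(10), f(1_000), f(100_000))          # heading to 0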
Hope this helps!
Big-O notation expresses an asymptotic upper bound, whereas Big-Theta notation additionally expresses an asymptotic lower bound. Often, the upper bound is what people are interested in, so they write O(something), even when Theta(something) would also be true. For example, if you wanted to count the number of things that are equal to x in an unsorted list, you might say that it can be done in linear time and is O(n), because what matters to you is that it won't take any longer than that. However, it would also be true that it's Omega(n) and therefore Theta(n), since you have to examine all of the elements in the list - it can't be done in sub-linear time.
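For instance, the counting example might look like the following in Python (count_equal is a hypothetical name, purely for illustration):

    def count_equal(items, x):
        """Count the elements equal to x.

        The loop inspects every element and never exits early, so the
        running time is Θ(n): O(n) as an upper bound AND Ω(n) as a
        lower bound, on every input of size n.
        """
        count = 0
        for item in items:
            if item == x:
                count += 1
        return count

    print(count_equal([3, 1, 3, 2, 3], 3))  # prints 3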
UPDATE:
Formally:
f in O(g) iff there exists a c and an n0 such that for all n > n0, f(n) <= c * g(n).
f in Omega(g) iff there exists a c and an n0 such that for all n > n0, f(n) >= c * g(n).
f in Theta(g) iff f in O(g) and f in Omega(g), i.e. iff there exist a c1, a c2 and an n0 such that for all n > n0, c1 * g(n) <= f(n) <= c2 * g(n).

Exponential growth in big-o notation

I have a problem understanding the following question. It says:
Prove that exponential functions have different orders of growth for different
values of base.
It looks to me like, for example, if we consider a^n: if a = 3, its growth rate will be larger than when a = 2. It looks obvious. Is that really what the question wants? How can I do a formal proof of that?
Thanks in advance for your help.
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let b > a > 1 without loss of generality, and suppose b^n ∈ O(a^n). This implies that there are positive constants c and k such that 0 ≤ b^n ≤ c * a^n for all n ≥ k, which is impossible:
b^n ≤ c * a^n for all n ≥ k implies (b/a)^n ≤ c for all n ≥ k
which contradicts lim (b/a)^n = +∞, which holds because b/a > 1.
So if b > a > 1, then b^n ∉ O(a^n), but a^n ∈ O(b^n), so O(a^n) ⊊ O(b^n).
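As an illustrative numeric check (a = 2 and b = 3 chosen arbitrarily):

    # (b/a)^n exceeds any fixed c, matching b^n ∉ O(a^n), while
    # (a/b)^n stays below 1, consistent with a^n ∈ O(b^n) (take c = 1).
    a, b = 2, 3
    for n in (10, 100, 500):
        print(n, (b / a) ** n, (a / b) ** n)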
