How to simplify a term as much as possible into a theta expression - big-O

I have an expression for the running time of an algorithm. The expression is this:
n^2 + 55*sqrt(n) + 4n^4 + 19^19
My task is to simplify this term as much as I can using a theta expression. My initial thought is that the answer is theta(n^4), as that is the term with the largest growth rate. However, I cannot verify whether this is correct. Any help would be much appreciated; thanks in advance.
(This is for revision, not homework or coursework)
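Not a proof, but a quick numerical sanity check (a Python sketch) supports the theta(n^4) answer: if f(n) = Theta(n^4), the ratio f(n)/n^4 should settle near a positive constant (here 4) once n^4 dwarfs the constant 19^19.

```python
import math

# If f(n) = Theta(n^4), then f(n) / n^4 should settle near a positive
# constant (here 4) once n^4 dwarfs the constant term 19^19.
def f(n):
    return n**2 + 55 * math.sqrt(n) + 4 * n**4 + 19**19

for n in [10**4, 10**6, 10**8]:
    print(n, f(n) / n**4)

# The ratio approaches 4: the 4*n^4 term dominates, so f(n) = Theta(n^4).
# 19^19 is enormous but constant, so it vanishes asymptotically.
```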

Related

Can Someone Verify My Answers on Asymptotic Analysis?

This is for a data structures and algorithms course. I am confident in all of them except part d, and I am not sure how to approach part e. I know that part e is the sum of the harmonic series, and our professor told us it is bounded by (ln(n) + 1/n, ln(n) + 1), since there is no closed-form expression for the sum of the harmonic series. But I am still not sure how to work out which function has the faster or slower growth rate in order to classify them. If someone could review my answers and help me understand part e, I would appreciate it. Thank you.
The question: https://imgur.com/a/mzi0LL9
My answers: https://imgur.com/a/yxV6pim
Any function that grows polynomially is going to dominate a series like that.
We can factor out the constant (the 200) to see that a bit more easily, and such a generalized harmonic series is bounded above by log.
So obviously we can ignore the 200 in big-O. In lieu of a proof, since it seems one isn't required, you can think about the intuition behind it: as n grows, the summation keeps adding smaller and smaller terms, but it keeps growing, to the point where ln(n) is massive while 1/n is practically zero.
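For part e, a short empirical check of the professor's bounds (a sketch, not a proof) can help build the growth-rate intuition:

```python
import math

# H(n) = sum_{k=1}^{n} 1/k has no closed form, but it is squeezed between
# ln(n) + 1/n and ln(n) + 1, as the professor said.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in [10, 1000, 100000]:
    lower = math.log(n) + 1.0 / n
    upper = math.log(n) + 1.0
    print(n, lower, harmonic(n), upper)

# Both bounds are ln(n) plus a lower-order term, so H(n) = Theta(log n).
# Multiplying every term by a constant like 200 scales the whole sum
# uniformly, so the 200 disappears inside big-O / big-Theta.
```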

What is the net running time of the below expression?

For a recursive algorithm, I came up with the following expression for the running time: Σ_{k=0..n} 4^k (k+1). But I am not clear on how to simplify this and express it in Big-O notation.
If it were just 4^k, then I know it is simply a geometric series, and we could take the last term, 4^n, as the worst-case running time. Help me understand how to deal with the (k+1) factor here.
Just try to simplify the term a little bit:
Σ_{k=0..n} 4^k (k+1) < Σ_{k=0..n} 4^k (n+1) = (n+1) Σ_{k=0..n} 4^k
So this is in O(n * 4^n). And this bound is tight, since 4^n (n+1) is itself one of the summands.
Note: what you mean by "running time" is usually called "complexity".
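A quick numerical check of the Theta(n * 4^n) claim (assuming the sum in question is Σ_{k=0..n} 4^k (k+1), as used above):

```python
# Assumption: the sum in question is S(n) = sum_{k=0}^{n} 4^k * (k+1).
# If S(n) = Theta(n * 4^n), the ratio S(n) / (n * 4^n) should stay bounded
# between positive constants.
def s(n):
    return sum(4**k * (k + 1) for k in range(n + 1))

for n in [5, 10, 20, 40]:
    print(n, s(n) / (n * 4**n))

# The ratio decreases toward 4/3: the upper bound (n+1) * sum(4^k) is
# matched from below by the single summand 4^n * (n+1), so the bound is tight.
```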

Finding the best BigO notation with two strong terms

I am asked to find the simplest exact answer and the best big-O expression for the expression:
Σ_{n=j..k} n (the sum of the integers from j through k).
I have computed what I think the simplest exact answer is as:
-1/2(j-k-1)(j+k)
Now when I go to take the best possible big-O expression I am stuck.
From my understanding, big-O describes the worst-case running time of an algorithm by keeping the term that overpowers the rest. So, for example, I know:
n^2+n+1 = O(n^2)
Because in the long run, n^2 is the only term that matters for big n.
My confusion with the original formula in question:
-1/2(j-k-1)(j+k)
is which term is the strongest. To work this out, I tried expanding to get:
-1/2 (j^2 + jk - jk - k^2 - j - k) = -1/2 (j^2 - k^2 - j - k)
which still does not make it clear to me, since we now have j^2 - k^2. Is the answer I am looking for O(k^2), since k is the endpoint of my summation?
Any help is appreciated, thanks.
EDIT: It is unspecified which variable (j or k) is larger.
If you know k > j, then you have O(k^2). Intuitively, that's because as numbers get bigger, squares get farther apart.
It's a little unclear from your question which variable is the larger of the two, but I've assumed that it's k.
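A small sketch (assuming j <= k and that the sum includes both endpoints) to verify the closed form and illustrate the O(k^2) conclusion:

```python
# Check the closed form: sum_{n=j}^{k} n = -1/2 * (j - k - 1) * (j + k).
def closed_form(j, k):
    # (k - j + 1)(j + k) is always even, so integer division is exact.
    return -(j - k - 1) * (j + k) // 2

for j, k in [(1, 10), (5, 5), (3, 100)]:
    assert closed_form(j, k) == sum(range(j, k + 1))
print("closed form matches direct summation")

# Expanding gives (k^2 - j^2 + j + k) / 2; with k >= j, the k^2 / 2 term
# dominates for large k, so the sum is O(k^2).
```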

Asymptotic Notations and forming Recurrence relations by analysing the algorithms

I went through many lectures, videos, and other sources on asymptotic notation. I understand what O, Omega, and Theta are. But when analysing algorithms, why do we almost always use Big-O notation, and not Theta or Omega? (I know it sounds noobish, but please help me with this.) What exactly are these upper and lower bounds in the context of algorithms?
My next question is: how do we find the complexity of an algorithm? Say I have an algorithm; how do I find the recurrence relation T(N) and then compute the complexity from it? How do I form these equations? For example, for recursive linear search, T(N) = T(N-1) + 1. How?
It would be great if someone could explain this to me as a beginner, so that I can understand it even better. I found some answers on StackOverflow, but they weren't convincing enough.
Thank you.
Why we use big-O so much compared to Theta and Omega: This is partly cultural, rather than technical. It is extremely common for people to say big-O when Theta would really be more appropriate. Omega doesn't get used much in practice both because we frequently are more concerned about upper bounds than lower bounds, and also because non-trivial lower bounds are often much more difficult to prove. (Trivial lower bounds are usually the kind that say "You have to look at all of the input, so the running time is at least equal to the size of the input.")
Of course, these comments about lower bounds also partly explain Theta, since Theta involves both an upper bound and a lower bound.
Coming up with a recurrence relation: There's no simple recipe that addresses all cases. Here's a description for relatively simple recursive algorithms.
Let N be the size of the initial input. Suppose there are R recursive calls in your recursive function. (Example: for mergesort, R would be 2.) Further suppose that all the recursive calls reduce the size of the initial input by the same amount, from N to M. (Example: for mergesort, M would be N/2.) And, finally, suppose that the recursive function does W work outside of the recursive calls. (Example: for mergesort, W would be N for the merge.)
Then the recurrence relation would be T(N) = R*T(M) + W. (Example: for mergesort, this would be T(N) = 2*T(N/2) + N.)
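To make the recipe concrete, here is a minimal mergesort sketch (an illustration, not code from the original posts) annotated with where R, M, and W come from:

```python
# Minimal mergesort, annotated with the recurrence ingredients from above:
# R = 2 recursive calls, each on input of size M = N/2, plus W = O(N) merge
# work, giving T(N) = 2*T(N/2) + N, which solves to O(N log N).
def merge_sort(a):
    if len(a) <= 1:              # base case: T(1) = O(1)
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # first recursive call:  T(N/2)
    right = merge_sort(a[mid:])  # second recursive call: T(N/2)
    # Merge step: the W = O(N) work done outside the recursive calls.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```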
When we create an algorithm, we always want it to be as fast as possible, and we need to consider every case. This is why we use O: we want an upper bound on the complexity, a guarantee that our algorithm will never exceed it.
To assess the complexity, you have to count the number of steps. In the recurrence T(n) = T(n-1) + 1, there are going to be n steps before reaching T(0), so the complexity is linear. (I'm talking about time complexity, not space complexity.)
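To connect this to the linear-search recurrence from the question, here is a hypothetical recursive search (not code from the original posts) that realizes T(n) = T(n-1) + 1:

```python
# Illustrating T(n) = T(n-1) + 1 with a recursive linear search.
def linear_search(a, target, i=0):
    if i == len(a):          # T(0): reached the end, constant work
        return -1
    if a[i] == target:       # the "+1": one comparison at this level
        return i
    return linear_search(a, target, i + 1)  # T(n-1): recurse on the rest

# The worst case (target absent) unrolls into n levels of constant work,
# so T(n) = T(n-1) + 1 = n * O(1) = O(n).
print(linear_search([4, 7, 1, 9], 9))   # 3
print(linear_search([4, 7, 1, 9], 42))  # -1
```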

Big-O complexity of c^n + n*(logn)^2 + (10*n)^c

I need to derive the Big-O complexity of this expression:
c^n + n*(log(n))^2 + (10*n)^c
where c is a constant and n is a variable.
I'm pretty sure I understand how to derive the Big-O complexity of each term individually; I just don't know how the Big-O complexity changes when the terms are combined like this.
Ideas?
Any help would be great, thanks.
The answer depends on |c|:
If |c| <= 1, it's O(n*(log(n))^2).
If |c| > 1, it's O(c^n).
The O() notation keeps the highest-order term; think about which one will dominate for very, very large values of n.
In your case, the highest term is c^n (assuming |c| > 1); the others are essentially polynomial. So it's exponential complexity.
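A quick numerical comparison of the three terms (a sketch; c = 2 is an arbitrary choice with |c| > 1, not a value from the question) shows why the case split matters:

```python
import math

# Compare the three terms for a sample constant c = 2 (|c| > 1).
c = 2
for n in [10, 50, 100]:
    exponential = c**n
    quasilinear = n * math.log(n)**2
    polynomial = (10 * n)**c
    print(n, exponential, quasilinear, polynomial)

# c^n quickly dwarfs the other two terms, so the sum is O(c^n) when |c| > 1.
# With c = 0.5 instead, c^n shrinks toward 0 and (10*n)^c = sqrt(10*n),
# so n*(log n)^2 dominates and the sum is O(n*(log n)^2).
```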
Wikipedia is your friend:
In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f(x) is derived by the following simplification rules:
If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.