Let f(n) = ((n^2 + 2n)/n + (1/1000)*n^(3/2)) * log(n)
The time complexity for this function could be both O(n^2*log(n)) and O(n^(3/2)*log(n)).
How is this possible? I thought the dominating term here was n^2 (times log(n)), and therefore it should be O(n^2*log(n)) only. Big O notation and time complexity measures feel so ambiguous.
Big O notation isn't that confusing. It defines an upper bound on the running time of an algorithm; hence, if O(f(n)) is a valid upper bound, every other O(g(n)) such that g(n) > f(n) is also valid, since if your code runs in less than f(n), it will surely run in less than g(n).
In your case, since O(n^2*log(n)) dominates O(n^(3/2)*log(n)), it's a valid upper bound too, even if it's less strict. Furthermore, you could say that your algorithm is O(n^3). The question is: which of those Big O bounds gives us more information about the algorithm? The obvious answer is the lowest one, and that's the reason why we usually state it.
To make things clear: let's say you can throw a ball 10 meters up in the air. Then you can say that you can't throw it higher than 10 meters, OR you can say that you can't throw it higher than 15 meters. The fact that the first one is a stricter upper bound doesn't make the second one a false statement.
"Big O notation" being applied on the sum always leaves dominant (the biggest ones) terms only. In case of one independent variable one term only will survive. In your case
O(n^2*log(n) + n^(3/2)*log(n)) = O(n^2*log(n))
since the 1st term grows faster than the 2nd:
lim(term1/term2) = lim(n^2*log(n) / (n^(3/2)*log(n))) = lim(n^(1/2)) = inf
but it seems that you made an arithmetic error in your computations:
(n^2+2n)/n = n + 2, not n^2 + 2 * n
in that case
O(n*log(n) + 2*log(n) + n^(3/2)*log(n))
The last term, n^(3/2)*log(n), is now the biggest one, so
O(n*log(n) + 2*log(n) + n^(3/2)*log(n)) = O(n^(3/2)*log(n))
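If you want to see this numerically, here is a minimal Python sketch (an illustration only) comparing the two surviving terms; note how the constant 1/1000 delays, but does not change, which term dominates:

    import math

    # compare (n + 2)*log(n) against (1/1000)*n^(3/2)*log(n)
    for n in [10**3, 10**6, 10**9]:
        term1 = (n + 2) * math.log(n)                # from (n^2 + 2n)/n = n + 2
        term2 = (1 / 1000) * n**1.5 * math.log(n)    # the n^(3/2) term
        print(n, term2 / term1)

The ratio term2/term1 grows like sqrt(n)/1000: it is below 1 for small n, crosses 1 near n = 10^6, and then grows without bound, which is why the n^(3/2)*log(n) term wins asymptotically.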
I heard somewhere that, for example, to say a function is big theta of n, it has to have complexity n in both its best and worst cases, so linear search would not be big theta of n because it has best case O(1). But I doubt this information. So if you have any code you want to analyse: when can you say that the code is big theta of some function?
To understand big theta, we should first review big O and big omega. Big O describes an upper bound on the runtime of a function, which means it will never perform worse than that. Big omega describes a lower bound, meaning the function will never perform better than that.

Big theta is found when big O = big omega: because the upper bound equals the lower bound, the function is bounded above and below by the same time complexity.

For example, when multiplying two matrices, each with dimensions n x n, the runtime of the straightforward algorithm is O(n^3). The runtime is also Omega(n^3). This is because in both the worst and the best case you are always multiplying the two matrices in full; you will not be able to "end early", because every entry of both matrices is used. Because the function is bounded above and below by n^3, the function is Theta(n^3).
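As a concrete sketch of that example, here is the straightforward triple-loop multiplication in Python; the loop structure alone forces n*n*n inner steps with no early exit, which is exactly why the best and worst cases coincide:

    # Multiply two n x n matrices; always performs n^3 scalar multiplications.
    def matrix_multiply(A, B):
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):            # n iterations
            for j in range(n):        # n iterations
                for k in range(n):    # n iterations -> n^3 inner steps total
                    C[i][j] += A[i][k] * B[k][j]
        return C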
I'm trying to see if I'm correct in my work for finding the big-O notation of some formulas. Each of these formulas is the number of operations in some algorithm. These are the formulas:
Formulas
a.) n^2 + 5n
b.) 3n^2 + 5n
c.) (n + 7)(n - 2)
d.) 100n + 5
e.) 5n + 3n^2
f.) The number of digits in 2n
g.) The number of times that n can be divided by 10 before dropping below 1.0
My answers:
a.) O(n^2)
b.) O(n^2)
c.) O(n^2)
d.) O(n)
e.) O(n^2)
f.) O(n)
g.) O(n)
Am I correct on my analysis?
Let's go through this one at a time.
a.) n^2 + 5n. Your answer: O(n^2)
Yep! You're ignoring lower-order terms correctly.
b.) 3n^2 + 5n. Your answer: O(n^2).
Yep! Big-O eats constant factors for lunch.
c.) (n + 7)(n - 2). Your answer: O(n^2).
Yep! You could expand this out into n^2 + 5n - 14 and from there drop the low-order terms to get O(n^2), or you could realize that n + 7 = O(n) and n - 2 = O(n) to see that this is the product of two terms that are each O(n).
d.) 100n + 5. Your answer: O(n).
Yep! Again, dropping constants and lower-order terms.
e.) 5n + 3n^2. Your answer: O(n^2).
Yep! Order is irrelevant; 5n is still a low-order term.
f.) The number of digits in 2n. Your answer: O(n).
This one is technically correct but is not a good bound. Remember that big-O notation gives an upper bound, and you are correct that 2n has O(n) digits, but only in the sense that the number of digits is asymptotically no greater than n. To see why this bound isn't very good, let's look at the numbers 10, 100, 1000, 10000, and 100000. These numbers have 2, 3, 4, 5, and 6 digits, respectively. In other words, growing by a factor of ten only grows the number of digits by one. If the O(n) bound you had were tight, then you'd expect the number of digits to grow by a factor of ten every time you made the number ten times bigger, which isn't what happens.
As a hint for this one: if a number has d digits, then it's between 10^(d-1) and 10^d - 1. That means the numeric value of a d-digit number is exponential as a function of d. So, if you start with a number of digits, the numeric value is exponentially larger. Try running this backwards. If you have a numeric value that you know is exponentially larger than the number of digits, what does that mean about the number of digits as a function of the numeric value?
g.) The number of times that n can be divided by 10 before dropping below 1.0. Your answer: O(n)
This one is also technically correct but not a very good bound. Let's take the number 100,000, for example. You can divide it by 10 six times before the result drops below 1.0, but giving a bound of O(n) means you're saying the answer grows linearly as a function of n, so doubling n should double the number of times you can divide by ten... but is that actually the case?
As a hint, the number of times you can divide a number by ten before it drops below 1.0 is closely related to the number of digits in that number. If you can figure out this problem, you'll figure out part (f), and vice-versa.
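If you want to test your intuition empirically, here is a small Python sketch (an illustration only) that counts both quantities; notice that multiplying n by 10 adds just 1 to each count:

    # part (f): digits in 2n; part (g): divisions by 10 until below 1.0
    def digits(n):
        return len(str(2 * n))

    def divisions_by_ten(n):
        count, x = 0, float(n)
        while x >= 1.0:
            x /= 10
            count += 1
        return count

    for n in [10, 100, 1000, 10**6]:
        print(n, digits(n), divisions_by_ten(n))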
Good luck!
This question already has answers here:
What does O(log n) mean exactly?
So I have been studying Big O notation (noob here), and most of it looks like alien language to me. Now, I understand the basics of logs: log base 2 of 16 means the power of 2 that equals the number 16. But for binary search's O(log N), it makes no sense to me: what is the value of log N exactly, and what is the base here? I have searched the internet; the problem is that everyone explains this mathematically, which I can't catch, as I am not good with math. Can someone explain this to me in basic English, not alien language like "exponential"? I know how binary search works.
Second question: I don't even know what the symbol in f = Ω(g) means. Can someone explain to me in plain English what is required here? I don't want the answer, just what this means.
Question :
In each of the following situations, indicate whether f = O(g), or f = Ω(g), or both. (in which case f = Θ(g)).
(a) f(n) = n - 100, g(n) = n - 200
(b) f(n) = 100n + log(n), g(n) = n + (log(n))^2
(c) f(n) = log(2n), g(n) = log(3n)
Update: I just realized that I studied algorithms from MIT's videos. Here is the link to the first of those videos. Keep going to next lecture as far as you want.
Clearly, Log(n) has no value without fixing what n is and what base of log we are using. The purpose of mentioning log(n) so often is to help people understand the rate of growth of a particular algorithm or piece of code. It is only to help people see things in perspective. To build your perspective, see the comparison below:
1 < log(n) < n < n*log(n) < n^2 < 2^n < n! < n^n
The line above says that after some value of n on the number line, the rate of growth of the above written functions is in the order mentioned there. This way, decision makers can decide which approach they want to take in solving their problem (and students can pass their Algorithm Design and Analysis exam).
Coming to your question: when books say "binary search's run time is log(n)", they essentially mean that if you have n elements, the running time for binary search would be proportional to log(n), and if you have 17n elements, you can expect the answer from your algorithm in a time duration proportional to log(17n). In this case, the base of the log function is 2, because at every step binary search has at most 2 paths to pick from, halving the remaining elements.
Since a log of any base can be converted to a log of any other base by multiplying by a constant, stating the base is irrelevant in Big O notation, where constants are ignored.
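To make the halving concrete, here is a binary search sketch in Python instrumented to count its steps; the count tracks log base 2 of n because every comparison discards half of the remaining range:

    import math

    def binary_search(arr, target):
        lo, hi, steps = 0, len(arr) - 1, 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2          # split the remaining range in half
            if arr[mid] == target:
                return mid, steps
            elif arr[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, steps

    n = 10**6
    _, steps = binary_search(list(range(n)), n - 1)
    print(steps, math.log2(n))    # about 20 steps vs log2(10^6) ~ 19.93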
Coming to the answer to your second question: Big O is only about the upper bound on a function. f(n) = O(g(n)) means there are positive constants c and k such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ k.
The importance of k is that the bound only has to hold after k: beyond that point the Big O stays true, no matter what value of n. If we can't fix a k, we cannot say that the growth rate will always stay below the function mentioned in O(...). The importance of c is that it lets us ignore constant factors: it is the function inside O(...) that really matters.
Omega is simply the inversion of Big O: if f(n) = O(g(n)), then g(n) = Ω(f(n)). In other words, Ω(...) is about your function staying above what is mentioned in Ω(...), for another constant k and another constant c.
Finally, big theta is about finding a mathematical function g(n) that grows at the same rate as your given function f(n). But how do you prove that it grows the same as your function? By using two constants.

Since it grows at the same rate as your given function, you should be able to pick two constants c1 and c2 such that c1*g(n) stays below f(n) and c2*g(n) stays above f(n).

The idea behind big theta is to provide a function with the same rate of growth. Note that there may be no single constant c that makes c*g(n) and f(n) overlap exactly; nobody is concerned with that. The only concern is to be able to sandwich f(n) between c1*g(n) and c2*g(n), so that we can confidently say we found the rate of growth of f(n).
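For instance (a hand-picked illustration): take f(n) = 3n^2 + 5n and g(n) = n^2; then c1 = 3 and c2 = 4 work for all n >= 5, which can be checked in a couple of lines of Python:

    # sandwich c1*g(n) <= f(n) <= c2*g(n) for all tested n >= 5
    f = lambda n: 3 * n**2 + 5 * n
    g = lambda n: n**2
    c1, c2, k = 3, 4, 5
    assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(k, 10**5))
    print("3n^2 + 5n is sandwiched between 3n^2 and 4n^2 for n >= 5")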
How to apply the above learned ideas to your question?
Let's take each of them one at a time. You can use some online tool to plot these functions and see first-hand how they behave as you go along the number line.
f(n) = n - 100 and g(n) = n - 200
Here, the rate of growth can be found by differentiating both functions with respect to n: d(f(n))/dn = d(g(n))/dn = 1. Therefore, even though the running times of f(n) and g(n) may be different, their rates of growth are the same. Can you pick c1 and c2 such that c1*g(n) < f(n) < c2*g(n)?
f(n) = 100n + log(n) and g(n) = n + (log(n))^2
Differentiate and tell if you can relate the functions as Big O or Big Theta or Big Omega.
f(n) = log (2n) and g(n) = log (3n)
Same as above.
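Before proving anything, you can get a rough numerical feel for all three pairs with a short Python sketch (an illustration; natural log is used here, since, as noted above, the base only changes a constant):

    import math

    pairs = [
        ("(a)", lambda n: n - 100,               lambda n: n - 200),
        ("(b)", lambda n: 100 * n + math.log(n), lambda n: n + math.log(n) ** 2),
        ("(c)", lambda n: math.log(2 * n),       lambda n: math.log(3 * n)),
    ]
    for name, f, g in pairs:
        print(name, [round(f(n) / g(n), 3) for n in (10**3, 10**6, 10**9)])

A ratio f(n)/g(n) that settles toward a positive constant hints that the two functions share a rate of growth. This is only a hint, though; the c1/c2 argument above is the actual proof.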
My experience: try to compare the growth rates of a lot of different functions. Eventually you will get the hang of it for all of them and it will become very intuitive. Given concentrated effort for a week or two, this concept cannot remain esoteric for anyone.
First of all, let's go through the notations. I'm assuming from the questions that
O(f) is upper bound,
Ω(f) is lower bound, and
Θ(f) is both
For O(log(N)) in this case, the base generally isn't given because the general shape of log(N) is the same regardless of the base: changing the base only rescales the curve by a constant factor.
So if you've worked through the binary search algorithm (I suggest you do this if you haven't), you should find that the worst case scenario (upper bound) is log_2(N). So given N terms, it will take "log_2(N) computations" in the worst case in order to find the term.
For your second question,
You are simply comparing the computational run-times of f and g.

f = O(g)

is when g is an upper bound on f, i.e., f will definitely not take longer to compute than a constant multiple of g. Alternately,

f = Ω(g)

is when g is a lower bound on f, i.e., f will definitely take at least as long to compute as a constant multiple of g. Lastly,

f = Θ(g)

is when g is both an upper and a lower bound on f, i.e., the run times grow at the same rate.
You need to compare the two functions for each part and determine which will take longer to compute. As Mitch mentioned, you can check the question linked above, where this has already been answered.
The reason the base of the log is never specified is because it is actually completely irrelevant. You can convince yourself of this in three steps:
First, recall that log_2(x) = log_10(x)/log_10(2). But also recall that log_10(2) is a constant, which we'll call k2, so really, log_2(x) * k2 = log_10(x)
Second, recall that this is not unique to logs of base 2. The constants of conversion vary, but all the log functions are related to each other through multiplicative constants.
(You can prove this to yourself if you understand the mathematics behind log functions, or you can just work it up very quickly on a spreadsheet-- have a column of log_2(x) and a column of log_3(x) and divide them.)
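The spreadsheet experiment takes only a few lines of Python, for example:

    import math

    for x in [10, 100, 10**6]:
        print(x, math.log2(x) / math.log(x, 3))

Every row prints the same value, log(3)/log(2) ~ 1.585: the two columns differ only by a multiplicative constant.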
Finally, remember that in Big Oh notation, constants basically drop out as being irrelevant. Trying to draw a distinction between O(log_2(N)) and O(log_3(N)) is like trying to draw a distinction between O(N) and O(2N). It is a distinction that does not matter because log_2 and log_3 are related through a constant.
Honestly, the base of the log does not matter.
When referring to big O, what is considered the tight bound?
For example, in the function f(n) = 10*c7*n^3 + 10*c4*n*log(n) // This function represents the number of operations in terms of n //
according to this example, the tight bound for big O would be O(n^3).
In this example, why is n^3 considered the tight bound for Big O?
What characteristics does a tight bound exhibit?
Also, what is a tilde value?
according to this example, the tilde value for this function would be 10*c7*n^3.
I have searched online, but I can't seem to find anything useful. I was hoping someone could clear this up.
The tight bound is that term which best captures the overall growth characteristics of your function as you increase the value of n.
In other words, 10*c7*n^3 + 10*c4*n*log(n) is O(n^3) because the term with n^3 in it has the greatest effect on the computing time of the function as n increases. All of the other terms in the function have an insignificant effect on the computing time, compared to the cubed term.
What you call the tilde value appears to be the leading term kept together with its constant coefficient, i.e. the term containing the highest power of n. (The "terms" are those parts of the function separated by a + or - sign.)
The term "tight bound" says that both f(n)/n^3 and n^3/f(n) are bounded. f(n) ~ 10*c7*n^3 says that f(n)/(10*c7*n^3) tends to 1 as n tends to infinity.
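A quick numerical check of that limit, picking c7 = c4 = 1 arbitrarily (any positive constants behave the same):

    import math

    c7, c4 = 1.0, 1.0
    f = lambda n: 10 * c7 * n**3 + 10 * c4 * n * math.log(n)
    for n in [10**2, 10**4, 10**6]:
        print(n, f(n) / (10 * c7 * n**3))    # tends to 1 as n grows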
Some standard books on algorithms give this definition:
0 ≤ f(n) ≤ c⋅g(n) for all n > n0
While defining big-O, can anyone explain to me what this means, using a strong example which can help me to visualize and understand big-O more precisely?
Assume you have a function f(n) and you are trying to classify it - is it a big O of some other function g(n).
The definition basically says that f(n) is in O(g(n)) if there exist two constants c and N such that
f(n) <= c * g(n) for each n > N
Now, let's understand what it means.
Start with the n > N part: it means we do not "care" about low values of n, we only care about high values; and if some (finite number of) low values do not follow the criterion, we can silently ignore them by choosing N bigger than them.
Have a look at the following example: though for low values of n we have n^2 < 10*n*log(n) (with log base 10 here), n^2 quickly catches up, and after N = 10 we get that for all n > 10 the claim 10*n*log(n) < n^2 holds. Thus 10*n*log(n) is in O(n^2).
The constant c means we can also tolerate a constant multiplicative factor and still accept the behavior as desired. This is useful, for example, to show that 5n is O(n): without the constant we could never find an N such that for each n > N: 5n < n, but with the constant c we can use c = 6, show that 5n < 6n, and get that 5n is in O(n).
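Both examples are easy to check numerically; here is a minimal Python sketch (log base 10, matching the example above):

    import math

    f = lambda n: 10 * n * math.log10(n)
    g = lambda n: n**2
    # with c = 1 and N = 10, the definition holds for every tested n > N
    assert all(f(n) <= 1 * g(n) for n in range(11, 10**5))
    # and for 5n = O(n), c = 6 works with any N >= 1
    assert all(5 * n <= 6 * n for n in range(1, 10**5))
    print("both bounds hold on the tested range")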
This question is a math problem, not an algorithmic one.
You can find a definition and a good example here: https://math.stackexchange.com/questions/259063/big-o-interpretation
As @Thomas pointed out, Wikipedia also has a good article on this: http://en.wikipedia.org/wiki/Big_O_notation
If you need more details, try to ask a more specific question.