When referring to big O, what is considered the tight bound?
For example, take the function f(n) = 10c7*n^3 + 10c4*n*log(n) // This function represents the number of operations in terms of n //
According to this example, the tight bound for big O would be O(n^3).
In this example, why is n^3 considered the tight bound for big O?
What characteristics does a tight bound exhibit?
Also, what is a tilde value?
According to this example, the tilde value for this function would be 10c7*n^3.
I have searched online, but I can't seem to find anything useful. I was hoping someone could clear this up.
The tight bound is that term which best captures the overall growth characteristics of your function as you increase the value of n.
In other words, 10c7*n^3 + 10c4*n*log(n) is O(n^3) because the term with n^3 in it has the greatest effect on the computing time of the function as n increases. All of the other terms in the function have an insignificant effect on the computing time compared to the cubed term.
What you call the tilde value appears to be simply the leading term, including its constant factor; i.e. the term containing the highest power of n. (The "terms" are those parts of the function separated by a + or - sign.)
The term tight bound says that both f(n)/n^3 and n^3/f(n) are bounded as n grows. f(n) ~ 10c7*n^3 says that f(n)/(10c7*n^3) tends to 1 as n tends to infinity.
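To see the tilde claim numerically, here is a minimal Python sketch. The constants c7 and c4 are set to 1 purely for illustration, since their actual values are not given; the ratio f(n)/(10c7*n^3) approaches 1 as n grows:

import math

c7 = c4 = 1  # placeholder constants, chosen only for illustration

def f(n):
    return 10 * c7 * n**3 + 10 * c4 * n * math.log(n)

for n in (10, 1000, 100000):
    print(n, f(n) / (10 * c7 * n**3))  # ratio tends to 1 as n grows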
In my assignment I was given some information about an algorithm in form of statements. One of these statements was: "the best-case running time of Algorithm B is Ω(n^2);".
I was under the impression that best-case running time of algorithms is always either lower-bound, upper-bound or tight-bound. I am wondering if an algorithm such as this can also have an upper-bound of its best-case running time. If so, what are some examples of algorithms where this occurs?
A case is a class of inputs for which you consider your algorithm's performance.
The best case is the class of inputs for which your algorithm's runtime has the most desirable asymptotic bounds. Typically this might mean there is no other class which gives a lower Omega bound.
The worst case is the class of inputs for which your algorithm's runtime has the least desirable asymptotic bounds. Typically this might mean there is no other class which gives a higher O bound.
The average case has nothing to do with desirability of bounds but looks at the expected value of the runtime given some stated distribution of inputs.
Etc. You can define whatever cases you want.
Imagine the following algorithm:
Weird(a[1...n])
    if n is even then
        flip a coin
        if heads then bogosort(a[1...n])
        else if tails then bubble_sort(a[1...n])
    else if n is odd then
        flip a coin
        if heads then merge_sort(a[1...n])
        else if tails then counting_sort(a[1...n])
Given a list of n integers each between 1 and 10, inclusive, this algorithm sorts the list.
In the best case, n is odd. A lower bound on the best case is Omega(n) and an upper bound on the best case is O(n log n).
In the worst case, n is even. A lower bound on the worst case is Omega(n^2) and there is no upper bound on the worst case (since bogosort may never finish, despite that being terribly unlikely).
To define an average case, we would need to define probabilities. Assuming even/odd are equally likely for all n, then there is no upper bound on the average case and a lower bound on the average case is Omega(n^2), same as for the worst case. To get a different bound for the average case, we'd need to define the distribution so that n being even gets increasingly unlikely for larger lists. Maybe the only even-length list you typically pass into this algorithm has length 2, for example.
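For concreteness, here is a runnable Python sketch of Weird under the stated assumptions (integer values between 1 and 10). Python's built-in sorted stands in for merge_sort, and note that the bogosort branch may, in principle, never halt:

import random

def bogosort(a):
    # Keep shuffling until sorted; no upper bound on running time.
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a

def bubble_sort(a):
    # Classic O(n^2) bubble sort.
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def counting_sort(a):
    # O(n) because values are assumed to lie between 1 and 10, inclusive.
    counts = [0] * 11
    for x in a:
        counts[x] += 1
    return [v for v in range(1, 11) for _ in range(counts[v])]

def weird(a):
    if len(a) % 2 == 0:  # n is even
        return bogosort(a) if random.random() < 0.5 else bubble_sort(a)
    else:                # n is odd; sorted() stands in for merge_sort
        return sorted(a) if random.random() < 0.5 else counting_sort(a)

print(weird([3, 1, 2]))  # odd length: takes the O(n) or O(n log n) path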
I had previously mistakenly asked about str.count, when I really meant str.length. Thanks to the responders for getting back to me.
Is this a constant-time operation or linear-time? I know in Java it's constant time and in C it's linear time, according to "In Java, for a string x, what is the runtime cost of s.length()? Is it O(1) or O(n)?", but I'm not sure what the case is in Ruby.
String#count counts the number of occurrences of characters from (a set of) character sets in the string. In order to do this, it must compare each character against the predicate set; there is no other way.
It cannot possibly be faster than O(n). The trivial implementation is O(n), so in order to make it slower than O(n), you have to be extra stupid and do extra work. So, since it cannot be faster than O(n), and we can assume that nobody would be stupid or malicious enough to deliberately make it slower than O(n), we can safely conclude that it is O(n).
However, that is just a conclusion. It is no guarantee. The Ruby Language Specification does not make performance guarantees. But you can be pretty sure that a Ruby implementation where it is not O(n) would be ridiculed and simply not used and die.
The complexity is O(n + m)
Where n is the size of the string and m is the number of character set parameters.
O(m) for the creation of the lookup table/hash from the arguments
n × O(1) = O(n) for the comparison of the input string against the lookup table/hash (n lookups, each O(1))
Depending on receiver and arguments, either n or m can be the dominating factor.
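The two-phase strategy described above can be sketched in Python (an illustration of the idea, not Ruby's actual C implementation):

def count_chars(s, charset):
    # O(m): build a constant-time lookup set from the m characters given
    lookup = set(charset)
    # O(n): one O(1) membership test per character of s
    return sum(1 for ch in s if ch in lookup)

print(count_chars("hello world", "lo"))  # => 5, like "hello world".count("lo") in Ruby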
If, however, you mean String#length, then it is O(1).
What's the time complexity of the following program?

sum = 0;
for (i = 1; i <= 5; i++)
    sum = sum + i;

And how can this complexity be expressed in terms of log? I would highly appreciate it if someone could explain the complexity step by step: how to express it in big O notation, and where log n comes in.
[Edited]
sum=0;     // 1 time
i=1;       // 1 time
i<=5;      // 6 times
i++        // 5 times
sum=sum+i; // 5 times

Is the time complexity 18? Is that correct?
Preliminaries
Time complexity isn't usually expressed in terms of a specific integer, so the statement "The time complexity of operation X is 18" isn't clear without a unit, e.g., 18 "doodads".
One usually expresses time complexity as a function of the size of the input to some function/operation.
You often want to ignore the specific amount of time a particular operation takes, due to differences in hardware or even differences in constant factors between different languages. For example, summation is still O(n) (in general) in C and in Python (you still have to perform n additions), but differences in constant factors between the two languages will result in C being faster in terms of absolute time the operation takes to halt.
One also usually assumes that "Big-Oh", e.g. O(f(n)), refers to the "worst-case" running time of an algorithm. There are other symbols used to study lower bounds and stricter upper bounds.
Your question
Instead of summing from 1 to 5, let's look at summing from 1 to n.
The complexity of this is O(n) where n is the number of elements you're summing together.
Each addition (with +) takes constant time, which you're doing n times in this case.
However, this particular operation that you've shown can be accomplished in O(1) (constant time), because the sum of the numbers from 1 to n can be expressed as a single arithmetic operation. I'll leave the details of that up to you to figure out.
As far as expressing this in terms of logarithms: not exactly sure why you'd want to, but here goes:
Because exp(log(n)) is n, you could express it as O(exp(log(n))). Why would you want to do this? O(n) is perfectly understandable without needing to invoke log or exp.
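To make the contrast concrete, here is a small Python sketch of both approaches; the single arithmetic operation alluded to above is the classic closed form n(n+1)/2:

def sum_loop(n):
    # O(n): one addition per term
    total = 0
    for i in range(1, n + 1):
        total = total + i
    return total

def sum_closed_form(n):
    # O(1): a single arithmetic expression, n(n+1)/2
    return n * (n + 1) // 2

print(sum_loop(5), sum_closed_form(5))  # both print 15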
First of all, the loop runs 5 times for 5 inputs, hence it has a time complexity of O(n). I am assuming here that the values of i are the inputs for sum.
Secondly, you can't just state time complexity "in log terms"; it should always be expressed in big O notation, where a logarithm may appear inside. For example, if you perform a binary search, then the worst-case time complexity of that algorithm is O(log n), because you get the result in, say, 3 iterations when the input array has 8 elements.
Complexity = log2(8) = 3
Now here your complexity is in terms of log.
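As an illustration, here is a minimal Python binary search on a sorted list; the search range halves each iteration, which is where the log comes from:

def binary_search(a, target):
    # a must be sorted; each iteration halves [lo, hi], so at most ~log2(n) iterations
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11, 13, 15], 11))  # => 5, within log2(8) = 3 iterations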
Let f(n) = ((n^2 + 2n)/n + (1/1000)*n^(3/2)) * log(n).
The time complexity for this function could be both O(n^2*log(n)) and O(n^(3/2)*log(n)).
How is this possible? I thought the dominating term here was n^2 (times log(n)), and therefore it should be O(n^2*log(n)) only. Big O notation and time complexity measures feel so ambiguous.
Big O notation isn't that confusing. It defines an upper bound on the running time of an algorithm; hence, if O(f(n)) is a valid upper bound, every other O(g(n)) such that g(n) > f(n) is definitely valid too, since if your code runs in less than f(n), it will certainly also run in less than g(n).
In your case, since O(n^2*log(n)) dominates O(n^(3/2)*log(n)), it's a valid upper bound too, even if it's less strict. Furthermore, you could say that your algorithm is O(n^3). The question is, which one of those Big O bounds gives us more information about the algorithm? The obvious answer is the tightest one, and that's the reason why we usually indicate that.
To make things clear: let's say you can throw a ball 10 m up in the air. Then you can say that you can't throw it higher than 10 m, OR you could say you can't throw it higher than 15 m. The fact that the first one is a stricter upper bound doesn't make the second one a false statement.
"Big O notation" being applied on the sum always leaves dominant (the biggest ones) terms only. In case of one independent variable one term only will survive. In your case
O(n^2*log(n) + n^(3/2)*log(n)) = O(n^2*log(n))
since the first term grows faster than the second:
lim(term1/term2) = lim(n^2*log(n) / (n^(3/2)*log(n))) = lim(n^(1/2)) = inf
But it seems that you made an arithmetic error in your computations:
(n^2 + 2n)/n = n + 2, not n^2 + 2n
In that case
O(n*log(n) + 2*log(n) + n^(3/2)*log(n))
the last term, n^(3/2)*log(n), is the biggest one:
O(n*log(n) + 2*log(n) + n^(3/2)*log(n)) = O(n^(3/2)*log(n))
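A quick numeric sanity check in Python (purely illustrative): the ratio f(n)/(n^(3/2)*log(n)) settles toward the constant 1/1000, which is what makes n^(3/2)*log(n) the dominant term.

import math

def f(n):
    return ((n**2 + 2*n) / n + n**1.5 / 1000) * math.log(n)

for n in (10**4, 10**6, 10**8):
    print(n, f(n) / (n**1.5 * math.log(n)))  # tends to 1/1000 as n grows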
Some standard books on algorithms give this definition:
0 ≤ f(n) ≤ c⋅g(n) for all n > n0
In the definition of big-O, can anyone explain what this means, using a concrete example that can help me visualize and understand big-O more precisely?
Assume you have a function f(n) and you are trying to classify it: is it big O of some other function g(n)?
The definition basically says that f(n) is in O(g(n)) if there exist two constants c, N such that
f(n) ≤ c⋅g(n) for each n > N
Now, let's understand what it means.
Start with the n > N part: it means we do not "care" about low values of n, only about high values, and if some (finite number of) low values do not satisfy the criterion, we can silently ignore them by choosing N bigger than all of them.
Have a look at the following example:
Though we can see that for low values of n we have n^2 < 10n*log(n), n^2 quickly catches up, and after N = 10 (taking log base 10) we get that for all n > 10 the claim 10n*log(n) < n^2 is correct; thus 10n*log(n) is in O(n^2).
The constant c means we can also tolerate a constant-factor multiple and still accept it as the desired behavior. This is useful, for example, to show that 5n is O(n): without the constant we could never find an N such that 5n < n for each n > N, but with the constant c we can use c = 6, show that 5n < 6n, and get that 5n is in O(n).
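To tie this back to the definition, here is a small Python check (illustrative only) that 10n*log10(n) ≤ c⋅n^2 holds with c = 1 for a sample range of n above N = 10:

import math

c, N = 1, 10

def f(n):
    return 10 * n * math.log10(n)

def g(n):
    return n * n

# f(n) <= c * g(n) should hold for every n > N
print(all(f(n) <= c * g(n) for n in range(N + 1, 100000)))  # => True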
This question is a math problem, not an algorithmic one.
You can find a definition and a good example here: https://math.stackexchange.com/questions/259063/big-o-interpretation
As @Thomas pointed out, Wikipedia also has a good article on this: http://en.wikipedia.org/wiki/Big_O_notation
If you need more details, try to ask a more specific question.