I'm trying to determine whether it is O(1).
How can I prove it?
In complexity terms, log_b(n) is O(log(n)) for any base b. So is O(log_2(n) - log_3(n)) = O(log(n) - log(n)) = O(0) = O(1)? That doesn't seem like a strong proof.
Also, the difference doesn't converge to a constant asymptotically, so how can it be O(1)?
...your proof is wrong. O(log_2(n) - log_3(n)) = O(log(n)/log(2) - log(n)/log(3)) = O(log(n) * (1/log(2) - 1/log(3))) = O(C*log(n)) = O(log(n)), so the difference grows like log(n) and is not O(1).
Also, you might have a look at Wolfram Alpha
It gives some nice plots for log_2(n)-log_3(n)
And, even more importantly for you, it describes O(log_2(n)-log_3(n))
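As a quick numeric sanity check to go with that derivation (just a sketch in Python, not a proof): the ratio of log_2(n) - log_3(n) to log(n) should settle near the constant 1/ln(2) - 1/ln(3), roughly 0.53, which is exactly the C above, so the difference really does grow like log(n).

    import math

    # Ratio of (log2 n - log3 n) to ln n; should approach 1/ln 2 - 1/ln 3 ~ 0.53
    for n in [10, 10**3, 10**6, 10**9]:
        diff = math.log2(n) - math.log(n, 3)
        print(n, diff, diff / math.log(n))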
Related
https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/
Exponentials have greater complexity than polynomials as long as the coefficients are positive multiples of n
O(2ⁿ) is more complex than O(n⁹⁹), but O(2ⁿ) is actually less complex than O(1). We generally take 2 as base for exponentials and logarithms because things tends to be binary in Computer Science, but exponents can be changed by changing the coefficients. If not specified, the base for logarithms is assumed to be 2.
I thought O(1) was the simplest in complexity. Could anyone help me understand why O(2ⁿ) would be less complex than O(1)?
Erratum: the author made an obvious mistake and you caught it. It's not the only mistake in the article. For example, I would expect O(n*log(n)) to be the more appropriate complexity for sorting algorithms than the one they claim (quoted below). Otherwise, you'd be able to sort a set without even seeing all of the data.
"As complexity is often related to divide and conquer algorithms, O(log(n)) is generally a good complexity you can reach for sorting algorithms."
It might be worthwhile to try to contact the author and give him a heads up so he can correct it and avoid confusing anyone else with misinformation.
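For reference, here is a standard fact (not something from the article) that backs this up: a comparison sort has to distinguish between all n! possible orderings of the input, so in the worst case it needs at least

    log_2(n!) = Θ(n*log(n))

comparisons, and even just reading the n inputs already costs Ω(n), so O(log(n)) cannot possibly be enough.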
I'm trying to solve exercises about algorithm complexity, and in a case like the one in the title I'm not sure how to proceed.
I know that I have to find the fastest-growing term and drop the coefficient, unless the coefficient itself contains another term:
for example, (n^2)*logn has complexity O((n^2)*logn), while (n^2)*2 has complexity O(n^2).
What I did was simplify the function to n^2*(1/2 + logn), but after that I'm not sure whether the complexity would just be O(n^2*(1/2 + logn)) or if the result is something else.
Like Damien suggested in the comment, the factor in parentheses simplifies:
O(1/2 + logn) = O(logn)
so the overall complexity is O(n^2 * logn).
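A quick numeric sanity check of that simplification (a rough Python sketch, not a proof): the ratio of n^2*(1/2 + logn) to n^2*logn should approach 1 as n grows, which is why both end up in O(n^2 * logn).

    import math

    # Ratio should tend to 1, showing the 1/2 term becomes negligible next to log n
    for n in [10, 10**3, 10**6, 10**9]:
        f = n**2 * (0.5 + math.log(n))
        g = n**2 * math.log(n)
        print(n, f / g)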
My textbook is very poor at explaining how big-O works and gives few examples with little detail.
I have a few exercise questions I'm trying to attempt, but thanks to the textbook
I don't understand how to tackle them.
Here is one:
determine whether each of these functions is O(x)
f(x)=x^2+x+1
and
determine whether each of these functions is O(x^2)
f(x)=xlogx
How do I go about solving these questions? From what I have gathered online and from the textbook, I still find this very confusing.
Thanks in advance.
For the first one, x^2+x+1 is not O(x), as the first expression grows faster than the second no matter how large x gets. Typically, x^2+x+1 would be said to be O(x^2) ("quadratic"), as x^2 is the dominant term.
For the second one, xlogx is O(x^2) since the second expression grows at least as fast as the first. Example constraints would be c=1 and x>0. This is an overly-conservative expression though, and generally xlogx would be said to be O(xlogx) ("linearithmic"), its own complexity class.
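One way to make both of those statements precise, sketched from the standard definition (the witnesses below are chosen purely for illustration): f(x) = O(g(x)) means there are constants c > 0 and x0 such that f(x) <= c*g(x) for all x >= x0. Then:

    x*logx <= 1*x^2   for all x >= 1, since logx <= x              (c = 1, x0 = 1)
    x^2+x+1 <= c*x    would force x + 1 + 1/x <= c, which fails for any fixed c once x > c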
The Wikipedia article on Big-O notation lists other common named complexities. While there are general methods to analyze a function and determine its Big-O complexity, it's usually faster to just familiarize yourself with the common ones and recognize the most relevant one in an expression or algorithm. Usually you'll only encounter a few common complexity classes. In increasing order of complexity, these are (with a small numeric illustration after the list):
Constant (1)
Logarithmic (logx)
Linear (x)
Linearithmic (or often just "n-log-n") (xlogx)
Polynomial (x^c for c>1)
Exponential (c^x for c>1)
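Here is the promised numeric illustration (a small Python sketch, purely for intuition): printing each class at a few input sizes shows the same ordering emerging as x grows.

    import math

    # Constant, logarithmic, linear, linearithmic, polynomial (x^2), exponential (2^x)
    for x in [10, 20, 40]:
        print(x, 1, round(math.log2(x), 1), x, round(x * math.log2(x), 1), x**2, 2**x)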
What is the clear interpretation of this?
O(1)+O(2)+O(3)+O(4)+O(5).......O(n)
And how different is this from
sigma O(i) for 1<=i<=n?
CLRS says they are different but does not explain how they differ.
If I remember correctly, a sum of a fixed number of O-terms is dominated by the highest-order one, but here the number of terms itself grows with n, so
O(1)+O(2)+...+O(n)
adds up to
O(n^2)
rather than O(n). The distinction CLRS draws is that sigma O(i) contains a single anonymous function (a function of i), whereas the expanded sum suggests a separate anonymous function for every term, which has no clean interpretation on its own. Either way, this only makes sense if n is reasonably large; if n is small the whole complexity stuff makes little sense anyway.
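To see where the n^2 comes from, here is the arithmetic under the simplifying assumption that the constants hidden by all the O(i) terms can be replaced by one uniform constant c:

    sum_{i=1}^{n} c*i = c * n*(n+1)/2 = (c/2)*n^2 + (c/2)*n = O(n^2)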
Let's say A(n) is the average running time of an algorithm and W(n) is the worst. Is it correct to say that
A(n) = O(W(n))
is always true?
The Big O notation is kind of tricky, since it only defines an upper bound on the execution time of a given algorithm.
What this means is that if f(x) = O(g(x)), then for every other function h(x) such that g(x) < h(x) you also have f(x) = O(h(x)). The problem is, are those overestimated execution times useful? And the clear answer is: not at all. What you usually want is the "smallest"
upper bound you can get, but this is not strictly required by the definition, so you can play around with it.
You can get a stricter bound using the other notations, such as Big Theta, as you can read here.
So, the answer to your question is yes, A(n) = O(W(n)), but that doesn't give any useful information about the algorithm.
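Concretely, here is a short sketch of why the bound always holds (assuming running times are non-negative): for every input size n, the average running time over all inputs of that size can never exceed the maximum over those same inputs, so

    A(n) <= W(n) = 1 * W(n)   for all n >= 1,

which satisfies the definition of A(n) = O(W(n)) with the witnesses c = 1 and n0 = 1.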
If A(n) and W(n) are functions, then yes, you can make that statement in the general case - it follows from the formal definition of big-O.
Note that in big-O terms there is little sense in doing so, since it makes the real complexity harder to understand. (In general, the three cases - worst, average, best - exist precisely to describe the complexity more clearly.)
Yes, it is not a mistake to say so.
People use asymptotic notation to convey the growth of running time in specific cases in terms of input size. Comparing the average-case complexity with the worst-case complexity doesn't provide much insight into the function's growth in either of the cases.
Whilst it is not wrong, it fails to provide more information than what we already know.
I'm unsure of exactly what you're trying to ask, but bear in mind the below.
The typical algorithm used to show the difference between average and worst case running time complexities is Quick Sort with poorly chosen pivots.
On average, with a random sample of unsorted data, the runtime complexity is n log(n). However, with an already-sorted set of data where pivots are taken from the front or end of the list, the runtime complexity is n^2.
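To make that gap concrete, here is a rough Python sketch (the function name and input sizes are made up for illustration) that counts comparisons for a quicksort which always takes the first element as pivot; on a random permutation the count grows like n*log(n), while on already-sorted input it grows roughly like n^2/2.

    import random

    def quicksort_with_count(xs):
        # Quicksort using the FIRST element as pivot; returns (sorted list, comparison count).
        comparisons = 0

        def sort(lst):
            nonlocal comparisons
            if len(lst) <= 1:
                return lst
            pivot, rest = lst[0], lst[1:]
            comparisons += len(rest)                  # one comparison per remaining element
            left = [x for x in rest if x < pivot]
            right = [x for x in rest if x >= pivot]
            return sort(left) + [pivot] + sort(right)

        return sort(list(xs)), comparisons

    n = 500
    _, average_case = quicksort_with_count(random.sample(range(n), n))  # random permutation
    _, worst_case = quicksort_with_count(range(n))                      # already sorted
    print("random input:", average_case, "comparisons")   # grows like n*log2(n)
    print("sorted input:", worst_case, "comparisons")     # grows like n^2 / 2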