Given a list of complexities:
How do you order them by their Big O growth rate?
I think my attempted answer is below.
My question now is: how does log(n!) become n log(n)? Also, I don't know if I got the n! and (n-1)! right. Is it possible for c^n to be bigger than n!, say when c > n?
In general, how do I visualize such Big O problems? It took me quite long to do this compared to coding so far. Are there any resources, videos, MIT OpenCourseWare material, something with an explanation?
You might want to see how the functions grow. Here's a quick plot from Wolfram Alpha:
link
In general, n^n grows much faster than c^n for any n greater than some n_0 (because n will overtake c at some point, even if c is extremely large). log grows much more slowly than quadratic or exponential functions, and more slowly than linear as well.
For O(log(n!)) = O(n log n), there is something called Stirling's Approximation, but you don't strictly need it. It boils down to seeing that n! = O(n^n): since n! = n*(n-1)*(n-2)*...*2*1 and n^n = n*n*n*...*n, n^n is an upper bound. A matching lower bound on log(n!) can be proven as well (for example via n! >= (n/2)^(n/2)), but you don't need it here.
Since log(n^n) = n log n by the log rules, O(log(n!)) = O(log(n^n)) = O(n log n).
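If you want a numeric sanity check (a minimal Python sketch of my own, not a proof), compare ln(n!) with n*ln(n); the ratio slowly creeps towards 1:

import math

for n in (10, 100, 1000, 10**6):
    ln_factorial = math.lgamma(n + 1)   # ln(n!) via the log-gamma function
    n_ln_n = n * math.log(n)            # the log base only changes a constant factor
    print(n, ln_factorial / n_ln_n)     # ratio tends to 1, so log(n!) = Theta(n log n)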
I saw in one of the videos (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if, say, f(n) = 2n + 3, then Big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how do I know what the exact value of the upper bound is? Because in 2n + 3 we drop the 2 (as it is a constant factor) and the 3 (because it is also a constant). So if I take n = 1, I can't say that g(n) = n is an upper bound at n = 1:
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know this is only a partial (and possibly wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n+3 has the same growth rate as f(n) = n
If you plot the functions, you will see that both have the same linear growth; and as n -> infinity, the constant factor between the two stops mattering.
In Big O notation, f(n) = 2n+3 at n=1 means nothing; you need to look at the trend, not discrete values.
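If it helps, here is a tiny Python sketch (my own illustration) of why f(n) = 2n + 3 is treated as O(n): the ratio f(n)/n settles near the constant 2, so c*n with a large enough constant c bounds f from above.

def f(n):
    return 2 * n + 3

for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, f(n), f(n) / n)   # ratios: 5.0, 2.3, 2.03, 2.0003, 2.000003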
As a developer, you will consider big-O as a first indication for deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2), but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So the big-O notation will help you to classify the problem better and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, then you will need to weigh them using a different criterion.
I have a problem that my professor gave us:
Algorithms A and B spend exactly TA(n) = 0.1n^2*log10(n) and TB(n) = 2.5n^2 microseconds, respectively, for a problem of size n. Choose the algorithm which is better in the Big-Oh sense, and find a problem size n0 such that for any larger size n > n0 the chosen algorithm outperforms the other. If your problems are of size n ≤ 10^9, which algorithm will you recommend?
At first, I thought algorithm A would be better in the Big-Oh sense, but his answer says B is better. My reasoning for A being better was that it grows more slowly than B. Am I correct, or is my professor correct?
Here's his answer:
In the Big-Oh sense, algorithm B is better. It outperforms algorithm A when TB(n) ≤ TA(n), that is, when 2.5n^2 ≤ 0.1n^2*log10(n). This inequality reduces to log10(n) ≥ 25, or n ≥ n0 = 10^25. If n ≤ 10^9, the algorithm of choice is A.
Is he correct or am I correct?
B is better in big-O terms because it takes time proportional to n squared, while A is proportional to n squared times log n, which is bigger.
So for large enough values of n, B will be faster.
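A quick Python check of the numbers (my own sketch; times are in microseconds, as in the problem statement) makes the crossover at n0 = 10^25 visible:

import math

def TA(n):
    return 0.1 * n**2 * math.log10(n)   # microseconds

def TB(n):
    return 2.5 * n**2                   # microseconds

for n in (10**9, 10**24, 10**26):
    print(n, "A faster" if TA(n) < TB(n) else "B faster")
# prints: A faster, A faster, B faster -- the switch happens at n0 = 10^25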
I have come across some difficulties while doing this question. The question is: sort the following functions by order of growth, from slowest to fastest:
7n^3 − 10n, 4n^2, n, n^8621909, 3n, 2^(log log n), n log n, 6n log n, n!, 1.1^n
And my answer to the question is
n, 3n
n log n, 6n log n
4n^2 (which is of order n^2)
7n^3 − 10n (which is of order n^3)
n^8621909
2^(log log n)
1.1^n (which equals roughly 2^(0.1375n))
n!
Just wondering: can I assume that 2^(loglogn) has the same growth as 2^n? Should I take 1.1^n as a constant?
Just wondering can i assume that 2^(loglogn) has the same growth as
2^n?
No. Assuming that the logs are in base 2, 2^log(n) mathematically equals n, so 2^(log(log(n))) = log(n). And of course that does not have the same growth as 2^n.
Should I take 1.1^n as a constant?
No. 1.1^n is still an exponential in n and cannot be ignored - it is certainly not a constant.
The correct order is:
2^(log log n) = log(n)
n,3n
nlogn, 6nlogn
4n^2
7n^3 – 10n
n^8621909
1.1^n
n!
I suggest taking a look at the formal definition of the Big-O notation. But, put simply, the functions at the top of the list go to infinity more slowly than the functions lower down.
If, for example, you plot two of those functions on a graph, you will see that one function eventually overtakes the other and goes to infinity faster.
Take a look at this plot comparing n^2 with 2^n. You will notice that 2^n overtakes n^2 and heads to infinity faster.
You might also want to check the graph in this page.
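If you want to double-check the order numerically, here is a Python sketch of my own: compare the natural logs of the functions at one large n (a bigger ln(f(n)) means the function has grown further). At n = 10^10 every pair has already crossed over, so the sorted printout matches the order above:

import math

n = 10**10
funcs = {
    "2^(log log n)": math.log(math.log2(n)),       # 2^(log2 log2 n) = log2 n
    "n":             math.log(n),
    "3n":            math.log(3 * n),
    "n log n":       math.log(n * math.log2(n)),
    "6n log n":      math.log(6 * n * math.log2(n)),
    "4n^2":          math.log(4) + 2 * math.log(n),
    "7n^3 - 10n":    math.log(7 * n**3 - 10 * n),
    "n^8621909":     8621909 * math.log(n),
    "1.1^n":         n * math.log(1.1),
    "n!":            math.lgamma(n + 1),            # ln(n!)
}
for name, ln_value in sorted(funcs.items(), key=lambda item: item[1]):
    print(f"{name:>14}   ln(f(n)) ~ {ln_value:.3g}")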
2^log(x) = x, so 2^(log(log(n))) = log(n), which grows far more slowly than 2^n.
1.1^n is definitely not a constant: 1.1^n tends to infinity, and a constant obviously does not.
Okay, so I have this project I have to do, but I just don't understand it. The thing is, I have two algorithms: O(n^2) and O(n*log2n).
Anyway, I found out from the project info that if n < 100, then O(n^2) is more efficient, but if n >= 100, then O(n*log2n) is more efficient. I'm supposed to demonstrate this with an example, using numbers and words or by drawing a picture. But the thing is, I don't understand this and I don't know how to demonstrate it.
Anyone here that can help me understand how this works?
Good question. Actually, I always show these three plots (of N, N*log(N) and N^2) over increasing ranges:
n = [0, 10]
n = [0, 100]
n = [0, 1000]
So, O(N*log(N)) is far better than O(N^2). It is much closer to O(N) than to O(N^2).
But your O(N^2) algorithm is faster for N < 100 in real life. There are a lot of reasons why it can be faster: maybe better memory allocation or other "non-algorithmic" effects, maybe the O(N*log(N)) algorithm requires a data-preparation phase, or maybe the O(N^2) iterations are simply shorter. Anyway, Big-O notation is only appropriate for large enough Ns.
If you want to demonstrate why one algorithm is faster for small Ns, you can measure the execution time of one iteration and the constant overhead for both algorithms, then use them to correct the theoretical plot:
Example
Or just measure execution time of both algorithms for different Ns and plot empirical data.
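For instance, here is a rough Python timing sketch of my own (the two "algorithms" are just a quadratic pairwise scan and a sort-based scan for the same duplicate-finding task; they stand in for whatever O(N^2) and O(N*log(N)) algorithms you actually have):

import random
import time

def has_duplicate_quadratic(xs):     # O(n^2): compare every pair
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_sorting(xs):       # O(n log n): sort, then look at neighbours
    ys = sorted(xs)
    return any(a == b for a, b in zip(ys, ys[1:]))

for n in (50, 100, 1000, 5000):
    data = random.sample(range(10 * n), n)   # n distinct values in random order
    t0 = time.perf_counter(); has_duplicate_quadratic(data)
    t1 = time.perf_counter(); has_duplicate_sorting(data)
    t2 = time.perf_counter()
    print(n, t1 - t0, t2 - t1)               # plot these to get the empirical curves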
Just ask wolframalpha if you have doubts.
In this case, it says
lim (n -> infinity) n*log(n) / n^2 = 0
Or you can also calculate the limit yourself:
lim n*log(n)/n^2 = lim log(n)/n = (by L'Hôpital) lim (1/n)/1 = lim 1/n = 0
That means n^2 grows faster, so n log(n) is smaller (better), when n is high enough.
Big-O notation is a notation of asymptotic complexity. This means it calculates the complexity when N is arbitrarily large.
For small Ns, a lot of other factors come in. It's possible that one algorithm does O(n^2) loop iterations, each of them very short, while another algorithm does O(n) iterations that are each very long. In that case, for large Ns the linear algorithm will be faster, and for small Ns the quadratic one will be faster.
So, for small Ns, just measure the two and see which one is faster. No need to go into asymptotic complexity.
Incidentally, don't write the base of the log. Big-O notation ignores constants - O(17 * N) is the same as O(N). Since log2 N is just ln N / ln 2, the base of the logarithm is just another constant factor and is ignored.
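A one-liner in Python (my own check) shows why the base is only a constant factor:

import math

for n in (10, 1000, 10**6):
    print(math.log2(n) / math.log(n))   # always 1/ln(2) ~ 1.4427, whatever n is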
Let's compare them,
On one hand we have:
n^2 = n * n
On the other hand we have:
nlogn = n * log(n)
Putting them side to side:
n * n versus n * log(n)
Let's divide by n which is a common term, to get:
n versus log(n)
Let's compare values:
n = 10          log(n) ~ 2.3
n = 100         log(n) ~ 4.6
n = 1,000       log(n) ~ 6.9
n = 10,000      log(n) ~ 9.2
n = 100,000     log(n) ~ 11.5
n = 1,000,000   log(n) ~ 13.8
So we have:
n >> log(n) for n > 1
n^2 >> n log(n) for n > 1
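The table above can be reproduced with a few lines of Python (natural log, which is what the ~ values use):

import math

for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,}   log(n) ~ {math.log(n):5.2f}   "
          f"n*log(n) = {n * math.log(n):.3g}   n^2 = {n * n:.3g}")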
Anyway, I find out in the project info that if n<100, then O(n^2) is
more efficient, but if n>=100, then O(n*log2n) is more efficient.
Let us start by clarifying what Big O notation means in the current context. From (source) one can read:
Big O notation is a mathematical notation that describes the limiting
behavior of a function when the argument tends towards a particular
value or infinity. (..) In computer science, big O notation is used to classify algorithms
according to how their run time or space requirements grow as the
input size grows.
Big O notation does not represent a function but rather a set of functions with a certain asymptotic upper-bound; as one can read from source:
Big O notation characterizes functions according to their growth
rates: different functions with the same growth rate may be
represented using the same O notation.
Informally, in computer-science time-complexity and space-complexity theories, one can think of the Big O notation as a categorization of algorithms with a certain worst-case scenario concerning time and space, respectively. For instance, O(n):
An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).
and O(n log n) as:
An algorithm is said to run in quasilinear time/space if T(n) = O(n log^k n) for some positive constant k; linearithmic time/space is the case k = 1 (source).
Mathematically speaking the statement
Which is better: O(n log n) or O(n^2)
is not accurate, since, as mentioned before, Big O notation represents a set of functions. Hence, a more accurate phrasing would have been "is O(n log n) contained in O(n^2)?". Nonetheless, such relaxed phrasing is typically used to express (for the worst-case scenario) how one set of algorithms behaves compared with another set as their input sizes increase. To compare two classes of algorithms (e.g., O(n log n) and O(n^2)), instead of
Anyway, I find out in the project info that if n<100, then O(n^2) is
more efficient, but if n>=100, then O(n*log2n) is more efficient.
you should analyze how both classes of algorithms behave as the input size (i.e., n) increases, for the worst-case scenario; that is, analyze n as it tends to infinity.
As @cem rightly points out, in the image "big-O denotes one of the asymptotically least upper-bounds of the plotted functions, and does not refer to the sets O(f(n))".
As you can see in the image, after a certain input size O(n log n) (green line) grows more slowly than O(n^2) (orange line). That is why (for the worst case) O(n log n) is more desirable than O(n^2): as the input size increases, the running time grows more slowly with the former than with the latter.
First, it is not quite correct to mix asymptotic complexity with a constraint on N. I.e., I can state:
O(n^2) is slower than O(n * log(n)), because the definition of Big O notation assumes that n grows without bound.
For a particular N it is possible to say which algorithm is faster by simply comparing N^2 * ALGORITHM_CONSTANT and N * log(N) * ALGORITHM_CONSTANT, where ALGORITHM_CONSTANT depends on the algorithm. For example, if we traverse the array twice to do our job, the asymptotic complexity will be O(N) and the ALGORITHM_CONSTANT will be 2.
Also, I'd like to mention that O(N * log2N), where I take log2N to mean the logarithm base 2, is actually the same as O(N * log(N)) because of the logarithm properties.
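A small Python sketch (the per-iteration costs 1 and 40 are made-up ALGORITHM_CONSTANTs, purely for illustration) of how those constants decide the winner for a particular N, even though the asymptotic verdict never changes:

import math

COST_QUAD, COST_NLOGN = 1, 40     # hypothetical per-iteration costs
for n in (10, 100, 1000, 10_000):
    t_quad = COST_QUAD * n * n
    t_nlogn = COST_NLOGN * n * math.log2(n)
    print(n, "N^2 algorithm wins" if t_quad < t_nlogn else "N*log(N) algorithm wins")
# with these constants the crossover sits somewhere between N = 100 and N = 1000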
We have two ways to compare two algorithms.
The first way is very simple: compare them by applying a limit. Let
T1(n) = running time of Algo1
T2(n) = running time of Algo2
lim (n -> infinity) T1(n)/T2(n) = m
(i) if m = 0, Algo1 is faster than Algo2;
(ii) if m = k for some nonzero constant k, both are of the same order;
(iii) if m = infinity, Algo2 is faster.
The second way is pretty simple compared to the first: take the log of both functions, but do not neglect multiplicative constants. For example, with
Algo1 = log n
Algo2 = sqrt(n)
substitute x = log n, so you are comparing x against sqrt(2^x) = 2^(x/2); any polynomial (or exponential) beats its log, so O(sqrt(n)) > O(log n).
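A rough Python illustration of the first method (my own sketch; T1 and T2 here are just example running-time functions):

import math

def T1(n):
    return n * math.log2(n)    # pretend Algo1 runs in n log n

def T2(n):
    return n ** 2              # pretend Algo2 runs in n^2

for n in (10**3, 10**6, 10**9):
    print(n, T1(n) / T2(n))    # ratio heads to 0, so by rule (i) Algo1 is faster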
I am a mathematician, so I will try to explain why n^2 is faster than n log n for small values of n, with a simple limit as n --> 0:
lim n^2 / (n log n) = lim n / log n = 0 / (-inf) = 0
So, for small values of n (in this case a "small value" is n in [1, 99]), n^2 is faster than n log n, because as we see the limit is 0.
But why n --> 0? Because n in an algorithm can take "big" values, so anything with n < 100 counts as a very small value, and we can take the limit n --> 0.
I am a little confused about the difference between T(N) and O(N) when dealing with time complexity. I have three algorithms with their respective T(N) equations, and I have to find the worst-case time complexity O(N); I'm not sure how that differs from T(N).
An example would be:
T(n) = 150⋅N² + 3⋅N + 11⋅log₂(N)
Would the O() be just O(N²)?
Also, should the algorithm with the lower order of complexity always be used? I have a feeling the answer is no but I'm not too sure as to why.
Would the O() be just O(N²)?
Yes.
For large N, the N² term will dominate the runtime, so that the other terms don't matter anymore.
E.g., for N=10, in your example, 150⋅N² is already 15000, while 3⋅N = 30 and 11⋅log₂(N) = 36.5, so the non-N² terms make up only 0.44% of the total number of steps.
For N=100, 150⋅N² = 1500000, 3⋅N = 300, 11⋅log₂(N) = 73.1, so the non-N² terms make up only 0.025% of the total number of steps.
So for higher N, the relevance of lower order terms diminishes.
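A tiny Python check of that arithmetic (my own sketch):

import math

def T(n):
    return 150 * n**2 + 3 * n + 11 * math.log2(n)

for n in (10, 100, 1000):
    lower_order = 3 * n + 11 * math.log2(n)
    print(n, f"{100 * lower_order / T(n):.3f}% of the steps come from the non-N^2 terms")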
Also, should the algorithm with the lower order of complexity always be used?
No. Because Big-O notation describes only the asymptotic behavior as N gets large, and does not include any constant factor overhead, you may often be better off using a less optimal algorithm with a lower overhead.
In your example, if I have an alternate algorithm for the problem you are trying to solve with runtime T'(N) = 10⋅N³, then for N=10 it will take only 10,000 steps, while your example would require about 15,067 steps. Basically, for any N ≤ 15 the T'(N) algorithm would be faster, and for any N ≥ 16 your T(N) algorithm would be faster. So if you know in advance that you will never see any N > 15, you are better off choosing the theoretically less efficient algorithm T'(N).
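A quick Python check of that cut-over (T' is the hypothetical 10⋅N³ alternative from the paragraph above):

import math

def T(n):
    return 150 * n**2 + 3 * n + 11 * math.log2(n)

def T_prime(n):
    return 10 * n**3

for n in range(13, 19):
    print(n, "T' faster" if T_prime(n) < T(n) else "T faster")
# T' wins up to N = 15; from N = 16 on, T wins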
Of course, in practice there are many other considerations as well, such as:
Availability of algorithms that you can reuse in libraries, on the web, etc.
If you implement it yourself: ease of implementation
Whether or not the algorithm scales to multiple cores or multiple machines easily
T(n) is the function representing the time taken for an input of size n. Big-oh notation is a classification of that. Like you said in your example, the big-oh of that example would be n^2.
Regarding your second question, big-oh notation indicates the algorithm you should use as the input size approaches infinity. Practically speaking, there are cases where you would never get an input large enough to compensate.
For example, if T1(n) = 999999999999*N and T2(n) = 2*N^2, eventually n is large enough for T2 to be greater than T1. However, for smaller sizes of n, T1 is greater. You can graph the functions, or even solve a system of equations to find out what size of n will make a difference.
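For these two, a quick Python check (my own sketch) finds the break-even point directly:

# 999999999999 * N = 2 * N^2  =>  N = 999999999999 / 2
break_even = 999_999_999_999 / 2
print(break_even)                  # ~5 * 10^11

for n in (10**11, 10**12):
    t1 = 999_999_999_999 * n
    t2 = 2 * n**2
    print(n, "T1 still bigger" if t1 > t2 else "T2 has overtaken T1")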
Note: Also keep in mind that big-oh is a bound on the complexity, which means that you can have a loose bound which is still correct.
T(n) is just a function: the time taken for an input of size n. O, or big oh, is a class of complexity.
T(n) can play the role of f(n) or of g(n) in the definition, for that matter.
I hope that is clear.
Big Oh is a measure of the time or space complexity of an algorithm.
You don't consider the lower-order terms in the complexity because, for very large values of n, the higher-order term is >> the lower-order terms.