Between O(n log* n) and O(n)?

Is there any real complexity between O(n log* n) and O(n)?
I know that O(n sqrt(log* n)) and other similar functions sit between these two, but I mean something original that is not built out of log* n.

Yes, there is. The most famous example would be the Ackermann inverse function α(n), which grows much more slowly than log* n. It shows up in contexts like the disjoint-set forest data structure, where each operation has an amortized cost of O(α(n)).
You can think of log* n as the number of times you need to apply log to n to drop the value down to some fixed constant (say, 2). You can then generalize this to log** n, which is the number of times that you need to apply log* to n to drop the value down to 2. You can then define log*** n, log**** n, log***** n, etc. in similar ways. The value of α(n) is usually given as the number of stars you need to put in log**...* n to get the value down to 2, so it grows much more slowly than the iterated logarithm function.
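To make those definitions concrete, here is a small Python sketch of the iterated logarithm and its generalizations; the function names and the cutoff of 2 are my own choices, following the description above:

import math

def log_star(x, log=math.log2):
    """How many times log must be applied to x to drop it to <= 2."""
    count = 0
    while x > 2:
        x = log(x)
        count += 1
    return count

def iterated(f):
    """Turn a shrinking function f into the function that counts how many
    applications of f it takes to drop the argument to <= 2."""
    def g(x):
        count = 0
        while x > 2:
            x = f(x)
            count += 1
        return count
    return g

log_star_star = iterated(log_star)   # log** n, in the notation above

print(log_star(2 ** 65536))          # 4: 2^65536 -> 65536 -> 16 -> 4 -> 2
print(log_star_star(2 ** 65536))     # 2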
Intuitively speaking, you can think of log n as the inverse of exponentiation (repeated multiplication), log* n as the inverse of tetration (repeated exponentiation), log** n as the inverse of pentation (repeated tetration), etc. The Ackermann function effectively applies the n-th order generalization of exponentiation to the number n, so its inverse corresponds to how high of a level of exponentiation you need to apply to get to it. This results in an unbelievably slowly-growing function.
The most comically slowly-growing function I've ever seen used in a serious context is α*(n), the number of times you need to apply the Ackermann inverse function to a number n to drop it down to some fixed constant. It is almost inconceivable how large of an input you'd have to put into this function to get back anything close to, say, 10. If you're curious, the paper that introduced it is available here.

Related

Is there a way to simplify O(n^c) · O(c^n)?

I'm trying to figure out if there's a way to simplify O(n^c) · O(c^n), but couldn't come up with anything. Is this equal to infinity, or would it fall under one of the more common types of complexity (O(n), O(n^c), O(c^n), O(n log n), O(log n) or O(n!))?
O(n^c) · O(c^n) means the set of functions that can be expressed as f(n) · g(n) where f(n) ∈ O(n^c) and g(n) ∈ O(c^n).
Since n^c ∈ O((1+δ)^n) for arbitrarily small positive δ, we have O(1^n) ⊂ O(n^c) ⊂ O((1+δ)^n). (Do you see why?)
Additionally, it's a general property of big-O notation that O(foo) · O(bar) = O(foo · bar); so we can always move multiplication inside the O (or pull multiplication outside the O). (Do you see why?)
Combining these two observations, we have O(c^n) ⊂ O(n^c) · O(c^n) ⊂ O((c+δ)^n) for arbitrarily small positive δ. So you can reasonably simplify O(n^c) · O(c^n) to O((c+δ)^n). This is analogous to how O(n log n) is sometimes called "quasilinear", because it's a subset of O(n^(1+δ)) for arbitrarily small positive δ.
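As a quick numeric sanity check of that sandwich (the particular values of c, δ, and n below are arbitrary choices of mine), you can compare logarithms to avoid overflow:

import math

# Check that n^c * c^n eventually drops below (c + delta)^n.
c, delta = 2.0, 0.1

for n in (10, 100, 1000, 10_000):
    log_product = c * math.log(n) + n * math.log(c)   # log(n^c * c^n)
    log_upper = n * math.log(c + delta)               # log((c + delta)^n)
    print(n, log_product < log_upper)

# Prints False for the small values of n, then True once n is large enough.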
e^N grows faster than all of the functions on your list of candidate complexities.
To find out whether g(N) = e^N · N^e and f(N) = e^N have the same order of growth, we would need lim g(N)/f(N) = 1 as N → infinity. But g(N)/f(N) = N^e, which goes to infinity, so they do not.
So the given function doesn't fall under any of the more common types of complexity (O(n), O(n^c), O(c^n), O(n log n), O(log n)): it grows faster.
[Log-log plot comparing the order of growth of the functions mentioned above.]
Some algorithms have an exponential brute-force solution but can be restricted or simplified to become faster. The brute-force solution of the traveling salesman problem is O(n!), which is approximately O(N^N).
Nowadays an input with N = 30 is solvable in minutes for a 2^N algorithm. [Log-log plot of the behavior of c^n · n^c and c^N for N = 1..30.]
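To get a rough feel for the gap at N = 30 mentioned above, here is a quick check (my own illustration):

import math

# Rough magnitudes at N = 30, to show how far apart 2^N, N!, and N^N are.
N = 30
print(f"2^N = {2 ** N:.3e}")              # ~1.074e+09
print(f"N!  = {math.factorial(N):.3e}")   # ~2.653e+32
print(f"N^N = {N ** N:.3e}")              # ~2.059e+44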

Ambiguity about the Big-oh notation

I am currently trying to learn the time complexity of algorithms, big-O notation and so on. However, some point confuses me a lot. I know that most of the time, the input size of an array or whatever we are dealing with determines the running time of the algorithm. Let's say I have an unsorted array with size N and I want to find the maximum element of this array without using any special algorithm. I just want to iterate over the array and find the maximum element. Since the size of my array is N, this process runs in O(N) or linear time. Let M be an integer that is the square root of N, so N can be written as the square of M, that is, M*M or M^2. So, I think there is nothing wrong if I want to replace N with M^2. I know that M^2 is also the size of my array, so my big-O notation could be written as O(M^2). So, my new running time looks like quadratic time. Why does this happen?
You are correct: if you happen to have some variable M such that M^2 ≈ N always holds, you could say the algorithm runs in O(M^2).
But note that the algorithm now runs in time quadratic in M, not quadratic in the input; it is still linear in the size of the input.
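A tiny sketch to make the counting concrete (the value of M is an arbitrary choice of mine):

def find_max(arr):
    """One pass over the array: the loop body runs len(arr) times."""
    best = arr[0]
    for x in arr[1:]:
        if x > best:
            best = x
    return best

M = 100
arr = list(range(M * M))   # N = M^2 elements
print(find_max(arr))       # 9999, found after N = M*M iterations:
                           # linear in N, quadratic in M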
The important thing here is defining linear/quadratic, etc.
More precisely, you have to specify linear/quadratic, etc. with respect to something (N or M in your example). The most natural choice is to study the complexity with respect to the size of the input (N in your example).
Another trap, with big integers, is that the size of the input n is log(n) bits. So, for instance, if you loop over all integers smaller than n, your algorithm is not polynomial in the input size.
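For example (this illustration is mine, not the answerer's): trial-division primality testing loops over roughly n values, which is exponential in the number of bits of n.

def is_prime_trial_division(n):
    """About n iterations, which is exponential in the input size log2(n)."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

# Adding one bit to n roughly doubles the number of loop iterations,
# so the running time is exponential in the size of the input.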

Is `log(n)` base 10?

Still getting a grip on logarithms being the opposite of exponentials. (Would it also be correct to describe them as the inversion of exponentials?)
There are lots of great SO entries already on Big-O notation including O(log n) and QuickSort n(log n) specifically. Found some useful graphs as well.
In looking at Divide and Conquer algorithms, I'm coming across n log n, which I think is n multiplied by the value of log n. I often try concrete examples like 100 log 100, to help visualize what's going on in an abstract equation.
Just read that log n assumes base 10. Does n log n translate into:
"the number n multiplied by the amount 10 needs to be raised to the power of in order to equal the number n"?
So 100 log 100 equals 200 because 10 needs to be raised to the power of two to equal 100?
Does the base change as an algorithm iterates through a set? Does the base even matter if we're talking in abstractions anyway?
Yes, the base does change depending on the way the algorithm iterates, but it doesn't matter. As you might remember, changing the base of a logarithm means multiplying it by a constant. Since you mentioned that you have read about Big-O notation, you probably already know that constants do not make a difference (O(n) is the same as O(2n) or O(1000n)).
EDIT: to clarify something you said - "I'm coming across n log n, which I think is n multiplied by the value of log n". Yes, you are right. And if you want to know why it involves log n, then think of what divide and conquer algorithms do - they split the input (into two halves or four quarters or ten tenths, depending on the algorithm) during each iteration. The question is "How many times can that input be split up before the algorithm ends?" So you look at the input and try to find how many times you can divide it by 2, or by 4, or by 10, until the operation becomes meaningless (unless the purpose of the algorithm is to divide zero as many times as possible). Now you can give yourself concrete examples, starting with easy stuff like "How many times can 8 be divided by 2?" or "How many times can 1000 be divided by 10?"
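Coming back to the base-change point at the start of this answer, it's easy to check numerically that changing the base only multiplies by a constant (the values of n below are my own arbitrary picks):

import math

# log2(n) / log10(n) is the same constant for every n, namely log2(10),
# which is why the base only contributes a constant factor.
for n in (8, 1000, 10 ** 6):
    print(n, round(math.log2(n) / math.log10(n), 6))   # always ~3.321928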
You don't need to worry about the base - if you're dealing with algorithmic complexity, it doesn't matter which base you're in, because the difference is just a constant factor.
Fundamentally, you just need to know that log n means that as n increases exponentially, the running time (or space used) increases linearly. For example, if n=10 takes 1 minute, then n=100 would take 2 minutes, and n=1000 would take 3 minutes - roughly. (It's usually in terms of upper bounds, with smaller factors ignored... but that's the general gist of it.)
n log n is just that log n multiplied by n - so the time or space taken increases "a bit faster than linearly", basically.
The base does not matter at all. In fact people tend to drop part of the operations, e.g. if one has O(n^4 + 2*n), this is often reduced to O(n^4). Only the most relevant power needs to be considered when comparing algorithms.
For the case of comparing two closely related algorithms, say O(n^4 + 2*n) against O(n^4 + 3*n), one needs to include the linear dependency in order to conserve the relevant information.
Consider a divide and conquer approach based on bisection: your base is 2, so you may talk about ld(n). On the other hand, you use the O-notation to compare different algorithms, and there all bases can be treated as the same. That said, the difference between ld, ln, and log10 is just a constant factor.
Logarithms and Exponents are inverse operations.
if
x^n = y
then
log_x(y) = n
For example,
10^3 = 1000
log_10(1000) = 3
Divide and conquer algorithms work by dividing the problem into parts that are then solved as independent problems. There can also be a combination step that combines the parts. Most divide and conquer algorithms are base 2, which means they cut the problem in half each time. For example, binary search works like searching a phone book for a name. You flip to the middle and ask: is the name I'm looking for in the first half or the last half (before or after what you flipped to)? Then repeat. Every time you do this you divide the problem's size by 2. Therefore, it's base 2, not base 10.
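Here is a small sketch of that halving process with a step counter added, so the base-2 behavior is visible (the code is my own, purely illustrative):

def binary_search(sorted_list, target):
    """Return (index or None, number of halving steps performed)."""
    lo, hi = 0, len(sorted_list) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

data = list(range(1024))        # n = 2^10 elements
print(binary_search(data, 0))   # (0, 10): found after log2(1024) = 10 halvings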
Order notation is primarily only concerned with the "order" of the runtime because that is what is most important when trying to determine if a problem will be tractable (solvable in a reasonable amount of time).
Examples of different orders would be:
O(1)
O(n)
O(n * log n)
O(n^2 * log n)
O(n^3.5)
O(n!)
etc.
The O here stands for "big O notation", which basically provides an upper bound on the growth rate of the function. Because we only care about the growth of the function for large inputs, we typically ignore lower-order terms. For example,
n^3 + 2 n^2 + 100 n
would be
O(n^3)
because n^3 is the largest-order term and will dominate the growth of the function for large values of n.
When you see O(n * log n), people are just abbreviating... if you understand the algorithm, it is typically log base 2, because most algorithms cut the problem in half. However, it could be log base 3 if, for example, the algorithm cuts the problem into thirds.
Note:
In either case, if you were to graph the growth function, it would appear as a logarithmic curve. And, of course, log3 n is smaller than log2 n, though only by a constant factor.
The reason you do not see O(log10 n) or O(log3 n) etc. is that it just isn't that common for an algorithm to work better this way. In our phone book example, you could split the pages into 3 separate thirds and compare at the 1/3 and 2/3 marks. But then you have made 2 comparisons and only ended up knowing which third the name is in. If instead you had split it in half each time, after 2 comparisons you would know which quarter it is in, which is more efficient.
In most of the programming languages I know, the function log() is base e = 2.718281....
In mathematical books sometimes it means base "ten" and sometimes base "e".
As other answers have pointed out, for big-O notation the base does not matter, because for every base x the complexity O(log_x(n)) is the same as O(ln(n)) (here log_x means "logarithm in base x" and ln() means "logarithm in base e").
Finally, in the analysis of many algorithms it's more convenient to consider that log() is, indeed, the "logarithm in base 2" (I've seen some texts taking this approach). This is obviously related to the binary representation of numbers in computers.

What is meant when it's said that the union-find data structure will be "linear in the real world?"

I am taking the algorithms course on Coursera, and there is a section where the author mentions the following:
the running time of weighted quick union with path compression is
going be linear in the real world and actually could be improved to
even a more interesting function called the Ackermann function, which
is even more slowly growing than lg. And another point about this
is it seems that this is so close to being linear that is time proportional to N instead of time proportional to N times the
slowly growing function in N. Is there a simple algorithm that is
linear? And people, looked for a long time for that, and actually it
works out to be the case that we can prove that there is no such
algorithm. (emphasis added)
(You can find the entire transcript here)
In all other sources including Wikipedia "linear" is used when time increases proportionally with the input size, and in weighted quick-union with path compression this is certainly not the case.
What exactly is meant by "linear in the real world" here?
The runtime of m operations on a union-find data structure with path compression and union-by-rank is O(m α(m)), where α(m) is the inverse Ackermann function. This function is so slowly growing that you cannot even write down, in scientific notation, an input for which the output is 6. In other words, for any possible value of m that fits into the universe (or even one of size around 2^(number of atoms in the universe)), we have α(m) ≤ 5. Therefore, for any "reasonable" input the cost of m operations will be O(m · 6) = O(m), which is linear.
Of course, the runtime isn't linear because α(m) does indeed grow, just very, very slowly. However, it's usually fine to approximate the runtime as O(m) because there's no possible way you'd ever notice the runtime of the function deviating from a simple linear function.
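For concreteness, here is a minimal sketch of the kind of structure being discussed, with union by rank and path compression (my own code, not taken from the course):

class DisjointSet:
    """Union-find with union by rank and path compression.
    m operations cost O(m * alpha(m)) total, effectively linear in practice."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point x (and its ancestors) directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))   # True
print(ds.find(0) == ds.find(4))   # False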
Hope this helps!
Here are some chunks from the transcript:
And what was proved by Hopcroft, Ullman, and Tarjan was that if you have N objects, any sequence of M union and find operations will touch the array at most c (N + M lg* N) times. And now, lg* N is kind of a funny function....
And another point about this is it seems that this is so close to being linear, that is, time proportional to N instead of time proportional to N times the slowly growing function in N.
(end quote)
You are pointing out that the cost of an individual operation grows very slowly with the number of objects, but they are looking at how the total cost of a number of operations grows with the number of objects involved: N times a per-operation cost that grows only very slowly with N is still slightly more than linear in N.

Trying to understand Big-oh notation

Hi, I would really appreciate some help with Big-O notation. I have an exam on it tomorrow, and while I can define what "f(x) is O(g(x))" means, I can't say I thoroughly understand it.
The following question ALWAYS comes up on the exam and I really need to try and figure it out. The first part seems easy (I think): do you just pick a value for n, compute them all on a calculator and put them in order? This seems too easy though, so I'm not sure. I'm finding it very hard to find examples online.
From lowest to highest, what is the correct order of the complexities O(n^2), O(log2 n), O(1), O(2n), O(n!), O(n log2 n)?
What is the worst-case computational complexity of the binary search algorithm on an ordered list of length n = 2^k?
That guy should help you.
From lowest to highest, what is the correct order of the complexities O(n^2), O(log2 n), O(1), O(2n), O(n!), O(n log2 n)?
The order is the same as what you get by comparing their limits at infinity: look at lim(a/b) as n goes to infinity. If it is a finite nonzero constant, they are of the same order; if it is infinity or 0, one of them grows faster than the other.
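A worked instance of that rule, comparing n log2 n with n^2 (my own example):

import math

# The ratio (n * log2 n) / n^2 tends to 0 as n grows, so n log2 n is of a
# strictly lower order than n^2 and comes earlier in the ordering.
for n in (10, 1000, 10 ** 6):
    print(n, (n * math.log2(n)) / n ** 2)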
What is the worst-case computational complexity of the binary search algorithm on an ordered list of length n = 2^k?
Find binary search best/worst Big-O.
Find linked list access by index best/worst Big-O.
Make conclusions.
Hey there. Big-O notation is tough to figure out if you don't really understand what the "n" means. You've already seen people talking about how O(n) == O(2n), so I'll try to explain exactly why that is.
When we describe an algorithm as having "order-n space complexity", we mean that the size of the storage space used by the algorithm gets larger with a linear relationship to the size of the problem that it's working on (referred to as n.) If we have an algorithm that, say, sorted an array, and in order to do that sort operation the largest thing we did in memory was to create an exact copy of that array, we'd say that had "order-n space complexity" because as the size of the array (call it n elements) got larger, the algorithm would take up more space in order to match the input of the array. Hence, the algorithm uses "O(n)" space in memory.
Why does O(2n) = O(n)? Because when we talk in terms of O(n), we're only concerned with the behavior of the algorithm as n gets as large as it could possibly be. If n was to become infinite, the O(2n) algorithm would take up two times infinity spaces of memory, and the O(n) algorithm would take up one times infinity spaces of memory. Since two times infinity is just infinity, both algorithms are considered to take up a similar-enough amount of room to be both called O(n) algorithms.
You're probably thinking to yourself "An algorithm that takes up twice as much space as another algorithm is still relatively inefficient. Why are they referred to using the same notation when one is much more efficient?" Because the gain in efficiency for arbitrarily large n when going from O(2n) to O(n) is absolutely dwarfed by the gain in efficiency for arbitrarily large n when going from O(n^2) to O(500n). When n is 10, n^2 is 10 times 10 or 100, and 500n is 500 times 10, or 5000. But we're interested in n as n becomes as large as possible. They cross over and become equal at n = 500, but once more, we're not even interested in an n as small as 500. When n is 1000, n^2 is one MILLION while 500n is a "mere" half million. When n is one million, n^2 is one thousand billion - 1,000,000,000,000 - while 500n looks on in awe with the simplicity of its five-hundred-million - 500,000,000 - points of complexity. And once more, we can keep making n larger, because when using O(n) logic, we're only concerned with the largest possible n.
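A quick sketch of the same arithmetic, if you want to check the crossover yourself:

# Each line prints n, n^2, and 500n; n^2 catches 500n at n = 500 and
# dominates from then on.
for n in (10, 500, 1000, 1_000_000):
    print(n, n ** 2, 500 * n)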
(You may argue that when n reaches infinity, n^2 is infinity times infinity, while 500n is five hundred times infinity, and didn't you just say that anything times infinity is infinity? That doesn't actually work for infinity times infinity. I think. It just doesn't. Can a mathematician back me up on this?)
This gives us the weirdly counterintuitive result where O(Seventy-five hundred billion spillion kajillion n) is considered an improvement on O(n * log n). Due to the fact that we're working with arbitrarily large "n", all that matters is how many times and where n appears in the O(). The rules of thumb mentioned in Julia Hayward's post will help you out, but here's some additional information to give you a hand.
One, because n gets as big as possible, O(n^2+61n+1682) = O(n^2), because the n^2 contributes so much more than the 61n as n gets arbitrarily large that the 61n is simply ignored, and the 61n term already dominates the 1682 term. If you see addition inside a O(), only concern yourself with the n with the highest degree.
Two, O(log10 n) = O(log_b n) for any base b, because log10(x) = log_b(x) / log_b(10). Hence, O(log10 n) = O(log_b(n) * 1/log_b(10)). That 1/log_b(10) factor is a constant, which we've already shown drops out of O(n) notation.
Very loosely, you could imagine picking extremely large values of n, and calculating them. Might exceed your calculator's range for large factorials, though.
If the definition isn't clear, a more intuitive description is that "higher order" means "grows faster than, as n grows". Some rules of thumb:
O(n^a) is a higher order than O(n^b) if a > b.
log(n) grows more slowly than any positive power of n
exp(n) grows more quickly than any power of n
n! grows more quickly than exp(kn)
Oh, and as far as complexity goes, ignore the constant multipliers.
That's enough to deduce that the correct order is O(1), O(log n), O(2n) = O(n), O(n log n), O(n^2), O(n!)
For big-O complexities, the rule is that if two things vary only by constant factors, then they are the same. If one grows faster than another ignoring constant factors, then it is bigger.
So O(2n) and O(n) are the same -- they only vary by a constant factor (2). One way to think about it is to just drop the constants, since they don't impact the complexity.
The other problem with picking n and using a calculator is that it will give you the wrong answer for certain n. Big O is a measure of how fast something grows as n increases, but at any given n the complexities might not be in the right order. For instance, at n=2, n^2 is 4 and n! is 2, but n! grows quite a bit faster than n^2.
It's important to get that right, because for running times with multiple terms, you can drop the lesser terms -- i.e., if f(n) is 3n^2 + 2n + 5, you can drop the 5 (a constant), drop the 2n (3n^2 grows faster), then drop the 3 (a constant factor) to get O(n^2)... but if you don't know that n^2 is bigger, you won't get the right answer.
In practice, you can just know that n is linear, log(n) grows more slowly than linear, n^a > n^b if a>b, 2^n is faster than any n^a, and n! is even faster than that. (Hint: try to avoid algorithms that have n in the exponent, and especially avoid ones that are n!.)
For the second part of your question, what happens with a binary search in the worst case? At each step, you cut the space in half until eventually you find your item (or run out of places to look). That is log2(2^k) = k steps. A search where you just walk through the list to find your item would take n steps. And we know from the first part that O(log(n)) < O(n), which is why binary search is faster than just a linear search.
Good luck with the exam!
In easy to understand terms the Big-O notation defines how quickly a particular function grows. Although it has its roots in pure mathematics its most popular application is the analysis of algorithms which can be analyzed on the basis of input size to determine the approximate number of operations that must be performed.
The benefit of using the notation is that you can categorize function growth rates by their complexity. Many different functions (an infinite number really) could all be expressed with the same complexity using this notation. For example, n+5, 2*n, and 4*n + 1/n all have O(n) complexity because the function g(n)=n most simply represents how these functions grow.
I put an emphasis on most simply because the focus of the notation is on the dominating term of the function. For example, O(2*n + 5) = O(2*n) = O(n) because n is the dominating term in the growth. This is because the notation assumes that n goes to infinity which causes the remaining terms to play less of a role in the growth rate. And, by convention, any constants or multiplicatives are omitted.
Read Big O notation and Time complexity for a more in-depth overview.

Resources