Big Oh time complexity for alpha(n)

What does O(alpha(n)) mean? I recently stumbled upon a version of 2048 played in terms of run times, and one of the blocks had that. Thanks!

It appears to be a reference to the inverse Ackermann function, written as α(n).
From Wikipedia:
This inverse appears in the time complexity of some algorithms, such as the disjoint-set data structure and Chazelle's algorithm for minimum spanning trees.

Related

Compare the complexity of two algorithms given steps

Assume you had a data set of size n and two algorithms that processed that data set in the same way. Algorithm A took 10 steps to process each item in the data set. Algorithm B processed each item in 100 steps. What would the complexity of these two algorithms be?
I have drawn from the question that algorithm A completes the processing of each item with 1/10th the complexity of algorithm B, and using the graph provided in the accepted answer to the question What is a plain English explanation of "Big O" notation?, I am concluding that algorithm B has a complexity of O(n^2) and algorithm A a complexity of O(n), but I am struggling to draw conclusions beyond that without the implementation.
You need more than one data point before you can start making any conclusions about time complexity. The difference of 10 steps and 100 steps between Algorithm A and Algorithm B could be for many different reasons:
Additive constant difference: Algorithm A is always 90 steps faster than Algorithm B, no matter the input. In this case, both algorithms would have the same time complexity.
Scalar multiplicative difference: Algorithm A is always 10 times faster than Algorithm B, no matter the input. In this case, both algorithms would also have the same time complexity.
The case that you brought up, where B is O(n^2) and A is O(n).
Many, many other possibilities.
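As a quick sketch of the scalar-multiplicative case (the function names and step counts below are made up for illustration), both loops do a constant amount of work per item, so both are O(n) despite the factor-of-10 difference:

    def process_a(items):
        # Hypothetical algorithm A: 10 "steps" per item.
        total = 0
        for item in items:
            for _ in range(10):
                total += item
        return total

    def process_b(items):
        # Hypothetical algorithm B: 100 "steps" per item.
        total = 0
        for item in items:
            for _ in range(100):
                total += item
        return total

    # Doubling len(items) roughly doubles the running time of either
    # function, which is the signature of O(n) growth.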

Pseudo-polynomial analysis of algorithms

While studying for an exam in Algorithms and Data Structures, I stumbled upon a question: what does it mean if an algorithm has pseudo-polynomial time efficiency (analysis)?
I did a lot of searching but came up empty-handed.
It means that the running time is polynomial in the numeric value of the input, but that value can be exponential in the length of the input (its number of digits or bits).
For example, take the subset sum problem: we have a set S of n integers and we want to find a subset that sums to t.
Checking the sum of every subset takes O(2^n) time, which is plainly exponential. But there is also a dynamic-programming solution that runs in O(nt) time. That looks polynomial, until you notice that t is a numeric value: writing t down takes only about log t bits, so measured against the length of the input, O(nt) is exponential. That is what makes the algorithm pseudo-polynomial.
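Here is a minimal sketch of that dynamic-programming solution, assuming the values and the target are non-negative integers:

    def subset_sum(values, t):
        # Pseudo-polynomial subset sum: O(nt) time for n values and target t.
        reachable = [True] + [False] * t  # reachable[s]: can some subset sum to s?
        for v in values:
            # Sweep downwards so each value is used at most once.
            for s in range(t, v - 1, -1):
                if reachable[s - v]:
                    reachable[s] = True
        return reachable[t]

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True, since 4 + 5 = 9

The table has t + 1 entries and each of the n values sweeps it once, which is exactly where the O(nt) bound comes from.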
I hope this introduction helps in understanding Wikipedia's article about it: http://en.wikipedia.org/wiki/Pseudo-polynomial_time :)

How do you work out how long a program will take to run by inspection?

I went to a lecture on algorithms the other day, and we were learning about dynamic programming. The lecturer mentioned that solving Fibonacci recursively takes close to exponential time, and that doing it in a linear fashion takes quadratic time. How do you work this kind of stuff out just by inspection?
If you use only the recursive implementation (without memoization), then to determine f(n) you need to calculate f(n-1) and f(n-2); similarly, f(n-1) branches into f(n-2) and f(n-3), and the same happens for f(n-2). When you have that kind of branching with no reuse of results, count the nodes of the recursion tree: their number grows exponentially in n (for Fibonacci it is roughly 1.618^n, which is O(2^n)), and that implies exponential running time.
But if you compute it in a linear fashion (as you mentioned), it is linear, O(n), not even quadratic!
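A sketch of the two versions side by side:

    def fib_recursive(n):
        # Branches into two subproblems with no reuse: exponential time.
        if n < 2:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_linear(n):
        # Each value is computed once from the previous two: O(n) time.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib_recursive(20), fib_linear(20))  # 6765 6765

Try fib_recursive(35) against fib_linear(35) and the difference in growth becomes hard to miss.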
But you have to be careful with divide and conquer algorithms like merge sort: although it also branches in two, the depth of the tree is only log n, so the bottom level contains 2^(log n) nodes, which is equal to n rather than anything exponential. Branching alone does not imply exponential time; the depth of the tree matters, as the sketch below shows.
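A quick sketch that counts the recursion-tree nodes of merge sort directly:

    def count_calls(n):
        # Number of nodes in merge sort's recursion tree for input size n.
        if n <= 1:
            return 1
        return 1 + count_calls(n // 2) + count_calls(n - n // 2)

    print(count_calls(1024))  # 2047: about 2n nodes, nowhere near 2^n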

Why do divide and conquer algorithms often run faster than brute force?

Why do divide and conquer algorithms often run faster than brute force? For example, finding the closest pair of points. I know you can show me the mathematical proof. But intuitively, why does this happen? Magic?
Theoretically, is it true that "divide and conquer is always better than brute force"? If not, is there any counterexample?
For your first question, the intuition behind divide-and-conquer is that in many problems the amount of work that has to be done is based on some combinatorial property of the input that scales more than linearly.
For example, in the closest pair of points problem, the runtime of the brute-force answer is determined by the fact that you have to look at all O(n^2) possible pairs of points.
If you take something that grows quadratically and cut it into two pieces, each of which is half the size of the original, it takes one quarter of the initial time to solve the problem in each half, so solving the problem in both halves takes roughly one half the time required for the brute force solution. Cutting it into four pieces would take one fourth of the time, cutting it into eight pieces one eighth the time, and so on.
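A quick sketch of that arithmetic for, say, n = 1000 points:

    n = 1000
    pairs_whole = n * (n - 1) // 2                       # pairs in the full input
    pairs_halves = 2 * ((n // 2) * (n // 2 - 1) // 2)    # pairs within the two halves
    print(pairs_whole, pairs_halves)  # 499500 249500: about half the pairs remain

The missing pairs are exactly the cross-half ones, and the trick in algorithms like closest pair is handling those without examining them all.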
The recursive version ends up being faster in this case because at each step, we avoid doing a lot of work from dealing with pairs of elements by ensuring that there aren't too many pairs that we actually need to check. Most algorithms that have a divide and conquer solution end up being faster for a similar reason.
For your second question, no, divide and conquer algorithms are not necessarily faster than brute-force algorithms. Consider the problem of finding the maximum value in an array. The brute-force algorithm takes O(n) time and uses O(1) space as it does a linear scan over the data. The divide-and-conquer algorithm is given here:
If the array has just one element, that's the maximum.
Otherwise:
Cut the array in half.
Find the maximum in each half.
Compute the maximum of these two values.
This takes time O(n) as well, but uses O(log n) memory for the stack space. It's actually worse than the simple linear algorithm.
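A sketch of that divide-and-conquer maximum next to the linear scan, for comparison:

    def max_scan(arr):
        # Brute force: one linear pass, O(n) time, O(1) extra space.
        best = arr[0]
        for x in arr[1:]:
            if x > best:
                best = x
        return best

    def max_dc(arr, lo=0, hi=None):
        # Divide and conquer: also O(n) time, but O(log n) stack space.
        if hi is None:
            hi = len(arr)
        if hi - lo == 1:
            return arr[lo]              # one element: that's the maximum
        mid = (lo + hi) // 2
        return max(max_dc(arr, lo, mid), max_dc(arr, mid, hi))

    print(max_scan([3, 1, 4, 1, 5]), max_dc([3, 1, 4, 1, 5]))  # 5 5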
As another example, the maximum single-sell profit problem has a divide-and-conquer solution, but the optimized dynamic programming solution is faster in both time and memory.
Hope this helps!
I recommend you read through Chapter 5 of Algorithm Design; it explains divide-and-conquer very well.
Intuitively, if you can divide a problem into two sub-problems with the same pattern as the original one, and the cost of merging the two sub-results into the final result is somehow small, then it's faster than solving the original complete problem by brute force.
As said in Algorithm Design, you actually cannot gain too much from divide-and-conquer in terms of time; generally you can only reduce the time complexity from a higher polynomial to a lower polynomial (e.g. from O(n^3) to O(n^2)), but hardly ever from exponential to polynomial (e.g. from O(2^n) to O(n^3)).
I think the most you can gain from divide-and-conquer is the mindset for problem solving. It's always a good attempt to break the original big problem down to smaller and easier sub-problems. Even if you don't get a better running time, it still helps you think through the problem.

How to test an algorithm for perfect optimization?

Is there any way to test an algorithm for perfect optimization?
There is no easy way to prove that any given algorithm is asymptotically optimal.
Proving optimality, if it happens at all, sometimes comes years or even decades after the algorithm was written. A classic example is the Union-Find/disjoint-set data structure.
Disjoint-set forests are a data structure where each set is represented by a tree data structure, in which each node holds a reference to its parent node. They were first described by Bernard A. Galler and Michael J. Fischer in 1964, although their precise analysis took years.
[...] These two techniques complement each other; applied together, the amortized time per operation is only O(α(n)), where α(n) is the inverse of the function f(n) = A(n,n), and A is the extremely quickly-growing Ackermann function.
[...] In fact, this is asymptotically optimal: Fredman and Saks showed in 1989 that Ω(α(n)) words must be accessed by any disjoint-set data structure per operation on average.
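For reference, a minimal disjoint-set sketch using the two techniques the quote refers to, union by rank and path compression:

    class DisjointSet:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            # Path compression: point x directly at its root.
            if self.parent[x] != x:
                self.parent[x] = self.find(self.parent[x])
            return self.parent[x]

        def union(self, x, y):
            # Union by rank: attach the shallower tree under the deeper one.
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1

    # With both optimizations combined, the amortized cost per operation
    # is O(α(n)), the bound discussed above.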
For some algorithms optimality can be proven after very careful analysis, but generally speaking, there's no easy way to tell if an algorithm is optimal once it's written. In fact, it's not always easy to prove if the algorithm is even correct.
See also
Wikipedia/Matrix multiplication
The naive algorithm is O(N^3), Strassen's is roughly O(N^2.807), Coppersmith-Winograd is O(N^2.376), and we still don't know what is optimal.
Wikipedia/Asymptotically optimal
[...] it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(nα(n)) algorithm for finding minimum spanning trees. Whether this algorithm is asymptotically optimal is unknown, and would be likely to be hailed as a significant result if it were resolved either way.
Practical considerations
Note that sometimes asymptotically "worse" algorithms are better in practice due to many factors (e.g. ease of implementation, actually better performance for the given input parameter range, etc).
A typical example is quicksort with a simple pivot selection that may exhibit quadratic worst-case performance, but is still favored in many scenarios over a more complicated variant and/or other asymptotically optimal sorting algorithms.
For those of us mortals who merely want to know if an algorithm:
reasonably works as expected;
is faster than others;
there is an easy step called 'benchmarking'.
Pick the best contenders in the area and compare them with your algorithm. If your algorithm wins, then it better matches your needs (the ones defined by your benchmarks).
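A minimal benchmarking sketch using Python's standard timeit module; the two algorithms compared here are just placeholders:

    import timeit

    def contender(data):
        return sorted(data)       # placeholder for an established algorithm

    def my_algorithm(data):
        return sorted(data)       # placeholder for your algorithm

    data = list(range(10000, 0, -1))
    for name, fn in [("contender", contender), ("my_algorithm", my_algorithm)]:
        t = timeit.timeit(lambda: fn(data), number=100)
        print(name, round(t, 3), "seconds for 100 runs")

Benchmarks like this capture the constant factors and input ranges that asymptotic analysis deliberately ignores, so run them on inputs that resemble your real workload.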
