I was talking with a student the other day about the common complexity classes of algorithms, like O(n), O(n^k), O(n lg n), O(2^n), O(n!), etc. I was trying to come up with an example of a problem whose best known solutions are super-exponential, such as O(2^(2^n)), but which is still decidable (i.e. not the halting problem!). The only example I know of is satisfiability of Presburger arithmetic, which I don't think any intro CS students would really understand or be able to relate to.
My question is whether there is a well-known problem whose best known solution has superexponential runtime; at least ω(n!) or ω(n^n). I would really hope that there is some "reasonable" problem meeting this description, but I'm not aware of any.
Maximum Parsimony is the problem of finding an evolutionary tree connecting n DNA sequences (representing species) that requires the fewest single-nucleotide mutations. The n given sequences are constrained to appear at the leaves; the tree topology and the sequences at internal nodes are what we get to choose.
In more CS terms: We are given a bunch of length-k strings that must appear at the leaves of some tree, and we have to choose a tree, plus a length-k string for each internal node in the tree, so as to minimise the sum of Hamming distances across all edges.
When a fixed tree is also given, the optimal assignment of sequences to internal nodes can be determined very efficiently using the Fitch algorithm. But in the usual case a tree is not given (i.e. we are asked to find the optimal tree), and this makes the problem NP-hard, so in the worst case we essentially have to consider every possible tree. Even though an evolutionary tree has a root (representing the hypothetical ancestor), we only need to consider distinct unrooted trees, since the minimum number of mutations required is not affected by the position of the root. For n species there are 3 * 5 * 7 * ... * (2n-5) leaf-labelled unrooted binary trees. (There is just one such tree with 3 species, which has a single internal vertex and 3 edges; the 4th species can be inserted at any of the 3 edges to produce a distinct 5-edge tree; the 5th species can be inserted at any of these 5 edges, and so on -- this process generates all trees exactly once.) This is sometimes written (2n-5)!!, with !! meaning "double factorial".
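To get a feel for how fast (2n-5)!! grows, here is a tiny sketch (my own, purely illustrative) that evaluates the product directly:

```python
# Number of distinct leaf-labelled unrooted binary trees on n species:
# 3 * 5 * 7 * ... * (2n-5), i.e. the double factorial (2n-5)!!.
def num_unrooted_trees(n):
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in [4, 10, 20, 50]:
    print(n, num_unrooted_trees(n))
# 10 species already give 2,027,025 trees; 50 species give roughly 2.8e74.
```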
In practice, branch and bound is used, and on most real datasets this manages to avoid evaluating most trees. But highly "non-treelike" random data requires all, or almost all, of the (2n-5)!! trees to be examined -- since in this case many trees have nearly equal minimum mutation counts.
Listing all permutations of a string of length n takes n! time; brute-force search for a Hamiltonian cycle tries n! vertex orderings; likewise brute-force minimum graph coloring, and so on.
Edit: the Ackermann function grows even faster. In fact, it grows faster than any primitive recursive function.
A(x,y) = y+1 (if x = 0)
A(x,y) = A(x-1,1) (if y=0)
A(x,y) = A(x-1, A(x,y-1)) otherwise.
from wiki:
A(4,3) = 2^(2^65536) - 3, a number far too large to write out in full.
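A direct transcription of the recurrence (my own code; even with memoization it is only usable for tiny inputs):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)   # the recursion gets deep quickly

@lru_cache(maxsize=None)
def A(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))

print(A(2, 3))   # 9
print(A(3, 3))   # 61
# A(4, 2) already has 19,729 decimal digits; A(4, 3) is hopeless to print.
```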
Do algorithms to compute real numbers to a certain precision count? The formula for the area of the Mandelbrot set converges extremely slowly; about 10^118 terms are needed for two digits, and 10^1181 terms for three.
This is not a practical everyday problem, but it's a way to construct relatively straightforward problems of increasing complexity.
The Kolmogorov complexity K(x) is the size of the smallest program that outputs the string x on a pre-determined universal computer U. It's easy to show that most strings cannot be compressed at all (since there are more strings of length n than programs of length less than n).
If we give U a maximum running time (say some polynomial function P), we get a time-bounded Kolmogorov complexity. The same counting argument holds: there are some strings that are incompressible under this time-bounded Kolmogorov complexity. Let's call the first such string (of some length n) x_P.
Since the time-bounded Kolmogorov complexity is computable, we can test all strings, and find x_P.
Finding x_P can't be done in polynomial time, or we could use this algorithm to compress it, so finding it must be a super-polynomial problem. We do know we can find it in exp(P) time, though. (Jumping over some technical details here.)
So now we have a time bound E = exp(P). We can repeat the procedure to find x_E, and so on.
This approach gives us a decidable super-F problem for every time-constructible function F: find the first string of length n (some large constant) that is incompressible under time-bound F.
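As a toy illustration of that search (my own construction, not part of the original argument): take a tiny Brainfuck-style interpreter as the "universal computer" U, a step budget as the time bound, and enumerate. Strictly speaking the counting argument needs programs and strings over the same alphabet, so this only shows the shape of the algorithm, not the bound itself.

```python
from itertools import product

CMDS = "+-<>[]."   # toy instruction set for the "universal computer" U

def run(program, step_limit):
    """Run `program` for at most `step_limit` steps; return its output, or None."""
    stack, match = [], {}
    for i, c in enumerate(program):          # pre-match brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None                  # unbalanced: treat as "no output"
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        return None
    tape, ptr, pc, out, steps = [0] * 64, 0, 0, bytearray(), 0
    while pc < len(program) and steps < step_limit:
        c = program[pc]
        if c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr = (ptr + 1) % len(tape)
        elif c == "<": ptr = (ptr - 1) % len(tape)
        elif c == ".": out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0: pc = match[pc]
        elif c == "]" and tape[ptr] != 0: pc = match[pc]
        pc += 1
        steps += 1
    return bytes(out)

def first_incompressible(n, step_limit):
    """First length-n string over {0, 1} not output by any program shorter than n."""
    produced = set()
    for length in range(n):                          # every program of length < n
        for prog in product(CMDS, repeat=length):
            out = run("".join(prog), step_limit)
            if out is not None and len(out) == n:
                produced.add(out)
    for candidate in product((0, 1), repeat=n):      # strings in lexicographic order
        s = bytes(candidate)
        if s not in produced:
            return s
    return None

print(first_incompressible(3, step_limit=100))       # b'\x00\x00\x00' for this toy U
```

For a real time bound P you would replace the step budget and the machine, but the structure is the whole algorithm: run every short program within the budget, collect the outputs, then return the first string of length n that was not produced.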
I'm reading CLRS Algorithms Edition 3 and I have two problems for my homework (I'm not asking for answers, I promise!). They are essentially the same question, just applied to Kruskal or to Prim. They are as follows:
Suppose that all edge weights in a graph are integers in the range from 1 to |V|. How fast can you make [Prim/Kruskal]'s algorithm run? What if the edge weights are integers in the range from 1 to W for some constant W?
I can see the logic behind the answers I'm thinking of and what I'm finding online (i.e. sort the edge weights using a linear sort, change the data structure being used, etc.), so I don't need help answering it. But I'm wondering why there is a difference between the answer if the range is 1 to |V| and 1 to W. Why ask the same question twice? If it's some constant W, it could literally be anything. But honestly, so could |V| - we could have a crazy large graph, or a very small one. I'm not sure how the two questions posed in this problem are different, and why I need two separate approaches for both of them.
There's a difference in complexity between an algorithm that runs in O(V) time and one that runs in O(W) time for constant W. Sure, V could be anything, as could W, but that's not really the point: one is linear, the other is O(1). The question is then for which algorithms having a restricted range of edge weights could impact complexity (based, as you suggest, on edge-weight sort time and choice of data structure), and what the actual new optimal complexity would be for linearly bounded edge weights vs. edge weights bounded by a constant W.
Having bounded edge-weights could open up new possibilities for sorting algorithms for Kruskal's, and might change the data structure you'd want to use to implement the queue for Prim's along with the most optimal way you could implement extract-min and update-key operations for that queue. The extent to which edge-weights are bounded can impact whether a particular change in data structure or implementation is even beneficial to make in terms of final complexity.
For example, knowing that the n elements of a list are bounded in value by a constant W means that a switch to radix sort would improve the asymptotic complexity of sorting them; but if I instead only knew that they were bounded in value by 2^n, there would be no advantage in changing to radix sort over the traditional comparison-based methods and their O(n log n) sorting complexity.
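For instance, here is a hedged sketch (my own names, not from CLRS) of the kind of change meant for Kruskal's: with integer weights in 1..W, a counting sort replaces the O(E log E) comparison sort.

```python
# Counting sort of edges by weight, assuming integer weights in 1..W.
# Runs in O(E + W): O(E) when W is a constant, O(E + V) when W = |V|.
def counting_sort_edges(edges, W):
    buckets = [[] for _ in range(W + 1)]      # one bucket per possible weight
    for u, v, w in edges:
        buckets[w].append((u, v, w))
    return [e for bucket in buckets for e in bucket]

edges = [(0, 1, 3), (1, 2, 1), (0, 2, 2), (2, 3, 1)]
print(counting_sort_edges(edges, 3))
# [(1, 2, 1), (2, 3, 1), (0, 2, 2), (0, 1, 3)]
```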
I have a graph with a guarantee that it can be divided into two equal-sized partitions (one side may be 1 larger than the other), with no edges across this partition. I initially thought this was NP-hard, but I suspect it might not be. Is there any (efficient) way of solving this?
It's possible to solve your problem in time O(n^2) by combining together two well-known algorithms.
When I first saw your problem, I initially thought it was going to relate to something like finding a maximum or minimum cut in a graph. However, since you're specifically looking for a way of splitting the nodes into two groups where there are no edges at all running between those groups, I think what you're looking for is much closer to finding connected components of a graph. After all, if you break the graph apart into connected components, there will be no edges running between those components. Therefore, the question boils down to the following:
Find all the connected components of the graph, making a note of how many nodes are in each component.
Partition the connected components into two groups of roughly equal size - specifically, the size of the group on one side should be at most one more than the size of the group on the other.
Step (1) is something that you can do using a breadth-first or depth-first search to identify all the connected components. That will take you time O(m + n), where m is the number of edges and n is the number of nodes.
Step (2), initially, seems like it might be pretty hard. It's reminiscent of the partition problem, which is known to be NP-hard. The partition problem works like this: you're given as input a list of numbers, and you want to determine whether there's a split of those numbers into two groups whose totals are equal to one another. (It's possible to adapt this problem so that you can tolerate a split that's off by plus or minus one without changing the complexity). That problem happens to be NP-complete, which suggests that your problem might be hard.
However, there's a small nuance that actually makes the apparent NP-hardness of the partition problem not an issue. The partition problem is NP-hard in the case where the numbers you're given are written out in binary. On the other hand, if the numbers are written out in unary, then the partition problem has a polynomial-time solution. More specifically, there's an algorithm for the partition problem that runs in time O(kU), where k is the number of numbers and U is the sum of all those numbers. In the case of the problem you're describing, you know that the sum of the sizes of the connected components in your graph must be n, the number of nodes in the graph, and you know that the number of connected components is also upper-bounded by n. This means that the runtime of O(kU), plugging in k = O(n) and U = O(n), works out to O(n^2), firmly something that can be done in polynomial time.
(Another way to see this - there's a pseudopolynomial time algorithm for the partition problem, but since in your case the maximum possible sum is bounded by an actual polynomial in the size of the input, the overall runtime is a polynomial.)
The algorithm I'm alluding to above is a standard dynamic programming exercise. You pick some ordering of the numbers - not necessarily in sorted order - and then fill in a 2D table where each entry corresponds to an answer to the question "is there a subset of the first i numbers that adds up to exactly j?" If you're not familiar with this algorithm, I'll leave it up to you to work out the details, as it's a really beautiful problem to solve that has a fairly simple and elegant solution.
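If it helps, here is a minimal sketch of that DP (mine, using a set of reachable sums rather than an explicit 2D table) applied to the component sizes:

```python
# sizes[i] = number of nodes in the i-th connected component.
# We ask whether some subset of components sums to floor(n/2) or ceil(n/2).
def balanced_split_exists(sizes):
    n = sum(sizes)
    reachable = {0}                            # subset sums achievable so far
    for s in sizes:
        reachable |= {r + s for r in reachable}
    return n // 2 in reachable or (n + 1) // 2 in reachable

print(balanced_split_exists([3, 2, 2]))   # True:  {3} vs {2, 2}
print(balanced_split_exists([5, 1, 1]))   # False: best split is 5 vs 2
```

With O(n) components and all sums bounded by n, the set never holds more than n+1 values, which matches the O(n^2) bound above.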
Overall, this algorithm will run in time O(n^2), since you'll do O(m + n) = O(n^2) work to find connected components, followed by time O(n^2) to run the partition problem DP to determine whether the split exists.
Hope this helps!
I would like to quote from Wikipedia
In mathematics, the minimum k-cut is a combinatorial optimization problem that requires finding a set of edges whose removal would partition the graph into k connected components.
It is said to be the minimum cut if this set of edges is of minimum total weight.
For k = 2, it would mean finding the set of edges whose removal would disconnect the graph into 2 connected components.
However, the same Wikipedia article says that:
For a fixed k, the problem is polynomial time solvable in O(|V|^(k^2))
My question is: does this mean that minimum 2-cut is a problem that belongs to the complexity class P?
The min-cut problem is solvable in polynomial time, so yes, it is true that it belongs to the complexity class P. Another article related to this particular problem is the Max-flow min-cut theorem.
First of all, the time complexity of an algorithm should be evaluated by expressing the number of steps the algorithm requires to finish as a function of the length of the input (see Time complexity). More or less formally: if you vary the length of the input, how does the number of steps required by the algorithm to finish vary?
Second of all, the time complexity of an algorithm is not exactly the same thing as to what complexity class does the problem the algorithm solves belong to. For one problem there can be multiple algorithms to solve it. The primality test problem (i.e. testing if a number is a prime or not) is in P, but some (most) of the algorithms used in practice are actually not polynomial.
Third of all, for most algorithms you'll find on the Internet, the time complexity is not evaluated by definition (i.e. not as a function of the length of the input, at least not expressed directly as such). Let's take the good old naive primality test algorithm (the one in which you take n as input and check for divisibility by 2, 3, ..., n-1). How many steps does this algorithm take? One way to put it is O(n) steps. This is correct. So is this algorithm polynomial? Well, it is linear in n, so it is polynomial in n. But, if you take a look at what time complexity means, the algorithm is actually exponential. First, what is the length of the input to your problem? Well, if you provide the input n as an array of bits (the usual in practice), then the length of the input is, roughly speaking, L = log n. Your algorithm thus takes O(n) = O(2^log n) = O(2^L) steps, so it is exponential in L. So the naive primality test is at the same time linear in n, but exponential in the length of the input L. Both are correct. By the way, the AKS primality test algorithm is polynomial in the size of the input (thus, the primality test problem is in P).
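To make that concrete, here is the naive test written out (a minimal sketch of my own):

```python
# Trial division: O(n) arithmetic steps in the value n, which is O(2^L)
# steps in the bit length L of the input.
def is_prime_naive(n):
    if n < 2:
        return False
    for d in range(2, n):          # up to n - 2 candidate divisors
        if n % d == 0:
            return False
    return True

n = 104729
print(is_prime_naive(n), "bit length of the input:", n.bit_length())
```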
Fourth of all, what is P in the first place? Well, it is a class of problems that contains all decision problems that can be solved in polynomial time. What is a decision problem? A problem that can be answered with yes or no. Check these two Wikipedia pages for more details: P (complexity) and decision problems.
Coming back to your question, the answer is no (but pretty close to yes :p). The minimum 2-cut problem is in P if formulated as a decision problem (your formulation requires an answer that is not just a yes-or-no). At the same time, the algorithm that solves the problem in O(|V|^4) steps is a polynomial algorithm in the size of the input. Why? Well, the input to the problem is the graph (i.e. vertices, edges and weights); to keep it simple let's assume we use an adjacency/weights matrix (i.e. the length of the input is at least quadratic in |V|). So solving the problem in O(|V|^4) steps means polynomial in the size of the input. The algorithm that accomplishes this is a proof that the minimum 2-cut problem (if formulated as a decision problem) is in P.
A class related to P is FP and your problem (as you formulated it) belongs to this class.
There is a question on an assignment that was due today, for which solutions have been released, and I don't understand the correct answer. The question deals with the best-case performance of disjoint sets in the form of disjoint-set forests that use the weighted union algorithm to improve performance (the smaller of the trees has its root connected as a child to the root of the larger of the two trees), but without the path compression algorithm.
The question is whether the best-case performance of doing (n-1) Union operations on n singleton nodes and m >= n Find operations, in any order, is Omega(m log n), which the solution confirms is correct like this:
There is a sequence S of n-1 Unions followed by m >= n Finds that takes Omega(m log n) time. The sequence S starts with a sequence of n-1 Unions that builds a tree with depth Omega(log n). Then it has m >= n Finds, each one for the deepest leaf of that tree, so each one takes Omega(log n) time.
My question is, why does that prove that the Omega(m log n) lower bound is correct? Isn't that just an isolated example of when the bound would be Omega(m log n), which doesn't prove it for all inputs? I am certain one only needs to show one counter-example when disproving a claim, but needs to prove a predicate for all possible inputs in order to prove its correctness.
In my answer, I pointed out that you could have a case where you start off by joining two singleton nodes together. You then join another singleton to that 2-node tree, giving 3 nodes sharing the same parent, then another, and so on, until you have joined together all n nodes. You then have a tree where n-1 nodes all point to the same parent, which is essentially the result you would obtain with path compression. Then every Find is executed in O(1) time. Thus, a sequence of (n-1) Unions and m >= n Finds ends up being Omega(n-1+m) = Omega(n+m) = Omega(m).
Doesn't this imply that the Omega(m log n) bound is not tight and the claim is, therefore, incorrect? I'm starting to wonder if I don't fully understand Big-O/Omega/Theta :/
EDIT : fixed up the question to be a little clearer
EDIT2: Here is the original question the way it was presented and the solution (it took me a little while to realize that Gambarino and the other guy are completely made up; hardcore Italian prof)
Seems like I indeed misunderstood the concept of Big-Omega. For some strange reason, I presumed Big-Omega to be equivalent to "the input to the function that results in the best possible performance". In reality, most likely unsurprisingly to the reader but a revelation to me, Big-Omega simply describes a lower bound on a function. That's it. Therefore, a worst-case input will have a lower and an upper bound (Big-Omega and Big-O), and so will the best possible input. In the case of Big-Omega here, all we had to do was come up with a scenario where we pick the 'best' input given the limitations of the worst case, i.e. show that there is some input of size n that will take the algorithm at least m log n steps. If such an input exists, then the lower bound is tight.
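For anyone who wants to see the lower-bound scenario concretely, here is a small sketch (my own construction): union by size with no path compression, merging equal-sized trees pairwise so the final tree has depth about log2(n), after which every Find on the deepest leaf walks Omega(log n) parent pointers.

```python
def build(n):
    parent = list(range(n))
    size = [1] * n

    def find(x):                     # no path compression
        steps = 0
        while parent[x] != x:
            x = parent[x]
            steps += 1
        return x, steps

    def union(a, b):                 # weighted union (union by size)
        ra, _ = find(a)
        rb, _ = find(b)
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]

    # n/2 unions of singletons, then n/4 unions of 2-node trees, and so on:
    # each round merges equal-sized trees, which is what drives the depth up.
    step = 1
    while step < n:
        for i in range(0, n - step, 2 * step):
            union(i, i + step)
        step *= 2
    return find

find = build(16)
print(max(find(x)[1] for x in range(16)))   # 4 = log2(16) pointer hops
```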
Fibonacci numbers have become a popular introduction to recursion for Computer Science students and there's a strong argument that they persist within nature. For these reasons, many of us are familiar with them.
They also exist within Computer Science elsewhere too; in surprisingly efficient data structures and algorithms based upon the sequence.
There are two main examples that come to mind:
Fibonacci heaps, which have better amortized running time than binomial heaps.
Fibonacci search, which shares O(log N) running time with binary search on an ordered array.
Is there some special property of these numbers that gives them an advantage over other numerical sequences? Is it a spatial quality? What other possible applications could they have?
It seems strange to me as there are many natural number sequences that occur in other recursive problems, but I've never seen a Catalan heap.
The Fibonacci numbers have all sorts of really nice mathematical properties that make them excellent in computer science. Here are a few:
They grow exponentially fast. One interesting data structure in which the Fibonacci series comes up is the AVL tree, a form of self-balancing binary tree. The intuition behind this tree is that each node maintains a balance factor so that the heights of the left and right subtrees differ by at most one. Because of this, the minimum number of nodes necessary to get an AVL tree of height h is defined by a recurrence that looks like N(h + 2) ~= N(h) + N(h + 1), which looks a lot like the Fibonacci series. If you work out the math, you can show that the number of nodes necessary to get an AVL tree of height h is F(h + 2) - 1. Because the Fibonacci series grows exponentially fast, this means that the height of an AVL tree is at most logarithmic in the number of nodes, giving you the O(lg n) lookup time we know and love about balanced binary trees. In fact, if you can bound the size of some structure with a Fibonacci number, you're likely to get an O(lg n) runtime on some operation. This is the real reason that Fibonacci heaps are called Fibonacci heaps - the proof of the amortized cost of dequeue-min involves bounding the number of nodes that a tree of a given order can contain with a Fibonacci number.
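A quick numerical check of that AVL claim (my own code; here "height" counts levels, so a single node has height 1):

```python
def min_avl_nodes(h):
    # Minimum node count of an AVL tree of height h: one root plus the two
    # smallest legal subtrees, of heights h - 1 and h - 2.
    if h == 1:
        return 1
    if h == 2:
        return 2
    return min_avl_nodes(h - 1) + min_avl_nodes(h - 2) + 1

def fib(k):                          # F(1) = F(2) = 1
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

for h in range(1, 10):
    assert min_avl_nodes(h) == fib(h + 2) - 1
print("N(h) = F(h + 2) - 1 holds for h = 1..9")
```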
Any number can be written as the sum of distinct Fibonacci numbers. This property of the Fibonacci numbers is critical to getting Fibonacci search working at all; if you couldn't add together unique Fibonacci numbers into any possible number, this search wouldn't work. Contrast this with a lot of other series, like 3^n or the Catalan numbers. This is also partly why a lot of algorithms like using powers of two, I think.
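The decomposition itself is just a greedy loop (a sketch of mine; this is Zeckendorf's theorem in action):

```python
def fib_decompose(n):
    # Greedily take the largest Fibonacci number that still fits; the result
    # is a sum of distinct Fibonacci numbers equal to n.
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(fib_decompose(100))   # [89, 8, 3]
```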
The Fibonacci numbers are efficiently computable. If the series couldn't be generated extremely efficiently (you can get the first n terms in O(n), or any arbitrary term in O(lg n)), then a lot of the algorithms that use them wouldn't be practical. Generating Catalan numbers is pretty computationally tricky, IIRC. On top of this, the Fibonacci numbers have a nice property where, given any two consecutive Fibonacci numbers, say F(k) and F(k + 1), we can easily compute the next or previous Fibonacci number by adding the two values (F(k) + F(k + 1) = F(k + 2)) or subtracting them (F(k + 1) - F(k) = F(k - 1)). This property is exploited in several algorithms, in conjunction with property (2), to break apart numbers into sums of Fibonacci numbers. For example, Fibonacci search uses this to locate values in memory, while a similar algorithm can be used to quickly and efficiently compute logarithms.
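The O(lg n) claim can be seen with the fast-doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 (my own choice of method; the matrix form mentioned in the next point works just as well):

```python
def fib_pair(n):
    """Return (F(n), F(n + 1)) with F(0) = 0, F(1) = 1, in O(log n) steps."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)          # a = F(k), b = F(k + 1) for k = n // 2
    c = a * (2 * b - a)              # F(2k)
    d = a * a + b * b                # F(2k + 1)
    return (d, c + d) if n % 2 else (c, d)

print(fib_pair(10)[0])               # 55
```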
They're pedagogically useful. Teaching recursion is tricky, and the Fibonacci series is a great way to introduce it. You can talk about straight recursion, about memoization, or about dynamic programming when introducing the series. Additionally, the amazing closed-form for the Fibonacci numbers is often taught as an exercise in induction or in the analysis of infinite series, and the related matrix equation for Fibonacci numbers is commonly introduced in linear algebra as a motivation behind eigenvectors and eigenvalues. I think that this is one of the reasons that they're so high-profile in introductory classes.
I'm sure there are more reasons than just this, but I'm sure that some of these reasons are the main factors. Hope this helps!
The greatest common divisor is another bit of magic; see this for many more. But Fibonacci numbers are easy to calculate, and they have a specific name. For example, the natural numbers 1, 2, 3, 4, 5, ... have plenty of magic of their own: all the primes are among them, the sum 1..n has a closed form, each one can be produced from the others, ... but nobody pays them special attention :)
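The GCD "magic" is presumably Lamé's theorem: consecutive Fibonacci numbers are the worst-case input for Euclid's algorithm. A quick check (my own):

```python
def gcd_steps(a, b):
    steps = 0
    while b:
        a, b = b, a % b              # one division step of Euclid's algorithm
        steps += 1
    return steps

fibs = [1, 1]
while len(fibs) < 20:
    fibs.append(fibs[-1] + fibs[-2])
print(gcd_steps(fibs[-1], fibs[-2]))   # 18 division steps for the pair F(20), F(19)
```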
One important thing I forgot to mention is the golden ratio, which has a very important impact in real life (for example, you like wide monitors :)
If you have an algorithm that can be successfully explained in a simple and concise manner, with understandable examples in CS and in nature, what better teaching tool could someone come up with?
Fibonacci sequences are indeed found everywhere in nature/life. They're useful for modeling the growth of animal populations, plant cell growth, snowflake shape, plant shape, cryptography, and of course computer science. I've heard them referred to as the DNA pattern of nature.
Fibonacci heaps have already been mentioned; the maximum degree of any node in the heap is O(log n). Also, the subtree rooted at a node with m children contains at least F(m+2) nodes.
Torrent-like protocols which use a system of nodes and supernodes use Fibonacci numbers to decide when a new supernode is needed and how many subnodes it will manage. They do node management based on the Fibonacci spiral (golden ratio). See the photo below for how nodes are split/merged (partitioned from one large square into smaller ones and vice versa). See photo: http://smartpei.typepad.com/.a/6a00d83451db7969e20115704556bd970b-pi
Some occurrences in nature:
http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/sneezewort.GIF
http://img.blogster.com/view/anacoana/post-uploads/finger.gif
http://jwilson.coe.uga.edu/EMAT6680/Simmons/6690Pictures/pinecone3yellow.gif
http://2.bp.blogspot.com/-X5II-IhjXuU/TVbHrpmRnLI/AAAAAAAAABU/nv73Y9Ylkkw/s320/amazing_fun_featured_2561778790105101600S600x600Q85_200907231856306879.jpg
I don't think there's a definitive answer, but one possibility is that the operation of dividing a set S into two partitions S1 and S2, one of which is then divided into two sub-partitions S11 and S12, one of which has the same size as S2, is a likely approach in many algorithms, and that can sometimes be numerically described by a Fibonacci sequence.
Let me add another data structure to yours: Fibonacci trees. They are interesting because the calculation of the next position in the tree can be done by mere addition of the previous nodes:
http://xw2k.nist.gov/dads/html/fibonacciTree.html
It ties in well with the discussion by templatetypedef on AVL trees (an AVL tree can at worst have a Fibonacci structure). I've also seen buffers extended in Fibonacci steps rather than powers of two in some cases.
Just to add a bit of trivia about this: Fibonacci numbers describe the breeding of rabbits. You start with (1, 1), two rabbits, and then their population grows exponentially.
Their computation as a power of the [[0,1],[1,1]] matrix can be considered the most primitive problem of operations research (sort of like the Prisoner's Dilemma is the most primitive problem of game theory).
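For reference, the matrix computation is just exponentiation by squaring of a 2x2 matrix (a sketch of mine):

```python
# [[0,1],[1,1]]^n = [[F(n-1), F(n)], [F(n), F(n+1)]], so repeated squaring
# yields F(n) in O(log n) matrix multiplications.
def mat_mul(A, B):
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def mat_pow(A, n):
    R = [[1, 0], [0, 1]]             # identity matrix
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

print(mat_pow([[0, 1], [1, 1]], 10)[0][1])   # F(10) = 55
```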
Symbols with frequencies that are successive Fibonacci numbers create maximum-depth Huffman trees, which correspond to source symbols being encoded with maximum-length binary codes. Non-Fibonacci source symbol frequencies create more balanced trees, with shorter codes. The code length has direct implications for the description complexity of the finite state machine that is responsible for decoding a given Huffman code.
Conjecture: the 1st (Fibonacci) image will be compressed to 38 bits, while the 2nd (uniform) to 50 bits. It seems that the closer your source symbol frequencies are to Fibonacci numbers, the shorter the final binary sequence and the better the compression, maybe optimal in the Huffman model.
Further Reading:
Buro, M. (1993). On the maximum length of Huffman codes. Information Processing Letters, 45(5), 219-223. doi:10.1016/0020-0190(93)90207-p
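A quick way to check the maximum-depth claim above (my own sketch; it tracks only the depth of each merged subtree rather than building the full tree):

```python
import heapq

def max_code_length(freqs):
    # Heap entries: (total frequency, tiebreak id, depth of the deepest leaf).
    heap = [(f, i, 0) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    uid = len(freqs)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, uid, max(d1, d2) + 1))
        uid += 1
    return heap[0][2]

fib = [1, 1]
while len(fib) < 8:
    fib.append(fib[-1] + fib[-2])
print(max_code_length(fib))        # 7: a maximally deep "caterpillar" tree
print(max_code_length([1] * 8))    # 3: a perfectly balanced tree
```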
For me, this is about order and space coordinates.
The Fibonacci sequence can be used as a clock.
The Fibonacci sequence allows one to calculate the golden ratio digit by digit.
The golden ratio multiplied by itself gives the golden ratio plus 1 (phi^2 = phi + 1).
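A small numerical illustration of those two statements (mine):

```python
fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])
phi = fib[-1] / fib[-2]            # ratios of consecutive terms converge to phi
print(phi)                         # ~1.6180339887...
print(phi * phi, phi + 1)          # equal up to floating-point error
```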
So we can certainly cut an integer into a series of integers, of units, by using for example the indices.
I made a first naive version in Python (a proof of concept); the code is to be updated.
https://gitlab.com/numbers/Numbers/-/blob/main/ranging.py
So we can frame, count and coordinate the calculation steps and the memory spaces against this perfectly periodic (in time) reference frame, and thus make it a kind of universal multiplication-table equivalent. For me it is explicitly a mapping.
The idea is eventually to propose a ternary code with explicit management of the memory spaces according to the Fibonacci calculation step, and then to find all our numbers there.
Once that is done, to use this mapping, this universal table, this filter: to check the concordance, the consistency, the periodicity of complex computable operations, such as the Wheeler experiment, sine, gravity, etc...
It sounds pretentious when you say it like that. It is not. Nobody created the golden ratio or Fibonacci. They are here; they are given, like fruit on a tree.