This question already has answers here:
Ukkonen's suffix tree algorithm in plain English
(7 answers)
Closed 9 years ago.
I'm doing some work with Ukkonen's algorithm for building suffix trees, but I'm not understanding some parts of the author's explanation of its linear-time complexity.
I have learned the algorithm and have coded it, but the paper I'm using as my main source of information (linked below) is confusing in places, so it's not really clear to me why the algorithm is linear.
Any help? Thanks.
Link to Ukkonen's paper: http://www.cs.helsinki.fi/u/ukkonen/SuffixT1withFigs.pdf
Find a copy of Gusfield's string algorithms textbook (Algorithms on Strings, Trees, and Sequences). It has the best exposition of the suffix tree construction I've seen. The linearity is a surprising consequence of a number of optimizations of the high-level algorithm.
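If it helps, here is a toy sketch of the amortized accounting that drives the proof (my own Python illustration, not Ukkonen's algorithm itself): each phase deposits one unit of pending work, and the inner extension loop can only spend what has already been deposited, so across all n phases the inner loop runs at most n times in total, however uneven individual phases look. Roughly the same counting argument, applied to the active point and the number of suffixes still to be inserted, is what makes the real algorithm linear.

```python
import random

def simulate_phases(n, seed=0):
    """Toy model of Ukkonen's amortized accounting: each of n phases deposits
    one pending suffix; the inner extension loop can only resolve pending
    suffixes, so it runs at most n times in total across all phases."""
    random.seed(seed)
    pending = 0
    extensions = 0
    for _ in range(n):
        pending += 1                                  # a phase deposits one unit of work
        while pending > 0 and random.random() < 0.7:  # stand-in for "another extension is needed"
            pending -= 1                              # one explicit extension performed
            extensions += 1
    return extensions

print(simulate_phases(1000))  # always <= 1000, no matter how the inner loop behaves
```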
This question already has answers here:
Soft heaps: what is corruption and why is it useful?
(2 answers)
Closed 2 years ago.
From the paper by Bernard Chazelle that I was reading: https://www.cs.princeton.edu/courses/archive/fall05/cos528/handouts/The%20Soft%20Heap.pdf
I have failed to find the soft heap being used much in practical scenarios, so it would be helpful if someone could let me know why it is really useful.
I haven't read the article, only the abstract, which I quote:
The soft heap can be used to compute exact or approximate medians and percentiles optimally. It is also useful for approximate sorting and for computing minimum spanning trees of general graphs.
So it has some uses in graph algorithms and in computing medians.
In graph algorithms there's a popular algorithm called Prim's algorithm, which finds a minimum spanning tree of a general graph. I'm not 100% sure, but I think soft heaps can serve as the priority queue in this kind of MST algorithm; in fact, Chazelle introduced the soft heap as a key ingredient of his own deterministic minimum-spanning-tree algorithm.
You might be familiar with the plain old binary heap, which is prized for its fast operations. The soft heap keeps that speed (and in fact beats the usual comparison-based bounds) by allowing a bounded fraction of its keys to become "corrupted", i.e. artificially raised.
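To make the MST connection concrete, here is a minimal sketch of Prim's algorithm driven by an ordinary binary heap (Python's heapq); the graph representation and the function name are my own choices, not code from the paper. Chazelle's MST algorithm replaces the plain heap with soft heaps, tolerating some corrupted keys in exchange for a better asymptotic bound, but the priority queue plays the same role as it does here.

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm with a binary heap.
    graph: dict mapping node -> list of (weight, neighbor) pairs (undirected)."""
    visited = {start}
    heap = list(graph[start])        # candidate edges leaving the current tree
    heapq.heapify(heap)
    mst_weight = 0
    while heap and len(visited) < len(graph):
        weight, node = heapq.heappop(heap)
        if node in visited:
            continue                 # stale entry: this edge leads back into the tree
        visited.add(node)
        mst_weight += weight
        for edge in graph[node]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return mst_weight

graph = {
    'a': [(1, 'b'), (4, 'c')],
    'b': [(1, 'a'), (2, 'c')],
    'c': [(4, 'a'), (2, 'b')],
}
print(prim_mst(graph, 'a'))  # 3
```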
This question already has answers here:
Big O, how do you calculate/approximate it?
(24 answers)
Closed 3 years ago.
I have been learning about sorting algorithms and understand the idea of time and space complexity. I was wondering if there is a fairly quick and easy way of working out the complexities given the algorithm (one that might even be quick enough to do in an exam), as opposed to memorizing the complexities of all the algorithms.
If this isn't an option, is there an easy way to remember or learn them for some of the more basic sorting algorithms such as merge sort and quicksort?
You do have to memorize a little, because it depends on the problem. The best option in different situations:
In place and stable: Insertion Sort, O(n²)
In place (don't care about stable): Heap Sort, O(n log n)
Stable (don't care about in place): Merge Sort, O(n log n)
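For the "work it out quickly" part of the question, the usual exam trick is to look at how the loops nest and how much work each level does, rather than memorizing a table. As a sketch (generic implementations, not any particular textbook's): insertion sort has an outer loop over n elements and an inner loop that may scan up to n elements, so its worst case is O(n²); merge sort halves the input O(log n) times and does O(n) merging work per level, so it is O(n log n).

```python
def insertion_sort(a):
    # Outer loop runs n-1 times; the inner loop shifts up to i elements,
    # so the worst case is 1 + 2 + ... + (n-1) = O(n^2) operations.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    # Recursion depth is O(log n); each level merges O(n) elements in total,
    # so the overall cost is O(n log n).
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
print(merge_sort([5, 2, 4, 1, 3]))      # [1, 2, 3, 4, 5]
```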
I’m wondering whether there are any algorithms that use so much time that they must be represented using Knuth up-arrow notation.
Required: Use more than one up-arrow for time complexity.
Bonus points:
Have the algorithm be useful.
Have the algorithm be useful and optimized.
Sister question on CS (recommendation from @Amy):
https://cs.stackexchange.com/questions/94184/are-there-any-algorithms-that-run-in-2-↑-↑-n
Answering this to get this question off the unanswered queue.
As mentioned in the related question linked above, a simple automaton-based algorithm for answering first-order queries in Presburger arithmetic (and some extensions) has a worst-case running time of about 2↑↑n, where n is the number of quantifier alternations in the query. It has been implemented and gives useful results in practice. Find more about it here.
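To get a feel for how fast 2↑↑n grows (and why such a worst case can still coexist with practical usefulness, since realistic queries keep n tiny), here is a small Python sketch of Knuth's up-arrow recurrence; the function name is my own.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a ↑^n b.
    n = 1 is ordinary exponentiation; higher n iterates the level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1            # a ↑^n 0 = 1 by convention
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 2↑↑b for small b: the power tower 2^2^...^2 of height b.
for b in range(1, 5):
    print(b, up_arrow(2, 2, b))   # 2, 4, 16, 65536
# 2↑↑5 = 2**65536 already has 19729 decimal digits.
```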
This question already has answers here:
Example of O(n!)?
(16 answers)
Closed 5 years ago.
I can't seem to find any examples of algorithms that use O(n!) time complexity,
and I can't seem to comprehend how it works. Please help.
A trivial example is the random sort algorithm, often called bogosort. It randomly shuffles its input until it is sorted.
Of course, it has strictly no use in the real world, but still, it is O(n!).
EDIT: As pointed out in the comments, O(n!) is actually the average-case performance of this algorithm. The best-case time complexity is O(n), which happens when the input is already sorted and a single pass confirms it, and the worst case is unbounded, since there is no guarantee that the right permutation will ever come up.
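Here is a minimal sketch of that random sort (bogosort), just to make the idea concrete; on average it needs on the order of n! shuffles before it stumbles on the sorted order.

```python
import random

def bogosort(a):
    """Shuffle until sorted. On average this takes on the order of n! shuffles;
    there is no upper bound on how long a single run might take."""
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a

print(bogosort([3, 1, 2]))  # [1, 2, 3] ... eventually
```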
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
How to find a binary logarithm very fast? (O(1) at best)
How does the log function work?
How is the log of a with base b calculated?
There are many algorithms for doing this. One of my favorites (WARNING: shameless plug) is this one based on the Fibonacci numbers. The code contains a pretty elaborate comment that goes into how the math works. Given a and a^b, it runs in O(lg b) time and O(1) space.
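In case a simpler baseline helps before diving into the linked code: an integer logarithm can be computed by repeated division, which takes as many steps as the answer itself (O(b) iterations to discover that the answer is b). The Fibonacci-based approach from the link is exponentially faster at O(lg b), but this sketch (my own, not the code from the link) shows what is being computed.

```python
def ilog(n, base):
    """Largest k such that base**k <= n, computed by repeated division.
    Takes about log_base(n) iterations."""
    if n < 1 or base < 2:
        raise ValueError("need n >= 1 and base >= 2")
    k = 0
    while n >= base:
        n //= base
        k += 1
    return k

print(ilog(1000, 10))  # 3
print(ilog(1023, 2))   # 9
print(ilog(1024, 2))   # 10
```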
Hope this helps!