How to calculate the running time of this algorithm?

For fun I wrote an anagram generator. It takes some input word or phrase and rearranges the letters in different combinations to generate a new word or phrase. For example, if you enter "cat and dog", it will return things like "can dad got" or "ant cog dad".
A friend asked what the running time was, and I realized that I wasn't sure how to calculate it in this case. At startup, I read in a list of words (a dictionary). In my case it's about 200,000 words (it's the standard unix /usr/share/dict/web2 dictionary). That doesn't really factor into the running time as that's a one-time thing at app startup and it takes well under a second to read in and index the dictionary.
When the user enters a word, the application searches the dictionary for a list of candidate words. A word is a candidate if it contains only a subset of letters from the input word or phrase. Generating the candidates is an insignificant part of the process and can be ignored for now.
Then it starts searching. It chooses the first word in the list of candidates. Next it removes the letters of that word from the remaining letters in the input string. It then searches the candidates for any words remaining that contain only a subset of the newly reduced input string. It then recurses with the new reduced input word and reduced candidate list. It repeats this until there are either no candidates left, or the input string is all used up.
So it may start with 100 candidates it has to search. It chooses one and after removing any others with the same letters, there could be 90 left, or there could be 50 left, or there could be 10 left, so when we recurse, there's a different number left to search each time. This is why I'm having trouble understanding the running time.
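In rough Python, the recursion looks something like this (a simplified sketch; the helper names and the Counter-based letter pool are illustrative, not the actual implementation):

from collections import Counter

def fits(pool, word):
    # True if `word` can be spelled from the letters remaining in `pool`
    need = Counter(word)
    return all(pool[ch] >= n for ch, n in need.items())

def search(pool, candidates, chosen, results):
    if not pool:                          # all input letters used up
        results.append(' '.join(chosen))
        return
    for i, word in enumerate(candidates):
        remaining = pool - Counter(word)  # remove this word's letters
        # keep only candidates that still fit the reduced pool;
        # starting at i avoids emitting the same set of words twice
        trimmed = [w for w in candidates[i:] if fits(remaining, w)]
        search(remaining, trimmed, chosen + [word], results)

def anagrams(phrase, dictionary):
    pool = Counter(ch for ch in phrase.lower() if ch.isalpha())
    results = []
    search(pool, [w for w in dictionary if fits(pool, w)], [], results)
    return results

For example, anagrams("cat and dog", ["can", "dad", "got", "ant", "cog"]) finds "can dad got".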
If we never removed any words from the list, it would be O(n!) where n is the number of candidates. But since we aggressively trim the list on each iteration, it works out to far less than n!. For example, one phrase I tried generates over 4,000 candidates, and ends up finding over 600,000 combinations. It only takes about 30 seconds to do so on a recent notebook computer (utilizing only a single core), so clearly it's not O(n!).
In order to understand the running time, would I need to have some statistics about how much the list of candidates gets trimmed on average with each iteration or something like that?
I was thinking that if each iteration removed 10 candidates from the list, then we'd have something like this for a 100 candidate list: 100 * 90 * 80 * 70... Or more generally, n * (n - 10) * (n - 20) * (n - 30)... In the case of a 100 candidate list that would work out to O(n^10 - a*n^9 - b*n^8 ...).
Have I calculated that correctly, or is there more to it than that?

First of all, note that the running time depends on the length m of the input. If a user enters a very long phrase that contains all letters of the alphabet many times:
A quick brown fox jumps over the lazy dog; a quick brown fox jumps over the lazy dog; a quick brown fox jumps over the lazy dog, ...
your algorithm will consider the full dictionary (of size n) in the first O(m) iterations, so the running time is n^O(m).
Here, the statement n^O(m) is rather weak even though it's correct: the exact running time might look like n^(0.01m) or n^(0.1m); both fall under n^O(m), but you cannot tell exactly which constant appears in the exponent (it depends on the structure of the English language). So n^O(m) here means "exponential running time in the worst case; the algorithm won't finish for large values of m".
Of course, you are probably interested in the running time for small values of m. If you assume m<20, it's pretty clear that the running time is O(n^20); you may consider this a better estimation than O(n!) or O(n^(n/10)).
To get better estimates, one would have to consider the structure of the dictionary; the running time depends really strongly on the dictionary. For example, if all words in the dictionary contain at least 2 letters (not sure about that), the running time can be estimated as O(n^(m/2)).
Anyway, the big-O notation doesn't seem to fit this problem in any useful manner.

You are headed in the right direction. Consider only the highest-degree term of the product you get upon evaluation. So in your case:
n*(n-10)*(n-20)*...*10
will give n^(n/10).
So the running time of your algorithm is O(n^(n/10)).
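To see why (assuming n is a multiple of 10): the product n*(n-10)*(n-20)*...*10 has n/10 factors and equals 10^(n/10) * (n/10)!; since each factor is at most n, the whole product is bounded above by n^(n/10).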

If the average length of a candidate is k, and the source phrase is such that candidates are removed only one at a time, then the complexity will be O((n/k)!).
If the initial number of candidates is M, and each step removes s words from the list of candidates, then the complexity is O(M * (M-s) * (M-2s) * ...) = O(s^(M/s) * (M/s)!), since a factor of s can be pulled out of each of the M/s terms.
In the worst case, you still have O(n!).
But, well, n! is what one could expect for such a task. I suppose most optimizations should be performed in the code that searches and removes candidates.

Related

Check if string includes part of Fibonacci Sequence

What approach should I follow to create an algorithm that finds out whether a Fibonacci sequence exists in a given string?
The string includes only digits with no whitespace, and there may be more than one sequence; I need to find all of them.
If, as your comment says, the first number must have fewer than 6 digits, you can simply search for all positions where one of the 25 Fibonacci numbers occurs (there are only 25 with fewer than 6 digits) and then try to expand this one-number sequence in both directions.
After your update:
You can even speed things up when you are only looking for sequences of at least 3 numbers.
Prebuild all 25 three-number strings that start with one of the first 25 Fibonacci numbers; this should give far fewer matches than the search for single Fibonacci numbers I suggested above.
Then search for them as described above, and try to expand the found three-number sequences.
Here's how I would approach this.
The main algorithm could search for triplets, then try to extend them into as long a sequence as possible.
This leaves us with the subproblem of finding triplets. So if you are scanning through a string to look for fibonacci numbers, one thing you can take advantage of is that the next number must have the same number of digits or one more digit.
e.g. if you have the string "987159725844" and are considering "[987]159725844" then the next thing you need to look at is "987[159]725844" and "987[1597]25844". Then the next part you would find is "[2584]4" or "[25844]".
Once you have the 3 numbers you can check whether they satisfy the Fibonacci relation C == A + B. If they do, you can check whether they belong to the canonical Fibonacci sequence by seeing if the ratio is roughly 1.6 and then running the Fibonacci iteration backwards down to the initial conditions 1, 1.
The overall algorithm would then work by scanning through looking for all triples starting with width 1, then width 2, width 3 up to 6.
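A rough Python sketch of this approach (my own illustration, not the answerer's code): it checks the additive relation C == A + B and then greedily extends; restricting to the canonical Fibonacci sequence would add the ratio check and backward iteration described above.

def extend(s, pos, a, b):
    # greedily extend the pair (a, b) with c = a + b while the digits match
    run = [a, b]
    while True:
        c = a + b
        digits = str(c)
        if not s.startswith(digits, pos):
            break
        run.append(c)
        pos += len(digits)
        a, b = b, c
    return run if len(run) >= 3 else None

def find_runs(s, max_first_digits=6):
    runs = []
    for i in range(len(s)):
        for la in range(1, max_first_digits + 1):    # width of the first number
            if i + la > len(s) or (la > 1 and s[i] == '0'):
                continue                             # skip leading zeros
            a = int(s[i:i + la])
            for lb in (la, la + 1):                  # same width or one more digit
                j = i + la + lb
                if j > len(s) or (lb > 1 and s[i + la] == '0'):
                    continue
                b = int(s[i + la:j])
                run = extend(s, j, a, b)
                if run:
                    runs.append((i, run))
    return runs

print(find_runs("987159725844"))
# [(0, [987, 1597, 2584]), (1, [8, 7, 15])]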
I'd say you should first find all interesting Fibonacci items (of which, having 6 or fewer digits, there are no more than 30) and store them in an array.
Then loop over every position in your input string, and try to match at that position the longest possible Fibonacci number (that is, browse the array backwards).
If some Fibonacci number is found, bifurcate to a secondary algorithm that merely walks through the array from the current position onward, trying to match every successive item against the following substring. When the matching ends, return to the main algorithm and keep searching the input string from the current position.
Neither of these two algorithms is recursive, nor too expensive.
update
Ok. If no tables are allowed, you could still use this approach, replacing in the first loop the way you get the next Fibonacci number: instead of indexing, apply your formula.

A puzzle about the definition of time complexity

Wikipedia defines time complexity as
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
What is the meaning of the bold part ("as a function of the length of the string representing the input")?
I know an algorithm may be treated as a function, but why must its input be "the length of the string representing the input"?
The function in the bold part means the time complexity of the algorithm, not the algorithm itself. An algorithm may be implemented in a programming language that has a function keyword, but that's something else.
Algorithm MergeSort has as input a list of 32m bits (assuming m 32-bit values). Its time complexity T(n) is a function of n = 32m, the input size, and in the worst case it is bounded from above by O(n log n). MergeSort could be implemented as a function in C or JavaScript.
The definition is derived from the context of Turing machines, where you define different states. Every function that you can compute with a computer is also computable with a Turing machine (one could say that a computer computes a function on the model of a Turing machine).
Every function is just a mapping from one domain to another (or the same) domain.
Before going to Turing machines, look at the concept of finite automata. A finite automaton has finitely many states. If your input is of length n, it is possible that the automaton needs only two states, but it has to visit those states n times, where n is the length of the string.
(Sketch not shown: an automaton with states A, B, and C, where C is the final state; a string that ends in C is accepted.)
We want to check whether the string 010101010 gets accepted by this automaton. When we read a 0 in A we move to B; if we read another 0 we move to C; and if we end on a 0 the string gets accepted, otherwise we move back to A.
In a computer you represent numbers as strings of length n, and in order to compute with them you have to visit each character of the string.
Turing machines work the same way, but finite automata are limited to regular languages. There is a big body of theory here.
Have you ever thought about how a computer computes a function like 2*x, where x is your input?
It's fun :D. Suppose I want to compute 2*x for x = 4, and I represent the number in the unary numeral system because it's easy: n is represented by n ones, so 4 becomes 1111. You can think of a Turing machine, or a simple computer, as a system with linear memory.
Suppose empty spots in your memory are represented with #.
With your input the memory looks like ###1111###, where # means an empty slot, and the head of your Turing machine sits at the first 1. Repeat the following: change the rightmost unmarked 1 to the helper symbol *, move right past the block, and write a new 1 at its end. When no unmarked 1s are left, all the *s are on the left-hand side and all the new 1s are on the right; now change all the *s back to 1 and you have 2*x on the tape.
The point is that the only thing these machines remember is the state. Here is the trace:
#####1111#######
#####111*1######
#####11**11#####
#####1***111####
#####****1111###
#####11111111###
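A small Python simulation of this marking procedure reproduces the trace above (the tape handling is my own toy encoding, not a real Turing machine; the tape needs enough # padding on the right):

def double_unary(tape):
    cells = list(tape)
    steps = [''.join(cells)]
    while True:
        # the original 1s sit to the left of the first '*' (if any)
        if '*' in cells:
            boundary = cells.index('*')
        else:
            boundary = max(i for i, c in enumerate(cells) if c == '1') + 1
        if cells[boundary - 1] != '1':        # no unmarked 1s left
            cells = ['1' if c == '*' else c for c in cells]
            steps.append(''.join(cells))
            return steps
        cells[boundary - 1] = '*'             # mark one original 1
        end = max(i for i, c in enumerate(cells) if c in '1*')
        cells[end + 1] = '1'                  # copy a 1 to the right of the block
        steps.append(''.join(cells))

for line in double_unary('#####1111#######'):
    print(line)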
Summary
If there is some input, it is expressed as a string. So you have a length of that string. Then you have a function F that maps the length of the input (as a string) to the time needed by A to compute this input (in the worst case).
We call this F time complexity.
Say we have an algorithm A. What is its time complexity?
The very easy case
If A has constant complexity, the input doesn't matter. The input could be a single value, or a list, or a map from strings to lists of lists. The algorithm will run for the same amount of time. 3 seconds or 1000 ticks or a million years or whatever. Constant time values not depending on the input.
Not much complexity at all to be honest.
Adding complexity
Now let's say, for example, A is an algorithm for sorting lists of integers. It's clear that the time needed by A now depends on the length of the list. A list of length 0 is sorted in virtually no time (apart from checking the length of the list), but this changes as the length of the input list grows.
You could say there exists a function F that maps the list length to the seconds needed by A to sort a list of that length. But wait! What if the list is already sorted? So for simplicity let's always assume a worst case scenario: F maps list length to the maximum number of seconds needed by A to sort a list of that length.
You could measure in seconds, CPU cycles, ticks, or whatever. It doesn't depend on the units.
Generalizing a bit
What about all the other algorithms? How do you measure time complexity for an algorithm that cooks me a nice meal?
If you cannot define any input parameter then we're back in the easy case: constant time. If there is some input it is expressed as a string. So you have a length of that string. And - similar to what has been said above - then you have a function F that maps the length of the input (as a string) to the time needed by A to compute this input (in the worst case).
We call this F time complexity.
That's too simple
Yeah, I know. There is the average case and the best case, there is the big O notation and asymptotic complexity. But for explaining the bold part in the original question this is sufficient, I think.

Efficiently finding a list of near matches for list of words and phrases

I am looking for an algorithm, but I don't know the name of the problem so I can't find anything. Hopefully my explanation of the problem makes sense!
Let's say you have a long list of phrases, where each phrase is a set of words. The user inputs a list of words, and their list "matches" a phrase if every word in the phrase is found in their list. A list's "score" is the number of phrases it matches. The goal is to provide the user with a list of words that would most improve their list's score.
Here's a simple example. We have ten phrases:
wood cabin
camping in woods
camping cabin
fun camping
bon fire
camping fire
swimming hole
fun cabin
wood fire
fire place
And the user provides this list:
wood
fun
camping
We match phrase 4, so the score is 1. But if the user adds "cabin" to their list, they will match 3 more phrases (1, 3, and 8) and get a score of 4. "fire" would add 2 to the score (phrases 6 and 9).
With the trivially short list, there isn't any complicated problem, as you can just iterate through the options in almost no time. But as the list grows to the hundreds of thousands, it starts taking hundreds of milliseconds. It feels like there should be a way to build an index to make the process faster, but I can't think of what the index's structure would be.
Anyone who took the time to read all this, thank you! Hopefully someone knows what I'm talking about.
You need to map words to number of occurrences. If you use a hash table you can do it very quickly (O(N) - with N being the number of words in the phrases) - loop over all phrases, break them into words, if the word is already in the map increment its count, if not - add it to the map with count 1.
To compute the score of the input, just loop over the input words and accumulate the number of occurrences. O(M) - this time M being the number of input words.
I doubt you can get better complexity (you need to scan the phrases at least once), and with a proper implementation of a map (available in almost all modern languages) - it will be fast as well.
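A minimal sketch of this counting approach in Python (illustrative names; note that, per this answer's simplification, it scores each input word independently rather than requiring that every word of a phrase be covered):

from collections import Counter

def build_counts(phrases):
    counts = Counter()
    for phrase in phrases:
        for word in set(phrase.split()):   # count each word once per phrase
            counts[word] += 1
    return counts

def score(counts, input_words):
    return sum(counts[w] for w in input_words)

counts = build_counts(["wood cabin", "fun camping", "camping cabin"])
print(score(counts, ["wood", "fun", "camping"]))   # 4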
Suffix tree.
They're rather fiddly and complicated things, but basically we store a node for each character (26 * 2 of them), then we store the suffixes for each character, so entries for th and an and so on, but presumably not for qj or other combinations which won't occur. Then you get the suffixes for those (so the, thr, and and so on, with plenty of three-letter combinations disallowed).
It allows for very fast searching, which doesn't have to be exact. If we want to match a*d we simply follow all the suffixes of a, then only d suffixes, then we insist on nul.
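What's described here is actually closer to a character trie than a true suffix tree; here's a minimal trie sketch (my own simplification), where '*' in a pattern stands for a single unknown character:

def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = {}          # end-of-word marker (the "nul" above)
    return root

def match(node, pattern):
    if not pattern:
        return '$' in node
    head, rest = pattern[0], pattern[1:]
    if head == '*':             # follow every child, as in "all the suffixes of a"
        return any(match(child, rest)
                   for ch, child in node.items() if ch != '$')
    return head in node and match(node[head], rest)

trie = build_trie(['and', 'add', 'ant'])
print(match(trie, 'a*d'))      # True (matches 'and' and 'add')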

Word Prediction algorithm

I'm sure there is a post on this, but I couldn't find one asking this exact question. Consider the following:
We have a word dictionary available
We are fed many paragraphs of words, and I wish to be able to predict the next word in a sentence given this input.
Say we have a few sentences such as "Hello my name is Tom", "His name is jerry", "He goes where there is no water". For each word, we check a hash table to see whether it exists. If it does not, we assign it a unique id and put it in the hash table. This way, instead of storing a "chain" of words as a bunch of strings, we can just have a list of unique ids.
Above, we would have for instance (0, 1, 2, 3, 4), (5, 2, 3, 6), and (7, 8, 9, 10, 3, 11, 12). Note that 3 is "is" and we added new unique ids as we discovered new words. So say we are given a sentence "her name is"; this would be (13, 2, 3). We want to know, given this context, what the next word should be. This is the algorithm I thought of, but I don't think it's efficient:
We have a list of N chains (observed sentences) where a chain may be ex. 3,6,2,7,8.
Each chain is on average size M, where M is the average sentence length
We are given a new chain of size S, e.g. (13, 2, 3), and we wish to know: what is the most probable next word?
Algorithm:
First scan the entire list of chains for those that contain the full S input ((13, 2, 3) in this example). Since we have to scan N chains, each of length M, and compare S numbers at a time, it's O(N*M*S).
If there are no chains in our scan which contain the full S, scan again after removing the least significant word (i.e. the first one, so remove 13). Now scan for (2, 3) as in step 1, in the worst case O(N*M*(S-1)).
Continue scanning this way until we get results > 0 (if ever).
Tally the next words in all of the remaining chains we have gathered. We can use a hash table which counts every time we add, and keeps track of the most added word. O(N) worst case build, O(1) to find max word.
The max word found is the most likely, so return it.
Each scan takes O(M*N*S) in the worst case. This is because there are N chains, each chain has M numbers, and we must check S numbers for a match. In the worst case we scan S times (13,2,3, then 2,3, then 3, for 3 scans = S). Thus, the total complexity is O(S^2 * M * N).
So if we have 100,000 chains and an average sentence length of 10 words, we're looking at 1,000,000*S^2 to get the optimal word. Clearly, N >> M, since sentence length does not scale with number of observed sentences in general, so M can be a constant. We can then reduce the complexity to O(S^2 * N). O(S^2 * M * N) may be more helpful for analysis though, since M can be a sizeable "constant".
This could be the completely wrong approach to take for this type of problem, but I wanted to share my thoughts instead of just blatantly asking for assistance. The reason I'm scanning the way I do is because I only want to scan as much as I have to. If nothing has the full S, just keep pruning S until some chains match. If they never match, we have no idea what to predict as the next word! Any suggestions on a less time/space complex solution? Thanks!
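Here is a rough Python sketch of the scan I described (illustrative only; chains are lists of ids):

from collections import Counter

def predict(chains, context):
    # back off: try the full context, then drop the least significant word
    for start in range(len(context)):
        sub = context[start:]
        k = len(sub)
        tally = Counter()
        for chain in chains:
            # look for `sub` followed by at least one more word
            for i in range(len(chain) - k):
                if chain[i:i + k] == sub:
                    tally[chain[i + k]] += 1
        if tally:
            return tally.most_common(1)[0][0]
    return None    # nothing ever matched, so we can't predict

chains = [[0, 1, 2, 3, 4], [5, 2, 3, 6], [7, 8, 9, 10, 3, 11, 12]]
print(predict(chains, [13, 2, 3]))    # 4 (ties broken by first occurrence)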
This is the problem of language modeling. For a baseline approach, the only thing you need is a hash table mapping fixed-length chains of words, say of length k, to the most probable following word. (*)
At training time, you break the input into (k+1)-grams using a sliding window. So if you encounter
The wrath sing, goddess, of Peleus' son, Achilles
you generate, for k=2,
START START the
START the wrath
the wrath sing
wrath sing goddess
sing goddess of
goddess of peleus
of peleus son
peleus son achilles
This can be done in linear time. For each 3-gram, tally (in a hash table) how often the third word follows the first two.
Finally, loop through the hash table and for each key (2-gram) keep only the most commonly occurring third word. Linear time.
At prediction time, look only at the k (2) last words and predict the next word. This takes only constant time since it's just a hash table lookup.
If you're wondering why you should keep only short subchains instead of full chains, then look into the theory of Markov windows. If your model were to remember all the chains of words that it has seen in its input, then it would badly overfit its training data and only reproduce its input at prediction time. How badly depends on the training set (more data is better), but for k>4 you'd really need smoothing in your model.
(*) Or to a probability distribution, but this is not needed for your simple example use case.
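For concreteness, a minimal Python sketch of this baseline for k = 2 (the names are illustrative):

from collections import Counter, defaultdict

START = '<s>'

def train(sentences, k=2):
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = [START] * k + sentence.lower().split()
        for i in range(len(words) - k):
            history = tuple(words[i:i + k])
            counts[history][words[i + k]] += 1    # tally the following word
    # keep only the most common continuation for each history
    return {h: c.most_common(1)[0][0] for h, c in counts.items()}

def predict(model, context, k=2):
    history = tuple(([START] * k + context.lower().split())[-k:])
    return model.get(history)     # None if this k-gram was never seen

model = train(["Hello my name is Tom", "His name is Jerry"])
print(predict(model, "her name is"))    # 'tom' (first seen wins the tie)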
Yee Whye Teh also has some recent interesting work that addresses this problem. The "Sequence Memoizer" extends the traditional prediction-by-partial-matching scheme to take into account arbitrarily long histories.
Here is a link to the original paper: http://www.stats.ox.ac.uk/~teh/research/compling/WooGasArc2011a.pdf
It is also worth reading some of the background work, which can be found in the paper "A Bayesian Interpretation of Interpolated Kneser-Ney"

Algorithm to find an arbitrarily large number

Here's something I've been thinking about: suppose you have a number, x, that can be infinitely large, and you have to find out what it is. All you know is if another number, y, is larger or smaller than x. What would be the fastest/best way to find x?
An evil adversary chooses a really large number somehow ... say:
int x = 9^9^9^9^9^9^9^9^9^9^9^9^9^9^9
and provides isX, isBiggerThanX, and isSmallerThanX functions. Example code might look something like this:
int c = 2
int y = 2
while (true)
    if (isX(y)) return true
    if (isBiggerThanX(y)) fn()
    else y = y^c
where fn() is a function that, once a number y has been found (that's bigger than x) does something to determine x (like divide the number in half and compare that, then repeat). The thing is, since x is arbitrarily large, it seems like a bad idea to me to use a constant to increase y.
This is just something that I've been wondering about for a while now, I'd like to hear what other people think
Use a binary search as in the usual "try to guess my number" game. But since there is no finite upper end point, we do a first phase to find a suitable one:
Initially set the upper end point arbitrarily (e.g. 1000000, though 1 or 10^100 would also work; given the infinite space to work in, all finite values are equally disproportionate).
Compare the mystery number X with the upper end point.
If it's not big enough, double it, and try again.
Once the upper end point is bigger than the mystery number, proceed with a normal binary search.
The first phase is itself similar to a binary search. The difference is that instead of halving the search space with each step, it's doubling it! The cost of each phase is O(log X). A small improvement is to update the lower end point at each doubling step: we know X is at least as high as the previous upper end point, so we can reuse it as the lower end point. The size of the search space still doubles at each step, but in the end it will be half as large as it would have been. The cost of the binary search is reduced by only 1 step, so its overall complexity remains the same.
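A compact Python sketch of both phases, written against a hypothetical oracle is_smaller_than_x (standing in for the question's isSmallerThanX):

def find_x(is_smaller_than_x):
    # assumes the hidden x is a positive integer
    # Phase 1: gallop until the upper end point passes the mystery number
    lo, hi = 1, 1
    while is_smaller_than_x(hi):    # hi < x, so reuse it as the lower end
        lo, hi = hi, hi * 2
    # Phase 2: ordinary binary search on [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_smaller_than_x(mid):  # mid < x
            lo = mid + 1
        else:
            hi = mid
    return lo

print(find_x(lambda y: y < 1000003))    # 1000003, using O(log x) queries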
Some notes
A couple of notes in response to other comments:
It's an interesting question, and computer science is not just about what can be done on physical machines. As long as the question can be defined properly, it's worth asking and thinking about.
The range of numbers is infinite, but any possible mystery number is finite. So the above method will eventually find it. "Eventually" is defined such that, for any possible finite input, the algorithm will terminate within a finite number of steps. However, since the input is unbounded, the number of steps is also unbounded (it's just that, in every particular case, it will "eventually" terminate).
If I understand your question correctly (advise if I do not), you're asking about how to solve "pick a number from 1 to 10", except that instead of 10, the upper bound is infinity.
If your number space is truly infinite, the following are true:
The value will never be held in an int (or any other data type) on any physical hardware
You will NEVER find your number
If the space is immensely large but bound, I think the best you can do is a binary search. Start at the middle of the number range. If the desired number turns out to be higher or lower, divide that half of the number space, and repeat until the desired number is found.
In your suggested implementation you raise y to the power c. However, no matter how large c is chosen to be, it will not even move the needle in an infinite space.
Infinity isn't a number. Thus you can't find it, even with a computer.
That's funny. I've wondered the same thing for years, though I've never heard anyone else ask the question.
As simple as your scenario is, it still seems to provide insufficient information to allow the choice of an optimal strategy. All one can choose is a suitable heuristic. My heuristic had been to double y, but I think that I like yours better. Yours doubles log(y).
The beauty of your heuristic is that, so long as the integer fits in the computer's memory, it finds a suitable y in logarithmic time.
Counter-question. Once you find y, how do you proceed?
I agree with using binary search, though I believe that a ONE-SIDED binary search would be more suitable, since here the complexity would NOT be O(log n) (where n is the range of allowable numbers) but O(log k), where k is the number selected by your adversary.
This would work as follows : ( Pseudocode )
k = 1;
while( isSmallerThanX( k ) )
{
k = k*2;
}
// At this point, once the loop is exited, k is bigger than x
// Now do normal binary search for the range [ k/2, k ] to find your number :)
So even if the allowable range is infinity, as long as your number is finite, you should be able to find it :)
Your method of tetration is guaranteed to take longer than the age of the universe to find an answer, if the opponent merely uses a paradigm which is better (for example, pentation). This is how you should do it:
You can only do this with symbolic representations of numbers, because it is trivial to name a number your computer cannot store in floating-point representation, even if it used arbitrary-precision arithmetic and all its memory.
Required reading: http://www.scottaaronson.com/writings/bignumbers.html - that pretty much sums it up
How do you represent a number then? You represent it by a program which will, if run to completion, print out that number. Even then, your computer is incapable of computing BusyBeaver(10^100) (if you dictated a program 1 terabyte in size, this is well over the maximum number of finite clock cycles it could run without looping forever). You can see that we could easily have the computer print out 1 0 0... each clock cycle, so the maximum number it could say (if we waited nearly an eternity) would be 10^BusyBeaver(10^100). If you allowed it to say more complicated expressions like eval(someprogram), power towers, Ackermann's function, whatever, then I believe that would be no better than increasing the original 10^100 by some constant proportional to the complexity of what you described (plus some logarithmic interpreter factor; see Kolmogorov complexity).
So let's frame this another way:
Your opponent picks a finite computable number, and gives you a function that tells you whether your guess is smaller, larger, or equal by computing it. He also gives you a representation for the output (in a sane world this would be "you can only print numbers like 99999", but he can make it more complicated; it actually doesn't matter). Proceed to measure the size of this function in bits.
Now, answer with your own function, which is twice the size of his function (in bits), and prints out the largest number it can while keeping the code to less than 2N bits in length. (You use the same representation he chose: In a world where you can only print out numbers like "99999", that's what you do. If you can define functions, it gets slightly more complicated.)
I do not understand the purpose here, but this is what I thought of:
Reading your comments, I suppose you aren't looking for an infinitely large number, but for a "super large number" instead. And whatever the number is, it will have a large number of digits. How you got them isn't the concern. Keeping this in mind:
No complex computation is required. Just type random keys on your numeric keyboard to get a super large number, and then have a program randomly add/remove/modify digits of that number. You get a list of very large numbers; select any one of them.
e.g: 3672036025039629036790672927305060260103610831569252706723680972067397267209
and keep modifying/adding digits to get more numbers
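For instance, a toy Python sketch of such a digit mutator (purely illustrative):

import random

def mutate_digits(number_string, steps=10):
    digits = list(number_string)
    for _ in range(steps):
        i = random.randrange(len(digits))
        op = random.choice(['add', 'remove', 'modify'])
        if op == 'add':
            digits.insert(i, random.choice('0123456789'))
        elif op == 'remove' and len(digits) > 1:
            del digits[i]
        else:
            digits[i] = random.choice('0123456789')
    return ''.join(digits).lstrip('0') or '0'   # avoid a leading zero

print(mutate_digits('36720360250396290367906729273050602601'))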
PS: If you state the purpose in your question clearly, we might be able to give better answers.
