What data structure for maintaining candidate words in the Wheel of Fortune? - data-structures

Background on the Wheel of Fortune, for those unfamiliar with it: in a Wheel of Fortune game, players initially see a set of blanks, representing words with letters hidden. (So players know the length of each word, but not what letters the words contain.) As the game progresses, players guess letters; if the phrase contains that letter, all the locations of that letter in the phrase are revealed. For example, a game (with the hidden phrase "stack overflow") would initially be represented as ????? ????????, and after the letter "o" is guessed, the game would display ????? o?????ow.
For simplicity, let's suppose our games contain only a single hidden word. What data structure would I use to hold all the possible candidates for that word? (I'm playing around with an AI that picks what letter to guess next, so in order to make the choice, I want to be able to calculate statistics like the most common letter out of the remaining candidates.) To be clear, initially I know that my word contains N letters, and then I learn the positions of various letters in the word, as well as what letters the word does not contain.
There's a similar question at Good algorithm and data structure for looking up words with missing letters?, but I think this question is slightly different in that I have more than two blanks, and I'm also iteratively pruning my candidate list (whereas that question seems to only use a single search). My current idea is to just maintain a candidate list of words, initialized to all English words (there should be at most 300k-500k words), and then (taking a similar approach to that question) to use a Regex to iteratively prune this candidate list as I guess more letters, but I'm curious if there's a better data structure or algorithm.
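To make my current idea concrete, here is a rough sketch of the list-plus-regex pruning I have in mind (Python; load_words() is just a placeholder for however the word list gets loaded):

import re
from collections import Counter

candidates = [w for w in load_words() if len(w) == 5]   # load_words() is a placeholder; N = 5 here

def prune(candidates, pattern, absent):
    # pattern: revealed letters with '.' for blanks, e.g. '.o...'
    # absent: letters guessed that the word does not contain
    rx = re.compile('^' + pattern + '$')
    return [w for w in candidates if rx.match(w) and not set(absent) & set(w)]

# After guessing 'o' (revealed at position 2) and 'e' (not in the word):
candidates = prune(candidates, '.o...', 'e')

# Most common letter among the remaining candidates, for picking the next guess:
letter_frequencies = Counter(ch for w in candidates for ch in set(w))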

You should start by splitting the words based on length. Within each length, a trie (http://en.wikipedia.org/wiki/Trie) seems a good start. You get to prune whole subtrees at once, and you can keep the trie intact by simply flipping a flag at each node that marks whether the subtree rooted at that node is out of consideration in the current game.
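A minimal sketch of that flag idea (Python; this is just one way to arrange it, with the words assumed to be pre-bucketed by length):

class TrieNode:
    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.active = True   # False = subtree out of consideration this game

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())

def deactivate(node, depth, revealed, excluded):
    # revealed: dict of position -> letter that must appear there
    # excluded: set of letters known to be absent from the word
    # Flags are flipped instead of nodes being removed, so the trie survives the game.
    for ch, child in node.children.items():
        if not child.active:
            continue
        must = revealed.get(depth)
        if (must is not None and ch != must) or ch in excluded:
            child.active = False          # the whole subtree drops out at once
        else:
            deactivate(child, depth + 1, revealed, excluded)

# Example: 5-letter words; we have learned 'o' sits at index 1 and 'e' is absent.
root = TrieNode()
for w in ("stack", "stock", "cocoa"):
    insert(root, w)
deactivate(root, 0, revealed={1: "o"}, excluded={"e"})

Collecting statistics (e.g. letter frequencies among the surviving words) is then just a walk over the still-active subtrees.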

Related

Comparing word to targeted words in dictionary

I'm trying to write a program in Java that stores a dictionary in a HashMap (each word under a different key), compares a given word to the words in the dictionary, and comes up with a spelling suggestion if it is not found in the dictionary -- basically a spell-check program.
I already came up with the comparison algorithm (i.e. Needleman-Wunsch, then Levenshtein distance), etc., but got stuck when it came to figuring out which words in the dictionary HashMap to compare the misspelled word (e.g. "hellooo") to.
I cannot compare "ohelloo" [which should be corrected to "hello"] to every word in the dictionary because that would take too long, and I cannot compare it only to the words in the dictionary starting with 'o' because it's supposed to be "hello".
Any ideas?
The most common spelling mistakes are
Delete a letter (smaller word OR word split)
Swap adjacent letters
Alter letter (QWERTY adjacent letters)
Insert letter
Some reports say that 70-90% of mistakes fall into the above categories (edit distance 1).
Take a look at the article below; it provides a solution for single or double mistakes (edit distance 1 or 2). Almost everything you'll need is there!
How to write a spelling corrector
FYI: You can find implementations in various programming languages at the bottom of the aforementioned article. I've used it in some of my projects; practical accuracy is really good, sometimes 95%+ as claimed by the author.
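For reference, the generation step from that approach looks roughly like this (a Python sketch in the spirit of the article, not a verbatim copy):

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # All strings at edit distance 1: deletes, transposes, replaces, inserts.
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in alphabet]
    inserts = [L + c + R for L, R in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def suggestions(word, dictionary):
    # dictionary: whatever set/map of known words you already keep
    return {w for w in edits1(word) if w in dictionary}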
--Based on OP's comment--
If you don't want to pre-compute every possible alteration and then look it up in the map, I suggest you use a Patricia trie (radix tree) instead of a HashMap. Unfortunately, you will still need to handle "first-letter mistakes" separately (e.g. remove the first letter, swap the first letter with the second, or replace it with a QWERTY-adjacent letter), but this way you can limit your search with high probability.
You can even combine it with an extra index map or trie of "reversed" words, or an extra index that omits the first N characters (e.g. the first 2), so you can catch errors that occur only in the prefix.
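To illustrate the last point, a tiny sketch of an index that skips the first two characters (an assumption of how such an index could look; it helps when the damage is confined to the prefix and the length is unchanged):

from collections import defaultdict

def build_tail_index(dictionary, skip=2):
    # Map each word's tail (everything after the first `skip` letters) to the words sharing it.
    index = defaultdict(set)
    for w in dictionary:
        index[w[skip:]].add(w)
    return index

index = build_tail_index({"hello", "yellow", "mellow"})
print(index.get("jello"[2:], set()))   # a damaged prefix ("jello") still finds {"hello"}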

Finding a list of adjacent words between two words

I am working on a programming challenge for practice and am having trouble finding a good data structure/algorithm to use to implement a solution.
Background:
Call two words “adjacent” if you can change one word into the other by adding, deleting, or changing a single letter.
A “word list” is an ordered list of unique words where successive words are adjacent.
The problem:
Write a program which takes two words as inputs and walks through the dictionary and creates a list of words between them.
Examples:
hate → love: hate, have, hove, love
dogs → wolves: dogs, does, doles, soles, solves, wolves
man → woman: man, ran, roan, roman, woman
flour → flower: flour, lour, dour, doer, dower, lower, flower
I am not quite sure how to approach this problem. My first attempt involved creating permutations of the first word and then trying to replace letters in it. My second thought was maybe something like a suffix tree.
Any thoughts or ideas toward at least breaking the problem down would be appreciated. Keep in mind that this is not homework, but a programming challenge I am working on myself.
This puzzle was first stated by Charles Dodgson, who wrote Alice's Adventures in Wonderland under his pseudonym Lewis Carroll.
The basic idea is to create a graph structure in which the nodes are words in a dictionary and the edges connect words that are one letter apart, then do a breadth-first search through the graph, starting at the first word, until you find the second word.
I discuss this problem, and give an implementation that includes a clever algorithm for identifying "adjacent to" words, at my blog.
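A compact sketch of that approach (Python; the adjacency test here is the naive pairwise check, and adjacent/word_ladder are just illustrative names):

from collections import deque

def adjacent(a, b):
    # True if b can be reached from a by adding, deleting or changing one letter.
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    shorter, longer = sorted((a, b), key=len)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def word_ladder(start, goal, dictionary):
    # Breadth-first search from start to goal; returns the list of words or None.
    parents = {start: None}
    queue = deque([start])
    while queue:
        word = queue.popleft()
        if word == goal:
            path = []
            while word is not None:
                path.append(word)
                word = parents[word]
            return path[::-1]
        for other in dictionary:
            if other not in parents and adjacent(word, other):
                parents[other] = word
                queue.append(other)
    return None

print(word_ladder("hate", "love", {"hate", "have", "hove", "love", "gate"}))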
I have done this myself and used it to create a (not very good) Windows game.
I used the approach recommended by others of implementing this as a graph, where each node is a word and they are connected if they differ in one letter. This means you can use well known graph theory results to find paths between words (eg simple recursion where knowing the words at distance 1 allows you to find the words at distance 2).
The tricky part is building up the graph. The bad news is that it is O(n^2). The good news is that it doesn't have to be done in real time - rather than your program reading the dictionary words from a file, it reads in the data structure you baked earlier.
The key insight is that the order of the letters doesn't matter; in fact, it gets in the way. You need to construct another form in which to hold the words, one that strips out the order information and allows words to be compared more easily. You can do this in O(n). You have lots of choices; I will show two.
For word puzzles I quite often use an encoding which I call an anagram dictionary. A word is represented by another word which has the same letters but in alphabetic sequence. So "cars" becomes "acrs". Both "lists" and "slits" become "ilsst". This is a better structure for comparison than the original word, but much better comparisons exist (however, it is a very useful structure for other word puzzles).
Letter counts. An array of 26 values which records the frequency of each letter in the word. So for "cars" it starts 1,0,1,0,0... as there is one "a" and one "c". Hold an external list of the non-zero entries (i.e. which letters appear in the word) so you only have to check 5 or 6 values at most instead of 26. Two words held in this form can be compared quickly by checking that at most two counts differ. This is the one I would use.
So, this is how I did it.
I wrote a program which implemented the data structure up above.
It had a class called WordNode. This contains the original word; a List of all other WordNodes that are one letter different; an array of 26 integers giving the frequency of each letter; and a list of the non-zero entries in the letter-count array.
The initialiser populates the letter-frequency array and the corresponding list of non-zero entries. It sets the list of connected WordNodes to empty.
After I have created an instance of the WordNode class for every word, I run a compare method which checks whether the frequency counts differ in no more than two places. That normally takes slightly fewer comparisons than there are letters in the words; not too bad. If the counts differ in at most two places, the words are candidates for being one letter apart (a direct letter-by-letter check confirms it and rules out rearrangements), and I add that WordNode to the list of WordNodes differing in only one letter.
This means we now have a graph of all the words one letter different.
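A rough Python rendering of that compare step (not the original Windows implementation; Counter stands in for the 26-integer array plus its list of non-zero entries):

from collections import Counter

def one_letter_apart(w1, c1, w2, c2):
    # Cheap filter: the frequency counts may differ in at most two places.
    places = sum(1 for ch in set(c1) | set(c2) if c1[ch] != c2[ch])
    if places > 2:
        return False
    # Direct confirmation, ruling out rearrangements like "cab" vs "bad".
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if abs(len(w1) - len(w2)) == 1:
        shorter, longer = sorted((w1, w2), key=len)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

words = ["dogs", "does", "doles", "cab", "bad"]
counts = {w: Counter(w) for w in words}
graph = {w: [v for v in words
             if v != w and one_letter_apart(w, counts[w], v, counts[v])] for w in words}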
You can export either the whole data structure or strip out the letter frequency and other stuff you don't need and save it (I used serialized XML. If you go that way, make sure you check it handles the List of WordNodes as references and not embedded objects).
Your actual game then only has to read in this data structure (instead of a dictionary) and it can find the words one letter different with a direct lookup, in essentially zero time.
Pity my game was crap.
I don't know if this is the type of solution you're looking for, but an active area of research is constructing "edit distance 1" dictionaries for quickly looking up adjacent words (to use your parlance) for search-term suggestions, data-entry correction, and bioinformatics (e.g. finding similarities in chromosomes). See for example this research paper. Short of indexing your entire dictionary, at the very least this might suggest a search heuristic that you can use.
The simplest (recursive) algorithm I can think of (well, the only one I can think of at the moment) is:
Initialize an empty blacklist.
Take all words from your dictionary that are a valid step from the current word.
Remove the ones that are in the blacklist.
Check if you can find the target word.
If not, repeat the algorithm for all words you found in the last step.
If yes, you found it. Return from the recursion, printing all words in the path you found.
Maybe someone with a bit more time can add the ruby code for this?
Try this
x = 'hate'
puts x = x.next until x == 'love'
And if you couple it with a dictionary lookup, you will get a list of all the valid words in between that appear in that dictionary.

Data structure for dictionary with efficient querying of arbitrary positions

Can anyone suggest an appropriate data structure to hold a dictionary that will allow me to query the presence of words (items) that have particular letters at particular positions? For example, determine which words (if any) have letters a,b,c at positions x,y,z. Insertions do not have to be particularly efficient.
This is basically the scrabble problem (I have scores associated with the letters too, but that need not concern us). I suspect bioinformaticians have studied the same problem under the guise of sequence alignment. What's the state of the art in terms of speed?
If you are trying to build a very fast Scrabble player, you might want to look into the GADDAG data structure, which was specifically designed for the purpose. Essentially, the GADDAG is a compressed trie structure (specifically, it's a modified DAWG) that lets you explore outward and find all words that can be made with a certain set of letters subject to constraints about which letters of the words must be in what positions, as well as the overall lengths of the strings found.
The Wikipedia article on GADDAGs goes into more depth on the structure and links to the original paper on the subject. You might also want to look at DAWGs as a starting point.
Hope this helps!
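Separately from the GADDAG, if all you need is "which words have these letters at these positions", a much simpler baseline is an inverted index keyed by (position, letter); a sketch, just to show the shape of it:

from collections import defaultdict

def build_positional_index(words):
    # Map (position, letter) -> set of words with that letter at that position.
    index = defaultdict(set)
    for w in words:
        for pos, ch in enumerate(w):
            index[(pos, ch)].add(w)
    return index

def query(index, constraints):
    # constraints: iterable of (position, letter) pairs; returns matching words.
    sets = [index.get(c, set()) for c in constraints]
    if not sets:
        return set()
    return set.intersection(*sorted(sets, key=len))   # intersect smallest sets first

index = build_positional_index(["scrabble", "scramble", "scribble"])
print(query(index, [(0, "s"), (3, "a")]))   # words with 's' at position 0 and 'a' at position 3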

Finding nearest string to a pair in a lexicon

I am currently trying to come up with an efficient solution to the problem with the following formulation:
Given an input string s and a fixed lexicon, find a string w1||w2 (|| denotes concatenation; w1 and w2 are words in the lexicon) with the lowest Levenshtein distance to s.
The obvious naive solution is:
lowest = None
for word1 in lexicon:
    for word2 in lexicon:
        if lowest is None or lev_dist(s, word1 + word2) < lev_dist(s, lowest):
            lowest = word1 + word2
I'm sure there must be better solutions to the problem. Can anyone offer any insight?
You may be able to do a bit better by putting lower bounds on the cost of individual strings.
Looking at the algorithm in http://en.wikipedia.org/wiki/Levenshtein_distance, at the time you are computing d[i, j] for the distance you know you are adding in a contribution that depends on s[i] and t[j], where s and t are the strings being compared, so you can make the costs of change/delete/insert depend on the position of the operation within the two strings.
This means that you can compute the distance between abcXXX and abcdef using a cost function in which operations on the characters marked XXX are free. This allows you to compute the cost of transforming abcXXX to abcdef if the string XXX is in fact the most favourable string possible.
So for each word w1 in the lexicon compute the distance between w1XXX and the target string and XXXw1 and the target string. Produce two copies of the lexicon, sorted in order of w1XXX distance and XXXw1 distance. Now try all pairs in order of the sum of left hand and right hand costs, which is a lower bound on the cost of that pair. Keep track of the best answer so far. When the best answer is at least as good as the next lower bound cost you encounter, you know that nothing you can try can improve on this best answer, so you can stop.
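One way to realize that free-XXX trick (an assumption about the implementation, not the only one): the cost of w1XXX against s with the XXX part free equals the distance from w1 to the cheapest prefix of s, which is the minimum of the final row of the standard DP table; symmetrically for XXXw1 and suffixes of s. A Python sketch:

def prefix_bound(w, s):
    # min over j of lev(w, s[:j]): the cost of "wXXX" vs s when the XXX part is free.
    prev = list(range(len(s) + 1))
    for i, cw in enumerate(w, 1):
        cur = [i]
        for j, cs in enumerate(s, 1):
            cur.append(min(prev[j] + 1,                  # drop cw
                           cur[j - 1] + 1,               # insert cs
                           prev[j - 1] + (cw != cs)))    # substitute or match
        prev = cur
    return min(prev)   # best over all prefixes of s

def suffix_bound(w, s):
    # Symmetric bound for "XXXw" vs s: match w against some suffix of s.
    return prefix_bound(w[::-1], s[::-1])

# prefix_bound(w1, s) + suffix_bound(w2, s) is a lower bound on lev(w1 + w2, s),
# which is what the sorted-lexicon search described above needs.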
I assume you want to do this many times for the same lexicon. You have a misspelled word and suspect it's caused by a missing space between two words, for example.
The first thing you'll surely need is a way to estimate string "closeness". I'm fond of normalization techniques. For example, replace each letter by a representative from an equivalence class. (Perhaps M and N both go to M because they sound similar. Perhaps PH --> F for a similar reason.)
Now, you'll want your normalized lexicon entered both frontwards and backwards into a trie or some similar structure.
Now, search for your needle both forwards and backwards, but keep track of intermediate results for both directions. In other words, at each position in the search string, keep track of the list of candidate trie nodes which have been selected at that position.
Now, compare the forwards- and backwards-looking arrays of intermediate results, looking for places that look like a good join point between words. You might also check for join points off-by-one from each other. (In other words, you've found the end of the first word and the beginning of the second.)
If you do, then you've found your word pair.
If you are running lots of queries on the same lexicon and want to improve the query time, but can afford some time for preprocessing, you can create a trie containing all possible words in the form w1 || w2. Then you can use the algorithm described here: Fast and Easy Levenshtein distance using a Trie to find the answer for any word you need.
What the algorithm basically does is walk the nodes of the trie while keeping track of the current minimum. If, at some node, even the cheapest entry in the current row of the edit-distance table (i.e. the best possible way to extend the prefix spelled from the root to this node towards the input string s) is already larger than the minimum achieved so far, you can prune the entire subtree rooted at this node, because it cannot yield an answer.
In my testing with a dictionary of English words and random query words, this is anywhere between 30 and 300 times faster than the normal approach of testing every word in the dictionary, depending on the type of queries you run on it.
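In case it helps, a condensed sketch of that walk (Python; the trie here is a plain nested dict with a None key marking a complete word, which is just this sketch's convention, not the structure used in the linked article):

def make_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node[None] = w        # None key marks "a word ends here" and stores it
    return root

def nearest(trie, s):
    best = [None, float("inf")]          # best word so far and its distance

    def walk(node, prev_row):
        for ch, child in node.items():
            if ch is None:               # a complete word: prev_row[-1] is its distance to s
                if prev_row[-1] < best[1]:
                    best[:] = [child, prev_row[-1]]
                continue
            row = [prev_row[0] + 1]
            for j, cs in enumerate(s, 1):
                row.append(min(prev_row[j] + 1, row[j - 1] + 1,
                               prev_row[j - 1] + (ch != cs)))
            if min(row) < best[1]:       # otherwise nothing below this node can beat the minimum
                walk(child, row)

    walk(trie, list(range(len(s) + 1)))
    return best

print(nearest(make_trie(["cart", "horse", "house"]), "hoarse"))   # -> ['horse', 1]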

Efficient word scramble algorithm

I'm looking for an efficient algorithm for scrambling a set of letters into a permutation containing the maximum number of words.
For example, say I am given the list of letters: {e, e, h, r, s, t}. I need to order them in such a way as to contain the maximum number of words. If I order those letters into "theres", it contains the words "the", "there", "her", "here", and "ere". So that example could have a score of 5, since it contains 5 words. I want to order the letters in such a way as to have the highest score (contain the most words).
A naive algorithm would be to try and score every permutation. I believe this is O(n!), so 720 different permutations would be tried for just the 6 letters above (including some duplicates, since the example has e twice). For more letters, the naive solution quickly becomes impossible, of course.
The algorithm doesn't have to actually produce the very best solution, but it should find a good solution in a reasonable amount of time. For my application, simply guessing (Monte Carlo) at a few million permutations works quite poorly, so that's currently the mark to beat.
I am currently using the Aho-Corasick algorithm to score permutations. It searches for each word in the dictionary in just one pass through the text, so I believe it's quite efficient. This also means I have all the words stored in a trie, but if another algorithm requires different storage that's fine too. I am not worried about setting up the dictionary, just the run time of the actual ordering and searching. Even a fuzzy dictionary could be used if needed, like a Bloom Filter.
For my application, the list of letters given is about 100, and the dictionary contains over 100,000 entries. The dictionary never changes, but several different lists of letters need to be ordered.
I am considering trying a path finding algorithm. I believe I could start with a random letter from the list as a starting point. Then each remaining letter would be used to create a "path." I think this would work well with the Aho-Corasick scoring algorithm, since scores could be built up one letter at a time. I haven't tried path finding yet though; maybe it's not even a good idea? I don't know which path finding algorithm might be best.
Another algorithm I thought of also starts with a random letter. Then the dictionary trie would be searched for "rich" branches containing the remaining letters. Dictionary branches containing unavailable letters would be pruned. I'm a bit foggy on the details of how this would work exactly, but it could completely eliminate scoring permutations.
Here's an idea, inspired by Markov Chains:
Precompute the letter transition probabilities in your dictionary. Create a table with the probability that some letter X is followed by another letter Y, for all letter pairs, based on the words in the dictionary.
Generate permutations by randomly choosing each next letter from the remaining pool of letters, based on the previous letter and the probability table, until all letters are used up. Run this many times.
You can experiment by increasing the "memory" of your transition table - don't look only one letter back, but say 2 or 3. This increases the size of the probability table, but gives you a better chance of creating valid words.
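A small sketch of that generation step (Python; bigram counts only, with +1 smoothing so letters never get stuck with zero weight, and the Aho-Corasick scoring of each generated permutation is left out):

import random
from collections import Counter, defaultdict

def transition_counts(dictionary):
    # How often letter x is immediately followed by letter y across all words.
    table = defaultdict(Counter)
    for word in dictionary:
        for x, y in zip(word, word[1:]):
            table[x][y] += 1
    return table

def sample_permutation(letters, table):
    # Order the letters by repeatedly drawing the next one in proportion to
    # how often it follows the previous one in the dictionary.
    pool = Counter(letters)
    first = random.choice(sorted(pool))   # pick a starting letter at random
    out = [first]
    pool[first] -= 1
    while sum(pool.values()):
        choices = [c for c in pool if pool[c] > 0]
        weights = [table[out[-1]][c] + 1 for c in choices]   # +1 smoothing
        nxt = random.choices(choices, weights=weights)[0]
        out.append(nxt)
        pool[nxt] -= 1
    return "".join(out)

table = transition_counts(["the", "there", "her", "here", "ere"])
print(sample_permutation("eehrst", table))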
You might try simulated annealing, which has been used successfully for complex optimization problems in a number of domains. Basically you do randomized hill-climbing while gradually reducing the randomness. Since you already have the Aho-Corasick scoring you've done most of the work already. All you need is a way to generate neighbor permutations; for that something simple like swapping a pair of letters should work fine.
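A bare-bones version of that loop (Python; score is a stand-in for the Aho-Corasick scorer already in place, and the cooling schedule is an arbitrary choice):

import math
import random

def anneal(letters, score, steps=100_000, t_start=2.0, t_end=0.01):
    # Randomized hill-climbing over pair swaps with a slowly decreasing temperature.
    current = list(letters)
    random.shuffle(current)
    current_score = score(current)
    best, best_score = current[:], current_score
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)    # geometric cooling
        i, j = random.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]      # neighbor: swap two letters
        new_score = score(current)
        # Always accept improvements; accept worse moves with a temperature-dependent chance.
        if new_score >= current_score or random.random() < math.exp((new_score - current_score) / t):
            current_score = new_score
            if new_score > best_score:
                best, best_score = current[:], new_score
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
    return "".join(best), best_score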
Have you thought about using a genetic algorithm? You have the beginnings of your fitness function already. You could experiment with the mutation and crossover (thanks Nathan) algorithms to see which do the best job.
Another option would be for your algorithm to build the smallest possible word from the input set, and then add one letter at a time so that the new string is, or contains, a new word. Start with a few different starting words for each input set and see where it leads.
Just a few idle thoughts.
It might be useful to check how others solved this:
http://sourceforge.net/search/?type_of_search=soft&words=anagram
On this page you can generate anagrams online. I've played around with it for a while and it's great fun. It doesn't explain in detail how it does its job, but the parameters give some insight.
http://wordsmith.org/anagram/advanced.html
With JavaScript and Node.js, I implemented a jumble solver that uses a dictionary to build a tree and then traverses the tree to get all possible words. I explained the algorithm in detail in this article and put the source code on GitHub:
Scramble or Jumble Word Solver with Express and Node.js
