Good algorithm and data structure for looking up words with missing letters? - algorithm

I need to write an efficient algorithm for looking up words with missing letters in a dictionary and I want the set of possible words.
For example, if I have th??e, I might get back "these", "those", "theme", "there", etc.
There will be up to TWO question marks and when two question marks do occur, they will occur in sequence.
I was wondering if anyone can suggest some data structures or algorithms I should use.
A Trie is too space-inefficient and would make it too slow. Any other ideas or modifications?
Currently I am using 3 hash tables for when it is an exact match, 1 question mark, and 2 question marks.
Given a dictionary, I hash all the possible wildcard patterns of each word. For example, for the word WORD I hash WORD, ?ORD, W?RD, WO?D, WOR?, ??RD, W??D, and WO?? into the table. Then I use a linked list to chain collisions together. So say hash(W?RD) = hash(STR?NG) = 17; hashtab(17) will point to WORD, and WORD points to STRING because it is a linked list.
The average lookup time for one word is about 2e-6 s; I am looking to do better, preferably on the order of 1e-9. It took 0.5 seconds to insert 3M entries and 4 seconds to look up 3M entries.

I believe in this case it is best to just use a flat file where each word stands in one line. With this you can conveniently use the power of a regular expression search, which is highly optimized and will probably beat any data structure you can devise yourself for this problem.
Solution #1: Using Regex
This is working Ruby code for this problem:
def query(str, data)
  r = Regexp.new("^#{str.gsub("?", ".")}$")
  idx = 0
  begin
    idx = data.index(r, idx)
    if idx
      yield data[idx, str.size]
      idx += str.size + 1
    end
  end while idx
end

start_time = Time.now
query("?r?te", File.read("wordlist.txt")) do |w|
  puts w
end
puts Time.now - start_time
The file wordlist.txt contains 45425 words (downloadable here). The program's output for query ?r?te is:
brute
crate
Crete
grate
irate
prate
write
wrote
0.013689
So it takes just under 14 milliseconds to both read the whole file and to find all matches in it. And it scales very well for all kinds of query patterns, even where a Trie is very slow:
query ????????????????e
counterproductive
indistinguishable
microarchitecture
microprogrammable
0.018681
query ?h?a?r?c?l?
theatricals
0.013608
This looks fast enough for me.
Solution #2: Regex with Prepared Data
If you want to go even faster, you can split the wordlist into strings that contain words of equal lengths and just search the correct one based on your query length. Replace the last 5 lines with this code:
def query_split(str, data)
  query(str, data[str.length]) do |w|
    yield w
  end
end

# prepare data
data = Hash.new("")
File.read("wordlist.txt").each_line do |w|
  data[w.length - 1] += w
end

# use prepared data for query
start_time = Time.now
query_split("?r?te", data) do |w|
  puts w
end
puts Time.now - start_time
Building the data structure now takes about 0.4 seconds, but all queries are about 10 times faster (depending on the number of words of that length):
?r?te 0.001112 sec
?h?a?r?c?l? 0.000852 sec
????????????????e 0.000169 sec
Solution #3: One Big Hashtable (Updated Requirements)
Since you have changed your requirements, you can easily expand on your idea to use just one big hashtable that contains all precalculated results. But instead of working around collisions yourself you could rely on the performance of a properly implemented hashtable.
Here I create one big hashtable, where each possible query maps to a list of its results:
def create_big_hash(data)
  h = Hash.new do |h, k|
    h[k] = Array.new
  end
  data.each_line do |l|
    w = l.strip
    # add all words with one ?
    w.length.times do |i|
      q = String.new(w)
      q[i] = "?"
      h[q].push w
    end
    # add all words with two ??
    (w.length - 1).times do |i|
      q = String.new(w)
      q[i, 2] = "??"
      h[q].push w
    end
  end
  h
end

# prepare data
t = Time.new
h = create_big_hash(File.read("wordlist.txt"))
puts "#{Time.new - t} sec preparing data\n#{h.size} entries in big hash"

# use prepared data for query
t = Time.new
h["?ood"].each do |w|
  puts w
end
puts (Time.new - t)
Output is
4.960255 sec preparing data
616745 entries in big hash
food
good
hood
mood
wood
2.0e-05
The query performance is O(1); it is just a lookup in the hashtable. The time 2.0e-05 is probably below the timer's precision. When running it 1000 times, I get an average of 1.958e-6 seconds per query. To get it faster, I would switch to C++ and use the Google Sparse Hash, which is extremely memory-efficient and fast.
Solution #4: Get Really Serious
All above solutions work and should be good enough for many use cases. If you really want to get serious and have lots of spare time on your hands, read some good papers:
Tries for Approximate String Matching - If well implemented, tries can have very compact memory requirements (50% less space than the dictionary itself), and are very fast.
Agrep - A Fast Approximate Pattern-Matching Tool - Agrep is based on a new efficient and flexible algorithm for approximate string matching.
Google Scholar search for approximate string matching - More than enough to read on this topic.

Given the current limitations:
There will be up to 2 question marks
When there are 2 question marks, they appear together
There are ~100,000 words in the dictionary, average word length is 6.
I have two viable solutions for you:
The fast solution: HASH
You can use a hash whose keys are your words with up to two '?', and whose values are lists of fitting words. This hash will have around 100,000 + 100,000*6 + 100,000*5 = 1,200,000 entries (if you have 2 question marks, you just need to find the place of the first one...). Each entry can store a list of words, or a list of pointers to the existing words. If you store a list of pointers, and we assume that on average fewer than 20 words match each key with two '?', then the additional memory is less than 20 * 1,200,000 = 24,000,000.
If each pointer is 4 bytes, then the memory requirement here is (24,000,000 + 1,200,000) * 4 bytes = 100,800,000 bytes ~= 96 megabytes.
To sum up this solution:
Memory Consumption: ~96 MB
Time for each search: calculating a hash function, and following a pointer. O(1)
Note: if you want to use a hash of a smaller size, you can, but then it is better to save a balanced search tree in each entry instead of a linked list, for better performance.
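For concreteness, here is a rough sketch of that precomputed hash in Python (the tiny word list and the function name are only illustrative; it assumes the question's limit of at most two adjacent question marks):

from collections import defaultdict

def build_wildcard_hash(words):
    # map every pattern with zero, one, or two adjacent '?' to the words it matches
    table = defaultdict(list)
    for w in words:
        table[w].append(w)                            # exact match
        for i in range(len(w)):                       # one question mark
            table[w[:i] + '?' + w[i+1:]].append(w)
        for i in range(len(w) - 1):                   # two adjacent question marks
            table[w[:i] + '??' + w[i+2:]].append(w)
    return table

words = ['these', 'those', 'theme', 'there', 'crate']   # illustrative word list
table = build_wildcard_hash(words)
print(table['th??e'])    # ['these', 'those', 'theme', 'there']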
The space savvy, but still very fast solution: TRIE variation
This solution uses the following observation:
If the '?' signs were at the end of the word, trie would be an excellent solution.
The search in the trie would take time proportional to the length of the word, and for the last couple of letters a DFS traversal would bring all of the endings.
A very fast and very memory-savvy solution.
So let's use this observation to build something that works exactly like this.
You can think about every word you have in the dictionary, as a word ending with # (or any other symbol that does not exist in your dictionary).
So the word 'space' would be 'space#'.
Now, if you rotate each of the words, with the '#' sign, you get the following:
space#, pace#s, ace#sp, *ce#spa*, e#spac
(no # as first letter).
If you insert all of these variations into a TRIE, you can easily find the word you are seeking at the length of the word, by 'rotating' your word.
Example:
You want to find all words that fit 's??ce' (one of them is space, another is slice).
You build the word s??ce#, and rotate it so that the ? signs are at the end, i.e. 'ce#s??'.
All of the rotation variations exist inside the trie, and specifically 'ce#spa' (marked with * above). Once the beginning is found, you need to go over all of the continuations of the appropriate length and save them. Then you rotate them again so that the # is the last letter, and voila - you have all of the words you were looking for!
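A rough sketch of that rotation trie in Python (a dict-of-dicts trie; for simplicity it inserts every rotation, including the one starting with '#', and it assumes the pattern contains at least one '?'):

END = '$'   # terminal marker, assumed not to occur in any word

def build_rotation_trie(words):
    root = {}
    for w in words:
        t = w + '#'
        for r in range(len(t)):
            node = root
            for ch in t[r:] + t[:r]:
                node = node.setdefault(ch, {})
            node[END] = True          # marks the end of a complete rotation
    return root

def _completions(node, k, acc, out):
    # collect every k-character extension below node that ends on a terminal marker
    if k == 0:
        if END in node:
            out.append(acc)
        return
    for ch, child in node.items():
        if ch != END:
            _completions(child, k - 1, acc + ch, out)

def query(root, pattern):
    t = pattern + '#'
    r, k = t.index('?'), t.count('?')     # the '?'s are adjacent, starting at r
    rot = t[r + k:] + t[:r + k]           # rotate so the '?'s come last
    node = root
    for ch in rot[:-k]:                   # walk the known part, e.g. 'ce#s' for 's??ce'
        if ch not in node:
            return []
        node = node[ch]
    tails = []
    _completions(node, k, '', tails)
    words = []
    for tail in tails:
        full = rot[:-k] + tail
        restored = full[-(r + k):] + full[:-(r + k)]   # rotate back; '#' ends up last
        words.append(restored[:-1])
    return words

root = build_rotation_trie(['space', 'slice', 'spice', 'brace'])   # illustrative words
print(query(root, 's??ce'))    # ['space', 'slice', 'spice'] in some order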
To sum up this solution:
Memory Consumption:
For each word, all of its rotations appear in the trie, so roughly 6x the original text (the average word length) is stored. The trie itself takes around 3x (just guessing...) the space of the text stored inside it. So the total space necessary for this trie is 6 * 3 * 100,000 = 1,800,000 words ~= 6.8 megabytes.
Time for each search:
rotating the word: O(word length)
seeking the beginning in the trie: O(word length)
going over all of the endings: O(number of matches)
rotating the endings: O(total length of answers)
To sum up, it is very, very fast: the cost is the word length times a small constant.
To sum up...
The second choice has a great time/space complexity, and would be the best option for you to use. There are a few problems with the second solution (in which case you might want to use the first solution):
More complex to implement. I'm not sure whether there are programming languages with tries built in out of the box. If there aren't, it means you'll need to implement one yourself...
Does not scale well. If tomorrow you decide that you need your question marks spread all over the word, and not necessarily joined together, you'll need to think hard about how to fit the second solution to it. In the case of the first solution, it is quite easy to generalize.

To me this problem sounds like a good fit for a Trie data structure. Insert the entire dictionary into your trie, and then look up the word. For a missing letter you would have to try all sub-tries, which should be relatively easy to do with a recursive approach.
EDIT: I wrote a simple implementation of this in Ruby just now: http://gist.github.com/262667.
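The gist linked above is in Ruby; as a rough sketch of the same recursive idea (dict-of-dicts trie, '$' as an end-of-word marker, everything else illustrative), it might look like this in Python:

def insert(root, word):
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node['$'] = True          # end-of-word marker

def find(node, pattern, prefix='', out=None):
    if out is None:
        out = []
    if not pattern:
        if '$' in node:
            out.append(prefix)
        return out
    head, rest = pattern[0], pattern[1:]
    if head == '?':
        for ch, child in node.items():       # a missing letter: try every sub-trie
            if ch != '$':
                find(child, rest, prefix + ch, out)
    elif head in node:
        find(node[head], rest, prefix + head, out)
    return out

root = {}
for w in ['these', 'those', 'theme', 'there', 'crate']:   # illustrative words
    insert(root, w)
print(find(root, 'th??e'))    # ['these', 'theme', 'there', 'those'] in some order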

A Directed Acyclic Word Graph (DAWG) would be the perfect data structure for this problem. It combines the efficiency of a trie (a trie can be seen as a special case of a DAWG), but is much more space-efficient. A typical DAWG will take a fraction of the size that a plain text file with the words would take.
Enumerating words that meet specific conditions is simple and the same as in a trie: you traverse the graph in a depth-first fashion.

Anna's second solution is the inspiration for this one.
First, load all the words into memory and divide the dictionary into sections based on word length.
For each length, make n copies of an array of pointers to the words. Sort each array so that the strings appear in order when rotated by a certain number of letters. For example, suppose the original list of 5-letter words is [plane, apple, space, train, happy, stack, hacks]. Then your five arrays of pointers will be:
rotated by 0 letters: [apple, hacks, happy, plane, space, stack, train]
rotated by 1 letter: [hacks, happy, plane, space, apple, train, stack]
rotated by 2 letters: [space, stack, train, plane, hacks, apple, happy]
rotated by 3 letters: [space, stack, train, hacks, apple, plane, happy]
rotated by 4 letters: [apple, plane, space, stack, train, hacks, happy]
(Instead of pointers, you can use integers identifying the words, if that saves space on your platform.)
To search, just ask how much you would have to rotate the pattern so that the question marks appear at the end. Then you can binary search in the appropriate list.
If you need to find matches for ??ppy, you would have to rotate that by 2 to make ppy??. So look in the array that is in order when rotated by 2 letters. A quick binary search finds that "happy" is the only match.
If you need to find matches for th??g, you would have to rotate that by 4 to make gth??. So look in array 4, where a binary search finds that there are no matches.
This works no matter how many question marks there are, as long as they all appear together.
Space required in addition to the dictionary itself: For words of length N, this requires space for (N times the number of words of length N) pointers or integers.
Time per lookup: O(log n) where n is the number of words of the appropriate length.
Implementation in Python:
import bisect

class Matcher:
    def __init__(self, words):
        # Sort the words into bins by length.
        bins = []
        for w in words:
            while len(bins) <= len(w):
                bins.append([])
            bins[len(w)].append(w)
        # Make n copies of each list, sorted by rotations.
        for n in range(len(bins)):
            bins[n] = [sorted(bins[n], key=lambda w: w[i:] + w[:i])
                       for i in range(n)]
        self.bins = bins

    def find(self, pattern):
        bins = self.bins
        if len(pattern) >= len(bins):
            return []
        # Figure out which array to search.
        r = (pattern.rindex('?') + 1) % len(pattern)
        rpat = (pattern[r:] + pattern[:r]).rstrip('?')
        if '?' in rpat:
            raise ValueError("non-adjacent wildcards in pattern: " + repr(pattern))
        a = bins[len(pattern)][r]
        # Binary-search the array.
        class RotatedArray:
            def __len__(self):
                return len(a)
            def __getitem__(self, i):
                word = a[i]
                return word[r:] + word[:r]
        ra = RotatedArray()
        start = bisect.bisect(ra, rpat)
        stop = bisect.bisect(ra, rpat[:-1] + chr(ord(rpat[-1]) + 1))
        # Return the matches.
        return a[start:stop]

words = open('/usr/share/dict/words', 'r').read().split()
print "Building matcher..."
m = Matcher(words)  # takes 1-2 seconds, for me
print "Done."
print m.find("st??k")
print m.find("ov???low")
On my computer, the system dictionary is 909 KB big and this program uses about 3.2 MB of memory in addition to what it takes just to store the words (pointers are 4 bytes). For this dictionary, you could cut that in half by using 2-byte integers instead of pointers, because there are fewer than 2^16 words of each length.
Measurements: On my machine, m.find("st??k") runs in 0.000032 seconds, m.find("ov???low") in 0.000034 seconds, and m.find("????????????????e") in 0.000023 seconds.
By writing out the binary search instead of using class RotatedArray and the bisect library, I got those first two numbers down to 0.000016 seconds: twice as fast. Implementing this in C++ would make it faster still.

First we need a way to compare the query string with a given entry. Let's assume a function using regexes: matches(query,trialstr).
An O(n) algorithm would be to simply run through every list item (your dictionary would be represented as a list in the program), comparing each to your query string.
With a bit of pre-calculation, you could improve on this for large numbers of queries by building an additional list of words for each letter, so your dictionary might look like:
wordsbyletter = { 'a' : ['aardvark', 'abacus', ... ],
                  'b' : ['bat', 'bar', ...],
                  .... }
However, this would be of limited use, particularly if your query string starts with an unknown character. So we can do even better by noting where in a given word a particular letter lies, generating:
wordsmap = { 'a': { 0: ['aardvark', 'abacus'],
                    1: ['bat', 'bar'],
                    2: ['abacus'] },
             'b': { 0: ['bat', 'bar'],
                    1: ['abacus'] },
             ....
           }
As you can see, without using indices you will end up hugely increasing the amount of required storage space - specifically, a dictionary of n words of average length m will require nm^2 of storage. However, you could now very quickly do your lookup to get all the words from each set that can match.
The final optimisation (which you could use off the bat on the naive approach) is to also separate all the words of the same length into separate stores, since you always know how long the word is.
This version would be O(kx) where k is the number of known letters in the query word, and x = x(n) is the time to look up a single item in a dictionary of length n in your implementation (usually log(n)).
So with a final dictionary like:
allmap = {
    3 : {
        'a' : {
            1 : ['ant', 'all'],
            2 : ['bar', 'pat']
        },
        'b' : {
            1 : ['bar', 'boy'],
            ...
        }
    },
    4 : {
        'a' : {
            1 : ['ante'],
            ....
Then our algorithm is just:
possiblewords = set()
firsttime = True
wordlen = len(query)
for idx, letter in enumerate(query):
    if letter != '?':
        matchesthisletter = set(allmap[wordlen][letter][idx])
        if firsttime:
            possiblewords = matchesthisletter
            firsttime = False
        else:
            possiblewords &= matchesthisletter
At the end, the set possiblewords will contain all the matching words.

If you generate all the possible words that match the pattern (arate, arbte, arcte ... zryte, zrzte) and then look them up in a binary tree representation of the dictionary, that will have average performance of O(26^N1 * log(N2)), where N1 is the number of question marks and N2 is the size of the dictionary. Seems good enough for me, but I'm sure it's possible to figure out a better algorithm.
EDIT: If you will have more than say, three question marks, have a look at Phil H's answer and his letter indexing approach.
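A rough sketch of that generate-and-test idea, using a sorted list plus bisect as a stand-in for the binary tree (names and the tiny word list are illustrative):

import bisect, itertools, string

def lookup(pattern, sorted_words):
    holes = [i for i, ch in enumerate(pattern) if ch == '?']
    hits = []
    # 26^(number of '?') candidate words, each checked with a binary search
    for letters in itertools.product(string.ascii_lowercase, repeat=len(holes)):
        cand = list(pattern)
        for pos, ch in zip(holes, letters):
            cand[pos] = ch
        cand = ''.join(cand)
        i = bisect.bisect_left(sorted_words, cand)
        if i < len(sorted_words) and sorted_words[i] == cand:
            hits.append(cand)
    return hits

words = sorted(['these', 'those', 'theme', 'there', 'crate', 'write'])
print(lookup('th??e', words))    # ['theme', 'there', 'these', 'those']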

Assuming you have enough memory, you could build a giant hash map to provide the answer in constant time. Here is a quick example in Python:
from array import array

all_words = open("english-words").read().split()
big_map = {}

def populate_map(word):
    for i in range(pow(2, len(word))):
        bin = _bin(i, len(word))
        candidate = array('c', word)
        for j in range(len(word)):
            if bin[j] == "1":
                candidate[j] = "?"
        if candidate.tostring() in big_map:
            big_map[candidate.tostring()].add(word)
        else:
            big_map[candidate.tostring()] = set([word])

def _bin(x, width):
    return ''.join(str((x >> i) & 1) for i in xrange(width - 1, -1, -1))

def run():
    for word in all_words:
        populate_map(word)

run()
>>> big_map["y??r"]
set(['your', 'year'])
>>> big_map["yo?r"]
set(['your'])
>>> big_map["?o?r"]
set(['four', 'poor', 'door', 'your', 'hour'])

You can take a look at how it's done in aspell. It offers suggestions of the correct word for misspelled words.

Build a hash set of all the words. To find matches, replace the question marks in the pattern with each possible combination of letters. If there are two question marks, a query consists of 26^2 = 676 quick, constant-expected-time hash table lookups.
import itertools

words = set(open("/usr/share/dict/words").read().split())

def query(pattern):
    i = pattern.index('?')
    j = pattern.rindex('?') + 1
    for combo in itertools.product('abcdefghijklmnopqrstuvwxyz', repeat=j-i):
        attempt = pattern[:i] + ''.join(combo) + pattern[j:]
        if attempt in words:
            print attempt
This uses less memory than my other answer, but it gets exponentially slower as you add more question marks.

If 80-90% accuracy is acceptable, you could manage with Peter Norvig's spell checker. The implementation is small and elegant.

A regex-based solution will consider every possible value in your dictionary. If performance is your largest constraint, an index could be built to speed it up considerably.
You could start with an index keyed on word length, each entry containing an index from (position, character) to the set of matching words. For length-5 words, for example, 2=r : {write, wrote, drate, arete, arite, group}, 3=o : {wrote, float, group}, etc. To get the possible matches for the original query, say '?ro??', the word sets would be intersected, resulting in {wrote, group} in this case.
This is assuming that the only wildcard will be a single character and that the word length is known up front. If these are not valid assumptions, I can recommend n-gram based text matching, such as discussed in this paper.

The data structure you want is called a trie - see the wikipedia article for a short summary.
A trie is a tree structure where the paths through the tree form the set of all the words you wish to encode - each node can have up to 26 children, one for each possible letter at the next character position. See the diagram in the wikipedia article to see what I mean.

Have you considered using a Ternary Search Tree?
The lookup speed is comparable to a trie, but it is more space-efficient.
I have implemented this data structure several times, and it is a quite straightforward task in most languages.
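As a rough illustration (not a tuned implementation), a ternary search tree with '?' treated as "match any letter here" might look like this in Python:

class TSTNode:
    __slots__ = ('ch', 'left', 'eq', 'right', 'end')
    def __init__(self, ch):
        self.ch = ch
        self.left = self.eq = self.right = None
        self.end = False

def insert(node, word, i=0):
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.left = insert(node.left, word, i)
    elif ch > node.ch:
        node.right = insert(node.right, word, i)
    elif i + 1 < len(word):
        node.eq = insert(node.eq, word, i + 1)
    else:
        node.end = True
    return node

def search(node, pattern, i=0, prefix='', out=None):
    if out is None:
        out = []
    if node is None:
        return out
    ch = pattern[i]
    if ch == '?' or ch < node.ch:
        search(node.left, pattern, i, prefix, out)
    if ch == '?' or ch == node.ch:
        if i + 1 == len(pattern):
            if node.end:
                out.append(prefix + node.ch)
        else:
            search(node.eq, pattern, i + 1, prefix + node.ch, out)
    if ch == '?' or ch > node.ch:
        search(node.right, pattern, i, prefix, out)
    return out

root = None
for w in ['these', 'those', 'theme', 'there', 'crate']:   # illustrative words
    root = insert(root, w)
print(search(root, 'th??e'))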

My first post had an error that Jason found: it did not work well when ?? was at the beginning. I have now borrowed the cyclic shifts from Anna.
My solution:
Introduce an end-of-word character (#) and store all cyclic shifts of each word in sorted arrays, one sorted array per word length. When looking for "th??e#", shift the string to move the ?-marks to the end (obtaining "e#th??"), pick the array containing words of length 5, and do a binary search for the first word occurring at or after the string "e#th". All subsequent words in the array that start with "e#th" match, i.e., we will find "e#thos" (those), "e#thes" (these), etc.
The solution has time complexity Log(N), where N is the size of the dictionary, and it expands the size of the data by a factor of 6 or so (the average word length).

Here's how I'd do it:
Concatenate the words of the dictionary into one long String separated by a non-word character.
Put all words into a TreeMap, where the key is the word and the value is the offset of the start of the word in the big String.
Find the base of the search string; i.e. the largest leading substring that doesn't include a '?'.
Use TreeMap.higherKey(base) and TreeMap.lowerKey(next(base)) to find the range within the String between which matches will be found. (The next method needs to calculate the next larger word to the base string with the same number or fewer characters; e.g. next("aa") is "ab", next("az") is "b".)
Create a regex for the search string and use Matcher.find() to search the substring corresponding to the range.
Steps 1 and 2 are done beforehand giving a data structure using O(NlogN) space where N is the number of words.
This approach degenerates to a brute-force regex search of the entire dictionary when the '?' appears in the first position, but the further to the right it is, the less matching needs to be done.
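The answer above is phrased in terms of Java's TreeMap; the same range trick can be sketched with a sorted list and Python's bisect, where next_key is one possible implementation of the next() helper mentioned in step 4 (all names here are mine):

import bisect, re

def next_key(base):
    # smallest string that is greater than every string starting with `base`:
    # drop trailing 'z's, then bump the last remaining character
    base = base.rstrip('z')
    return base[:-1] + chr(ord(base[-1]) + 1) if base else None

def lookup(pattern, sorted_words):
    base = pattern.split('?', 1)[0]              # longest '?'-free prefix
    lo = bisect.bisect_left(sorted_words, base)
    upper = next_key(base)
    hi = bisect.bisect_left(sorted_words, upper) if upper else len(sorted_words)
    rx = re.compile(pattern.replace('?', '.') + '$')
    return [w for w in sorted_words[lo:hi] if rx.match(w)]

words = sorted(['these', 'those', 'theme', 'there', 'throne', 'crate'])
print(lookup('th??e', words))    # ['theme', 'there', 'these', 'those']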
EDIT:
To improve the performance in the case where '?' is the first character, create a secondary lookup table that records the start/end offsets of runs of words whose second character is 'a', 'b', and so on. This can be used in the case where the first non-'?' is the second character. You can use a similar approach for cases where the first non-'?' is the third character, fourth character and so on, but you end up with larger and larger numbers of smaller and smaller runs, and eventually this "optimization" becomes ineffective.
An alternative approach which requires significantly more space, but which is faster in most cases, is to prepare the dictionary data structure as above for all rotations of the words in the dictionary. For instance, the first rotation would consist of all words 2 characters or more with the first character of the word moved to the end of the word. The second rotation would be words of 3 characters or more with the first two characters moved to the end, and so on. Then to do the search, look for the longest sequence of non-'?' characters in the search string. If the index of the first character of this substring is N, use the Nth rotation to find the ranges, and search in the Nth rotation word list.

A lazy solution is to let SQLite or another DBMS do the job for you.
Just create an in-memory database, load your words and run a select using the LIKE operator.
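For example, a quick sketch with Python's built-in sqlite3 (the table and column names are made up; '?' in the pattern becomes the single-character LIKE wildcard '_', and note that SQLite's LIKE is case-insensitive for ASCII by default):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE words (w TEXT)')
conn.executemany('INSERT INTO words VALUES (?)',
                 [(w,) for w in ['these', 'those', 'theme', 'there', 'crate']])
conn.execute('CREATE INDEX idx_w ON words (w)')   # may help when the pattern has a fixed prefix

def lookup(pattern):
    like = pattern.replace('?', '_')
    return [row[0] for row in
            conn.execute('SELECT w FROM words WHERE w LIKE ?', (like,))]

print(lookup('th??e'))    # matches 'these', 'those', 'theme', 'there'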

Summary: Use two compact binary-searched indexes, one of the words, and one of the reversed words. The space cost is 2N pointers for the indexes; almost all lookups go very fast; the worst case, "??e", is still decent. If you make separate tables for each word length, that'd make even the worst case very fast.
Details: Stephen C. posted a good idea: search an ordered dictionary to find the range where the pattern can appear. This doesn't help, though, when the pattern starts with a wildcard. You might also index by word length, but here's another idea: add an ordered index on the reversed dictionary words; then a pattern always yields a small range in either the forward index or the reversed-word index (since we're told there are no patterns like ?ABCD?). The words themselves need be stored only once, with the entries of both structures pointing to the same words, and the lookup procedure viewing them either forwards or in reverse; but to use Python's built-in binary-search function I've made two separate string arrays instead, wasting some space. (I'm using a sorted array instead of a tree, as others have suggested, as it saves space and goes at least as fast.)
Code:
import bisect, re

def forward(string): return string
def reverse(string): return string[::-1]

index_forward = sorted(line.rstrip('\n')
                       for line in open('/usr/share/dict/words'))
index_reverse = sorted(map(reverse, index_forward))

def lookup(pattern):
    "Return a list of the dictionary words that match pattern."
    if reverse(pattern).find('?') <= pattern.find('?'):
        key, index, fixup = pattern, index_forward, forward
    else:
        key, index, fixup = reverse(pattern), index_reverse, reverse
    assert all(c.isalpha() or c == '?' for c in pattern)
    lo = bisect.bisect_left(index, key.replace('?', 'A'))
    hi = bisect.bisect_right(index, key.replace('?', 'z'))
    r = re.compile(pattern.replace('?', '.') + '$')
    return filter(r.match, (fixup(index[i]) for i in range(lo, hi)))
Tests: (The code also works for patterns like ?AB?D?, though without the speed guarantee.)
>>> lookup('hello')
['hello']
>>> lookup('??llo')
['callo', 'cello', 'hello', 'uhllo', 'Rollo', 'hollo', 'nullo']
>>> lookup('hel??')
['helio', 'helix', 'hello', 'helly', 'heloe', 'helve']
>>> lookup('he?l')
['heal', 'heel', 'hell', 'heml', 'herl']
>>> lookup('hx?l')
[]
Efficiency: This needs 2N pointers plus the space needed to store the dictionary-word text (in the tuned version). The worst-case time comes on the pattern '??e' which looks at 44062 candidates in my 235k-word /usr/share/dict/words; but almost all queries are much faster, like 'h??lo' looking at 190, and indexing first on word-length would reduce '??e' almost to nothing if we need to. Each candidate-check goes faster than the hashtable lookups others have suggested.
This resembles the rotations-index solution, which avoids all false match candidates at the cost of needing about 10N pointers instead of 2N (supposing an average word-length of about 10, as in my /usr/share/dict/words).
You could do a single binary search per lookup, instead of two, using a custom search function that searches for both low-bound and high-bound together (so the shared part of the search isn't repeated).

If you only have ? wildcards, no * wildcards that match a variable number of characters, you could try this: For each character index, build a dictionary from characters to sets of words. i.e. if the words are write, wrote, drate, arete, arite, your dictionary structure would look like this:
Character Index 0:
'a' -> {"arete", "arite"}
'd' -> {"drate"}
'w' -> {"write", "wrote"}
Character Index 1:
'r' -> {"write", "wrote", "drate", "arete", "arite"}
Character Index 2:
'a' -> {"drate"}
'e' -> {"arete"}
'i' -> {"write", "arite"}
'o' -> {"wrote"}
...
If you want to look up a?i?? you would take the set that corresponds to character index 0 => 'a' {"arete", "arite"} and the set that corresponds to character index 2 = 'i' => {"write", "arite"} and take the set intersection.
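A rough sketch of that per-position index in Python, additionally grouping words by length as other answers suggest (all names are illustrative):

from collections import defaultdict

def build_index(words):
    index = defaultdict(set)
    for w in words:
        for i, ch in enumerate(w):
            index[(len(w), i, ch)].add(w)
    return index

def lookup(pattern, index, words_by_length):
    sets = [index.get((len(pattern), i, ch), set())
            for i, ch in enumerate(pattern) if ch != '?']
    if not sets:                       # all '?': every word of that length matches
        return words_by_length.get(len(pattern), set())
    return set.intersection(*sets)

words = ['write', 'wrote', 'drate', 'arete', 'arite', 'group', 'float']
index = build_index(words)
by_length = defaultdict(set)
for w in words:
    by_length[len(w)].add(w)
print(lookup('a?i??', index, by_length))    # {'arite'}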

If you seriously want something on the order of a billion searches per second (though I can't imagine why anyone outside of someone building the next grand-master Scrabble AI or a huge web service would want that), I recommend utilizing threading to spawn [number of cores on your machine] threads plus a master thread that delegates work to all of those threads. Then apply the best solution you have found so far and hope you don't run out of memory.
An idea I had is that you can speed up some cases by preparing sliced-down dictionaries by letter; then, if you know the first letter of the selection, you can resort to looking in a much smaller haystack.
Another thought I had was that you were trying to brute-force something - perhaps build a DB or list or something for Scrabble?

Related

Efficiently compute permutations of a given set of "blocks" in a line

I am working on an application where I have a number of blocks which should be positioned on a line. That is, there is a varying number of blocks, each with a different length, which should be positioned on the line. There needs to be at least one empty element between blocks.
I would like to get all possible permutations of the blocks on the line efficiently.
For example I have a line of length 15 and would like to place blocks of 1, 6 and 1 size.
Order matters, i.e. in my example the 1-size blocks always should be left and right of the 6-size block.
Possible permutations are
X.XXXXXX.X.....
X..XXXXXX.X....
...
.....X.XXXXXX.X
How do I efficiently generate all possible permutations in a higher level language, e.g. Java?
One way to do this is to approach it recursively:
If the minimum total length required to store all the blocks with exactly one space in-between them exceeds the available space, there are no ways to place the blocks.
Otherwise, if you have no blocks to place, then the only way to place the blocks is to leave all squares unfilled.
Otherwise, there are two options. First, you could place the first block at the first position in the row, then recursively place the remaining blocks in the remaining space within the row after first leaving one extra blank space at the start of the row. Second, you could leave the first space in the row blank, then recursively place the same set of blocks in the remaining space in the row. Trying out both options and combining the results back together should give you the answer you're looking for.
Translating this recursive logic into actual Java should not be too difficult. The code below is designed for readability and can be optimized a bit:
public List<String> allBlockOrderings(int rowLength, List<Integer> blockSizes) {
    /* Case 1: Not enough space left. */
    if (spaceNeededFor(blockSizes) > rowLength) return Collections.emptyList();

    List<String> result = new ArrayList<String>();

    /* Case 2: Nothing to place. */
    if (blockSizes.isEmpty()) {
        result.add(stringOf('.', rowLength));
    } else {
        int first = blockSizes.get(0);
        List<Integer> remaining = blockSizes.subList(1, blockSizes.size());

        /* Case 3a: place the very first block at the beginning of the row. */
        if (remaining.isEmpty()) {
            /* Last block: just pad the rest of the row with blanks. */
            result.add(stringOf('X', first) + stringOf('.', rowLength - first));
        } else {
            /* The block, one mandatory blank, then the remaining blocks recursively. */
            for (String rest : allBlockOrderings(rowLength - first - 1, remaining)) {
                result.add(stringOf('X', first) + "." + rest);
            }
        }

        /* Case 3b: leave the very first spot open. */
        for (String rest : allBlockOrderings(rowLength - 1, blockSizes)) {
            result.add("." + rest);
        }
    }
    return result;
}
You'll need to implement the spaceNeededFor method, which returns the length of the shortest row that could possibly hold a given list of blocks, and the stringOf method, which takes in a character and a number, then returns a string of that many copies of the given character.
Hope this helps!
To me it seems more easy to think about the problem in another way:
We have fixed blocks in a fixed order, separated by dots. We can create all permutations by distributing the remaining dots over the allowed positions.
The length of this fixed part of the line is:
fixed_len = length_of_all_blocks + number_of_blocks - 1
The number of remaining dots is
free_dots = length_of_line - fixed_len.
The number of open positions is
pos_count = number_of_blocks + 1
Now we have to find all permutations of how to put free_dots into pos_count.
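A sketch of that enumeration in Python, treating it as a "stars and bars" problem of splitting free_dots into pos_count non-negative parts (the function name is mine):

from itertools import combinations

def block_lines(line_length, blocks):
    k = len(blocks)
    fixed_len = sum(blocks) + k - 1           # all blocks plus one mandatory dot between them
    free_dots = line_length - fixed_len
    pos_count = k + 1                         # gaps: before, between, and after the blocks
    if free_dots < 0:
        return
    # choose divider positions to split free_dots into pos_count non-negative parts
    for dividers in combinations(range(free_dots + pos_count - 1), pos_count - 1):
        parts, prev = [], -1
        for d in dividers:
            parts.append(d - prev - 1)
            prev = d
        parts.append(free_dots + pos_count - 2 - prev)
        line = '.' * parts[0]
        for i, size in enumerate(blocks):
            line += 'X' * size
            if i < k - 1:
                line += '.'                   # the mandatory separator
            line += '.' * parts[i + 1]
        yield line

for line in block_lines(15, [1, 6, 1]):
    print(line)    # prints all C(8,3) = 56 arrangements, starting with X.XXXXXX.X.....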
It's quite hard to determine what an "efficient implementation" is since the output can be very large and therefore even a fast implementation won't be fast enough.
I'd use techniques of dynamic programming and recursion for such a task. The recursive function should take two parameters: the list of unused block sizes and the remaining length of the row. Inside, it will be a simple loop. You should store the results you already know. I'm sure you can handle the details yourself. Edit: Our friend has already done that for you :-).
By the way, what is the goal of such a task? It reminds me of the picture-in-a-grid puzzles where you have such numbers for every row and column and you need to decode the picture. There are better ways to solve such a problem.

Given a word, how do I get the list of all words, that differ by one letter?

Let's say I have the word "CAT". These words differ from "CAT" by one letter (not the full list)
CUT
CAP
PAT
FAT
COT
etc.
Is there an elegant way to generate this? Obviously, one way to do it is through brute force.
pseudo code:
while (0 to length of word)
    while (A to Z)
        replace one letter at a time, and check if the resulting word is a valid word
If I had a 10 letter word, the loop would run 26 * 10 = 260 times.
Is there a better, elegant way to do this?
Given a list of words, for example with
words = set(line.strip().lower() for line in open('/usr/share/dict/words'))
you can build an index of "wildcarded" words, where you replace each character of the word with a wildcard (say "?"), so that for example "gat" and "fat" both get indexed to "?at":
def wildcard(s, idx):
    return s[:idx] + '?' + s[idx+1:]

def wildcarded(s):
    for idx in xrange(len(s)):
        yield wildcard(s, idx)

# list(wildcarded('cat')) returns ['?at', 'c?t', 'ca?']

from collections import defaultdict

index = defaultdict(list)
for word in words:
    for w in wildcarded(word):
        index[w].append(word)
Now if you want to look for all the words that differ by one letter from "cat", just look for "?at", "c?t" and "ca?" and concatenate the results:
def near_words(word):
    ret = []
    for w in wildcarded(word):
        ret += index[w]
    return ret

print near_words('cat')
# outputs ['cat', 'bat', 'zat', 'jat', 'kat', 'rat', 'sat', 'pat', 'hat', 'oat', 'gat', 'vat', 'nat', 'fat', 'lat', 'wat', 'eat', 'yat', 'mat', 'tat', 'cat', 'cut', 'cot', 'cit', 'cay', 'car', 'cap', 'caw', 'cat', 'can', 'cam', 'cal', 'cad', 'cab', 'cag']
print near_words('stack')
# outputs ['stack', 'stack', 'smack', 'spack', 'slack', 'snack', 'shack', 'swack', 'stuck', 'stack', 'stick', 'stock', 'stank', 'stack', 'stark', 'stauk', 'stalk', 'stack']
If the maximum word length is L and the number of words is N, the index is made of O(NL) pointers, while the lookup algorithm runs in time O(L + number of results).
If you want to look for all the words that differ by K letters instead of 1 this approach doesn't generalize well, but it is a very hard problem in full generality (it is the problem of finding neighbors in Hamming spaces).
Work out what your performance requirements really are.
Implement it exactly as you described it above.
Time it, and see if you meet those requirements already.
Optimise only if required (and I am willing to bet it isn't required, because 260 look-ups in a hash table of words that fit in RAM isn't that slow.)
The size of a dictionary for a human language and a word length are tiny (~10**5 and ~100), therefore a brute-force approach will do unless measurements show otherwise in your case:
#!/usr/bin/env python
import string

ALL_WORDS = set(open('/usr/share/dict/words').read().lower().split())
ALPHABET = string.ascii_lowercase

def known(words): return set(w for w in words if w in ALL_WORDS)

def one_letter(word):
    # http://norvig.com/spell-correct.html
    splits = ((word[:i], word[i:]) for i in range(len(word) + 1))
    replaces = (a + c + b[1:] for a, b in splits for c in ALPHABET if b)
    return set(replaces)

from pprint import pprint
pprint(known(one_letter("cat")))
Output
set(['bat',
'cab',
'cad',
'cal',
'cam',
'can',
'cap',
'car',
'cat',
'caw',
'cot',
'cut',
'eat',
'fat',
'hat',
'mat',
'nat',
'oat',
'pat',
'rat',
'sat',
'tat',
'vat'])
You'll need a dictionary of valid words to check against, or otherwise the problem isn't going to generate "words" but "strings" rather. There are many available for free online, or if you're on Linux most distros ship with dictionary files in /usr/share/dict/.
There are two approaches to take:
For each letter in the word, replace it with all other 25 characters and check if it's in the dictionary. Use a hashtable to store the dictionary words for efficient querying. You only need to populate the hashtable with words of the same length as your search word. This will be O(MN + 25N) = O(MN), where M is the number of words of length N in your dictionary and N is the length of your word.
For each dictionary word that is the same length as your search word, check how many characters differ. This will be O(MN).
Although both fall into the same complexity class, the latter drops the O(25N) term and overhead associated with a hashtable.
For: l = word length, w = number of words in wordlist:
Your algorithm is O(l.(l log w)) for a tree wordlist, plus the cost of constructing the wordlist in the first place (which is O(w log (w))) (I assume a tree here, you can redo this with a hash if you like).
This is O(l.w)
As another answer already suggests, you don't care that the word has an a, b or z in place of the character you want to change, you just care that it's not the letter that you started with. So test the one combination you don't want, rather than all of the combinations that would do.
So:
for (each candidate word from the wordlist) {
    difference = 0
    for (each letter in your original word) {
        does it match? if not, difference++
    }
    if difference = 1, store the candidate word as a solution
}
Now, you're going to argue that you're looking at 78 comparisons versus thousands, but that's not accurate: in order to make use of a wordlist to see if a candidate is available, your method involves creating a content-addressed structure (a tree or hash) before you even start, plus the lookups into the hash once you're running. The solution above also allows you to read the wordlist file once per word under test (without having to hold it in memory for rescanning). Your solution is probably faster for doing this on many words at once, but the above is better for a single word lookup, and more memory efficient in every case.
Credit to other answers for the 'count the difference' method of spotting word differences...
Anyway, you'll need to iterate through all letters to check it. But another approach would be to check the dictionary for words that correspond to the masks ?AT, C?T, CA? (where ? can be any symbol).
If the strings always match in length, one way would be to remove one letter at a time and compare the results on both strings; 10 chars would be 10 loops.
Iterate the word list and for each word count the differing letters. If the count becomes greater than 1, go to the next word.
A faster solution, if the dictionary is static and there are plenty of words to check: create a matrix of letters. Rows are the first letter of the word, columns are the second letter. Cells are lists of words that begin with the given first and second letter. When you want to find words similar to a given word, iterate through just one row and then just one column. If not on the intersecting cell, all the other letters of every iterated word must match; on the intersecting cell, exactly one letter must differ.
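A rough sketch of that matrix idea in Python (names are mine; it assumes lowercase words of length at least two):

from collections import defaultdict

def build_matrix(words):
    matrix = defaultdict(list)    # (first letter, second letter) -> words in that cell
    for w in words:
        matrix[(w[0], w[1])].append(w)
    return matrix

def diff_count(a, b):
    return sum(1 for x, y in zip(a, b) if x != y)

def one_letter_off(word, matrix, alphabet='abcdefghijklmnopqrstuvwxyz'):
    first, second, rest = word[0], word[1], word[2:]
    out = []
    for b in alphabet:                                  # scan one row
        for cand in matrix.get((first, b), []):
            if len(cand) != len(word):
                continue
            if b == second:                             # intersecting cell
                if diff_count(cand[2:], rest) == 1:
                    out.append(cand)
            elif cand[2:] == rest:                      # second letter already differs
                out.append(cand)
    for a in alphabet:                                  # scan one column
        if a != first:
            for cand in matrix.get((a, second), []):
                if len(cand) == len(word) and cand[2:] == rest:
                    out.append(cand)
    return out

words = ['cat', 'cut', 'cot', 'cap', 'can', 'bat', 'rat', 'dog']   # illustrative words
print(one_letter_off('cat', build_matrix(words)))    # ['cap', 'can', 'cot', 'cut', 'bat', 'rat']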
If you really want to optimise the run-time (and I still say you probably don't need to in any reasonable performance situation) then go through the dictionary once, and run your algorithm on every word.
Create a map from the corrupted words to a list of each of the corresponding correctly spelled words.
I estimate that the 20,000 words, processing at least 30 a second, will take no more than 11 minutes to process.
Store the resulting hash table on disk, and load it into memory when required. Then perform the processing by simply looking up the input words in the hash table and find the corresponding list of correctly-spelled words.
Memory intensive, but super-fast - and if you are worried about the performance of 260 look-ups, you must be dealing with tens of thousands of words, and a solution like this is probably the best you'll get.

String Algorithm Question - Word Beginnings

I have a problem, and I'm not too sure how to solve it without going down the route of inefficiency. Say I have a list of words:
Apple
Ape
Arc
Abraid
Bridge
Braide
Bray
Boolean
What I want to do is process this list and get what each word starts with up to a certain depth, e.g.
a - Apple, Ape, Arc, Abraid
ab - Abraid
ar -Arc
ap - Apple, Ape
b - Bridge, Braide, Bray, Boolean
br - Bridge, Braide, Bray
bo - Boolean
Any ideas?
You can use a Trie structure.
(root)
  |
  a
  +-- p
  |   +-- p -- l -- e          (Apple)
  |   +-- e                    (Ape)
  +-- r
  |   +-- c                    (Arc)
  +-- b -- r -- a -- i -- d    (Abraid)
Just find the node that you want and get all its descendants, e.g., if I want ap-:
(root)
  |
  a
  +-- [p]
  |   +-- p -- l -- e          (Apple)
  |   +-- e                    (Ape)
  +-- r
  |   +-- c                    (Arc)
  +-- b -- r -- a -- i -- d    (Abraid)
Perhaps you're looking for something like:
#!/usr/bin/env python
def match_prefix(pfx, seq):
    '''return subset of seq that starts with pfx'''
    results = list()
    for i in seq:
        if i.startswith(pfx):
            results.append(i)
    return results

def extract_prefixes(lngth, seq):
    '''return all prefixes in seq of the length specified'''
    results = dict()
    lngth += 1
    for i in seq:
        if i[0:lngth] not in results:
            results[i[0:lngth]] = True
    return sorted(results.keys())

def gen_prefix_indexed_list(depth, seq):
    '''return a dictionary of all words matching each prefix
       up to depth keyed on these prefixes'''
    results = dict()
    for each in range(depth):
        for prefix in extract_prefixes(each, seq):
            results[prefix] = match_prefix(prefix, seq)
    return results

if __name__ == '__main__':
    words = '''Apple Ape Arc Abraid Bridge Braide Bray Boolean'''.split()
    test = gen_prefix_indexed_list(2, words)
    for each in sorted(test.keys()):
        print "%s:\t\t" % each,
        print ' '.join(test[each])
That is you want to generate all the prefixes that are present in a list of words between one and some number you'll specify (2 in this example). Then you want to produce an index of all words matching each of these prefixes.
I'm sure there are more elegant ways to do this. For a quick and easily explained approach I've just built this from a simple bottom-up functional decomposition of the apparent spec. The end-result values are lists, each matching a given prefix, so we start with the function to filter out such matches from our inputs. The end-result keys are all prefixes between 1 and some N that appear in our input, so we need a function to extract those. Then our spec is an extremely straightforward nested loop around that.
Of course this nested loop might be a problem. Such things usually equate to O(n^2) efficiency. As shown, this will iterate over the original list C * N * N times (C is the constant number representing the prefixes of length 1, 2, etc., while N is the length of the list).
If this decomposition provides the desired semantics then we can look at improving the efficiency. The obvious approach would be to lazily generate the dictionary keys as we iterate once over the list ... for each word, for each prefix length, generate the key ... append this word to the list/value stored at that key ... and continue to the next word.
There's still a nested loop ... but it's the short loop for each key/prefix length. That alternative design has the advantage of allowing us to iterate over lists of words from any iterable, not just an in memory list. So we could iterate over lines of a file, results generated from a database query, etc --- without incurring the memory overhead of keeping the entire original word list in memory.
Of course we're still storing the dictionary in memory. However we can also change that, decouple the logic from the input and storage. When we append each input to the various prefix/key values we don't care if they're lists in a dictionary, or lines in a set of files, or values being pulled out of (and pushed back into) a DBM or other key/value store (for example some sort of CouchDB or other "noSQL clustered/database."
The implementation of that is left as an exercise to the reader.
I don't know what you are thinking about when you say "route of inefficiency", but a pretty obvious solution (possibly the one you are thinking about) comes to mind. A trie looks like a structure for this kind of problem, but it's costly in terms of memory (there is a lot of duplication) and I'm not sure it makes things faster in your case. Maybe the memory usage would pay off if the information were to be retrieved many times, but your answer suggests you want to generate the output file once and store it. So in your case the trie would be generated just to be traversed once. I don't think it makes sense.
My suggestion is to just sort the list of words in lexical order and then traverse the list in order, as many times as the maximum prefix length.
create a dictionary with keys being strings and values being lists of strings
for (i = 1 to maxBeginningLength)
{
    for (every word in your sorted list)
    {
        if (the word's length is no less than i)
        {
            add the word to the list in the dictionary at the key
            being the beginning of the word of length i
        }
    }
}
store the contents of the dictionary to the file
Using this PHP trie implementation will get you about 50% there. It's got some stuff you don't need and it doesn't have a "search by prefix" method, but you can write one yourself easily enough.
$trie = new Trie();
$trie->add('Apple', 'Apple');
$trie->add('Ape', 'Ape');
$trie->add('Arc', 'Arc');
$trie->add('Abraid', 'Abraid');
$trie->add('Bridge', 'Bridge');
$trie->add('Braide', 'Braide');
$trie->add('Bray', 'Bray');
$trie->add('Boolean', 'Boolean');
It builds up a structure like this:
Trie Object
(
    [A] => Trie Object
        (
            [p] => Trie Object
                (
                    [ple] => Trie Object
                    [e] => Trie Object
                )
            [rc] => Trie Object
            [braid] => Trie Object
        )
    [B] => Trie Object
        (
            [r] => Trie Object
                (
                    [idge] => Trie Object
                    [a] => Trie Object
                        (
                            [ide] => Trie Object
                            [y] => Trie Object
                        )
                )
            [oolean] => Trie Object
        )
)
If the words were in a Database (Access, SQL), and you wanted to retrieve all words starting with 'br', you could use:
Table Name: mytable
Field Name: mywords
"Select * from mytable where mywords like 'br*'" - For Access - or
"Select * from mytable where mywords like 'br%'" - For SQL

How to split a string into words. Ex: "stringintowords" -> "String Into Words"?

What is the right way to split a string into words ?
(string doesn't contain any spaces or punctuation marks)
For example: "stringintowords" -> "String Into Words"
Could you please advise what algorithm should be used here ?
Update: For those who think this question is just out of curiosity: this algorithm could be used to camelcase domain names ("sportandfishing .com" -> "SportAndFishing .com"), and this algorithm is currently used by aboutus dot org to do this conversion dynamically.
Let's assume that you have a function isWord(w), which checks if w is a word using a dictionary. Let's for simplicity also assume for now that you only want to know whether for some word w such a splitting is possible. This can be easily done with dynamic programming.
Let S[1..length(w)] be a table with Boolean entries. S[i] is true if the word w[1..i] can be split. Then set S[1] = isWord(w[1]) and for i=2 to length(w) calculate
S[i] = isWord(w[1..i]) or (S[j-1] and isWord(w[j..i]) for some j in {2..i}).
This takes O(length(w)^2) time, if dictionary queries are constant time. To actually find the splitting, just store the winning split in each S[i] that is set to true. This can also be adapted to enumerate all solution by storing all such splits.
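A direct bottom-up rendering of that table in Python, using 0-based indices and also reconstructing one valid split (the tiny dictionary is purely illustrative):

def segment(w, is_word):
    n = len(w)
    can = [False] * (n + 1)        # can[i]: w[:i] can be split into words
    split_at = [None] * (n + 1)    # split_at[i]: a j with can[j] and w[j:i] a word
    can[0] = True
    for i in range(1, n + 1):
        for j in range(i):
            if can[j] and is_word(w[j:i]):
                can[i], split_at[i] = True, j
                break
    if not can[n]:
        return None
    words, i = [], n
    while i > 0:
        j = split_at[i]
        words.append(w[j:i])
        i = j
    return ' '.join(reversed(words))

dictionary = {'string', 'into', 'in', 'to', 'words'}
print(segment('stringintowords', dictionary.__contains__))    # string into words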
As mentioned by many people here, this is a standard, easy dynamic programming problem: the best solution is given by Falk Hüffner. Additional info though:
(a) you should consider implementing isWord with a trie, which will save you a lot of time if you use it properly (that is, by incrementally testing for words).
(b) typing "segmentation dynamic programming" yields scores of more detailed answers, from university-level lectures with pseudo-code algorithms, such as this lecture at Duke (which even goes so far as to provide a simple probabilistic approach to deal with what to do when you have words that won't be contained in any dictionary).
There should be a fair bit in the academic literature on this. The key words you want to search for are word segmentation. This paper looks promising, for example.
In general, you'll probably want to learn about markov models and the viterbi algorithm. The latter is a dynamic programming algorithm that may allow you to find plausible segmentations for a string without exhaustively testing every possible segmentation. The essential insight here is that if you have n possible segmentations for the first m characters, and you only want to find the most likely segmentation, you don't need to evaluate every one of these against subsequent characters - you only need to continue evaluating the most likely one.
If you want to ensure that you get this right, you'll have to use a dictionary based approach and it'll be horrendously inefficient. You'll also have to expect to receive multiple results from your algorithm.
For example: windowsteamblog (of http://windowsteamblog.com/ fame)
windows team blog
window steam blog
Consider the sheer number of possible splittings for a given string. If you have n characters in the string, there are n-1 possible places to split. For example, for the string cat, you can split before the a and you can split before the t. This results in 4 possible splittings.
You could look at this problem as choosing where you need to split the string. You also need to choose how many splits there will be. So there are Sum(i = 0 to n - 1, n - 1 choose i) possible splittings. By the Binomial Coefficient Theorem, with x and y both being 1, this is equal to pow(2, n-1).
Granted, a lot of this computation rests on common subproblems, so Dynamic Programming might speed up your algorithm. Off the top of my head, computing a boolean matrix M such that M[i,j] is true if and only if the substring of your given string from i to j is a word would help out quite a bit. You still have an exponential number of possible segmentations, but you would quickly be able to eliminate a segmentation if an early split did not form a word. A solution would then be a sequence of integers (i0, j0, i1, j1, ...) with the condition that j_k = i_(k+1).
If your goal is correctly camel case URL's, I would sidestep the problem and go for something a little more direct: Get the homepage for the URL, remove any spaces and capitalization from the source HTML, and search for your string. If there is a match, find that section in the original HTML and return it. You'd need an array of NumSpaces that declares how much whitespace occurs in the original string like so:
Needle: isashort
Haystack: This is a short phrase
Preprocessed: thisisashortphrase
NumSpaces : 000011233333444444
And your answer would come from:
location = prepocessed.Search(Needle)
locationInOriginal = location + NumSpaces[location]
originalLength = Needle.length() + NumSpaces[location + needle.length()] - NumSpaces[location]
Haystack.substring(locationInOriginal, originalLength)
Of course, this would break if madduckets.com did not have "Mad Duckets" somewhere on the home page. Alas, that is the price you pay for avoiding an exponential problem.
This can be actually done (to a certain degree) without dictionary. Essentially, this is an unsupervised word segmentation problem. You need to collect a large list of domain names, apply an unsupervised segmentation learning algorithm (e.g. Morfessor) and apply the learned model for new domain names. I'm not sure how well it would work, though (but it would be interesting).
This is basically a variation of a knapsack problem, so what you need is a comprehensive list of words and any of the solutions covered in Wiki.
With fairly-sized dictionary this is going to be insanely resource-intensive and lengthy operation, and you cannot even be sure that this problem will be solved.
Create a list of possible words, sort it from long words to short words.
Check each entry in the list against the first part of the string. If it matches, remove that part and append the word to your sentence with a space. Repeat this.
A simple Java solution which has O(n^2) running time.
public class Solution {
    // should contain the list of all words, or you can use any other data structure (e.g. a Trie)
    private HashSet<String> dictionary;

    public String parse(String s) {
        return parse(s, new HashMap<String, String>());
    }

    public String parse(String s, HashMap<String, String> map) {
        if (map.containsKey(s)) {
            return map.get(s);
        }
        if (dictionary.contains(s)) {
            return s;
        }
        for (int left = 1; left < s.length(); left++) {
            String leftSub = s.substring(0, left);
            if (!dictionary.contains(leftSub)) {
                continue;
            }
            String rightSub = s.substring(left);
            String rightParsed = parse(rightSub, map);
            if (rightParsed != null) {
                String parsed = leftSub + " " + rightParsed;
                map.put(s, parsed);
                return parsed;
            }
        }
        map.put(s, null);
        return null;
    }
}
I was looking at the problem and thought maybe I could share how I did it.
It's a little too hard to explain my algorithm in words so maybe I could share my optimized solution in pseudocode:
string mainword = "stringintowords";
array substrings = get_all_substrings(mainword);

/** this way, one does not check the dictionary for word validity
 *  on every substring; it is only queried once and for all,
 *  eliminating multiple trips to the data storage
 */
string query = "select word from dictionary where word in " + substrings;
array validwords = execute(query).getArray();

validwords = validwords.sort(length, desc);

array segments = [];
while (mainword != "") {
    for (x = 0; x < validwords.length; x++) {
        if (mainword.startswith(validwords[x])) {
            segments.push(validwords[x]);
            mainword = mainword.remove(validwords[x]);
            x = 0;
        }
    }
    /**
     * remove the first character if none of the valid words match, then start again
     * you may need to add the first character to the result if you want to
     */
    mainword = mainword.substring(1);
}
string result = segments.join(" ");

How to find all brotherhood strings?

I have a string, and another text file which contains a list of strings.
We call 2 strings "brotherhood strings" when they're exactly the same after sorting alphabetically.
For example, "abc" and "cba" will be sorted into "abc" and "abc", so the original two are brotherhood. But "abc" and "aaa" are not.
So, is there an efficient way to pick out all brotherhood strings from the text file, according to the one string provided?
For example, we have "abc" and a text file which writes like this:
abc
cba
acb
lalala
then "abc", "cba", "acb" are the answers.
Of course, "sort & compare" is a nice try, but by "efficient", i mean if there is a way, we can determine a candidate string is or not brotherhood of the original one after one pass processing.
This is the most efficient way, i think. After all, you can not tell out the answer without even reading candidate strings. For sorting, most of the time, we need to do more than 1 pass to the candidate string. So, hash table might be a good solution, but i've no idea what hash function to choose.
Most efficient algorithm I can think of:
Set up a hash table for the original string. Let each letter be the key, and the number of times the letter appears in the string be the value. Call this hash table inputStringTable
Parse the input string, and each time you see a character, increment the value of the hash entry by one
for each string in the file
create a new hash table. Call this one brotherStringTable.
for each character in the string, add one to a new hash table. If brotherStringTable[character] > inputStringTable[character], this string is not a brother (one character shows up too many times)
once string is parsed, compare each inputStringTable value with the corresponding brotherStringTable value. If one is different, then this string is not a brother string. If all match, then the string is a brother string.
This will be O(nk), where n is the length of the input string (any strings longer than the input string can be discarded immediately) and k is the number of strings in the file. Any sort based algorithm will be O(nk lg n), so in certain cases, this algorithm is faster than a sort based algorithm.
Sorting each string, then comparing it, works out to something like O(N*(k+log S)), where N is the number of strings, k is the search key length, and S is the average string length.
It seems like counting the occurrences of each character might be a possible way to go here (assuming the strings are of a reasonable length). That gives you O(k+N*S). Whether that's actually faster than the sort & compare is obviously going to depend on the values of k, N, and S.
I think that in practice, the cache-thrashing effect of re-writing all the strings in the sorting case will kill performance, compared to any algorithm that doesn't modify the strings...
Iterate, sort, compare. That shouldn't be too hard, right?
Let's assume your alphabet is from 'a' to 'z' and you can index an array based on the characters. Then, for each element in a 26 element array, you store the number of times that letter appears in the input string.
Then you go through the set of strings you're searching, and iterate through the characters in each string. You can decrement the count associated with each letter in (a copy of) the array of counts from the key string.
If you finish your loop through the candidate string without having to stop, and you have seen the same number of characters as there were in the input string, it's a match.
This allows you to skip the sorts in favor of a constant-time array copy and a single iteration through each string.
EDIT: Upon further reflection, this is effectively sorting the characters of the first string using a bucket sort.
I think what will help you is a test of whether two strings are anagrams. Here is how you can do it. I am assuming the string can contain all 256 ASCII characters for now.
#define NUM_ALPHABETS 256
int alphabets[NUM_ALPHABETS];

bool isAnagram(char *src, char *dest) {
    int len1 = strlen(src);
    int len2 = strlen(dest);
    int i;
    if (len1 != len2)
        return false;
    memset(alphabets, 0, sizeof(alphabets));
    for (i = 0; i < len1; i++)
        alphabets[src[i]]++;
    for (i = 0; i < len2; i++) {
        alphabets[dest[i]]--;
        if (alphabets[dest[i]] < 0)
            return false;
    }
    return true;
}
This will run in O(mn) if you have 'm' strings in the file of average length 'n'
Sort your query string
Iterate through the Collection, doing the following:
Sort current string
Compare against query string
If it matches, this is a "brotherhood" match, save it/index/whatever you want
That's pretty much it. If you're doing lots of searching, presorting all of your collection will make the routine a lot faster (at the cost of extra memory). If you are doing this even more, you could pre-sort and save a dictionary (or some hashed collection) based off the first character, etc, to find matches much faster.
It's fairly obvious that each brotherhood string will have the same histogram of letters as the original. It is trivial to construct such a histogram, and fairly efficient to test whether the input string has the same histogram as the test string ( you have to increment or decrement counters for twice the length of the input string ).
The steps would be:
construct histogram of test string ( zero an array int histogram[128] and increment position for each character in test string )
for each input string
for each character in input string c, test whether histogram[c] is zero. If it is, it is a non-match and restore the histogram.
decrement histogram[c]
to restore the histogram, traverse the input string back to its start incrementing rather than decrementing
At most, it requires two increments/decrements of an array for each character in the input.
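A small sketch of that increment/decrement scheme in Python (assuming plain ASCII input, as elsewhere in this thread; all names are mine):

def build_histogram(s):
    hist = [0] * 128
    for ch in s:
        hist[ord(ch)] += 1
    return hist

def is_brother(candidate, hist, target_len):
    if len(candidate) != target_len:
        return False
    for pos, ch in enumerate(candidate):
        if hist[ord(ch)] == 0:               # this letter is exhausted: no match
            for undo in candidate[:pos]:     # walk back and restore the histogram
                hist[ord(undo)] += 1
            return False
        hist[ord(ch)] -= 1
    for ch in candidate:                     # full match: restore for the next candidate
        hist[ord(ch)] += 1
    return True

hist = build_histogram('abc')
for cand in ['abc', 'cba', 'acb', 'aaa', 'lalala']:
    if is_brother(cand, hist, 3):
        print(cand)                          # abc, cba, acb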
The most efficient answer will depend on the contents of the file. Any algorithm we come up with will have complexity proportional to N (number of words in file) and L (average length of the strings) and possibly V (variety in the length of strings)
If this were a real world situation, I would start with KISS and not try to overcomplicate it. Checking the length of the target string is simple but could help avoid lots of nlogn sort operations.
target = sort_characters("target string")
count = 0
foreach (word in inputfile) {
    if target.len == word.len && target == sort_characters(word) {
        count++
    }
}
I would recommend:
for each string in text file :
compare size with "source string" (size of brotherhood strings should be equal)
compare hashes (CRC or default framework hash should be good)
in case of equality, do a finer comparison with the strings sorted.
It's not the fastest algorithm but it will work for any alphabet/encoding.
Here's another method, which works if you have a relatively small set of possible "letters" in the strings, or good support for large integers. Basically consists of writing a position-independent hash function...
Assign a different prime number for each letter:
prime['a']=2;
prime['b']=3;
prime['c']=5;
Write a function that runs through a string, repeatedly multiplying the prime associated with each letter into a running product
long long key(char *string)
{
    long long product = 1;
    while (*string) {
        product *= prime[*string];
        string++;
    }
    return product;
}
This function will return a guaranteed-unique integer for any set of letters, independent of the order that they appear in the string. Once you've got the value for the "key", you can go through the list of strings to match, and perform the same operation.
Time complexity of this is O(N), of course. You can even re-generate the (sorted) search string by factoring the key. The disadvantage, of course, is that the keys do get large pretty quickly if you have a large alphabet.
Here's an implementation. It creates a dict of the letters of the master, and a string version of the same as string comparisons will be done at C++ speed. When creating a dict of the letters in a trial string, it checks against the master dict in order to fail at the first possible moment - if it finds a letter not in the original, or more of that letter than the original, it will fail. You could replace the strings with integer-based hashes (as per one answer regarding base 26) if that proves quicker. Currently the hash for comparison looks like a3c2b1 for abacca.
This should work out O(N log( min(M,K) )) for N strings of length M and a reference string of length K, and requires the minimum number of lookups of the trial string.
master = "abc"
wordset = "def cba accb aepojpaohge abd bac ajghe aegage abc".split()
def dictmaster(str):
charmap = {}
for char in str:
if char not in charmap:
charmap[char]=1
else:
charmap[char] += 1
return charmap
def dicttrial(str,mastermap):
trialmap = {}
for char in str:
if char in mastermap:
# check if this means there are more incidences
# than in the master
if char not in trialmap:
trialmap[char]=1
else:
trialmap[char] += 1
else:
return False
return trialmap
def dicttostring(hash):
if hash==False:
return False
str = ""
for char in hash:
str += char + `hash[char]`
return str
def testtrial(str,master,mastermap,masterhashstring):
if len(master) != len(str):
return False
trialhashstring=dicttostring(dicttrial(str,mastermap))
if (trialhashstring==False) or (trialhashstring != masterhashstring):
return False
else:
return True
mastermap = dictmaster(master)
masterhashstring = dicttostring(mastermap)
for word in wordset:
if testtrial(word,master,mastermap,masterhashstring):
print word+"\n"
