Binary String Search - minimum bin width? - algorithm

I happen to be building the binary search in Python, but the question has more to do with binary search structure in general.
Let's assume I have about one thousand eligible candidates that I am searching through with binary search, using the classic approach of bisecting the sorted dataset and repeating this process to narrow down the eligible set to iterate over. The candidates are just strings of names in first-last format, e.g. "Peter Jackson". I initially sort the set alphabetically and then proceed with bisection using something like this:
def find_name(names, query):
    hi = len(names)
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        midval = names[mid].lower()
        if midval < query.lower():
            lo = mid + 1
        elif midval > query.lower():
            hi = mid
        else:
            return midval
    return None
This code adapted from here: https://stackoverflow.com/a/212413/215608
Here's the thing: the above procedure assumes a single exact match or no result at all. What if the query were merely for "Peter", but there are several Peters with differing last names? In order to return all the Peters, one would have to ensure that the bisected "bins" never got so small as to exclude eligible results. The bisection process would have to cease and cede to something like a regex/regular old string match in order to return all the Peters.
I'm not so much asking how to accomplish this as what this type of search is called... what is a binary search with a delimited criterion for "bin size" called? Something that conditionally bisects the dataset and, once the criterion is fulfilled, falls back to some other form of string matching, so that there can effectively be a trailing wildcard on the query (a search for "Peter" will get "Peter Jackson" and "Peter Edwards").
Hopefully I've been clear what I mean. I realize in the typical DB scenario the names might be separated, this is just intended as a proof of concept.

I've not come across this type of two-stage search before, so don't know whether it has a well-known name. I can, however, propose a method for how it can be carried out.
Let's say you've run the first stage and have found no match.
You can perform the second stage with a pair of binary searches and a special comparator. The binary searches would use the same principle as bisect_left and bisect_right. You won't be able to use those functions directly since you'll need a special comparator, but you can use them as the basis for your implementation.
Now to the comparator. When comparing the list element x against the search key k, the comparator would only use x[:len(k)] and ignore the rest of x. Thus when searching for "Peter", all Peters in the list would compare equal to the key. Consequently, bisect_left() to bisect_right() would give you the range containing all Peters in the list.
All of this can be done using O(log n) comparisons.
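For illustration, a minimal sketch of that idea in Python, assuming names is sorted case-insensitively (the function and variable names here are illustrative, not a standard API):

def prefix_range(names, prefix):
    # Compare only the first len(prefix) characters of each element,
    # so every name starting with the prefix compares equal to the key.
    prefix = prefix.lower()

    def bisect(strict):
        lo, hi = 0, len(names)
        while lo < hi:
            mid = (lo + hi) // 2
            head = names[mid].lower()[:len(prefix)]
            if head < prefix or (strict and head == prefix):
                lo = mid + 1
            else:
                hi = mid
        return lo

    left = bisect(strict=False)    # like bisect_left
    right = bisect(strict=True)    # like bisect_right
    return names[left:right]       # every name starting with the prefix

# prefix_range(sorted(names, key=str.lower), "Peter") -> all the Peters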

In your binary search you either hit an exact match OR an area where the match would be.
So in your case you need to get the upper and lower boundaries (hi and lo, as you call them) for the area that would include the Peters, and return all the intermediate strings.
But if you aim to do something like suggesting the completions of a word, you should look into Tries instead of BSTs.
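If you go the Trie route, a rough sketch of prefix completion over a dict-of-dicts Trie might look like this (all names here are illustrative):

def build_trie(names):
    root = {}
    for name in names:
        node = root
        for ch in name.lower():
            node = node.setdefault(ch, {})
        node.setdefault('$', []).append(name)   # keep the original name at its end node
    return root

def complete(trie, prefix):
    node = trie
    for ch in prefix.lower():
        node = node.get(ch)
        if node is None:
            return []
    # collect every name stored at or below this node
    out, stack = [], [node]
    while stack:
        node = stack.pop()
        out.extend(node.get('$', []))
        stack.extend(child for key, child in node.items() if key != '$')
    return out

# complete(build_trie(["Peter Jackson", "Peter Edwards", "Paula Smith"]), "Peter")
# -> ['Peter Jackson', 'Peter Edwards'] (in some order)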

Related

How to find the correct "word part" records that make up an input "word string", given a word part dataset?

In agglutinative languages, "words" is a fuzzy concept. Turkish, Inuktitut, and many Native American languages (amongst others) are agglutinative. In them, "words" are often/usually composed of a "base" and multiple prefixes/suffixes. So you might have ama-ebi-na-mo-kay-i-mang-na (I just made that up), where ebi is the base, and the rest are affixes. Let's say this means "walking early in the morning when the birds start singing": ama/early ebi/walk na/-ing mo/during kay/bird i/plural mang/sing na/-ing. These words can get quite long, like 30+ "letters".
So I was playing around with creating a "dictionary" for a language like this, but it's not realistic to write definitions or "dictionary entries" as your typical English "words", because there are a possibly infinite number of words! (All combinations of prefixes/bases/suffixes). So instead, I was trying to think maybe you could have just these "word parts" in the database (prefixes/suffixes/bases, which can't stand by themselves actually in the real spoken language, but are clearly distinct in terms of adding meaning). By having a database of word parts, you would then (in theory) query by passing as input a long say 20-character "word", and it would figure out how to break this word down into word parts because of the database (somehow).
That is, it would take amaebinamokayimangna as input, and know that it can be broken down into ama-ebi-na-mo-kay-i-mang-na, and then it simply queries the database for those parts to return whatever metadata is associated with those parts.
What would you need to do to accomplish this, basically, at a high level? Assuming you had a database (SQL or just a text file) containing these affixes and bases, how could you take the input and know that it breaks down into these parts organized in this way? Maybe it turns out there are other parts in the DB which can be arranged like a-ma-e-bina-mo-kay-im-ang-na, which is spelled the exact same way (if you remove the hyphens), so it would likely find that as a result too, and return it as another possible match.
The only way (naive way) I can think of solving this currently, is to break the input string into ngrams like this:
function getNgrams(str, { min = 1, max = 8 } = {}) {
  const ngrams = []
  const points = Array.from(str)
  const n = points.length
  let minSize = min
  while (minSize <= max) {
    for (let i = 0; i < (n - minSize + 1); i++) {
      const ngram = points.slice(i, i + minSize)
      ngrams.push(ngram.join(''))
    }
    minSize++
  }
  return ngrams
}
And it would then check the database if any of those ngrams exist, maybe passing in if this is a prefix (start of word), infix, or suffix (end of word) part. The database parts table would have { id, text, is_start, is_end } sort of thing. But this would be horribly inefficient and probably wouldn't work. It seems really complex how you might go about solving this.
So wondering, how would you solve this? At a high level, what is the main vision you see of how you would tackle this, either in a SQL database or some other approach?
The goal is, save to some persisted area the word parts, and how they are combined (if they are a prefix/infix/suffix), and then take as input a string which could be generated from those parts, and try and figure out what the parts are from the persisted data, and then return those parts in the correct order.
First consider the simplified problem where we have a combination of prefixes only. To be able to split this into prefixes, we would do:
1) Store all the prefixes in a trie.
2) Let's say the input has n characters. Create an array of length n (of numbers, if you need just one possible split, or of sets of numbers, if you need all possible splits). In this array we will store, for each index, from which positions of the input string that index can be reached by adding a prefix from the dictionary.
3) For each substring starting with the 1st character of the input, if it belongs to the trie, mark its end index k as reachable from the 0th position (i.e. there is a path from the 0th position to the k-th position). The trie allows us to do this in O(n).
4) For all i = 2..n, if the i-th character can be reached from the beginning, repeat the previous step for the substrings starting at i, marking their end positions as "can be reached from the (i-1)th position" as appropriate (i.e. there is a path from the (i-1)th position to the ((i-1)+k)th position).
5) At the end, we can traverse these indices backwards, starting at the end of the array. Each time we jump to an index stored in the array, we are skipping a prefix in the dictionary. Each path from the last position to the first position gives us a possible split. Since we performed step 4 only for positions that can be reached from the 0th position, all paths are guaranteed to end up at the 0th position.
Building the array takes O(n^2) time (assuming we have the trie built already). Traversing the array to find all possible splits is O(n*s), where s is the number of possible splits. In any case, we can say if there is a possible split as soon as we have built the array.
The problem with prefixes, suffixes and base words is a slight modification of the above:
Build the "previous" indices for prefixes and "next" for suffixes (possibly starting from the end of the input and tracking the suffixes backwards).
For each base word in the string (all of which we can also find efficiently, in O(n^2), using a trie), see if the starting position can be reached from the left using prefixes, and the end position can be reached from the right using suffixes. If yes, you have a split.
As you can see, the keywords are trie and dynamic programming. The problem of finding only a single split requires O(n^2) time after the tries are built. Tries can be built in O(m) time where m is the total length of added strings.
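To make the prefix-only case concrete, here is a rough sketch of the reachability-array idea with a dict-of-dicts trie (all function and variable names are illustrative):

def build_trie(prefixes):
    root = {}
    for p in prefixes:
        node = root
        for ch in p:
            node = node.setdefault(ch, {})
        node['$'] = True                       # end of a dictionary prefix
    return root

def all_prefix_splits(word, trie):
    n = len(word)
    reach = [[] for _ in range(n + 1)]         # reach[j]: reachable positions i with word[i:j] in the dictionary
    reachable = [False] * (n + 1)
    reachable[0] = True
    for i in range(n):
        if not reachable[i]:
            continue
        node = trie
        for j in range(i, n):                  # walk the trie along word[i:]
            node = node.get(word[j])
            if node is None:
                break
            if '$' in node:                    # word[i:j+1] is a dictionary prefix
                reach[j + 1].append(i)
                reachable[j + 1] = True
    def backtrack(j):                          # walk the reach array backwards
        if j == 0:
            yield []
        for i in reach[j]:
            for rest in backtrack(i):
                yield rest + [word[i:j]]
    return list(backtrack(n))

# all_prefix_splits("amaebina", build_trie({"ama", "ebi", "na", "amaebi"}))
# -> [['amaebi', 'na'], ['ama', 'ebi', 'na']]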

Algorithm to match sequential subset from a list

I am trying to remember the right algorithm to find a subset within a set that matches an element of a list of possible subsets. For example, given the input:
aehfaqptpzzy
and the subset list:
{ happy, sad, indifferent }
we can see that the word "happy" is a match because it is inside the input:
a e [h] f [a] q [p] t [p] z z [y]
I am pretty sure there is a specific algorithm to find all such matches, but I cannot remember what it is called.
UPDATE
The above example is not very good because it has letter repetitions, in fact in my problem both the dictionary entries and the input string are sortable sets. For example,
input: acegimnrqvy
dictionary:
{ cgn,
dfr,
lmr,
mnqv,
eg }
So in this example the algorithm would return cgn, mnqv and eg as matches. Also, I would like to find the best set of complementary matches where "best" means longest. So, in the example above the "best" answer would be "cgn mnqv", eg would not be a match because it conflicts with cgn which is a longer match.
I realize that the problem can be done by brute force scan, but that is undesirable because there could be thousands of entries in the dictionary and thousands of values in the input string. If we are trying to find the best set of matches, computability will become an issue.
You can use the Aho-Corasick algorithm with more than one current state. For each of the input letters, each state either stays (skips the letter) or moves using the appropriate edge. If two or more "actors" meet at the same place, just merge them into one (if you're interested just in presence and not counts).
About the complexity: this could be as slow as the naive O(MN) approach, because there can be up to (size of dictionary) actors. However, in practice, we can make good use of the fact that many words are substrings of others, because there will never be more than (size of the trie) actors, which, compared to the size of the dictionary, tends to be much smaller.
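A simplified sketch of the multi-actor idea, using a plain trie and a deduplicated set of active nodes (it omits the Aho-Corasick failure links the full approach would use; all names are illustrative):

def build_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node['$'] = w                          # remember the word at its end node
    return root

def subsequence_matches(text, words):
    root = build_trie(words)
    active = {id(root): root}                  # current "actors", deduplicated by node
    found = set()
    for ch in text:
        for node in list(active.values()):     # each actor stays, and may also advance
            nxt = node.get(ch)
            if nxt is not None:
                if '$' in nxt:
                    found.add(nxt['$'])
                active[id(nxt)] = nxt
    return found

# subsequence_matches("acegimnrqvy", ["cgn", "dfr", "lmr", "mnqv", "eg"])
# -> {'cgn', 'eg', 'mnqv'}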

How to get a group of names all with the same first letter from an alphabetically sorted list?

I was wondering what the best way is to get a group of names given the first letter. The present application I am working on is in JavaScript, but I had a similar problem in another language some time ago. One idea I have thought of would be to do a binary search for the end of the names for a particular letter and then do another binary search for the beginning. Another idea was to take the ratio of the distance of the given letter from the beginning of the alphabet and apply that ratio to find where to start the search. For example, if the letter was 'e' then I would start about a quarter of the way through the list, and do some kind of search to see how close I am to the letter I need. The program will be working with several hundred names, so I really didn't want to just do a for loop and search the whole thing. Also, I am interested in what kinds of algorithms for this are out there.
Both your approaches have their advantages and disadvantages. Binary search gives exactly O(log(N)) complexity, and your second method will give approximately O(log(N)) with some advantage for uniform distribution of names and possibly disadvantage for another type of distribution. What is better is up to your needs.
One big improvement I can propose is to index character positions while creating the names list. Make a simple hash map with first letters as keys and start positions as values. It will take O(N), but only once, and then you will get the exact position for each letter in constant time. In JavaScript you can do it, for example, while loading the data into the page, when you walk through the list anyway.
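A sketch of that index (in Python, to match the language of the main question; the names are illustrative and the list is assumed to be sorted case-insensitively):

def build_letter_index(names):
    index = {}
    for pos, name in enumerate(names):
        letter = name[0].lower()
        if letter not in index:        # record only the first occurrence
            index[letter] = pos
    return index

def names_starting_with(names, index, letter):
    letter = letter.lower()
    start = index.get(letter)
    if start is None:
        return []
    # the group ends where the next indexed letter begins
    later = [pos for l, pos in index.items() if l > letter]
    end = min(later) if later else len(names)
    return names[start:end]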
I think we could use an approach similar to counting sort. We could create an array of size 26. This array would not be a normal array but an array of pointers to linked lists with the following structure.
struct node
{
    char *ptr;
    struct node *next;
};

struct node *names[26]; // Our array.
Now we would scan the list in O(n) time, and for the first character of each name we could subtract 65 (if the ASCII value of the letter is in the range 65-90). I am subtracting 65 so as to map the letter into the 26-sized array.
At each location we could create a linked list and store the corresponding words in that location.
Now suppose we want to find all names that begin with D: we can go directly to array location 3 (no need to apply a hash function) and then traverse the linked list created there until null is reached.
And I think the space complexity required for hashing would be the same as the above, but hashing would also involve computing the hash function every time we want to insert or search for words beginning with the same letter.
If the plan is to do something with the names (as opposed to just find out how many there are), then it will be necessary to scan the names that fit the criteria of matching the first letter. If so, then it seems that a binary search for the first name in the entire set is the fastest method. The "do something" part would involve scanning the names starting from the location found by the binary search. When a name is read that no longer starts with the given letter, you are done.
If you have an unsorted set of filenames, then I would propose the following algorithm:
1) Create two variables: the currently found first letter (I will call it currentLetter) and the list of filenames which start with this letter (currentFilenames).
2) Initialize currentLetter = null and currentFilenames = [] (an empty list or array).
3) Iterate over the filenames. If the current filename starts with currentLetter, then add it to currentFilenames. If it starts with a letter which comes before currentLetter, then assign that filename's first letter to currentLetter and create a new currentFilenames list consisting only of the current filename.
With such an algorithm you will end up with the letter which comes first in the alphabet and the list of files starting with that letter.
Sample code (I tried to write it in JavaScript, but don't blame me if I got anything wrong):
function GetFirstLetterAndFilenames(allFilenames) {
  var currentLetter = null;
  var currentFilenames = null;
  for (var i = 0; i < allFilenames.length; i++) {
    var thisLetter = allFilenames[i][0];
    if (currentLetter == null || thisLetter < currentLetter) {
      currentLetter = thisLetter;
      currentFilenames = [allFilenames[i]];
    } else if (currentLetter == thisLetter) {
      currentFilenames.push(allFilenames[i]);
    }
  }
  return {lowestLetter: currentLetter, filenames: currentFilenames};
}
Names have a funny way of not distributing themselves evenly over the alphabet, so you're probably not going to win by as much as you'd hope by predicting where to look.
But a really easy way to cut your search down by an average of two steps is as follows: if the letter is from a to m, binary search for the next letter. Then binary search from the beginning of the list only to the position you just found for the next letter. If the letter is from n to z, binary search for it. Then, again, only search the portion of the list after what you just found.
Is this worth saving two steps? Dunno. It's pretty easy to implement, but then again, two steps don't take very long. (Correctly guessing the letter would save you maybe 4 steps at best.)
Another possibility is to have bins for each letter to begin with. It starts out already sorted, and if you have to re-sort, you only have to sort within one letter, not the whole list. The downside is that if you need to manipulate the whole list frequently, you have to glue all the bins together.

Searching strings with . wildcard

I have an array with a lot of strings and want to search for a pattern in it.
The pattern can contain "." wildcards, each of which matches exactly one (arbitrary) character.
For example:
myset = {"bar", "foo", "cya", "test"}
find(myset, "f.o") -> returns true (matches with "foo")
find(myset, "foo.") -> returns false
find(myset, ".e.t") -> returns true (matches with "test")
find(myset, "cya") -> returns true (matches with "cya")
I tried to find a way to implement this algorithm quickly, because myset is actually a very big array, but none of my ideas has satisfactory complexity (for example O(size_of(myset) * length(pattern))).
Edit:
myset is a huge array; the words in it aren't big.
I can do slow preprocessing. But I'll have a lot of find() queries, so I want find() to be as fast as possible.
You could build a suffix tree of the corpus of all possible words in your set (see this link)
Using this data structure your complexity would include a one time cost of O(n) to build the tree, where n is the sum of the lengths of all your words.
Once the tree is built, finding whether a string matches should take just O(n), where n is the length of the string.
If the set is fixed, you could pre-calculate frequencies of a character c being at position p (for as many p values as you consider worth-while), then search through the array once, for each element testing characters at specific positions in an order such that you are most likely to exit early.
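A rough sketch of that idea, assuming the set is fixed (the names below are illustrative):

from collections import Counter

def build_position_frequencies(words, positions=8):
    freq = [Counter() for _ in range(positions)]   # freq[p][c]: how many words have c at position p
    for w in words:
        for p, ch in enumerate(w[:positions]):
            freq[p][ch] += 1
    return freq

def find(words, pattern, freq):
    # test the rarest fixed (position, character) pairs first, for an early exit
    order = sorted((p for p, ch in enumerate(pattern) if ch != '.'),
                   key=lambda p: freq[p][pattern[p]] if p < len(freq) else 0)
    for w in words:
        if len(w) == len(pattern) and all(w[p] == pattern[p] for p in order):
            return True
    return False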
First, divide the corpus into sets per word length. Then your find algorithm can search over the appropriate set, since the input to find() always requires the match to have a specific length, and the algorithm can be designed to work well with all words of the same length.
Next (for each set), create a hash map from a hash of character x position to a list of matching words. It is quite ok to have a large amount of hash collision. You can use delta and run-length encoding to reduce the size of the list of matching words.
To search, pick the appropriate hash map for the find input length, and for each non . character, calculate the hash for that character x position, and AND together the lists of words, to get a much reduced list.
Brute force search through that much smaller list.
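A rough sketch of this scheme, using plain Python sets in place of the delta/run-length encoded lists mentioned above (all names are illustrative):

from collections import defaultdict

def build_index(words):
    # index[length][(position, character)] -> set of words
    index = defaultdict(lambda: defaultdict(set))
    for w in words:
        for pos, ch in enumerate(w):
            index[len(w)][(pos, ch)].add(w)
    return index

def find(index, pattern):
    buckets = index.get(len(pattern))
    if buckets is None:
        return False                  # no word of that length at all
    candidates = None
    for pos, ch in enumerate(pattern):
        if ch == '.':
            continue
        hits = buckets.get((pos, ch), set())
        candidates = hits if candidates is None else candidates & hits
        if not candidates:
            return False
    return True                       # an all-dots pattern matches any word of that length

# idx = build_index({"bar", "foo", "cya", "test"})
# find(idx, "f.o") -> True;  find(idx, "foo.") -> False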
If you are sure that the lengths of the words in your set are not large, you could probably create a table which holds the following:
List of Words which have first Character 'a' , List of Words which have first Character 'b', ..
List of Words which have second Character 'a', List of words which have second Character 'b', ..
and so on.
When you are searching for the word. You can look for the list of words which have the first character same as the search strings' first character. With this refined list, look for the words which have the second character same as the search strings' second character and so on. You can ignore '.' whenever you encounter them.
I understand that building the table may take a large amount of space but the time taken will come down significantly.
For example, if you have myset = {"bar", "foo", "cya", "test"} and you are searching for 'f.o'
The moment you check for list of words starting with f, you eliminate the rest of the set. Just an idea.. Hope it helps.
I had this same question, and I wasn't completely happy with most of the ideas/solutions I found on the internet. I think the "right" way to do this is to use a Directed Acyclic Word Graph. I didn't quite do that, but I added some additional logic to a Trie to get a similar effect.
See my isWord() implementation, analogous to your desired find() interface. It works by recursing down the Trie, branching on wildcard, and then collecting results back into a common set. (See findNodes().)
getMatchingWords() is similar in spirit, except that it returns the set of matching words, instead of just a boolean as to whether or not the query matches anything.
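The poster's implementation isn't reproduced here, but a hedged sketch of the same recursion over a dict-based Trie could look like this (names are illustrative):

def insert(root, word):
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node['$'] = True                      # end-of-word marker

def is_word(node, pattern, i=0):
    if i == len(pattern):
        return '$' in node
    ch = pattern[i]
    if ch == '.':
        # a wildcard branches into every child
        return any(is_word(child, pattern, i + 1)
                   for key, child in node.items() if key != '$')
    return ch in node and is_word(node[ch], pattern, i + 1)

# root = {}
# for w in ("bar", "foo", "cya", "test"): insert(root, w)
# is_word(root, "f.o") -> True;  is_word(root, ".e.t") -> True;  is_word(root, "foo.") -> False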

Finding dictionary words

I have a lot of compound strings that are a combination of two or three English words.
e.g. "Spicejet" is a combination of the words "spice" and "jet"
I need to separate these individual English words from such compound strings. My dictionary is going to consist of around 100000 words.
What would be the most efficient way to separate individual English words from such compound strings?
I'm not sure how much time or frequency you have to do this (is it a one-time operation? daily? weekly?) but you're obviously going to want a quick, weighted dictionary lookup.
You'll also want to have a conflict resolution mechanism, perhaps a side-queue to manually resolve conflicts on tuples that have multiple possible meanings.
I would look into Tries. Using one you can efficiently find (and weight) your prefixes, which are precisely what you will be looking for.
You'll have to build the Tries yourself from a good dictionary source, and weight the nodes on full words to provide yourself a good quality mechanism for reference.
Just brainstorming here, but if you know your dataset consists primarily of duplets or triplets, you could probably get away with multiple Trie lookups, for example looking up 'Spic' and then 'ejet', finding that both results have a low score, and abandoning that split in favour of 'Spice' and 'Jet', where both Trie lookups would yield a good combined result.
Also I would consider utilizing frequency analysis on the most common prefixes up to an arbitrary or dynamic limit, e.g. filtering 'the' or 'un' or 'in' and weighting those accordingly.
Sounds like a fun problem, good luck!
If the aim is to find the "the largest possible break up for the input" as you replied, then the algorithm could be fairly straightforward if you use some graph theory. You take the compound word and make a graph with a vertex before and after every letter. You'll have a vertex for each index in the string and one past the end. Next you find all legal words in your dictionary that are substrings of the compound word. Then, for each legal substring, add an edge with weight 1 to the graph connecting the vertex before the first letter in the substring with the vertex after the last letter in the substring. Finally, use a shortest path algorithm to find the path with fewest edges between the first and the last vertex.
The pseudo code is something like this:
parseWords(compoundWord)
    # Make the graph
    graph = makeGraph()
    N = compoundWord.length
    for index = 0 to N
        graph.addVertex(index)
    # Add an edge for each dictionary word found in the compound word
    for index = 0 to N - 1
        for length = 1 to min(N - index, MAX_WORD_LENGTH)
            potentialWord = compoundWord.substr(index, length)
            if dictionary.isElement(potentialWord)
                graph.addEdge(index, index + length, 1)
    # Now find a list of edges which define the shortest path
    edges = graph.shortestPath(0, N)
    # Change these edges back into words
    result = makeList()
    for e in edges
        # each edge spans the word compoundWord[e.start .. e.stop - 1]
        result.add(compoundWord.substr(e.start, e.stop - e.start))
    return result
I, obviously, haven't tested this pseudo-code, and there may be some off-by-one indexing errors, and there isn't any bug-checking, but the basic idea is there. I did something similar to this in school and it worked pretty well. The edge creation loops are O(M * N), where N is the length of the compound word, and M is the maximum word length in your dictionary or N (whichever is smaller). The shortest path algorithm's runtime will depend on which algorithm you pick. Dijkstra's comes most readily to mind. I think its runtime is O(N^2 * log(N)), since the max edges possible is N^2.
You can use any shortest path algorithm. There are several shortest path algorithms which have their various strengths and weaknesses, but I'm guessing that for your case the difference will not be too significant. If, instead of trying to find the fewest possible words to break up the compound, you wanted to find the most possible, then you give the edges negative weights and try to find the shortest path with an algorithm that allows negative weights.
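Since every edge has weight 1, a plain BFS already finds the fewest-edges path; here is a runnable sketch of the same idea (the dictionary is just a set here, and all names are illustrative):

from collections import deque

def split_fewest_words(compound, dictionary, max_word_length=20):
    n = len(compound)
    prev = [None] * (n + 1)            # prev[j] = i means compound[i:j] is a dictionary word
    seen = [False] * (n + 1)
    seen[0] = True
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == n:
            break
        for j in range(i + 1, min(n, i + max_word_length) + 1):
            if not seen[j] and compound[i:j] in dictionary:
                seen[j] = True
                prev[j] = i
                queue.append(j)
    if not seen[n]:
        return None                    # no split into dictionary words exists
    parts, j = [], n
    while j > 0:
        parts.append(compound[prev[j]:j])
        j = prev[j]
    return parts[::-1]

# split_fewest_words("spicejet", {"spice", "jet", "spic", "e"}) -> ['spice', 'jet']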
And how will you decide how to divide things? Look around the web and you'll find examples of URLs that turned out to have other meanings.
Assuming you didn't have the capitals to go on, what would you do with these (Ones that come to mind at present, I know there are more.):
PenIsland
KidsExchange
TherapistFinder
The last one is particularly problematic because the troublesome part is two words run together but is not a compound word, the meaning completely changes when you break it.
So, given a word, is it a compound word, composed of two other English words? You could have some sort of lookup table for all such compound words, but if you just examine the candidates and try to match against English words, you will get false positives.
Edit: it looks as if I am going to have to provide some examples. Words I was thinking of include:
accustomednesses != accustomed + nesses
adulthoods != adult + hoods
agreeabilities != agree + abilities
willingest != will + ingest
windlasses != wind + lasses
withstanding != with + standing
yourselves != yours + elves
zoomorphic != zoom + orphic
ambassadorships != ambassador + ships
allotropes != allot + ropes
Here is some python code to try out to make the point. Get yourself a dictionary on disk and have a go:
def opendict(dictionary=r"g:\words\words(3).txt"):
    with open(dictionary, "r") as f:
        return set(line.strip() for line in f)

if __name__ == '__main__':
    s = opendict()
    for word in sorted(s):
        if len(word) >= 10:
            for i in range(4, len(word) - 4):
                left, right = word[:i], word[i:]
                if (left in s) and (right in s):
                    if right not in ('nesses', ):
                        print(word, left, right)
It sounds to me like you want to store you dictionary in a Trie or a DAWG data structure.
A Trie already stores words as compound words. So "spicejet" would be stored as "spice*jet*", where the * denotes the end of a word. All you'd have to do is look up the compound word in the dictionary and keep track of how many end-of-word terminators you hit. From there you would then have to try each substring (in this example, we don't yet know whether "jet" is a word, so we'd have to look that up).
It occurs to me that there are a relatively small number of substrings (minimum length 2) from any reasonable compound word. For example for "spicejet" I get:
'sp', 'pi', 'ic', 'ce', 'ej', 'je', 'et',
'spi', 'pic', 'ice', 'cej', 'eje', 'jet',
'spic', 'pice', 'icej', 'ceje', 'ejet',
'spice', 'picej', 'iceje', 'cejet',
'spicej', 'piceje', 'icejet',
'spiceje', 'picejet'
... 27 substrings.
So, find a function to generate all those (slide across your string using window lengths of 2, 3, 4, ... (len(yourstring) - 1)), and then simply check each of those in a set or hash table.
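For example, a small sketch of that enumeration:

def substrings(s, min_len=2):
    # every substring of length min_len up to len(s) - 1
    return {s[i:i + length]
            for length in range(min_len, len(s))
            for i in range(len(s) - length + 1)}

# len(substrings("spicejet")) -> 27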
A similar question was asked recently: Word-separating algorithm. If you wanted to limit the number of splits, you would keep track of the number of splits in each of the tuples (so instead of a pair, a triple).
Word existence could be done with a trie, or more simply with a set (i.e. a hash table). Given a suitable function, you could do:
# python-ish pseudocode
def splitword(word):
    # word is a string of n characters, indexed from 0..n-1
    n = len(word)
    for i in range(1, n + 1):
        head = word[:i]  # first i characters
        tail = word[i:]  # everything else
        if is_word(head):
            if i == n:
                return [head]  # this was the only valid word; return it as a 1-element list
            rest = splitword(tail)
            if rest != []:  # check whether we successfully split the tail into words
                return [head] + rest
    return []  # No successful split found, and 'word' is not a word.
Basically, just try the different break points to see if we can make words. The recursion means it will backtrack until a successful split is found.
Of course, this may not find the splits you want. You could modify this to return all possible splits (instead of merely the first found), then do some kind of weighted sum, perhaps, to prefer common words over uncommon words.
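A hedged sketch of that variant, where is_word is the same kind of membership test used above (here passed in as a parameter) and word_freq is an assumed word-frequency table:

def all_splits(word, is_word):
    # every way of splitting `word` entirely into dictionary words
    if word == "":
        return [[]]
    splits = []
    for i in range(1, len(word) + 1):
        head, tail = word[:i], word[i:]
        if is_word(head):
            splits.extend([head] + rest for rest in all_splits(tail, is_word))
    return splits

def best_split(word, is_word, word_freq):
    # prefer splits made of common words; break ties in favour of fewer words
    return max(all_splits(word, is_word),
               key=lambda s: (sum(word_freq.get(w, 0) for w in s), -len(s)),
               default=None)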
This can be a very difficult problem and there is no simple general solution (there may be heuristics that work for small subsets).
We face exactly this problem in chemistry where names are composed by concatenation of morphemes. An example is:
ethylmethylketone
where the morphemes are:
ethyl methyl and ketone
We tackle this through automata and maximum entropy and the code is available on Sourceforge
http://www.sf.net/projects/oscar3-chem
but be warned that it will take some work.
We sometimes encounter ambiguity and are still finding a good way of reporting it.
To distinguish between penIsland and penisLand would require domain-specific heuristics. The likely interpretation will depend on the corpus being used - no linguistic problem is independent from the domain or domains being analysed.
As another example the string
weeknight
can be parsed as
wee knight
or
week night
Both are "right" in that they obey the form "adj-noun" or "noun-noun". Both make "sense" and which is chosen will depend on the domain of usage. In a fantasy game the first is more probable and in commerce the latter. If you have problems of this sort then it will be useful to have a corpus of agreed usage which has been annotated by experts (technically a "Gold Standard" in Natural Language Processing).
I would use the following algorithm.
1) Start with the sorted list of words to split, and a sorted list of declined words (the dictionary).
2) Create a result list of objects, each of which stores a remaining word and a list of matched words.
3) Fill the result list with the words to split as the remaining words.
4) Walk through the result array and the dictionary concurrently, always advancing the lesser of the two, in a manner similar to the merge algorithm. In this way you can compare all the possible matching pairs in one pass.
5) Any time you find a match, i.e. a word to split that starts with a dictionary word, replace it in the result list with the matching dictionary word and the remaining part. You have to take into account possible multiples.
6) Any time the remaining part is empty, you have found a final result.
7) Any time you don't find a match on the "left side", in other words, every time you advance the result pointer because of no match, delete the corresponding result item. This word has no matches and can't be split.
8) Once you get to the bottom of the lists, you will have a list of partial results. Repeat the loop until this is empty (go back to step 4).
