I am building a small search engine. One of its features is an attempt to correct spelling if nothing is found. I replace the following phonetic sequences: ph <-> f, ee <-> i, oo <-> u, ou <-> o (colour <-> color). Where can I find a full list of substitutions like that for English?
Thank you.
You might want to start with the Wikipedia article on Soundex and then trace through the "see also" links. (Metaphone has a list of replacements, for example.)
If you're creating a search engine, you have to realize that there are plenty of web pages containing incorrectly spelled words, and of course you need some strategy to make those pages searchable too. So there is no general rule for implementing a spelling corrector (correctness becomes a relative concept on the web), but there are some tricks for doing it in practice :-)
I'd suggest using an n-gram index plus Levenshtein distance (or any similar edit distance) to correct spelling.
Strings with a small Levenshtein distance are presumably variations of the same word.
Assume you want to correct the word "fantoma". If you have a large number of words, it would be very costly to iterate through the whole dictionary and calculate the distance to each word. So you have to find words with a presumably small distance to "fantoma" very quickly.
The main idea is, while crawling and indexing web pages, to index n-grams (for example, bigrams) into a separate index. Split each word into n-grams and add them to the n-gram index:
1) Split each word from the dictionary into n-grams,
for example: "phantom" -> ["ph", "ha", "an", "nt", "to", "om"]
2) Create index:
...
"ph" -> [ "phantom", "pharmacy", "phenol", ... ]
"ha" -> [ "phantom", "happy" ... ]
"an" -> [ "phantom", "anatomy", ... ]
...
Now you have an index, and you can quickly find candidates for your words.
For example:
1) "fantoma" -> ["fa", "an", "nt", "to", "om", "ma"]
2) get the lists of words for each n-gram from the index,
and extract the most frequent words from these lists - these words are the candidates
3) calculate the Levenshtein distance to each candidate;
the word with the smallest distance is probably the spell-corrected variant of the searched word.
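The whole pipeline above can be sketched in a few lines of Python. This is a toy illustration: the dictionary is hand-made, and a real system would build the bigram index while crawling.

```python
from collections import Counter

def bigrams(word):
    # Split a word into overlapping 2-character n-grams.
    return [word[i:i + 2] for i in range(len(word) - 1)]

def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one rolling row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# 1) Build the bigram index from a (toy) dictionary.
dictionary = ["phantom", "pharmacy", "phenol", "happy", "anatomy"]
index = {}
for word in dictionary:
    for gram in bigrams(word):
        index.setdefault(gram, set()).add(word)

# 2) Collect candidates: words sharing the most bigrams with the query.
query = "fantoma"
counts = Counter()
for gram in bigrams(query):
    for word in index.get(gram, ()):
        counts[word] += 1
candidates = [w for w, _ in counts.most_common(3)]

# 3) Pick the candidate with the smallest edit distance.
best = min(candidates, key=lambda w: levenshtein(query, w))
print(best)  # -> phantom
```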
I'd also suggest looking through the book "Introduction to Information Retrieval".
Related
I have a dictionary that maps words to ids, like:
at: 0
hello: 1
school: 2
fortune:3
high:4
we: 5
eat: 6
....
high_school: 17
fortune_cookie: 18
....
Then I have a document. What is the fastest and most efficient way to convert the content of the document to ids?
Eg:
"At high school, we eat fortune cookie."
=> "0 17, 5 6 18"
Hope to see your suggestions.
Thanks for reading.
It really depends on how large your document is, whether your keyword list is static, and whether you need to find multi-word phrases. The naive way to do it is to look up every word from the document in the dictionary. Because dictionary lookups are O(1), looking up every word will take O(n) time, where n is the number of words in the document. If you need to find multi-word phrases, you can post-process the output to find those.
That's not the most efficient way to do things, but it's really easy to implement, reasonably fast, and will work very well if your documents aren't huge.
If you have very large documents, then you probably want something like the Aho-Corasick string matching algorithm. That algorithm works in two stages. First it builds a trie from the words in your dictionary, and then it makes a single pass through the document and outputs all of the matches. It's more complicated to implement than the naive method, but it works very well once the trie is built. And, truth to tell, it's not that hard to implement. The original paper, which is linked from the Wikipedia article, explains the algorithm well and it's not difficult to convert their pseudocode into a working program.
Note, however, that you might get some unexpected results. For example, if your dictionary contains the words "high" and "school" as well as the two-word phrase "high school", the Aho-Corasick algorithm will give you matches for all three when it sees the phrase "high school".
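For small documents, the naive approach with a greedy pass for two-word phrases can be sketched like this, reusing the id mapping from the question (preferring the phrase id over the two single-word ids, unlike Aho-Corasick, which would report all three):

```python
import re

# Toy dictionary from the question; multi-word phrases are joined with '_'.
word_ids = {"at": 0, "hello": 1, "school": 2, "fortune": 3, "high": 4,
            "we": 5, "eat": 6, "high_school": 17, "fortune_cookie": 18}

def to_ids(document):
    tokens = re.findall(r"[a-z]+", document.lower())
    ids = []
    i = 0
    while i < len(tokens):
        # Greedily prefer a two-word phrase over two single words.
        pair = tokens[i] + "_" + tokens[i + 1] if i + 1 < len(tokens) else None
        if pair in word_ids:
            ids.append(word_ids[pair])
            i += 2
        elif tokens[i] in word_ids:
            ids.append(word_ids[tokens[i]])
            i += 1
        else:
            i += 1  # unknown word: skip (or map to a special id)
    return ids

print(to_ids("At high school, we eat fortune cookie."))  # -> [0, 17, 5, 6, 18]
```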
You can try a trie data structure, or a red-black tree if the document doesn't have many duplicates. A trie is much less expensive. You can also combine a trie with wildcards: http://phpir.com/tries-and-wildcards
I am building a document classifier to categorize documents.
So the first step is to represent each document as a "feature vector" for training purposes.
After some research, I found that I can use either the Bag of Words approach or the N-gram approach to represent a document as a vector.
The text in each document (scanned PDFs and images) is retrieved using OCR, so some words contain errors. I also have no prior knowledge of the language used in these documents (so I can't use stemming).
So as far as I understand, I have to use the n-gram approach. Or are there other approaches to represent a document?
I would also appreciate if someone could link me to an N-Gram guide in order to have a clearer picture and understand how it works.
Thanks in Advance
Use language detection to get the document's language (my favorite tool is LanguageIdentifier from the Tika project, but many others are available).
Use spell correction (see this question for some details).
Stem words (if you work in Java environment, Lucene is your choice).
Collect all N-grams (see below).
Make instances for classification by extracting n-grams from particular documents.
Build classifier.
N-gram models
N-grams are just sequences of N items. In classification by topic you normally use N-grams of words or their roots (though there are models based on N-grams of chars). The most popular N-grams are unigrams (single words), bigrams (2 consecutive words) and trigrams (3 consecutive words). So, from the sentence
Hello, my name is Frank
you should get the following unigrams:
[hello, my, name, is, frank] (or [hello, I, name, be, frank], if you use roots)
and the following bigrams:
[hello_my, my_name, name_is, is_frank]
and so on.
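The extraction above can be sketched with a sliding window, joining the words with '_' as in the bigram example:

```python
def ngrams(tokens, n):
    # Slide a window of size n over the token list, joining with '_'.
    return ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["hello", "my", "name", "is", "frank"]
print(ngrams(tokens, 1))  # -> ['hello', 'my', 'name', 'is', 'frank']
print(ngrams(tokens, 2))  # -> ['hello_my', 'my_name', 'name_is', 'is_frank']
```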
In the end your feature vector should have as many positions (dimensions) as there are distinct words in all your texts, plus 1 for unknown words. Every position in an instance vector should somehow reflect the count of the corresponding word in the instance text. This may be the number of occurrences, a binary feature (1 if the word occurs, 0 otherwise), a normalized feature, or tf-idf (very popular in classification by topic).
The classification process itself is the same as for any other domain.
I'm designing a cool spell checker (I know I know, modern browsers already have this), anyway, I am wondering what kind of effort would it take to develop a fairly simple but decent suggest-word algorithm.
My idea is that I would first look through the misspelled word's characters and count the number of characters it matches in each word in the dictionary (sounds resource-intensive), and then pick the top 5 matches (so if the misspelled word matches the most characters with 7 words from the dictionary, it will randomly display 5 of those words as the suggested spelling).
Obviously, to get more advanced, we would look at "common words" and have a dictionary file ranked by frequency of each word in the English language. I think that might be taking it a bit overboard.
What do you think? Anyone have ideas for this?
First of all you will have to consider the complexity of finding the "nearest" words to the misspelled word. I see that you are using a dictionary, a hash table perhaps. But this may not be enough. The best and coolest solution here is to go for a trie data structure. Finding these so-called nearest words takes time linear in the word length, and it is very easy to exhaust the tree.
A small example
Take the word "njce". This is a level-1 example where one character is clearly misspelled. The obvious expected suggestion would be "nice". The first step is to check whether this word is present in the dictionary. Using the search function of a trie, this can be done in time linear in the word length, similar to a hash lookup. The cooler part is finding the suggestions. You would have to try all the words that replace the first character with 'a' to 'z': ajce, bjce, cjce, up to zjce. Finding occurrences of this type is again linear in the character count. You should not get carried away by multiplying this number by 26 times the length of the word, since the trie's branching diminishes quickly as the depth grows. Coming back to the problem: once that search is done and no result is found, you move to the next character. Now you would be searching for nace, nbce, ncce, up to nzce. In fact you won't have to explore all the combinations, as the trie itself will not contain most of the intermediate characters. Perhaps it will only have na, ne, ni, no, nu branches, and the search space becomes insanely simple. The same holds for the further positions. You could develop this concept further for second- and third-order matches. Hope this helped.
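A minimal trie sketch of this idea: the search substitutes up to a fixed budget of characters as it walks the tree, pruning branches that would exceed the budget. Insertions and deletions are left out for brevity, so only same-length misspellings are handled.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_word = True

def suggest(node, word, budget, prefix="", out=None):
    # Depth-first search that may substitute up to `budget` characters.
    if out is None:
        out = set()
    if not word:
        if node.is_word:
            out.add(prefix)
        return out
    for ch, child in node.children.items():
        cost = 0 if ch == word[0] else 1
        if cost <= budget:
            suggest(child, word[1:], budget - cost, prefix + ch, out)
    return out

root = TrieNode()
for w in ["nice", "nose", "ice", "mice"]:
    insert(root, w)

print(suggest(root, "njce", budget=1))  # -> {'nice'}
```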
I'm not sure how much of the wheel you're trying to reinvent, so you may want to check out Lucene.
Apache Lucene Core™ (formerly named Lucene Java), our flagship sub-project, provides a Java-based indexing and search implementation, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities.
We have a list of about 150,000 words, and when the user enters a free text, the system should present a list of words from the dictionary, that are very close to words in the free text.
For instance, the user enters: "I would like to buy legoe toys in Walmart". If the dictionary contains "Lego", "Car" and "Walmart", the system should present "Lego" and "Walmart" in the list. "Walmart" is obvious because it is identical to a word in the sentence, but "Lego" is similar enough to "Legoe" to be mentioned, too. However, nothing is similar to "Car", so that word is not shown.
Showing the list should be realtime, meaning that when the user has entered the sentence, the list of words must be present on the screen. Does anybody know a good algorithm for this?
The dictionary actually contains concepts which may include a space. For instance, "Lego spaceship". The perfect solution recognizes these multi-word concepts, too.
Any suggestions are appreciated.
Take a look at http://norvig.com/spell-correct.html for a simple algorithm. The article uses Python, but there are links to implementations in other languages at the end.
You will be doing quite a few lookups of words against a fixed dictionary. Therefore you need to prepare your dictionary. Logically, you can quickly eliminate candidates that are "just too different".
For instance, the words car and dissimilar may share a suffix, but they're obviously not misspellings of each other. Now why is that so obvious to us humans? For starters, the length is entirely different. That's an immediate disqualification (but with one exception - below). So, your dictionary should be sorted by word length. Match your input word with words of similar length. For short words that means +/- 1 character; longer words should have a higher margin (exactly how well can your demographic spell?)
Once you've restricted yourself to candidate words of similar length, you'll want to strip out words that are entirely dissimilar. By this I mean words that use entirely different letters. This is easiest to compare if you sort the letters in a word alphabetically. E.g. car becomes "acr"; rack becomes "ackr". You'll do this in preprocessing for your dictionary, and for each input word. The reason is that it's cheap to determine the (size of the) difference of two sorted letter sequences. (Add a comment if you need an explanation.) car and rack have a difference of size 1; car and hat have a difference of size 4. This narrows down your set of candidates even further. Note that for longer words, you can bail out early when you've found too many differences. E.g. dissimilar and biography have a total difference of 13, but considering the lengths (10/9) you can probably bail out once you've found 5 differences.
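A sketch of that letter-difference filter, treating each word as a multiset of letters and counting, on both sides, the letters the other word lacks (which matches the dissimilar/biography count of 13 above):

```python
from collections import Counter

def letter_diff(a, b):
    # Size of the symmetric difference of the two letter multisets.
    ca, cb = Counter(a), Counter(b)
    return sum(((ca - cb) + (cb - ca)).values())

print(letter_diff("car", "rack"))            # -> 1
print(letter_diff("dissimilar", "biography"))  # -> 13
```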
This leaves you with a set of candidate words that use almost the same letters, and also are almost the same length. At this point you can start using more refined algorithms; you don't need to run 150.000 comparisons per input word anymore.
Now, for the length exception mentioned before: the problem is with "words" like greencar. It doesn't really match a word of length 8, and yet to humans it's quite obvious what was meant. In this case, you can't really afford to break the input word at every possible boundary and run an additional N-1 inexact matches against both halves. However, it is feasible to check for just a missing space. Just do a lookup for all possible prefixes. This is efficient because you'll be using the same part of the dictionary over and over, e.g. g, gr, gre, gree, etc. For every prefix that you find, check whether the remaining suffix is also in the dictionary, e.g. reencar, eencar. If both halves of the input word are in the dictionary, but the word itself isn't, you can assume a missing space.
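The missing-space check can be sketched with a toy dictionary held in a set; a trie would let you extend each prefix one character at a time instead of re-hashing it, but the idea is the same:

```python
dictionary = {"green", "car", "house", "boat"}

def split_missing_space(word):
    # Try every split point; return the first whose halves are both words.
    for i in range(1, len(word)):
        prefix, suffix = word[:i], word[i:]
        if prefix in dictionary and suffix in dictionary:
            return prefix, suffix
    return None

print(split_missing_space("greencar"))  # -> ('green', 'car')
```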
You would likely want to use an algorithm which calculates the Levenshtein distance.
However, since your data set is quite large, and you'll be comparing lots of words against it, a direct implementation of typical algorithms that do this won't be practical.
In order to find words in a reasonable amount of time, you will have to index your set of words in some way that facilitates fuzzy string matching.
One of these indexing methods would be to use a suffix tree. Another approach would be to use n-grams.
I would lean towards using a suffix tree since I find it easier to wrap my head around it and I find it more suited to the problem.
It might be of interest to look at some algorithms such as the Levenshtein distance, which can calculate the amount of difference between 2 strings.
I'm not sure what language you are thinking of using but PHP has a function called levenshtein that performs this calculation and returns the distance. There's also a function called similar_text that does a similar thing. There's a code example here for the levenshtein function that checks a word against a dictionary of possible words and returns the closest words.
I hope this gives you a bit of insight into how a solution could work!
When entering a question, stackoverflow presents you with a list of questions that it thinks likely to cover the same topic. I have seen similar features on other sites or in other programs, too (Help file systems, for example), but I've never programmed something like this myself. Now I'm curious to know what sort of algorithm one would use for that.
The first approach that comes to my mind is splitting the phrase into words and looking for phrases containing these words. Before you do that, you probably want to throw away insignificant words (like 'the', 'a', 'does', etc.), and then you will want to rank the results.
Hey, wait - let's do that for web pages, and then we can have a ... watchamacallit ... - a "search engine", and then we can sell ads, and then ...
No, seriously, what are the common ways to solve this problem?
One approach is the so called bag-of-words model.
As you guessed, first you count how many times each word appears in the text (usually called a document in NLP lingo). Then you throw out the so-called stop words, such as "the", "a", "or" and so on.
You're left with words and word counts. Do this for a while and you get a comprehensive set of words that appear in your documents. You can then create an index for these words:
"aardvark" is 1, "apple" is 2, ..., "z-index" is 70092.
Now you can take your word bags and turn them into vectors. For example, if your document contains two references for aardvarks and nothing else, it would look like this:
[2 0 0 ... 70k zeroes ... 0].
After this you can compute the "angle" between two vectors using the dot product. The smaller the angle, the closer the documents are.
This is a simple version; there are other, more advanced techniques. May the Wikipedia be with you.
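A small sketch of the whole pipeline: build word bags, drop stop words, then compare two documents with cosine similarity (the dot product divided by the vector lengths). The stop-word list and the sample texts here are made up for illustration.

```python
import math
from collections import Counter

STOP_WORDS = {"the", "a", "an", "or", "is", "of"}

def bag(text):
    # Count words, skipping stop words.
    words = [w for w in text.lower().split() if w not in STOP_WORDS]
    return Counter(words)

def cosine(b1, b2):
    # Dot product over the product of vector lengths = cos(angle).
    dot = sum(b1[w] * b2[w] for w in b1)
    n1 = math.sqrt(sum(c * c for c in b1.values()))
    n2 = math.sqrt(sum(c * c for c in b2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

d1 = bag("the aardvark ate an aardvark")
d2 = bag("an aardvark appeared")
print(round(cosine(d1, d2), 3))  # closer to 1.0 = more similar
```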
@Hanno, you should try the Levenshtein distance algorithm. Given an input string s and a list of strings t, iterate over each string u in t and return the one with the minimum Levenshtein distance.
http://en.wikipedia.org/wiki/Levenshtein_distance
See a Java implementation example in http://www.javalobby.org/java/forums/t15908.html
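That procedure is short enough to sketch directly in Python (dynamic-programming edit distance, then pick the closest string from the list):

```python
def levenshtein(s, t):
    # Dynamic programming over a single rolling row.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def closest(s, words):
    # Return the word with the minimum edit distance to s.
    return min(words, key=lambda u: levenshtein(s, u))

print(closest("kitten", ["mitten", "sitting", "kettle"]))  # -> mitten
```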
To augment the bag-of-words idea:
There are a few ways you can also pay some attention to n-grams, strings of two or more words kept in order. You might want to do this because a search for "space complexity" is much more than a search for things with "space" AND "complexity" in them, since the meaning of this phrase is more than the sum of its parts; that is, if you get a result that talks about the complexity of outer space and the universe, this is probably not what the search for "space complexity" really meant.
A key idea from natural language processing here is that of mutual information, which allows you (algorithmically) to judge whether or not a phrase is really a specific phrase (such as "space complexity") or just words which are coincidentally adjacent. Mathematically, the main idea is to ask, probabilistically, if these words appear next to each other more often than you would guess by their frequencies alone. If you see a phrase with a high mutual information score in your search query (or while indexing), you can get better results by trying to keep these words in sequence.
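A toy sketch of pointwise mutual information over bigrams; the tiny "corpus" here is made up purely for illustration, and the function assumes the bigram actually occurs at least once:

```python
import math
from collections import Counter

tokens = ("space complexity is space and time complexity of "
          "space complexity analysis").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(w1, w2):
    # Log of the observed bigram probability over what independence
    # of the two words would predict; positive = appears together
    # more often than chance.
    p_joint = bigrams[(w1, w2)] / (n - 1)
    p_indep = (unigrams[w1] / n) * (unigrams[w2] / n)
    return math.log2(p_joint / p_indep)

print(pmi("space", "complexity") > 0)  # -> True
```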
From my (rather small) experience developing full-text search engines: I would look up questions which contain some of the words from the query (in your case, the query is your question).
Sure, noise words should be ignored, and we might want to check the query for 'strong' words like 'ASP.Net' to narrow down the search scope.
Inverted indexes (http://en.wikipedia.org/wiki/Index_(search_engine)#Inverted_indices) are commonly used to find questions with the words we are interested in.
After finding questions with words from the query, we might want to calculate the distance between the words of interest within each question, so a question with the text 'phrases similarity' ranks higher than one with the text 'discussing similarity, you hear the following phrases...'.
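The inverted-index lookup can be sketched like this; the question texts are made up for illustration, and ranking is simply by the number of query words each question contains:

```python
from collections import defaultdict

questions = [
    "how to measure phrase similarity",
    "discussing similarity you hear following phrases",
    "unrelated question about databases",
]

# Build an inverted index: word -> set of question ids.
index = defaultdict(set)
for qid, text in enumerate(questions):
    for word in text.split():
        index[word].add(qid)

query = "phrase similarity"
# Rank questions by how many query words they contain.
scores = defaultdict(int)
for word in query.split():
    for qid in index.get(word, ()):
        scores[qid] += 1

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # -> [0, 1]
```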
Here is a bag-of-words solution with TfidfVectorizer in Python 3 (train_x is assumed to be a list of training sentences and train_y the matching labels):
# from sklearn.feature_extraction.text import CountVectorizer  # plain-count alternative
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.corpus import stopwords
import nltk
nltk.download('stopwords')
s = set(stopwords.words('english'))
# Remove stop words from each training sentence
train_x_cleaned = []
for i in train_x:
    sentence = filter(lambda w: w not in s, i.split())
    train_x_cleaned.append(' '.join(sentence))
vectorizer = TfidfVectorizer(binary=True)
train_x_vectors = vectorizer.fit_transform(train_x_cleaned)
print(vectorizer.get_feature_names_out())
print(train_x_vectors.toarray())
# Train a linear SVM on the tf-idf vectors
from sklearn import svm
clf_svm = svm.SVC(kernel='linear')
clf_svm.fit(train_x_vectors, train_y)
# Vectorize unseen phrases with the same vocabulary, then predict
test_x = vectorizer.transform(["test phrase 1", "test phrase 2", "test phrase 3"])
print(type(test_x))
clf_svm.predict(test_x)