Predicting phrases instead of just next word - algorithm

For an application that we built, we are using a simple statistical model for word prediction (like Google Autocomplete) to guide search.
It uses n-gram statistics gathered from a large corpus of relevant text documents. By considering the previous N-1 words, it suggests the 5 most likely "next words" in descending order of probability, using Katz back-off.
We would like to extend this to predict phrases (multiple words) instead of a single word. However, when we are predicting a phrase, we would prefer not to display its prefixes.
For example, consider the input the cat.
In this case we would like to make predictions like the cat in the hat, but not the cat in or the cat in the.
Assumptions:
We do not have access to past search statistics
We do not have tagged text data (for instance, we do not know the parts of speech)
What is a typical way to make these kinds of multi-word predictions? We've tried multiplicative and additive weighting of longer phrases, but our weights are arbitrary and overfit to our tests.

For this question, you need to define what it is you consider to be a valid completion -- then it should be possible to come up with a solution.
In the example you've given, "the cat in the hat" is much better than "the cat in the". I could interpret this as "it should end with a noun" or "it shouldn't end with overly common words".
You've restricted the use of "tagged text data", but you could use a pretrained model (e.g. NLTK, spaCy, StanfordNLP) to guess the parts of speech and make an attempt to restrict predictions to complete noun phrases (or sequences ending in a noun). Note that you would not necessarily need to tag all documents fed into the model, only the phrases you're keeping in your autocomplete db.
Alternately, you could avoid completions that end in stopwords (or very high-frequency words). Both "in" and "the" occur in almost all English documents, so you could experimentally find a frequency cutoff (e.g. can't end in a word that occurs in more than 50% of documents) that helps you filter. You could also look at phrases: if the end of the phrase is drastically more common as a shorter phrase, then it doesn't make sense to tag it on, as the user could come up with it on their own.
Ultimately, you could create a labeled set of good and bad instances and try to create a supervised re-ranker based on word features -- both ideas above could be strong features in a supervised model (document frequency = 2, pos tag = 1). This is typically how search engines with data can do it. Note that you don't need search statistics or users for this, just a willingness to label the top-5 completions for a few hundred queries. Building a formal evaluation (that can be run in an automated manner) would probably help when trying to improve the system in the future. Any time you observe a bad completion, you could add it to the database and do a few labels -- over time, a supervised approach would get better.
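For instance, here is a minimal sketch of the document-frequency filter suggested above; the doc_freq table, the candidate list, and the cutoff are hypothetical placeholders for your own corpus statistics, n-gram output, and tuning:
# Hypothetical fraction of documents containing each word, computed from your corpus.
doc_freq = {"in": 0.92, "the": 0.97, "hat": 0.04, "cat": 0.11}
# Candidate completions produced by the n-gram model for the input "the cat".
candidates = ["the cat in", "the cat in the", "the cat in the hat"]
DF_CUTOFF = 0.5  # tune experimentally on held-out data

def keep(phrase):
    # Reject completions ending in a word that appears in most documents.
    last_word = phrase.split()[-1]
    return doc_freq.get(last_word, 0.0) < DF_CUTOFF

filtered = [p for p in candidates if keep(p)]
print(filtered)  # ['the cat in the hat']
The cutoff is still a parameter you have to pick, but it is a single, interpretable one rather than a set of arbitrary phrase weights.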

Related

Correct way to represent documents containing multiple sentences in gensim file-based training

I am trying to use gensim's file-based training (example from documentation below):
from multiprocessing import cpu_count
import gensim.downloader as api  # needed for api.load below; missing from the snippet as posted
from gensim.utils import save_as_line_sentence
from gensim.test.utils import get_tmpfile
from gensim.models import Word2Vec, Doc2Vec, FastText
# Convert any corpus to the needed format: 1 document per line, words delimited by " "
corpus = api.load("text8")
corpus_fname = get_tmpfile("text8-file-sentence.txt")
save_as_line_sentence(corpus, corpus_fname)
# Choose num of cores that you want to use (let's use all, models scale linearly now!)
num_cores = cpu_count()
# Train models using all cores
w2v_model = Word2Vec(corpus_file=corpus_fname, workers=num_cores)
d2v_model = Doc2Vec(corpus_file=corpus_fname, workers=num_cores)
ft_model = FastText(corpus_file=corpus_fname, workers=num_cores)
However, my actual corpus contains many documents, each containing many sentences.
For example, let's assume my corpus is the plays of Shakespeare - each play is a document, each document has many, many sentences, and I would like to learn embeddings for each play but learn word embeddings only from within the same sentence.
Since the file-based training is meant to be one document per line, I assume that I should put one play per line. However, the documentation for file-based-training doesn't have an example of any documents with multiple sentences.
Is there a way to peek inside the model to see the documents and word context pairs that have been found before they are trained?
What is the correct way to build this file, maintaining sentence boundaries?
Thank you
These algorithm implementations don't have any real understanding of, or dependence on, actual sentences. They just take texts – runs of word-tokens.
Often the texts provided to Word2Vec will be multiple sentences. Sometimes punctuation like sentence-ending periods are even retained as pseudo-words. (And when the sentences were really consecutive with each other in the source data, the overlapping word-context windows, between sentences, may even be a benefit.)
So you don't have to worry about "maintaining sentence boundaries". Any texts you provide that are sensible units of words that really co-occur will work about as well. (Especially in Word2Vec and FastText, even changing your breaks between texts to be sentences, or paragraphs, or sections, or documents is unlikely to have very much effect on the final word-vectors – it's just changing a subset of the training contexts, and probably not in any way that significantly changes which words influence which other words.)
There is, however, another implementation limit in gensim that you should watch out for: each training text can only be 10,000 tokens long, and if you supply larger texts, the extra tokens will be silently ignored.
So, be sure to use texts that are 10k tokens or shorter – even if you have to arbitrarily split longer ones. (Per above, any such arbitrary extra break in the token grouping is unlikely to have a noticeable effect on results.)
However, this presents a special problem when using Doc2Vec in corpus_file mode, because in that mode you don't get to specify your preferred tags for a text. (A text's tag, in this mode, is essentially just the line number.)
In the original sequence corpus mode, the workaround for this 10k token limit was just to break up larger docs into multiple docs - but use the same repeated tags for all sub-documents from an original document. (This very closely approximates how a doc of any size would affect training.)
If you have documents with more than 10k tokens, I'd recommend either not using corpus_file mode, or figuring some way to use logical sub-documents of less than 10k tokens, then perhaps modeling your larger docs as the set of their sub-documents, or otherwise adjusting your downstream tasks to work on the same sub-document units.
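As a rough sketch of that workaround (the plain iterable path, not corpus_file mode), assuming each play is already tokenized, you could split it into chunks of fewer than 10k tokens that all reuse the same tag; the plays dict here is just a placeholder:
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

MAX_TOKENS = 10000  # gensim silently ignores tokens beyond this per text

def to_tagged_docs(plays):
    for name, tokens in plays.items():
        # Every chunk of a play carries the same tag, so they all train one doc-vector.
        for start in range(0, len(tokens), MAX_TOKENS):
            yield TaggedDocument(words=tokens[start:start + MAX_TOKENS], tags=[name])

plays = {"hamlet": ["to", "be", "or", "not", "to", "be"]}  # placeholder corpus
docs = list(to_tagged_docs(plays))
model = Doc2Vec(docs, vector_size=100, min_count=1, workers=4)
Depending on your gensim version, the per-play vector is then available as model.dv["hamlet"] (or model.docvecs["hamlet"] in older releases).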

Minhashing on Strings with K-length

I have an application where I should implement Bloom filters and MinHashing to find similar items.
I have the Bloom filter implemented, but I need to make sure I understand the MinHashing part:
The application generates a number of k-length strings and stores them in a document; all of those are then inserted into the Bloom filter.
Where I want to use MinHash is in giving the user the option to enter a string, and then comparing it against the document to try to find the most similar strings in it.
Do I have to shingle all the strings in the document? The problem is that I can't really find anything to help me with this; everything I find is about comparing two documents, never one string to a set of strings.
So: the user enters a string and the application finds the most similar strings within a single document. By "similarity", do you mean something like Levenshtein distance (whereby "cat" is deemed similar to "rat" and "cart"), or some other measure? And are you (roughly speaking) looking for similar paragraphs, similar sentences, similar phrases or similar words? These are important considerations.
Also, you say you are comparing one string to a set of strings. What are these strings? Sentences? Paragraphs? If you are sure you don't want to find any similarities spanning multiple paragraphs (or multiple sentences, or what-have-you) then it makes sense to think of the document as multiple separate strings; otherwise, you should think of it as a single long string.
The MinHash algorithm is for comparing many documents to each other, when it's impossible to store all the documents in memory simultaneously and individually comparing every document to every other would be an n-squared problem. MinHash overcomes these problems by storing hashes for only some shingles, and it sacrifices some accuracy as a result. You don't need MinHash, as you could simply store every shingle in memory, using, say, 4-character-grams for your shingles. But if you don't expect word orderings to be switched around, you may find the Smith-Waterman algorithm more suitable.
If you're expecting the user to enter long strings of words, you may get better results basing your shingles on words; so 3-word-grams, for instance, ignoring differences in whitespacing, case and punctuation.
Generating 4-character-grams is simple: "The cat sat on the mat" would yield "The ", "he c", "e ca", " cat", etc. Each of these would be stored in memory, along with the paragraph number it appeared in. When the user enters a search string, that would be shingled in identical manner, and the paragraphs containing the greatest number of shared shingles can be retrieved. For efficiency of comparison, rather than storing the shingles as strings, you can store them as hashes using FNV1a or a similar cheap hash.
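A rough sketch of that character-shingle approach, with invented names and a toy corpus (tune the shingle size and normalization to your own data):
from collections import defaultdict

def fnv1a(text):
    # 32-bit FNV-1a hash of a shingle, so ints are stored instead of strings.
    h = 2166136261
    for byte in text.encode("utf-8"):
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    return h

def shingles(text, k=4):
    return {fnv1a(text[i:i + k]) for i in range(len(text) - k + 1)}

def build_index(paragraphs):
    index = defaultdict(set)  # shingle hash -> paragraph numbers containing it
    for num, para in enumerate(paragraphs):
        for sh in shingles(para):
            index[sh].add(num)
    return index

def most_similar(query, index, top_n=3):
    counts = defaultdict(int)  # paragraph number -> number of shared shingles
    for sh in shingles(query):
        for num in index.get(sh, ()):
            counts[num] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])[:top_n]

paragraphs = ["The cat sat on the mat", "A dog sat on the log"]
index = build_index(paragraphs)
print(most_similar("cat on a mat", index))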
Shingles can also be built up from words rather than characters (e.g. "the cat sat", "cat sat on", "sat on the"). This tends to be better with larger pieces of text: say, 30 words or more. I would typically ignore all differences in whitespace, case and punctuation if taking this approach.
If you want to find matches that can span paragraphs as well, it becomes quite a bit more complex, as you have to store the character positions for every shingle and consider many different configurations of possible matches, penalizing them according to how widely scattered their shingles are. That could end up being quite complex code, and I would seriously consider just sticking with a Levenshtein-based solution such as Smith-Waterman, even if it doesn't deal well with inversions of word order.
I don't think a Bloom filter is likely to help you much, though I'm not sure how you're using it. Bloom filters might be useful if your document is highly structured: a limited set of possible strings, where you're searching for the existence of one of them. For natural language, though, I doubt it will be very useful.

Predicting Missing Word in Sentence

How can I predict a word that's missing from a sentence?
I've seen many papers on predicting the next word in a sentence using an n-grams language model with frequency distributions from a set of training data. But instead I want to predict a missing word that's not necessarily at the end of the sentence. For example:
I took my ___ for a walk.
I can't seem to find any algorithms that take advantage of the words after the blank; I guess I could ignore them, but they must add some value. And of course, a bi/trigram model doesn't work for predicting the first two words.
What algorithm/pattern should I use? Or is there no advantage to using the words after the blank?
TensorFlow has a tutorial to do this: https://www.tensorflow.org/versions/r0.9/tutorials/word2vec/index.html
Incidentally it does a bit more and generates word embeddings, but to get there they train a model to predict the (next/missing) word. They also show using only the previous words, but you can apply the same ideas and add the words that follow.
They also have a bunch of suggestions on how to improve the precision (e.g. skip-grams).
Somewhere at the bottom of the tutorial there are links to working source code.
The only thing to worry about is having sufficient training data.
So, when I've worked with bigrams/trigrams, an example query generally looked something like "Predict the missing word in 'Would you ____'". I'd then go through my training data and gather all the sets of three words matching that pattern, and count the things in the blanks. So, if my training data looked like:
would you not do that
would you kindly pull that lever
would you kindly push that button
could you kindly pull that lever
I would get two counts for "kindly" and one for "not", and I'd predict "kindly". All you have to do for your problem is consider the blank in a different place: "____ you kindly" would get two counts for "would" and one for "could", so you'd predict "would". As far as the computer is concerned, there's nothing special about the word order - you can describe whatever pattern you want from your training data. Does that make sense?
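As a small illustration of that counting idea, using the toy training lines above (the function name is invented for this sketch):
from collections import Counter

training = [
    "would you not do that",
    "would you kindly pull that lever",
    "would you kindly push that button",
    "could you kindly pull that lever",
]

def predict_blank(left, right, sentences):
    # Count every word that appears between the given left and right context.
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            before = tokens[max(0, i - len(left)):i]
            after = tokens[i + 1:i + 1 + len(right)]
            if before == left and after == right:
                counts[token] += 1
    return counts.most_common()

print(predict_blank(["would", "you"], [], training))   # [('kindly', 2), ('not', 1)]
print(predict_blank([], ["you", "kindly"], training))  # [('would', 2), ('could', 1)]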

Machine Learning: Good way to represent word features

Not quite sure if this is the right place or not, but here is my question.
So for features which are numeric in nature, it is quite natural to represent them, plot them, etc., but what about words?
How do you deal with data where you have words as features? So let's say I have a dataset with following features:
InventoryVal, Number of Units, Avg Price, Category of Event, and so on.
InventoryVal is a number
Number of Units is a number
Avg Price is a number
Category of Event is a word that is assigned by humans.
Even if I replace a category (for example, "books") with an id (say 1), that is still something I have assigned, and it isn't intrinsic to the data.
What is a good metric to represent that a product belongs to category "art" without artificially assigning anything?
Eghh... too vague or loosely worded a question?
So, as you might have guessed, there are entire ML libraries devoted to this problem, but if you just want to get started, the simplest (and perhaps most common) approach is word frequency. In other words, you represent each word as a feature whose value is a function of the number of times that word occurs in each document.
But the most common words (a, and, the, this, etc.) occur most often in ordinary text documents (e.g., email messages) yet are hardly the most important, so it is common to weight a word feature by the inverse of its frequency.
So again, this is the simplest methodology (bag of words is how it's usually referred to); more sophisticated analyses (which are not always required) pre-process the individual words, e.g., with parts-of-speech tagging.
If you like Python, I recommend NLTK (Natural Language Toolkit), a mature and well-documented Python library. There are quite a few "getting started" tutorials; perhaps begin with the ones created by the NLTK contributors and referenced on the NLTK homepage. These tutorials usually rely on corpora (data sets) included in the base NLTK install.
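Here is a minimal bag-of-words sketch of those ideas in plain Python (the documents are placeholders); each word becomes a feature whose value is its count in the document, scaled down for words that appear in many documents:
import math
from collections import Counter

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "stocks rose as the market rallied".split(),
]

# Number of documents each word appears in.
doc_freq = Counter()
for tokens in docs:
    doc_freq.update(set(tokens))

def features(tokens):
    # Term count scaled by log(N / document frequency).
    counts = Counter(tokens)
    return {w: c * math.log(len(docs) / doc_freq[w]) for w, c in counts.items()}

print(features(docs[0]))
Note that "the" gets weight 0 here because it occurs in every document, which is exactly the down-weighting of common words described above.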
If you are using an existing machine learning package, or a packaged machine learning algorithm, there may be a way to tell it that a particular field holds, e.g., integers which are to be treated as identifiers, for which only comparisons for equality and inequality make sense. If not, and if there are only a small number of distinct categories, it might make sense to replace a category field that has 10 possible values with 10 binary fields, each holding 1 if the object is in that particular category, or 0 if not (or 9 fields, with the object being in the 10th category if all of them are 0).
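For example, a short sketch of that binary-field idea, with invented column names:
rows = [
    {"inventory_val": 1200.0, "num_units": 30, "avg_price": 40.0, "category": "books"},
    {"inventory_val": 800.0, "num_units": 10, "avg_price": 80.0, "category": "art"},
]

categories = sorted({r["category"] for r in rows})

def one_hot(row):
    # Keep the numeric fields, replace the category field with one 0/1 field per category.
    encoded = {k: v for k, v in row.items() if k != "category"}
    for cat in categories:
        encoded["is_" + cat] = 1 if row["category"] == cat else 0
    return encoded

for r in rows:
    print(one_hot(r))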

How do you Index Files for Fast Searches?

Nowadays, Microsoft and Google will index the files on your hard drive so that you can search their contents quickly.
What I want to know is how do they do this? Can you describe the algorithm?
The simple case is an inverted index.
The most basic algorithm is simply:
scan the file for words, creating a list of unique words
normalize and filter the words
place an entry tying that word to the file in your index
The details are where things get tricky, but the fundamentals are the same.
By "normalize and filter" the words, I mean things like converting everything to lowercase, removing common "stop words" (the, if, in, a etc.), possibly "stemming" (removing common suffixes for verbs and plurals and such).
After that, you've got a unique list of words for the file and you can build your index off of that.
There are optimizations for reducing storage, techniques for checking locality of words (is "this" near "that" in the document, for example).
But, that's the fundamental way it's done.
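A toy version of those steps in Python (file contents are passed in directly to keep the sketch self-contained):
import re
from collections import defaultdict

STOP_WORDS = {"the", "if", "in", "a", "and", "of"}

def build_index(files):
    index = defaultdict(set)  # word -> set of file names containing it
    for name, text in files.items():
        # Normalize: lowercase and keep only alphanumeric runs; filter stop words.
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            if word not in STOP_WORDS:
                index[word].add(name)
    return index

files = {
    "notes.txt": "The cat sat on the mat.",
    "memo.txt": "Meeting about the cat photos.",
}
print(build_index(files))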
Here's a really basic description; for more details, you can read this textbook (free online): http://informationretrieval.org/¹
1) For all files, create an index. The index consists of all unique words that occur in your dataset (called a "corpus"). With each word, a list of document ids is associated; each document id refers to a document that contains the word.
Variations: sometimes when you generate the index you want to ignore stop words ("a", "the", etc). You have to be careful, though ("to be or not to be" is a real query composed of stopwords).
Sometimes you also stem the words. This has more impact on search quality in non-English languages that use suffixes and prefixes to a greater extent.
2) When a user enters a query, look up the corresponding lists, and merge them. If it's a strict boolean query, the process is pretty straightforward -- for AND, a docid has to occur in all the word lists, for OR, in at least one wordlist, etc.
3) If you want to rank your results, there are a number of ways to do that, but the basic idea is to use the frequency with which a word occurs in a document, as compared to the frequency you expect it to occur in any document in the corpus, as a signal that the document is more or less relevant. See textbook.
4) You can also store word positions to infer phrases, etc.
Most of that is irrelevant for desktop search, as you are more interested in recall (all documents that include the term) than ranking.
¹ previously on http://www-csli.stanford.edu/~hinrich/information-retrieval-book.html, accessible via wayback machine
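As a bare-bones illustration of the posting-list merge in step 2, assuming the word-to-document-id index from step 1 (the index contents here are made up):
index = {
    "cat": {1, 2, 5},
    "hat": {2, 5, 7},
    "mat": {3, 5},
}

def boolean_and(terms, index):
    # The doc id must occur in every term's posting list.
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

def boolean_or(terms, index):
    # The doc id must occur in at least one posting list.
    return set().union(*(index.get(t, set()) for t in terms))

print(boolean_and(["cat", "hat"], index))  # {2, 5}
print(boolean_or(["cat", "mat"], index))   # {1, 2, 3, 5}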
You could always look into something like Apache Lucene.
Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.
