I am working on feature engineering for text classification, and I am stuck on choosing features. Most of the literature says to tokenize the text and use the tokens as features (after removing stop words and punctuation), but then you miss out on multi-word terms like "lung cancer" and other phrases. So the question is: how do I decide on the n-gram order and treat n-grams as features?
The relevant 2-gram (in this case "Lung cancer") will stand out by frequency.
Imagine the following text:
I know someone who has Lung cancer: Lung cancer is a terrible disease.
If you make a list of the 2-grams, you'll end up with 'Lung cancer' first, and the other combinations ('has Lung', 'who has') second.
This is because certain groups of words represent something - and are therefore repeated - while others are just connectors ('has' or 'who') that form 2-grams 'circumstantially'. The key is to filter by frequency.
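For illustration, a minimal sketch of that frequency filter using nothing but Python's collections.Counter (the tokenisation here is deliberately naive; substitute your own):

    # Count 2-gram frequencies in the example sentence with the standard library.
    from collections import Counter

    text = "I know someone who has Lung cancer: Lung cancer is a terrible disease."
    tokens = [t.strip(".,:;!?").lower() for t in text.split()]

    bigrams = Counter(zip(tokens, tokens[1:]))

    # 'lung cancer' comes out on top; the circumstantial pairs appear only once.
    for (w1, w2), count in bigrams.most_common(5):
        print(w1, w2, count)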
If you are having issues generating n-grams, I feel you might be using the wrong libraries/toolset.
I would say that this highly depends on your training data. You can visualise the distributions of bigram and trigram frequencies, which might give you an idea of the relevance of each n-gram order. You might also want to use noun chunks during your investigation: relevant noun chunks (or parts of them) may appear often, which can give you a sense of how to select your n-grams.
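For example, a quick way to eyeball those distributions and the noun chunks is with spaCy; this is only a sketch and assumes the en_core_web_sm model is installed:

    # Inspect bigram/trigram frequencies and noun chunks with spaCy.
    # Assumes: pip install spacy && python -m spacy download en_core_web_sm
    from collections import Counter
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def ngram_counts(tokens, n):
        return Counter(zip(*[tokens[i:] for i in range(n)]))

    doc = nlp("I know someone who has lung cancer. Lung cancer is a terrible disease.")
    tokens = [t.lower_ for t in doc if t.is_alpha]

    print(ngram_counts(tokens, 2).most_common(5))  # bigram distribution
    print(ngram_counts(tokens, 3).most_common(5))  # trigram distribution
    print(Counter(chunk.text.lower() for chunk in doc.noun_chunks).most_common(5))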
Related
I have a set of almost 2000 texts.
My goal is to find the keywords across these texts to understand what is the subject of them, or simply the most common words and expressions.
I would like some ideas for algorithms to score the words and to identify when they frequently come together.
I have read some other related questions here, but I'm trying to get more and more information about this subject. So any ideas are very welcome. Thank you so much!
--
I have already removed stopwords. After removing them I have more than 7000 words remaining. My question is how to score these words, and at what point I can consider removing some of them from my list of keywords. Also, how do I get key expressions, i.e. find words that frequently come together?
You may want to refer to a classical text on Information Retrieval. Most of the algorithms use a stop list to remove commonly occurring words such as "for" and "the", and then, extract the base or root word (change "seeing", "seen", "see", "sees" to the base word "see"). The remaining words form the keywords of the document and are weighted by things like term frequency (how many times the word occurs in the document) and inverse document frequency (how unique is the word in describing the content). You can use the weighted keywords as document representation and use them for retrieval.
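If you work in Python, scikit-learn's TfidfVectorizer combines the stop list, term frequency and inverse document frequency in one step (stemming would still need a separate tool such as NLTK's PorterStemmer); a rough sketch, assuming a recent scikit-learn:

    # Rank each document's terms by tf-idf weight.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["text of document one", "text of document two", "text of document three"]

    vectorizer = TfidfVectorizer(stop_words="english")  # built-in English stop list
    tfidf = vectorizer.fit_transform(docs)
    terms = vectorizer.get_feature_names_out()

    for i in range(len(docs)):
        row = tfidf[i].toarray().ravel()
        top = np.argsort(row)[::-1][:10]                # ten highest-weighted terms
        print(i, [(terms[j], round(row[j], 3)) for j in top if row[j] > 0])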
You can use the Lucene MoreLikeThis implementation, which extracts a list of the most important keywords from a given text document. The term scoring function it uses is tf-idf: it chooses the terms with the highest tf-idf scores, i.e. terms that are relatively uncommon across the collection but occur frequently in the document.
If efficiency is an issue, it employs some common heuristics as follows.
Since you're trying to maximize a tf*idf score, you're probably most interested in terms with a high tf. Choosing a tf threshold even as low as two or three will radically reduce the number of terms under consideration. Another heuristic is that terms with a high idf (i.e., a low df) tend to be longer, so you could threshold the terms by the number of characters, not selecting anything shorter than, e.g., six or seven characters. With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms that do a pretty good job of characterizing a document.
More details can be found in this javadoc.
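Those heuristics are easy to mimic outside Lucene as well; a minimal, self-contained sketch in Python (the thresholds are the ones suggested above, not the actual MoreLikeThis defaults):

    # Keep terms whose tf clears a small threshold and whose length suggests a
    # high idf, then return the top few by tf*idf.
    import math
    from collections import Counter

    def characteristic_terms(doc_tokens, corpus_doc_freq, n_docs,
                             min_tf=2, min_len=6, top_k=10):
        tf = Counter(doc_tokens)
        scored = []
        for term, freq in tf.items():
            if freq < min_tf or len(term) < min_len:
                continue                                # heuristic pre-filters
            idf = math.log(n_docs / (1 + corpus_doc_freq.get(term, 0)))
            scored.append((freq * idf, term))
        return [term for _, term in sorted(scored, reverse=True)[:top_k]]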
I am trying to make a search tool that would search a small number of objects (about 1000, each with about 3 text fields I want to search) for a given phrase.
I was trying to find an algorithm that would rank the search results for me. Lots of topics lead to Fuzzy matching, and the Levenshtein distance algorithm, but that doesn’t seem appropriate for this case (for example, it would say the phrase “cats and dogs” is closer to “cars and cogs” than it is to “dogs and cats”).
Is there an algorithm/method dedicated to matching a search phrase against other blocks of text, and ranking the results according to things like the text being equal, the search phrase being contained, individual words being contained, etc.? I don't even know what is normally appropriate.
I usually code in C#. I am not using a database.
Look at Lucene. It can perform all sorts of text indexing, return ranked results, and lots of other good stuff. There's an implementation in C#. It might be a bit overkill for your use case, but it's such an excellent and useful technology that you should really have a look into it; it's almost certain you will find a good use for it during your career.
Lately I've been mucking about with text categorization and language classification based on Cavnar and Trenkle's article "N-Gram-Based Text Categorization" as well as other related sources.
For doing language classification I've found this method to be very reliable and useful. The size of the documents used to generate the N-gram frequency profiles is fairly unimportant as long as they are "long enough", since I'm just using the n most common N-grams from the documents.
On the other hand, well-functioning text categorization eludes me. I've tried both my own implementations of various variations of the algorithm (with and without tweaks such as idf weighting) and other people's implementations. It works quite well as long as I can generate somewhat similarly sized frequency profiles for the category reference documents, but the moment they start to differ just a bit too much, the whole thing falls apart and the category with the shortest profile ends up getting a disproportionate number of documents assigned to it.
Now, my question is: what is the preferred method of compensating for this effect? It's obviously happening because the algorithm assumes a maximum distance for any given N-gram that equals the length of the category frequency profile, but for some reason I just can't wrap my head around how to fix it. One reason I'm interested in a fix is that I'm trying to automate the generation of category profiles from documents with a known category, which can vary in length (and even if they are the same length the profiles may end up being different lengths). Is there a "best practice" solution to this?
If you are still interested, and assuming I understand your question correctly, the answer to your problem would be to normalise your n-gram frequencies.
The simplest way to do this, on a per document basis, is to count the total frequency of all n-grams in your document and divide each individual n-gram frequency by that number. The result is that every n-gram frequency weighting now relates to a percentage of the total document content, regardless of the overall length.
Using these percentages in your distance metrics will discount the size of the documents and instead focus on the actual make up of their content.
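In code the normalisation is a one-liner per profile; a sketch:

    # Turn raw n-gram counts into relative frequencies so that profiles built
    # from documents of very different lengths become directly comparable.
    def normalise_profile(ngram_counts):
        total = sum(ngram_counts.values())
        return {ng: count / total for ng, count in ngram_counts.items()}

    # e.g. normalise_profile({"th": 40, "he": 10}) -> {"th": 0.8, "he": 0.2}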
It might also be worth noting that the n-gram representation only makes up a very small part of an entire categorisation solution. You might also consider using dimensional reduction, different index weighting metrics and obviously different classification algorithms.
See here for an example of n-gram use in text classification.
As I understand it, the task is to compute the probability that a language model M generates some text.
Recently I was working on measuring the readability of texts using semantic, syntactic, and lexical properties. It can also be measured with a language-model approach.
To answer properly you should consider these questions:
Are you using a log-likelihood approach?
What level of N-grams are you using? Unigrams, bigrams, or higher?
How big are the language corpora you use?
Using only unigrams and bigrams I managed to classify some documents with good results. If your classification is weak, consider building a bigger language corpus or using lower-order n-grams.
Also remember that classifying a text into the wrong category may happen simply because the text is short (by chance, a few of its words appear in another category's language model).
Consider making your language corpora bigger, and keep in mind that analysing short texts carries a higher probability of misclassification.
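For what it's worth, a bare-bones sketch of that log-likelihood scoring with unigram models and add-one smoothing (so unseen words don't zero out a score); the names are illustrative:

    # Score a text against per-category unigram language models by
    # log-likelihood with add-one (Laplace) smoothing, then pick the best category.
    import math
    from collections import Counter

    def train_unigram_model(tokens):
        counts = Counter(tokens)
        return counts, sum(counts.values())

    def log_likelihood(tokens, model, vocab_size):
        counts, total = model
        return sum(math.log((counts[t] + 1) / (total + vocab_size)) for t in tokens)

    def classify(tokens, models, vocab_size):
        # models: {category_name: trained model}
        return max(models, key=lambda cat: log_likelihood(tokens, models[cat], vocab_size))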
I am working on a word-based game. My word database contains around 10,000 English words (sorted alphabetically). I am planning to have 5 difficulty levels in the game. Level 1 shows the easiest words and Level 5 shows the most difficult words, relatively speaking.
I need to divide the 10,000-word list into 5 levels, starting from the easiest words and moving to the most difficult ones. I am looking for a program to do this for me.
Can someone tell me if there is an algorithm or a method to quantitatively measure the difficulty of an English word?
I have some thoughts revolving around using the "word length" and "word frequency" as factors, and come up with a formula or something that accomplishes this.
Get a large corpus of texts (e.g. from the Gutenberg archives), do a straight frequency analysis, and eyeball the results. If they don't look satisfying, weight each text with its Flesch-Kincaid score and run the analysis again - words that show up frequently, but in "difficult" texts will get a score boost, which is what you want.
If all you have is 10000 words, though, it will probably be quicker to just do the frequency sorting as a first pass and then tweak the results by hand.
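A rough sketch of that weighted frequency analysis; the Flesch-Kincaid grades are assumed to be computed elsewhere (e.g. with the textstat package or your own formula):

    # Word score = frequency weighted by the difficulty (Flesch-Kincaid grade)
    # of the texts the word appears in; a higher score suggests a "harder" word.
    from collections import defaultdict

    def difficulty_scores(texts_with_grades):
        """texts_with_grades: iterable of (list_of_tokens, fk_grade) pairs."""
        scores = defaultdict(float)
        for tokens, grade in texts_with_grades:
            for token in tokens:
                scores[token] += grade   # frequent-in-difficult-texts gets boosted
        return dict(scores)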
I'm not understanding how frequency is being used... if you were to scan a newspaper, I'm sure you would see the word "thoroughly" mentioned much more frequently than the word "bop" or "moo" but that doesn't mean it's an easier word; on the contrary 'thoroughly' is one of the most disgustingly absurd spelling anomalies that gives grade school children nightmares...
Try explaining to a sane human being learning english as a second language the subtle difference between slaughter and laughter.
I agree that frequency of use is the most likely metric; there are studies supporting a high correlation between word frequency and difficulty (correct responses on tests, etc.). Check out the English Lexicon Project at http://elexicon.wustl.edu/ for some 70k(?) frequency-rated words.
Crowd-source the answer.
Create an online 'game' that lists 10 words at random.
Get the player to drag and drop them into easiest - hardest, and tick to indicate if the player has ever heard of the word.
Apply a ranking algorithm (e.g. Elo) to the result of each experiment.
Repeat.
It might even be fun to play, you could get a language proficiency score at the end.
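The Elo update itself is tiny; a sketch of one pairwise "which word is harder?" comparison (the K-factor is a tunable constant):

    # One Elo rating update after a pairwise comparison between two words.
    def elo_update(rating_a, rating_b, a_won, k=32):
        expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
        score_a = 1.0 if a_won else 0.0
        new_a = rating_a + k * (score_a - expected_a)
        new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
        return new_a, new_b

    # Start every word at, say, 1000 and fold in each player judgement as it arrives.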
Difficulty is a pretty amorphous concept. If you've no clear idea of what you want, perhaps you could take a look at the Porter Stemming Algorithm (see, for example, the original paper). That contains a more advanced idea of 'length' by defining words as being of the form [C](VC){m}[V]; C means a block of consonants and V a block of vowels, and this definition says a word is an optional C followed by m VC blocks and finally an optional V. The m value is this more advanced 'length'.
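A rough sketch of computing that m value (vowels are a, e, i, o, u here; Porter's special handling of 'y' is glossed over):

    # Porter's "measure" m: the number of VC blocks in the form [C](VC){m}[V].
    def porter_measure(word):
        vowels = set("aeiou")                      # simplification: 'y' treated as a consonant
        pattern = ""
        for ch in word.lower():
            tag = "v" if ch in vowels else "c"
            if not pattern or pattern[-1] != tag:  # collapse runs of the same type
                pattern += tag
        return pattern.count("vc")

    # porter_measure("tree") == 0, porter_measure("trouble") == 1, porter_measure("oaten") == 2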
Depending on the type of game, the definition of "difficult" will change. If your game involves typing quickly (ztype-style...), "difficult" will have a different meaning than in a game where you need to define a word's meaning.
That said, Scrabble has a way to measure how "difficult" a word is, and it is also quite easy to compute algorithmically.
Also you may look into defining "difficult" in terms of your game. You could beta test your game and classify words according to how "difficult" players find them in the context of your own game.
There are several factors that relate to word difficulty, including age at acquisition, imageability, concreteness, abstractness, number of syllables, and frequency (spoken and written). There are also psycholinguistic databases that will search for words by at least some of these factors (just do a search for "psycholinguistic database").
Word frequency is an obvious choice (of course not perfect). You can download Google n-grams V2 here, which is licensed under the Creative Commons Attribution 3.0 Unported License.
Format: ngram TAB year TAB match_count TAB page_count TAB volume_count NEWLINE
Example:
Corpus used (from Lin, Yuri, et al. "Syntactic annotations for the google books ngram corpus." Proceedings of the ACL 2012 system demonstrations. Association for Computational Linguistics, 2012.):
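A minimal sketch of reading that format and summing the match counts per n-gram over all years (the file name is illustrative):

    # Aggregate match_count per n-gram from a TAB-separated Google n-grams file
    # (format: ngram TAB year TAB match_count TAB page_count TAB volume_count).
    from collections import Counter

    def total_counts(path):
        totals = Counter()
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                ngram, year, match_count, _pages, _volumes = line.rstrip("\n").split("\t")
                totals[ngram] += int(match_count)   # sum over all years
        return totals

    # counts = total_counts("ngrams.tsv")  # illustrative file name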
Word length is a good indicator. For word frequency, you would need data, as an algorithm obviously cannot determine it by itself.
You could also use some sort of scoring like the Scrabble game does: each letter has a value and the word's final value would be the sum of those values.
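A sketch of that scoring with the standard English Scrabble tile values:

    # Score a word by summing standard English Scrabble letter values.
    SCRABBLE_VALUES = {
        **dict.fromkeys("aeioulnstr", 1), **dict.fromkeys("dg", 2),
        **dict.fromkeys("bcmp", 3), **dict.fromkeys("fhvwy", 4),
        "k": 5, **dict.fromkeys("jx", 8), **dict.fromkeys("qz", 10),
    }

    def scrabble_score(word):
        return sum(SCRABBLE_VALUES.get(ch, 0) for ch in word.lower())

    # scrabble_score("tea") == 3, scrabble_score("quartz") == 24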
It would, IMO, be easier to find frequency data about each letter in your language.
In his article on spell correction Peter Norvig uses a dictionary to count the number of occurrences of each word (and thus determine their frequency).
You could use this as a stepping stone :)
Also, frequency should probably influence the difficulty more than length... you would have to beta-test the game for that.
In addition to metrics such as Flesch-Kincaid, you could try an approach based on the Dale-Chall readability formula, using lists of words that are familiar to readers of a particular level of ability.
Implementations of many of the readability formulae contain code for estimating the number of syllables in a word, which may also be useful.
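If you would rather not pull in a library, a crude syllable estimate (counting vowel groups, with a rough silent-'e' adjustment) is often good enough; a sketch:

    # Crude English syllable estimate: count vowel groups, subtract a trailing
    # silent 'e', and never return less than one.
    def estimate_syllables(word):
        word = word.lower()
        vowels = "aeiouy"
        groups = 0
        previous_was_vowel = False
        for ch in word:
            is_vowel = ch in vowels
            if is_vowel and not previous_was_vowel:
                groups += 1
            previous_was_vowel = is_vowel
        if word.endswith("e") and groups > 1:
            groups -= 1                  # "make" -> 1 syllable, "the" stays at 1
        return max(groups, 1)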
I would guess that the grade at which the word is introduced into a normal student's vocabulary is a measure of difficulty. Next would be how many standard rule violations it has, meaning words whose spellings or pronunciations seem to violate the normal set of rules. Finally, the meaning can be a tough concept; for example, try explaining "abstract" to someone who has never heard the word.
Without claiming to know anything about their algorithm, there is an API that returns a 1-10 scale word difficulty: TwinWord API
I have never used it, myself, though.
How does something like Statistically Improbable Phrases work?
According to Amazon:
Amazon.com's Statistically Improbable Phrases, or "SIPs", are the most distinctive phrases in the text of books in the Search Inside!™ program. To identify SIPs, our computers scan the text of all books in the Search Inside! program. If they find a phrase that occurs a large number of times in a particular book relative to all Search Inside! books, that phrase is a SIP in that book.

SIPs are not necessarily improbable within a particular book, but they are improbable relative to all books in Search Inside!. For example, most SIPs for a book on taxes are tax related. But because we display SIPs in order of their improbability score, the first SIPs will be on tax topics that this book mentions more often than other tax books. For works of fiction, SIPs tend to be distinctive word combinations that often hint at important plot elements.
For instance, for Joel's first book, the SIPs are: leaky abstractions, antialiased text, own dog food, bug count, daily builds, bug database, software schedules
One interesting complication is that these are phrases of either 2 or 3 words. This makes things a little more interesting because these phrases can overlap with or contain each other.
It's a lot like the way Lucene ranks documents for a given search query. They use a metric called tf-idf, where tf is term frequency and idf is inverse document frequency. The former ranks a document higher the more the query terms appear in that document, and the latter ranks a document higher if it has terms from the query that appear infrequently across all documents. The specific way they calculate the latter is log(number of documents / number of documents with the term), i.e. the inverse of the frequency with which the term appears.
So in your example, those phrases are SIPs relative to Joel's book because they are rare phrases (appearing in few books) and they appear multiple times in his book.
Edit: in response to the question about 2-grams and 3-grams, overlap doesn't matter. Consider the sentence "my two dogs are brown". Here, the list of 2-grams is ["my two", "two dogs", "dogs are", "are brown"], and the list of 3-grams is ["my two dogs", "two dogs are", "dogs are brown"]. As I mentioned in my comment, with overlap you get N-1 2-grams and N-2 3-grams for a stream of N words. Because 2-grams can only equal other 2-grams and likewise for 3-grams, you can handle each of these cases separately. When processing 2-grams, every "word" will be a 2-gram, etc.
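To make the mechanics concrete, here is a toy version of that scoring over 2- and 3-grams, using the idf formula quoted above (this is a guess at the general shape, not Amazon's actual algorithm):

    # Score candidate 2-/3-gram phrases in one book by tf * log(N / df), where
    # df is the number of books in the collection containing the phrase.
    # Assumes book_tokens is also one of all_books_tokens, so df >= 1.
    import math
    from collections import Counter

    def ngrams(tokens, n):
        return zip(*[tokens[i:] for i in range(n)])

    def sips(book_tokens, all_books_tokens, n_values=(2, 3), top_k=10):
        n_books = len(all_books_tokens)
        doc_freq = Counter()
        for tokens in all_books_tokens:
            seen = set()
            for n in n_values:
                seen.update(ngrams(tokens, n))
            doc_freq.update(seen)

        tf = Counter()
        for n in n_values:
            tf.update(ngrams(book_tokens, n))

        scored = {ng: freq * math.log(n_books / doc_freq[ng]) for ng, freq in tf.items()}
        return sorted(scored, key=scored.get, reverse=True)[:top_k]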
They are probably using a variation on the tf-idf weight, detecting phrases that occur a high number of times in the specific book but few times in the whole corpus minus the specific book. Repeat for each book.
Thus 'improbability' is relative to the whole corpus and could be understood as 'uniqueness', or 'what makes a book unique compared to the rest of the library'.
Of course, I'm just guessing.
LingPipe has a tutorial on how to do this, and they link to references. They don't discuss the math behind it, but their source code is open, so you can look at it directly.
I can't say I know what Amazon does, because they probably keep it a secret (or at least they just haven't bothered to tell anyone).
Sorry for reviving an old thread, but I landed up here for the same question and found that there is some newer work which might add to the great thread.
I feel SIPs are more unique to a document than just words with high TF-IDF scores. For example, in a document about Harry Potter, terms like Hermione Granger and Hogwarts tend to be better SIPs, whereas terms like magic and London aren't. TF-IDF is not great at making this distinction.
I came across an interesting definition of SIPs here. In this work, the phrases are modelled as n-grams and their probability of occurrence in a document is computed to identify their uniqueness.
As a starting point, I'd look at Markov Chains.
One option (a rough sketch follows the list):
build a text corpus from the full index.
build a text corpus from just the one book.
for every m to n word phrase, find the probability that each corpus would generate it.
select the N phrases with the highest ratio of probabilities.
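A toy sketch of that ratio, using simple relative frequencies as a stand-in for "the probability that each corpus would generate" a phrase (a real Markov-chain model would factor the phrase into conditional word probabilities):

    # Rank 2..3-word phrases by how much more probable they are under the single
    # book than under the full index (relative frequencies as crude corpus models).
    from collections import Counter

    def phrase_counts(tokens, min_n=2, max_n=3):
        counts = Counter()
        for n in range(min_n, max_n + 1):
            counts.update(zip(*[tokens[i:] for i in range(n)]))
        return counts

    def top_ratio_phrases(book_tokens, index_tokens, top_k=10):
        book, index = phrase_counts(book_tokens), phrase_counts(index_tokens)
        book_total, index_total = sum(book.values()), sum(index.values())
        ratios = {
            ph: (c / book_total) / ((index[ph] + 1) / index_total)  # +1 avoids divide-by-zero
            for ph, c in book.items()
        }
        return sorted(ratios, key=ratios.get, reverse=True)[:top_k]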
An interesting extension would be to run a Markov Chain generator where your weights table is a magnification of the difference between the global and local corpus. This would generate a "caricature" (literally) of the author's stylistic idiosyncrasies.
I am fairly sure it's the combination of SIPs that identifies the book as unique. In your example it's very rare, almost impossible, that another book contains both "leaky abstractions" and "own dog food".
I am however making an assumption here as I do not know for sure.