How does the Stanford Natural Language Parser use the Penn Treebank for its tagging process? I want to know how it finds the POS tags for a given input.
The Stanford part-of-speech tagger uses a probabilistic sequence model to determine the most likely sequence of part-of-speech tags underlying a sentence. Some of the features provided to this model are:
Surrounding words and n-grams
Part-of-speech tags of surrounding words
"Word shapes" (e.g., "Foo5" is translated to "Xxx#")
Word suffix, prefix
See the ExtractorFrames class for details. The model is trained on a tagged corpus (like the Penn Treebank) which has each token annotated with its correct part of speech.
At run time, features like those above are computed for the input text and used to build per-tag probabilities, which are then fed into an implementation of the Viterbi algorithm (ExactBestSequenceFinder) that finds the most likely assignment of tags for the entire sequence.
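To make the decoding step concrete, here is a toy Viterbi decoder in Python. The per-tag and transition scores below are made-up log-probabilities for illustration only; the real tagger derives its scores from learned feature weights over features like those listed above.

```python
# Toy Viterbi decode for the two-word input "the run".
tags = ["DT", "NN", "VB"]
emit = [  # emit[i][t] ~ log P(tag t | features of word i)  (made-up numbers)
    {"DT": -0.1, "NN": -3.0, "VB": -4.0},  # "the"
    {"DT": -5.0, "NN": -0.7, "VB": -1.2},  # "run"
]
trans = {  # trans[(prev, cur)] ~ log P(cur | prev)  (made-up numbers)
    ("<s>", "DT"): -0.5, ("<s>", "NN"): -1.5, ("<s>", "VB"): -2.0,
    ("DT", "DT"): -4.0, ("DT", "NN"): -0.3, ("DT", "VB"): -3.0,
    ("NN", "DT"): -3.0, ("NN", "NN"): -1.5, ("NN", "VB"): -1.0,
    ("VB", "DT"): -1.0, ("VB", "NN"): -1.5, ("VB", "VB"): -3.0,
}

best = [{} for _ in emit]  # best[i][t] = (score, best previous tag)
for t in tags:
    best[0][t] = (trans[("<s>", t)] + emit[0][t], None)
for i in range(1, len(emit)):
    for t in tags:
        best[i][t] = max(
            (best[i - 1][p][0] + trans[(p, t)] + emit[i][t], p) for p in tags
        )

# Trace back from the highest-scoring final tag.
tag = max(tags, key=lambda t: best[-1][t][0])
seq = [tag]
for i in range(len(emit) - 1, 0, -1):
    tag = best[i][tag][1]
    seq.append(tag)
print(list(reversed(seq)))  # -> ['DT', 'NN']
```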
For more information on getting started with POS tagging:
Watch the Week 5 lectures of the Coursera NLP class (co-taught by the CoreNLP lead)
Check out the code in the edu.stanford.nlp.tagger.maxent package
Part-of-speech tagging in NLTK
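To see plain POS tagging in action, here is a quick NLTK example (NLTK's default perceptron tagger, a different model from Stanford's, but it performs the same task over the same Penn Treebank tag set):

```python
import nltk

# One-time downloads: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
tokens = nltk.word_tokenize("The Stanford tagger labels each token.")
print(nltk.pos_tag(tokens))
# Roughly: [('The', 'DT'), ('Stanford', 'NNP'), ('tagger', 'NN'),
#           ('labels', 'VBZ'), ('each', 'DT'), ('token', 'NN'), ('.', '.')]
```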
While performing sentiment analysis, how can I make the machine understand that I'm referring to Apple (the iPhone) instead of apple (the fruit)?
Thanks for the advice!
Well, there are several methods.
I would start by checking capitalization: usually, when referring to a name, the first letter is capitalized.
Before doing sentiment analysis, I would use part-of-speech tagging and named entity recognition (NER) to tag the relevant words.
Stanford CoreNLP is a good text analysis project to start with; it will teach you the basic concepts.
Example from CoreNLP:
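(A stand-in sketch using stanza, the Stanford NLP Group's Python library; the Java CoreNLP pipeline produces equivalent POS and NER annotations.)

```python
import stanza

# stanza.download('en')  # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,ner")
doc = nlp("Apple is releasing a new iPhone next month.")
for ent in doc.ents:
    print(ent.text, ent.type)    # e.g. Apple ORG, iPhone MISC, next month DATE
for word in doc.sentences[0].words:
    print(word.text, word.upos)  # e.g. Apple PROPN, is AUX, releasing VERB, ...
```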
You can see how the tags can help you.
And check out more info.
As described by Ofiris, NER is only one way to solve your problem. I feel it's more effective to use word embeddings to represent your words; that way, the machine automatically recognizes the context of a word. For example, "apple" mostly occurs together with words like "eat", but if the input "Apple" appears alongside "mobile" or another word from that domain, the machine will understand it means Apple the iPhone maker instead of apple the fruit. Two popular ways to generate word embeddings are word2vec and fastText.
Gensim provides reliable implementations of both word2vec and fastText:
https://radimrehurek.com/gensim/models/word2vec.html
https://radimrehurek.com/gensim/models/fasttext.html
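A minimal gensim word2vec sketch (the tiny corpus here is a made-up placeholder; real embeddings need a large corpus or pretrained vectors):

```python
from gensim.models import Word2Vec

sentences = [  # hypothetical pre-tokenized corpus
    ["i", "eat", "an", "apple", "every", "day"],
    ["apple", "released", "a", "new", "mobile", "phone"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)
print(model.wv.most_similar("apple", topn=3))  # neighbors reflect usage context
```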
In the presence of dates, famous brands, VIPs, or historical figures, you can use a NER (named entity recognition) algorithm; in that case, as suggested by Ofiris, Stanford CoreNLP offers a good named entity recognizer.
For a more general disambiguation of polysemous words (i.e., words having more than one sense, such as "good"), you could use a POS tagger coupled with a Word Sense Disambiguation (WSD) algorithm. An example of the latter can be found HERE, but I do not know of any freely downloadable library for this purpose.
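For what it's worth, NLTK does ship a simple, freely available WSD baseline: the Lesk algorithm. A sketch (note that WordNet's sense inventory covers common nouns like "bank", not brand senses like Apple Inc.):

```python
from nltk import word_tokenize
from nltk.wsd import lesk

# One-time downloads: nltk.download('punkt'); nltk.download('wordnet')
context = word_tokenize("I went to the bank to deposit my money")
sense = lesk(context, "bank")  # picks the WordNet synset whose gloss overlaps most
print(sense, "-", sense.definition() if sense else "no sense found")
```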
This problem has already been addressed by many open-source pre-trained NER models. You can also retrain an existing NER model to fine-tune it for this issue.
You can find a demo of NER results, as produced by spaCy's NER, here.
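For example, with spaCy (assuming the small English model is installed via `python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. Apple ORG, U.K. GPE, $1 billion MONEY
```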
Take the phrase "A Pedestrian wishes to cross the road".
I learnt English in England and, according to the old rules, the word 'Pedestrian' is a noun. Stanford CoreNLP finds it to be an adjective, regardless of capitalization.
I don't want to contradict the big brains of Stanford, USA, but that is just wrong. I am new to this semantic stuff, but by finding the word to be an adjective, the sentence lacks a valid noun phrase.
Have I missed the point of CoreNLP, lost the point of the English language, or should I be seeking more effective analysis tools?
I ask because the example sentence is the very first sentence of my very first processing experiment, and it is most discouraging.
CoreNLP is a statistical analysis tool. It is trained on many texts that have been annotated by pools of human experts. These experts agree on about 90% of the cases, so the CoreNLP system cannot beat that percentage, and your sentence is part of the roughly 10% of parses it gets wrong.
I'm trying to find out if there is a known algorithm that can detect the "key concept" of a sentence.
The use case is as follows:
User enters a sentence as a query (Does chicken taste like turkey?)
Our system identifies the concepts of the sentence (chicken, turkey)
And it runs a search of our corpus content
The area where we're lacking is identifying what the core "topic" of the sentence really is. The sentence "Does chicken taste like turkey?" has a primary topic of "chicken", because the user is asking about the taste of chicken, while "turkey" is a helper topic of less importance.
So... I'm trying to find out if there is an algorithm that will help me identify the primary topic of a sentence. Let me know if you are aware of any!
I actually did a research project on this, won two competitions with it, and am competing in nationals.
There are two steps to the method:
Parse the sentence with a Context-Free Grammar
In the resulting parse trees, find all nouns that are subordinate only to noun-phrase-like constituents
For example, "I ate pie" has 2 nouns: "I" and "pie". Looking at the parse tree, "pie" is inside of a Verb Phrase, so it cannot be a subject. "I", however, is only inside of NP-like constituents. being the only subject candidate, it is the subject. Find an early copy of this program on http://www.candlemind.com. Note that the vocabulary is limited to basic singular words, and there are no verb conjugations, so it has "man" but not "men", has "eat" but not "ate." Also, the CFG I used was hand-made an limited. I will be updating this program shortly.
Anyway, there are limitations to this program. My mentor pointed out that in its current state it cannot recognize sentences whose subjects are "real" NPs (what grammar actually calls NPs). For example, in "that the moon is flat is not a debate any longer," the subject is actually "that the moon is flat." However, the program would recognize "moon" as the subject. I will be fixing this shortly.
Still, this is good enough for most sentences...
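Here is a minimal sketch of the idea in Python with NLTK (a toy CFG I wrote for this example, not the author's hand-made grammar):

```python
import nltk
from nltk import CFG

grammar = CFG.fromstring("""
  S -> NP VP
  NP -> 'I' | 'pie'
  VP -> V NP
  V -> 'ate'
""")

for tree in nltk.ChartParser(grammar).parse("I ate pie".split()):
    # A noun is a subject candidate only if every constituent above it
    # (below the root S) is NP-like, i.e. it is not buried inside a VP.
    for pos in tree.treepositions("leaves"):
        ancestors = [tree[pos[:i]].label() for i in range(1, len(pos))]
        if all(label == "NP" for label in ancestors):
            print("subject candidate:", tree[pos])  # -> I
```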
My research paper can be found there too. Go to page 11 of it to read the methods.
Hope this helps.
Most basic NLP parsing techniques will be able to extract the basic aspects of the sentence, i.e., that "chicken" and "turkey" are NPs and that they are linked by "like", etc. Getting from these to a 'topic' or 'concept' is more difficult.
Techniques such as Latent Semantic Analysis and its many derivatives transform this information into a vector (some have methods for retaining, in part, the hierarchy/relations between parts of speech) and then compare it to existing vectors, usually pre-classified by concept. See http://en.wikipedia.org/wiki/Latent_semantic_analysis to get started.
Edit: Here's an example LSA app you can play around with to see if you might want to pursue it further: http://lsi.research.telcordia.com/lsi/demos.html
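If you want to try LSA locally, here is a minimal sketch with scikit-learn (my own toy corpus and parameter choices, just to show the mechanics):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [  # hypothetical pre-classified corpus
    "chicken tastes mild like other poultry",
    "turkey is a lean poultry often roasted",
    "stock markets fell sharply on friday",
    "investors sold shares amid market fears",
]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
svd = TruncatedSVD(n_components=2)  # project into a 2-d "concept" space
X_lsa = svd.fit_transform(X)

query = svd.transform(tfidf.transform(["does chicken taste like turkey"]))
print(cosine_similarity(query, X_lsa))  # poultry docs should score highest
```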
For many longer sentences it's difficult to say what exactly the topic is, and there may be more than one.
One way to get an approximate answer is:
1.) First tag the sentence using OpenNLP, the Stanford parser, or any other tagger.
2.) Then remove all the stop words from the sentence.
3.) Pick out the nouns (proper, singular, and plural).
The other way is:
1.) Chunk the sentence into phrases with any parser.
2.) Pick out all the noun phrases.
3.) Remove the noun phrases that don't have a noun as a child.
4.) Keep only adjectives and nouns; remove all other words from the remaining noun phrases.
This might give an approximate guess; a sketch of the first method follows below.
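A sketch of the first method with NLTK (the tag set and stop-word list are NLTK defaults; treat the output as a rough guess, as the answer says):

```python
import nltk
from nltk.corpus import stopwords

# One-time: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger'); nltk.download('stopwords')
tagged = nltk.pos_tag(nltk.word_tokenize("Does chicken taste like turkey?"))
stops = set(stopwords.words("english"))
nouns = [w for w, t in tagged
         if t in ("NN", "NNS", "NNP", "NNPS") and w.lower() not in stops]
print(nouns)  # -> ['chicken', 'turkey']
```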
"Key concept" is not a well-defined term in linguistics, but this may be a starting point: parse the sentence, find the subject in the parse tree or dependency structure that you get. (This doesn't always work; for example, the subject of "Is it raining?" is "it", while the key concept is likely "rain". Also, what's the key concept in "Are spaghetti and lasagna the same thing?")
This kind of problem (NLP + search) is more properly dealt with by methods such as LSA, but that's quite an advanced topic.
On the most basic level, a question in English usually takes the form <verb> <subject> ... ? or <pronoun> <verb> <subject> ... ?. This is by no means a good algorithm, especially considering that the subject can span several words, but depending on how sophisticated a solution you need, it might be a useful starting point.
If you need precision, ignore this answer.
If you're willing to shell out money, http://www.connexor.com/ is supposed to be able to do this type of semantic analysis for a wide variety of languages, including English. I have never directly used their product, and so can't comment on how well it works.
There's an article about parsing noun phrases in this month's issue of the MIT Computational Linguistics journal: http://www.mitpressjournals.org/doi/pdf/10.1162/COLI_a_00076
Compound or complex sentences may have more than one key concept.
You can use Stanford CoreNLP or MaltParser, which can give you the dependency structure of a sentence along with part-of-speech tags; the dependency labels identify the subject, verb, object, etc.
I think most of the time the object will be the key concept of the sentence.
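A sketch of reading subject and object off a dependency parse, using stanza here as a stand-in for either parser:

```python
import stanza

# stanza.download('en')  # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")
doc = nlp("Does chicken taste like turkey?")
for word in doc.sentences[0].words:
    print(word.text, word.deprel)  # e.g. chicken nsubj, taste root, turkey obl
```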
You should look at Google's Cloud Natural Language API. It's their NLP service.
https://cloud.google.com/natural-language/
A simple solution is to tag your sentence with a part-of-speech tagger (e.g., from the NLTK library for Python) and then match against some predefined part-of-speech patterns from which it's clear where the main subject of the sentence is.
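For example, with NLTK's RegexpParser (the chunk pattern is a made-up illustration; you would tune your own patterns):

```python
import nltk

tagged = nltk.pos_tag(nltk.word_tokenize("Does chicken taste like turkey?"))
pattern = "CANDIDATE: {<JJ>*<NN.*>+}"  # runs of nouns, optionally with adjectives
tree = nltk.RegexpParser(pattern).parse(tagged)
for subtree in tree.subtrees(lambda t: t.label() == "CANDIDATE"):
    print(" ".join(word for word, tag in subtree.leaves()))
```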
One option is to look into something like this as a first step:
http://www.abisource.com/projects/link-grammar/
But how you derive the topic from these links is another problem in itself. Still, as AbiWord uses the link grammar to detect grammatical problems, you might be able to use it to determine the topic.
By "primary topic" you're referring to what is termed the subject of the sentence.
The subject can be identified by understanding a sentence through natural language processing.
The answer to this question is the same as that for How to determine subject, object and other words? - this is a currently unsolved problem.
I want to colorize the words in a text according to their classification (category/declension etc.). I have a fully working dictionary, but the problem is that there is a lot of ambiguity. "foedere", for instance, can be a form of either the verb "fornicate" or the noun "treaty".
What are the general strategies for solving these ambiguities or generating good guesses?
Thanks!
The general strategy is to first run a part-of-speech tagger on the data to determine the word category (noun, verb, etc.). That, however, requires data (context statistics) and tools. This research paper may be a starting point.
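If you can get (or hand-build) even a small annotated Latin corpus, a simple back-off tagger in NLTK is one way to generate context-based guesses. A sketch; the two training sentences are made-up placeholders:

```python
import nltk

train = [  # hypothetical (word, tag) data from an annotated Latin corpus
    [("foedus", "NOUN"), ("ictum", "VERB"), ("est", "AUX")],
    [("foedere", "NOUN"), ("violato", "VERB")],
]
# Bigram context backing off to per-word counts, backing off to a default.
t0 = nltk.DefaultTagger("NOUN")
t1 = nltk.UnigramTagger(train, backoff=t0)
t2 = nltk.BigramTagger(train, backoff=t1)
print(t2.tag(["foedere", "icto"]))  # guesses, not certainties
```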