I am working on a project that uses StanfordNLP. One of the functions in the project is to extract all nouns from a piece of text and lemmatize each noun. I am extracting the nouns using the code below:
Properties props = new Properties();
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, parse, natlog, openie");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
Annotation document = new Annotation(text);
pipeline.annotate(document);
List<CoreMap> sentences = document.get(SentencesAnnotation.class);
for (CoreMap sentence : sentences) {
    SemanticGraph dependencies = sentence.get(BasicDependenciesAnnotation.class);
    List<String> Nouns = Extractnouns(dependencies.typedDependencies(), sentence);
}
private List<String> Extractnouns(Collection<TypedDependency> tdl, CoreMap sentence) {
    List<String> concepts = new ArrayList<String>();
    for (TypedDependency td : tdl) {
        String govlemma = td.gov().lemma();
        String deplemma = td.dep().lemma();
        String deptag = td.dep().tag();
        String govtag = td.gov().tag();
        if (deptag != null && deptag.contains("NN")) {
            concepts.add(deplemma);
        }
        if (govtag != null && govtag.contains("NN")) {
            concepts.add(govlemma);
        }
    }
    return concepts;
}
It works as expected, but for some words the lemmatization fails. I observed that some nouns that appear as the first word of a sentence have this problem. Example: "Protons and electrons both carry an electrical charge." Here the word "Protons" is not converted to "proton" when the lemma annotation is applied. The same happens with some other nouns.
Could you please tell me a solution for this problem?
Unfortunately this is a part-of-speech tagging error. "Protons" gets labelled "NNP" rather than "NNS", so lemmatization isn't performed on it.
You could try running on a lower-cased version of the text; I note that in that case the tagger does the right thing.
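For example, a minimal sketch of that workaround (the class name is just for illustration, and the annotator list is trimmed to what lemmatization needs):
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;
import java.util.Properties;

public class LowercaseLemmaDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Annotate a lower-cased copy of the text; "protons" is then more likely
        // to be tagged NNS and therefore lemmatized to "proton".
        String text = "Protons and electrons both carry an electrical charge.";
        Annotation document = new Annotation(text.toLowerCase());
        pipeline.annotate(document);

        for (CoreMap sentence : document.get(CoreAnnotations.SentencesAnnotation.class)) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                System.out.println(token.word() + "\t" + token.tag() + "\t" + token.lemma());
            }
        }
    }
}
The trade-off is that lower-casing can hurt case-sensitive annotators such as ner and parse, so you may prefer to fall back to a lower-cased run only for sentences where a sentence-initial noun failed to lemmatize.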
The code below works well for ordinary nouns, but I have text where a single noun can contain hyphens or slashes. What should I do to accommodate that?
"c/h", for instance, is a valid abbreviation, as in "c/h was born in Paris".
But I get
(c,NN)
(/,HYPH)
(h,NN)
whereas I would like (NNP), or at least (NP), for the whole token.
Here's the code I've run:
import edu.stanford.nlp.ling._
import edu.stanford.nlp.pipeline._
import scala.collection.JavaConverters._
import java.util._

val text = "c/h was born in Paris";
val props = new Properties();
// set the list of annotators to run
props.setProperty("annotators", "tokenize,pos");
// build pipeline
val pipeline = new StanfordCoreNLP(props);
// create a document object
val document = pipeline.processToCoreDocument(text);
// display tokens
document.tokens().asScala.foreach(tok => {
  println(tok.word(), tok.tag())
})
Stanford CoreNLP version 3.9.1
I have a problem getting StanfordCoreNLPClient to work the same way as StanfordCoreNLP when doing sentiment analysis.
public class Test {
    public static void main(String[] args) {
        String text = "This server doesn't work!";
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, sentiment");
        // If I uncomment this line, and comment out the next one, it works
        // StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        StanfordCoreNLPClient pipeline = new StanfordCoreNLPClient(props, "http://localhost", 9000, 2);
        Annotation annotation = new Annotation(text);
        pipeline.annotate(annotation);
        CoreDocument document = new CoreDocument(annotation);
        CoreSentence sentence = document.sentences().get(0);
        // outputs null when using StanfordCoreNLPClient
        System.out.println(RNNCoreAnnotations.getPredictions(sentence.sentimentTree()));
        // throws a NullPointerException when using StanfordCoreNLPClient (the reason, of course,
        // is that it uses the same method I called above, I assume)
        System.out.println(RNNCoreAnnotations.getPredictionsAsStringList(sentence.sentimentTree()));
    }
}
Output using StanfordCoreNLPClient pipeline = new StanfordCoreNLPClient(props, "http://localhost", 9000, 2):
null
Exception in thread "main" java.lang.NullPointerException
at edu.stanford.nlp.neural.rnn.RNNCoreAnnotations.getPredictionsAsStringList(RNNCoreAnnotations.java:68)
at tomkri.mastersentimentanalysis.preprocessing.Test.main(Test.java:35)
Output using StanfordCoreNLP pipeline = new StanfordCoreNLP(props):
Type = dense , numRows = 5 , numCols = 1
0.127
0.599
0.221
0.038
0.015
[0.12680336652661395, 0.5988695516384742, 0.22125584263055106, 0.03843574738131668, 0.014635491823044227]
Annotations other than sentiment work in both cases (at least those I have tried).
The server starts fine, and I am able to use it from my web browser. When I use it there I also get sentiment scores (for each subtree of the parse) in JSON format.
My solution, in case anyone else needs it.
I tried to get the required annotation by making an HTTP request to the server and asking for a JSON response:
HttpResponse<JsonNode> jsonResponse = Unirest.post("http://localhost:9000")
        .queryString("properties", "{\"annotators\":\"tokenize, ssplit, pos, lemma, ner, parse, sentiment\",\"outputFormat\":\"json\"}")
        .body(text)
        .asJson();
String sentTreeStr = jsonResponse.getBody().getObject()
        .getJSONArray("sentences").getJSONObject(0).getString("sentimentTree");
System.out.println(sentTreeStr); // prints out sentiment values for the tree and all subtrees
But not all annotation data is available. For example, you don't get the probability distribution over all possible sentiment values, only the probability of the most likely sentiment (the one with the highest probability).
If you need the full distribution, this is a solution:
GenericAnnotationSerializer serializer = new GenericAnnotationSerializer();
try {
    HttpResponse<InputStream> inStream = Unirest.post("http://localhost:9000")
            .queryString(
                    "properties",
                    "{\"annotators\":\"tokenize, ssplit, pos, lemma, ner, parse, sentiment\","
                    + "\"outputFormat\":\"serialized\","
                    + "\"serializer\": \"edu.stanford.nlp.pipeline.GenericAnnotationSerializer\"}"
            )
            .body(text)
            .asBinary();
    ObjectInputStream in = new ObjectInputStream(inStream.getBody());
    Pair<Annotation, InputStream> deserialized = serializer.read(in);
    Annotation annotation = deserialized.first();
    // And now we are back to the same state as if we were not running CoreNLP as a server.
    CoreDocument doc = new CoreDocument(annotation);
    CoreSentence sentence = doc.sentences().get(0);
    // Prints the same output as shown in the question
    System.out.println(RNNCoreAnnotations.getPredictions(sentence.sentimentTree()));
} catch (UnirestException | IOException | ClassNotFoundException ex) {
    Logger.getLogger(SentimentTargetExtractor.class.getName()).log(Level.SEVERE, null, ex);
}
I have a collection of noun phrases, say around 10,000 of them. I want to check every new input text against this NP collection and extract the sentences that contain any of these NPs. I don't want to loop over every word because that makes my code dead slow. I am using Java and Stanford CoreNLP.
A quick and easy way to do this is to use RegexNER to identify all examples of anything in your dictionary, and then check for non "O" NER tags in the sentence.
package edu.stanford.nlp.examples;

import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.util.*;

import java.util.*;
import java.util.stream.Collectors;

public class FindSentencesWithPhrase {

    public static boolean checkForNamedEntity(CoreMap sentence) {
        for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
            if (token.ner() != null && !token.ner().equals("O")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,regexner");
        props.setProperty("regexner.mapping", "phrases.rules");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        String exampleText = "This sentence contains the phrase \"ice cream\"." +
                " This sentence is not of interest. This sentence contains pizza.";
        Annotation ann = new Annotation(exampleText);
        pipeline.annotate(ann);
        for (CoreMap sentence : ann.get(CoreAnnotations.SentencesAnnotation.class)) {
            if (checkForNamedEntity(sentence)) {
                System.out.println("---");
                System.out.println(sentence.get(CoreAnnotations.TokensAnnotation.class)
                        .stream().map(token -> token.word()).collect(Collectors.joining(" ")));
            }
        }
    }
}
The file "phrases.rules" should look like this:
ice cream PHRASE_OF_INTEREST MISC 1
pizza PHRASE_OF_INTEREST MISC 1
I'm experimenting with Stanford NLP's TokensRegex and trying to find dimensions (e.g. 100x120) in a text. My plan is to first retokenize the input to split such tokens further (using the example provided in retokenize.rules.txt) and then to search for the new pattern.
After the retokenization, however, only null values are left in place of the original strings:
The top level annotation
[Text=100x120 Tokens=[null-1, null-2, null-3] Sentences=[100x120]]
The retokenization seems to work fine (3 tokens in result), but the values are lost. What can I do to maintain the original values in the tokens list?
My retokenize.rules.txt file is (as in the demo):
tokens = { type: "CLASS", value:"edu.stanford.nlp.ling.CoreAnnotations$TokensAnnotation" }
options.matchedExpressionsAnnotationKey = tokens;
options.extractWithTokens = TRUE;
options.flatten = TRUE;
ENV.defaults["ruleType"] = "tokens"
ENV.defaultStringPatternFlags = 2
ENV.defaultResultAnnotationKey = tokens
{ pattern: ( /\d+(x|X)\d+/ ), result: Split($0[0], /x|X/, TRUE) }
The main method:
public static void main(String[] args) throws IOException {
    // ...
    text = "100x120";
    Properties properties = new Properties();
    properties.setProperty("tokenize.language", "de");
    properties.setProperty("annotators", "tokenize,retokenize,ssplit,pos,lemma,ner");
    properties.setProperty("customAnnotatorClass.retokenize", "edu.stanford.nlp.pipeline.TokensRegexAnnotator");
    properties.setProperty("retokenize.rules", "retokenize.rules.txt");
    StanfordCoreNLP stanfordPipeline = new StanfordCoreNLP(properties);
    runPipeline(stanfordPipeline, text);
}
And the pipeline:
public static void runPipeline(StanfordCoreNLP pipeline, String text) {
    Annotation annotation = new Annotation(text);
    pipeline.annotate(annotation);
    out.println();
    out.println("The top level annotation");
    out.println(annotation.toShorterString());
    // ...
}
Thanks for letting us know. The CoreAnnotations.ValueAnnotation is not being populated, and we'll update TokensRegex to populate that field.
Regardless, you should be able to use TokensRegex to retokenize as you have planned. Most of the pipeline does not depend on the ValueAnnotation and uses the CoreAnnotations.TextAnnotation instead. You can use the CoreAnnotations.TextAnnotation to get the text of the new tokens (each token is a CoreLabel, so you can also access it using token.word()).
See TokensRegexRetokenizeDemo for example code on how to get the different annotations out.
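In the meantime, a minimal sketch of reading the retokenized tokens that way (it assumes the same pipeline setup and imports as in the question; the method name is just for illustration):
public static void printRetokenizedTokens(StanfordCoreNLP pipeline, String text) {
    Annotation annotation = new Annotation(text);
    pipeline.annotate(annotation);
    // TextAnnotation / word() are populated for the new tokens
    // even though ValueAnnotation currently is not.
    for (CoreLabel token : annotation.get(CoreAnnotations.TokensAnnotation.class)) {
        System.out.println(token.get(CoreAnnotations.TextAnnotation.class)
                + " / " + token.word() + " / " + token.tag());
    }
}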
I am doing some tests using WordDelimiterFilter in Solr, but it doesn't preserve the protected list of words that I pass to it. Could you please inspect the code and the output example and suggest which part is missing or used incorrectly?
I am running this code:
private static Analyzer getWordDelimiterAnalyzer() {
    return new Analyzer() {
        @Override
        public TokenStream tokenStream(String fieldName, Reader reader) {
            TokenStream stream = new StandardTokenizer(Version.LUCENE_32, reader);
            WordDelimiterFilterFactory wordDelimiterFilterFactory = new WordDelimiterFilterFactory();
            HashMap<String, String> args = new HashMap<String, String>();
            args.put("generateWordParts", "1");
            args.put("generateNumberParts", "1");
            args.put("catenateWords", "1");
            args.put("catenateNumbers", "1");
            args.put("catenateAll", "0");
            args.put("luceneMatchVersion", Version.LUCENE_32.name());
            args.put("language", "English");
            args.put("protected", "protected.txt");
            wordDelimiterFilterFactory.init(args);
            ResourceLoader loader = new SolrResourceLoader(null, null);
            wordDelimiterFilterFactory.inform(loader);
            /*
            List<String> protectedWords = new ArrayList<String>();
            protectedWords.add("good bye");
            protectedWords.add("hello world");
            wordDelimiterFilterFactory.inform(new LinesMockSolrResourceLoader(protectedWords));
            */
            return wordDelimiterFilterFactory.create(stream);
        }
    };
}
input text:
hello world
good bye
what is your plan for future?
protected strings:
good bye
hello world
output:
(hello,startOffset=0,endOffset=5,positionIncrement=1,type=)
(world,startOffset=6,endOffset=11,positionIncrement=1,type=)
(good,startOffset=12,endOffset=16,positionIncrement=1,type=)
(bye,startOffset=17,endOffset=20,positionIncrement=1,type=)
(what,startOffset=21,endOffset=25,positionIncrement=1,type=)
(is,startOffset=26,endOffset=28,positionIncrement=1,type=)
(your,startOffset=29,endOffset=33,positionIncrement=1,type=)
(plan,startOffset=34,endOffset=38,positionIncrement=1,type=)
(for,startOffset=39,endOffset=42,positionIncrement=1,type=)
(future,startOffset=43,endOffset=49,positionIncrement=1,type=)
You are using the StandardTokenizer, which (among other things) tokenizes at whitespace, so "hello world" will always be split into "hello" and "world" before the WordDelimiterFilter ever sees it.
TokenStream stream = new StandardTokenizer(Version.LUCENE_32, reader);
See the Lucene documentation:
public final class StandardTokenizer extends Tokenizer
A grammar-based tokenizer constructed with JFlex. This should be a good tokenizer for most European-language documents:
- Splits words at punctuation characters, removing punctuation. However, a dot that's not followed by whitespace is considered part of a token.
- Splits words at hyphens, unless there's a number in the token, in which case the whole token is interpreted as a product number and is not split.
- Recognizes email addresses and internet hostnames as one token.
The WordDelimiterFilter's protected word list operates on individual tokens, and is meant for cases like these (a sample protected.txt is sketched below):
ISBN2345677 is split into ISBN 2345677
text2html is not split into text 2 html (because text2html was added to the protected words)
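For those token-level cases the protected file is simply one token per line, so a hypothetical protected.txt for the second example would be:
text2html
and the filter would then leave the token text2html untouched while still splitting tokens such as ISBN2345677.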
If you really want to do something like what you describe, you could use the KeywordTokenizer instead, but then you have to do the complete splitting yourself.
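A minimal sketch of that direction, mirroring your analyzer (Lucene 3.x API; KeywordTokenizer is org.apache.lucene.analysis.KeywordTokenizer, and the actual splitting logic is deliberately left out because that is the part you would have to write yourself):
private static Analyzer getKeywordAnalyzer() {
    return new Analyzer() {
        @Override
        public TokenStream tokenStream(String fieldName, Reader reader) {
            // KeywordTokenizer emits the entire field value as a single token,
            // so nothing is split at whitespace and multi-word phrases survive.
            TokenStream stream = new KeywordTokenizer(reader);
            // You would chain your own TokenFilter here that splits everything
            // except the phrases on your protected list.
            return stream;
        }
    };
}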