I just started using the Stanford Parser, but I don't understand the tags very well. This might be a stupid question, but can anyone tell me what the SBARQ and SQ tags represent, and where I can find a complete list of them? I know what the Penn Treebank tags look like, but these are slightly different.
Sentence: What is the highest waterfall in the United States ?
(ROOT
(SBARQ
(WHNP (WP What))
(SQ (VBZ is)
(NP
(NP (DT the) (JJS highest) (NN waterfall))
(PP (IN in)
(NP (DT the) (NNP United) (NNPS States)))))
(. ?)))
I have looked at the Stanford Parser website and read a few of the journals listed there, but there is no explanation of the tags mentioned earlier. I found a manual describing all the dependencies used, but it doesn't explain what I am looking for. Thanks!
This reference looks to have an extensive list; I'm not sure whether it is complete.
Specifically, it lists the ones you're asking about as:
SBARQ - Direct question introduced by a wh-word or a wh-phrase. Indirect
questions and relative clauses should be bracketed as SBAR, not SBARQ.
SQ - Inverted yes/no question, or main clause of a wh-question,
following the wh-phrase in SBARQ.
To see the entire list, just print the tagIndex of the parser:
LexicalizedParser lp = LexicalizedParser.loadModel();
System.out.println(lp.tagIndex); // print the tag index
Hi, I want to apply a filter to the Strapi API with a combination of AND and OR, but I can't seem to get it working.
Situation:
I want to filter on a situation like this:
(
(tag OR tag2 OR tag3) AND
(
(title CONTAINSI TEXT_FILTER_HERE) OR
(body CONTAINSI TEXT_FILTER_HERE) OR
(introduction CONTAINSI TEXT_FILTER_HERE)
)
)
(note that I have abbreviated the tag conditions; those should also use containsi)
I have tried something like the example below and many others, but it makes all of the conditions an OR, so I find too many entries.
{{host}}/api/blog?sort=publishedAt%3Adesc
&populate=Tags.tags&populate=Image
&pagination[page]=1
&pagination[pageSize]=25
&locale=en
&filters[$or][4][Tags][tags][Tag][$containsi]=TAG1
&filters[$or][6][Tags][tags][Tag][$containsi]=TAG2
&filters[$or][8][Tags][tags][Tag][$containsi]=TAG3
&filters[$or][3][Tags][tags][Tag][$containsi]=TAG4
&filters[$or][5][Tags][tags][Tag][$containsi]=TAG5
&filters[$or][7][Tags][tags][Tag][$containsi]=TAG6
&filters[$or][0][title][$containsi]=FILTER_TEXT_HERE
&filters[$or][2][body][$containsi]=FILTER_TEXT_HERE
&filters[$or][1][introduction][$containsi]=FILTER_TEXT_HERE
(made it multiline for readability)
Is it possible to do this in Strapi?
I can get it to work with multiple ORs and just one (1) AND, but not with ((OR OR) AND (OR OR)):
...
&filters[$or][4][Tags][tags][Tag][$containsi]={{tag1}}
&filters[$or][6][Tags][tags][Tag][$containsi]={{tag2}}
&filters[$or][8][Tags][tags][Tag][$containsi]={{tag3}}
&filters[$or][3][Tags][tags][Tag][$containsi]={{tag4}}
&filters[$or][5][Tags][tags][Tag][$containsi]={{tag5}}
&filters[$or][7][Tags][tags][Tag][$containsi]={{tag6}}
&filters[$and][0][title][$containsi]={{filter}}
or
...
&filters[$or][4][Tags][tags][Tag][$containsi]={{tag1}}
&filters[$or][6][Tags][tags][Tag][$containsi]={{tag2}}
&filters[$or][8][Tags][tags][Tag][$containsi]={{tag3}}
&filters[$or][3][Tags][tags][Tag][$containsi]={{tag4}}
&filters[$or][5][Tags][tags][Tag][$containsi]={{tag5}}
&filters[$or][7][Tags][tags][Tag][$containsi]={{tag6}}
&filters[title][$containsi]={{filter}}
Any help will be appreciated.
I think I have actually found the answer. I could not find it anywhere in the documentation (https://docs.strapi.io/developer-docs/latest/developer-resources/database-apis-reference/rest/filtering-locale-publication.html#complex-filtering).
{{host}}/api/blog?sort=publishedAt%3Adesc
&populate=Tags.tags
&populate=Image
&pagination[page]=1
&pagination[pageSize]=25
&locale=en
&filters[$and][0][$or][0][title][$containsi]={{filter}}
&filters[$and][0][$or][1][introduction][$containsi]={{filter}}
&filters[$and][0][$or][2][body][$containsi]={{filter}}
&filters[$and][1][$or][0][Tags][tags][Tag][$containsi]={{tag1}}
&filters[$and][1][$or][1][Tags][tags][Tag][$containsi]={{tag2}}
&filters[$and][1][$or][2][Tags][tags][Tag][$containsi]={{tag3}}
&filters[$and][1][$or][3][Tags][tags][Tag][$containsi]={{tag4}}
&filters[$and][1][$or][4][Tags][tags][Tag][$containsi]={{tag5}}
&filters[$and][1][$or][5][Tags][tags][Tag][$containsi]={{tag6}}
Note the [NUMBER] sections grouping the ANDs and ORs.
Once you know it, it looks logical :-)
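For anyone hitting the same wall: this bracketed syntax is much easier to generate than to hand-write. Strapi's REST docs build the filter as a nested object and serialize it with the qs library; the sketch below is a minimal dependency-free stand-in just to show how the $and/$or object maps onto the bracketed query string. The helper name toQuery is my own, not a Strapi API:

```javascript
// Minimal sketch: turn a nested Strapi-style filter object into the
// bracketed query-string syntax. (In a real app you would likely use
// the `qs` library, as the Strapi docs do; this has no dependencies.)
function toQuery(obj, prefix = "") {
  const parts = [];
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}[${key}]` : key;
    if (value !== null && typeof value === "object") {
      parts.push(...toQuery(value, path)); // recurse into objects and arrays
    } else {
      parts.push(`${path}=${encodeURIComponent(value)}`);
    }
  }
  return parts;
}

// (title OR introduction OR body) AND (tag1 OR tag2)
const filter = {
  filters: {
    $and: [
      {
        $or: [
          { title: { $containsi: "FILTER_TEXT_HERE" } },
          { introduction: { $containsi: "FILTER_TEXT_HERE" } },
          { body: { $containsi: "FILTER_TEXT_HERE" } },
        ],
      },
      {
        $or: [
          { Tags: { tags: { Tag: { $containsi: "TAG1" } } } },
          { Tags: { tags: { Tag: { $containsi: "TAG2" } } } },
        ],
      },
    ],
  },
};

console.log(toQuery(filter).join("&"));
// filters[$and][0][$or][0][title][$containsi]=FILTER_TEXT_HERE&...
```

The array indices become the [NUMBER] sections, so the two [$or] groups end up under [$and][0] and [$and][1], exactly like the working URL.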
I am writing my first Prolog program and am having some difficulties with it; I was wondering if anyone could help me out.
I am writing a program that needs to follow these rules:
in verb phrases, noun phrases come before transitive verbs;
subjects (nominative noun phrases) are followed by ga;
direct objects (accusative noun phrases) are followed by o.
It must be able to form these sentences with the words given in the code:
Adamu ga waraimasu (Adam laughs)
Iivu ga nakimasu (Eve cries)
Adamu ga Iivu o mimasu (Adam watches Eve)
Iivu ga Adamu o tetsudaimasu (Eve helps Adam)
Here is my code. It is mostly complete, except that I don't know if the rules in it are correct:
japanese([adamu],[nounphrase],[adam],[entity]).
japanese([iivu],[nounphrase],[eve],[entity]).
japanese([waraimasu],[verb,intransitive],[laughs],[property]).
japanese([nakimasu],[verb,intransitive],[cries],[property]).
japanese([mimasu],[verb,transitive],[watches],[relation]).
japanese([tetsudaimasu],[verb,transitive],[helps],[relation]).
japanese(A,[verbphrase],B,[property]):-
    japanese(A,[verb,intransitive],B,[property]).
japanese(A,[nounphrase,accusative],B,[entity]):-
    japanese(C,[nounphrase],B,[entity]),
    append([ga],C,A).
japanese(A,[verbphrase],B,[property]):-
    japanese(C,[verb,transitive],D,[relation]),
    japanese(E,[nounphrase,accusative],F,[entity]),
    append(C,E,A),
    append(D,F,B).
japanese(A,[sentence],B,[proposition]):-
    japanese(C,[nounphrase],D,[entity]),
    japanese(E,[verbphrase],F,[property]),
    append(E,C,A),
    append(F,D,B).
I'm using the latest version [3.8.0] of CoreNLP with the Python wrapper [py-corenlp], and I realized there is some inconsistency between the output I get from CoreNLP when I annotate with the following annotators: tokenize, ssplit, pos, depparse, parse, and the output of the Online Demo. What's more, Stanford's Parser, both when I call it in my code and when I run it online, gives me the same results as CoreNLP.
For instance, I have the following question (borrowed from the Free917 question corpus):
at what institutions was Marshall Hall a professor
Using CoreNLP I get the following parsing:
(ROOT
  (SBAR
    (WHPP (IN at)
      (WHNP (WDT what)))
    (S
      (NP (NNS institutions))
      (VP (VBD was)
        (NP
          (NP (NNP Marshall) (NNP Hall))
          (NP (DT a) (NN professor)))))))
Same with Stanford's Parser:
[Tree('ROOT', [Tree('SBAR', [Tree('WHPP', [Tree('IN', ['at']), Tree('WHNP', [Tree('WP', ['what'])])]), Tree('S', [Tree('NP', [Tree('NNS', ['institutions'])]), Tree('VP', [Tree('VBD', ['was']), Tree('NP', [Tree('NP', [Tree('NNP', ['Marshall']), Tree('NNP', ['Hall'])]), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['professor'])])])])])])])]
The Online Demo is the correct version though:
Online Demo Parsing
How can I get the results I get using the Online Demo?
Thank you in advance!
The demo runs the shift-reduce parser, which is both faster and more accurate, at the expense of a [much] larger serialized model size. See https://nlp.stanford.edu/software/srparser.shtml
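To get the same behavior from your own pipeline, one way (assuming you have downloaded the separate shift-reduce models jar from that page and put it on the classpath) is to point the parse annotator at the shift-reduce model in your CoreNLP properties:

```properties
# Use the shift-reduce constituency parser instead of the default PCFG model.
# Requires the stanford-srparser models jar on the classpath.
annotators = tokenize, ssplit, pos, parse
parse.model = edu/stanford/nlp/models/srparser/englishSR.ser.gz
```

With py-corenlp you can pass the same parse.model key in the properties dict of annotate().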
I am working with the edu.stanford.nlp.semgrex and edu.stanford.nlp.trees.semgraph packages and am looking for a way to match nodes on a text value other than the lemma: directive.
I couldn't find all the possible attribute names in the javadoc for SemgrexPattern, only those for lemma, tag, and the relational operators. Is there a comprehensive list available?
For example, in the following sentence
My take-home pay is $20.
extracting the 'take-home' node is not possible:
(SemgrexPattern.compile( "{lemma:take-home}"))
.matcher( "My take-home pay is $20.").find()
yields false, because take-home is deemed not to be a lemma.
What do I need to do to match nodes with non-lemma, arbitrary text?
Thanks for any advice or comment.
Sorry - I realize that {word:take-home} would work in the example above.
Thanks..
Has anyone ever tried parsing out phrasal verbs with Stanford NLP?
The problem is with separable phrasal verbs, e.g. climb up, do over: "We climbed that hill up." "I have to do this job over."
The first phrase looks like this in the parse tree:
(VP
(VBD climbed)
(ADVP
(IN that)
(NP (NN hill)
)
)
(ADVP
(RB up)
)
)
the second phrase:
(VB do)
(NP
(DT this)
(NN job)
)
(PP
(IN over)
)
So it seems like reading the parse tree would be the right way, but how do I know that a verb is going to be phrasal?
Dependency parsing, dude. Look at the prt (phrasal verb particle) dependency in both sentences. See the Stanford typed dependencies manual for more info.
nsubj(climbed-2, We-1)
root(ROOT-0, climbed-2)
det(hill-4, that-3)
dobj(climbed-2, hill-4)
prt(climbed-2, up-5)
nsubj(have-2, I-1)
root(ROOT-0, have-2)
aux(do-4, to-3)
xcomp(have-2, do-4)
det(job-6, this-5)
dobj(do-4, job-6)
prt(do-4, over-7)
The Stanford parser gives you very nice dependency parses. I have code for programmatically accessing these if you need it: https://gist.github.com/2562754
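If you end up consuming the plain-text dependency output, picking out phrasal verbs is then just a matter of filtering for prt lines. A small sketch of the idea (the helper name phrasalVerbs is my own, and it assumes the prt(verb-i, particle-j) line format shown above, with no internal hyphens in the tokens):

```javascript
// Extract "verb particle" pairs from CoreNLP-style typed-dependency lines
// by matching the prt (phrasal verb particle) relation.
function phrasalVerbs(depLines) {
  const result = [];
  for (const line of depLines) {
    // A prt line looks like: prt(climbed-2, up-5)
    const m = line.match(/^prt\(([^-]+)-\d+,\s*([^-]+)-\d+\)$/);
    if (m) result.push(`${m[1]} ${m[2]}`);
  }
  return result;
}

const deps = [
  "nsubj(climbed-2, We-1)",
  "root(ROOT-0, climbed-2)",
  "det(hill-4, that-3)",
  "dobj(climbed-2, hill-4)",
  "prt(climbed-2, up-5)",
];
console.log(phrasalVerbs(deps)); // [ 'climbed up' ]
```

The same filter applied to the second sentence's dependencies picks out "do over" from prt(do-4, over-7).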