Is there a wildcard entity in Rasa NLU? - rasa-nlu

I need to extract free-text entities, for example:
"Can you research for coconut nut is not a nut"
Here, the entity should be "coconut nut is not a nut".
So there is no precise entity. Wit, Dialogflow, and LUIS use wildcards for this (#sys.any, wit/local_search_query, ...).
Is there a wildcard like this in Rasa NLU? I cannot find a list of prebuilt entities in the documentation...
Thank you.

Rasa NLU doesn't treat this any differently from other entities: entities can span multiple words, so you can annotate it as:
Can you research for [coconut nut is not a nut](query)
(in Markdown format)
For built-in entities you can use the Duckling and spaCy NER components.
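For example, a Rasa NLU training file in Markdown format might look like this (the intent name `search` and entity name `query` are illustrative, not required names):

```md
## intent:search
- Can you research for [coconut nut is not a nut](query)
- please search for [the history of coconuts](query)
- look up [why the sky is blue](query)
```

Any contiguous span of words can be bracketed as one entity, so the whole free-text query becomes a single `query` value.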

Related

Are there any alternate ways other than Named Entity Recognition to extract event names from sentences?

I'm a newbie to NLP and I'm working on NER using OpenNLP. I have a sentence like "We have a dinner party today". Here "dinner party" is an event type. Similarly, consider the sentence "We have a room reservation"; here "room reservation" is an event type. My goal is to extract such phrases from sentences and label them as "Event_types" in the final output. This can be fairly achieved by creating custom NER models, annotating sentences with proper tags in the training dataset. But the event types can be heterogeneous and random, and hence it is very hard to label all possible patterns (i.e. event types can be anything: "security meeting", "family function", "parents teachers meeting", etc.). So I'm looking for an alternate way to solve this problem. Immediate response would be appreciated. Thanks! :)
Basically you have two options:
1) A list-based approach, where you have lists of entities you will extract from text. To handle heterogeneous language use, you can train an embedding (e.g. Word2Vec or FastText) to identify contextually similar phrases for your list.
2) Train a custom CRF with data you have annotated (this obviously requires that you annotate a bunch of sentences with corresponding tags).
I guess the ideal solution really depends on the data and people's willingness to annotate it.
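A minimal sketch of the list-based approach (option 1) in Python; the phrase list and function name here are illustrative, and a real system would grow the list by querying a Word2Vec/FastText model for neighbours of the seed phrases:

```python
# List-based event extraction: scan an utterance for known multi-word
# event phrases. In practice you would expand EVENT_PHRASES with
# embedding-based lookups rather than maintaining it by hand.
EVENT_PHRASES = {"dinner party", "room reservation", "security meeting"}

def extract_events(sentence, phrases=EVENT_PHRASES):
    """Return every known event phrase that occurs in the sentence."""
    lowered = sentence.lower()
    return [p for p in sorted(phrases) if p in lowered]

print(extract_events("We have a dinner party today"))  # ['dinner party']
```

This sidesteps annotation entirely, at the cost of only ever finding phrases that are (or are similar to) ones already on the list.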

Convention for creating good data set for RASA NER_CRF

I am trying to create a dataset for training Rasa ner_crf for one type of entity. Please let me know the minimum number of sentences / variations in sentence structure needed for a good result. When I have only one example of each possible sentence form, ner_crf does not give good results.
Rasa entity extraction depends heavily on the pipeline you have defined. It also depends on the language model and tokenizer, so make sure you use a good tokenizer. For normal English utterances, try using tokenizer_spacy before ner_crf. Also try ner_spacy.
In my experience, 5 to 10 variations of utterances for each case gave a decent result to start with.
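For reference, a Rasa NLU (0.x-style) pipeline config with the spaCy tokenizer ahead of the CRF extractor might look like this (component names follow the old-style pipeline; adjust for your Rasa version):

```yaml
language: "en"
pipeline:
- name: "nlp_spacy"        # loads the spaCy language model
- name: "tokenizer_spacy"  # spaCy-based tokenization
- name: "ner_crf"          # CRF entity extractor
- name: "ner_spacy"        # spaCy's pretrained NER, for comparison
```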

How do you train LUIS to recognize general nouns?

I have a bot that was initially based on the Zummer example.
I would like the Search intent to pick up practically any topic you could search for as an entity.
I tried training using several example phrases but it became apparent that although the intent is correctly detected, the ArticleTopic entity only picks up the specific nouns provided as examples.
I also tried creating a regex entity using .* but this matches every complete utterance.
Is there a general approach to tell LUIS to capture some part of an utterance regardless of its contents?
Examples of what I would like to support:
Search for *, What is *, What are *, Tell me about *, etc.
You should use patterns together with the entity type that is specific to patterns: Pattern.any. This entity returns all the text in the span where the entity has been marked.
Your patterns would look something like this:
Search for {ArticleTopic}
What is {ArticleTopic}
What are {ArticleTopic}
This issue can be covered by the Patterns feature (using Pattern.any), which helps in labeling the noun phrase that follows a specific pattern.
Note that if you add Pattern.any entities to your LUIS app, you can't label utterances with these entities; they are only valid in patterns. The Pattern.any feature is precisely what resolves the issue of multi-word entity handling. I have reproduced your issue and it works. Hope this helps!!

Sentence segmentation with annotated corpus

I have a custom annotated corpus, in OpenNLP format. Ex:
<START:Person> John <END> went to <START:Location> London <END>. He visited <START:Organisation> ACME Co <END> in the afternoon.
What I need is to segment sentences from this corpus. But it won't always work as expected due to the annotations.
How can I do it without losing the entity annotations?
I am using OpenNLP.
If you want to create multiple NLP models for OpenNLP, you need a different training format for each:
The tokenizer requires a training format
The sentence detector requires a training format
The name finder requires a training format
Therefore, you need to manage these different annotation layers in some way.
I created an annotation tool and a Maven plugin which help you do this; have a look here. All information can be stored in a single file, and the Maven plugin will generate the NLP models for you.
Let me know if you have any further questions.
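One pragmatic way to keep both layers is to derive the plain text from the annotated corpus: train the sentence detector on the stripped text, and keep the tagged version for name-finder training. A minimal sketch (the regex and function name are my own, not part of OpenNLP):

```python
import re

# Remove OpenNLP name-finder tags (<START:type> ... <END>) so the
# remaining plain text can be fed to sentence-detector training, while
# the original tagged file is kept for name-finder training.
TAG = re.compile(r"<START:[^>]+>\s?|\s?<END>")

def strip_annotations(line):
    return TAG.sub("", line)

tagged = "<START:Person> John <END> went to <START:Location> London <END>."
print(strip_annotations(tagged))  # John went to London.
```

Since the tags are only removed, never moved, sentence boundaries found on the stripped text line up with the annotated file line by line.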

OpenNLP, Training Named Entity Recognition on unsupported languages: clarifications needed

I want to experiment NER on a specific domain, that is location names extraction from travel offers in Italian language.
So far I've gathered that I need to prepare the training set myself, so I'm going to put the
<START:something> ... <END>
tags in some offers from my training set.
But looking at OpenNLP documentation on how to train for NER, I ended up in having a couple of questions:
1) When defining the START/END tags, am I free to use whatever name inside the tags (where I wrote "something" a few lines above), or is there a restricted set to be bound by?
2) I noticed that the call to the training tool
opennlp TokenNameFinderTrainer
takes a string representing the language as its first argument. What is that for? Considering I want to train a model on Italian, which is NOT supported, is there any additional task to be done before I can train for NER?
1) Yes, you can specify multiple types. If the training file contains multiple types, the created model will also be able to detect those multiple types.
2) The "lang" parameter has the same meaning/use as in other commands (e.g. opennlp TokenizerTrainer -lang it ...).
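For example, a training invocation for an Italian model might look like this (the data and model file names are illustrative):

```shell
opennlp TokenNameFinderTrainer -lang it -encoding UTF-8 \
  -data it-travel-locations.train -model it-travel-locations.bin
```

The language code mainly identifies the model; for an unsupported language the main work is providing enough annotated training sentences in that language.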
