I'm having trouble with a LUIS Intent prediction. The utterance "consumo" should trigger the intent "1529_CONSULTAR_CONSUMO" but LUIS keeps assigning it to a wrong intent, even though the exact utterance is registered as an example to the right intent. How can I fix this situation?
The solution to this problem depends on your model, because the intent that LUIS associates with an utterance, and its score, can be affected by any entity or any utterance added to any intent.
You can try:
- Looking at the intent that "consumo" was wrongly associated with; if some of its utterances are similar to "consumo", maybe the two intents should be merged
- Creating a list entity that contains "consumo" and related values
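Before restructuring anything, it can help to see how close "consumo" is to the competing intent. A minimal C# sketch that queries the published v2 prediction endpoint with verbose=true, which returns a score for every intent (the region, app ID, and key below are placeholders you would replace with your own):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisScoreCheck
{
    static async Task Main()
    {
        // Placeholders: use your own app ID, endpoint key, and region.
        var appId = "<your-app-id>";
        var endpointKey = "<your-endpoint-key>";
        var utterance = Uri.EscapeDataString("consumo");

        // verbose=true makes the v2 endpoint return a score for every intent,
        // not just the top-scoring one.
        var url = $"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                  $"?subscription-key={endpointKey}&verbose=true&q={utterance}";

        using (var client = new HttpClient())
        {
            var json = await client.GetStringAsync(url);
            // Inspect "topScoringIntent" and the full "intents" array in the response.
            Console.WriteLine(json);
        }
    }
}
```

If the right intent scores a close second, merging or re-labeling similar utterances is usually more effective than adding more examples.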
We want to create a chatbot with about 80 intents, where most of them are just questions with direct answers related to HR (benefits, payroll, etc.). It could be done with QnA Maker, but we decided to use LUIS and utterances so that we keep the option to query entities in the future if we need to.
I have tried to create some entities to lift the score of some intents. For example, we have a benefits question about enrolling in sports activities in our company:
Utterances:
How can I enroll in soccer?
What steps do I have to register in tennis lessons?
...
So I have created two entities, one for "ActivityType" (soccer, tennis, etc.) and another for "Enrollment" (register, enrollment, etc.). The latter acts more as a synonym placeholder so I don't have to write that many utterances.
Then the utterances translate into:
How can I {Enrollment} in {ActivityType}
What steps do I have to {Enrollment} in {ActivityType} lessons?
These are my questions:
1) "Enrollment" entity is used to avoid creating that many utterances, does it make sense here or there is something better in LUIS for that?
2) I have tested the system and in some questions, it picks up the two entities (enrollmnet and activitytype) which are only present in a specific intent, but then it assigns in the top another intent which doesn't have at all those entities. I would expect the entities to somewhat lift the probability of selecting the intents that are using it (in this case, there is only one using them, so it should be pretty obvious but is not being selected)
Any help will be greatly appreciated :)
I have implemented a bot with LUIS AI.
One of the entities I use is builtin.personName. If I test it online on the webpage, it returns the name correctly as an entity.
If I use it in my bot, the personName entity is not returned.
The DateTimeV2 entity and all of my own custom entities are there... but not personName.
Has anyone had the same issue, or any ideas on how to fix it?
BR
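For what it's worth, a quick way to see which entities actually reach the bot is to dump them inside an intent handler. This is a minimal sketch assuming the v3 C# Bot Builder SDK; the "Greeting" intent, app ID, and key are placeholders. If builtin.personName shows up in the portal test pane but not here, check that the prebuilt entity was added to the app and that the app was re-trained and re-published, since the bot talks to the published endpoint.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<app-id>", "<subscription-key>")]
[Serializable]
public class MyLuisDialog : LuisDialog<object>
{
    [LuisIntent("Greeting")] // hypothetical intent, just to illustrate
    public async Task Greeting(IDialogContext context, LuisResult result)
    {
        // List every entity type the published endpoint actually returned.
        var types = string.Join(", ", result.Entities.Select(e => e.Type));
        await context.PostAsync($"Entities returned: {types}");
        context.Wait(MessageReceived);
    }
}
```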
Suppose I have an intent "findStuff" that takes things of the form
find xxx
list xxx where something = somevalue
find xxx where something = somevalue
Getting LUIS to understand that "xxx" can be any word seems hard. I defined a "plainWord" entity and a pattern feature with the same name and the value "\w+". I thought that used to work, but it doesn't seem to any more. It recognizes some words it has seen, but it can never seem to deal with "find junk" -- "junk" is never recognized as any entity.
The system for which this is intended is open-ended. Users can add their own types of things that we may "find"...
How extensively trained is your model? You should update your model by labeling users' utterances. I recommend against using generalized entities like a "plainWord" entity; from your description it sounds like this entity is supposed to be applied just to words that occur after "find" and "list". If an entity has not been labeled/applied in many utterances, your model will not catch the words you want it to catch.
If you post your LUIS model I might be able to better help you. You can export the JSON model or provide your application ID to share it.
I have the prebuilt entity temperature, and I want to match it to my intent AskTemperature. Inside AskTemperature I can't add the prebuilt entity. So how do I make the temperature entity belong to the AskTemperature intent?
Thank you!!!
If you provide an example of the labeled utterances, the community may be better equipped to answer your question. That said... once you train your model, it should automatically assign the prebuilt entity to the recognized tokens (words, numbers, etc.).
After training your model, if this recognition doesn't occur, you may need to train with more straightforward utterances (e.g. "Is the temperature in Redmond right now 16 celsius?").
If none of this works, posting relevant portions of your LUIS model may help the SO community assist you.
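As an illustration: prebuilt entities are not attached to a particular intent. The endpoint returns whatever entities it recognizes alongside whichever intent wins, so the intent handler simply reads them from the result. A hedged v3 C# SDK sketch, assuming a LuisDialog<object> subclass like the one earlier in this thread (the handler name mirrors your intent):

```csharp
[LuisIntent("AskTemperature")]
public async Task AskTemperature(IDialogContext context, LuisResult result)
{
    // Prebuilt entities come back with types like "builtin.temperature";
    // they are returned alongside the winning intent, not bound to it.
    var temperature = result.Entities
        .FirstOrDefault(e => e.Type == "builtin.temperature");

    await context.PostAsync(temperature != null
        ? $"Temperature entity found: {temperature.Entity}"
        : "No temperature entity was recognized.");
    context.Wait(MessageReceived);
}
```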
After you train your model/intents, the prebuilt entities should be assigned to those utterances.
I have this utterance: are there room available on 12/21/2016?
When I tried to assign 12/21/2016 to the "datetimeV2" entity directly, I could not find that prebuilt entity; but after I trained the model, it showed up automatically.
Hope this helps!
I am using LuisDialog to communicate with LUIS and have added business logic for each intent. During an actual conversation, LuisDialog sends the utterance directly to LUIS and returns the result to my method.
For a use case, I need to pre-process the utterance before the dialog sends it to LUIS. Is there a way to interrupt and add pre-processing logic?
Thanks for help.
You could override the GetLuisQueryTextAsync method, which is the method responsible for extracting the utterance from the message.
The text obtained from that method is then sent to LUIS (as you can see here).
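For example, a minimal sketch assuming the v3 C# Bot Builder SDK, where the override trims and lower-cases the text before it is sent to LUIS (the preprocessing itself is just a placeholder for your own logic, and the app ID and key are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Connector;

[LuisModel("<app-id>", "<subscription-key>")]
[Serializable]
public class PreprocessingLuisDialog : LuisDialog<object>
{
    // Called before the utterance is sent to LUIS, so whatever normalization
    // happens here is what the LUIS endpoint actually receives.
    protected override Task<string> GetLuisQueryTextAsync(IDialogContext context, IMessageActivity message)
    {
        var preprocessed = (message.Text ?? string.Empty).Trim().ToLowerInvariant();
        return Task.FromResult(preprocessed);
    }
}
```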