LUIS strange behavior - botframework

Recently I created a LUIS app for testing, to see how it works.
As you can see in the next image, the intent "Actividades" (sorry about the Spanish words) has all of those utterances.
When I train LUIS and then test with words like "act" or "acti", it returns the intent Actividades, which is fine. But if I test the words "actividade" or "activida" (for example), it returns the None intent (which has no utterances).
Why does LUIS do this?

Publish the app, test the endpoint, then go to "Review endpoint utterances" in Luis.ai. The utterances that were mapped to None are probably there, indicating that LUIS wants to know which intent they should be mapped to. Map them to the correct intent, train, and publish again, then test; the predictions should now be correct.

I suggest you note Microsoft's advice that you should always populate the 'None' intent with a collection of utterances that are completely off-topic, amounting to at least 10% of all the other utterances, and as much as 20%. That's referred to in a number of places. They say that if you leave None unpopulated, you'll likely get the kind of response you describe here.

Related

Handling typos / misspellings on list entities

What is the best practice approach to handle typos / misspelling on LUIS List Entities?
I have intents in LUIS which use a list entity (specifically company department: HR, Finance, etc.). It is common for users to misspell this in their utterance. LUIS expects an exact match; it doesn't do a "smart" match, and therefore doesn't pick up the misspelled entity.
a) Using Bing Spell Check is not necessarily a good solution, e.g. certain departments are acronyms such as VRPA, and Bing won't correct a typo there.
b) When I used LUIS a year ago, I would pre-process the utterance and use a Levenshtein distance algorithm to fix typos on list entities before feeding them to LUIS.
I would imagine that by now LUIS has some better out of the box way of handling this very common use case.
I'd appreciate input on what the best practice approach is to handle this.
#acambitsis and I exchanged messages via his UserVoice ticket, but I'm going to post the answer here for others.
A combination of Bing Spell Check and simple entities might be what you're looking for (simple entities are machine-learned).
I was able to accomplish something close and attached images.
In entities, I created a simple entity with the role VRPA. In intents, I created the Show Me intent and added the sample utterances "Show me the VRPA" and "Show me the VPRA". I clicked on V**A and selected the Simple Entity:VRPA role. After training, I tried "show me the varp" and it correctly guessed that "varp" was the "Simple:VRPA" entity.
You may also find RegEx entities useful. For acronyms, you could do something like /[vrpa]{4}/i, and then any combination of VRPA/VPRA/VARP/ARVP would match.
I highly recommend reading through the Entity Types and Improve App Performance documentation to see if anything jumps out to solve your particular issues.
This may not do exactly what you're looking for. If not, I'd recommend implementing a fuzzy-matching algorithm of your choice (a rough sketch follows below).
(Screenshots: entities, intents)
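For the fuzzy-matching route, here is a minimal pre-processing sketch in Python, along the lines of the Levenshtein approach mentioned in the question: normalize near-miss department names before sending the utterance to LUIS. The department list, the similarity threshold, and the function names are illustrative assumptions; tune them (or swap in a library) for your own app.

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance."""
        if len(a) < len(b):
            a, b = b, a
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            for j, cb in enumerate(b, start=1):
                current.append(min(previous[j] + 1,                    # deletion
                                   current[j - 1] + 1,                 # insertion
                                   previous[j - 1] + (ca != cb)))      # substitution
            previous = current
        return previous[-1]

    def similarity(a: str, b: str) -> float:
        """1.0 means identical; scaled edit distance so short words aren't over-matched."""
        dist = levenshtein(a.lower(), b.lower())
        return 1.0 - dist / max(len(a), len(b), 1)

    # Hypothetical canonical list-entity values and threshold; adjust for your data.
    DEPARTMENTS = ["HR", "Finance", "Marketing", "VRPA"]
    MIN_SIMILARITY = 0.75

    def normalize_departments(utterance: str) -> str:
        """Replace tokens close to a known department with the canonical value."""
        fixed = []
        for token in utterance.split():
            best = max(DEPARTMENTS, key=lambda d: similarity(token, d))
            fixed.append(best if similarity(token, best) >= MIN_SIMILARITY else token)
        return " ".join(fixed)

    print(normalize_departments("transfer me to the finence department"))
    # -> "transfer me to the Finance department"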

LUIS entity not recognised

I trained my LUIS model to recognize an intent called "getDefinition" with example utterances such as "What does BLANK mean?" or "Can you explain BLANK to me?". It recognizes the intent correctly. I also added an entity called "topic" and trained it to recognize which topic the user is asking about. The problem is that LUIS only recognizes the exact topic the user is asking about if I used that specific term in one of the utterances before.
Does this mean I have to train it with all the possible terms a user can ask about or is there some way to have it recognize it anyway?
For example when I ask "What does blockchain mean" it correctly identifies the entity (topic) as blockchain because the word blockchain is in the utterance. But if I ask the same version of the question about another topic such as "what does mining mean", it doesn't recognize that as the entity.
Using a list or phrase list doesn't seem to solve the problem. I want the bot to eventually respond to thousands of topics, and entering each topic in a list is tedious and inconvenient. Is there a way LUIS can recognize that it's a topic just from the context?
What is the best way to go about this?
Same doubt, slightly modified. Sorry for reposting it here.
At the moment LUIS cannot extract an entity based on the intent alone. Phrase lists will help LUIS extract tokens that don't have explicit training data. For example, training LUIS with the utterance "What does blockchain mean?" does not mean that it will extract "mining" from "What does mining mean?" unless "mining" was included in either a phrase list or a list entity. In addition to what Nicolas R said about tagging different values, another thing to consider is that using words not commonly found (or found at all) in the corpora that LUIS uses for each culture will likely result in LUIS not extracting those words without assistance (either via a phrase list or a list entity).
For example, if you created a LUIS application that dealt with units of measurement, while you might not be required to train it with units such as inch, meter, kilometer or ounce; you would probably have to train it with words like milliradian, parsec, and even other cultural spellings like kilometre. Otherwise these words would most likely not be extracted by LUIS. If a user provided the tokens "Planck unit", LUIS might provide a faulty extraction where it returns "unit" as the measurement entity instead of "Planck unit".
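If you end up seeding phrase lists with a large vocabulary (units, topics, etc.), doing it programmatically can be easier than through the portal. Here is a rough sketch in Python, assuming the v2 authoring REST route and payload shape; the host, app ID, version, and key are placeholders, and you should verify the route against the current LUIS authoring API reference since it has changed between versions.

    import requests

    # Placeholders: replace with your own authoring region/host, app ID, version,
    # and authoring key. Route and payload follow the v2 authoring API shape.
    AUTHORING_HOST = "https://westus.api.cognitive.microsoft.com"
    APP_ID = "<app-id>"
    VERSION_ID = "0.1"
    AUTHORING_KEY = "<authoring-key>"

    def create_phrase_list(name, phrases, interchangeable=True):
        """Create a phrase list so rare tokens (e.g. 'milliradian', 'parsec')
        can be extracted even though they never appear in labeled utterances."""
        url = (f"{AUTHORING_HOST}/luis/api/v2.0/apps/{APP_ID}"
               f"/versions/{VERSION_ID}/phraselists")
        body = {
            "name": name,
            "phrases": ",".join(phrases),       # comma-separated string
            "isExchangeable": interchangeable,  # values treated as interchangeable
        }
        resp = requests.post(url, json=body,
                             headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY})
        resp.raise_for_status()
        return resp.json()  # id of the new phrase list

    if __name__ == "__main__":
        print(create_phrase_list("Units", ["milliradian", "parsec", "kilometre"]))

Remember to train and publish again after adding the phrase list, otherwise the endpoint predictions won't change.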

MSBOT-LUIS: How to specify mandatory words in an utterance? Is it possible by using phrase list features?

I am using the phrase list features of LUIS, and I am adding my mandatory words to my phrase lists (correct me if I am wrong).
For a single mandatory word my intent works fine. But another of my intents has two mandatory words, and that one is not working.
Behaviour
My phrase lists: product: [moisturizer, anti wrinkle cream, laugh lines, anti aging skin treatment]
target area: [face, my face, neck, forehead]
Intent name: ste1
Utterance: "do you have moisturizer?"
User enters: "do you have bla bla". As expected, it goes to the None intent.
Intent name: ste2
Utterance: "do you have moisturizer for my face?"
User input: "do you have moisturizer for my bla bla". Here "moisturizer" is present but "my face" is not, so this should also hit the None intent, but it hits the ste1 intent because "do you have moisturizer?" is completely present in ste1.
Expected result:
I want these two words (moisturizer, face) to be mandatory for hitting the ste2 intent; otherwise I want it to hit the None intent.
LUIS only provides a recognition service. If you want to validate something like "face" and "moisturizer" being present in a user's utterance, this should be done in your code.
You may train your model to direct "incomplete" utterances to the None intent (by your description, utterances like "I want moisturizer" or "I want lotion"), but as you yourself noted:
But the user can enter any random thing, so I can't predict what should be in the None intent...
Therefore, what you should do in your model and code is add entities for "moisturizer" and "face". With these entities, inside your code you can take the LUIS response and quickly see whether you have the basic information required to start the dialog. If one entity is provided ("moisturizer") but another is missing (a part of the body), your bot can help the user disambiguate by prompting for what they're looking for specifically, e.g. face moisturizer or hand moisturizer (see the sketch below).
A good way to think about phrase lists and pattern features is that they're augmentations; they do help the machine-learned model, but the weight/impact they carry when determining an intent is less than an entity's weight. Phrase lists and pattern features are not replacements for entities.
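A minimal sketch of that "validate in your code" step, in Python. The entity type names ("Product", "BodyPart") and the v2-style prediction payload are assumptions; adjust them to your own model and API version.

    # Entity types we require before starting the dialog (hypothetical names).
    REQUIRED_ENTITY_TYPES = {"Product", "BodyPart"}

    def missing_entities(luis_result: dict) -> set:
        """Return the required entity types that LUIS did not extract."""
        found = {e["type"] for e in luis_result.get("entities", [])}
        return REQUIRED_ENTITY_TYPES - found

    def handle(luis_result: dict) -> str:
        missing = missing_entities(luis_result)
        if not missing:
            return "Sure, let me look that up for you."
        # Prompt for whatever is still needed instead of trusting the intent alone.
        return "Could you tell me the {}?".format(" and ".join(sorted(missing)))

    # Example: a v2-style response where only the product was extracted.
    example = {
        "query": "do you have moisturizer for my bla bla",
        "topScoringIntent": {"intent": "ste2", "score": 0.78},
        "entities": [
            {"entity": "moisturizer", "type": "Product", "startIndex": 12, "endIndex": 22},
        ],
    }
    print(handle(example))  # -> "Could you tell me the BodyPart?"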

LUIS AI (utterances matching multiple intents)

I am working with LUIS AI.
I have created multiple intents, utterances, and entities.
When I trained the application, I found that a single utterance matches two different intents, so I am getting unexpected results.
Is there any solution to resolve this?
Maybe you have assigned the same utterance to two different intents?
The quickest way to find out is with a quick Ctrl-F in the JSON file.
The JSON can be downloaded from the overview page, on the right-hand side.
I hope this is helpful.
It's normal for an utterance to match against different intents. It might be possible to assign the same utterance to two different intents via the API, but this isn't possible via the portal.
When you get the response from LUIS, it'll provide the "topScoringIntent" along with a list of "intents" that are arranged in descending score order.
If the wrong intent has the leading score, then you'll have to retrain the model with this utterance to mark it with the correct intent.
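A minimal sketch of inspecting those scores in Python, assuming the v2 prediction endpoint; the region, app ID, and key are placeholders, and verbose=true is what makes the full "intents" list come back.

    import requests

    # Placeholders: replace with your own region, app ID, and endpoint key.
    ENDPOINT = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
    PARAMS = {
        "subscription-key": "<endpoint-key>",
        "verbose": "true",          # include every intent, not just the top one
        "q": "book my ticket",
    }

    result = requests.get(ENDPOINT, params=PARAMS).json()

    top = result["topScoringIntent"]
    print("Top intent:", top["intent"], top["score"])

    # All candidate intents, sorted by score in descending order.
    for intent in result.get("intents", []):
        print(f'{intent["intent"]}: {intent["score"]:.3f}')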

How to eliminate negative utterances while using LUIS/Wit.ai intents

I have an intent "BookTicket".
I have a few utterances for it: "book a ticket", "book my ticket", etc., and it works fine.
It also matches "do not book a ticket" and "book my show". My question is: how can I eliminate these negative queries so that, instead of invoking the intent mentioned above, an error message is returned?
Right now, I am trying this with the LUIS framework.
thanks
In the case of LUIS, use the None intent to mark the negative examples you want to eliminate. This way your model will learn to associate these negative utterances with the None intent rather than "BookTicket".
Maybe you can try this: https://azure.microsoft.com/en-in/documentation/articles/machine-learning-apps-text-analytics/ . Set the threshold as required for your project so that only positive results come through.
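For the thresholding idea, here is a minimal sketch in Python of gating on the prediction score, assuming the v2 response shape (topScoringIntent with "intent" and "score"); the 0.7 cut-off is just an example value to tune for your own app.

    # Assumption: 0.7 is an illustrative threshold; tune it for your data.
    CONFIDENCE_THRESHOLD = 0.7

    def route(luis_result: dict) -> str:
        top = luis_result.get("topScoringIntent", {})
        if top.get("intent") == "BookTicket" and top.get("score", 0.0) >= CONFIDENCE_THRESHOLD:
            return "Booking your ticket..."
        # Low confidence or None intent: return an error message instead of acting.
        return "Sorry, I can't help with that request."

    print(route({"topScoringIntent": {"intent": "BookTicket", "score": 0.92}}))
    print(route({"topScoringIntent": {"intent": "None", "score": 0.65}}))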
Google brought me here so I'm posting this for anyone who finds it useful.
Microsoft documentation states: "you can create two intents (one positive, and one negative) and add appropriate utterances for each. Or you can create a single intent and mark the two different positive and negative terms as an entity."
