How to use QnA Maker and LUIS

I am a bit lost in terms of how to use the Microsoft QnA Maker and LUIS together. If I understand correctly, QnA Maker only works on FAQ-styled data, whereas LUIS is for understanding intents and providing the answer.
So the question I have is how to get both of them to work together. What technologies are there for this, and how do they determine where the calls get routed, i.e. to QnA Maker or to LUIS?
Any insights will be most helpful.

I have used this approach a few times and it seems to work.
QnA Maker is used when the user asks a question, e.g. "How can I set an alarm on my phone?"
LUIS is used to execute a command/action and identify entities, e.g. "Set an alarm at three o'clock."
Dispatch is used to route the message to the right service, either QnA Maker or LUIS (you can have more than one of each, e.g. five QnA Maker knowledge bases and no LUIS app).
Hope this helps.

To expand on other answers:
QnA Maker is for direct question => answer pairs. It trains on exact questions, such as the one in Alexandre's example, and returns exact answers.
LUIS parses the question from the user, instead of using it directly, and uses the resulting score to return an 'intent'. The bot dev then uses this score/intent to route the conversation flow to other dialogs. A good example is to think about how many ways you can say 'goodbye' (Goodbye, bye, byebye, cya, peace!, TTYL). All of these can be programmed, or trained, in LUIS to return 'Goodbye' as the main intent. Then you can code 'if Goodbye is returned, go to the Goodbye dialogs' into your own chatbot.
Dispatch is like an umbrella over both. At its core, it's a LUIS model (it looks at messages and scores them). Based on that score, it returns an intent, just like LUIS. And again, like LUIS, it is up to the bot developer to route the returned intent (if QnAIntent is returned, go to the QnA dialogs). Using Dispatch to route your initial intents means you don't need to hit every single one of your models (both QnA Maker and LUIS) just to test an utterance (a message from a user): it goes through Dispatch just once, as in the sketch below.
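To make that concrete, here is a minimal sketch of the routing pattern in C# with Bot Framework SDK v4, assuming a Dispatch model whose intents are named l_Commands and q_FAQ; those intent names and the recognizer wiring are illustrative assumptions, not something fixed by Dispatch itself:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Bot.Builder.AI.QnA;
using Microsoft.Bot.Schema;

public class DispatchBot : ActivityHandler
{
    // The Dispatch model is itself just a LUIS app that scores every message.
    private readonly LuisRecognizer _dispatch;
    private readonly LuisRecognizer _commandsLuis; // child LUIS app for commands
    private readonly QnAMaker _faqQnA;             // child QnA Maker knowledge base

    public DispatchBot(LuisRecognizer dispatch, LuisRecognizer commandsLuis, QnAMaker faqQnA)
    {
        _dispatch = dispatch;
        _commandsLuis = commandsLuis;
        _faqQnA = faqQnA;
    }

    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // One pass through Dispatch decides who handles the utterance.
        var dispatchResult = await _dispatch.RecognizeAsync(turnContext, cancellationToken);
        var (intent, _) = dispatchResult.GetTopScoringIntent();

        switch (intent)
        {
            case "q_FAQ": // question-style utterance -> QnA Maker
                var answers = await _faqQnA.GetAnswersAsync(turnContext);
                if (answers.Length > 0)
                    await turnContext.SendActivityAsync(answers[0].Answer,
                        cancellationToken: cancellationToken);
                break;

            case "l_Commands": // command-style utterance -> the child LUIS app
                var luisResult = await _commandsLuis.RecognizeAsync(turnContext, cancellationToken);
                // ...route to the dialog that handles luisResult.GetTopScoringIntent()...
                break;
        }
    }
}
```

The point of the pattern is that only the Dispatch model sees every utterance; the child LUIS app or QnA Maker knowledge base is called only on the branch that wins.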

Related

Multiple Intents in DialogFlow CX

I want to have the user respond with a longish response, like talking about their education and family background. From this, I would like to identify multiple intents and then go back to the user with multiple questions (one by one) as a follow-on to the intents detected. Can I do this in Dialogflow CX, and how?
Note that your desired use case is currently not feasible in Dialogflow CX. By default, Dialogflow matches a user query to only one intent route. Intents are matched based on the confidence value in the detectIntent response's queryResult.match field.
Moreover, the maximum detect-intent text input length is 256 characters. If you are using an integration, the integration platform may have a smaller limit. However, you can use the sys.long-utterance built-in event to handle user queries that exceed the 256-character limit. Note that the long user query will still be matched to only one intent route.
If you want to ask the user multiple questions, you can design your agent to have a conversation flow that asks the user one question at a time. You can utilize State Handlers to control the conversation flow, and you may refer to the Voice agent design documentation for best practices for designing your agent.
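For reference, a minimal sketch of a DetectIntent call, assuming the Google.Cloud.Dialogflow.Cx.V3 .NET client; the project, location, agent, and session ids are placeholders:

```csharp
using System;
using Google.Cloud.Dialogflow.Cx.V3;

var client = SessionsClient.Create();

var request = new DetectIntentRequest
{
    // projects/{project}/locations/{location}/agents/{agent}/sessions/{session}
    SessionAsSessionName = SessionName.FromProjectLocationAgentSession(
        "my-project", "global", "my-agent-id", "my-session-id"),
    QueryInput = new QueryInput
    {
        Text = new TextInput { Text = "Let me tell you about my education and my family..." },
        LanguageCode = "en",
    },
};

var response = client.DetectIntent(request);

// However long the utterance, queryResult.match holds exactly one matched
// intent route and its confidence; there is no list of additional matches.
var match = response.QueryResult.Match;
Console.WriteLine($"Matched '{match.Intent?.DisplayName}' with confidence {match.Confidence}");
```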
Alternatively, you could define several intents with no response, each with several follow-up intents containing your questions; you are going to need to adjust the context lifespan to match the number of intents you are going to use.

Bot greeting stops working when adding QnA Maker resource

Using the awesome resources provided by Microsoft.
Following the documentation at https://learn.microsoft.com/en-us/composer/
Create Folder somewhere
Perform https://learn.microsoft.com/en-us/composer/setup-yarn in that folder
Perform https://learn.microsoft.com/en-us/composer/tutorial/tutorial-create-bot
Test in Emulator: Pressed Restart Conversation - New User ID: Works fine, responds with: Hi! I’m a friendly bot that can help with the weather. Try saying WEATHER or FORECAST.
Perform https://learn.microsoft.com/en-us/composer/tutorial/tutorial-add-dialog
Test in Emulator: presents "Let's check the weather" as the response to the user input "weather". Works fine.
Then create a new Trigger with the Dialog event "Dialog started", continue with https://learn.microsoft.com/en-us/composer/how-to-add-qna-to-bot, and enter the following in the settings:
Please note that in order to use the Settings values, an extra "=" has to precede the id, e.g. "=settings.qna.knowledgebaseid".
Please also note that in order to make this work in Europe, with our "," instead of "." as the decimal marker, the Threshold has to be set to "float('0.3')" in order to be evaluated as a float.
Make sure that the settings are accurate according to your QnA knowledge base.
Please note that at this point the LUIS fields are left mostly empty, except for the values prefilled as described in https://learn.microsoft.com/en-us/composer/how-to-add-qna-to-bot.
No LUIS is added at this point.
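For orientation, the QnA block in Composer's settings ends up looking roughly like this; all three values are placeholders for your own knowledge base's details, and, as noted above, the dialog references them with a leading "=", e.g. =settings.qna.knowledgebaseid:

```json
{
  "qna": {
    "knowledgebaseid": "<your-kb-id>",
    "endpointkey": "<your-endpoint-key>",
    "hostname": "https://<your-qna-service>.azurewebsites.net/qnamaker"
  }
}
```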
Restart bot
Click Test in Emulator
Press Restart Conversation - New User ID
Now there are three problems:
A. There is no longer any greeting phrase.
B. The first response from QnA Maker results in "The given key 'stepIndex' was not present in the dictionary." After this, the QnA Maker part works, but issues A and C are still present.
C. The weather regex only triggers if it is the first entry; at the second attempt, or after entering something else, it fails to trigger.
Expected behavior:
When pressing Restart Conversation - New User ID, the bot should greet the user.
When the weather regex is the best choice, it should trigger.
The text "The given key 'stepIndex' was not present in the dictionary" should not be presented as the first response; instead, the right reply should be presented based on the intent provided.
I'm a bit late to the game on this, but I hit the exact same issue in Composer and found the same problem. The approach suggested in the MS docs, using the unknown intent trigger, does not work well. It's really just a tutorial to get you up and running as quickly as possible, with no real thought beyond that, and as you point out, it easily gets stuck in an internal loop that prevents other intents from firing.
Assuming you are using Luis.ai, a "QnA Intent recognized" trigger should be added, plus a "Duplicate intents recognized" trigger. This makes sure that automatic cross-training is implemented, so that QnA Maker knows about the LUIS questions and vice versa: each will not only understand its own questions but also know to exclude the questions belonging to the other, which makes for better training. However, depending on how similar the questions in both are, they may still both return matches of varying confidence; this is what the "Duplicate intents recognized" trigger is for. It catches both before they execute their intents, checks the confidence of each, and re-raises the event that wins out, thus ensuring only one of the two is recognized and executed.

How to train my QnA Bot while using it?

I made a QnA bot using the Bot Framework SDK v3 and the QnA Maker service.
As I understand it, we can train our knowledge base, but I don't know how to train it while using my QnA bot in other channels like Teams.
I think I can let the user judge whether the bot's answer has solved their problem, so I have structured the conversation like below.
User: asks a question
Bot: gives an answer
Bot: Did that solve your problem, yes or no?
User: yes/no
At this point, how can I feed this back to my knowledge base so that it learns from it? Then next time my bot may give the correct answer.
Dialog-based training like you described above is not supported in the current (GA) release of QnA Maker. It is, however, something the dev team is looking into. For now, you can only train the QnA Maker portion of your bot from the QnA Maker portal.
Hi, you can handle this kind of problem using the answer's score property:
100: an exact match between the user query and a KB question
90: high confidence; most of the words match
40-60: fair confidence; important words match
10: low confidence; important words don't match
0: no words match
Reference: https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/confidence-score
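Building on those score bands, here is a minimal sketch in C# of gating the bot's reply on the returned score; the host, knowledge base id, endpoint key, and the 50.0 cutoff are all placeholder assumptions:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class QnAScoreCheck
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // QnA Maker runtime auth header: "EndpointKey <key>".
        http.DefaultRequestHeaders.Add("Authorization", "EndpointKey <your-endpoint-key>");

        var url = "https://<your-service>.azurewebsites.net/qnamaker/knowledgebases/<kb-id>/generateAnswer";
        var body = new StringContent("{\"question\":\"How do I reset my password?\"}",
                                     Encoding.UTF8, "application/json");

        var response = await http.PostAsync(url, body);
        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());

        // The runtime returns answers sorted by score (0-100), best first.
        var best = json.RootElement.GetProperty("answers")[0];
        var score = best.GetProperty("score").GetDouble();

        if (score >= 50.0) // fair-or-better confidence: show the answer
            Console.WriteLine(best.GetProperty("answer").GetString());
        else               // low confidence: ask the user to rephrase instead
            Console.WriteLine("Sorry, I'm not sure. Could you rephrase your question?");
    }
}
```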

Bot framework - Use LUIS in Scorables

I've read that to handle messages globally I have to use Scorables and set a score based on the user's input. I am wondering if I can use LUIS to parse the user input and set the score based on the LUIS intent score.
Is there any way I can use LUIS inside my Scorable class?
Or do I have to manually call LUIS, get the response, and process it myself?
Yes, you can call LUIS yourself: pass the message to it and see what it returns.
You will receive back a list of intents with scores, and you typically take the one with the highest score.
LUIS is just an API with one endpoint, so you can call it from wherever, really; it's actually very easy. Have a look here for more details: https://github.com/Microsoft/Cognitive-LUIS-Windows
The response from LUIS will give you the intent and the parameters it identified, assuming you had any. It's probably a good idea to set a threshold: if the score you get back is not high enough, that means you need to train LUIS more, but that's another story. My own threshold is set at 88; I don't really trust anything below that.
If you do it like this, you basically eliminate any need to do the processing yourself, and you use LUIS for what it's meant to be used for, which is understanding the user's query. You can do something with the result after that.
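Putting those two answers together, here is a minimal sketch of a Bot Builder v3 Scorable that queries LUIS itself and surfaces the top intent's score; the app id, subscription key, and 0.88 threshold are placeholders, and you would still need to register the class with the dialog container (e.g. via Autofac as IScorable<IActivity, double>) for it to run globally:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;
using Microsoft.Bot.Builder.Scorables.Internals;
using Microsoft.Bot.Connector;

public class LuisScorable : ScorableBase<IActivity, LuisResult, double>
{
    private readonly ILuisService _luis = new LuisService(
        new LuisModelAttribute("<luis-app-id>", "<luis-subscription-key>"));

    // Call LUIS once per incoming message and keep the parsed result as state.
    protected override async Task<LuisResult> PrepareAsync(IActivity activity, CancellationToken token)
    {
        if (activity is IMessageActivity message && !string.IsNullOrWhiteSpace(message.Text))
            return await _luis.QueryAsync(message.Text, token);
        return null;
    }

    // Only claim the message when LUIS is confident enough (0.88, echoing the
    // 88 threshold mentioned above; tune this to your own model).
    protected override bool HasScore(IActivity item, LuisResult state)
        => state?.TopScoringIntent?.Score > 0.88;

    protected override double GetScore(IActivity item, LuisResult state)
        => state.TopScoringIntent.Score ?? 0;

    protected override Task PostAsync(IActivity item, LuisResult state, CancellationToken token)
    {
        // Route to whatever dialog handles state.TopScoringIntent.Intent here.
        return Task.CompletedTask;
    }

    protected override Task DoneAsync(IActivity item, LuisResult state, CancellationToken token)
        => Task.CompletedTask;
}
```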

Training None Intent in LUIS

I have a travel bot with following intents:-
BookAFlight (trained with 20 utterances)
GetTicketCopy (trained with 20 utterances)
CancelTicket (trained with 20 utterances)
None (default) (currently not trained)
MS documentation suggests that I train None with at least 1-2 utterances for every 10 utterances added to other intents, which means I would need, say, 6-12 utterances to train None.
My query is: what kind of utterances should be used to train None?
1. Everything under the sun apart from what is relevant to my bot (e.g. "I want to order a pizza", "How is the weather today?", "Who is the president of the USA?").
2. All negative statements corresponding to the utterances used to train my other intents (e.g. "I don't want to book a flight ticket", "I don't want to take a printout", "I don't want to cancel my ticket").
3. All utterances that correspond to intents currently not covered by my scope but which users could still ask a travel bot (e.g. "I want to book a cab to the airport", "What is the status of my flight?").
Long story short, I am trying to identify what kind of utterances should go into my None intent. Is None the right place to handle "negative" variations of valid utterances?
The None intent is not made for "negative variations" but to catch everything that is not managed by your other intents.
So you should add utterances corresponding to cases that your bot cannot handle but that are linked to your context (your 3rd idea).
For example, in one of my projects the None intent is trained, based on my customer's logs, with my customer's use cases other than the ones my bot will treat. It helps avoid firing an intent in the wrong case.
The LUIS docs suggest you should use completely off-topic utterances for the None intent:
"Start with something specific that your bot shouldn't answer, such as 'What kind of dinosaur has blue teeth?'"
They also suggest that for positive and negative reactions to some action you create separate intents, e.g. "Don't want a car" / "Want a car". Alternatively, use a single intent and mark the relevant terms as positive and negative entities.
