How to train my QnA Bot while using it? - botframework

I made a QnA bot using the Bot Framework SDK v3 and the QnA Maker service.
As far as I know, we can train the knowledge base, but I don't know how to train it while my QnA bot is in use on other channels such as Teams.
My idea is to let the user judge whether the bot's answer solved their problem, so the conversation would look like this:
User: asks a question
Bot: gives an answer
Bot: Did that solve your problem? Yes or no?
User: yes/no
At this point, how can I feed that yes/no back into my knowledge base so it learns from it? Then next time my bot may give a better answer.

Dialog-based training like you describe is not supported in the current (GA) release of QnA Maker. It is, however, something that the dev team is looking into. For now, you can only train the QnA Maker portion of your bot from the QnA Maker portal.

Hi, you can handle this kind of problem with the score property. QnA Maker returns a confidence score alongside each answer:
100: an exact match between the user query and a KB question
90: high confidence; most of the words match
40-60: fair confidence; important words match
10: low confidence; important words don't match
0: no word match
Reference: https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/confidence-score
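For example, you can gate the bot's reply on that score and only ask the yes/no confirmation when the answer looked confident. Below is a minimal sketch in plain TypeScript against the QnA Maker GA REST endpoint; the host, knowledge base id, endpoint key, and the 50-point threshold are placeholder assumptions to adapt to your own service:

```typescript
// Minimal sketch, assuming a QnA Maker GA endpoint. The service URL, KB id,
// and endpoint key below are placeholders, and the 50-point threshold is an
// arbitrary cut-off based on the score bands described above.
import fetch from "node-fetch";

const QNA_HOST = "https://<your-service>.azurewebsites.net"; // placeholder
const KB_ID = "<knowledge-base-id>";
const ENDPOINT_KEY = "<endpoint-key>";

interface QnaAnswer {
  answer: string;
  score: number; // 0-100 confidence, as in the bands above
}

async function askQna(question: string): Promise<QnaAnswer | undefined> {
  const res = await fetch(
    `${QNA_HOST}/qnamaker/knowledgebases/${KB_ID}/generateAnswer`,
    {
      method: "POST",
      headers: {
        Authorization: `EndpointKey ${ENDPOINT_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ question, top: 1 }),
    }
  );
  const body = (await res.json()) as { answers: QnaAnswer[] };
  return body.answers[0];
}

// Route on the score: answer confidently above the threshold and ask for
// confirmation (as in the question above), otherwise ask the user to rephrase.
async function reply(question: string): Promise<string> {
  const best = await askQna(question);
  if (best && best.score >= 50) {
    return `${best.answer}\n\nDid that solve your problem? (yes/no)`;
  }
  return "I'm not sure about that one. Could you rephrase your question?";
}
```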


Multiple Intents in DialogFlow CX

I want the user to respond with a longish answer, e.g. talking about their education and family background. From this response I would like to identify multiple intents and then come back to the user with multiple questions (one by one) as a follow-up to the intents detected. Can I do this in Dialogflow CX, and how?
Note that your desired use case is currently not feasible in Dialogflow CX. By default, Dialogflow matches a user query to only one intent route. Intents are matched based on the confidence value in the detectIntent Response’s queryResult.match field.
Moreover, the maximum detect intent text input length is 256 characters. If you are using an integration, the integration platform may have a smaller limit. However, you can use the sys.long-utterance built-in event to handle user queries that exceed the 256-character limit. Note that the long user query will still get matched to only one intent route.
If you want to ask the user multiple questions, you can design your agent to have a conversation flow that asks the user one question at a time. You can utilize State Handlers to control the conversation flow. You may refer to the Voice agent design documentation for best practices for designing your agent.
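To illustrate the single-match behavior, here is a minimal sketch of a detectIntent call with the official Node.js client (@google-cloud/dialogflow-cx); the project, location, agent, and session ids are placeholders:

```typescript
// Minimal sketch: one detectIntent request yields exactly one matched intent
// in queryResult.match, regardless of how much the utterance rambles.
import { SessionsClient } from "@google-cloud/dialogflow-cx";

async function detect(text: string): Promise<void> {
  const client = new SessionsClient({
    apiEndpoint: "us-central1-dialogflow.googleapis.com", // regional endpoint
  });
  const session = client.projectLocationAgentSessionPath(
    "my-project",      // placeholder project id
    "us-central1",     // placeholder location
    "my-agent-id",     // placeholder agent id
    "my-session-id"    // placeholder session id
  );
  const [response] = await client.detectIntent({
    session,
    queryInput: {
      text: { text }, // capped at 256 characters per request
      languageCode: "en",
    },
  });
  // A single match with a single confidence value, never a list of intents.
  const match = response.queryResult?.match;
  console.log(match?.intent?.displayName, match?.confidence);
}
```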
You could create several intents with no response, each of which has several follow-up intents containing your questions; you are going to need to change the number of contexts to match the number of intents you are going to use.

Bot greeting stops working when adding QnA Maker resource

Using the awesome resources provided by Microsoft.
Following the documentation at https://learn.microsoft.com/en-us/composer/
Create a folder somewhere
Perform https://learn.microsoft.com/en-us/composer/setup-yarn in that folder
Perform https://learn.microsoft.com/en-us/composer/tutorial/tutorial-create-bot
Test in Emulator: Pressed Restart Conversation - New User ID: Works fine, responds with: Hi! I’m a friendly bot that can help with the weather. Try saying WEATHER or FORECAST.
Perform https://learn.microsoft.com/en-us/composer/tutorial/tutorial-add-dialog
Test in Emulator: presents "Let's check the weather" as the response to the user input "weather". Works fine.
Then create a new Trigger with Dialog event / Dialog started and continue with https://learn.microsoft.com/en-us/composer/how-to-add-qna-to-bot, entering the following in the settings:
Please note that in order to use the Settings values, an extra "=" has to precede the id, e.g. "=settings.qna.knowledgebaseid".
Please also note that in order to make this work in Europe, with our "," instead of "." as the decimal marker, the Threshold has to be set to "float('0.3')" in order to be evaluated as a float.
Make sure that the settings are accurate according to your QnA knowledge base; a sketch of what they might look like follows below.
Please note that at this point the LUIS fields are left mostly empty, except for the values prefilled as described in https://learn.microsoft.com/en-us/composer/how-to-add-qna-to-bot.
No LUIS added at this point.
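For orientation, the relevant section of the bot's settings might look roughly like the sketch below. The key names follow what Composer generates by default, but treat them as assumptions and check your own settings file; all values are placeholders:

```json
{
  "qna": {
    "knowledgebaseid": "<knowledge-base-id>",
    "endpointkey": "<endpoint-key>",
    "hostname": "https://<your-qna-service>.azurewebsites.net/qnamaker"
  }
}
```

In the recognizer fields these are then referenced as =settings.qna.knowledgebaseid, =settings.qna.endpointkey, and =settings.qna.hostname, with the Threshold set to float('0.3') as noted above.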
Restart bot
Click Test in Emulator
Press Restart Conversation - New User ID
Now there are three problems:
A. There is no longer any greeting phrase.
B. The first response from QnA Maker results in "The given key 'stepIndex' was not present in the dictionary." After this the QnA Maker part works, but issues A and C are still present.
C. The weather regex only triggers if it is the very first entry; on a second attempt, or after entering something else, it fails to trigger.
Expected behavior:
When pressing Restart Conversation - New User ID, the bot should greet the user.
When the weather regex is the best choice, it should trigger.
The text "The given key 'stepIndex' was not present in the dictionary" should not be presented as the first response; instead, the right reply should be presented based on the intent provided.
I'm a bit late to the game on this, but I hit the exact same issue in Composer and found the same problem. The approach suggested in the MS docs, using the Unknown intent trigger, does not work well. It's really just a tutorial to get you up and running as quickly as possible, with no real thought beyond that, and as you point out, it easily gets stuck in an internal loop that prevents other intents from firing.
Assuming you are using LUIS.ai, a "QnA Intent recognized" trigger should be added, along with a "Duplicate intent recognized" trigger. This makes sure that automatic cross-training is implemented, so that QnA Maker knows about the LUIS questions and vice versa: each will not only understand its own questions but know to exclude the questions handled by the other, which makes for better training. However, depending on how similar the questions in the two models are, both may still return matches of varying confidence; this is what the "Duplicate intent recognized" trigger is for. It catches both before they execute their intents, checks the confidence of each, and re-raises the event that wins out, ensuring only one of the two is recognized and executed.
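Stripped of the Composer specifics, the tie-break that the "Duplicate intent recognized" trigger performs amounts to the sketch below (plain TypeScript with made-up result shapes and scores, purely to show the comparison; Composer expresses this with adaptive expressions instead):

```typescript
// Illustrative only: resolve a "duplicate intent" by comparing the
// confidence of the LUIS recognizer against the QnA Maker recognizer
// and re-raising whichever wins. The result shapes are assumptions.
interface RecognizerResult {
  source: "luis" | "qna";
  intent: string;
  score: number; // 0..1 confidence
}

function resolveDuplicate(
  luis: RecognizerResult,
  qna: RecognizerResult
): RecognizerResult {
  // Prefer the higher-confidence recognizer; fall back to LUIS on a tie.
  return qna.score > luis.score ? qna : luis;
}

const winner = resolveDuplicate(
  { source: "luis", intent: "BookFlight", score: 0.72 },
  { source: "qna", intent: "QnAMatch", score: 0.65 }
);
console.log(`Routing to ${winner.source}: ${winner.intent}`);
```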

How to use QnA maker and LUIS

I am a bit lost in terms of how to use the Microsoft QnA Maker and LUIS together. If I understand correctly, QnA Maker only works on FAQ-style data, whereas LUIS is for understanding intents and providing the answer.
So my question is how to get the two to work together. First, what technologies are there, and how do they determine where the calls get routed, i.e. to QnA Maker or to LUIS?
Any insights will be most helpful.
I used this example a few times and it seems to work.
QnA Maker is used when the user asks a question: "How can I set an alarm on my phone?"
LUIS is used to execute a command/action and identify entities: "Set an alarm at three o'clock."
Dispatch is used to route the message to the right service, either QnA or LUIS (you can have more than one of each, or five QnA KBs and no LUIS).
Hope this helps.
To expand on other answers:
QnA Maker is for direct question => answer pairs. It trains based on exact questions, such as the one in Alexandre's example, and has exact answers.
LUIS parses the question from the user, instead of using it directly, and uses the resulting score to return an 'intent'. The bot dev then uses this score/intent to route the conversation flow to other dialogs. A good example is to think about how many ways you can say 'goodbye' (Goodbye, bye, byebye, cya, peace!, TTYL). All of these can be programmed, or trained, in LUIS to return 'Goodbye' as the main intent. Then you can code 'if Goodbye is returned, go to Goodbye dialogs' into your own chatbot.
Dispatch is like an umbrella over both. At its core, it's a LUIS model (it looks at messages and scores them). Based on that score, it returns an intent, just like LUIS. And again, like LUIS, it is up to the bot developer to route the returned intent (if QnAIntent is returned, go to QnA dialogs). Using Dispatch to route your initial intents means you don't need to hit every single one of your models (both QnA and LUIS) just to test an utterance (a message from a user): just once through Dispatch.
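As a concrete sketch of that routing, the snippet below queries a Dispatch model once via the LUIS v3 prediction REST endpoint and branches on the top intent. The region, app id, key, and the "QnAIntent" name are placeholders; your Dispatch model's intent names may differ:

```typescript
// Minimal sketch, assuming the LUIS v3 prediction REST API for the Dispatch
// model. All ids, keys, hostnames, and the "QnAIntent" name are placeholders.
import fetch from "node-fetch";

const LUIS_HOST = "https://<region>.api.cognitive.microsoft.com";
const DISPATCH_APP_ID = "<dispatch-app-id>";
const LUIS_KEY = "<prediction-key>";

async function route(utterance: string): Promise<string> {
  // One call to the Dispatch (LUIS) model scores the utterance once,
  // instead of querying every QnA KB and LUIS app individually.
  const url =
    `${LUIS_HOST}/luis/prediction/v3.0/apps/${DISPATCH_APP_ID}` +
    `/slots/production/predict?query=${encodeURIComponent(utterance)}`;
  const res = await fetch(url, {
    headers: { "Ocp-Apim-Subscription-Key": LUIS_KEY },
  });
  const { prediction } = (await res.json()) as {
    prediction: { topIntent: string };
  };

  if (prediction.topIntent === "QnAIntent") {
    // Forward to QnA Maker's generateAnswer endpoint for an FAQ answer.
    return "route to QnA Maker";
  }
  // Otherwise route to the dialogs for the matching LUIS intent.
  return `route to LUIS intent: ${prediction.topIntent}`;
}
```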

Training None Intent in LUIS

I have a travel bot with following intents:-
BookAFlight (trained with 20 utterances)
GetTicketCopy (trained with 20 utterances)
CancelTicket (trained with 20 utterances)
None (default) (currently not trained)
The MS documentation suggests that I train None with at least 1-2 utterances for every 10 utterances added to other intents, which means I would need, say, 6-12 utterances to train None.
My query is: what kind of utterances should be used to train None?
Everything under the sun apart from what is relevant to my bot (e.g. "I want to order a pizza", "How is the weather today", "Who is the president of the USA?")
All negative statements corresponding to the utterances used to train my other intents (e.g. "I don't want to book a flight ticket", "I don't want to take a printout", "I don't want to cancel my ticket")
All utterances that correspond to intents currently not covered in my scope but which users could still ask a travel bot (e.g. "I want to book a cab to the airport", "What is the status of my flight?")
Long story short, I am trying to identify what kind of utterances should go into my None intent. Is None the right place to handle "negative" variations of valid utterances?
The None intent is not made for "negative variations" but to catch everything that is not handled by your other intents.
So you should add utterances corresponding to cases that your bot cannot handle but that are linked to your context (your third idea).
For example, in one of my projects the None intent is trained with my customer's other use cases, the ones my bot will not treat, based on the customer's logs. It helps avoid firing an intent in a bad case.
The LUIS docs suggest you should use COMPLETELY off-topic utterances for the None intent:
"Start with something specific that your bot shouldn't answer, such as 'What kind of dinosaur has blue teeth?'"
They also suggest that for positive and negative reactions to some action you create separate intents, e.g. "Don't want a car" / "Want a car". Alternatively, use a single intent and mark the relevant terms as positive and negative entities.
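If you manage training data in code, seeding the None intent with such utterances might look like the sketch below, using the LUIS v2.0 authoring batch-examples endpoint. The region, app id, version, authoring key, and the example utterances are placeholders to adapt to your own domain:

```typescript
// Minimal sketch, assuming the LUIS v2.0 authoring REST API.
// App id, version, authoring key, and the utterances are placeholders.
import fetch from "node-fetch";

const AUTHORING_HOST = "https://<region>.api.cognitive.microsoft.com";
const APP_ID = "<luis-app-id>";
const VERSION = "0.1";
const AUTHORING_KEY = "<authoring-key>";

// In-context-but-out-of-scope plus completely off-topic utterances,
// per the answers above: things no real intent should claim.
const noneExamples = [
  "I want to book a cab to the airport",
  "What is the status of my flight",
  "What kind of dinosaur has blue teeth",
].map((text) => ({ text, intentName: "None" }));

async function seedNoneIntent(): Promise<void> {
  const res = await fetch(
    `${AUTHORING_HOST}/luis/api/v2.0/apps/${APP_ID}/versions/${VERSION}/examples`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": AUTHORING_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(noneExamples), // batch add of labeled examples
    }
  );
  console.log(`Batch add returned HTTP ${res.status}`);
}
```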

Queries on the Parse

I would like to ask the queries below. Apologies if they were asked before; I couldn't find them.
1. W.r.t. the new pricing, it is mentioned that "You can send us your analytics events any time without being limited by your app's request limit." Does this mean that interactions with Parse Analytics do not count towards the overall API request limit set for the app?
2. From the answers to queries posted a while back in the forums, there was some distinction between normal and premium customers. Is there any such distinction now?
3. I am using the Android SDK. Just out of curiosity, can two (or more) objects have the same object id by any chance?
Thanks.
Answers to all of your questions:
1. Correct. Analytics does not contribute to the API request limit or the burst limit.
2. Under the new pricing, there is no longer a distinction; all previously "Pro-only" features are available to everyone.
3. No, multiple objects cannot / will not have duplicate objectId values.
