I am planning to use QnA Maker, but does it use LUIS in the background?
If questions are asked in a different way than the ones QnA Maker was trained on, will it still respond?
does it use LUIS in the background?
No, but you can combine Search, QnA Maker, and/or LUIS.
According to the documentation, the following three ways are suggested for implementing QnA Maker together with LUIS:
Call both QnA Maker and LUIS at the same time, and respond to the user with information from whichever first returns a score above a specific threshold.
Call LUIS first, and if no intent meets a specific threshold score, i.e., the "None" intent is triggered, then call QnA Maker (a minimal sketch of this approach follows the list). Alternatively, create a LUIS intent for QnA Maker, feeding your LUIS model with example QnA questions that map to a "QnAIntent".
Call QnA Maker first, and if no answer meets a specific threshold score, then call LUIS.
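For instance, here is a minimal sketch of the second approach (v3 Bot Builder SDK; the dialog name, model keys, and handler names are assumptions, and MyQnADialog is the QnA dialog shown further below):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;
using Microsoft.Bot.Connector;

[LuisModel("{LuisAppId}", "{LuisSubscriptionKey}")]
[Serializable]
public class LuisFirstDialog : LuisDialog<object>
{
    [LuisIntent("None")]
    public async Task NoneAsync(IDialogContext context, IAwaitable<IMessageActivity> message, LuisResult result)
    {
        // No LUIS intent met the threshold, so fall back to the QnA Maker dialog.
        var original = await message;
        await context.Forward(new MyQnADialog(), this.ResumeAfterQnA, original, CancellationToken.None);
    }

    private async Task ResumeAfterQnA(IDialogContext context, IAwaitable<IMessageActivity> result)
    {
        await result;
        context.Wait(this.MessageReceived);
    }
}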
Here I post a code sample just for the third approach, written in C#.
In MessagesController, call QnA Maker first:
if (activity.Type == ActivityTypes.Message)
{
    // Route every incoming message to the QnA dialog first.
    await Conversation.SendAsync(activity, () => new Dialogs.MyQnADialog());
}
In MyQnADialog, check whether there is a matched answer; if not, call LUIS:
[QnAMakerAttribute("QnASubscriptionKey", "QnAKnowledgebaseId", "No answer in knowledge base, searching in LUIS...", 0.5)]
[Serializable]
public class MyQnADialog : QnAMakerDialog
{
protected override async Task DefaultWaitNextMessageAsync(IDialogContext context, IMessageActivity message, QnAMakerResults result)
{
    if (result.Answers.Count == 0)
    {
        // No answer met the threshold, so forward the same message to the LUIS dialog.
        await context.Forward(new MyLuisDialog(), this.ResumeAfterLuisDialog, message, CancellationToken.None);
    }
    else
    {
        context.Wait(this.MessageReceivedAsync);
    }
}

private async Task ResumeAfterLuisDialog(IDialogContext context, IAwaitable<object> result)
{
    var resultFromLuisDialog = await result;
    context.Wait(this.MessageReceivedAsync);
}
}
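The MyLuisDialog that the sample forwards to is not shown; a minimal sketch of what it could look like (the model keys and the "Help" intent are placeholder assumptions):

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("{LuisAppId}", "{LuisSubscriptionKey}")]
[Serializable]
public class MyLuisDialog : LuisDialog<object>
{
    [LuisIntent("Help")] // placeholder intent name
    public async Task HelpAsync(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Here is some help...");
        context.Wait(this.MessageReceived);
    }

    [LuisIntent("None")]
    public async Task NoneAsync(IDialogContext context, LuisResult result)
    {
        // Neither QnA Maker nor LUIS could handle the message.
        await context.PostAsync("Sorry, I couldn't find an answer for that.");
        context.Wait(this.MessageReceived);
    }
}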
QnA Maker does not use LUIS or intent recognition; rather, it uses n-grams to detect similarity, as the documentation used to state.
Since MS Build 2018, you can use a LUIS dispatch app. It allows you to incorporate multiple LUIS apps and QnA knowledge bases into a single dispatch app. This means sending user input to both LUIS and QnA Maker, or in a particular order depending on the confidence score, should be a thing of the past.
The first call you make is directed to the LUIS dispatch app. The result tells you whether you need to contact a child LUIS app or a QnA knowledge base. It can do this because the utterances of the LUIS dispatch app are filled with utterances from the QnA knowledge base. You can add multiple LUIS apps and/or QnA knowledge bases to this dispatch app.
I suggest looking into the Bot Builder dispatch tool (CLI).
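A rough sketch of how that first call could be routed with the v4 SDK (the recognizer and QnA Maker instances are assumptions, as are the source names; the dispatch tool prefixes QnA sources with "q_" and child LUIS apps with "l_"):

using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Bot.Builder.AI.QnA;

public class DispatchRouter
{
    private readonly LuisRecognizer dispatchRecognizer;   // the dispatch LUIS app
    private readonly LuisRecognizer childLuisRecognizer;  // a child LUIS app
    private readonly QnAMaker qnaMaker;                   // a QnA knowledge base

    public DispatchRouter(LuisRecognizer dispatch, LuisRecognizer childLuis, QnAMaker qna)
    {
        dispatchRecognizer = dispatch;
        childLuisRecognizer = childLuis;
        qnaMaker = qna;
    }

    public async Task RouteAsync(ITurnContext turnContext, CancellationToken cancellationToken)
    {
        // A single call to the dispatch app decides where the utterance goes.
        var dispatchResult = await dispatchRecognizer.RecognizeAsync(turnContext, cancellationToken);
        var (intent, _) = dispatchResult.GetTopScoringIntent();

        if (intent == "q_MyKnowledgeBase") // hypothetical QnA source name
        {
            var answers = await qnaMaker.GetAnswersAsync(turnContext);
            if (answers.Any())
            {
                await turnContext.SendActivityAsync(answers[0].Answer, cancellationToken: cancellationToken);
            }
        }
        else if (intent == "l_MyChildLuisApp") // hypothetical child LUIS app name
        {
            var luisResult = await childLuisRecognizer.RecognizeAsync(turnContext, cancellationToken);
            // ...act on luisResult.GetTopScoringIntent() as usual...
        }
    }
}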
Related
I am going to create a multi-turn dialog, but I don't understand how it should be connected with LUIS models. I checked the documentation, but there are only samples with one-turn dialogs. Also, I use the Virtual Assistant template.
I want to do something like this:
User: I want to book a flight
Bot: What is the destination?
User: London
Bot: When?
User: 21st of September.
Bot: The ticket was bought.
The questions are: what happens on the second step? Should I check out the dispatcher? Should I add all possible phrases for all steps inside the intent?
General LUIS stuff
For your LUIS model you will need your intents - BookFlight and None. Under your BookFlight intent you will have your Utterances - all the phrases you want to be able to trigger the BookFlight intent.
MyLuisApp
--BookFlight
----I want to book a flight
----Book a flight
----I need a plane ticket
----etc
--None
----Utterances that don't match any of your intents
The None intent is VERY important, as per this documentation.
Adding this functionality to a new bot or the core bot template
There are a couple of different samples provided on how you could achieve this, but the best way is using Dialogs. What you want is a Waterfall Dialog. Inside this dialog you can define each stage in the waterfall, e.g. ask for destination, ask for date, etc.
In order to trigger the BookFlight waterfall you would have a MainDialog that handles every request and checks with the LUIS dispatcher (link1 and link2) to find out the user's intent, as per this example. If the intent is BookFlight then you would start the BookFlightDialog, which contains the book flight waterfall.
...
// Check dispatch result
var dispatchResult = await cognitiveModels.DispatchService.RecognizeAsync<DispatchLuis>(dc.Context, CancellationToken.None);
var intent = dispatchResult.TopIntent().intent;

if (intent == DispatchLuis.Intent.BookFlight)
{
    // Start BookFlightDialog
    await dc.BeginDialogAsync(nameof(BookFlightDialog));
}
General Waterfall Dialog stuff
You'd define your steps as something like:
var waterfallSteps = new WaterfallStep[]
{
    AskDestinationAsync,
    AskDepartureDateAsync,
    ConfirmStepAsync,
    FinishDialogAsync,
};
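To wire these up, here is a minimal sketch of what a BookFlightDialog could look like as a ComponentDialog (the prompt ids, texts, and step bodies are assumptions, not the linked sample's actual code):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public class BookFlightDialog : ComponentDialog
{
    public BookFlightDialog() : base(nameof(BookFlightDialog))
    {
        var waterfallSteps = new WaterfallStep[]
        {
            AskDestinationAsync,
            AskDepartureDateAsync,
            ConfirmStepAsync,
            FinishDialogAsync,
        };

        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), waterfallSteps));
        AddDialog(new TextPrompt("destinationPrompt"));
        AddDialog(new TextPrompt("datePrompt"));
        InitialDialogId = nameof(WaterfallDialog);
    }

    private async Task<DialogTurnResult> AskDestinationAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // First step: ask for the destination; the reply arrives in the next step.
        return await stepContext.PromptAsync("destinationPrompt",
            new PromptOptions { Prompt = MessageFactory.Text("What is the destination?") },
            cancellationToken);
    }

    private async Task<DialogTurnResult> AskDepartureDateAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // stepContext.Result carries the destination from the previous step.
        stepContext.Values["destination"] = (string)stepContext.Result;
        return await stepContext.PromptAsync("datePrompt",
            new PromptOptions { Prompt = MessageFactory.Text("When?") },
            cancellationToken);
    }

    private async Task<DialogTurnResult> ConfirmStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        stepContext.Values["date"] = (string)stepContext.Result;
        // A real bot would book the flight here before confirming.
        await stepContext.Context.SendActivityAsync(MessageFactory.Text("The ticket was bought."), cancellationToken);
        return await stepContext.NextAsync(null, cancellationToken);
    }

    private async Task<DialogTurnResult> FinishDialogAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        return await stepContext.EndDialogAsync(null, cancellationToken);
    }
}

Each user reply to a prompt arrives in the next step's stepContext.Result; that is how the waterfall carries state from "What is the destination?" to "When?" across turns.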
For your scenario there is actually a sample with the BookFlight intent that has already been created, available here. There is a full guide on how to get it set up and working in the official documentation, so you can test how everything works and then modify it as you need.
Other interesting links:
Custom prompt sample - roll your own.
Multi-turn sample - waterfall dialog.
Virtual Assistant stuff
Once you understand how the above works you will be able to modify the Virtual Assistant template to handle the BookFlight intent by taking the following actions:
Adding a BookFlight intent to your existing LUIS DISPATCH app that is connected to your VA template.
Adding utterances to the BookFlight intent.
Save and train your LUIS app.
Publish your LUIS app.
Running the update_cognitive_models.ps1 script, as per step 3 of the instructions here, which will pull down the changes (your new intent and utterances):
.\Deployment\Scripts\update_cognitive_models.ps1 -RemoteToLocal
NOTE: This command must be run using PowerShell Core and from the root of your project directory, i.e. inside your Virtual Assistant folder.
The result of running this script should be a bunch of files created locally, as well as the DispatchLuis.cs file being updated to include your new intent. You should also check the Summary.html file that is created to see that your new intent is there. You will now have to update the VA code to actually do something when your new intent is triggered - add another if/case statement inside the RouteAsync method of the MainDialog.cs file - see here for an example.
Something like this:
MainDialog.cs
protected override async Task RouteAsync(DialogContext dc, CancellationToken cancellationToken = default(CancellationToken))
{
    // Call dispatch to get the intent (same pattern as the earlier snippet)
    var dispatchResult = await cognitiveModels.DispatchService.RecognizeAsync<DispatchLuis>(dc.Context, cancellationToken);
    var intent = dispatchResult.TopIntent().intent;

    if (intent == DispatchLuis.Intent.bookflight)
    {
        // Start BookFlightDialog
        await dc.BeginDialogAsync(nameof(BookFlightDialog));
    }
    ...
}
Our bot has a large set of skills (LUIS and QnA Maker intents), but not all skills are useful for all users. We would like to suppress some skills depending on who the user is.
We developed some code to do this for LUIS intents, dynamically filtering intents returned from our model to only include ones appropriate for the current user.
The issue we have, though, is that we also have a couple of QnA models in our array of recognizers (recognizer_array) ...
var intents = new builder.IntentDialog({
    recognizers: recognizer_array,
    intentThreshold: 0.85,
    recognizeOrder: 'series'
});
bot.dialog('/mainDialogue', intents);
But not all users need these QnA models; we would like to exclude them for some users.
Is there a way to dynamically filter intents so that we could exclude these QnA intents for some users?
I have a bot which uses LUIS and QnA Maker.
Now, I am able to send queries and get back responses in my bot based on the search keyword. But in case my search keyword is used in multiple questions, QnA Maker just retrieves the first matching QnA pair.
Consider the QnA pairs below:
Q: What is flexible working? A: Flexibility to work from home
Q: How to avail flexible working? A: Get in touch with manager
If the user types the exact question and hits enter, the response is the answer matching that question. But if the user types just "flexible working", the response is only the first QnA answer. In this case I would like to retrieve both questions and throw them back to the user as a choice of questions to choose from.
I tried overriding RespondFromQnAMakerResultAsync and also checked the QnA Maker APIs. Unfortunately I didn't find any way to do this.
Any help on this, please? Let me know if I can rephrase or clarify more.
in case my search keyword is used in multiple questions, the QnA maker just retrieves the first matching QnA pair
You can try to specify the top parameter for QnAMakerAttribute, which controls the number of answers to return.
The definition of QnAMakerAttribute:
public QnAMakerAttribute(string subscriptionKey, string knowledgebaseId, string defaultMessage = null, double scoreThreshold = 0.3, int top = 1);
In your QnaDialog, you can specify it like this:
public QnaDialog() : base(new QnAMakerService(new QnAMakerAttribute("{subscriptionKey_here}", "{knowledgebaseId_here}", "Sorry, I couldn't find an answer for that", 0.5, 5)))
{
}
Edit:
The above approach worked for me; it can prompt with the matching questions and then show the answer for the selected question.
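If you need more control over how the multiple answers are presented, overriding RespondFromQnAMakerResultAsync is another option; here is a rough sketch (the dialog name, prompt wording, pendingAnswers field, and resume handler are assumptions):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.CognitiveServices.QnAMaker;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public class ChoiceQnaDialog : QnAMakerDialog
{
    // Maps each matched question to its answer so the resume handler can look it up.
    private Dictionary<string, string> pendingAnswers = new Dictionary<string, string>();

    public ChoiceQnaDialog() : base(new QnAMakerService(new QnAMakerAttribute("{subscriptionKey_here}", "{knowledgebaseId_here}", "Sorry, I couldn't find an answer for that", 0.5, 5)))
    {
    }

    protected override async Task RespondFromQnAMakerResultAsync(IDialogContext context, IMessageActivity message, QnAMakerResults result)
    {
        if (result.Answers.Count > 1)
        {
            // Several QnA pairs matched: let the user pick the question they meant.
            pendingAnswers = result.Answers.ToDictionary(a => a.Questions.First(), a => a.Answer);
            PromptDialog.Choice(context, this.AfterQuestionChosenAsync, pendingAnswers.Keys, "Which question did you mean?");
        }
        else
        {
            await base.RespondFromQnAMakerResultAsync(context, message, result);
        }
    }

    private async Task AfterQuestionChosenAsync(IDialogContext context, IAwaitable<string> result)
    {
        // Show the stored answer that belongs to the chosen question.
        var chosenQuestion = await result;
        await context.PostAsync(pendingAnswers[chosenQuestion]);
        context.Wait(this.MessageReceivedAsync);
    }
}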
I am building a bot using the given technology stacks:
Microsoft Bot Builder
Node.js
Dialogflow.ai (api.ai)
We used the waterfall model to implement a matched-intent dialog, which includes a couple of handler functions and prompts. Following this scenario, I need to identify the entities in a user input inside an inner handler function.
eg:
Bot : Where do you want to fly?
User: Singapore. (For this we added entities like SIN - Singapore, with SIN as a synonym, so I need to resolve the value as SIN.)
Any help on this scenario is much appreciated.
Here is a post, Using api.ai with microsoft bot framework, that you can refer to for your requirement, with a sample at https://github.com/GanadiniAkshay/weatherBot/blob/master/api.ai/index.js. The weather API key leveraged in this sample is out of date, but the waterfall and the recognizer API key are still working.
Generally speaking:
Use api-ai-recognizer
Instantiate the apiairecognizer and leverage builder.IntentDialog to include the recognizer:
var recognizer = new apiairecognizer("<api_key>");
var intents = new builder.IntentDialog({
    recognizers: [recognizer]
});
In the IntentDialog handlers, use builder.EntityRecognizer.findEntity(args.entities, '<entity>'); to recognize the entities of the intent.
I have an app in LUIS with one intent "Help" (apart from None), and when I test it with utterances that my intent does not cover (e.g. "Hi man"), LUIS resolves to the "Help" intent... I have no utterances in the "None" intent...
What should I do? Should I add all the utterances I don't want to match the "Help" intent to "None"?
Would I need to know everything a user could ask my bot that is not related to "Help"?
For me, that doesn't make sense at all... and yet I think that is exactly how LUIS works...
Intents are the actions which we define; None is a predefined intent that comes along with every LUIS model you create. Coming back to your problem: you have defined only one intent, i.e. "Help", so whenever LUIS gets any query it will return the highest-scoring intent, i.e. "Help". Whenever you create an intent, make sure to save at least 5-6 related utterances, so that LUIS can generate a pattern out of them; the more related utterances you define, the better the accuracy of the results you will get.
If you want LUIS to respond to "Hi man", create a new intent 'Greet', save some utterances, and let LUIS do the remaining work. Lastly, about the None intent: if a user inputs a string like 'asdsafdasdsfdsf', your bot should be able to handle it and respond accordingly, e.g. "this asdsafdasdsfdsf is irrelevant to me". In simple terms, any irregular action that the user wants the bot to perform comes under the None intent. I hope this helps.
You can check the score of the LUIS intent and then accordingly send a default response from code. The utterances which are configured will have a greater score. Also, the LUIS app should be balanced in terms of the utterances configured, as there is no defined way to point utterances to the None intent. Please check this link for best practices:
https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-best-practices. Also, to highlight: LUIS does not do keyword matching against the utterances you have configured; it works based on the data you add to LUIS in the respective intents.
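For example, a minimal sketch of checking the score before trusting the intent (v4 SDK; the recognizer instance, the 0.7 threshold, and the reply texts are assumptions):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.Luis;

public class ScoreGate
{
    private readonly LuisRecognizer luisRecognizer;

    public ScoreGate(LuisRecognizer recognizer) => luisRecognizer = recognizer;

    public async Task HandleAsync(ITurnContext turnContext, CancellationToken cancellationToken)
    {
        var result = await luisRecognizer.RecognizeAsync(turnContext, cancellationToken);
        var (intent, score) = result.GetTopScoringIntent();

        if (intent == "Help" && score >= 0.7) // 0.7 is an arbitrary example threshold
        {
            await turnContext.SendActivityAsync("Here is some help...", cancellationToken: cancellationToken);
        }
        else
        {
            // Low confidence: send a default response instead of trusting the match.
            await turnContext.SendActivityAsync("Sorry, I didn't understand that.", cancellationToken: cancellationToken);
        }
    }
}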