I am going to create a multi-turn dialog, but I don't understand how it should be connected with LUIS models. I checked the documentation, but the samples only show single-turn dialogs. Also, I am using the Virtual Assistant template.
I want to do something like this.
User: I want to book a flight
Bot: What is the destination?
User: London
Bot: When?
User: 21st of September.
Bot: The ticket was bought.
My questions are: what happens on the second step? Should I check the dispatcher? Should I add all possible phrases for all steps inside the intent?
General LUIS stuff
For your LUIS model you will need your intents - BookFlight and None. Under your BookFlight intent you will have your Utterances - all the phrases you want to be able to trigger the BookFlight intent.
MyLuisApp
--BookFlight
----I want to book a flight
----Book a flight
----I need a plane ticket
----etc
--None
----Utterances that don't match any of your intents
The None intent is VERY important, as explained in this documentation.
Adding this functionality to a new bot or the core bot template
There are a couple of different samples provided showing how you could achieve this, but the best way is using Dialogs. What you want is a Waterfall Dialog. Inside this dialog you define each stage of the waterfall, e.g. ask for destination, ask for date, etc.
In order to trigger the BookFlight waterfall you would have a MainDialog that handles every request and checks with the LUIS dispatcher (link1 and link2) to find out the user's intent, as per this example. If the intent is BookFlight then you would start the BookFlightDialog, which contains the book flight waterfall.
...
// Check dispatch result
var dispatchResult = await cognitiveModels.DispatchService.RecognizeAsync<DispatchLuis>(dc.Context, CancellationToken.None);
var intent = dispatchResult.TopIntent().intent;

if (intent == "BookFlight")
{
    // Start BookFlightDialog
    await dc.BeginDialogAsync(nameof(BookFlightDialog));
}
General Waterfall Dialog stuff
You'd define your steps as something like:
var waterfallSteps = new WaterfallStep[]
{
    AskDestinationAsync,
    AskDepartureDateAsync,
    ConfirmStepAsync,
    FinishDialogAsync,
};
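To make the waterfall concrete, here is a minimal sketch of what a complete BookFlightDialog could look like using the SDK v4 ComponentDialog pattern. The prompt wording and the TextPrompt/ConfirmPrompt choices are illustrative assumptions, not taken from the official sample:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public class BookFlightDialog : ComponentDialog
{
    public BookFlightDialog() : base(nameof(BookFlightDialog))
    {
        var waterfallSteps = new WaterfallStep[]
        {
            AskDestinationAsync,
            AskDepartureDateAsync,
            ConfirmStepAsync,
            FinishDialogAsync,
        };

        // The waterfall and every prompt it uses must be registered on the dialog set.
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), waterfallSteps));
        AddDialog(new TextPrompt(nameof(TextPrompt)));
        AddDialog(new ConfirmPrompt(nameof(ConfirmPrompt)));
    }

    private async Task<DialogTurnResult> AskDestinationAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // First step: prompt for the destination city.
        return await stepContext.PromptAsync(nameof(TextPrompt),
            new PromptOptions { Prompt = MessageFactory.Text("What is the destination?") }, cancellationToken);
    }

    private async Task<DialogTurnResult> AskDepartureDateAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // The previous step's answer arrives in stepContext.Result.
        stepContext.Values["destination"] = (string)stepContext.Result;
        return await stepContext.PromptAsync(nameof(TextPrompt),
            new PromptOptions { Prompt = MessageFactory.Text("When?") }, cancellationToken);
    }

    private async Task<DialogTurnResult> ConfirmStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        stepContext.Values["date"] = (string)stepContext.Result;
        return await stepContext.PromptAsync(nameof(ConfirmPrompt),
            new PromptOptions { Prompt = MessageFactory.Text($"Book a flight to {stepContext.Values["destination"]} on {stepContext.Values["date"]}?") },
            cancellationToken);
    }

    private async Task<DialogTurnResult> FinishDialogAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // ConfirmPrompt returns a bool.
        if ((bool)stepContext.Result)
        {
            await stepContext.Context.SendActivityAsync(MessageFactory.Text("The ticket was bought."), cancellationToken);
        }
        return await stepContext.EndDialogAsync(null, cancellationToken);
    }
}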
For your scenario there is actually a sample with the BookFlight intent that has already been created, available here. There is a full guide on how to get this set up and working in the official documentation, so you can test how everything works and then modify it as you need.
Other interesting links:
Custom prompt sample - roll your own.
Multi-turn sample - waterfall dialog.
Virtual Assistant stuff
Once you understand how the above works you will be able to modify the Virtual Assistant template to handle the BookFlight intent by taking the following actions:
Add a BookFlight intent to the existing LUIS dispatch app that is connected to your VA template.
Add utterances to the BookFlight intent.
Save and train your LUIS app.
Publish your LUIS app.
Run the update_cognitive_models.ps1 script as per step 3 of the instructions here, which will pull down the changes (your new intent and utterances).
.\Deployment\Scripts\update_cognitive_models.ps1 -RemoteToLocal
NOTE: This command must be run using PowerShell Core and from the root of your project directory, i.e. inside your Virtual Assistant folder.
The result of running this script should be a number of files created locally, as well as the DispatchLuis.cs file being updated to include your new intent. You should also check the generated Summary.html file to confirm that your new intent is there. You will then need to update the VA code to actually do something when your new intent is triggered: add another if/case statement inside the RouteAsync method of the MainDialog.cs file - see here for an example.
Something like this:
MainDialog.cs
protected override async Task RouteAsync(DialogContext dc, CancellationToken cancellationToken = default(CancellationToken))
{
    // Call dispatch to get the intent (same pattern as the earlier snippet)
    var dispatchResult = await cognitiveModels.DispatchService.RecognizeAsync<DispatchLuis>(dc.Context, CancellationToken.None);
    var intent = dispatchResult.TopIntent().intent;

    if (intent == DispatchLuis.Intent.bookflight)
    {
        // Start BookFlightDialog
        await dc.BeginDialogAsync(nameof(BookFlightDialog));
    }
    ...
}
Related
I am trying to implement a bot which uses QnA services and Azure Search.
I am taking help from the C# QnA Maker sample GitHub code.
It uses a BotServices.cs class which takes a QnA service in its constructor. This BotServices object is passed to the QnABot class constructor.
I want to use a DialogSet in QnABot's constructor, which needs accessors to be added. I really don't understand how to add an accessor class and use it in Startup.cs.
I tried to copy some code from other samples, but it didn't work.
Please help me add an accessor to the BotServices constructor so that I can use dialog sets inside of it.
I would like to extend the QnA sample for my purpose.
Can you tell us why you want to pass a DialogSet to the BotServices class? That class is only used to reference external services such as QnA Maker and LUIS. If you want to start a Dialog, do so in the OnTurnAsync method of the QnABot.cs class. Keep in mind that this method, as written in this specific sample, will send a response on every message the user sends, even if they are working through a dialog. You could change OnTurnAsync in such a way that the first step in the dialog is to check QnA Maker. See the Enterprise Bot sample for how to start a dialog as well as how to add an accessor to a child Dialog. The following snippet from the MainDialog.cs class shows how they added the accessor:
protected override async Task OnStartAsync(DialogContext innerDc, CancellationToken cancellationToken = default(CancellationToken))
{
    // Create the accessor on user state, then read (or initialize) the onboarding state.
    var onboardingAccessor = _userState.CreateProperty<OnboardingState>(nameof(OnboardingState));
    var onboardingState = await onboardingAccessor.GetAsync(innerDc.Context, () => new OnboardingState());

    var view = new MainResponses();
    await view.ReplyWith(innerDc.Context, MainResponses.Intro);

    if (string.IsNullOrEmpty(onboardingState.Name))
    {
        // This is the first time the user is interacting with the bot, so gather onboarding information.
        await innerDc.BeginDialogAsync(nameof(OnboardingDialog));
    }
}
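For the Startup.cs part of your question: the accessor above only works if a UserState instance is available. In the SDK v4 samples this is typically registered in ConfigureServices. A minimal sketch, assuming the standard registration pattern from the SDK samples (adapt it to whatever your QnA sample already registers):

using Microsoft.Bot.Builder;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // A storage layer backs all state; MemoryStorage is for local development only.
    services.AddSingleton<IStorage, MemoryStorage>();

    // UserState is what _userState is in the snippet above; inject it into your bot or dialog.
    services.AddSingleton<UserState>();
    services.AddSingleton<ConversationState>();
}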
I am building a bot using the following technology stack:
Microsoft Bot Builder
Node.js
Dialogflow.ai (api.ai)
We used the waterfall model to implement a matched-intent Dialog, which includes a couple of handler functions and prompts. Following this scenario, I need to identify the entities for a user input inside an inner handler function.
e.g.:
Bot: Where do you want to fly?
User: Singapore. (For this we added entities like SIN - Singapore, SIN (synonym), so I need to resolve the value as SIN.)
Any help on this scenario is much appreciated.
Here is a post, Using api.ai with microsoft bot framework, which you can refer to for your requirement, with a sample at https://github.com/GanadiniAkshay/weatherBot/blob/master/api.ai/index.js. The weather API key leveraged in this sample is out of date, but the waterfall and recognizer code still works.
Generally speaking:
Use api-ai-recognizer.
Instantiate the apiairecognizer and leverage builder.IntentDialog to include the recognizer:
var builder = require('botbuilder');
var apiairecognizer = require('api-ai-recognizer');

var recognizer = new apiairecognizer("<api_key>");
var intents = new builder.IntentDialog({
    recognizers: [recognizer]
});
In the IntentDialog handlers, use builder.EntityRecognizer.findEntity(args.entities, '<entity>') to extract the entities recognized for the intent.
I have an app in LUIS with one intent "Help" (apart from None), and when I test it with utterances that my intent does not cover (e.g. "Hi man"), LUIS resolves to the "Help" intent... I have no utterances in the "None" intent...
What should I do? Should I add all the utterances I don't want to match the "Help" intent to "None"?
Would I need to know everything a user could ask my bot that is not related to "Help"?
To me, that makes no sense at all... and yet I think that is exactly how LUIS works...
Intents are the actions which we define; None is a predefined intent that comes with every LUIS model you create. Coming back to your problem: you have defined only one intent, i.e. "Help", so whenever LUIS gets any query it will return the highest-scoring intent, i.e. "Help". Whenever you create an intent, make sure to save at least 5-6 related utterances, so that LUIS can generate a pattern out of them; the more related utterances you define, the better the accuracy of the results you will get.
If you want LUIS to respond to "Hi man", create a new intent 'greet', save some utterances, and let LUIS do the remaining work. Lastly, about the None intent: if a user inputs a string like 'asdsafdasdsfdsf', your bot should be able to handle it and respond accordingly, e.g. 'this asdsafdasdsfdsf is irrelevant to me'. In simple terms, any irregular action that the user wants the bot to perform comes under the None intent. I hope this helps.
You can check the score of the LUIS intent and then send a default response from code accordingly. The utterances which have been configured will get a higher score. Also, the LUIS app should be balanced in terms of the utterances configured, as there is no defined way to point utterances to the None intent. Please check this link for best practices:
https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-best-practices. Also, to highlight: LUIS does not work by keyword-matching against the utterances you have configured; it works from the data you add to LUIS under the respective intents.
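To illustrate the score check in code, here is a minimal C# sketch (SDK v4). The luisRecognizer parameter, the handler name, and the 0.5 threshold are assumptions for the example - tune the threshold against your own model:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.Luis;

public static async Task HandleMessageAsync(LuisRecognizer luisRecognizer, ITurnContext turnContext, CancellationToken cancellationToken)
{
    // Recognize the utterance and inspect the top intent's confidence score.
    var result = await luisRecognizer.RecognizeAsync(turnContext, cancellationToken);
    var (intent, score) = result.GetTopScoringIntent();

    if (intent == "Help" && score > 0.5)
    {
        await turnContext.SendActivityAsync("Here is some help...", cancellationToken: cancellationToken);
    }
    else
    {
        // Low confidence: treat it like the None intent and send a default response.
        await turnContext.SendActivityAsync("Sorry, I didn't get that.", cancellationToken: cancellationToken);
    }
}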
I started working with LUIS and the Bot Framework recently, after having had some experience with API.AI / Google Home development.
The sample below (from https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-dialog-waterfall) exemplifies a step-by-step interaction with a user. First it asks for a date, then a number, then a name for the reservation, and so on.
var bot = new builder.UniversalBot(connector, [
    function (session) {
        session.send("Welcome to the dinner reservation.");
        builder.Prompts.time(session, "Please provide a reservation date and time (e.g.: June 6th at 5pm)");
    },
    function (session, results) {
        session.dialogData.reservationDate = builder.EntityRecognizer.resolveTime([results.response]);
        builder.Prompts.text(session, "How many people are in your party?");
    },
    function (session, results) {
        session.dialogData.partySize = results.response;
        builder.Prompts.text(session, "Whose name will this reservation be under?");
    },
    function (session, results) {
        session.dialogData.reservationName = results.response;
        // Process request and display reservation details
        session.send("Reservation confirmed. Reservation details: <br/>Date/Time: %s <br/>Party size: %s <br/>Reservation name: %s",
            session.dialogData.reservationDate, session.dialogData.partySize, session.dialogData.reservationName);
        session.endDialog();
    }
]);
In my code, I have a similar multi-parameter dialog, but I want to allow the user to answer with multiple pieces of information at the same time in any of their responses. For example, after providing the reservation date the user could say "a reservation for Robert for 10 people", so both the number of people and the reservation name are given at the same time.
To identify these entities I suppose I have to call LUIS and get the entities resolved from the session context. I noticed that the bot object has a recognize method that I think can work for that.
My question is: how do I organize the structure of the code and the LUIS utterances and entities? Right now I have an intent with some entities and several sample utterances, but if I send this 'partial' user sentence I think it will not be mapped to the same intent and may not identify the entities in such a short sentence.
How should I handle this? Do I need to provide samples for the intent with these partial sentences, which may contain only some of the entities?
Thanks
Yes, you should provide samples for all the utterances that you want your intent to recognize. Not a million samples, but just enough to get everything trained.
Then, the other problem you might want to solve next is asking for the information for those entities missing from the utterance. You can do that manually, or you could go one step further and explore the LUIS Action Binding library.
I'm using the MS BotBuilder to create a language understanding bot. I have a dialog readProfile that's triggered on the Read intent, which is trained on LUIS.
bot.dialog('readProfile', [
    function (session, args) {
        var entities = args.intent.entities;
        console.log("entities : ", entities);
    }
]).triggerAction({
    matches: 'Read'
}).cancelAction('cancelReadProfile', "Ok.", {
    matches: /^(cancel|nevermind)/i
});
The LUIS model is trained to recognise entities like Profile and others, so I do get the entity in the console.
However, I wish to trigger the dialog only if the entity recognised is Profile. I can add some logic to act only when the entity in args is Profile, but I'm wondering if there's a built-in / more elegant way to do this.
Thanks for your input.
I think using a logic statement in the first step of the readProfile dialog is the best way to do this. If no Profile entity is found, end the dialog with a message like "It looks like you're trying to read a profile, but I couldn't figure out which profile you want to read." This has the advantage of giving the user some feedback about their action and helping them figure out what they need to fix.
You could try to train the LUIS model to have a strong correlation between having a Profile entity and the Read intent. Enter a few utterances that are really close to the Read intent but don't include a Profile entity, and mark them with the None intent. That doesn't guarantee that it will never match a Read intent without a Profile, though, so I'd still recommend the step above.