I am searching for documentation on integrating the QnA Maker API with LUIS in the Azure Bot Framework, but after a lot of research I couldn't find any such document.
If anyone has come across the same scenario, please share your approach.
I am using C# here.
There are several general ways to do it, but it's ultimately up to you as the Bot developer to decide how to structure it.
A general overview is provided in the docs here, but if you want a more code-oriented sample, this blog post should help you:
Dialog management with QnA, Luis, and Scorables
In the sample, the LuisDialog acts as a kind of message controller, routing the user to a particular dialog based on intent. This can also be used to direct a user to a QnA dialog:
[Serializable]
[LuisModel("YourLuisAppID", "YourLuisSubscriptionKey")]
public class RootLuisDialog : LuisDialog<object>
{
    // Methods to handle LUIS intents

    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        // Forward to the QnA dialog and let QnA Maker handle the
        // user's query when no intent is recognized
        await context.Forward(new QnaDialog(), ResumeAfterQnaDialog,
            context.Activity, CancellationToken.None);
    }

    [LuisIntent("Some-Intent-Like-Get-Weather")]
    public async Task GetWeather(IDialogContext context, LuisResult result)
    {
        // Some tasks, forward to another dialog, etc.
    }
}
This is one way to do it, and a popular one. In this setup, if LUIS cannot detect an intent, the user's query is routed to a QnA dialog so the QnA service (which you train) can answer it.
Alternatively, you can create a dedicated "Question" intent and forward to QnA that way when the user's intent is to ask a question. This is trickier, however, because it requires custom code to compare the confidence 'scores' of the responses.
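For the score-handling step, the routing logic can be sketched roughly as follows. This is only an illustration: the qnaClient object, its GetAnswerAsync method, and the threshold value are hypothetical stand-ins, not part of the Bot Framework SDK.

```csharp
// Hypothetical sketch: only answer from QnA Maker when its result
// clears a confidence threshold; otherwise fall back to a default.
const double QnaScoreThreshold = 0.5; // illustrative value

var qnaResult = await qnaClient.GetAnswerAsync(result.Query);
if (qnaResult != null && qnaResult.Score >= QnaScoreThreshold)
{
    await context.PostAsync(qnaResult.Answer);
}
else
{
    await context.PostAsync("Sorry, I couldn't find an answer for that.");
}
```

The threshold is something you tune against your own knowledge base; too low and QnA Maker answers questions it shouldn't, too high and valid answers get dropped.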
Hope this was enough to help you get to what you need!
EDIT - Apologies, fixed the first link.
In addition, I'll paste three common scenarios listed in the docs as ways you can combine LUIS and QnA Maker:
1) Call both QnA Maker and LUIS at the same time, and respond to the user using information from the first one that returns a score above a specific threshold.
2) Call LUIS first, and if no intent meets a specific threshold score (i.e., the "None" intent is triggered), then call QnA Maker. Alternatively, create a LUIS intent for QnA Maker, feeding your LUIS model with example QnA questions that map to a "QnAIntent."
3) Call QnA Maker first, and if no answer meets a specific threshold score, then call LUIS.
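As a rough illustration of scenario 1, the two services can be queried in parallel and the first sufficiently confident result used. Everything here is a sketch: the luisClient/qnaClient objects, their methods, the HandleIntentAsync helper, and the threshold values are hypothetical, not SDK types.

```csharp
// Hypothetical sketch of scenario 1: query LUIS and QnA Maker in
// parallel, then answer from whichever result clears its threshold.
var luisTask = luisClient.QueryAsync(userText);
var qnaTask = qnaClient.GetAnswerAsync(userText);
await Task.WhenAll(luisTask, qnaTask);

if (luisTask.Result.TopScoringIntent.Score >= 0.7)
{
    await HandleIntentAsync(context, luisTask.Result); // route to a dialog
}
else if (qnaTask.Result.Score >= 0.5)
{
    await context.PostAsync(qnaTask.Result.Answer);
}
else
{
    await context.PostAsync("Sorry, I didn't understand that.");
}
```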
Related
I'm creating a new bot using Bot Composer and QnA Maker. The general set-up is pretty straightforward and the bot now answers questions from the QnA Maker knowledge base. However, it looks like there are some limitations that I can't find any reference to in the documentation. Specifically:
We have an existing QnA knowledge base that is used with a custom-coded bot. It's updated by various scripts and custom user interfaces. To use this knowledge base in Composer I seem to need to connect to it so that it creates a local copy within Composer. At that point, can I only update it via Composer?
Using the "QnA intent recognised" trigger with the pre-configured actions doesn't seem to offer any customisable parameters, such as an accuracy threshold or metadata filtering.
Using the "QnA intent recognised" trigger with the pre-configured actions doesn't seem to offer the user alternative answers (if multiple results are returned from QnA Maker) or allow any form of user feedback to QnA Maker.
Can anyone confirm that these points are correct, or am I missing something? I have attempted to use the 'QnA Maker dialog' action, which enables you to add various parameters, but I can't find any documentation on how to process questions and answers with it.
Thanks
I've been reading the article on setting up the Bot Framework Dispatch middleware and there are a few things I don't understand. The article is https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-tutorial-dispatch?view=azure-bot-service-4.0&tabs=cs
The article says to use Dispatch when you have multiple LUIS and/or QnA models, but it seems that as soon as you have one of each (a LUIS model and a QnA model) you already need Dispatch. Is there a way to avoid using Dispatch if I have just one of each?
How does the Dispatch LUIS model get maintained when the underlying LUIS or QnA models change?
The idea of the Dispatch tool (which is in fact based on LUIS) is to dispatch intents across several systems. As soon as you have more than one system that can understand intents (whether LUIS, QnA Maker, or a third party), how would you know which one is best for a given query?
In a few words, Dispatch groups the intents of each system into global intents, so when you call it you know which system is the best match; you then route your sentence to that best-matching system to get the right granularity.
And as you mentioned, there's no secret to maintaining it: it must be updated whenever you update your underlying LUIS intents/utterances or QnA Maker knowledge bases.
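In v4 bot code, the routing step looks roughly like this. The intent names are illustrative (Dispatch generates one top-level intent per child LUIS app or QnA knowledge base), and the Process*Async helpers are hypothetical:

```csharp
// Sketch: the dispatch model returns a top-level intent that names
// the best-matching child service; route the utterance accordingly.
var recognizerResult = await dispatchRecognizer.RecognizeAsync(turnContext, cancellationToken);
var (intent, score) = recognizerResult.GetTopScoringIntent();

switch (intent)
{
    case "l_WeatherApp":   // hypothetical child LUIS app
        await ProcessWeatherLuisAsync(turnContext, recognizerResult);
        break;
    case "q_FaqKb":        // hypothetical QnA Maker knowledge base
        await ProcessQnAAsync(turnContext);
        break;
    default:
        await turnContext.SendActivityAsync("Sorry, I didn't get that.");
        break;
}
```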
From what I understand, the Bing Spell Check API in Azure can be integrated with LUIS queries but not with QnA queries on their own. However, when you use the dispatch multi-model pattern with a parent LUIS app and child apps, the top-level query going through LUIS can run through the Bing Spell Check API.
Is this the suggested method for ensuring some spell checking is applied to a QnA knowledge base?
Yes, you can do this with a Dispatch bot. When you get the results back from the recognizer, there will be an alteredText value if spell check made a correction. What you want to do is replace the original text with this new value.
const recognizerResult = await this.dispatchRecognizer.recognize(context);
if (recognizerResult.alteredText) {
    context.activity.text = recognizerResult.alteredText;
}
// <code to select intent>
const processResult = await this.qnaDialog.processAsync(userDialog.qnaState, context.activity);
QnA Maker should now receive the query with the altered text. I don't have this exact implementation, but I had to do something similar where I modified context.activity.text to remove @ and # mentions from Teams, which were affecting intent identification and QnA answers.
As billoverton mentioned, if you integrate Bing Spell Check with LUIS then you can access the spellchecked utterance in the alteredQuery property of the LUIS results. If you don't have a LUIS model that you're passing utterances through before they reach QnA Maker, you can always call the Bing Spell Check API directly with HTTP requests or using an Azure SDK.
Once you have the spellchecked utterance, you can pass it along to QnA Maker through a QnA dialog by modifying the turn context's activity like billoverton suggested, or again you can call the API directly. There's plenty of information in the QnA Maker documentation about generating answers with the REST API or an SDK.
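For reference, a direct call can be sketched like this. The endpoint and header follow the Bing Spell Check v7 REST API; the key placeholder and the response handling are illustrative only:

```csharp
// Sketch: call the Bing Spell Check v7 REST API directly, then send
// the corrected text on to QnA Maker.
var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-spell-check-key>");

var form = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["text"] = userText,
    ["mode"] = "spell"
});
var response = await http.PostAsync(
    "https://api.bing.microsoft.com/v7.0/spellcheck?mkt=en-US", form);
var json = await response.Content.ReadAsStringAsync();

// The JSON response lists "flaggedTokens", each with suggested
// corrections; apply the top suggestion per token before querying QnA.
```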
I am using the .NET v3 SDK of the Azure Bot Framework. I integrated a Bing Spell Check API service with my Web App Bot and enabled the service in my LUIS model as well. I thought the spell check service would correct typos once the user asks the bot a question. I am sure the spell check service works, as the number of calls increases each time I test the bot, but how can I get the suggested text from the spell check service? Do I have to code this functionality into the bot? Thanks in advance for any help.
A query that's been corrected by Bing Spell Check gets sent in the LUIS result's alteredQuery property.
In your LUIS dialog, you can access the AlteredQuery property like this:
[LuisIntent("None")]
public async Task NoneIntent(IDialogContext context, LuisResult result)
{
    await context.PostAsync($"I think you meant \"{result.AlteredQuery}\"");
}
Hi, I am implementing a bot using LUIS. I want to take input from the user and give an answer; if the user enters incomplete information, I want to reply with a follow-up prompt dialog, which I used to get from the LUIS JSON.
I had implemented the same in LUIS, but after the changes to the LUIS UI I am not able to find the steps to implement the dynamic conversation mentioned above.
Looking for guidance on the same. Thanks in advance.
In the old version of LUIS, I had designed the LUIS application to handle the action, and I was getting a dialog like the one below:
"dialog": {
    "prompt": "which food do you want?",
    "parameterName": "Food Name",
    "parameterType": "foodName",
    "contextId": "ae5de259-6a9b-476c-bbb8-1be7fceba761",
    "status": "Question"
}
But in the current (updated) LUIS UI, I cannot find the steps to implement the same. I am looking for the same type of dialog (mentioned above) when the user enters incomplete information.
Regards,
Lax
Action Binding and Action Parameters have been deprecated (as mentioned in the UI); you can no longer do that within LUIS.
The good news is that a library was created to support that scenario, so you will be able to accomplish pretty much the same thing in a bot, a web app, or even a console app.
Here is a set of blog posts that explain how this library works:
Implementing LUIS Action Binding on the Client
Luis Action Binding for Web Apps
Luis Action Binding for Console Apps