Retrieve multiple questions matching the search keyword in QnA Maker - botframework

I have a bot which uses LUIS and QnA Maker.
Now, I am able to send queries and get back a response in my bot based on the search keyword. But if my search keyword is used in multiple questions, QnA Maker just retrieves the first matching QnA pair.
Consider the QnA pairs below:
What is flexible working? Flexibility to work from home
How to avail flexible working? Get in touch with manager
If the user types the exact question and hits enter, the response is the answer which matches that question. But if the user types just flexible working, the response is only the first QnA answer. In that case I would like to retrieve both questions and present them to the user as a choice of questions to pick from.
I tried overriding RespondFromQnAMakerResultAsync and also checked the QnA Maker APIs. Unfortunately I didn't find any way to do this.
Any help on this please? Let me know if I can rephrase or clarify more on this.

in case my search keyword is used in multiple questions, the QnA maker just retrieves the first matching QnA pair
You can try to specify the top parameter for QnAMakerAttribute, which controls the number of answers to return.
The definition of QnAMakerAttribute:
public QnAMakerAttribute(string subscriptionKey, string knowledgebaseId, string defaultMessage = null, double scoreThreshold = 0.3, int top = 1);
In your QnaDialog, you can specify it like this:
public QnaDialog() : base(new QnAMakerService(new QnAMakerAttribute("{subscriptionKey_here}", "{knowledgebaseId_here}", "Sorry, I couldn't find an answer for that", 0.5, 5)))
{
}
Edit:
The above approach worked for me; it can prompt the user with the matched questions and show the answer for the selected question.

Related

How to do intent classification with similar examples in nlu.md in Rasa

I am developing a chatbot using Rasa for a Contract Manager organisation. I am facing a few issues, and after reading a lot on the forums and the Rasa blog, I am unable to arrive at a solution. I have several similar intents with similar examples, like:
“inform_supplier_start_date” and “inform_contract_start_date”.
“inform_supplier_email” and “inform_customer_email” and “inform_reviewer_email”
Now the issue is that for both categories of intents the example sentences in nlu.md are the same. What I mean exactly is:
## intent:inform_supplier_start_date
- what is the supplier [Microsoft](supplier_name) start date
- [EON Digital](supplier_name) start date

## intent:inform_contract_start
- start-date of [O2 Mobile phones](contract_name)
- [O2 Mobile phones](contract_name) start date
The model isn't able to differentiate and identify the correct intent; it gets confused and picks the wrong intent, since the words in these intents are similar.
I need the correct intent to be recognised so that, in a custom action, I can query the database accordingly and get the corresponding result for the supplier or contract.
I have many fields like this for which the example data and user queries will be the same. For example:
customer_email & supplier_email & reviewer_email
total_spend_contract & total_spend_supplier & total_spend_customer
contract_number_for_supplier & contract_number_of_contract & contract_number_organisation
What exactly should I be doing to get correct classification? One solution I am thinking of is merging intents like "supplier_start_date" and "contract_start_date" into one "start_date" intent and checking the extracted entity inside custom actions against both the supplier and contract databases. But I don't think that would be proper usage of natural language understanding.
Please suggest; I shall be highly grateful. Regards.
As the examples for your intents are very similar, the model will not be able to differentiate between them. Also, the intent is actually the same: inform_supplier_start_date and inform_contract_start both inform the bot about a start date. What kind of start date it is should be figured out via entity recognition. So I would propose merging the similar intents and checking what the entity recognition detected, as sketched below. Depending on whether a supplier or a contract was found, you can execute query A or B.
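For example, the merged training data could look roughly like this (a sketch; the inform_start_date intent name is an illustrative assumption, while the entity annotations are taken from the examples above):
## intent:inform_start_date
- what is the supplier [Microsoft](supplier_name) start date
- [EON Digital](supplier_name) start date
- start-date of [O2 Mobile phones](contract_name)
- [O2 Mobile phones](contract_name) start date
The custom action can then branch on whether a supplier_name or a contract_name entity was extracted and query the corresponding database.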

Qnamaker API Replace Alteration

I have multiple QnA knowledge bases in our QnA Maker. I used the QnA Maker API and successfully uploaded alteration data using the PUT method
PUT https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/alterations
and I am able to retrieve the same using the GET method
GET https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/alterations
But still my bot is not recognizing the alternate keywords. Any thoughts?
Is there any limit on the maximum number of alterations that we can add? I uploaded around 1000 alternate keyword combinations.
For example:
In my Knowledge Base, here is one example question
What is Variable API?
I want the bot to identify "What is VariableAPI ?" and "What is Variable API" as the same question and respond with the same answer. For this, I uploaded alterations using the QnA Maker API (PUT) method.
{ "wordAlterations": [
{
"alterations": [
"Variable API",
"VariableAPI"
]
}
] }
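For reference, a minimal Node.js sketch of that PUT call (the subscription key placeholder is illustrative; Ocp-Apim-Subscription-Key is the standard Azure Cognitive Services auth header):
// Sketch: replace the word alterations via the QnA Maker v4.0 API.
var https = require('https');

var body = JSON.stringify({
  wordAlterations: [
    { alterations: ['Variable API', 'VariableAPI'] }
  ]
});

var req = https.request({
  hostname: 'westus.api.cognitive.microsoft.com',
  path: '/qnamaker/v4.0/alterations',
  method: 'PUT',
  headers: {
    'Ocp-Apim-Subscription-Key': '{subscriptionKey_here}', // placeholder
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, function (res) {
  // A 2xx status means the alterations were replaced.
  console.log('Status:', res.statusCode);
});

req.on('error', console.error);
req.write(body);
req.end();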
Please, anyone, help me understand what I am doing wrong and why my QnA Maker can't treat these variations as the same word.

Unable to understand correct use of None intent in LUIS

I have an app in LUIS with one intent, "Help" (apart from None), and when I test it with utterances that my intent does not cover (e.g. "Hi man"), LUIS resolves to the "Help" intent... I have no utterances in the "None" intent...
What should I do? Should I add all the utterances I don't want to match the "Help" intent to "None"?
Do I need to know everything a user could ask my bot that is not related to "Help"?
For me, that doesn't make sense at all... and yet I think that is exactly how LUIS works...
Intents are the actions which we define; None is a predefined intent that comes along with every LUIS model you create. Coming back to your problem: you have only defined one intent, i.e. "Help", so whenever LUIS gets any query it will show the highest-scoring intent, i.e. "Help". Whenever you create an intent, make sure to save at least 5-6 related utterances so that LUIS can generate a pattern out of them; the more related utterances you define, the better the accuracy of the results you will get.
If you want LUIS to respond to "Hi man", create a new intent 'greet', save some utterances, and let LUIS do the remaining work. Lastly, about the None intent: if a user inputs a string like 'asdsafdasdsfdsf', your bot should be able to handle it and respond accordingly, e.g. 'asdsafdasdsfdsf is irrelevant to me'. In simple terms, any irregular action that the user wants the bot to perform comes under the None intent. I hope this helps.
You can check the score of the LUIS intent and then send a default response from code accordingly; the utterances which are configured will have a greater score. Also, the LUIS app should be balanced in terms of the utterances configured, as there is no defined way to point arbitrary utterances to the None intent. Please check this link for best practices: https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-best-practices. Also, to highlight: LUIS does not work by keyword matching against the utterances you have configured; it works on the data you add to LUIS in the respective intents.
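A minimal sketch of that score check with the Node.js Bot Builder SDK (botbuilder v3 is assumed; the 0.7 threshold and the model URL placeholder are illustrative):
// Sketch: send a default reply when the top intent scores low.
var builder = require('botbuilder');

var connector = new builder.ChatConnector();
var bot = new builder.UniversalBot(connector);

var recognizer = new builder.LuisRecognizer('{luis_model_url_here}');
var intents = new builder.IntentDialog({ recognizers: [recognizer] });

intents.matches('Help', function (session, args) {
    // args.score is the confidence LUIS assigned to the 'Help' intent.
    if (args.score < 0.7) {
        session.send("Sorry, I'm not sure what you mean. Could you rephrase?");
        return;
    }
    session.send('Here is how I can help you...');
});

// Utterances LUIS cannot match with enough confidence land here.
intents.onDefault(function (session) {
    session.send("Sorry, I didn't understand that.");
});

bot.dialog('/', intents);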

Correct way to handle entities anytime in the middle of a conversation

I have started working with LUIS and the Bot Framework recently, after also having some experience with API.AI / Google Home development.
The sample below (from https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-dialog-waterfall) exemplifies a step-by-step interaction with a user: first it asks for a date, then a number, then a name for the reservation, and so on.
var bot = new builder.UniversalBot(connector, [
    function (session) {
        session.send("Welcome to the dinner reservation.");
        builder.Prompts.time(session, "Please provide a reservation date and time (e.g.: June 6th at 5pm)");
    },
    function (session, results) {
        session.dialogData.reservationDate = builder.EntityRecognizer.resolveTime([results.response]);
        builder.Prompts.text(session, "How many people are in your party?");
    },
    function (session, results) {
        session.dialogData.partySize = results.response;
        builder.Prompts.text(session, "Who's name will this reservation be under?");
    },
    function (session, results) {
        session.dialogData.reservationName = results.response;
        // Process request and display reservation details
        session.send("Reservation confirmed. Reservation details: <br/>Date/Time: %s <br/>Party size: %s <br/>Reservation name: %s",
            session.dialogData.reservationDate, session.dialogData.partySize, session.dialogData.reservationName);
        session.endDialog();
    }
]);
In my code, I have a similar multi-parameter dialog, but I want to allow the user to provide multiple pieces of information at the same time in any of their responses. For example, after providing the reservation date the user could say "a reservation for Robert for 10 people", so both the number of people and the reservation name are given at the same time.
To identify these entities in the text, I suppose I have to call LUIS and get the entities resolved from the session context. I notice that the bot object has a recognizer method that I think could work for that.
My question is: how do I organize the structure of the code and the LUIS utterances and entities? Right now I have an intent with some entities and several sample utterances, but if I send this 'partial' user sentence I think it will not be mapped to the same intent and may not identify the entities in such a short sentence.
How should I handle this? Do I need to provide samples for the intent with these partial sentences, which may contain only some of the entities?
Thanks
Yes, you should provide samples for all those utterances that you want your intent to recognize. Not a million samples, but just enough to get everything trained.
Then the next problem you might want to solve is asking for the information for those entities missing from the utterance. You can do that manually, as sketched below, or you could go one step further and explore the LUIS Action Binding library.
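A rough sketch of the manual approach with botbuilder v3 (the PartySize and ReservationName entity names and the model URL placeholder are illustrative assumptions, not from the post):
// Sketch: run a free-form reply through LUIS and fill whatever entities
// it finds, so the waterfall only prompts for the pieces still missing.
var builder = require('botbuilder');

function fillFromUtterance(session, text, done) {
    builder.LuisRecognizer.recognize(text, '{luis_model_url_here}', function (err, intents, entities) {
        if (!err && entities) {
            var size = builder.EntityRecognizer.findEntity(entities, 'PartySize');
            var name = builder.EntityRecognizer.findEntity(entities, 'ReservationName');
            if (size) { session.dialogData.partySize = size.entity; }
            if (name) { session.dialogData.reservationName = name.entity; }
        }
        done();
    });
}
Each waterfall step can then check session.dialogData first and skip any prompt whose value has already been filled.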

MS BotBuilder: How can I set a combination of an intent and an entity to trigger a dialog?

I'm using the MS BotBuilder to create a language understanding bot. I have a readProfile dialog that's triggered on the Read intent, which is trained in LUIS.
bot.dialog('readProfile', [
    function (session, args) {
        var entities = args.intent.entities;
        console.log("entities : ", entities);
    }
]).triggerAction({
    matches: 'Read'
}).cancelAction('cancelReadProfile', "Ok.", {
    matches: /^(cancel|nevermind)/i
});
The LUIS model is trained to recognise entities like Profile and others, so I do see the entity in the console.
However, I wish to trigger the dialog only if the recognised entity is Profile. I can add some logic to act only when the entity in args is Profile, but I am wondering if there's a built-in / more elegant way to do this.
Thanks for your input.
I think using a logic statement in the first step of the readProfile dialog is the best way to do this. If no Profile entity is found, end the dialog with a message like "It looks like you're trying to read a profile, but I couldn't figure out which profile you're trying to read." This has the advantage of giving the user some feedback about their action and helping them figure out what they need to fix.
You could try to train the LUIS model to have a strong correlation between having a Profile entity and the Read intent. Enter a few utterances that are really close to the Read intent but don't include a Profile entity and mark them with the None intent. That doesn't guarantee it will never match the Read intent without a Profile, though, so I'd still recommend the step above.
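A minimal sketch of that first-step check (assuming botbuilder v3 and the bot object from the question; the fallback wording is illustrative):
// Sketch: bail out early when the Read intent fired without a Profile entity.
var builder = require('botbuilder');

bot.dialog('readProfile', [
    function (session, args) {
        var profile = builder.EntityRecognizer.findEntity(args.intent.entities, 'Profile');
        if (!profile) {
            // Give the user feedback so they can rephrase.
            session.endDialog("It looks like you're trying to read a profile, but I couldn't figure out which one.");
            return;
        }
        session.send('Reading profile: %s', profile.entity);
    }
]).triggerAction({ matches: 'Read' });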
