We are automating our internal policies using a chatbot. Most of the policies are in PDF and Word documents. What is the best way to store the data used for responses? Should we extract the content from the documents and store it in a database, or is there a better approach? Each document can generate many possible questions from users.
One option is to look into Microsoft's Cognitive Service called QnA Maker.
What QnA Maker is: here.
Sample bot implementing QnA Maker: here.
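For example, once the content from the PDF and Word documents has been imported into a QnA Maker knowledge base, a bot can query it with the botbuilder-ai package. The following is only a minimal sketch, assuming placeholder values for the knowledge base ID, endpoint key, and host that you would take from your own QnA Maker resource:

const { ActivityHandler } = require('botbuilder');
const { QnAMaker } = require('botbuilder-ai');

class PolicyBot extends ActivityHandler {
    constructor() {
        super();
        // Placeholder values - use the settings from your own QnA Maker resource
        this.qnaMaker = new QnAMaker({
            knowledgeBaseId: '<your-kb-id>',
            endpointKey: '<your-endpoint-key>',
            host: 'https://<your-qna-resource>.azurewebsites.net/qnamaker'
        });

        this.onMessage(async (context, next) => {
            // Ask the knowledge base for the best matching answer
            const results = await this.qnaMaker.getAnswers(context, { top: 1, scoreThreshold: 0.5 });
            if (results.length > 0) {
                await context.sendActivity(results[0].answer);
            } else {
                await context.sendActivity('Sorry, I could not find an answer in the policy documents.');
            }
            await next();
        });
    }
}

module.exports.PolicyBot = PolicyBot;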
I'm creating a new bot using Bot Composer and QnA Maker. The general set-up is pretty straightforward and the bot now answers questions from the QnA Maker knowledge base. However, there seem to be some limitations that I can't find any reference to in the documentation. Specifically:
We have an existing QnA knowledge base that is used with a custom-coded bot. It's updated by various scripts and custom user interfaces. To use this knowledge base in Composer, I seem to need to connect to it so that Composer creates a local copy. At that point, can I only update it via Composer?
Using the 'QnA intent recognised' trigger with the pre-configured actions doesn't seem to offer any customisable parameters, such as an accuracy threshold or metadata filtering?
Using the 'QnA intent recognised' trigger with the pre-configured actions doesn't seem to offer the user alternative answers (if multiple results are returned from QnA Maker) or allow any form of user feedback back to QnA Maker.
Can anyone please confirm whether these points are correct, or whether I'm missing something? I have attempted to use the 'QnA Maker dialog' action, which lets you set various parameters, but I can't see any documentation on how to process questions and answers with it.
Thanks
From what I understand, the Bing Spell Check API in Azure can be integrated with LUIS queries but not with QnA queries on their own. However, when you use the Dispatch multi-model pattern with a parent LUIS app and child apps, the overall (top-level) query coming through LUIS can be run through the Bing Spell Check API.
Is this the suggested method for ensuring some spell checking is applied to a QnA knowledge base?
Yes, you can do this with a dispatch bot. When you get the results back from the recognizer, there will be an alteredText value if spell check made a correction. What you want to do is replace the original text with this new value.
// Run the dispatch recognizer (LUIS + Bing Spell Check) over the incoming activity
const recognizerResult = await this.dispatchRecognizer.recognize(context);
// If spell check corrected the utterance, replace the original text with the corrected version
if (recognizerResult.alteredText) {
    context.activity.text = recognizerResult.alteredText;
}
// ... code to select intent ...
// Pass the (possibly corrected) activity on to the QnA Maker dialog
const processResult = await this.qnaDialog.processAsync(userDialog.qnaState, context.activity);
QnA Maker should now receive the query with the altered text. I don't have this exact implementation, but I had to do something similar where I modified context.activity.text to remove @ and # mentions from Teams, which were affecting intent identification and QnA answers.
As billoverton mentioned, if you integrate Bing Spell Check with LUIS, you can access the spell-checked utterance in the alteredQuery property of the LUIS results. If you don't have a LUIS model that you're passing utterances through before they reach QnA Maker, you can always call the Bing Spell Check API directly with HTTP requests or through an Azure SDK.
Once you have the spell-checked utterance, you can pass it along to QnA Maker through a QnA dialog by modifying the turn context's activity as billoverton suggested, or you can again call the API directly. There's plenty of information in the QnA Maker documentation about generating answers with the REST API or an SDK.
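For a rough sketch of that direct approach (no LUIS in between), the flow could look like the following. The keys, host, and knowledge base ID are placeholders, and the request shapes should be checked against the Bing Spell Check v7 and QnA Maker REST documentation for your resources:

const fetch = require('node-fetch');

// Placeholder keys and endpoints - substitute your own resource values
const SPELL_KEY = '<bing-spell-check-key>';
const QNA_HOST = 'https://<your-qna-resource>.azurewebsites.net/qnamaker';
const QNA_KB_ID = '<your-kb-id>';
const QNA_ENDPOINT_KEY = '<your-qna-endpoint-key>';

// Run the user's utterance through Bing Spell Check and apply the top suggestions
async function spellCheck(text) {
    const url = 'https://api.cognitive.microsoft.com/bing/v7.0/spellcheck'
        + '?mode=spell&mkt=en-US&text=' + encodeURIComponent(text);
    const res = await fetch(url, { headers: { 'Ocp-Apim-Subscription-Key': SPELL_KEY } });
    const body = await res.json();
    let corrected = text;
    for (const token of body.flaggedTokens || []) {
        if (token.suggestions && token.suggestions.length > 0) {
            corrected = corrected.replace(token.token, token.suggestions[0].suggestion);
        }
    }
    return corrected;
}

// Send the corrected question to QnA Maker's generateAnswer endpoint
async function askQnaMaker(question) {
    const res = await fetch(`${QNA_HOST}/knowledgebases/${QNA_KB_ID}/generateAnswer`, {
        method: 'POST',
        headers: {
            'Authorization': `EndpointKey ${QNA_ENDPOINT_KEY}`,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({ question, top: 1 })
    });
    const body = await res.json();
    return body.answers && body.answers.length > 0 ? body.answers[0].answer : null;
}

// Usage: spell check first, then query the knowledge base
// const corrected = await spellCheck('how do I reqest anual leave');
// const answer = await askQnaMaker(corrected);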
Can someone please provide a solution for integrating LUIS with multiple QnA knowledge bases?
You would basically create a LUIS app, then for each intent have it call the appropriate QnA knowledge base. There is a tutorial on how to do this here: "Integrate QnA Maker and LUIS to distribute your knowledge base".
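As a rough sketch of that pattern with the botbuilder-ai package (the application ID, keys, hosts, and intent names below are placeholders), the parent LUIS app picks the intent and the bot then queries the matching knowledge base:

const { LuisRecognizer, QnAMaker } = require('botbuilder-ai');

// Parent LUIS app - placeholder values
const recognizer = new LuisRecognizer({
    applicationId: '<luis-app-id>',
    endpointKey: '<luis-key>',
    endpoint: 'https://<region>.api.cognitive.microsoft.com'
});

// One QnAMaker instance per knowledge base - placeholder values
const qnaByIntent = {
    HrPolicies: new QnAMaker({ knowledgeBaseId: '<hr-kb-id>', endpointKey: '<key>', host: '<host>' }),
    ItSupport: new QnAMaker({ knowledgeBaseId: '<it-kb-id>', endpointKey: '<key>', host: '<host>' })
};

async function answer(context) {
    // Let LUIS pick the intent, then route to the matching knowledge base
    const result = await recognizer.recognize(context);
    const intent = LuisRecognizer.topIntent(result);
    const qna = qnaByIntent[intent];
    if (!qna) {
        return 'Sorry, I am not sure which knowledge base covers that.';
    }
    const answers = await qna.getAnswers(context);
    return answers.length > 0 ? answers[0].answer : 'No answer found in the knowledge base.';
}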
I have developed an FAQ bot using C# and Bot Builder SDK 3.15.3. We have a large set of question/answer pairs which are uploaded to a QnA Maker service. I have enabled the Direct Line channel and the bot is displayed on a web page. I have used the Web Chat control provided by Microsoft, with some customization and skinning.
Now I want to enable voice interaction with the bot, and for that I decided to use the Microsoft Speech to Text Cognitive Service.
What I want is that whenever a user speaks an utterance, it is sent to my bot service the same way text is sent. Then, inside the C# code, I want to run Speech to Text, run a spell check on the recognized text, and finally send it to the QnA Maker service. For now the response will only be shown as text, but I may also opt to read the response out to the user.
Kindly guide me on how this is achievable; after looking at CognitiveService.js and other articles on enabling speech, I notice that the Web Chat control sends the voice input directly to the Speech to Text service.
You can make a hybrid between a calling bot, which utilizes speech to text, and a QnA bot to achieve your goal. For the calling bot, look over the SimpleIVRbot sample to get you going. For QnA Maker, you can reference the SimpleQnABot sample. It shouldn't take too much work to bridge the two into a single unified bot. Just be sure to remove duplicate code and combine files where necessary.
Hope this helps!
I am very new to Microsoft LUIS and am evaluating the feasibility of using LUIS to build a Q&A chatbot that provides IT technical support to our users.
We have two years of support logs in one mailbox. My original idea is to extract this support log to train and test LUIS; if the test results are acceptable, we may enable the chatbot in Skype to provide support to our users.
After going through the following documentation and course:
https://learn.microsoft.com/en-us/azure/cognitive-services/luis/
https://courses.edx.org/courses/course-v1:Microsoft+DEV328x+2T2018/course/
my understanding is that, to build a technical support chatbot, I have to:
1) Manually create the entities/intents/utterances in LUIS and then train and test LUIS, or achieve this programmatically via the LUIS API (perhaps like what is mentioned in this thread: Approaches to improve Microsoft ChatBot with each user conversation by learning from it?). The purpose of this step is that, when users raise questions, the chatbot can match users' questions to the defined intents.
2) Customize the reactions based on the intents identified above: answer the questions for users, or redirect the questions to a human if an answer cannot be found.
My question is whether it is possible for LUIS to be trained on those two years of support logs (fed in some format such as JSON) and then automatically generate the intents/entities/utterances, as well as the answers, for our testing?
Your kind input or advice would be highly appreciated.
Best regards
Patrick
My question is whether it is possible for LUIS to be trained on those two years of support logs (fed in some format such as JSON) and then automatically generate the intents/entities/utterances, as well as the answers, for our testing?
Yes. By using batch testing you can upload up to 1,000 utterances at a time to test, which will serve part of that purpose. However, in this case you still have to create the intents and entities yourself.
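For reference, a batch test file is a JSON array of labelled utterances (up to 1,000 entries per file); the intent and entity names below are only illustrative placeholders:

[
  {
    "text": "my laptop will not connect to the vpn",
    "intent": "NetworkIssue",
    "entities": [
      { "entity": "Device", "startPos": 3, "endPos": 8 }
    ]
  },
  {
    "text": "how do I reset my password",
    "intent": "PasswordReset",
    "entities": []
  }
]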
My question is whether it is possible for LUIS to be trained on those two years of support logs (fed in some format such as JSON) and then automatically generate the intents/entities/utterances, as well as the answers, for our testing?
Yes, LUIS provides a programmatic API that does everything the LUIS website does. This can save time when you have pre-existing data and it would be faster to create a LUIS app programmatically than by entering the information by hand.
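As a rough sketch of that approach (the region, authoring key, and the logEntries structure with question/category fields are all placeholder assumptions, and the exact routes should be checked against the LUIS authoring API reference), you could create the app, add intents, upload example utterances parsed from the support log, and kick off training programmatically:

const fetch = require('node-fetch');

// Placeholder authoring endpoint and key for your LUIS resource
const AUTHORING = 'https://<region>.api.cognitive.microsoft.com/luis/api/v2.0';
const HEADERS = {
    'Ocp-Apim-Subscription-Key': '<authoring-key>',
    'Content-Type': 'application/json'
};

// logEntries is a hypothetical array like [{ question: '...', category: '...' }]
// parsed from the two years of support emails
async function buildSupportApp(logEntries) {
    // 1) Create the LUIS app
    let res = await fetch(`${AUTHORING}/apps/`, {
        method: 'POST',
        headers: HEADERS,
        body: JSON.stringify({ name: 'IT Support', culture: 'en-us', initialVersionId: '0.1' })
    });
    const appId = await res.json();

    // 2) Create one intent per support category found in the log
    const intents = [...new Set(logEntries.map(e => e.category))];
    for (const name of intents) {
        await fetch(`${AUTHORING}/apps/${appId}/versions/0.1/intents`, {
            method: 'POST',
            headers: HEADERS,
            body: JSON.stringify({ name })
        });
    }

    // 3) Upload labelled utterances extracted from the support log
    //    (the batch examples endpoint accepts a limited number per call,
    //     so a real script would chunk this array)
    const examples = logEntries.map(e => ({ text: e.question, intentName: e.category, entityLabels: [] }));
    await fetch(`${AUTHORING}/apps/${appId}/versions/0.1/examples`, {
        method: 'POST',
        headers: HEADERS,
        body: JSON.stringify(examples)
    });

    // 4) Train the version
    await fetch(`${AUTHORING}/apps/${appId}/versions/0.1/train`, {
        method: 'POST',
        headers: HEADERS
    });
    return appId;
}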