My intent is not matching the expected response - azure-language-understanding

I'm using Botkit with the LUIS recognizer, and below is the sample code:
bot.dialog('OnboardingDialog', (session) => {
    // ...
}).triggerAction({ matches: 'OnboardingBook' })
I have defined the onboarding intent's response as below:
OnboardingBook:{buttonTitle:'',url:'',response:'new hire can be onboarded some extra stuff',title:'Onboarding book'}
CookBook: {buttonTitle:'',url:'',response:'this is about cookbook',title:'cook'}
Ideally, if I type "onboarding" in my chat bot, it should return the response defined for the Onboarding intent. But right now it's giving me another intent's answer. If I type "onboarding book", it gives me the CookBook intent's response.
Please help me understand why some intents are matching ones other than expected. Is there any logic behind this, and how can I resolve the problem?

When you test your LUIS app in the LUIS portal, do you get the correct intent? If so, then perhaps you need to save and train, then publish your LUIS app again. If you do not get the correct result in the LUIS portal, you can click "Inspect" and edit the top scoring intent (save and train, then publish to push the change live).
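When the portal gives the correct intent but the bot does not, it can also help to log the raw prediction the bot actually receives. A minimal sketch, assuming the LUIS v2 prediction response shape (query, topScoringIntent, intents) and a hypothetical 0.5 confidence threshold of my own choosing:

```javascript
// Given a parsed LUIS v2 prediction response, report the top scoring
// intent and flag low-confidence matches that may need more training
// utterances.
function checkTopIntent(luisResponse, threshold = 0.5) {
  const top = luisResponse.topScoringIntent;
  return {
    intent: top.intent,
    score: top.score,
    confident: top.score >= threshold,
  };
}

// Example: a response where "onboarding book" narrowly matched CookBook.
// Scores here are made up for illustration.
const sample = {
  query: 'onboarding book',
  topScoringIntent: { intent: 'CookBook', score: 0.42 },
  intents: [
    { intent: 'CookBook', score: 0.42 },
    { intent: 'OnboardingBook', score: 0.39 },
  ],
};

console.log(checkTopIntent(sample));
```

A close score gap like the one above usually means the two intents need more, and more distinct, example utterances.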

Microsoft Bot Framework: TurnContext

I am new to the Microsoft Bot Framework and have seen the term TurnContext many times.
Can someone explain what it actually means and its significance?
e.g. turnContext(adapter, activity)
When using a bot, the user and the bot take turns to speak. Within the Bot Framework, a turn is a user's incoming activity to which the bot responds. Every message a bot receives from a user starts a new turn.
If the user asks "What is the weather like today?", the bot may respond with "Where would you like the weather for?". That is all one turn. The user then responds with "London"; this is a new turn.
The turnContext is the object that gives you access to information about the current turn, including, for example, the current message sent by the user. For a full specification see here. It's also used to send messages back to the user; SendActivityAsync is one method to do this.
Take a look at the Microsoft article I used as the basis of this answer, which goes into far more detail. Also look through the Bot Framework Samples and step through the code to learn more about the turnContext.
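The idea can be sketched with a stripped-down stand-in (this is not the real botbuilder class; the adapter and its deliver method are simplified inventions for illustration):

```javascript
// Simplified stand-in for the SDK's TurnContext, for illustration only.
// A turn bundles the incoming activity with a way to reply to it.
class TurnContext {
  constructor(adapter, activity) {
    this.adapter = adapter;   // knows how to deliver outgoing activities
    this.activity = activity; // the incoming message for this turn
  }

  // Mirrors the role of sendActivity / SendActivityAsync:
  // reply to the user within the same turn.
  async sendActivity(text) {
    return this.adapter.deliver({ type: 'message', text });
  }
}

// A toy adapter that just records what the bot sends.
const sent = [];
const adapter = { deliver: async (activity) => sent.push(activity) };

// One turn: the user's question comes in, the bot replies.
const context = new TurnContext(adapter, {
  type: 'message',
  text: 'What is the weather like today?',
});
context.sendActivity('Where would you like the weather for?');
```

The real class carries much more (state accessors, middleware hooks), but the core shape is the same: one incoming activity, plus the means to respond to it.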

What is the best practice to review endpoint utterances in a LUIS Dispatch model?

What is the best way to enhance a Dispatch model in LUIS? Checking the Dispatch app's "Review endpoint utterances" and updating it does not affect the original LUIS or QnA apps I have. Should I always update the other apps manually based on the received endpoint utterances, or is there a better practice for improving a Dispatch bot?
Good question. Updating the Dispatch app directly on the LUIS portal is not currently recommended, as utterances added directly will be overridden (removed) when the Dispatch app is refreshed via the "dispatch refresh" CLI command. To add utterances shown in "Review Endpoint Utterances" to Dispatch, add them to a file (one line per utterance, one file per dispatch intent). Then add the intent file to Dispatch as a source, i.e.:
dispatch add -t file -f <> --intentName <>
If the utterances are never seen in the underlying LUIS app, add them directly to the app.
The Review Endpoint Utterances page shows you utterances from users as well as from your own tests, which you can quickly assign to intents to improve the model. To make changes to the model visible, you need to first Train and then Publish.

LUIS API 'FewLabels' issue

I am developing a bot using the LUIS framework by Microsoft. I am able to create the application, intents, and utterances, but when I try to train and publish I get the following error:
{
  modelId: 'some-model-id',
  details: {
    statusId: 1,
    status: 'Fail',
    exampleCount: 0,
    failureReason: 'FewLabels'
  }
}
Because of this I am not able to publish my LUIS application. I can't find much information about the cause and prevention of this issue in Microsoft's documentation.
Thanks to @Nicolas R.
This happens because one of the intents has zero utterances at training time.
So if you are using the LUIS API, make sure that each intent has at least one utterance.
'FewLabels' seems like the wrong failure reason, though; it should be something like 'NoUtterance' or 'ZeroUtterance', or a detailed message such as "Unable to train application XYZ because intent ABC has zero utterances".

'Oops. Something went wrong and we need to start over.' with Microsoft Bot Service with LUIS Template

I have created a Bot Service with the LUIS template using Node.js. I trained my LUIS model with our domain utterances and it was working fine. Suddenly, since about a week ago, I am facing weird behavior from the Bot Service: for any request, the bot replies with the message 'Oops. Something went wrong and we need to start over.' Has anyone encountered a similar issue? Please share your inputs on resolving it.
Appreciate your help.
When a user query exceeds 500 characters, LUIS gives an error. Check how long the user message is, and then test it in the LUIS portal. Also check the subscription key being used.
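If long queries turn out to be the cause, a guard before the LUIS call avoids the failure. A sketch, assuming the 500-character limit mentioned above (the constant and function names are mine):

```javascript
// Guard against the LUIS query length limit before calling the service.
// 500 is the limit mentioned in the answer above; adjust if your
// service's documented limit differs.
const LUIS_MAX_QUERY_LENGTH = 500;

function prepareLuisQuery(text) {
  if (text.length > LUIS_MAX_QUERY_LENGTH) {
    // Truncate (or reject, depending on your UX) rather than let the
    // downstream LUIS call fail with an opaque error.
    return text.slice(0, LUIS_MAX_QUERY_LENGTH);
  }
  return text;
}
```

Whether to truncate silently or ask the user to rephrase is a product decision; the important part is failing in your code, not inside the LUIS call.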

Need suggestions in getting the conversation details

I am creating a bot using the MS Bot Framework - Node.js. The below information needs to be captured for logging (using the bot.use method, i.e. IMiddleware).
Receive:
a. UserId
b. UserInput (text)
c. ConversationId
Send:
1. Name of Intent or dialog name that handled this (that handled the user input text)
2. Bot output text
3. ConversationId
4. UserId
I am unable to get the required details for the 'send'. Can anyone provide some suggestions on this?
Thanks.
I believe your main struggle is logging the name of the intent or dialog. You won't know it in your send middleware if you haven't captured it during the routing phase. Once the Bot Framework has figured out where to send the incoming message, it just invokes that function.
These two articles may help you get what you want. Just recently I played with capturing the conversation's breadcrumbs and also logging a full transcript:
http://www.pveller.com/smarter-conversations-part-3-breadcrumbs/
http://www.pveller.com/smarter-conversations-part-4-transcript/
If you need to build a reliable capture engine, I would suggest not using session.privateConversationData like I did, and instead building your own storage/log infrastructure to push the events to. Just stream them out with a timestamp and conversationId, and reconcile on the other end later. The asynchronous nature of everything the Bot Framework does internally will haunt you along the way, which is why. Plus, once you scale beyond testing on a few users and your bot spans multiple processes, you will be outside the single-threaded event loop.
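The send-side capture can be sketched as follows. This assumes the botbuilder v3 outgoing event shape (event.text, event.address.user.id, event.address.conversation.id); the in-memory array stands in for whatever storage/log infrastructure you stream to, and the mock event at the end is made up to exercise the hook:

```javascript
// Collected log entries; in a real bot, replace this with a call to
// your storage/log infrastructure.
const logEntries = [];

// Sketch of the send hook you would register via bot.use(...) in
// botbuilder v3. It captures timestamp, conversationId, userId, and
// the bot's output text; the intent/dialog name must be captured
// separately during routing, as noted above.
const loggingMiddleware = {
  send: function (event, next) {
    logEntries.push({
      timestamp: new Date().toISOString(),
      conversationId: event.address.conversation.id,
      userId: event.address.user.id,
      botOutput: event.text,
    });
    next();
  },
};

// Exercise the hook with a mock outgoing event.
loggingMiddleware.send(
  {
    type: 'message',
    text: 'Where would you like the weather for?',
    address: { user: { id: 'user-1' }, conversation: { id: 'conv-42' } },
  },
  () => {}
);
```

With conversationId and a timestamp on every entry, the receive-side and send-side streams can be reconciled later even though they arrive asynchronously.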
