Dialogflow CX - DetectIntent Response - Alternate matched intent not coming in the response - dialogflow-cx

I am trying to replicate a scenario where the alternate matched intents field returns other intents with a close match/close confidence score. However, the API always returns one intent with the highest confidence score and skips all the others. Is there any way to make it behave like the ES version?
Steps followed: I created Intent1 with the utterance "Check balance" and Intent2 with "test balance". In the test window, if I type "Balance", it always returns only Intent2.
"Alternative Matched Intents": [
{
"Id": "84383366-215f-40a3-9ba6-464238f0c2aa",
"Score": 0.5985087752342224,
"DisplayName": "Intent2",
"Type": "NLU",
"Active": true
}
]

The "Alternative Matched Intents" field helps you debug which intents are matched by the user utterance in the current flow model; it also contains the matched intent itself.
Moreover, only intents that are referenced in the flow (e.g. used by a transition route on a page in the flow) are picked up by the flow model.
Note that intents which are not referenced in the flow will not be included in the "Alternative Matched Intents" field.
Here are some sample scenarios where we can use the "Alternative Matched Intents" field for debugging purposes:
Example 1:
If you have Intent-1 and Intent-2, which both contain the training phrase "hello", and both are referenced in the current flow (see image above: Intent-1 is referenced on the Intent1 page and Intent-2 on the Intent2 page), then when the user says "hello" on the start page, Intent-1 will be triggered, and both Intent-1 and Intent-2 will be included in the "Alternative Matched Intents" field. Note that the agent still matches the user utterance to Intent-2; it is the configuration of the pages/transitions (i.e. the state model) that results in Intent-1 being triggered.
Result:
Example 2:
If you have Intent-3, which has the training phrase "check balance" and is referenced at a lower level of your current flow, then when the user says "check balance" on the start page, NO intent will be matched, since based on the current flow you have to go through Intent-1 and Intent-2 before Intent-3 can be matched (see image above: Intent-3 is referenced on the Intent3 page). However, Intent-3 will still be included in the "Alternative Matched Intents" field, since it is referenced in the current flow.
Result:
Moreover, as you can see in the results above, there is an "Active" field that indicates whether the intent is active (true) or not (false). Active intents are the intents that are in the current scope. For example, if the user is currently on the "Intent1" page, then only the intents in the current flow's (START_PAGE's) transition routes, the current page's (Intent1's) transition routes, or its transition route groups are considered active intents. See https://cloud.google.com/dialogflow/cx/docs/concept/handler#scope for more details.
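If you are reading this field programmatically, it arrives inside the response's diagnostic info. A minimal sketch, assuming the diagnostic info has already been converted to a plain dict (e.g. with MessageToDict); the helper name and the score threshold are my own:

```python
# Sketch: inspecting "Alternative Matched Intents" from a DetectIntent
# response's diagnostic info, represented here as a plain dict.

def active_alternatives(diagnostic_info, min_score=0.3):
    """Return active alternative intents above a score threshold,
    sorted by confidence (highest first)."""
    intents = diagnostic_info.get("Alternative Matched Intents", [])
    matches = [i for i in intents
               if i.get("Active") and i.get("Score", 0) >= min_score]
    return sorted(matches, key=lambda i: i["Score"], reverse=True)

diagnostic_info = {
    "Alternative Matched Intents": [
        {"Id": "84383366-215f-40a3-9ba6-464238f0c2aa", "Score": 0.5985,
         "DisplayName": "Intent2", "Type": "NLU", "Active": True},
        {"Id": "00000000-0000-0000-0000-000000000000", "Score": 0.41,
         "DisplayName": "Intent1", "Type": "NLU", "Active": False},
    ]
}

for intent in active_alternatives(diagnostic_info):
    print(intent["DisplayName"], intent["Score"])
```

Only the active, sufficiently confident alternatives survive the filter, so inactive intents referenced elsewhere in the flow do not show up.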

Related

How to search Zoho custom module using API v2?

I am using an access token with ZohoCRM.modules.custom.READ.
When I send a GET request to https://www.zohoapis.com/crm/v2/Custom/search, I get the following error.
{
"code": "INVALID_MODULE",
"details": {},
"message": "the module name given seems to be invalid",
"status": "error"
}
What am I doing wrong and how do I define the module I am trying to pull data from (it is called CustomModule2)?
Figured it out...
First, needed to go to https://crm.zoho.com/crm/{org_id}/settings/modules to find the actual name of CustomModule2 which is Adresses livraison.
Then, needed to go to https://crm.zoho.com/crm/{org_id}/settings/api/modules to find the API name for Adresses livraison which is Adresses_livraison.
Finally, needed to go to https://crm.zoho.com/crm/{org_id}/settings/api/modules/CustomModule2?step=FieldsList to find the API name of the field I wanted to use as a search criterion (it was Compte).
The final query using httpie is as follows.
http GET https://www.zohoapis.com/crm/v2/Adresses_livraison/search \
Authorization:"Zoho-oauthtoken {access_token}" \
criteria=="(Compte:equals:{account_id})"
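The same request can be sketched in Python; the module and field API names are the ones discovered above, and the helper name is my own. Only the URL/parameter construction is shown concretely:

```python
# Sketch: building the Zoho CRM v2 search request for a custom module.
import urllib.parse

BASE = "https://www.zohoapis.com/crm/v2"

def build_search_request(module_api_name, field, value, access_token):
    """Return (url, params, headers) for a criteria search."""
    url = "%s/%s/search" % (BASE, module_api_name)
    params = {"criteria": "(%s:equals:%s)" % (field, value)}
    headers = {"Authorization": "Zoho-oauthtoken %s" % access_token}
    return url, params, headers

url, params, headers = build_search_request(
    "Adresses_livraison", "Compte", "1234567890", "abc123")
print(url + "?" + urllib.parse.urlencode(params))
# Then send it with e.g. requests.get(url, params=params, headers=headers)
```

The key point is that the URL path uses the module's API name (Adresses_livraison), not the display name or the internal CustomModule2 label.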
Zoho is up there in the most awkward developer experiences I have encountered.
Just an update for anyone still looking into this: the links aren't always the same, depending on whether it's a sandbox or not, etc. So the steps are:
Go to your CRM page/dashboard and click on Settings (the cogwheel icon) on the top right, next to your account image.
A number of items open in panel boxes. In the panel named "Developer Space", choose APIs.
This opens a set of tabs and sub-tabs and a Dashboard (as shown in the image below). The "Dashboard" sub-tab is selected by default; all you have to do is switch to the "API Names" sub-tab.

Watson Assistant Slots: how to allow the user to re-enter data using non-dictionary entities?

I want to capture the following input from the user and am using these entities:
First and last names: use @sys-person (I extract first and last names later using a cloud action).
Email address: use a pattern entity named @contactEmail with value "email" and pattern \b[A-Za-z0-9._%+-]+@([A-Za-z0-9-]+\.)+[A-Za-z]{2,}\b
Mobile number: use a pattern entity named @contactPhone with value "mobileNumber" and pattern ^[1-9][0-9]{7}[0-9]+$
Slots
I created a node with slots:
The setup is as follows:
Check for: @sys-person.literal. Save it as $person. If not present: "Type your name"
Check for: @contactEmail.literal. Save it as $email. If not present: "Type your email"
Check for: @contactPhone.literal. Save it as $contactPhone. If not present: "Type your mobile"
This all works perfectly and I get the name, email address and Phone number.
The Challenge
But I want the user to be able to confirm that the details are correct.
So I have an intent called #response_yes which is activated by things like "yes", "yep", etc.
And I have a fourth slot that checks for #response_yes.
Check for #response_yes, store the value in $confirmed, if not present ask "Got $person, $contactEmail, $contactPhone is that correct?"
If #response_yes is not found, then it displays the values already typed and asks if they are correct and waits for user input.
After the user has responded, if the #response_yes intent is still not found, then:
Respond with, "Let's start again".
Also, we need to clear the values already typed.
Here's where it goes wrong
When I try the chatbot, the node collects the input correctly and displays the values correctly and asks if they are correct. I type "no", the #response_no intent is correctly divined and I would expect the prompt for the first slot to show again, but it doesn't.
No matter what I type at that point, the assistant bypasses the top three slots and drops to the fourth one again. It's as if I need to clear the entities, not the variables.
What am I doing wrong? How do I make the top slot work again?!
Aaaaargh.
For anyone who's interested: I simply didn't reset the values to null.
I guess I assumed that leaving the value blank was the same as null. Not so. Once I set the values to null explicitly, the system worked perfectly.
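For reference, the reset can be done in the JSON editor of the "Let's start again" response by setting each context variable explicitly to null. A sketch using the variable names from the question:

```json
{
  "context": {
    "person": null,
    "email": null,
    "contactPhone": null,
    "confirmed": null
  }
}
```

With the variables null (rather than merely blank), the slot prompts fire again on the next pass through the node.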

How to build a conversation handler with CallbackQueryHandler in it

NB: I use version 12 of python-telegram-bot package.
I would like to build a conversation handler: when a user chooses the /charts command, the bot shows them an inline list of choices and, depending on their choice, returns a chart.
charts_handler = ConversationHandler(
    entry_points=[CommandHandler('chart', chart_start)],
    states={
        ChartChoices.choosing_user: [
            CallbackQueryHandler(individual_chart, pass_user_data=True)
        ],
    },
    fallbacks=[done_handler],
)
But if I do not set per_message=False then it results in this error:
If 'per_message=False', 'CallbackQueryHandler' will not be tracked for every message.
If I do set per_message=True, then it results in the error:
If 'per_message=True', all entry points and state handlers
must be 'CallbackQueryHandler', since no other handlers have a message context.
So it seems that the only way to build a conversation handler with CallbackQueryHandler (in other words, to show an inline keyboard during chat) is to make all handlers CallbackQueryHandler. Is that correct?
First of all, this is not an error; it's a warning you can safely ignore: If 'per_message=False', 'CallbackQueryHandler' will not be tracked for every message.
Second, you don't need a ConversationHandler for the described use case. Example user interaction:
User: /charts
Bot:
Here is the list of available charts:
Bar chart 1 /chart_1
Bar chart 2 /chart_2
Pie chart /chart_3
And you can implement this kind of flow with a simple MessageHandler and Filters.
Docs: MessageHandler
Docs: Filters
Namely you can use the regex filter.
ConversationHandler is useful when you need a multi-step interaction with the user (like filling out a long form step by step). If you can identify user requests by other means, such as generated commands, inline buttons, or message text, prefer doing it that way.
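A minimal sketch of the generated-commands approach. The menu text and the /chart_N parsing are plain Python (helper names like parse_chart_command are my own); the python-telegram-bot v12 handler wiring is indicated in comments:

```python
import re

# Matches generated commands like /chart_1, /chart_2, ...
CHART_COMMAND = re.compile(r'^/chart_(\d+)$')

def chart_menu_text(chart_names):
    """Build the reply listing one generated command per chart."""
    lines = ["Here is the list of available charts:"]
    for i, name in enumerate(chart_names, start=1):
        lines.append("%s /chart_%d" % (name, i))
    return "\n".join(lines)

def parse_chart_command(text):
    """Return the chart number from a /chart_N message, or None."""
    match = CHART_COMMAND.match(text or "")
    return int(match.group(1)) if match else None

# Wiring (python-telegram-bot v12), sketched:
# dispatcher.add_handler(CommandHandler('charts', charts_callback))
# dispatcher.add_handler(MessageHandler(Filters.regex(CHART_COMMAND), send_chart))

print(chart_menu_text(["Bar chart 1", "Bar chart 2", "Pie chart"]))
```

The /charts callback replies with chart_menu_text(...), and the regex-filtered MessageHandler picks up whichever generated command the user taps, with no conversation state to track.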

Unable to understand correct use of None intent in LUIS

I have an app in LUIS with one intent, "Help" (apart from None), and when I test it with utterances that my intent does not cover (e.g. "Hi man"), LUIS resolves to the "Help" intent... I have no utterances in the "None" intent...
What should I do? Should I add all the utterances I don't want to match the "Help" intent to "None"?
Would I need to know everything a user could ask my bot that is not related to "Help"?
To me, that makes no sense at all... and yet I think that is exactly how LUIS works...
Intents are the actions you define; None is a predefined intent that comes with every LUIS model you create. Coming back to your problem: you have only defined one intent, i.e. "Help", so whenever LUIS gets any query it will return the highest-scoring intent, i.e. "Help". Whenever you create an intent, make sure to save at least 5-6 related utterances so that LUIS can generate a pattern from them; the more related utterances you define, the better the accuracy of the results.
If you want LUIS to respond to "Hi man", create a new intent 'greet', save some utterances, and let LUIS do the remaining work. Lastly, about the None intent: if a user inputs a string like 'asdsafdasdsfdsf', your bot should be able to handle it and respond accordingly, e.g. "'asdsafdasdsfdsf' is irrelevant to me". In simple terms, any irregular action that the user wants the bot to perform comes under the None intent. I hope this helps.
You can check the score of the LUIS intent and then send the default response from your code accordingly. Utterances that have been configured will score higher. Also, a LUIS app should be balanced in terms of the utterances configured, as there is no defined way to point utterances at the None intent. Please check this link for best practices:
https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-best-practices. Also, to highlight: LUIS does not work by keyword matching against the utterances you have configured; it works on the data you add to LUIS in the respective intents.
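The score check described above can be sketched as plain response handling; the threshold value and helper name are illustrative, not a LUIS-mandated setting:

```python
# Sketch: applying a confidence threshold to a LUIS response so that
# low-scoring "Help" matches fall back to a default reply.
CONFIDENCE_THRESHOLD = 0.5
DEFAULT_REPLY = "Sorry, I didn't understand that."

def pick_reply(luis_response, replies):
    """Map the top-scoring intent to a reply, or fall back."""
    top = luis_response.get("topScoringIntent", {})
    if top.get("score", 0) < CONFIDENCE_THRESHOLD:
        return DEFAULT_REPLY
    return replies.get(top.get("intent"), DEFAULT_REPLY)

response = {"query": "Hi man",
            "topScoringIntent": {"intent": "Help", "score": 0.23}}
print(pick_reply(response, {"Help": "Here is how I can help..."}))
```

Tune the threshold against real traffic; a one-intent app will route almost everything to that intent, so the threshold is what keeps "Hi man" out of "Help".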

How to setup Microsoft LUIS to detect composed names (dash separated)

I want to detect a person's name in LUIS, including a person with a composed name (e.g. Mary-Anne).
Setup:
a simple custom entity for names
a pattern feature for dash separated words: ^\w*-\w*$
a feature Phrase List to try and get at least some examples working: [marc-andre, marie-anne, jean-marc]
I trained and published (on staging), and yet it never detects the whole composed name; instead it returns only the first part as the entity (e.g. the entity is "marc" instead of "marc-andre").
Do you know how to configure LUIS to properly detect my composed name entity?
Update taking Denise's answer into account
In the LUIS.ai UI, I didn't realize that while labelling an utterance it is possible to click more than once to select multiple words when specifying an entity.
I was able to configure a simple custom entity like you describe. I posted the JSON that you can import into LUIS here.
Without seeing the JSON for your LUIS app it's hard to tell why it fails to recognize the dash-separated names - feel free to post the JSON for your LUIS app here. Sometimes a LUIS app won't recognize an entity due to a lack of labeling. A key part of getting LUIS to recognize an entity is labeling enough examples. A pattern feature is just a hint to LUIS -- you still need to define example utterances that have the labeled entity. For example, if you have defined an intent called MyNameIs and want to recognize the Name entity within them, you'll want to add a variety of utterances to the MyNameIs intent that contain dash-separated names, and label each name with the entity.
When I added the pattern feature I used + to indicate "one or more" in the regex instead of *. However, this difference shouldn't break your pattern feature.
Another problem that can happen with hyphens is in the JSON that LUIS returns. When you inspect the JSON result from LUIS, you can see how the Name entity is identified. Notice that in the entity field LUIS inserts spaces around the hyphen, but the startIndex and endIndex fields identify the indexes of the entity in the original utterance. So if you have code that parses the entity field without using startIndex and endIndex on the query field, the behavior might not be what you expect.
{
"query": "my name is anne-marie",
"topScoringIntent": {
"intent": "MyNameIs",
"score": 0.9912877
},
"entities": [
{
"entity": "anne - marie",
"type": "Name",
"startIndex": 11,
"endIndex": 20,
"score": 0.8978088
}
]
}
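Building on that point, a short sketch (over the sample response above; the helper name is my own) that recovers the original hyphenated text via startIndex/endIndex rather than the space-padded entity field. Note that endIndex is inclusive in this response, hence the + 1:

```python
# Sketch: recovering the original hyphenated text using startIndex /
# endIndex instead of the space-padded "entity" field.
luis_result = {
    "query": "my name is anne-marie",
    "entities": [
        {"entity": "anne - marie", "type": "Name",
         "startIndex": 11, "endIndex": 20}
    ],
}

def entity_text(result, entity):
    """Slice the original query using the entity's index fields."""
    return result["query"][entity["startIndex"]:entity["endIndex"] + 1]

print(entity_text(luis_result, luis_result["entities"][0]))  # anne-marie
```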
