When generating a Dispatch model using the CLI, the entities from the LUIS app are not passed in as references. This drastically affects the accuracy of the dispatch app.
For example, in the utterance "My [iPhone] isn't working", iPhone is attached to a list entity named CellPhoneType. There are three items in the list: iPhone, Samsung, Smartphone.
In the Bot Emulator, using Dispatch, if I write "my iPhone isn't working", the dispatch model passes it to LUIS, as it should. However, if I write "my smartphone isn't working", the dispatch tool sends it over to QnA Maker.
I checked the model, and the entities are not passed in as references. I also tested with simple entities; they do not work either.
I have the most recent version of the CLI installed.
Is this normal, or is this a bug? Is there a workaround to fix this?
So there are a couple of things to address here: how you've built your LUIS model, and what to expect from dispatch. Skip down to 2.) if you're a reader who already has entities working beautifully in child LUIS models. @AlexandreViegas, read point 1.) for help building your LUIS model so that dispatch detects the intent properly.
1. Use a Simple Entity + Phrase List to take Advantage of LUIS's machine learning--not List Entity
Right now, using a list entity is not the best way to go here, and it's not how list entities are intended to be used. Instead, list entities are for terms that have multiple ways of referring to the same thing.
Examples of When You Would Want to Use a List Entity
For example, California, Cali, CA, and The Golden State are all terms that refer to the same thing (a state). You can create a "States" list entity and include all 50 U.S. states and their nicknames. Since this is a closed, explicit list, there is no machine learning when you use a list entity--LUIS will only detect the "States" list entity if there's an exact text match.
Another example of when you would want to use a list entity would be, say, "Departments" for a school. You could have "chemistry", "CHEM142", "chem", etc. all refer to that specific department, and do the same for the rest of the departments in the school.
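To make the structure concrete, here is a rough sketch of what such a closed list boils down to (illustrative only -- the property names below are not the exact LUIS export format):

// Illustrative only: a list entity is a closed set of canonical values,
// each with explicit synonyms, and LUIS matches them by exact text only.
const statesListEntity = {
  name: "States",
  values: [
    { canonicalForm: "California", synonyms: ["Cali", "CA", "The Golden State"] },
    { canonicalForm: "Texas", synonyms: ["TX", "The Lone Star State"] },
    // ...one entry per state
  ],
};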
Why you want to use a Simple Entity and add a Phrase List
You can refer to this other StackOverflow answer I wrote, regarding how to create a simple entity and boost the signal of the entity using a phrase list.
To avoid completely duplicating the answer given in the link above: in essence, you want to use a simple entity so LUIS can properly predict terms as the CellPhoneType entity, even when you did not explicitly include them in your model.
For example, you could have a Phone intent with utterances labeling various words as the CellPhoneType entity.
When I go to the Test panel, I type in "sunflower" and "moonstone" as made-up mobile phones (maybe some phone company in the future will create phones with these names as their models):
Above you can see LUIS correctly predicts Phone intent and correctly extracts sunflower and moonstone as CellPhoneType entities.
However, if I enter brand names of mobiles that don't exist in the English language--for example, BlackBerry's "Z3" or T-Mobile's "G2X"--LUIS cannot detect them with our model as it is right now. (See the 2 most recent utterances.)
Above you can see utterances "i'd like to order a z3" and "my g2x is broken" do not properly predict as Phone intent, nor do z3 or g2x get detected as CellPhoneType entity. This is where phrase lists come in. As specified in the docs, phrase lists are good for boosting the signal of what a cell phone type may look like, as well as adding proprietary or foreign words to your LUIS model, such as the "made-up" words of many cell phone models. Again, refer to the StackOverflow answer I linked to, if you need guidance on how to create a phrase list.
After adding different names of cell phone models to phrase list
2. Query the endpoint of the LUIS model that was created by dispatch directly
Clarification:
- When you add a child LUIS model to dispatch, even if that child LUIS model has entities in it, they will not show up in the model of the parent LUIS app created by dispatch.
- The exception to the above bullet is if you labelled an entity in a pattern.
- The reason entities do not need to be labelled in the parent LUIS model is that when you call the endpoint of the parent app, it makes a sort of shared call under the hood, so it doesn't have to ping LUIS twice.
- You see the entities labelled from the child LUIS model in the connectedServiceResult property.
How to extract entities from child LUIS model, using your parent dispatch LUIS app
1. Make sure to publish both the child LUIS app and the parent dispatch app.
2. In your parent dispatch-created LUIS app, go to Manage > Keys and Endpoints, then click "Endpoint" to open a browser tab where you can query the parent app in the URL after q=.
3. Type your utterances into the URL after q= to see the entities and intents extracted from the child LUIS model under connectedServiceResult.
https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx?verbose=true&timezoneOffset=-360&subscription-key=b7xxxxxxxxxxxxxxxxxxxxxxxxxxxx67&q=my%20iphone%20is%20broken
{
  "query": "my iphone is broken",
  "topScoringIntent": {
    "intent": "l_Reminders",
    "score": 0.99594605
  },
  "intents": [
    {
      "intent": "l_Reminders",
      "score": 0.99594605
    },
    {
      "intent": "None",
      "score": 0.002990469
    }
  ],
  "entities": [],
  "connectedServiceResult": {
    "query": "my iphone is broken",
    "topScoringIntent": {
      "intent": "Phone",
      "score": 0.9658808
    },
    "intents": [
      {
        "intent": "Phone",
        "score": 0.9658808
      },
      {
        "intent": "Calendar.Add",
        "score": 0.0142210266
      },
      {
        "intent": "Calendar.Find",
        "score": 0.0112086516
      },
      {
        "intent": "None",
        "score": 0.009813501
      },
      {
        "intent": "Email",
        "score": 0.0025855056
      }
    ],
    "entities": [
      {
        "entity": "iphone",
        "type": "CellPhoneType",
        "startIndex": 3,
        "endIndex": 8,
        "score": 0.998970151
      }
    ]
  }
}
Above you can see that the parent LUIS app created from dispatch properly identifies iphone from the utterance my iphone is broken as a CellPhoneType entity.
Note: you will not see results from the child LUIS model in the Test panel of the parent dispatch app, because the UI does not show connectedServiceResult.
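If you'd prefer to hit the endpoint from code rather than the browser, a minimal sketch like the following should work (assuming Node 18+ with a global fetch; the region, app ID, and subscription key are placeholders):

// Query the parent dispatch LUIS app and read entities surfaced from the child model.
const endpoint =
  "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<dispatch-app-id>";
const params = new URLSearchParams({
  "verbose": "true",
  "subscription-key": "<subscription-key>",
  "q": "my iphone is broken",
});

async function queryDispatch(): Promise<void> {
  const response = await fetch(`${endpoint}?${params}`);
  const result = await response.json();

  // The top-level intent tells you which child service dispatch chose (e.g. "l_Reminders").
  console.log(result.topScoringIntent);

  // Entities labelled in the child LUIS model appear under connectedServiceResult,
  // not in the top-level "entities" array.
  console.log(result.connectedServiceResult?.entities);
}

queryDispatch();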
Related
I am having an issue with a Teams bot Adaptive Card Action.Submit: when testing in the Bot Emulator it works as expected, but when the bot is published and I perform the same action in a Teams conversation, the action submit returns undefined.
I have tried both Adaptive Cards version 1.0 and version 1.3; the issue is the same in both cases.
Anyone know a solution for this?
The other answer provides some useful background, but I think it's only applicable to specific scenarios. The issue you're having here is that you're sending a raw string value in the data property ("My Action"), which Teams doesn't like but the emulator doesn't mind at all. You can see more about that described in this Microsoft blog post. You should instead be sending an actual object. I describe it in more detail in this answer: QnA Maker Bot AdaptiveCards: how to add Data object in C#.
What @billoverton is referring to is a specific use case where you want the button behaviour to be, for instance, actually putting a message into the text stream as a response, as well as sending it to your bot. There are various options for these specific use cases, as described here, but they are only needed if you want that specific behaviour. If you're happy for the user to click the button and for that to simply send the message to your bot, you don't need the "msteams" section in the data payload.
A "standard" action.submit won't work in Teams. You need to add an msteams object under the data attribute. I'm guessing you are using a standard definition like
{
  "type": "ActionSet",
  "actions": [
    {
      "type": "Action.Submit",
      "title": "My Action",
      "data": "My Action"
    }
  ]
}
For Teams, it instead needs to look like this:
{
  "type": "ActionSet",
  "actions": [
    {
      "type": "Action.Submit",
      "title": "My Action",
      "data": {
        "msteams": {
          "type": "imBack",
          "value": "My Action"
        }
      }
    }
  ]
}
Of course, when you do this it won't work in the emulator or any non-Teams channel. I've seen some people update their onMessage handler to account for this by extracting the value so a single card definition can work for both channels, but it made the selection a backchannel event for one of the channels (can't remember if it was web or Teams), which was not the experience I was looking for. So instead, I just conditionally display a Teams or non-Teams card based on the channel. For example:
let welcomeCard;
if (context.activity.channelId === 'msteams') {
    welcomeCard = CardHelper.GetMenuCardTeams();
} else {
    welcomeCard = CardHelper.GetMenuCard();
}
If you are not using a helper to generate your cards you can define them explicitly here, though I do recommend using a helper to keep things clean. This does mean that you need to maintain two versions of your cards, but for me it was worth it to ensure a consistent experience across channels.
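For reference, if you do want a single card definition to serve both channels, a rough sketch of the onMessage approach mentioned above might look like this (untested; it assumes the Node.js Bot Framework SDK, where an Action.Submit arrives in activity.value with an empty activity.text in non-Teams channels, while the msteams imBack payload arrives as ordinary text in Teams):

import { ActivityHandler, TurnContext } from "botbuilder";

class MenuBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context: TurnContext, next) => {
      // Teams (imBack) sends the selection as text; other channels deliver the
      // Action.Submit data in activity.value, so fall back to it when text is empty.
      const choice = context.activity.text || JSON.stringify(context.activity.value);
      await context.sendActivity(`You selected: ${choice}`);
      await next();
    });
  }
}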
When developing a message extension for Microsoft Teams, is it possible to retrieve the ID of a team where the user is invoking the message extension command without first adding the bot to that team?
I can do this when the bot is added to the team manually based on TeamsInfo.getTeamDetails(), however, I don't really need (or want) to add the bot to the team for my goal. All I need is the channel ID (which is available from the context/conversation) and the ID of the underlying team. Retrieving the team details without the bot being added beforehand errors with "The bot is not part of the conversation roster".
Have a look at the ChannelData property on the Activity class; that should give you what you need. You can read more about it here.
Here's an example of the underlying payload, for interest:
"channelData": { "eventType": "channelCreated", "tenant": { "id": "72f988bf-86f1-41af-91ab-2d7cd011db47" }, "channel": { "id": "19:693ecdb923ac4458a5c23661b505fc84#thread.skype", "name": "My New Channel" }, "team": { "id": "19:693ecdb923ac4458a5c23661b505fc84#thread.skype" } }
We had the same trouble with the Teams documentation and APIs.
However, for that specific case we found a solution that may work for you. I will say it is more a hack than a solution, but it worked for my use case. It will only work on messages with attachments.
When the context is received for a message, the message contains an attachments array. Each attachment object has a contentUrl. Inside that URL is the mailNickname for the group. That mailNickname field represents a readable unique ID. The format is something like sites/{mailNickname}/General.
From there you can retrieve the field and use the Groups Graph API.
With a query like this one:
https://graph.microsoft.com/v1.0/groups?$filter=startswith(mailNickname, 'themailNicknameFromcontenturl')
You will get the group's full information, including the aadGroupId.
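A rough sketch of that hack (treat the URL parsing as illustrative, since the exact shape of contentUrl can vary, and acquiring the Graph token is out of scope here):

// Extract the group's mailNickname from an attachment's contentUrl
// (assumes a ".../sites/{mailNickname}/..." segment), then look up the group via Graph.
function extractMailNickname(contentUrl: string): string | undefined {
  const match = contentUrl.match(/\/sites\/([^/]+)\//);
  return match?.[1];
}

async function findGroupId(contentUrl: string, graphToken: string): Promise<string | undefined> {
  const mailNickname = extractMailNickname(contentUrl);
  if (!mailNickname) return undefined;

  const url =
    "https://graph.microsoft.com/v1.0/groups" +
    `?$filter=startswith(mailNickname, '${mailNickname}')`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${graphToken}` },
  });
  const body = await response.json();
  // The AAD group id is the "id" property on the matching group.
  return body.value?.[0]?.id;
}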
In general, it is a nightmare to work with the Teams documentation. I hope this hack helps you.
In LUIS, on the intent page, I have 5 utterances that use the same entity in each. Under the utterances is the list of entities used in the intent. As you can see from the attached image, it lists the entity I have used and the correct count (5), but it also lists another entity that I have not used and says there are 2.
I have refreshed the page and trained and it is still showing the incorrect number of entities. I am concerned this error will affect the results. Is it a known issue, and can it be fixed?
The suggestion to export and reimport the app into a new app did not solve the problem, but when I looked at the export file I figured it out. Actually both of the entities were there in some of the utterances, just hidden one on top of the other, so they didn't show up in the interface.
example utterance:
{
  "text": "how much will i be charged for my clonazepam",
  "intent": "PHARMACY_DRUG_INQUIRY",
  "entities": [
    {
      "entity": "Prescription",
      "startPos": 34,
      "endPos": 43
    },
    {
      "entity": "Procedure",
      "startPos": 34,
      "endPos": 43
    }
  ]
}
This looks more like a portal glitch to me. Have you tried the following?
1) Logging out of the portal and logging in again to check?
2) If yes, please try to export and reimport the app definition in a new app.
Let us know your observations.
I'm currently working on a project in which we perform "Nearby" queries for places using keywords, and then we make follow-up "Detail" requests to obtain more information about specific places of interest.
With Google's new pricing model in the works, the documentation warns about the cost of the Nearby search, but the warning seems to imply that the follow-up detail request will no longer be necessary because our original search should give us everything we need:
By default, when a user selects a place, Nearby Search returns all of the available data fields for the selected place, and you will be billed accordingly. There is no way to constrain Nearby Search requests to only return specific fields. To keep from requesting (and paying for) data that you don't need, use a Find Place request instead.
However, I'm not seeing this. When I run a sample request, the results from my Nearby request contain only minimal data related to the places Google finds. To get details, I still have to do a follow-up detail request.
Does anyone know if there's something I may be overlooking? I'm including my request URL (sans API key).
https://maps.googleapis.com/maps/api/place/nearbysearch/json?key=xxxxxxxxxx&location=30.7329,-88.081987&radius=5000&keyword=insurance
And this is an example of one of the results I received:
{
  "geometry": {
    "location": {
      "lat": 30.69254,
      "lng": -88.0443999
    },
    "viewport": {
      "northeast": {
        "lat": 30.69387672989272,
        "lng": -88.04309162010728
      },
      "southwest": {
        "lat": 30.69117707010728,
        "lng": -88.04579127989273
      }
    }
  },
  "icon": "https://maps.gstatic.com/mapfiles/place_api/icons/generic_business-71.png",
  "id": "53744cdc03f8a9726593a767424b14f7f8f86049",
  "name": "Ann M Hartwell - Aflac Insurance Agent",
  "place_id": "ChIJj29KxNZPmogRJovoXjMDpQI",
  "plus_code": {
    "compound_code": "MXV4+26 Mobile, Alabama",
    "global_code": "862HMXV4+26"
  },
  "reference": "CmRbAAAAcHM1P7KgNiZgVOm1pWojLto9Bqx96h2BkA-IyfN5oAz1-OICsRXiZOgwmwHb-eX7z679eFjpzPzey0brgect1UMsAiyawKpb5NLlgr_Pk8wBJpagRcKQF1VSvEm7Nq6CEhCfR0pM5wiiwpqAE1eE6eCRGhQPJfQWcWllOVQ5e1yVpZYbCsD01w",
  "scope": "GOOGLE",
  "types": [
    "insurance_agency",
    "point_of_interest",
    "establishment"
  ],
  "vicinity": "70 N Joachim St, Mobile"
}
I thought about deleting this question, but I guess I'll leave it up in case anyone else is confused like I was.
It turns out the extra detail fields I was looking for in the Nearby Search results were there...sort of.
Google's new pricing model categorizes place data fields into three tiers: Basic, Contact, and Atmosphere (Basic data is free, but the other two add to the cost).
As part of these changes, Place API calls have been expanded to allow users to specify the data fields they want so that they don't have to pay for that extra data if they don't need it.
The Nearby Search query, as per the note in the question, includes all the data fields available, and doesn't support a parameter for controlling the data -- it's always going to return data that falls into the [Basic + Contact + Atmosphere] bucket.
So far, that's all well and good.
Where things became confusing to me, though, is the specifics of what is included in the different data tiers. I skimmed through these notes several times before I noticed the contents were different.
This is how the fields break down with the Places details request:
Basic
The Basic category includes the following fields: address_component, adr_address, alt_id, formatted_address, geometry, icon, id, name, permanently_closed, photo, place_id, plus_code, scope, type, url, utc_offset, vicinity
Contact
The Contact category includes the following fields: formatted_phone_number, international_phone_number, opening_hours, website
Atmosphere
The Atmosphere category includes the following fields: price_level, rating, review
And this is how it looks for the Places search request:
Basic
The Basic category includes the following fields: formatted_address, geometry, icon, id, name, permanently_closed, photos, place_id, plus_code, scope, types
Contact
The Contact category includes the following field: opening_hours (Place Search returns only open_now; use a Place Details request to get the full opening_hours results).
Atmosphere
The Atmosphere category includes the following fields: price_level, rating
I haven't found documentation for it specifically, but the results from a Nearby Search request seem close (but not identical) to those of the Place Search (including Contact and Atmosphere data).
I had originally thought that, because Nearby Search results now include Contact and Atmosphere data (when available), they would contain all the fields itemized as Contact and Atmosphere data in the Place Details documentation, but that's not the case.
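As a practical consequence, when you do need the extra data it's worth narrowing the follow-up Place Details request with the fields parameter so you only pay for what you use. A small sketch (the key is a placeholder and the field list is just an example):

// Build a Place Details request that only asks for (and is billed for) specific fields.
const detailParams = new URLSearchParams({
  place_id: "ChIJj29KxNZPmogRJovoXjMDpQI",
  fields: "name,formatted_phone_number,website,opening_hours",
  key: "<your-api-key>",
});

async function getDetails(): Promise<void> {
  const response = await fetch(
    `https://maps.googleapis.com/maps/api/place/details/json?${detailParams}`
  );
  const { result } = await response.json();
  console.log(result);
}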
I want to detect a person's name in LUIS, including a person with a composed name (e.g. Mary-Anne).
Setup:
a simple custom entity for names
a pattern feature for dash separated words: ^\w*-\w*$
a feature Phrase List to try and get at least some examples working: [marc-andre, marie-anne, jean-marc]
I trained and published (on staging) and yet it never detects the whole composed name, but instead only returns the first part as the entity (e.g. the entity is "marc" instead of "marc-andre").
Do you know how to configure LUIS to properly detect my composed name entity?
Update taking Denise's answer into account
In the LUIS.ai UI, I didn't realize that, while labelling an utterance, it is possible to click more than once to select multiple words when specifying an entity.
I was able to configure a simple custom entity like you describe. I posted the JSON that you can import into LUIS here.
Without seeing the JSON for your LUIS app it's hard to tell why it fails to recognize the dash-separated names - feel free to post the JSON for your LUIS app here. Sometimes a LUIS app won't recognize an entity due to a lack of labeling. A key part of getting LUIS to recognize an entity is labeling enough examples. A pattern feature is just a hint to LUIS -- you still need to define example utterances that have the labeled entity. For example, if you have defined an intent called MyNameIs and want to recognize the Name entity within them, you'll want to add a variety of utterances to the MyNameIs intent that contain dash-separated names, and label each name with the entity.
When I added the pattern feature I used + to indicate "one or more" in the regex instead of *. However, this difference shouldn't break your pattern feature.
Another problem that can happen with hyphens is in the JSON that LUIS returns. When you inspect the JSON result from LUIS you can see how the Name entity is identified. Notice that in the entity field, LUIS inserts spaces around the hyphen, but the startIndex and endIndex fields identify the indexes of the entity in the original utterance. So if you have code that parses the entity field without using startIndex and endIndex on the query field, the behavior might not be what you expect.
{
  "query": "my name is anne-marie",
  "topScoringIntent": {
    "intent": "MyNameIs",
    "score": 0.9912877
  },
  "entities": [
    {
      "entity": "anne - marie",
      "type": "Name",
      "startIndex": 11,
      "endIndex": 20,
      "score": 0.8978088
    }
  ]
}
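In other words, if you need the text exactly as the user typed it, slice the original query with the indexes instead of relying on the entity field. A small sketch, assuming the JSON shape above:

// Recover the entity text as the user typed it ("anne-marie") rather than the
// normalized "anne - marie" that LUIS puts in the entity field.
interface LuisEntity {
  entity: string;
  type: string;
  startIndex: number;
  endIndex: number;
  score: number;
}

function rawEntityText(query: string, e: LuisEntity): string {
  // endIndex is inclusive in the LUIS v2 response, hence the +1.
  return query.substring(e.startIndex, e.endIndex + 1);
}

// rawEntityText("my name is anne-marie", entity) === "anne-marie"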