Microsoft LUIS: Prebuilt entities and Intent - azure-language-understanding

I have a prebuilt entity, temperature. How can I match it to my intent AskTemperature? Inside AskTemperature I can't add the prebuilt entity, so how do I make the temperature entity belong to the AskTemperature intent?
Thank you!

If you provide an example of the labeled utterances, the community may be better equipped to answer your question. That said, once you train your model, it should automatically assign the prebuilt entity to the recognized tokens (words, numbers, etc.).
If this recognition doesn't occur after training, you may need to train with more straightforward utterances (e.g. "Is the temperature in Redmond right now 16 celsius?").
If none of this works, posting relevant portions of your LUIS model may help the SO community assist you.
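To illustrate the point above: after training, prebuilt entities come back app-wide in the prediction response rather than being attached to one intent. The sketch below is a hand-written sample in the rough shape of a LUIS v3 prediction response; the exact resolution fields for the temperature entity are an assumption, not captured from a real app.

```python
# Illustrative LUIS v3 prediction response (shape only; values are made up).
# Prebuilt entities such as "temperature" appear under prediction.entities
# for the whole app -- they are not added to a specific intent.
sample_response = {
    "query": "is the temperature in Redmond 16 celsius right now",
    "prediction": {
        "topIntent": "AskTemperature",
        "intents": {"AskTemperature": {"score": 0.97}},
        "entities": {
            "temperature": [{"number": 16, "units": "Celsius"}],
        },
    },
}

def extract_temperatures(response):
    """Pull any prebuilt temperature entities out of a prediction response."""
    return response["prediction"]["entities"].get("temperature", [])

temps = extract_temperatures(sample_response)
print(temps)  # [{'number': 16, 'units': 'Celsius'}]
```

In other words, your client code pairs the top intent with whatever prebuilt entities were extracted; there is no per-intent entity attachment to configure.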

After you train your model/intents, the prebuilt entities should be assigned to those utterances.
I have this utterance: "are there rooms available on 12/21/2016?"
When I tried to assign 12/21/2016 to the "datetimeV2" entity directly, I couldn't find that prebuilt entity; but after I trained the model, it showed up automatically.
Hope this helps!
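For reference, here is roughly what consuming that automatically-assigned datetimeV2 entity could look like. The response dict is hand-written to mirror the documented v3 shape; the exact resolution fields ("type", nested "values", "timex") are an assumption here.

```python
# Sketch of reading a datetimeV2 prebuilt entity from a LUIS v3 prediction.
# The sample payload below is illustrative, not a captured response.
response = {
    "query": "are there rooms available on 12/21/2016?",
    "prediction": {
        "topIntent": "BookRoom",
        "entities": {
            "datetimeV2": [
                {"type": "date", "values": [{"timex": "2016-12-21"}]}
            ]
        },
    },
}

def first_date(resp):
    """Return the first resolved TIMEX date string, or None if absent."""
    for ent in resp["prediction"]["entities"].get("datetimeV2", []):
        for value in ent.get("values", []):
            return value.get("timex")
    return None

print(first_date(response))  # 2016-12-21
```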

Related

Categorizing a patient using FHIR?

We want to categorize patients in our system. For example, in organ transplant, we want to "tag" a Patient FHIR resource as a donor or recipient (ignoring the scenario where a living donor can later become a recipient) since these types of "patients" are stored separately in the back end system. So when someone does a PUT HTTP request with a patient resource, we need to know what kind of patient it is before we can do the update in the database.
It's hard to determine the best way to approach this. Using the meta area seems promising, combined with the UsageContextType of "focus" perhaps, taking on values of "donor" or "recipient".
It's not clear though how to actually code something like this in a Patient resource (JSON for us). Any guidance/examples would be very much appreciated.
Sadly, I think the FHIR folks are going down the same path they took with the V3 RIM: lots of impenetrable standard definitions, but very few practical examples of how to use these FHIR standards in the real world. But that is another issue.
I don't understand ignoring the scenario where someone can be both donor and recipient. However, if you needed to differentiate, you could add an extension. You could also use Patient.meta.tag.
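A minimal sketch of the Patient.meta.tag approach, built as a Python dict in the shape of a FHIR Patient JSON resource. The coding "system" URL below is a made-up local placeholder you would replace with your own code system.

```python
# Sketch: tagging a Patient resource as a transplant donor via meta.tag.
# The "system" value is a hypothetical local CodeSystem URL, not a standard one.
PATIENT_ROLE_SYSTEM = "http://example.org/fhir/CodeSystem/transplant-role"

donor_patient = {
    "resourceType": "Patient",
    "id": "example-donor",
    "meta": {
        "tag": [
            {
                "system": PATIENT_ROLE_SYSTEM,
                "code": "donor",
                "display": "Organ donor",
            }
        ]
    },
    "name": [{"family": "Chalmers", "given": ["Peter"]}],
}

def transplant_role(patient):
    """Return 'donor'/'recipient' from meta.tag, or None if untagged."""
    for tag in patient.get("meta", {}).get("tag", []):
        if tag.get("system") == PATIENT_ROLE_SYSTEM:
            return tag.get("code")
    return None

print(transplant_role(donor_patient))  # donor
```

On a PUT, the server could call something like `transplant_role()` on the incoming resource to decide which back-end store the update belongs to.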
With the RIM there'd have been an esoteric modelling mechanism to define what you wanted, likely walking through 3-4 classes to get to one element (and a whole lot of fixed values along the way). With FHIR, if you're doing something esoteric, you just define an extension.
If you see something in the core specification you find impenetrable, please submit a change request asking for the language to be improved. (There's a "propose a change" link at the bottom of every page and registration is free.)

Luis preBuilt entity personName - unexpected behaviour, missing basic name in utterance

In summary, why does LUIS not label the prebuilt entity personName in some cases? Often the second name is not labeled, for no discernible reason.
This behaviour does not exist for, say, the geography prebuilt entity with the same kinds of utterances.
If anyone can explain why this happens and how best to address it, I'd greatly appreciate it.
This simply does not make sense to me. I would love to understand more.
[Image: LUIS not labelling all personName entities correctly]
[Image: geography example, without the same issues as above]
Thanks. K.
The scenario you described is currently the expected behavior (it might extract some or all names). We are working on improving the built-in personName entity (it's currently on our roadmap). In the meantime, we recommend creating a machine-learned entity where you label the instances of names in your dataset, and using the personName entity as a feature to help create your own name entity. Sorry for the inconvenience, but hope this helps!

What is the difference between a machine-learned entity with a list entity constraint and a list entity by itself when using LUIS NLU entities?

In the v3 API for building LUIS apps I notice an emphasis on machine-learned entities. When working with them I noticed something that concerns me, and I was hoping to get more insight into the matter.
The idea is that when using a machine-learned entity you can bind it to descriptors of phrase lists, or to other entities or list entities as a constraint on that machine-learned entity. Why not just aim to extract the list entity by itself? What is the purpose of wrapping it in a machine-learned object?
I ask this because I have always had great success with lists. They're very controllable, albeit you need to watch for spelling mistakes and variations to ensure accuracy. However, when I use machine-learned entities I notice you have to be more careful with word order; if there is a variation, the machine-learned entity may not be picked up.
Training would fix this, but if I already know I have the intent I want and I just need the entities from it, what does the machine-learned entity really provide?
It seems you need to be more careful with it.
I say this with a suspicion: would the answer lie in the fact that a machine-learned entity increases intent detection, whereas a list entity only improves entity detection? If that is the answer that fits best, I think I can see the solution to what I am looking for.
EDITED:
I haven't been keeping up with LUIS ever since I went on maternity leave, and lo and behold, it's moving from V2 to V3!
The following is from an email conversation with a writer on the LUIS documentation team.
LUIS is moving away from different types of entities toward a single ML entity to encapsulate a concept. The ML entity can have children which are ML entities themselves. An ML entity can have a feature directly connected to it, instead of acting as a global feature.
This feature can be a phrase list, or it can be another model such as a prebuilt entity, regex entity, or list entity.
So a year ago a customer might have built a composite entity and thrown features into the app. Now they should create an ML entity with children, and these children should have features.
Now (before the //Build conference) any non-phrase-list feature can be a constraint (required), so a child entity with a constrained regex entity won't fire until the regex matches.
At/after //Build, this concept has been reworked in the UI to be a required feature – same idea but different terminology.
...
This is about understanding a whole concept that has parts, so an address is a typical example. An address has street number, street name, street type (street/court/blvd), city, state/province, country, postal code.
Each of the subparts is a feature (strong indicator) that an address is in the utterance.
If you used a list entity but not as a required feature to the address, yes it would trigger, but that wouldn’t help the address entity which is what you are really trying to get.
If, however, you really just want to match a list, go ahead. But when the customer says the app isn't predicting as well as they thought, the team will come back to this concept of the ML entity parent and its parts and suggest changes to the entities.
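As a rough sketch of why the parent matters, here is what a machine-learned "Address" parent with child entities could look like in a v3 prediction response. The nesting shape (children returned inside the parent's match) is an assumption based on how ML entity hierarchies are described above; the payload is hand-written, not captured from a real app.

```python
# Hand-written sample of a v3 prediction where a machine-learned "Address"
# parent entity has child entities. Children appear nested under the parent,
# so matching a street-type list entity alone would never give you the
# whole address concept the parent represents.
response = {
    "prediction": {
        "topIntent": "UpdateAddress",
        "entities": {
            "Address": [
                {
                    "StreetNumber": ["1"],
                    "StreetName": ["microsoft way"],
                    "City": ["redmond"],
                    "State": ["wa"],
                }
            ]
        },
    }
}

def first_address(resp):
    """Flatten the first Address parent match into a simple part->value dict."""
    addresses = resp["prediction"]["entities"].get("Address", [])
    if not addresses:
        return {}
    return {part: values[0] for part, values in addresses[0].items()}

print(first_address(response))
```

The list entity (street types, say) still does its narrow job, but only as a required feature feeding the parent; the caller consumes the assembled parent, not the raw list match.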

LUIS intents not relating to entities

We want to create a chatbot with about 80 intents, where most of them are just questions with direct answers related to HR (benefits, payroll, etc). It could be done with QnA Maker, but we decided to use LUIS and utterances so we have the option to query entities in the future if we need to.
I have tried to create some entities to lift the score of some intents, for example, we have a benefit question about inscriptions to sports in our company:
Utterance:
How can I enroll in soccer?
What steps do I have to register in tennis lessons?
...
So I have created two entities, one for "ActivityType" (soccer, tennis, etc), and another for "Enrollment" (register, enrollment, etc). The latter is more of a synonym variable, so I don't have to write that many utterances.
Then the utterances translate into:
How can I {Enrollment} in {ActivityType}
What steps do I have to {Enrollment} in {ActivityType} lessons?
These are my questions:
1) The "Enrollment" entity is used to avoid creating so many utterances. Does it make sense here, or is there something better in LUIS for that?
2) I have tested the system and, in some questions, it picks up both entities (Enrollment and ActivityType), which are only present in a specific intent, but then it ranks at the top another intent which doesn't have those entities at all. I would expect the entities to somewhat lift the probability of selecting the intents that use them (in this case, only one intent uses them, so it should be pretty obvious, but it is not being selected).
Any help will be greatly appreciated :)

Interface for viewing demographic information in maps

I am involved in a research project where we try to find correlations between demographics and restaurant types. We want an interface where we can apply demographic information over a map of the city as filters and check how the restaurant types and information change.
I am lost on what sort of tools to use for this purpose.
Note: I am not sure whether this is the right place to post this question. If there is a specific SO site for this, I will move it there.
There isn't a specific SO site for OpenStreetMap, but there is https://help.openstreetmap.org/, which would be an even better place to ask.
