1) In patterns, LUIS does not let you have more than three alternatives with the OR operator; e.g. (a|b|c|d) is illegal. Why?
2) In patterns, is there any way to specify optional free text, something like "I want to [text] {entity}", so that the user can type anything between "to" and {entity}?
3) In patterns, I cannot make a word optionally plural; e.g. "How to contact the supplier[s]" doesn't work when the user types "How to contact the suppliers". I had to add "suppliers" to my entities list, which I find inconvenient.
4) When you delete an intent, all of its utterances automatically go to the None intent. I think that should be an option: "Do you want to move the utterances to the None intent?"
We've implemented the $match operation for Patient, which takes a FHIR Parameters resource with the search criteria. How should this search work when the Patient resource in the parameters contains multiple given names? We don't see anything in FHIR that speaks to this. Our best guess is that we treat it as an OR when trying to match on given names in our system.
We do see that composite parameters can be used in the query string as AND or OR, but not sure how this equates when using the $match operation.
$match is intrinsically a 'fuzzy' search. Different servers will implement it differently. Many will allow for alternate spellings, common short names (e.g. 'Dick' for 'Richard'), etc. They may also allow for transposition of month and day and all sorts of similar data-entry errors. The 'closeness' of the match is reflected in the score the match is given. It's entirely possible to get back a match candidate that doesn't match any of the given names exactly if the score on the other elements is high enough.
So technically, I think SEARCH works this way:
AND
/Patient?given=John&given=Jacob&given=Jingerheimer
The above is an AND clause: it matches a person who has all of the given names "John", "Jacob", and "Jingerheimer".
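For comparison, a $match request carries the candidate patient inside a Parameters resource rather than in the query string. A minimal sketch of the request body (onlyCertainMatches and count are optional parameters defined by the $match operation; how a server weighs multiple given names remains implementation-specific):

```javascript
// Sketch of a Patient $match request body: a Parameters resource wrapping
// the candidate Patient. POST it to {base}/Patient/$match with
// Content-Type: application/fhir+json. How the server treats the multiple
// given names (AND, OR, or fuzzy scoring) is up to the implementation.
const matchRequest = {
  resourceType: "Parameters",
  parameter: [
    {
      name: "resource",
      resource: {
        resourceType: "Patient",
        name: [
          { given: ["John", "Jacob", "Jingerheimer"] },
        ],
      },
    },
    // Optional parameters defined by the operation:
    { name: "onlyCertainMatches", valueBoolean: false },
    { name: "count", valueInteger: 5 },
  ],
};
```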
Now, I realize SEARCH and MATCH are two different operations, but they are loosely related.
Patient matching is an "art". Be careful: a "false positive" (with a high "score") is, or could be, a very big deal.
But as Lloyd mentioned, you have a little more flexibility with your implementation of $match.
I have worked on two different "teams".
On one team, we never let anything "out the door" that was below an 80% match score. (How you determine a match score is a deeper discussion.)
On another team, we made $match work as "if you give me enough information to find a SINGLE match, I'll give it to you"; if not, we told people "not enough info to match a single patient".
Patient matching is HARD. Do not let anyone tell you differently.
At HIMSS and other events, when people show a demo of moving data, I always ask, "How did you match this single person on this side... as being the same person on the other side?"
Without patient matching, a lot of workflows fall apart at the get-go.
Side note, I actually reported a bug with the MS-FHIR-Server (which the team fixed very quickly) (for SEARCH) here:
https://github.com/microsoft/fhir-server/issues/760
"name": [
{
"use": "official",
"family": "Kirk",
"given": [
"James",
"Tiberious"
]
},
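Against a resource shaped like the fragment above, the repeated ?given= AND semantics from the earlier search example can be sketched as a simple predicate. This is illustrative only: a real server matches through its search index, and FHIR string search is case-insensitive prefix matching, which the sketch approximates with exact lowercase comparison.

```javascript
// Illustrative AND semantics for repeated ?given= search parameters:
// every requested given name must appear among the patient's given names.
function matchesAllGiven(patient, requestedGiven) {
  const allGiven = (patient.name || []).flatMap((n) => n.given || []);
  const lower = allGiven.map((g) => g.toLowerCase());
  return requestedGiven.every((g) => lower.includes(g.toLowerCase()));
}

const patient = {
  resourceType: "Patient",
  name: [{ use: "official", family: "Kirk", given: ["James", "Tiberious"] }],
};
```

Here matchesAllGiven(patient, ["james", "tiberious"]) holds, while requesting a given name the patient lacks makes it fail.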
Sidenote:
The HAPI FHIR object that represents this is ca.uhn.fhir.rest.param.TokenAndListParam.
Sidenote:
There is a feature request for Patient Match on the Ms-Fhir-Server github page:
https://github.com/microsoft/fhir-server/issues/943
I have two hierarchical entities like this (simplified): "Order::OpenOrder, Order::AnyOrder, Job::OpenJob, Job::AnyJob" for a search application. I'm trying to train LUIS to correctly understand inputs like (a)"acme open orders", (b)"open acme orders", (c)"acme open jobs", (d)"open acme jobs" using utterances.
If I just use the two simplest utterances, "open orders" -> Order::OpenOrder and "open jobs" -> Job::OpenJob, then inputs (a) and (c) work fine. But example (b) finds Order::OpenOrder with the string "acme" included in the entity's character range, and example (d) is unable to resolve any entities.
Complicating things, it's also legal to just input "acme orders" or "acme jobs"; I've trained LUIS using utterances like "blah orders" and "blah jobs", where "orders" and "jobs" are mapped to Order::AnyOrder and Job::AnyJob, respectively. And then you can also input things like "orders", "open Orders", etc.
Anyway, none of this works consistently, and I'm wondering if I'm taking the wrong approach to training LUIS to understand adjective-noun pairs where proper nouns can appear between them. Has anyone else built a model like this who could share some advice?
Thanks,
-Erik
I use the Node Botframework SDK, and the user has to fill out a questionnaire.
This questionnaire has three questions with the same answer choices: "yes", "no", "maybe".
But if the user answers "yep" or "yes of course" or "always", that should match "yes" (affirmative answer).
If the user answers "sometimes" or "it depends" or "rarely", that should match "maybe" (nuanced answer).
In the future, we must be able to handle new answers that weren't expected at the beginning (i.e., easily add new answers).
Unfortunately, Prompts.choice() doesn't permit binding a choice to an intent.
So, two solutions:
Use Prompts.choice() synonyms.
Use Prompts.text(), create three different intents (affirmative, nuance, negative), and pass the answer to LUIS; from the LUIS response, save the proper answer (yes | no | maybe).
Which one is the best solution? Does another solution exist?
Probably the way to go here is using the synonyms of Prompts.choice; however, an alternative you can explore is overriding some of the behavior of Prompts.choice to also call LUIS before parsing the response and returning whether or not it's valid.
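The synonym matching itself is cheap to prototype outside the SDK. A minimal sketch of the kind of mapping either approach produces (the synonym lists here are made up; in a real bot, Prompts.choice's synonyms support or a LUIS call would replace this):

```javascript
// Minimal local synonym matcher: maps free-text answers to one of the three
// canonical choices before (or instead of) calling LUIS. The synonym lists
// are illustrative only; extend them, or swap in a LUIS query, as needed.
const choices = {
  yes: ["yes", "yep", "yeah", "of course", "always", "sure"],
  maybe: ["maybe", "sometimes", "it depends", "rarely", "perhaps"],
  no: ["no", "nope", "never"],
};

function resolveAnswer(text) {
  const t = text.trim().toLowerCase();
  for (const [canonical, synonyms] of Object.entries(choices)) {
    if (synonyms.some((s) => t === s || t.includes(s))) return canonical;
  }
  return null; // unrecognized: fall back to LUIS or re-prompt
}
```

Adding a new expected answer later is then just appending to a synonym list, which addresses the "add easily new answers" requirement without retraining anything.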
Why don't you use buttons to get the user's entry? Alternatively, you can put this code in your ResumeAfterAsync function:
var r = await result;
if (r.ToLower().Contains("yes") || r.ToLower().Contains("yea") /* ... */)
{
}
but I think using buttons is the better way.
Is it possible to extract the first and follow sets from a rule using ANTLR4? I played around with this a little bit in ANTLR3 and did not find a satisfactory solution, but if anyone has info for either version, it would be appreciated.
I would like to parse user input up to the user's cursor location and then provide a list of possible choices for auto-completion. At the moment, I am not interested in auto-completing tokens which are partially entered; I want to display all possible following tokens at some point mid-parse.
For example:
sentence:
subjects verb (adverb)? '.' ;
subjects:
firstSubject (otherSubjects)* ;
firstSubject:
'The' (adjective)? noun ;
otherSubjects:
'and the' (adjective)? noun;
adjective:
'small' | 'orange' ;
noun:
CAT | DOG ;
verb:
'slept' | 'ate' | 'walked' ;
adverb:
'quietly' | 'noisily' ;
CAT : 'cat';
DOG : 'dog';
Given the grammar above...
If the user had not typed anything yet the auto-complete list would be ['The'] (Note that I would have to retrieve the FIRST and not the FOLLOW of rule sentence, since the follow of the base rule is always EOF).
If the input was "The", the auto-complete list would be ['small', 'orange', 'cat', 'dog'].
If the input was "The cat slept, the auto-complete list would be ['quietly', 'noisily', '.'].
So ANTLR3 provides a way to get the set of follows doing this:
BitSet followSet = state.following[state._fsp];
This works well. I can embed some logic into my parser so that when the parser calls the rule at which the user is positioned, it retrieves the follows of that rule and provides them to the user. However, this does not work as well for nested rules (for instance, the base rule), because the follow set ignores any sub-rule follows, as it should.
I think I need to provide the FIRST set if the user has completed a rule (this could be hard to determine), as well as the FOLLOW set, to cover all valid options. I also think I will need to structure my grammar such that two tokens are never subsequent at the rule level.
I would have to break the above "firstSubject" rule into some sub-rules...
from
firstSubject:
'The'(adjective)? CAT | DOG;
to
firstSubject:
the (adjective)? CAT | DOG;
the:
'the';
I have yet to find any information on retrieving the FIRST set from a rule.
ANTLR4 appears to have drastically changed the way it works with follows at the level of the generated parser, so at this point I'm not really sure if I should continue with ANTLR3 or make the jump to ANTLR4.
Any suggestions would be greatly appreciated.
ANTLRWorks 2 (AW2) performs a similar operation, which I'll describe here. If you reference the source code for AW2, keep in mind that it is only released under an LGPL license.
1. Create a special token which represents the location of interest for code completion.
In some ways, this token behaves like the EOF. In particular, the ParserATNSimulator never consumes this token; a decision is always made at or before it is reached.
In other ways, this token is unique. In particular, if the token is located at an identifier or keyword, it is treated as though the token type was "fuzzy", and allowed to match any identifier or keyword for the language. For ANTLR 4 grammars, if the caret token is located where the user has typed g, the parser will allow that token to match a rule name or the keyword grammar.
2. Create a specialized ATN interpreter that can return all possible parse trees which lead to the caret token, without looking past the caret for any decision, and without constraining the exact token type of the caret token.
3. For each possible parse tree, evaluate your code completion in the context of whatever the caret token matched in a parser rule.
The union of all the results found in step 3 is a superset of the complete set of valid code-completion results, and can be presented in the IDE.
The following describes AW2's implementation of the above steps.
1. In AW2, this is the CaretToken, and it always has the token type CARET_TOKEN_TYPE.
2. In AW2, this specialized operation is represented by the ForestParser<TParser> interface, with most of the reusable implementation in AbstractForestParser<TParser>, specialized for parsing ANTLR 4 grammars for code completion in GrammarForestParser.
3. In AW2, this analysis is performed primarily by GrammarCompletionQuery.TaskImpl.runImpl(BaseDocument).
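On the original question of retrieving FIRST sets: independent of ANTLR's generated code, FIRST sets can be computed directly from a grammar with the standard textbook fixed-point algorithm. A minimal sketch over the example grammar (plain JavaScript, not an ANTLR API; the empty string marks an epsilon alternative, used here to encode the (adjective)? and (adverb)? optionals):

```javascript
// Grammar as nonterminal -> list of alternatives (arrays of symbols).
// Anything that is not a key of `grammar` is a terminal; "" is epsilon.
const grammar = {
  sentence: [["subjects", "verb", "optAdverb", "."]],
  subjects: [["firstSubject"]],
  firstSubject: [["The", "optAdjective", "noun"]],
  optAdjective: [["adjective"], [""]],
  adjective: [["small"], ["orange"]],
  noun: [["cat"], ["dog"]],
  verb: [["slept"], ["ate"], ["walked"]],
  optAdverb: [["adverb"], [""]],
  adverb: [["quietly"], ["noisily"]],
};

// Standard fixed-point FIRST computation: iterate until no set grows.
function firstSets(grammar) {
  const first = {};
  for (const nt of Object.keys(grammar)) first[nt] = new Set();
  let changed = true;
  const add = (set, sym) => {
    if (!set.has(sym)) { set.add(sym); changed = true; }
  };
  while (changed) {
    changed = false;
    for (const [nt, alts] of Object.entries(grammar)) {
      for (const alt of alts) {
        let allNullable = true;
        for (const sym of alt) {
          if (!(sym in grammar)) {        // terminal (or "" = epsilon)
            if (sym !== "") { add(first[nt], sym); allNullable = false; }
            break;
          }
          // nonterminal: copy its FIRST minus epsilon
          for (const t of first[sym]) if (t !== "") add(first[nt], t);
          if (!first[sym].has("")) { allNullable = false; break; }
        }
        if (allNullable) add(first[nt], ""); // whole alternative can be empty
      }
    }
  }
  return first;
}
```

On the example grammar this yields FIRST(sentence) = {'The'}, matching the first auto-complete case above; combining it with FOLLOW information at the caret is the part the AW2 approach handles more robustly.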
Really, these are two questions, but related ones.
1) How would I give greater weight to phrases that are in the title column/index?
2) How do I prevent partial matches? E.g., if I searched for "art", it should ignore words like "part", "cart", etc.
1) The setFieldWeights() API function is for that.
2) You don't get partial matches by default. You must have done something to enable them, probably with min_prefix_len and/or min_infix_len. If you do sometimes want infix matches, look at the enable_star option.
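For (2), the relevant directives live in the index definition in sphinx.conf. A sketch (the index name and values are illustrative; check behavior against your Sphinx version, since wildcard handling changed across releases):

```
index my_index
{
    # ... source, path, etc. ...

    # Infix/prefix matching is OFF unless one of these is set:
    min_infix_len   = 3    # enables infix matches such as *art*
    # min_prefix_len = 3   # or prefix-only matching instead

    # With enable_star = 1, infixes/prefixes match only when the query
    # uses explicit stars (e.g. art*), so a plain "art" stays exact.
    enable_star     = 1
}
```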