"Press * to cancel" on date entry with Nuance OSDM? - ivr

I'm currently working on a VXML 2.0 app that uses Nuance OSDMs with GRXML grammars.
One of our prompts asks the caller to enter a date of birth, but if they don't have one handy, they can either say "cancel" or press the asterisk. It's a Date OSDM, and I've added an additional command grammar to handle the "cancel" or the asterisk for speech and DTMF entry, respectively.
Saying "cancel" works; the Date grammar is bypassed, the command grammar activates, and the code runs just as I expect. The asterisk, however, is a different story. When I run a debug call and press the asterisk key on my telephone, it's handled as a nomatch. Combing through the OSDM handbook, it appears that DTMF entry on Nuance Date OSDMs is run through the builtin DTMF digit grammar, with a range of 2-8 digits.
The handbook also states the following:
"If a parallelgrammar is specified, the OSDM matches the DTMF input to both the DTMF collection grammar and the parallelgrammar. If a DTMF character matches both grammars, the parallelgrammar match is returned."
So, I'm thinking that the digit grammar has "*" as a baked-in termination character, and it's overriding my explicit declaration that only "#" can be a termination character:
1. I press the asterisk.
2. The DTMF digit grammar gets activated.
3. The digit grammar returns a blank result, because the asterisk acts as a termination character and no other input was made.
4. A blank result is out of grammar (OOG), because the expected length is 2-8 digits.
5. A nomatch is returned.
I'm stuck using the OSDM, as its operation is vital to the way that our application does event logging. However, I can get creative with responding to the asterisk.
Is there another way to get the asterisk to be counted as valid input, and either have it reach my custom command grammar, or bypass the call to the OSDM and handle it myself?

The solution was to use a custom command grammar, separate from the existing global command grammar.
The OSDM responds with "COMMAND", in place of "SUCCESS", which requires a bit of silliness in the post-processing, but it's not too ugly.
This:
<date-osdm name="ClaimDate">
<dmname value="ClaimDate"/>
<collection_commandgrammar name="Generic_command.grxml"/>
<collection_dtmfcommandgrammar name="Generic_command_dtmf.grxml"/>
In place of this (the name of the grammar isn't code-significant, it just has different content):
<date-osdm name="ClaimDate">
<dmname value="ClaimDate"/>
<collection_parallelgrammar1 name="Generic_inputs.grxml"/>
<collection_dtmfparallelgrammar1 name="Generic_inputs_dtmf.grxml"/>
And voilà! It works.
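For reference, here is a minimal sketch of what a DTMF command grammar such as Generic_command_dtmf.grxml could contain. The file name comes from the snippet above; the root rule name, the semantic tag, and the tag-format value are illustrative assumptions, and how the OSDM maps the result to its COMMAND outcome depends on your OSDM configuration.

<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://www.w3.org/2001/06/grammar"
         version="1.0" mode="dtmf" root="Command"
         tag-format="semantics/1.0">
  <rule id="Command" scope="public">
    <!-- a single asterisk key press is treated as the "cancel" command -->
    <item>*</item>
    <tag>out.command = "cancel";</tag>
  </rule>
</grammar>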

Related

What does writing "\r\027[1A\027[K" to stdout do?

I came across some code for a chat application that runs in the terminal (in OCaml) and saw this string, "\r\027[1A\027[K", being printed before each new user message is written to the terminal.
I have tried googling the literals one by one, so I know that "\r" stands for carriage return and \027 for ESC in ASCII, but what do "[1A" and "[K" do? What character encoding is this?
And finally, what is the aggregate effect of this command?
ESC [ introduces a control sequence (CSI). The final byte A means "cursor up", so \x1b[1A moves the cursor up 1 line. The final byte K ("erase in line") erases from the cursor to the end of the line. So \r\x1b[1A\x1b[K returns to the start of the line, moves up one line, and erases its contents.
Of course, that is only valid if the terminal that receives that string recognizes the control sequences. Not all do.
See https://en.wikipedia.org/wiki/ANSI_escape_code
Note that "\027" is OCaml's decimal escape, so it already denotes ESC (character 27); the octal spelling "\033" is how C would write the same byte.
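To see the aggregate effect, here is a small C sketch that writes the same bytes the OCaml code does (C spells ESC as \x1b or \033, whereas OCaml's \027 is decimal):

#include <stdio.h>

/* Print a line, then erase it and print a replacement:
 * \r      returns the cursor to column 0
 * ESC[1A  moves the cursor up one line
 * ESC[K   erases from the cursor to the end of the line */
int main(void) {
    printf("user is typing...\n");
    printf("\r\x1b[1A\x1b[K");
    printf("alice: hello!\n");
    return 0;
}

Run in a terminal that understands ANSI sequences, only "alice: hello!" remains visible; the first line is overwritten.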

How to restrict characters using validation rule in Microsoft Access?

I need to create a validation rule that checks whether the entered text complies with national identification number standards (https://en.wikipedia.org/wiki/National_identification_number).
The following characters are allowed in an identification number: letters (including letters outside ASCII a-zA-Z), digits, spaces, hyphens, plus signs, equal signs, and slashes.
Writing an individual restriction for every case would make the validation rule extremely long
...WHERE (National_ID LIKE "0##########") OR (National_ID LIKE "0############")
and I am wondering if there is a better way that would make more sense.
Also, Access does not support Regex, which complicates the task even further.
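One possibility (untested here, and it only covers the ASCII letters; accented letters would still have to be added to the list or checked in VBA, e.g. in a BeforeUpdate event) is to invert the check using Access's Like character lists, rejecting any value that contains a character outside an allowed set. A validation rule along these lines:

Not Like "*[!0-9A-Za-z +=/-]*"

The [! ] list means "any character not in this set", so the rule fails only if such a character appears anywhere in the value; the hyphen has to sit at the start or end of the list to be taken literally.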

how to test my Grammar antlr4 successfully? [duplicate]

I have been starting to use ANTLR and have noticed that it is pretty fickle with its lexer rules. An extremely frustrating example is the following:
grammar output;
test: FILEPATH NEWLINE TITLE ;
FILEPATH: ('A'..'Z'|'a'..'z'|'0'..'9'|':'|'\\'|'/'|' '|'-'|'_'|'.')+ ;
NEWLINE: '\r'? '\n' ;
TITLE: ('A'..'Z'|'a'..'z'|' ')+ ;
This grammar will not match something like:
c:\test.txt
x
Oddly, if I change TITLE to TITLE: 'x' ;, it still fails, this time giving an error message saying "mismatched input 'x' expecting 'x'", which is highly confusing. Even more oddly, if I replace the usage of TITLE in test with FILEPATH, the whole thing works (although FILEPATH will match more than I am looking to match, so in general it isn't a valid solution for me).
I am highly confused as to why ANTLR is giving such extremely strange errors and then suddenly working for no apparent reason when shuffling things around.
This seems to be a common misunderstanding of ANTLR:
Language Processing in ANTLR:
The Language Processing is done in two strictly separated phases:
Lexing, i.e. partitioning the text into tokens
Parsing, i.e. building a parse tree from the tokens
Since lexing must precede parsing, there is a consequence: the lexer is independent of the parser; the parser cannot influence lexing.
Lexing
Lexing in ANTLR works as follows:
all rules with uppercase first character are lexer rules
the lexer starts at the beginning and tries to find the rule that best matches the current input
a best match is a match that has maximum length, i.e. the token that results from appending the next input character to the maximum length match is not matched by any lexer rule
tokens are generated from matches:
if one rule matches the maximum length match the corresponding token is pushed into the token stream
if multiple rules match the maximum length match the first defined token in the grammar is pushed to the token stream
Example: What is wrong with your grammar
Your grammar has two rules that are critical:
FILEPATH: ('A'..'Z'|'a'..'z'|'0'..'9'|':'|'\\'|'/'|' '|'-'|'_'|'.')+ ;
TITLE: ('A'..'Z'|'a'..'z'|' ')+ ;
Every string that TITLE matches will also be matched by FILEPATH, and FILEPATH is defined before TITLE: so each token that you expect to be a TITLE will come out as a FILEPATH.
There are two hints for that:
keep your lexer rules disjunct (no token should match a superset of another).
if your tokens intentionally match the same strings, then put them into the right order (in your case this will be sufficient).
if you need a parser-driven lexer you have to change to another parser generator: PEG parsers or GLR parsers will do that (but of course this can produce other problems).
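To make the reordering concrete for the grammar above (an untested sketch using the same rules as in the question, just swapped): the lexer still prefers the longer FILEPATH match for c:\test.txt, but for a bare x the match lengths tie and the first-declared rule, now TITLE, wins.

grammar output;
test: FILEPATH NEWLINE TITLE ;
// TITLE is declared first, so it wins length ties against FILEPATH
TITLE: ('A'..'Z'|'a'..'z'|' ')+ ;
FILEPATH: ('A'..'Z'|'a'..'z'|'0'..'9'|':'|'\\'|'/'|' '|'-'|'_'|'.')+ ;
NEWLINE: '\r'? '\n' ;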
This was not directly OP's problem, but for those who have the same error message, here is something you could check.
I had the same "mismatched input 'x' expecting 'x'" vague error message when I introduced a new keyword. The reason was that I had placed the new keyword after my VARNAME lexer rule, so it was matched as a variable name instead of as the new keyword. I fixed it by putting the keywords before the VARNAME rule.
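A minimal sketch of that situation (the keyword and the contents of the rules are made up for illustration; only the ordering matters): both rules match the keyword's spelling with the same length, so the tie is broken by declaration order.

// keyword declared first wins the length tie
IF      : 'if' ;
// declared after the keywords, so it no longer swallows 'if' as a name
VARNAME : [a-zA-Z_] [a-zA-Z_0-9]* ;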

GS1-128 barcode with ZPL does not put the AI in ()

I was expecting this command
^FO15,240^BY3,2:1^BCN,100,Y,N,Y,^FD>:>842011118888^FS
to generate a
(420) 11118888
interpretation line, instead it generates
~n42011118888
Does anyone have an idea how to generate the expected output?
TIA!
Joey
If the firmware is up to date, D mode can be used.
^BCo,h,f,g,e,m
^XA
^FO15,240
^BY3,2:1
^BCN,100,Y,N,Y,D
^FD(420)11118888^FS
^XZ
D = UCC/EAN Mode (x.11.x and newer firmware)
This allows dealing with UCC/EAN with and without chained
application identifiers. The code starts in the appropriate subset
followed by FNC1 to indicate a UCC/EAN 128 bar code. The printer
automatically strips out parentheses and spaces for encoding, but
prints them in the human-readable section. The printer automatically
determines if a check digit is required, calculates it, and prints it.
Automatically sizes the human readable.
The ^BC command's "interpretation line" feature does not support auto-insertion of the parentheses. (I think it's safe to assume this is partly because it has no way of determining what your data identifier is by just looking at the data provided - it could be 420, could be 4, could be any other portion of the data starting from the first character.)
My recommendation is that you create a separate text field which handles the logic for the parentheses, and place it just above or below the barcode itself. This is the way I've always approached these in the past - I prefer this method because I have direct control over the font, font size, and formatting of the interpretation line.
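A rough sketch of that separate-text-field approach (the ^FO coordinates and ^A0 font size are guesses to adjust for your label; the barcode data is the string from the question, and the third ^BC parameter is set to N to suppress the built-in interpretation line):

^XA
^FX Human-readable line as a plain text field; font and position are illustrative only ^FS
^FO15,200^A0N,30,30^FD(420) 11118888^FS
^FX Barcode itself, with its own interpretation line turned off ^FS
^FO15,240^BY3,2:1^BCN,100,N,N,Y^FD>:>842011118888^FS
^XZ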

ncurses escape sequence detection

What is the best way to detect escape sequences in ncurses raw mode?
The approach that comes to mind is to call getch, add each character to some kind of buffer, and then, when the text matches a known escape sequence, execute the appropriate command; otherwise ignore the sequence.
However, with this algorithm, if I hit an escape sequence the system doesn't know about, then even if I continue typing other characters, they will be considered part of the escape sequence.
Are there any rules about how long an escape sequence can be, or standard characters that end an escape sequence?
Or is there a way in ncurses to detect when the user has stopped typing? Escape sequences usually arrive as a burst of characters, but I have no way to detect the last one, because a blocking getch simply blocks when there is no more input.
For example, if I press Page Down and then the c key, I get a continuous stream of characters: the escape sequence ^[[6~ followed by the character c. How can I tell that the user first entered an escape sequence and then the c character, if I don't have a predefined set of known escape sequences?
You may wish to use libtermkey to parse the key events.
http://www.leonerd.org.uk/code/libtermkey/
Either you can connect it directly to stdin, or feed it bytes as given to you by getch().
As to the general idea of detecting the end: there are certain rules that sequences follow. Sequences starting with ESC [ are CSI sequences, and they end at the first octet in the range 0x40-0x7e. There are a few other kinds, but they rarely come from a terminal.
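A small C sketch of that CSI rule, independent of any library (the function name and buffering scheme are made up; it only handles the ESC [ case, not the other, rarer kinds of sequences):

#include <stddef.h>

/* Returns the length of a complete CSI sequence at the start of buf,
 * or 0 if the final byte has not arrived yet (or buf is not a CSI
 * sequence at all).  A CSI sequence is ESC '[' followed by parameter
 * bytes, terminated by the first byte in the range 0x40-0x7e. */
size_t csi_length(const unsigned char *buf, size_t len) {
    if (len < 2 || buf[0] != 0x1b || buf[1] != '[')
        return 0;
    for (size_t i = 2; i < len; i++) {
        if (buf[i] >= 0x40 && buf[i] <= 0x7e)
            return i + 1;   /* final byte found, e.g. '~' in ESC[6~ */
    }
    return 0;               /* incomplete: wait for more input */
}

If the final byte has not arrived yet, the caller can keep reading with a short timeout before concluding that the ESC stood alone; that is essentially what ncurses' ESCDELAY setting controls.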
Without a predefined set of escape sequences you will not be able to determine the end of a sequence.
Ncurses uses termcap/terminfo to enumerate the sequences and their meanings.
