I have to create a list of all languages, which should look like this:
Col 1   | Col 2
--------|----------
English | English
German  | Deutsch
French  | Français
Spanish | Español
...
Col 1: the language's name in English
Col 2: the language's name in that language itself
The list should cover all major languages (in other words: all languages which you can translate in Google Translate).
Of course this takes quite a while to do by hand.
Is it possible to generate this list with a script using the Google API?
Yes, Google has a Translation API which is very similar to Google Translate. A couple of its endpoints are of interest to you in this case.
There is a way to list all the available languages, which would populate your Col 1. By default, this returns all the supported language (and sometimes language-country) codes, but you can provide a target query parameter to also include each language's name in a "target language". In your case, you would pass en (or en-US) as the target.
In theory, you could then repeat this request once per language code, passing each code as the target, and use each language's entry for its own code to populate Col 2. (This may be the most accurate way, but you'll get back a lot of extra data you don't want.)
Of course, you can also just translate the text to get your Col 2 results.
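If you go the repeat-per-language route, the pairing step is simple once you have the responses. A minimal Python sketch, assuming payloads shaped like the v2 API's data.languages list (a list of {language, name} objects); build_rows is a hypothetical helper name and the fetching itself is left out:

```python
# Hypothetical helper (build_rows is not part of any API): pair Col 1
# and Col 2 from already-fetched "languages" responses.
# english_langs: the data.languages list returned for target=en;
# native_lookup: maps a language code to that language's name as
# returned when the same code was used as the target.
def build_rows(english_langs, native_lookup):
    return [(lang["name"], native_lookup.get(lang["language"], lang["name"]))
            for lang in english_langs]

# With sample payloads shaped like the API's response:
rows = build_rows(
    [{"language": "de", "name": "German"}, {"language": "fr", "name": "French"}],
    {"de": "Deutsch", "fr": "Français"},
)
# rows == [("German", "Deutsch"), ("French", "Français")]
```

The fallback to the English name covers codes the per-language query didn't return.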
Related
In the MEDICAL_SERVICE_LINES table, there is a field ‘PROCEDURE’. The data dictionary notes that this is ‘CPT, HCPCS, or ICD-10-PCS (less commonly)’. Is there a field that indicates which of these terminologies the code is from?
Can you use modifiers to help identify the terminology? Or are the code formats the best tool, like:
CPT:
5 numbers or 4 numbers and a letter (in that order)
HCPCS:
1 letter and 4 numbers (in that order).
This customer receives PLAID and is not in Sentinel. (data dictionary here)
The code formats would be the best way to distinguish definitively which type of code it is. The modifiers are not filled out all the time (some claims may not have modifiers attached to the procedure).
Your layout of the code formats is correct (see the HCPCS Coding section here for additional confirmation). HCPCS Level 1 comprises the CPT codes; HCPCS Level 2/3 is what we typically regard as just "HCPCS".
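If it helps, the format check can be written as a small classifier. A Python sketch - the regexes follow the layouts listed in the question, and the ICD-10-PCS pattern (7 alphanumeric characters, excluding I and O) is added for completeness; the precedence order is illustrative:

```python
import re

# Illustrative patterns, following the code layouts listed above.
CPT = re.compile(r"^\d{5}$|^\d{4}[A-Z]$")      # 5 digits, or 4 digits + 1 letter
HCPCS = re.compile(r"^[A-Z]\d{4}$")            # 1 letter + 4 digits
ICD10PCS = re.compile(r"^[0-9A-HJ-NP-Z]{7}$")  # 7 alphanumerics, no I or O

def code_type(code):
    code = code.strip().upper()
    if CPT.match(code):
        return "CPT"
    if HCPCS.match(code):
        return "HCPCS"
    if ICD10PCS.match(code):
        return "ICD-10-PCS"
    return "UNKNOWN"
```

For example, code_type("99213") returns "CPT" and code_type("J1200") returns "HCPCS".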
I have been developing a proof of concept for free-text analytics. The RUTA scripts I have developed for account numbers, dates, salutations, addresses, PIN codes, and names seem to work properly.
But I am stuck on one rule where I want to extract a license number in the UK format from a textual paragraph. The rule I developed seems to work properly when the license number alone is passed as input, but for some reason it fails inside a longer text.
Any help would be highly appreciated, as I have been stuck on this issue for quite some time.
PACKAGE uima.ruta.example;
DECLARE VarA;
DECLARE VarB;
DECLARE VarC;
W{REGEXP("^(?i)(a-z){2}") -> MARK(VarA)}
NUM{REGEXP("..") -> MARK(VarB)}
W{REGEXP("(?i)(a-z){3}$") -> MARK(VarC), MARK(EntityType,1,3), UNMARK(VarA), UNMARK(VarB), UNMARK(VarC)};
The format which I am expecting is
C - Character
N - Number
CCNNCCC
CCNN CCC
Your question (or problem) is not totally clear to me. Also, the example script doesn't work (EntityType is not declared and the regular expressions are not valid).
I made an example script. Maybe that will help you:
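Something along these lines should match the CCNN CCC shape (a sketch only - LicensePlate is an illustrative type name; note that RUTA's REGEXP condition is matched against the whole covered text of the annotation, so no ^/$ anchors are needed, and character classes must be written [A-Za-z], not (a-z)):

```
PACKAGE uima.ruta.example;

DECLARE LicensePlate;

// "AB12 CDE": two letters, two digits, three letters. The default
// seeder splits "AB12CDE" into letter and digit runs, so this also
// covers the unspaced CCNNCCC form.
W{REGEXP("[A-Za-z]{2}")}
    NUM{REGEXP("[0-9]{2}")}
    W{REGEXP("[A-Za-z]{3}") -> MARK(LicensePlate, 1, 3)};
```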
Is it possible to extract the FIRST and FOLLOW sets from a rule using ANTLR4? I played around with this a little bit in ANTLR3 and did not find a satisfactory solution, but if anyone has info for either version, it would be appreciated.
I would like to parse user input up to the user's cursor location and then provide a list of possible choices for auto-completion. At the moment, I am not interested in auto-completing tokens which are only partially entered; I want to display all possible following tokens at some point mid-parse.
For example:
sentence:
subjects verb (adverb)? '.' ;
subjects:
firstSubject (otherSubjects)* ;
firstSubject:
'The' (adjective)? noun ;
otherSubjects:
'and the' (adjective)? noun;
adjective:
'small' | 'orange' ;
noun:
CAT | DOG ;
verb:
'slept' | 'ate' | 'walked' ;
adverb:
'quietly' | 'noisily' ;
CAT : 'cat';
DOG : 'dog';
Given the grammar above...
If the user had not typed anything yet the auto-complete list would be ['The'] (Note that I would have to retrieve the FIRST and not the FOLLOW of rule sentence, since the follow of the base rule is always EOF).
If the input was "The", the auto-complete list would be ['small', 'orange', 'cat', 'dog'].
If the input was "The cat slept", the auto-complete list would be ['quietly', 'noisily', '.'].
So ANTLR3 provides a way to get the set of follows doing this:
BitSet followSet = state.following[state._fsp];
This works well. I can embed some logic into my parser so that when the parser calls the rule at which the user is positioned, it retrieves the follows of that rule and then provides them to the user. However, this does not work as well for nested rules (for instance, for the base rule, because its follow set ignores any sub-rule follows, as it should).
I think I need to provide the FIRST set when the user has completed a rule (this could be hard to determine), as well as the FOLLOW set, to cover all valid options. I also think I will need to structure my grammar such that two tokens are never subsequent at the rule level.
I would have to break the above "firstSubject" rule into some sub-rules...
from
firstSubject:
'The'(adjective)? CAT | DOG;
to
firstSubject:
the (adjective)? CAT | DOG;
the:
'the';
I have yet to find any information on retrieving the FIRST set from a rule.
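For what it's worth, FIRST sets can also be computed by hand from the grammar with the textbook algorithm. A small Python sketch over the toy grammar above (optionals expanded into explicit alternatives and otherSubjects elided for brevity):

```python
# Toy grammar from the question, with (adverb)?/(adjective)? expanded
# into explicit alternatives and otherSubjects elided for brevity.
# Terminals are written as quoted strings.
GRAMMAR = {
    "sentence":     [["subjects", "verb", "adverb", "'.'"],
                     ["subjects", "verb", "'.'"]],
    "subjects":     [["firstSubject"]],
    "firstSubject": [["'The'", "adjective", "noun"],
                     ["'The'", "noun"]],
    "adjective":    [["'small'"], ["'orange'"]],
    "noun":         [["'cat'"], ["'dog'"]],
    "verb":         [["'slept'"], ["'ate'"], ["'walked'"]],
    "adverb":       [["'quietly'"], ["'noisily'"]],
}

def first(symbol, grammar):
    """FIRST(symbol): the terminals that can start a derivation of it.
    No rule above is nullable or left-recursive, so only the first
    symbol of each production matters and plain recursion is safe."""
    if symbol.startswith("'"):            # a terminal's FIRST is itself
        return {symbol.strip("'")}
    out = set()
    for production in grammar[symbol]:
        out |= first(production[0], grammar)
    return out
```

Here first("sentence", GRAMMAR) gives {'The'}, matching the first auto-complete list above.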
ANTLR4 appears to have drastically changed the way it works with follows at the level of the generated parser, so at this point I'm not really sure if I should continue with ANTLR3 or make the jump to ANTLR4.
Any suggestions would be greatly appreciated.
ANTLRWorks 2 (AW2) performs a similar operation, which I'll describe here. If you reference the source code for AW2, keep in mind that it is only released under an LGPL license.
Create a special token which represents the location of interest for code completion.
In some ways, this token behaves like the EOF. In particular, the ParserATNSimulator never consumes this token; a decision is always made at or before it is reached.
In other ways, this token is unique. In particular, if the token is located at an identifier or keyword, it is treated as though the token type were "fuzzy", and allowed to match any identifier or keyword for the language. For ANTLR 4 grammars, if the caret token is located where the user has typed g, the parser will allow that token to match a rule name or the keyword grammar.
Create a specialized ATN interpreter that can return all possible parse trees which lead to the caret token, without looking past the caret for any decision, and without constraining the exact token type of the caret token.
For each possible parse tree, evaluate your code completion in the context of whatever the caret token matched in a parser rule.
The union of all the results found in step 3 is a superset of the complete set of valid code completion results, and can be presented in the IDE.
The following describes AW2's implementation of the above steps.
In AW2, this is the CaretToken, and it always has the token type CARET_TOKEN_TYPE.
In AW2, this specialized operation is represented by the ForestParser<TParser> interface, with most of the reusable implementation in AbstractForestParser<TParser> and specialized for parsing ANTLR 4 grammars for code completion in GrammarForestParser.
In AW2, this analysis is performed primarily by GrammarCompletionQuery.TaskImpl.runImpl(BaseDocument).
iPhone has a pretty good telephone number splitting function, for example:
Singapore mobile: +65 9852 4135
Singapore resident line: +65 6325 6524
China mobile: +86 135-6952-3685
China resident line: +86 10-65236528
HongKong: +886 956-238-82
USA: +1 (732) 865-3286
Notice the nice features here:
- the splitting of country code, area code, and the rest is automatic;
- the delimiter is also nicely adapted to different countries, e.g. "()", "-" and space.
Note that the parsing logic is doable for me; however, I don't know where to get the knowledge of most countries' telephone number formats.
Where could I find such knowledge, or open-source code that implements it?
You can get similar functionality with the libphonenumber code library.
Interestingly enough, you cannot use an NSNumberFormatter for this, but you can write your own custom class for it. Just create a new class, set properties such as countryCode, areaCode and number, and then create a method that formats the number based on the countryCode.
Here's a great example: http://the-lost-beauty.blogspot.com/2010/01/locale-sensitive-phone-number.html
As an aside: a friend told me about a gigantic regular expression he had to maintain that could pick telephone numbers out of intercepted communications from hundreds of countries around the world. It was very non-trivial.
Thankfully your problem is easier, as you can just have a table with the per-country formats:
format[usa] = "+d (ddd) ddd-dddd";
format[hk] = "+ddd ddd-ddd-dd";
format[china_mobile] = "+dd ddd-dddd-dddd";
...
Then when you're printing, you simply output one digit from the phone number string in each d spot as needed. This assumes you know the country, which is a safe enough assumption for telephone devices -- pick "default" formats for the few surrounding countries.
Since some countries have different formats with different lengths you might need to store your table with additional information:
format[germany][10] = "..."
format[germany][11] = "....."
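A sketch of the table-driven formatter in Python, using the patterns from the table above - each d consumes the next digit of the number, and every other pattern character is copied through:

```python
# Table-driven formatting, as described above: each 'd' consumes the
# next digit of the number; all other pattern characters pass through.
FORMATS = {
    "usa":          "+d (ddd) ddd-dddd",
    "hk":           "+ddd ddd-ddd-dd",
    "china_mobile": "+dd ddd-dddd-dddd",
}

def format_number(digits, country):
    it = iter(digits)
    return "".join(next(it) if ch == "d" else ch
                   for ch in FORMATS[country])

print(format_number("17328653286", "usa"))  # +1 (732) 865-3286
```

For countries like the German example, where the format depends on the number's length, the table would be keyed by (country, len(digits)) instead.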
My company has a client that tracks prices for products from different companies at different locations. This information goes into a database.
These companies email the prices to our client each day, and of course the emails are all formatted differently. It is impossible to have any of the companies change their format - they will not do it.
Some look sort of like this:
This is example text that could be many lines long...
Location 1
Product 1 Product 2 Product 3
$20.99 $21.99 $33.79
Location 2
Product 1 Product 2 Product 3
$24.99 $22.88 $35.59
Others look sort of like this:
PRODUCT PRICE + / -
------------ -------- -------
Location 1
1 2007.30 +048.20
2 2022.50 +048.20
Maybe some multiline text here about a holiday or something...
Location 2
1 2017.30 +048.20
2 2032.50 +048.20
Currently we have individual parsers written for each company's email format. But these formats change slightly pretty frequently. We can't count on the prices being on the same row or column each time.
It's trivial for us to look at the emails and determine which price goes with which product at which location. But not so much for our code. So I'm trying to find a more flexible solution and would like your suggestions about what approaches to take. I'm open to anything from regex to neural networks - I'll learn what I need to make this work, I just don't know what I need to learn. Is this a lex/parsing problem? More similar to OCR?
The code doesn't have to figure out the formats all on its own. The emails fall into a few main 'styles' like the ones above. We really need the code to just be flexible enough that a new product line or whitespace or something doesn't make the file unparsable.
Thanks for any suggestions about where to start.
I think this problem would be suitable for a proper parser generator. Regular expressions are too difficult to test and debug when they go wrong. However, I would go for a parser generator that is simple to use, as if it were part of the language.
For this type of task I would go with pyparsing, as it gives you the power of a full parser without a difficult grammar to define, and it has very good helper functions. The code is easy to read, too.
from pyparsing import *

aaa = """ This is example text that could be many lines long...
another line
Location 1
Product 1 Product 2 Product 3
$20.99 $21.99 $33.79
stuff in here you want to ignore
Location 2
Product 1 Product 2 Product 3
$24.99 $22.88 $35.59 """

result = (SkipTo("Location").suppress()
          # in place of "Location" you could use any type of match, e.g. a regex
          + OneOrMore(Word(alphas) + Word(nums))
          + OneOrMore(Word(nums + "$.")))
all_results = OneOrMore(Group(result))
parsed = all_results.parseString(aaa)
for block in parsed:
    print(block)
This returns a list of lists.
['Location', '1', 'Product', '1', 'Product', '2', 'Product', '3', '$20.99', '$21.99', '$33.79']
['Location', '2', 'Product', '1', 'Product', '2', 'Product', '3', '$24.99', '$22.88', '$35.59']
You can group things as you want but for simplicity I have just returned lists. Whitespace is ignored by default which makes things a lot simpler.
I do not know if there are equivalents in other languages.
You have given two sample patterns of text files.
I think these can be handled with scripting: something like AWK, sed, and grep with bash scripting.
The pattern in the first sample:
Section starts with keyword Location [Number]
second line of section has columns describing product names
third line of section has columns with prices for the products
There can be variable number of products per section.
There can be variable number of sections per file.
Products and prices are always on their designated lines of a section.
Whitespace separation identifies the (product,price) column-association.
Number of products in a section matches the number of prices in that section.
The collected data would probably be assimilated in a database.
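As a sketch of that scripting approach for the first sample's layout (awk, with the sample inlined; the field pairing assumes product names are two tokens like "Product 1"):

```shell
# awk sketch for the first sample's layout (illustrative): remember the
# current Location line and the product-name row, then emit one
# "location | product | price" record per price column.
sample='Location 1
Product 1 Product 2 Product 3
$20.99 $21.99 $33.79
Location 2
Product 1 Product 2 Product 3
$24.99 $22.88 $35.59'

printf '%s\n' "$sample" | awk '
/^Location/ { loc = $0; next }
/^Product/  { for (i = 1; i <= NF; i += 2)          # names are "Product N" pairs
                  prod[(i + 1) / 2] = $i " " $(i + 1)
              next }
/^\$/       { for (i = 1; i <= NF; i++)
                  print loc " | " prod[i] " | " $i }'
```

The first record printed is "Location 1 | Product 1 | $20.99"; the output triples can then be loaded into the database.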
The one thing I know I would use here is regular expressions. Three or four expressions could drive the parse logic for each e-mail format.
Trying to write the parse engine more generally than that would, I think, be skirting the edge of overprogramming it.
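For the second sample format, a handful of expressions really can drive the whole parse. An illustrative Python sketch (the patterns are assumptions based on the sample, not a definitive format):

```python
import re

# Illustrative patterns based on the second sample: one finds section
# headers, the other matches "<product> <price> <change>" rows; every
# other line (column headers, rules, holiday notices) is skipped.
LOCATION = re.compile(r"^Location\s+(\S+)")
PRICE_ROW = re.compile(r"^\s*(\d+)\s+([\d.]+)\s+([+-][\d.]+)\s*$")

def parse(text):
    loc = None
    for line in text.splitlines():
        if m := LOCATION.match(line):
            loc = m.group(1)
        elif (m := PRICE_ROW.match(line)) and loc is not None:
            yield loc, m.group(1), m.group(2)   # (location, product, price)

for row in parse("Location 1\n1 2007.30 +048.20"):
    print(row)  # ('1', '1', '2007.30')
```

Because unmatched lines are simply skipped, extra whitespace or a stray notice line does not break the parse, which addresses the flexibility concern above.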