Using the Countries gem https://github.com/hexorx/countries
We have the method:
ISO3166::Country.find_all_countries_by_continent('Europe')
We also have:
ISO3166::Country.countries
But we don't have:
ISO3166::Country.continents
I want to have a 'Filter by continent' select which could be translated using I18n.
Is there an easy way to get all available continents?
As far as I know (I could be wrong), hexorx/countries doesn't give you this possibility natively.
PS: I have created a feature request for this, in case it can make things happen.
You can get all the continents using
ISO3166::Data.cache.map {|_,v| v['continent']}.uniq
#=> ["Asia", "North America", "Africa", "Europe", "South America", "Antarctica", "Australia"]
Since this list is highly unlikely to change in the foreseeable future, you can simply cache the result yourself:
ISO3166::CONTINENTS = ISO3166::Data.cache.map {|_,v| v['continent']}.uniq.sort
Then you can just reuse the constant ISO3166::CONTINENTS anywhere you need to.
That being said, the i18n_data gem (which is used for the translation of country names) does not appear to provide translations for continent names, so maintaining those would likely fall on your shoulders.
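If you do end up maintaining the translations yourself, a minimal sketch might look like this (the continents.* locale keys and the continent_options helper below are hypothetical, not part of the gem):

# config/locales/en.yml (keys you would maintain yourself):
#   en:
#     continents:
#       europe: Europe
#       north_america: North America

def continent_options
  ISO3166::CONTINENTS.map do |continent|
    key = continent.downcase.tr(' ', '_') # "North America" => "north_america"
    [I18n.t("continents.#{key}", default: continent), continent]
  end
end

# In a Rails view: select_tag :continent, options_for_select(continent_options)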
A Set might make this slightly more efficient too (although, since getting the continents is simply Array traversal plus Hash lookup, it is already reasonably fast):
require 'set'

ISO3166::CONTINENTS = ISO3166::Data.cache.each_with_object(Set.new) do |(_, v), s|
  s.add(v['continent'])
end.sort
We've implemented the $match operation for patient that takes FHIR parameters with the search criteria. How should this search work when the patient resource in the parameters contains multiple given names? We don't see anything in FHIR that speaks to this. Our best guess is that we treat it as an OR when trying to match on given names in our system.
We do see that composite parameters can be used in the query string as AND or OR, but not sure how this equates when using the $match operation.
$match is intrinsically a 'fuzzy' search. Different servers will implement it differently. Many will allow for alternate spellings, common short names (e.g. 'Dick' for 'Richard'), etc. They may also allow for transposition of month and day and all sorts of similar data entry errors. The 'closeness' of the match is reflected in the score the match is given. It's entirely possible to get back a match candidate that doesn't match any of the given names exactly if the score on other elements is high enough.
So technically, I think SEARCH works this way:
AND
/Patient?given=John&given=Jacob&given=Jingerheimer
The above is an AND: it matches a patient whose given names include all of "John", "Jacob", and "Jingerheimer".
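For contrast, as the FHIR search spec defines it, OR within one parameter is expressed by separating values with commas, while repeating the parameter means AND:

GET [base]/Patient?given=John&given=Jacob   (AND: both given names must match)
GET [base]/Patient?given=John,Jacob         (OR: either given name may match)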
Now, I realize SEARCH and MATCH are two different operations, but they are loosely related.
But patient matching is an "art". Be careful: a false positive (with a high score) is, or could be, a very big deal.
But as Lloyd mentioned, you have a little more flexibility with your implementation of $match.
I have worked on two different teams.
On one team, we never let anything "out the door" that was below an 80% match-score. (How you determine a match-score is a deeper discussion.)
On another team, we made $match work as "if you give me enough information to find a SINGLE match, I'll give it to you"; if not, we told people "not enough info to match a single patient".
Patient Matching is HARD. Do not let anyone tell you different.
At HIMSS and other events, when people show a demo of moving data, I always ask "how did you match this single person on this side as being that person on the other side?"
As in: without patient matching, a lot of workflows fall apart at the get-go.
Side note: I actually reported a (SEARCH-related) bug with the MS-FHIR-Server, which the team fixed very quickly:
https://github.com/microsoft/fhir-server/issues/760
"name": [
{
"use": "official",
"family": "Kirk",
"given": [
"James",
"Tiberious"
]
},
Sidenote:
The HAPI FHIR object to represent this is ca.uhn.fhir.rest.param.TokenAndListParam.
Sidenote:
There is a feature request for Patient Match on the Ms-Fhir-Server github page:
https://github.com/microsoft/fhir-server/issues/943
I don't know if it can be called an algorithm, but I think it's close.
I will be pulling data from an API that will have certain words in the title, e.g.:
Great Software 2.0 Download Now
Buy Great Software for just $10
Great Software Torrent Download
So, I want to do different things based on the presence of certain words such as "Download", "Buy", etc. For example, if the title has the word "buy" in it, I would like to extract the word "buy" and the amount value present in the title and show them in another div, so in this case it would be "Buy for $10" or "Buy $10", etc. I could do if/else as well, but I don't want to, because there could be more such conditions in the future. So what I am thinking about is using the send method, e.g.:
def buy(string)
  'Buy for just ' + string.scan(/\$\d+/).first
end

def whichkeyword(title)
  send(title.scan(/(download|buy)/i)[0][0].downcase.to_sym, title)
end
whichkeyword('Buy this software for $10 now')
Is there a better way to do this? Or is this even a good way to do it? Any help would be appreciated.
First of all, use send if and only if you are to call a private method; use public_send otherwise.
In this particular case metaprogramming is overkill. It requires too much redundant code, plus it requires the code to be changed for new items. I would go with building a hash like:
@hash = { 'buy' => { text: 'Buy for just %{placeholder}', re: /\$\d+/ } }
This hash might be placed somewhere outside of the code; e.g. it might be stored in a YAML file next to the code and loaded in advance. That way you can change the behaviour without modifying the code, which is handy, for instance, in a gem.
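A minimal sketch of that externalization, assuming a hypothetical keywords.yml next to the code (regexps can't live in plain YAML, so they are stored as strings and compiled on load):

require 'yaml'

# keywords.yml (hypothetical):
#   buy:
#     text: 'Buy for just %{placeholder}'
#     re: '\$\d+'

@hash = YAML.load_file('keywords.yml').each_with_object({}) do |(key, cfg), acc|
  acc[key] = { text: cfg['text'], re: Regexp.new(cfg['re']) }
end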
As we have a hash defined/loaded, I would call the method:
def format(string)
  key = string[/#{Regexp.union(@hash.keys).source}/i].downcase
  puts @hash[key][:text] % { placeholder: string[@hash[key][:re]] }
end
Yielding:
▶ format("Buy this software for $10 now")
#⇒ Buy for just $10
There are many advantages over declaring methods; e.g. now matches might contain spaces, you might easily add/remove matchers, etc.
First of all, your algorithm can work, but it has some trouble spots, such as what happens when no keyword is present.
I have two solutions for you:
NLP
If you want it to be much more dynamic, you can use NLP: Natural Language Processing. NLP will find the main words in your sentence, and then you can pick the right action for each.
A good gem for that is Treat, which you can use with stanford-core-nlp. After processing the data you can find the verbs, and even synonyms, in the sentence and figure out what to do.
sentence('Buy this software for $10 now').verbs # ['buy']
Simple Hash
This solution is less dynamic, but much simpler. Like you did with the scan, just use a constant to manage your keywords and the output for each (I would do it with lambdas). You can also give the hash a default:
KEYWORDS = Hash.new(->(_title) { 'Default Title' }).merge(
  # Illustrative lambda bodies; fill in whatever each keyword should actually do
  buy:      ->(title) { "Buy for just #{title[/\$\d+/]}" },
  download: ->(title) { 'Download Now' }
)

# Keys are symbols, so convert the match before the lookup
# (safe-navigate so a title with no keyword falls through to the default)
KEYWORDS[sentence[/#{KEYWORDS.keys.join('|')}/i]&.downcase&.to_sym].call(sentence)
I think this solution is good enough.
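For example, with illustrative lambdas like the ones above, the lookup behaves like this:

sentence = 'Buy Great Software for just $10'
keyword  = sentence[/#{KEYWORDS.keys.join('|')}/i]  # => "Buy"
KEYWORDS[keyword&.downcase&.to_sym].call(sentence)  # => "Buy for just $10"

KEYWORDS[nil].call('Great Software 2.0')            # no keyword => default => "Default Title"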
The only thing that looks strange is scan(/(download|buy)/i)[0][0].
As for me, I don't much like using the [] syntax in Ruby.
I think using scan here is not necessary.
What about
def whichkeyword(title)
  title =~ /(download|buy)/i
  send($1.downcase.to_sym, title) unless $1.nil?
end
UPDATE
def whichkeyword(title)
  action = title[/(download|buy)/i]
  public_send(action.downcase.to_sym, title) if action
end
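Usage stays the same as in the question (assuming buy and download are defined as public methods); a title with no keyword now simply returns nil instead of raising:

whichkeyword('Buy this software for $10 now') #=> "Buy for just $10"
whichkeyword('Great Software 2.0')            #=> nil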
I had an interview today and wanted input on how you would solve this issue that came up. I answered the question, but in my mind I was thinking there is a better way.
Here is the scenario. You have two files that you need to compare. In the first file you have a list of NFL team abbreviations in string format, for example:
ARI
CHIC
GB
NYG
DET
WASH
PHL
PITT
STL
SF
CLEV
IND
DAL
KC
In the second file you would have the following information in a hash or JSON, for example:
{"data":
  {"description": {"name": "CLEV", "totfd": 26, "totyds": 396, "pyds": 282, "ryds": 114,
   "pen": 4, "penyds": 24, "trnovr": 0, "pt": 4, "ptyds": 163, "ptavg": 36, "top": "37:05"}},
 "players": null}
How would you take the strings in the first file (the abbreviations) and see if each abbreviation is included somewhere in the data of the second file? So, for example, I want to see if CLEV, ARI, WASH, and so on appear anywhere in the second file. If an abbreviation is included, I would want to extract information based on that abbreviation.
Here was my answer:
I would iterate over each abbreviation looking for that specific abbreviation inside the second file.
I felt my answer was poor, but I wanted to see if others had a good idea on what they would do.
Thanks,
Mike Riley
You should ask questions in your interview. Some questions I'd ask:
Will the hash/json include duplicate data for teams? Meaning, will CLEV have multiple records in there? If not, now you know you have unique data so there's no need to group anything ahead of time.
If it's not unique, I'd get a list of all the names that exist in the hash, so you can do a comparison between the array given and the other file.
This is O(n) for the traversal, plus an average O(1) hash lookup per value:
hash = [{'description': 'some team', 'name': 'CLEV','totfd':26,'totyds':396,'pyds':282 },
{'description': 'some team', 'name': 'PHL','totfd':26,'totyds':396,'pyds':282 }]
hash_names = hash.map { |team| team[:name] }
Now that we have a list of names in the hash, we can find out where there is an overlap. We can add the two arrays together and figure out who shows up in there more than once. There are many ways to do that, but we should keep with our run time of O(n):
list = ["ARI","CHIC","GB","NYG","DET","WASH","PHL","PITT","STL","SF","CLEV","IND","DAL"]
teams_in_both = (list + hash_names).group_by { |team| team }.keep_if { |_, occ| occ.size > 1 }.map(&:first)
Now we have a list of:
["PHL", "CLEV"]
We know enough to say who's important to us and can fetch the remaining data accordingly.
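As a side note, if both lists are already unique, plain Array intersection gets the same overlap more directly (Ruby implements & with a hash internally, so it stays roughly linear):

list       = ["ARI", "CHIC", "GB", "NYG", "DET", "WASH", "PHL", "PITT", "STL", "SF", "CLEV", "IND", "DAL"]
hash_names = ["CLEV", "PHL"]

list & hash_names #=> ["PHL", "CLEV"]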
I'm currently building a code generator, which aims to generate boilerplate for me once I write the templates and/or translations, in whatever language I have to work with.
I have a problem with a regex in Ruby. The regex aims to select whatever is between {{{ and }}}, so I can generate functions according to my needs.
My regex is currently :
/\{\{\{(([a-zA-Z]|\s)+)\}\}\}/m
My test data set is:
{{{Demande aaa}}} => {{{tagadatsouintsouin tutu}}}
The results are:
[["Demande aaa", "a"], ["tagadatsouintsouin tutu", "u"]]
Each time, the regex picks up the last character twice. That's not exactly what I want; I need something more like this:
/\{\{\{((\w|\W)+)\}\}\}/m
But this has a flaw too, the results are:
[["Demande aaa}}} => {{{tagadatsouintsouin tutu", "u"]]
Whereas, I wish to get:
[["Demande aaa"],["tagadatsouintsouin tutu"]]
How do I correct these regexes? I could use two sets of delimiters, but it won't teach me anything.
Edit:
All your regexes run against my data sample, so you all have a point.
Regexes may be overkill, and probably are overkill, for my purpose. So I have two questions.
First, does the regex keep the exact same indentation? It should be able to handle whole functions.
Second, is there something better suited to that task?
A detailed explanation of the purpose of this tool: I'm tired of writing boilerplate code in PHP/Symfony, so I wish to generate it from templates.
My intent is to build some views, some controllers, and even parts of the model this way.
Practical example: in my model, I wish to generate some functions according to the type of an object's attribute. For example, I have functions for displaying money correctly, so I need to build the correct function according to my attribute and then put it in my output file.
So there are some translations which themselves need translations.
So, a fictional example:
{{{euro}}} => {{{ function getMyAttributeEuro()
{
    return formating($this->get[[MyAttribute]]);
} }}}
To store my translations, should I use regexes like these?
I wish to build something a bit clever, so it can generate most of the basic code without bugs, and I can spend my time on the interesting code.
You have one set of capturing parentheses too many.
/\{\{\{([a-zA-Z\s]+)\}\}\}/
Also, you don't need the /m modifier because there is no dot (.) in your regex whose behaviour would be affected by it.
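Running the corrected regex against the sample data from the question gives the desired output:

data = '{{{Demande aaa}}} => {{{tagadatsouintsouin tutu}}}'
data.scan(/\{\{\{([a-zA-Z\s]+)\}\}\}/)
#=> [["Demande aaa"], ["tagadatsouintsouin tutu"]]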
I'm partial to:
data = '{{{Demande aaa}}} => {{{tagadatsouintsouin tutu}}}'
data.scan(/\{{3}(.+?)}{3}/).flatten.map{ |r| r.squeeze(' ') }
=> ["Demande aaa", "tagadatsouintsouin tutu"]
or:
data.scan(/\{{3}(.+?)}{3}/).flatten.map{ |r| [ r.squeeze(' ') ] }
=> [["Demande aaa"], ["tagadatsouintsouin tutu"]]
or:
data.scan(/\{{3}(.+?)}{3}/).map{ |r| [ r[0].squeeze(' ') ] }
=> [["Demande aaa"], ["tagadatsouintsouin tutu"]]
if you need the sub-arrays.
I'm not big on trying to do everything possible inside the regex. I prefer to keep it short and sweet, then polish the output once I've found what I was looking for. It's a maintenance issue, because regexes make my head hurt, and I stopped thinking of them as a macho thing years ago. Regexes are a very useful tool, but too often they are seen as the answer to every problem, which they're not.
Some people, when confronted with a problem, think “I know,
I'll use regular expressions.” Now they have two problems.
-- Jamie Zawinski
You want non-capturing groups (?:...), but here is another way.
/\{\{\{(.*?)\}\}\}/m
Just a shot
/\{\{\{([\w\W]+?)\}\}\}/
Added non-greediness to your regex.
This seems to work here.
Let's say I've got a collection of 10 million documents that look something like this:
{
  "_id": "33393y33y63i6y3i63y63636",
  "Name": "Document23",
  "CreatedAt": "5/23/2006",
  "Tags": ["website", "shopping", "trust"],
  "Keywords": ["hair accessories", "fashion", "hair gel"],
  "ContactVia": ["email", "twitter", "phone"],
  "Body": "Our website is dedicated to making hair products that are..."
}
I would like to be able to query the database for an arbitrary number (including zero) of any of the three attributes Tags, Keywords, and ContactVia. I need to be able to select via ANDs (this document includes BOTH X and Y) or ORs (this document includes X OR Y).
Example queries:
Give me the first 10 documents that have the tags website and
shopping, with the keywords matching "hair accessories or fashion"
and with a contact_via including "email".
Give me the second 20 documents that have the tags "website" or
"trust", matching the keywords "hair gel" or "hair accessories".
Give me the 50 documents that have the tag "website".
I also need to order these by either other fields in the documents
(score-type) or created or updated dates. So there are basically four "ranges" that are queried regularly.
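For concreteness, here is roughly how the first example query looks with the (pre-2.0) mongo Ruby driver; the docs collection name is an assumption:

db['docs'].find(
  'Tags'       => { '$all' => ['website', 'shopping'] },         # AND: both tags required
  'Keywords'   => { '$in'  => ['hair accessories', 'fashion'] }, # OR within keywords
  'ContactVia' => 'email'                                        # array membership
).sort([['CreatedAt', -1]]).limit(10)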
I started out SQL-based. Then I moved to Mongo because it had support for arrays and hashes (which I love). But it doesn't support more than one range using indexes, so my Mongo database is slow, because it can't use indexes and has to scan 10 million documents.
Is there a better alternative? This is holding up moving this application into production (and the revenue that comes with it). Any thoughts as to the right database or alternative architectures would be greatly appreciated.
I'm in Ruby/Rails if that matters.
When needing to do multiple queries on arrays, we found the best solution, at least for us, was to go with ElasticSearch. We get this, plus some other bonuses, and we can reduce the index requirements for Mongo, so it's a win/win.
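A minimal sketch of the first example query in Elasticsearch terms, assuming the elasticsearch-ruby gem and a hypothetical documents index:

require 'elasticsearch'

client = Elasticsearch::Client.new

client.search(index: 'documents', size: 10, body: {
  query: {
    bool: {
      must: [                                                           # AND semantics
        { term:  { 'Tags'       => 'website'  } },
        { term:  { 'Tags'       => 'shopping' } },
        { terms: { 'Keywords'   => ['hair accessories', 'fashion'] } }, # OR within the list
        { term:  { 'ContactVia' => 'email' } }
      ]
    }
  }
})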
My two cents are for MongoDB. Not only can your data be represented, saved, and loaded as raw Ruby hashes, but Mongo is modern and fast, and really, really easy to get to know. Here's all you need to do to start the Mongo server:
mongod --dbpath /path/to/dir/w/dbs
Then, to get the console (which is just a basic JavaScript console), invoke mongo. And using it from Ruby is just as simple:
require 'mongo'

db = Mongo::Connection.new['somedb']
db['stuff'].find.to_a #=> []
db['stuff'].insert({id: 'abcd', name: 'Swedish Chef', says: 'Bork bork bork!'})
db['stuff'].find.to_a #=> [{id: 'abcd', name: 'Swedish Chef', says: 'Bork bork bork!'}]
db['stuff'].update({id: 'abcd'}, {'$set' => {says: 'Bork bork bork!!!! (Bork)!'}})
db['stuff'].find.to_a #=> [{id: 'abcd', name: 'Swedish Chef', says: 'Bork bork bork!!!! (Bork)!'}]