Identify patient visits to specific ICUs (e.g. PICU, NICU, CICU) - hl7-fhir

I'm attempting to find patients who have visited specific ICUs at a hospital (PICU, NICU, or CICU).
I've looked at the Encounter resource and the Location resource but am not seeing anything that would:
Clearly identify a visit to an ICU and specify what type of ICU it was (PICU, CICU, NICU, etc.)
Search for patients who have visited those ICUs, possibly during a certain timeframe.
Find the admission and discharge dates for the ICU stay.
The closest I've found is serviceType on the Encounter resource, which has the following options:
Intensive Care Medicine
Paediatric Intensive Care Medicine
Neonatology & Perinatology
However, these are too general and don't directly provide the type of ICU or the admission/discharge dates for the ICU stay.
Any suggestions on how to accomplish these goals are welcome. Thanks!

I'm assuming this is a FHIR R4 question:
To capture the details of the services available at a location, you should use the HealthcareService resource. You would then search for all Encounters at a Location where a specific kind of service is available.
You mentioned that Encounter.serviceType offers insufficient options for your problem. That ValueSet is only an example binding, so you can replace it with a value set for your specific use case.
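As a rough sketch of what that search could look like over the REST API, assuming a FHIR R4 server and a hypothetical PICU Location with id "picu-1" (the base URL and date range are placeholders too):

```python
import requests

base = "https://hapi.fhir.org/baseR4"  # placeholder FHIR R4 endpoint

# Encounters at a specific ICU Location within a timeframe. The standard
# "date" search parameter targets Encounter.period, which carries the
# admission and discharge dates of the stay.
params = {
    "location": "Location/picu-1",             # hypothetical PICU Location
    "date": ["ge2021-01-01", "le2021-12-31"],  # timeframe of interest
    "_include": "Encounter:patient",           # pull the patients along
}
bundle = requests.get(f"{base}/Encounter", params=params).json()

for entry in bundle.get("entry", []):
    resource = entry["resource"]
    if resource["resourceType"] == "Encounter":
        print(resource.get("period"), resource.get("subject", {}).get("reference"))
```

If the ICU type is modelled as a code from your replacement value set for Encounter.serviceType, the "service-type" search parameter can be used instead of (or in addition to) "location".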

Related

Optimize Google Places API Query for Prominent Parks, Mountains, Conservation Areas

First post on Stack Overflow.
I am using the Google API to sort images taken while traveling into organized folders, append tags, and rename files with relevant information. I have my code working well but am not always happy with the results. I want to be able to focus my query results on major tourist attractions such as National Parks, Ski Resorts, Beaches, etc. The problem I am finding is that the "rankby=prominence" setting and the "radius" parameter are not giving satisfactory results. Here is a typical query for Zion National Park.
https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=37.269486111111,-112.948141666667&rankby=prominence&radius=50000&type=natural_feature,tourist_attraction,point_of_interest&keyword=&key=MYAPIKEY
The most prominent result is Springdale, which is the town where you enter the park; Zion National Park is listed much further down in the results. What my code does is use the lat and lon extracted from the EXIF data and make a Google API Nearby Search request to find the Place ID for where the photo was taken. It then makes another API request for Place Details, using the place_id provided by the previous step, to cut down on the information I need to parse.
https://maps.googleapis.com/maps/api/place/details/json?place_id=ChIJ8R5RCzaNyoARegi3rqVkstk&fields=name,address_component&key=MYAPIKEY
I can force the Nearby Search to return a national park by searching for "National Park" in the keyword parameter, but that limits my project to National Park results only, since the keyword field can only accept one string.
I would like part of my query to return the most prominent tourist attraction at the general level, i.e. Zion National Park, Yosemite National Park, etc., so I can sort images into folders named for the general destination, while another part of the query provides the exact location, e.g. a specific trail or lookout. The problem is that the Google API sees these specific locations ("Trail", "Lookout") as tourist attractions, parks, establishments, etc. as well, so it chooses those first.
What I need help with is figuring out whether there is a better way to structure my query to return the high-level name of the major park. From my understanding, the type field only searches on the first type even if there are more in the list, and the keyword field can only accept one string as well, making it impossible for one phrase to capture all major destinations at a high level.
Perhaps it needs to be done with more queries but I am trying to limit the number of queries to stay inside the free quota. Maybe it will just take a long time to fully sort my files.
I have read through and implemented the Google API structure. I'm hoping someone can provide a more detailed query structure or a method for parsing out truly prominent locations, rather than relying on Google's interpretation of prominence, which can be affected by user ratings, etc., and is not always accurate.
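For reference, a minimal sketch of the two-step flow described above (MYAPIKEY, the coordinates, and the field list are the placeholders from the question):

```python
import requests

API_KEY = "MYAPIKEY"  # placeholder
lat, lon = 37.269486111111, -112.948141666667  # Zion National Park

# Step 1: Nearby Search to find a place_id for the photo's coordinates.
nearby = requests.get(
    "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
    params={
        "location": f"{lat},{lon}",
        "rankby": "prominence",
        "radius": 50000,
        "type": "natural_feature",  # only a single type is honored
        "key": API_KEY,
    },
).json()
place_id = nearby["results"][0]["place_id"]

# Step 2: Place Details, restricted to a few fields to limit parsing.
details = requests.get(
    "https://maps.googleapis.com/maps/api/place/details/json",
    params={"place_id": place_id, "fields": "name,address_component", "key": API_KEY},
).json()
print(details["result"]["name"])
```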

How do I access h2o xgb model input features after saving a model to disk and reloading it?

I'm using h2o's xgboost implementation in Python. I've saved a model to disk and I'm trying to load it later for analysis and prediction. I'm trying to access the list of input features or, even better, the list of features the model actually used (excluding the ones it decided not to use). The usual advice is to use the varimp function to get the variable importances, and while this does exclude features that aren't used in the model, it actually gives you the importance of the intermediate features created by one-hot encoding (OHE) the categorical features, not the original categorical feature names.
I've searched for how to do this and so far I've found the following but no concrete way to do this:
Someone asking something very similar to this and being told the feature had been requested in Jira.
Said Jira ticket, which has been marked resolved, but which I believe says this was implemented but not made customer-visible.
A similar ticket requesting this feature (original categorical feature importance) for variable importance heatmaps, which is still open.
Someone else who found an unofficial way to access the columns via model._model_json['output']['names'], but that doesn't give the features that weren't used by the model, and they were told to use a different method that doesn't work if you have saved the model to disk and reloaded it (which I am doing).
The only option I see is to take the varimp features, split on the period character to break up the OHE feature names, take the first part of each split, and then run a set over everything to get the unique column names, as sketched below. But I'm hoping there's a better way to do this.
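A minimal sketch of that workaround, assuming the saved-model path is a placeholder and that the OHE feature names have the form "column.level":

```python
import h2o

h2o.init()
model = h2o.load_model("/path/to/saved_xgb_model")  # hypothetical path

# varimp(use_pandas=False) returns (variable, relative_importance,
# scaled_importance, percentage) tuples for the features the model used.
used_ohe_features = [row[0] for row in model.varimp(use_pandas=False)]

# Collapse "column.level" OHE names back to the original column names.
original_columns = {name.split(".")[0] for name in used_ohe_features}
print(sorted(original_columns))
```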

What is the difference between a concept and a label in XBRL, and do all listed companies share the same US GAAP labels?

Let me show Tesla's company facts data from the SEC's RESTful API:
https://data.sec.gov/api/xbrl/companyfacts/CIK0001318605.json
You can see all the labels under 'facts' → 'us-gaap', such as:
AccountsAndNotesReceivableNet
AccountsPayableCurrent
AccountsReceivableNetCurrent
AccretionAmortizationOfDiscountsAndPremiumsInvestments
Do all listed companies share the same us-gaap label names?
Can every company create its own customized us-gaap label names?
A concept in XBRL is "a taxonomy element that provides the meaning for a fact" in the official definition:
https://www.xbrl.org/guidance/xbrl-glossary/
What is the difference between a concept in XBRL and a us-gaap label?
The short answer is yes.
First, a small detail:
AccountsAndNotesReceivableNet
AccountsPayableCurrent
AccountsReceivableNetCurrent
AccretionAmortizationOfDiscountsAndPremiumsInvestments
These are not labels; they are the local names of concepts. Labels are something different: human-readable strings. For example, "Accounts and notes receivable, net" would be a label. Labels are attached via the label linkbase.
The more complete names (called QNames) of these concepts are:
us-gaap:AccountsAndNotesReceivableNet
us-gaap:AccountsPayableCurrent
us-gaap:AccountsReceivableNetCurrent
us-gaap:AccretionAmortizationOfDiscountsAndPremiumsInvestments
where the us-gaap prefix is bound to the US GAAP namespace, which changes every year and is, for 2021:
http://fasb.org/us-gaap/2021-01-31
This makes explicit that these concepts are not maintained by companies, but by the Financial Accounting Standards Board. Thus, all companies filing their reports into the EDGAR system share these concepts.
Two important points:
Companies are allowed to create their own concepts. These are called extension concepts. You will recognize them because they are in a company namespace, not in the US GAAP namespace. Their prefix will not be us-gaap, but some company-specific prefix. These concepts are unique to each company.
An example for Tesla is:
tsla:AccruedAndOtherCurrentLiabilities
Concepts in the US GAAP taxonomy are updated every year, i.e., some get added, some get deprecated, some are removed. However, the FASB tries to maintain consistency across years, i.e., a concept will not suddenly change its semantics from one year to the next.
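As an illustration, a small sketch that inspects the company-facts JSON from the question to show concept local names next to their human-readable labels (assumptions: the SEC API wants a User-Agent header with your own contact details, and each us-gaap entry carries a "label" field):

```python
import requests

url = "https://data.sec.gov/api/xbrl/companyfacts/CIK0001318605.json"
headers = {"User-Agent": "you@example.com"}  # use your own contact info
facts = requests.get(url, headers=headers).json()["facts"]

# Keys under facts["us-gaap"] are concept local names; each entry also
# carries a human-readable label for that concept.
for local_name, concept in list(facts["us-gaap"].items())[:4]:
    print(f"{local_name}  ->  {concept.get('label')}")
```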

HL7 FHIR mark resources as anonymized

I am trying to map an existing domain into HL7 FHIR.
So far it has been pretty easy to find FHIR resources that more or less represent the same data and can be used for that purpose. But now I am running into a problem that I am not sure how to solve.
The existing domain allows data to be anonymized depending on the user's access level, e.g. a patient's name or address might be removed and marked as anonymized. Other data will be pseudonymised: for example, a birthdate in 1980 will be replaced with 01.01.1980, and an age of 37 will be replaced with a category of 30-40.
So I am unsure how to integrate that into the FHIR domain. I was thinking I could create an extension holding a boolean indicating whether a value was anonymized, and always replace or remove the original value. This might work, but I will run into big problems when the anonymized value is of a different type than the original value (e.g. an age replaced by a range of values).
Is that even a valid approach? I thought this might be a common problem, but I could not find any examples where people described methods of marking data as altered. Unfortunately, the documentation at http://build.fhir.org/extensibility-registry.html does not contain anything that would help my case.
You can use security labels for this purpose (Resource.meta.security). Take a look at REDACTED and SUBSETTED in the security label value set: https://www.hl7.org/fhir/valueset-security-labels.html
If you need to convey a data type other than the one allowed by the resource (e.g. wanting to convey a range rather than a birthdate), you'd need to use an extension. (Note that dates are valid even if you only include the year.)
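A minimal sketch of a resource carrying such labels, assuming the codes come from the v3-ObservationValue code system as used in the value set linked above (the year-only birthdate illustrates the pseudonymisation case):

```python
# A Patient resource marked as redacted and subsetted via meta.security.
patient = {
    "resourceType": "Patient",
    "meta": {
        "security": [
            {
                "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
                "code": "REDACTED",   # some content has been removed
            },
            {
                "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
                "code": "SUBSETTED",  # the resource is an incomplete subset
            },
        ]
    },
    "birthDate": "1980",  # year-only is a valid FHIR date
}
```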

Programmatically find common European street names

I am in the middle of designing a web form for German and French users. Within this form, the users would have to type street names several times.
I want to minimize the annoyance to the user, and offer autocomplete feature based on common French and German street names.
Any idea where I can find a royalty-free list?
Would your users have to type the same street name multiple times? Because you could easily prevent this by coding something that prefilled the fields.
Another option could be to use your user database as a resource. Query it for all the available street names entered by your existing users and use that to generate suggestions.
Of course this would only work if you have a considerable number of users.
[EDIT] You could have a look at OpenStreetMap with their Planet.osm dumps (or have a look here for a dump containing data for just Europe). That is basically the OSM database with all the map information they have, including street names. It's all in an XML format, and streets seem to be stored as Ways. There are tools (e.g. Osmosis) to extract the data and put it into a database, or you could write something to plough through the data and filter out the street names for your database.
Start with http://en.wikipedia.org/wiki/Category:Streets_in_Germany and http://en.wikipedia.org/wiki/Category:Streets_in_France. You may want to verify the Wikipedia copyright isn't more protective than would be suitable for your needs.
Edit (merged from my own comment): Of course, to answer the "programmatically" part of your question: figure out how to spider and scrape those Wikipedia category pages. The polite thing to do would be to cache it, rather than hitting it every time you need to get the street list; refreshing once every month or so should be sufficient, since the information is unlikely to change significantly.
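A minimal sketch of that scraping approach, assuming the category members sit inside the div with id "mw-pages" on those pages (and that you cache the result as suggested):

```python
import requests
from bs4 import BeautifulSoup

url = "http://en.wikipedia.org/wiki/Category:Streets_in_Germany"
html = requests.get(url, headers={"User-Agent": "street-list-bot/0.1"}).text

# Category members are listed as links inside the "mw-pages" div.
pages = BeautifulSoup(html, "html.parser").find("div", id="mw-pages")
street_articles = [a.get_text() for a in pages.find_all("a")] if pages else []
print(street_articles[:10])
```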
You could start by pulling names via the Google API (just find the lat/long outer bounds of, e.g., Paris and go to the center), but since Google limits API use, it would probably take very long to do it.
I had once contacted City of Bratislava about the street names list and they sent it to me as XLS. Maybe you could try doing that for your preferred cities.
I like Tom van Enckevort's suggestion, but I would be a little more specific than just looking inside the Planet.osm links, because most of them require some tool to deal with the supported formats (PBF, OSM XML, etc.).
In fact, take a look at the following link
http://download.gisgraphy.com/openstreetmap/
The files there are all in .txt format and if it's only the street names that you want to use, just extract the second field (name) and you are done.
As an FYI, I didn't have any use for the French files in my project, but mining the German files resulted (after normalization) in a little more than 380K unique entries (~6 MB in size).
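A minimal sketch of that extraction, assuming the dump is tab-separated with the street name in the second field (the filename is a placeholder):

```python
street_names = set()
with open("DE_streets.txt", encoding="utf-8") as f:  # hypothetical dump file
    for line in f:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > 1 and fields[1]:
            street_names.add(fields[1])

print(f"{len(street_names)} unique street names")
```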
@dusoft might be onto something: maybe someone at a government level can help? I don't think a simple list of street names can be copyrighted, or that any royalties can be charged. If that is the case, maybe you could even scrape some mapping data from something like a TomTom?
The "Deutsche Post" offers a list with all street names in Germany:
http://www.deutschepost.de/dpag?xmlFile=link1015590_3877
They don't mention the price, but I reckon it's not for free.
