How to store User Fitness / Fitness Device data in FHIR? - hl7-fhir

We are currently in the process of evaluating FHIR for use as part of our medical record infrastructure. For the EHR data (Allergies, Visits, Rx, etc.), HL7 FHIR seems to have an appropriate mapping.
However, lots of data that we deal with is related to personal Fitness - think Fitbit or Apple HealthKit:
Active exercise (aerobic or workout): quantity, energy, heart-rate
Routine activities such as daily steps or water consumption
Sleep patterns/quality (an odd case of overlapping states within the same timespan)
Other user-provided: emotional rating, eating activity, women's health, UV
While there is the Observation resource, it still seems best fitted (!) to the EHR domain. In particular, the user fitness data is not collected during a visit and is not human-verified.
The goal is to find a "standardized FHIR way" to model this sort of data.
Use an Observation (?) with Extensions? Profiles? Domain-specific rules?
FHIR allows extraordinary flexibility, but each extension/profile may increase the cost of being able to exchange the resource directly later.
An explanation of the appropriate use of a FHIR resource - including when to extend, use profiles/tags, or encode differentiation via coded values - would be useful.
Define a new/custom Resource type?
FHIR DSTU2 does not provide a way to define a new Resource type. Wanting to do so may indicate a misunderstanding of the role of resources - logical concept vs. implementation interface.
Don't use FHIR at all? Don't use FHIR except on summary interchanges?
It could also be the case that FHIR is not suitable for our messaging format. But would it be any "worse" to go FHIRa <-> FHIRb than x <-> FHIRc when dealing with external interoperability?
The FHIR Registry did not seem to contain any User-Fitness specific Observation Profiles and none of the Proposed Resources seem to add appropriate resource-refinements.
At the end of the day, it would be nice to be able to claim that we can exchange User Fitness data as a FHIR stream with minimal or no translation, i.e. in a "standard manner".

Certainly the intent is to use Observation, and there are lots of projects already doing this.
There's no need for extensions; it's just a straightforward use. Note that this - "In particular, the user fitness data is not collected during a visit and is not human-verified" - doesn't matter. There's lots of EHR data of dubious provenance...
You just need to use the right codes, and bingo, it all works. I've provided a bit more detail in the answer here:
http://www.healthintersections.com.au/?p=2487
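For example, a daily step count can be expressed as a plain Observation with a suitable LOINC code. Below is a minimal sketch of the JSON, built in Python for illustration; the field names follow the DSTU2 Observation resource, but the LOINC code, the unit, and all ids/references are assumptions to verify against your own profiles.

import json

# A minimal fitness Observation sketch (DSTU2-style field names).
# The LOINC code 41950-7 ("Number of steps in 24 hour Measured") and the
# unit are assumptions to verify; all ids/references are made up.
step_count = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "41950-7",
            "display": "Number of steps in 24 hour Measured"
        }]
    },
    "subject": {"reference": "Patient/example"},
    "effectivePeriod": {
        "start": "2016-01-01T00:00:00Z",
        "end": "2016-01-02T00:00:00Z"
    },
    "valueQuantity": {"value": 8734, "unit": "steps"},
    "device": {"reference": "Device/fitness-tracker-example"}
}
print(json.dumps(step_count, indent=2))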

Related

Event model or schema in event store

Events in an event store (event sourcing) are most often persisted in a serialized format, with versions to represent a change in the model or schema for an event type. I haven't been able to find good documentation showing the actual model or schema for an actual event (often a data table in the event store schema if using an RDBMS), but I understand that ideally it should be generic.
What are the most basic fields/properties that should exist in an event?
I've contemplated using json-api as a specification for my events but perhaps that's too "heavy". The benefits I see are flexibility and maturity.
Am I heading down the "wrong path"?
Any well defined examples would be greatly appreciated.
Don't overlook forward and backward compatibility.
You should plan to review Greg Young's book on event versioning; it doesn't directly answer your question, but it does cover a lot about the basics of interpreting an event.
Short answer: pretty much everything is optional, because you need to be able to change it later.
You should also review Hohpe's Enterprise Integration Patterns, in particular his work on messaging, which details a lot of cases you may care about.
de Graauw's Nobody Needs Reliable Messaging helped me to understand an important point.
To summarize: if reliability is important on the business level, do it on the business level.
So while there are some interesting bits of meta data tracking that you may want to do, the domain model is really only going to look at the data; and that is going to tend to be specific to your domain.
You also have the fun that the representation of events that you use in the service that produces them may not match the representation that it shares with other services, and in particular may not be the same message that gets broadcast.
I worked through an exercise trying to figure out the minimum amount of information a subscriber needs in order to look at an event and decide whether it cares. My answers were: an id (have I seen this specific event before?), a token that tells you the semantic meaning of the message (is this something I care about?), and a location (URI) to get a richer representation if it is something I care about.
But outside of the domain -- for example, when you are looking at the system as a whole trying to figure out what is going on, having correlation identifiers and causation identifiers, time stamps, signatures of the source location, and so on stored in a consistent location in the meta data can be a big help.
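To make that concrete, here is a minimal envelope sketch in Python. Every field name is illustrative rather than taken from any particular event-store product; the first three fields are the "minimum for a subscriber" described above, the rest is the kind of metadata that helps when looking at the system as a whole.

from dataclasses import dataclass, field
from typing import Optional
import datetime
import uuid

@dataclass(frozen=True)
class EventEnvelope:
    event_id: str                         # have I seen this specific event before?
    event_type: str                       # semantic token: is this something I care about?
    detail_uri: str                       # where to fetch a richer representation
    correlation_id: Optional[str] = None  # which conversation does this belong to?
    causation_id: Optional[str] = None    # which message caused this one?
    occurred_at: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

event = EventEnvelope(
    event_id=str(uuid.uuid4()),
    event_type="order.placed.v1",  # a versioned type name eases later schema evolution
    detail_uri="https://example.com/orders/42/events/1",
)
print(event)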
Just modelling with basic types that map to JSON, as you would for an API, can go a long way.
You can spend a lot of time generating overly complex models if you throw too much tooling at it - things like Apache Thrift and/or Protocol Buffers (or derived things) will provide all sorts of IDL mechanisms for you to generate incidental complexity with.
In .NET land and on many other platforms, if you namespace the types you can do various projections from them.
Personally, I've used records and DUs in F# as a design and representation tool:
you get IntelliSense, syntax highlighting, and types you can use from F# or C# for free
if someone wants to look, types.fs has all they need

Representing PCP/GP History in FHIR

Background:
I have been digging into the FHIR DSTU2 specification to determine the most appropriate resource(s) to represent a particular patient's historical list of GPs/PCPs. I am struggling to find an ideal resource to house this information.
The primary criterion I have been using to identify the proper resource is that it must associate a patient with a practitioner for a period of time.
Question:
What is the proper resource to represent historical PCP/GP information that can be tied back to a patient resource?
What I have explored:
Here is a list of my possible picks thus far. I paired each resource type with my thought process on why I'm not confident about using it:
Episode of Care - This seems to have the most potential. It has the associations between a patient and a set of doctors for a given time period. However, when I read its description and use-case scenarios, it seems like I would be bastardizing its usage to fit my needs, since it embodies a period of time where a group of related health care activities were performed.
Group - Very generic structure that could fit based on its definition. However, I want to rule out other options before taking this approach.
Care Plan - Similar to the Episode of Care rationale. It seems like a bastardization to use this just to house PCP/GP history information. The scope of this is much bigger and patient/condition-centric.
I understand that there may not be a clear answer, and thus the question runs the risk of becoming subjective; I apologize in advance if this is the case. I am just wondering if anyone can provide concrete evidence of where this information should be stored.
Thanks!
That's not a use-case we've really encountered before. The best possibility is to use the new CareTeam resource (we're splitting CareTeam out of EpisodeOfCare and CarePlan) - take a look at the continuous integration build for a draft.
If you need to use DSTU 2, you could just use Patient.careProvider and rely on the resource "history" to see changes over time. Or use Basic to mimic the new CareTeam resource.
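For the DSTU 2 route, here is a minimal sketch of the Patient.careProvider approach, expressed as a Python dict for illustration; the careProvider field is per the DSTU 2 Patient resource, but all ids and references are made up.

# Patient.careProvider holds the *current* nominated providers; prior GPs/PCPs
# would be recovered from the resource's version history, e.g.
#   GET [base]/Patient/example/_history
patient = {
    "resourceType": "Patient",
    "id": "example",
    "careProvider": [
        {"reference": "Practitioner/dr-jones", "display": "Dr. Jones (current GP)"}
    ]
}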

Is there any difference between HL7 U.S.A and U.K standards?

Is there any difference between the HL7 USA and UK standards? If there is, what are the differences?
I'm not aware of any country-specific HL7 features, nor did a Google query for site:hl7.org country specific reveal anything eye-catching. But that does not mean there are no differences.
I guess your best bet would be to narrow your question down to a particular set of messages, register as a member at hl7.org, and ask this question through their internal mailing list. (An even better bet might be to talk to some friendly competitor and ask for the lessons learned.)
In our country-specific HL7 community the membership is also paid, but the benefits of becoming a member are close to zero from the developer's perspective. It's prestigious to be a member and it looks good on the business card, but all you get is access to downloadable specifications (and most of them are free for about a year anyway).
In the software I was working on, we exchanged only a very small set of HL7 messages - namely ADT^A04, ADT^A05, ADT^A08, ADT^A18, ADT^A23, ADT^A34, ORM^O01, and ACK - in two different countries, and were considering adding a third, without needing any country-specific features except the convention used for identifying a patient by his/her security number. The number format was country-specific, but it was handled by upper software layers; the topmost customization was the number input mask and the user-input validation business rules at the GUI level.
What we did need was installation-specific or 3rd-party-system-specific message customization. For this customization you will need, rather, the list of vendors of the hardware and software your system will talk to. Once you have it, check their HL7 conformance statements.
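To illustrate where that country-specific identifier convention lives, here is a hedged sketch of an ADT^A04 skeleton in Python. The message is illustrative, not a complete conformant example, and all field values are made up.

# Only PID-3 (the patient identifier) carried a country-specific convention;
# the rest of the message stayed the same in every country we deployed to.
adt_a04 = "\r".join([
    "MSH|^~\\&|OUR_APP|OUR_FACILITY|THEIR_APP|THEIR_FACILITY|20160101120000||ADT^A04|MSG00001|P|2.3",
    "EVN|A04|20160101120000",
    # PID-3: the national patient number; its format/check-digit rules are
    # the part that varies per country.
    "PID|1||1234567890^^^NATIONAL^NI||Doe^John||19750101|M",
])
print(adt_a04.replace("\r", "\n"))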

What is a use-case? How to identify a use-case? [closed]

The question is quite generic. What points should be kept in mind to identify a valid use case? How should one approach a use case?
A use case identifies, with specificity, a task or goal that users will be able to accomplish using a program. It should be written in terms that users can understand.
Wikipedia's description is overly formal. I'll dig through my other texts shortly.
In contrast, the original wiki's article is much more accessible.
An early article by Alistair Cockburn, cited positively by The Pragmatic Programmer, contains a good template.
This question, from just a few days ago, is very closely related, but slightly more specific.
The definition of use case is simple:
An actor's interactions with a system to create something of business value.
More formally: a sequence of transactions performed by a system that yield a measurable set of values for a particular actor.
They're intended to be very simple: Actor, Interaction, Value. You can add some details, but not too many.
Using use cases is easy. Read this: http://www.gatherspace.com/static/use_case_example.html
The biggest mistake is overlooking the interaction between actor and system. A use case is not a place to write down long, detailed, technical algorithm designs. A use case is where an actor does something.
People interact with systems so they can take actions (place orders, approve billing, reject an insurance claim, etc.). To take an action, they first make a decision. To make a decision, they need information.
Information
Decision
Action
These are the ingredients in the "Interaction" portion of a use case.
A valid use case could describe:
intended audience / user
pre-requisites (i.e. must be logged in, etc.)
expected outcome(s)
possible points of failure
workflow of user
From Guideline: Identify and Outline Actors and Use Cases by the Eclipse people:
Identifying actors
Find the external entities with which the system under development must interact. Candidates include groups of users who will require help from the system to perform their tasks and run the system's primary or secondary functions, as well as external hardware, software, and other systems. Define each candidate actor by naming it and writing a brief description. Include the actor's area of responsibility and the goals that the actor will attempt to accomplish when using the system. Eliminate actor candidates who do not have any goals.
These questions are useful in identifying actors:
Who will supply, use, or remove information from the system?
Who will use the system?
Who is interested in a certain feature or service provided by the system?
Who will support and maintain the system?
What are the system's external resources?
What other systems will need to interact with the system under development?
Review the list of stakeholders that you captured in the Vision statement. Not all stakeholders will be actors (meaning, they will not all interact directly with the system under development), but this list of stakeholders is useful for identifying candidates for actors.
Identifying use cases
The best way to find use cases is to consider what each actor requires of the system. For each actor, human or not, ask:
What are the goals that the actor will attempt to accomplish with the system?
What are the primary tasks that the actor wants the system to perform?
Will the actor create, store, change, remove, or read data in the system?
Will the actor need to inform the system about sudden external changes?
Does the actor need to be informed about certain occurrences, such as unavailability of a network resource, in the system?
Will the actor perform a system startup or shutdown?
Understanding how the target organization works and how this information system might be incorporated into existing operations gives an idea of the system's surroundings. That information can reveal other use-case candidates.
Give each use case a unique name and a brief description that clearly describes its goals. If a candidate use case does not have goals, ask yourself why it exists, and then either identify a goal or eliminate the use case.
Outlining Use Cases
Without going into details, write a first draft of the flow of events for the use cases identified as being of high priority. Initially, write a simple step-by-step description of the basic flow of the use case. The step-by-step description is a simple ordered list of interactions between the actor and the system. For example, the description of the basic flow of the Withdraw Cash use case of an automated teller machine (ATM) would be something like this:
The customer inserts a bank card.
The system validates the card and prompts the person to enter a personal identification number (PIN).
The customer enters a PIN.
The system validates the PIN and prompts the customer to select an action.
The customer selects Withdraw Cash.
The system prompts the customer to choose which account.
The customer selects the checking account.
The system prompts for an amount.
The customer enters the amount to withdraw.
The system validates the amount (assuming sufficient funds), and then issues cash and a receipt.
The customer takes the cash and receipt, and then retrieves the bank card.
The use case ends.
As you create this step-by-step description of the basic flow of events, you can discover alternative and exceptional flows. For example, what happens if the customer enters an invalid PIN? Record the name and a brief description of each alternate flow that you identify.
Representing relationships between actors and use cases
The relationship between actors and use cases can be captured, or documented. There are several ways to do this. If you are using a use-case model on the project, you can create use-case diagrams to show how actors and use cases relate to each other. Refer to Guideline: Use-Case Model for more information.
If you are not using a use-case model for the project, make sure that each use case identifies the associated primary and secondary actors.

How do you perform address validation? [closed]

Is it even possible to perform address (physical, not e-mail) validation? It seems like the sheer number of address formats, even in the US alone, would make this a fairly difficult task. On the other hand it seems like a task that would be necessary for several business requirements.
Here's a free and sort of "outside the box" way to do it. It's not 100% perfect, but it should reject blatantly non-existent addresses.
Submit the entire address to Google's geocoding web service. This service attempts to return the exact coordinates of the location you feed it, i.e. latitude and longitude.
In my experience, if the address is invalid you will get a result of 602 from the service. There's definitely a possibility of false positives or false negatives, but used in conjunction with other consistency checks it could be useful.
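As a side note on the mechanics: the 602 code belongs to Google's long-retired v2 API. Against the current Google Geocoding JSON endpoint, the same idea looks roughly like the sketch below (you need your own API key; "ZERO_RESULTS" stands in for the old 602).

import requests

def address_geocodes(address: str, api_key: str) -> bool:
    """Rough plausibility check: does the geocoder find anything at all?"""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": api_key},
        timeout=10,
    )
    data = resp.json()
    # "ZERO_RESULTS" means the address is unknown; treat anything but a
    # non-empty "OK" response as a failed check.
    return data.get("status") == "OK" and bool(data.get("results"))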
(Yahoo's geocoding web service, on the other hand, will return the coordinates of the center of the town if the town exists but the rest of the address is bogus. Potentially useful, as long as you pay close attention to the "precision" field in the result.)
There are a number of good answers in here but most of them make the assumption that the user wants an "API" solution where they must write code to connect to a 3rd-party service and/or screen scrape the USPS. This is all well and good, but should be factored into the business requirements and costs associated with the implementation and then weighed against the desired benefits.
Depending upon the business requirements and the way that the data is received into the system, a real-time address processing solution may be the best bet. If a real-time solution is required, you will want to consider the license agreements and technical limitations of the Google Maps/Bing/Yahoo APIs. They typically limit the number of calls you can make each day. The USPS Web Tools API is similar; in addition, they restrict how/why you can use their system and how you are allowed to use the data thereafter.
At the same time, there are a handful of great service providers that can easily process a static list of addresses. Essentially, you give the service provider a CSV file or Excel file, they clean it up and get it back to you. It's a one-time deal with no long-term commitment or obligation—usually.
Full disclosure: I'm the founder of SmartyStreets. We do address verification for addresses within the United States. We can easily CASS-certify a list, and we also offer an address verification web service API. We have no hidden fees, contracts, or anything. You use our service until you no longer need it, and you can walk away. (Unlike cell phone companies that require a contract.)
USPS has an address cleaner online, which someone has screen-scraped into a poor man's web service. However, if you're doing this often enough, it'd be a better idea to apply for a USPS account and call their web service directly.
I will refer you to my blog post - A lesson in address storage - where I go into some of the techniques and algorithms used in the process of address validation. My key thought is: "Don't be lazy with address storage, it will cause you nothing but headaches in the future!"
Also, there is another StackOverflow question on this topic entitled How should international geographic addresses be stored in a relational database.
In the course of developing an in-house address verification service at a German company I used to work for, I came across a number of ways to tackle this issue. I'll do my best to sum up my findings below:
Free, Open Source Software
Clearly, the first approach anyone would take is an open-source one (like openstreetmap.org), which is never a bad idea. But whether or not you can really put this to good and reliable use depends very much on how much you need to rely on the results.
Addresses are an incredibly variable thing. Verifying U.S. addresses is not an easy task, but it is bearable; once you're going for Europe, though, especially the U.K. with its extensive postal code system, the open-source approach will simply lack data.
Web Services / APIs
Enterprise-Class Software
Money gets it done, obviously. But not every business or developer can spend ~$0.15 per address lookup (that's $150 for 1,000 API requests) - a very expensive business model the vast majority of address validation APIs have implemented.
What I ended up integrating: streetlayer API
Since I was not willing to take on the programmatic approach of verifying address data manually, I finally came to the conclusion that I needed an API with a price tag that would not make my boss want to fire me, and that would still deliver solid and reliable international verification results.
Long story short, I ended up integrating an API built by apilayer, called "streetlayer API". I was easily convinced by a simple JSON integration, surprisingly accurate validation results and their developer-friendly pricing. Also, 100 requests/month are entirely free.
Hope this helps!
I have used the services of http://www.melissadata.com. Their "address object" works very well. It's pricey, yes. But when you consider the costs of writing your own solution, the cost of dirty data in your application, returned mailers, lost sales, and the like, the costs can be justified.
For US-based address data my company has used GeoStan. It has bindings for C and Java (and we created a Perl binding). Note that it is a commercial product and isn't cheap. It is quite fast, though (~300 addresses per second), and offers features like CASS certification (USPS bulk mail discount), DPV (Delivery Point Validation) flagging, and LON/LAT geocoding.
There is a Perl module Geo::PostalAddress, but it uses heuristics and doesn't have the other features mentioned for GeoStan.
Edit: some have mentioned "doing it yourself". If you do decide to do this, a good source of information to start with is the US Census TIGER data set, which contains a lot of information about the US, including address information.
As seen on reddit:
// URL-encode the address so it can be passed as a query-string parameter
$address = urlencode('1600 Pennsylvania Avenue, Washington, DC');
// Ask Yahoo's geocoding service (since retired) for a JSON response (flags=J)
$json = json_decode(file_get_contents("http://where.yahooapis.com/geocode?q=$address&flags=J"));
// Dump the decoded result, coordinates included, for inspection
print_r($json);
The Fixaddress.com service provides the following:
1) Address validation.
2) Address correction.
3) Address spelling correction.
4) Correction of phonetic mistakes in addresses.
Fixaddress.com uses USPS and TIGER data as reference data.
For more detail, visit the link below:
http://www.fixaddress.com/
One area where address lookups have to be performed reliably is for VOIP E911 services. I know companies reliably using the following services for this:
Bandwidth.com 9-1-1 Access API MSAG Address Validation
MSAG = Master Street Address Guide
https://www.bandwidth.com/9-1-1/
SmartyStreets US Street Address API
https://smartystreets.com/docs/cloud/us-street-api
There are companies that provide this service. Service bureaus that deal with mass mailing will scrub an entire mailing list so that it's in the proper format, which results in a discount on postage. The USPS sells databases of address information that can be used to develop custom solutions. They also have lists of approved vendors who provide this kind of software and service.
There are some (but not many) packages that have APIs for hooking address validation into your software.
However, you're right that it's a pretty nasty problem.
http://www.usps.com/ncsc/ziplookup/vendorslicensees.htm
As mentioned, there are many services out there. If you are looking to truly validate the entire address, then I highly recommend going with a web-service-based solution to ensure that changes can quickly be recognized by your application.
In addition to the services listed above, webservicex.net has this US Address Validation service: http://www.webservicex.net/WCF/ServiceDetails.aspx?SID=24
We have had success with Perfect Address.
Their database has all the US street names and street number ranges. It also acts as a pretty decent parser for free-form address fields, if you are lucky enough to have that kind of data.
Validating that it is a valid address is one thing.
But if you're trying to validate that a given person lives at a given address, your only almost-guarantee would be a test mail to the address, and even that is not certain if the person is organised or knows somebody at that address.
Otherwise people could just specify an arbitrary random address which they know exists, and it would mean nothing to you.
The best you can do for immediate results is request that the user send a photographed/scanned copy of the head of their bank statement or some other proof of recent residence, because at least then they have to work harder to forge it, and forgeries of such things show up easily with a basic level of image forensic analysis.
There is no global solution. For any given country it is at best rather tricky.
In the UK, the Post Office controls postal addresses and can provide (at a cost) address information for validation purposes.
Government agencies also keep an extensive list of addresses, and these are centrally collated in the NLPG (National Land and Property Gazetteer).
Actually validating against these lists is very difficult. Most people don't even know exactly how their address is held by the Post Office. Some businesses don't even know what number they are on a particular street.
Your best bet is to approach a company that specialises in this kind of thing.
Yahoo also has a Placemaker API. It is only good for locations, but it has a universal ID for all world locations.
It looks like there is no equivalent standard in the ISO list.
You could also try SAP's Data Quality solutions, which are available both as a server platform for processing a large number of requests and as an embeddable SDK if you want to run it in-process with your application. We use it in our application and it's very robust and scalable.
NAICS.com is coming out with an API that will add all kinds of key business data including street address. This would happen on the fly as your site's forms are processed. https://www.naics.com/business-intelligence-api/
You can try Pitney Bowes' "IdentifyAddress" API, available at https://identify.pitneybowes.com/
The service analyses and compares the input addresses against known address databases around the world to output a standardized detail. It corrects addresses, adds missing postal information, and formats them using the format preferred by the applicable postal authority. It also uses additional address databases so it can provide enhanced detail, including address quality, type of address, transliteration (such as from Chinese Kanji to Latin characters), and whether an address is validated to the premise/house number, street, or city level of reference information.
You will find a lot of samples and SDKs available on the site, and I found it extremely easy to integrate.
For US addresses you can require a valid state and verify that the ZIP code is valid. You could even check that the ZIP code is in the right state, but beyond that I don't think there are many tests you could run that wouldn't produce a lot of false negatives. A sketch of those cheap checks follows below.
What are you trying to do - prevent simple mistakes or enforce some kind of identity check?
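A minimal sketch of those cheap checks in Python; the state list is abbreviated here, and a real check would need all states, DC, and the territories, plus a ZIP-prefix-to-state table for the stricter "right state" test.

import re

US_STATES = {"AL", "AK", "AZ", "CA", "CO", "CT", "DC", "FL", "NY", "TX", "WA"}  # abbreviated

def basic_us_checks(state: str, zip_code: str) -> bool:
    """Catches simple mistakes only; says nothing about deliverability."""
    if state.upper() not in US_STATES:
        return False
    # Accept ZIP or ZIP+4 formats
    return re.fullmatch(r"\d{5}(-\d{4})?", zip_code) is not None

print(basic_us_checks("NY", "10001"))  # True
print(basic_us_checks("XX", "10001"))  # False
print(basic_us_checks("NY", "1000"))   # False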
