Using FHIR for inventory - hl7-fhir

I am looking into applications around hospital inventory systems. Is it correct to use FHIR for things like surgical equipment? We have an encounter (that is a surgery) and we want to list all the devices, including
Manufacturer
Price
Item or part number
Image
Technical specs
CAD or Collada file

Inventory isn't really in the scope of FHIR, which is focused on healthcare-related interoperability. (Inventory, personnel management, purchasing, etc. aren't really any different in a hospital setting than they are in a bank or a factory.) That said, if you want to use extensions on resources like Device and/or make use of the Basic resource, it's generally possible to cobble together a conformant FHIR-based solution for pretty much anything, if you don't mind the liberal use of extensions.
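For illustration, here is a minimal sketch (in Python, as a plain dict mirroring the FHIR JSON, field names in R4 shape) of a Device resource carrying inventory details through extensions. The extension URLs and identifier system are hypothetical placeholders; in practice you would publish your own StructureDefinitions for them.

import json

# Minimal sketch of a FHIR Device carrying inventory details via extensions.
# All example.org URLs and identifiers below are hypothetical placeholders.
device = {
    "resourceType": "Device",
    "identifier": [{"system": "http://example.org/part-numbers", "value": "SCALPEL-0042"}],
    "manufacturer": "Acme Surgical",  # Device.manufacturer is a plain string in R4
    "extension": [
        {   # purchase price: not a core Device field, so it lives in an extension
            "url": "http://example.org/fhir/StructureDefinition/purchase-price",
            "valueMoney": {"value": 129.95, "currency": "USD"},
        },
        {   # CAD/Collada file as an attachment; model/vnd.collada+xml is the registered MIME type
            "url": "http://example.org/fhir/StructureDefinition/cad-file",
            "valueAttachment": {
                "contentType": "model/vnd.collada+xml",
                "url": "http://example.org/cad/scalpel-0042.dae",
            },
        },
    ],
}

print(json.dumps(device, indent=2))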

Related

How do you typically make your CPLEX Studio models more user friendly?

Non-optimization question about CPLEX Studio....
So you make your awesome OPL model in CPLEX Studio and it brilliantly solves your amazeballs problem.
Suppose you wanted to allow other users to access this model in a nice, user-friendly way: basically, specify some simple parameters in a simple user interface (without having to edit code, etc.), then output the solution in some arbitrary way you coded up, like an Excel file, HTML report, or whatever.
1) What are the options for a user interface, without adding in too much other technology?
(e.g. I currently have a Java program doing exactly this, but I'd rather not rely on Java code / programmers / compiling / hosting source code, etc.)
2) What are the options for triggering some user-friendly output, e.g. in a standard format like Excel, some HTML report you coded up, or maybe just triggering a Python script, etc.?
(e.g. I currently render the results in a JavaFX application using grids, charts, and HTML windows; I would prefer something more lightweight and accessible, like Python or HTML5 output)
3) In industry, what is the typical role of CPLEX in a production environment: Is it just called by an external application (Java/.NET etc), or is CPLEX Studio used more actively?
Embed the optimisation model in wider business applications using Java, C#, Python, C++, whatever. Make it just part of the normal business systems that people use. It is just software. Make it so that the users really appreciate that the new software actually benefits them each time they use it. Make it easier to use the model than to not use it. Hide the model inside other software. Probably never even mention optimisation to your end users.
The best model in the world that could deliver amazing benefits will actually achieve nothing of practical value if it doesn't actually get used.
If your target audience or users have to do extra stuff or perform extra steps to use your model, then it will likely not get used very much and may wither and die. If they have to learn new applications etc to use it, it probably won't get used by most people.
By making your model part of their normal day-to-day processes, it will get used, and the practical benefits will come.
I have implemented and support a number of live optimisation applications in several large companies, making decisions that directly affect billions of pounds/dollars of products/revenues per year. Almost all of them have the real optimisation models totally hidden from the users, most of whom have no idea of optimisation or CPLEX; the software in their business systems just works.
There are many options. You may write the model with an algebraic modeling language (AML) like OPL, or with a general-purpose language (GPL).
If you use OPL, then you may call your model from many GPLs, such as C++, Java, Python...
Or you could plug that model into an existing application.
You could call OPL from Excel or a DSX Python notebook, as described at https://www.ibm.com/developerworks/community/forums/html/topic?id=306f3ded-33b8-4d9a-8568-b4288aa64265&ps=25
See the survey I mentioned in 1.
Some users use the CPLEX OPL IDE to make decisions and run simulations.
Others use Decision Optimization Center: https://www.ibm.com/us-en/marketplace/ibm-decision-optimization-center
Finally, some write new applications from scratch, or plug the model into an existing application.
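As a concrete illustration of the "call CPLEX from a GPL" option, here is a minimal sketch in Python using IBM's docplex package. The toy model, variable names, and coefficients are invented for the example; it assumes docplex is installed (pip install docplex) along with a CPLEX engine.

# Minimal sketch: a toy LP embedded in Python via docplex.
from docplex.mp.model import Model

m = Model(name="toy_production")
x = m.continuous_var(name="widgets")
y = m.continuous_var(name="gadgets")
m.add_constraint(2 * x + y <= 100)  # machine hours
m.add_constraint(x + 3 * y <= 90)   # labour hours
m.maximize(3 * x + 2 * y)           # profit

solution = m.solve()
if solution:
    print("widgets:", x.solution_value, "gadgets:", y.solution_value)

From here, the optimisation is just another function your business application calls; the users never see the model.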

How to store User Fitness / Fitness Device data in FHIR?

We are currently in the process of evaluating FHIR for use as part of our medical record infrastructure. For the EHR data (allergies, visits, Rx, etc.), HL7 FHIR seems to have appropriate mappings.
However, lots of data that we deal with is related to personal Fitness - think Fitbit or Apple HealthKit:
Active exercise (aerobic or workout): quantity, energy, heart-rate
Routine activities such as daily steps or water consumption
Sleep patterns/quality (an odd case of overlapping states within the same timespan)
Other user-provided: emotional rating, eating activity, women's health, UV
While there is the Observation resource, this still seems best fit (!) for the EHR domain. In particular, the user fitness data is not collected during a visit and is not human-verified.
The goal is to find a "standardized FHIR way" to model this sort of data.
Use an Observation (?) with Extensions? Profiles? Domain-specific rules?
FHIR allows extraordinary flexibility, but each extension/profile may increase the cost of being able to exchange the resource directly later.
An explanation of the appropriate use of a FHIR resource - including when to extend, use profiles/tags, or encode differentiation via coded values - would be useful.
Define a new/custom Resource type?
FHIR DSTU2 does not provide a way to define a new Resource type. Wanting to do so may indicate that the role of resources - logical concept vs. implementation interface? - is not understood.
Don't use FHIR at all? Don't use FHIR except on summary interchanges?
It could also be the case that FHIR is not suitable as our messaging format. But would it be any "worse" to go FHIRa <-> FHIRb than x <-> FHIRc when dealing with external interoperability?
The FHIR Registry did not seem to contain any User-Fitness specific Observation Profiles and none of the Proposed Resources seem to add appropriate resource-refinements.
At the end of the day, it would be nice to be able to claim that we can exchange User Fitness data as a FHIR stream with minimal or no translation, i.e. in a "standard manner".
Certainly the intent is to use Observation, and there are lots of projects already doing this.
There's no need for extensions; it's just a straightforward use. Note that this: "In particular, the user fitness data is not collected during a visit and is not human-verified" doesn't matter. There's lots of EHR data of dubious provenance...
You just need to use the right codes, and bingo, it all works. I've provided a bit more detail in the answer here:
http://www.healthintersections.com.au/?p=2487
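As an illustration of the "just use the right codes" point, here is a minimal sketch of a daily-step-count Observation as a Python dict mirroring the FHIR JSON (R4 shape). LOINC 55423-8 is the code commonly used for pedometer step counts; the patient and device references and the UCUM annotation are placeholders to verify against your own profiles.

# Minimal sketch of a fitness Observation (daily step count).
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "55423-8",
            "display": "Number of steps in unspecified time Pedometer",
        }]
    },
    "subject": {"reference": "Patient/example"},          # placeholder reference
    "effectivePeriod": {"start": "2016-06-01T00:00:00Z",
                        "end": "2016-06-02T00:00:00Z"},
    "valueQuantity": {
        "value": 8421,
        "unit": "steps",
        "system": "http://unitsofmeasure.org",
        "code": "{steps}",                                 # UCUM annotation; confirm against your profile
    },
    "device": {"reference": "Device/fitbit-tracker"},      # placeholder reference
}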

Is there any difference between HL7 U.S. and U.K. standards?

Is there any difference between HL7 USA and UK standards? If so, what are the differences?
I'm not aware of any country-specific HL7 features, nor did a Google query for site:hl7.org country specific reveal anything striking. But that does not mean there are no differences.
I guess your best bet would be to narrow your question down to a particular set of messages, register as a member at hl7.org, and ask this question through their internal mailing list. (An even better bet might be to talk to some friendly competitor and ask for the lessons learned.)
In our country-specific HL7 community the membership is also paid, but the benefits of becoming a member are close to zero from the developer's perspective. It's prestigious to be a member and it looks good on the business card, but all you get is access to downloadable specifications (and most of them are free for about a year anyway).
In the software I was working with, we were exchanging only a very small set of HL7 messages, namely ADT^A04, ADT^A05, ADT^A08, ADT^A18, ADT^A23, ADT^A34, ORM^O01, and ACK, in two different countries (with a third being considered), without any country-specific features needed, except the convention used for identifying a patient by his/her security number. The number format was country-specific, but it was handled by upper software layers; the topmost customization was the number input mask and user-input validation business rules at the GUI level.
What we did need was installation-specific or third-party-system-specific message customization. For that, what you'll really need is the list of vendors of the hardware and software your system will talk to. Once you have it, check their HL7 conformance statements.

Date/Time and Internationalization for Enterprise Application -- Development Guidelines

Together with another developer, I have embarked on a journey to create a hosted 'CRM Style' application that will cater to enterprise-level businesses. These businesses will be accessing our application remotely, and so the hosted nature of the application will require certain features. For example, to guarantee a level of professional service the following things must be true:
internationalization requires multiple languages and presentation of date/time for various timezones and locales
transactional capability for batch processing of tasks and rollback capabilities
security concerns for keeping data safe and remote invocations secure from attack
etcetera, the list goes on and on
Due to these concerns and my role as the developer most responsible for the server side development, I am very interested in the choices I make early on. Regarding timezones and languages for example, are there issues related to my choice of database or data fields? Do I choose to use a UTC timestamp or date field throughout the application and if so is there a standard format for that? Also, regarding different languages, am I supposed to ensure the data is stored in the database as UTF-8 or unicode?
I really want to avoid laying down the infrastructure of the system only to discover later that a fundamental decision was incorrect or not big enough, wide enough, smart enough, etc. Can someone point me in the right direction regarding these basic 'early' decisions?
EDIT: OK, I appreciate the broad responses, and now I see my question was a little too non-specific. I'd like to focus on the more specific elements that WERE present in the question, such as how to choose the proper format for storing a UTC date/time, or how to save my text data (do I specify a UTF format?)
If you are targeting enterprise CRM, then you will need a very high level of customizability and integrations with all kinds of systems. You will make mistakes in the design. Your only hope is to isolate each little piece of the code so that you can have a chance of fixing it later.
In short, basic software engineering principles are your best bet.
What you are discussing is called a multi-tenant application, wherein you have the same code base used by multiple customers (tenants) with logical or physical separation of data. Remember the fundamental rule of development: flexibility is proportional to complexity. The more flexible you make the system, the more complicated it will be.
RE: UTC
For a CRM application that stores things like when calls were made and when meetings took place, I would definitely store all those in UTC and let the user set their local timezone. However, you might run into dates which are timezone agnostic and for those, I would store whatever date was entered.
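A minimal sketch of that convention in Python (assuming Python 3.9+ for the standard-library zoneinfo module); the timezone name would come from the user's profile:

# Persist timestamps in UTC, render in the user's timezone.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

call_time_utc = datetime.now(timezone.utc)            # store this (e.g. ISO 8601 text or a UTC column)
user_tz = ZoneInfo("Europe/Amsterdam")                # per-user preference, from their profile
print(call_time_utc.isoformat())                      # what goes in the database
print(call_time_utc.astimezone(user_tz).isoformat())  # what the user sees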
RE: Unicode
Yes, I would use Unicode for all user-entered data. However, that will not get you localization. If, for a single company, you have a user in Hong Kong entering text in Chinese and a user in Amsterdam entering text in Dutch, you are not going to get automatic translation. Things like dates and number formats can be localized, but raw text like names used in drop lists and such can be a chore to localize.
As you have not mentioned what you think about the issue, you may find my answer or parts of it rather basic.
If you don't need to, don't use a low-level language. I'd usually use Python for the first version of a CRM application (with the hope that it would be good enough for the next versions), but this decision also depends on the domain community.
Try to write as little code as possible on your own, relying instead on third-party libraries. People may disagree on this, but I would write the code myself only as a last resort. The next point is important, though.
When selecting a library/framework to use, make sure the party behind it is going to last, the library is stable, and the software license suits your needs.
Other general rules apply: focus on the customer, use continuous integration/testing, etc., use good software practices like logging etc.
Nothing is ever stored as "Unicode", because this is an abstract concept. Unicode is always stored in some Unicode transformation format (UTF) (well, or UCS, but I have never seen that used anywhere). The most commonly used UTF is UTF-8, but I suggest using whatever is native/default to your platform (see Wikipedia).
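A short Python illustration of the distinction between abstract Unicode text and its concrete UTF-8 bytes:

# Unicode text is abstract; what you actually store or send are encoded bytes.
text = "Grüße, 世界"             # a Python str: a sequence of Unicode code points
data = text.encode("utf-8")      # concrete bytes in the UTF-8 transformation format
assert data.decode("utf-8") == text
print(len(text), len(data))      # 9 code points vs. 15 bytes (multi-byte characters)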

How do you perform address validation? [closed]

Is it even possible to perform address (physical, not e-mail) validation? It seems like the sheer number of address formats, even in the US alone, would make this a fairly difficult task. On the other hand it seems like a task that would be necessary for several business requirements.
Here's a free and sort of "outside the box" way to do it. Not 100% perfect, but it should reject blatantly non-existent addresses.
Submit the entire address to Google's geocoding web service. This service attempts to return the exact coordinates of the location you feed it, i.e. latitude and longitude.
In my experience if the address is invalid you will get a result of 602 from the service. There's definitely a possibility of false positives or false negatives, but used in conjunction with other consistency checks it could be useful.
(Yahoo's geocoding web service, on the other hand, will return the coordinates of the center of the town if the town exists but the rest of the address is bogus. Potentially useful as long as you pay close attention to the "precision" field in the result).
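The 602 result mentioned above comes from the older v2 API; against the current JSON geocoding endpoint, the same sanity check looks roughly like the sketch below. It assumes you have a Google API key and the requests package installed, and "ZERO_RESULTS" plays the role of the old "602".

# Sketch: reject blatantly non-existent addresses via Google's JSON geocoding endpoint.
import requests  # pip install requests

def address_seems_real(address, api_key):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": api_key},
    )
    return resp.json().get("status") == "OK"  # "ZERO_RESULTS" or an error suggests a bad address

print(address_seems_real("1600 Pennsylvania Avenue, Washington, DC", "YOUR_KEY"))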
There are a number of good answers in here but most of them make the assumption that the user wants an "API" solution where they must write code to connect to a 3rd-party service and/or screen scrape the USPS. This is all well and good, but should be factored into the business requirements and costs associated with the implementation and then weighed against the desired benefits.
Depending upon the business requirements and the way the data is received into the system, a real-time address-processing solution may be the best bet. If a real-time solution is required, you will want to consider the license agreements and technical limitations of the Google Maps/Bing/Yahoo APIs. They typically limit the number of calls you can make each day. The USPS Web Tools API is similar; in addition, they restrict how and why you can use their system and how you are allowed to use the data afterwards.
At the same time, there are a handful of great service providers that can easily process a static list of addresses. Essentially, you give the service provider a CSV file or Excel file, they clean it up and get it back to you. It's a one-time deal with no long-term commitment or obligation—usually.
Full disclosure: I'm the founder of SmartyStreets. We do address verification for addresses within the United States. We can easily CASS-certify a list, and we also offer an address verification web service API. We have no hidden fees, contracts, or anything. You use our service until you no longer need it, and then you can walk away. (Unlike cell phone companies that require a contract.)
USPS has an address cleaner online, which someone has screen-scraped into a poor man's web service. However, if you're doing this often enough, it'd be a better idea to apply for a USPS account and call their own web service.
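For reference, a rough sketch of what a call to the USPS Web Tools "Verify" API looks like. It assumes a registered Web Tools USERID, and the exact XML schema should be checked against the current USPS documentation, as it has changed over the years.

# Rough sketch of a USPS Web Tools "Verify" call (USERID is a placeholder).
import requests  # pip install requests

xml = """<AddressValidateRequest USERID="YOUR_USERID">
  <Address ID="0">
    <Address1></Address1>
    <Address2>1600 Pennsylvania Ave NW</Address2>
    <City>Washington</City>
    <State>DC</State>
    <Zip5>20500</Zip5>
    <Zip4></Zip4>
  </Address>
</AddressValidateRequest>"""

resp = requests.get("https://secure.shippingapis.com/ShippingAPI.dll",
                    params={"API": "Verify", "XML": xml})
print(resp.text)  # a standardized address on success, an <Error> element otherwise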
I will refer you to my blog post, A lesson in address storage, where I go into some of the techniques and algorithms used in the process of address validation. My key thought is: "Don't be lazy with address storage, it will cause you nothing but headaches in the future!"
Also, there is another Stack Overflow question on this topic, entitled How should international geographic addresses be stored in a relational database.
In the course of developing an in-house address verification service at a German company I used to work for I've come across a number of ways to tackle this issue. I'll do my best to sum up my findings below:
Free, Open Source Software
Clearly, the first approach anyone would take is an open-source one (like openstreetmap.org), which is never a bad idea. But whether or not you can really put this to good and reliable use depends very much on how much you need to rely on the results.
Addresses are an incredibly variable thing. Verifying U.S. addresses is not an easy task, but it is bearable; once you're going for Europe, however, especially the U.K. with its extensive postal code system, the open-source approach will simply lack data.
Web Services / APIs
Enterprise-Class Software
Money gets it done, obviously. But not every business or developer can spend ~$0.15 per address lookup (that's $150 for 1,000 API requests) - a very expensive business model the vast majority of address validation APIs have implemented.
What I ended up integrating: streetlayer API
Since I was not willing to take on the programmatic approach of verifying address data manually, I finally came to the conclusion that I needed an API with a price tag that would not make my boss want to fire me, while still delivering solid and reliable international verification results.
Long story short, I ended up integrating an API built by apilayer, called "streetlayer API". I was easily convinced by a simple JSON integration, surprisingly accurate validation results and their developer-friendly pricing. Also, 100 requests/month are entirely free.
Hope this helps!
I have used the services of http://www.melissadata.com. Their "address object" works very well. It's pricey, yes. But when you consider the costs of writing your own solution, the cost of dirty data in your application, returned mailers, lost sales, and the like, the costs can be justified.
For US-based address data my company has used GeoStan. It has bindings for C and Java (and we created a Perl binding). Note that it is a commercial product and isn't cheap. It is quite fast though (~300 addresses per second) and offers features like CASS certification (USPS bulk-mail discount), DPV (delivery point verification) flagging, and LON/LAT geocoding.
There is a Perl module Geo::PostalAddress, but it uses heuristics and doesn't have the other features mentioned for GeoStan.
Edit: some have mentioned 'doing it yourself'; if you do decide to do this, a good source of information to start with is the US Census TIGER data set, which contains a lot of information about the US, including address information.
As seen on reddit:
<?php
// Geocode an address with Yahoo's (now retired) geocoding service; flags=J requests JSON output.
$address = urlencode('1600 Pennsylvania Avenue, Washington, DC');
$json = json_decode(file_get_contents("http://where.yahooapis.com/geocode?q=$address&flags=J"));
print_r($json);
The Fixaddress.com service provides the following:
1) Address validation.
2) Address correction.
3) Address spelling correction.
4) Correction of phonetic mistakes in addresses.
Fixaddress.com uses USPS and TIGER data as reference data.
For more detail, visit the link below:
http://www.fixaddress.com/
One area where address lookups have to be performed reliably is for VOIP E911 services. I know companies reliably using the following services for this:
Bandwidth.com 9-1-1 Access API MSAG Address Validation
MSAG = Master Street Address Guide
https://www.bandwidth.com/9-1-1/
SmartyStreets US Street Address API
https://smartystreets.com/docs/cloud/us-street-api
There are companies that provide this service. Service bureaus that deal with mass mailing will scrub an entire mailing list so that it's in the proper format, which results in a discount on postage. The USPS sells databases of address information that can be used to develop custom solutions. They also have lists of approved vendors who provide this kind of software and service.
There are some (but not many) packages that have APIs for hooking address validation into your software.
However, you're right that it's a pretty nasty problem.
http://www.usps.com/ncsc/ziplookup/vendorslicensees.htm
As mentioned, there are many services out there. If you are looking to truly validate the entire address, then I highly recommend going with a web-service-based solution to ensure that changes can quickly be recognized by your application.
In addition to the services listed above, webservicex.net has this US address validation service: http://www.webservicex.net/WCF/ServiceDetails.aspx?SID=24
We have had success with Perfect Address.
Their database has all the US street names and street-number ranges. It also acts as a pretty decent parser for free-form address fields, if you are lucky enough to have that kind of data.
Validating that something is a valid address is one thing.
But if you're trying to validate that a given person lives at a given address, your only near-guarantee would be a test mailing to the address, and even that is not certain if the person is organised or knows somebody at that address.
Otherwise, people could just specify an arbitrary address which they know exists, and it would mean nothing to you.
The best you can do for immediate results is to request that the user send a photographed/scanned copy of the head of their bank statement or some other proof of recent residence, because at least then they have to work harder to forge it, and forgeries of such things show up easily with a basic level of image forensic analysis.
There is no global solution. For any given country it is at best rather tricky.
In the UK, the Post Office controls postal addresses and can provide (at a cost) address information for validation purposes.
Government agencies also keep an extensive list of addresses, and these are centrally collated in the NLPG (National Land and Property Gazetteer).
Actually validating against these lists is very difficult. Most people don't even know exactly how their address is held by the Post Office. Some businesses don't even know what number they are on a particular street.
Your best bet is to approach a company that specialises in this kind of thing.
Yahoo also has a Placemaker API. It is good only for locations, but it has a universal ID for all world locations.
It looks like there is no standard in the ISO list.
You could also try SAP's Data Quality solutions, which are available both as a server platform for processing large numbers of requests and as an embeddable SDK if you want to run it in-process with your application. We use it in our application and it's very robust and scalable.
NAICS.com is coming out with an API that will add all kinds of key business data including street address. This would happen on the fly as your site's forms are processed. https://www.naics.com/business-intelligence-api/
You can try Pitney Bowes' "IdentifyAddress" API, available at https://identify.pitneybowes.com/
The service analyses and compares the input addresses against known address databases around the world to output standardized detail. It corrects addresses, adds missing postal information, and formats them using the format preferred by the applicable postal authority. It also uses additional address databases so it can provide enhanced detail, including address quality, type of address, transliteration (such as from Chinese Kanji to Latin characters), and whether an address is validated to the premise/house-number, street, or city level of reference information.
You will find a lot of samples and SDKs available on the site, and I found it extremely easy to integrate.
For US addresses you can require a valid state and verify that the ZIP code is syntactically valid. You could even check that the ZIP code is in the right state (a minimal sketch follows below), but beyond that I don't think there are many tests you could run that wouldn't produce a lot of false negatives.
What are you trying to do -- prevent simple mistakes, or enforce some kind of identity check?
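A minimal sketch of those cheap checks in Python; the state set is truncated and the ZIP-prefix-to-state table holds two illustrative entries only, not real reference data.

# Cheap US address sanity checks: known state code, well-formed ZIP,
# and (given a real dataset) ZIP prefix consistent with the state.
import re

US_STATES = {"AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DC", "DE", "FL"}  # ...and the rest
ZIP_PREFIX_TO_STATE = {"200": "DC", "100": "NY"}  # placeholder entries only

def basic_us_address_check(state, zip_code):
    if state not in US_STATES:
        return False
    if not re.fullmatch(r"\d{5}(-\d{4})?", zip_code):
        return False
    expected = ZIP_PREFIX_TO_STATE.get(zip_code[:3])
    return expected is None or expected == state  # unknown prefix: don't reject

print(basic_us_address_check("DC", "20500"))  # True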
