Setting up SMART app for FHIR and utilizing existing resources - hl7-fhir

I am learning about building a SMART on FHIR app for accessing an EHR. I have been going through the available documentation and reading material, but there's one thing I'm not clear about.
Is there a way I can access existing patient data, i.e. skip the part where I create and upload FHIR data/resources, and just validate and use the existing data?
Are there any services available for that?

Many organizations around the industry host FHIR servers as "sandbox" environments so you don't have to spin up an Azure server and load it with resources.
SMART hosts a sandbox at https://launch.smarthealthit.org with sample patients - and it can exercise the SMART OAuth functionality as well.
The SMART landing page lists several EHR vendor sandboxes that give you a mix of testing tools - some can get you all the way to a working SMART on FHIR app with fake patients.
There are also many sandboxes that don't have SMART on FHIR in front of them -
Redox
ONC
Vonk/Firely
test.fhir.org
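A quick way to confirm you can work against existing data is to search one of these sandboxes directly. Here's a minimal sketch in Python using the requests library, assuming SMART's open R4 endpoint is https://r4.smarthealthit.org (check the launcher page for the current URLs):

    import requests

    # Search the open SMART sandbox for patients named "Smith".
    # No OAuth is needed on the open endpoint; the SMART launcher adds
    # the OAuth layer when you test a full app launch.
    base = "https://r4.smarthealthit.org"
    resp = requests.get(f"{base}/Patient", params={"name": "Smith", "_count": 5})
    resp.raise_for_status()

    bundle = resp.json()  # a FHIR searchset Bundle
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        print(patient["id"], patient.get("name"))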

SMART accesses information over a FHIR interface - so the desired data must be exposed in a manner compliant with FHIR's requirements for a RESTful search call. That doesn't mean the data must be stored as FHIR or moved to a FHIR server. Lots of systems with non-FHIR-based data stores are capable of providing access to that data over a FHIR interface. However, you'll have to build that interface. There are third-party tools that can help you build it, but given that each legacy system stores data differently, there's nothing that would do it 'automatically' without manual mapping effort.
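To make that facade idea concrete, here's a minimal sketch of such an interface using Python and Flask; the legacy lookup function and field names are hypothetical stand-ins for whatever your source system actually exposes:

    from flask import Flask, jsonify

    app = Flask(__name__)

    def fetch_legacy_patient(patient_id):
        # Hypothetical stand-in for a query against your existing,
        # non-FHIR data store (SQL, HL7v2 feed, internal API, ...).
        return {"id": patient_id, "family": "Rossi", "given": "Mario"}

    @app.route("/fhir/Patient/<patient_id>")
    def read_patient(patient_id):
        row = fetch_legacy_patient(patient_id)
        # Map the legacy record onto a FHIR R4 Patient resource on the
        # fly; nothing is ever stored as FHIR.
        return jsonify({
            "resourceType": "Patient",
            "id": row["id"],
            "name": [{"family": row["family"], "given": [row["given"]]}],
        })

    if __name__ == "__main__":
        app.run()

A real facade would also need to implement the search parameters you care about (e.g. GET /fhir/Patient?identifier=...), and that mapping is where most of the effort goes.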

Related

Should event driven architecture be targeted for all data & analytics platforms?

For example,
You have an IT estate with a mix of batch and real-time data sources from multiple systems, e.g. ERP, project management, asset management, website, monitoring, etc.
The aim is to integrate the data sources into a cloud environment (cloud-agnostic).
There is a need for reporting and analytics on combinations of all data sources.
Inevitably, some source systems are not capable of streaming, hence batch loading is required.
Potential use-cases for performing functionality/changes/updates based on the ingested data.
Given a steer for creating a future-proofed platform, architecturally, how would you look to design it?
It's a very open-ended question, but there are some good principles you can adopt to steer you in the right direction:
Avoid point-to-point integration, and get everything going through a few common points - ideally one. Using an API gateway can be a good place to start; the big players (Azure, AWS, GCP) all have their own options, plus there are plenty of decent independent ones like Tyk or Kong.
Batches and event-streams are totally different, but even then you can still potentially route them all through the gateway so that you get the centralised observability (reporting, analytics, alerting, etc).
Use standards-based API specifications where possible. A good REST-based API, built on a proper resource model, is a non-trivial undertaking, and it may not fit if you are dealing with lots of disparate legacy integration. If you do adopt REST, use OpenAPI to specify the APIs (see the sketch after this list). The standard not only makes things easier for consumers, it also gets you better tooling, as many design, build, and test tools support OpenAPI. There's also AsyncAPI for event/async APIs.
Do some architecture. Moving sh*t to cloud doesn't remove the sh*t - it just moves it to the cloud. Don't recreate old problems in a new place.
Work out the logical components in your new solution: what does each of them do (what's its reason to exist)? Don't forget ancillary components like API catalogues, etc.
Think about layering the integration (usually depending on how they will be consumed and what role they need to play, e.g. system interface, orchestration, experience APIs, etc).
Want to handle data in a consistent way regardless of source (your 'agnostic' comment)? You'll need to think through how data is ingested and processed. This might lead you into more data / ETL centric considerations rather than integration ones.
Co-design. Is the integration mainly data coming in or going out? Is the integration with 3rd parties or strictly internal?
If you are designing for external / 3rd party consumers then a co-design process is advised, since you're essentially designing the API for them.
If the API's are for internal use, consider designing them for external use so that when/if you decide to do that later it's not so hard.
Take a step back:
Continually ask yourselves "what problem are we trying to solve?". Usually, a technology initiative is successful if there's a well-understood reason for doing it, with solid buy-in from the business (non-IT).
Who wants the reporting, and why - what problem are they trying to solve?
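As a minimal illustration of the OpenAPI point above: a spec is just a structured document describing your endpoints. Real specs are usually authored directly in YAML, but here's a hypothetical sketch that builds a tiny one in Python and writes it out (the path and fields are made up):

    import json

    # A deliberately tiny OpenAPI 3.0 document describing one endpoint.
    spec = {
        "openapi": "3.0.3",
        "info": {"title": "Asset API", "version": "1.0.0"},
        "paths": {
            "/assets/{assetId}": {
                "get": {
                    "summary": "Fetch a single asset",
                    "parameters": [{
                        "name": "assetId",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }],
                    "responses": {"200": {"description": "The asset"}},
                }
            }
        },
    }

    with open("asset-api.json", "w") as f:
        json.dump(spec, f, indent=2)

Once the spec exists, design, mocking, and contract-testing tools can all work from the same document, which is the main payoff.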
As you mentioned, it's an IT estate - i.e. an enterprise-level mix of batch and real-time systems - so first you have to identify the end goal of this migration. You can think about refactoring applications. If you are trying to make the estate event-driven, then assess the refactoring effort and cost. Separation of responsibility is the key factor for refactoring and migration.
If you are thinking about future-proofing your solution, then consider the cloud for storing and processing your data. It won't necessarily be cheap, but a mix of cloud and on-prem could be a way forward. Cloud providers offer services to move your data at minimal cost, and cloud-native solutions exist for analysing it. A database migration service (in AWS or Azure) can move the data and then capture ongoing changes, so you can keep using your on-prem DB and apps while running reporting and analysis in the cloud. That also eases the load on your transactional DB. Most on-prem-to-cloud data sync is near real-time.

FHIR interoperability platform choice

I want to create a FHIR-compliant interoperability platform with complex business logic.
Our clients can send FHIR resources to the platform.
According to the best-practice documentation, the best architecture is a hybrid FHIR + SOA system, as this link says.
Here are two examples of scenarios I must manage:
The first:
I want to create a ServiceRequest resource with a subject for which I know only the fiscal code as an identifier. If I need other information about the subject, I can query an external database, for example, to get the name, surname, and so on.
Can I do this, i.e. send to my interoperability platform only a ServiceRequest, as follows?
"resourceType" : "ServiceRequest",
"subject" : {
"reference" : "Patient?identifier=FISCALCODE"
}
and so on
The second:
I want to create a ServiceRequest resource with a RelatedPerson linked in the requester element.
The RelatedPerson is not a full registry entry; I know only the name, the surname, and a link to the patient.
Must I create a SOA method createServiceRequest where I pass two parameters, the ServiceRequest and the RelatedPerson? Or can I use a CRUD method for the Bundle resource, where I put my ServiceRequest and my RelatedPerson as entries?
So if I try to summarize, the possible ways are:
Create a method createMyMethodName(ServiceRequest serviceRequest, RelatedPerson relatedPerson)
Is creating and exposing this method compliant with the FHIR standard?
If the answer to the first point is yes, my platform will have a lot of custom methods, but very strict control over the input information.
Use a CRUD Bundle method where I pass a Bundle resource containing the following entries: ServiceRequest, RelatedPerson
This way I expose only one write method on my platform, but I must implement a lot of code to handle all the input Bundles with their different entries (I suppose a mega switch, where each branch applies the controls needed to enforce my business logic rules). For concreteness, the second option would look something like the transaction Bundle sketched below.
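A minimal sketch of that second option in Python: a transaction Bundle with a temporary urn:uuid fullUrl, so the ServiceRequest can reference the RelatedPerson before either exists on the server. The base URL and person names are placeholders:

    import uuid
    import requests

    # Temporary id so the ServiceRequest can point at the RelatedPerson
    # before either resource has been created on the server.
    related_person_uri = f"urn:uuid:{uuid.uuid4()}"

    bundle = {
        "resourceType": "Bundle",
        "type": "transaction",
        "entry": [
            {
                "fullUrl": related_person_uri,
                "resource": {
                    "resourceType": "RelatedPerson",
                    "name": [{"family": "Rossi", "given": ["Maria"]}],
                    "patient": {"reference": "Patient?identifier=FISCALCODE"},
                },
                "request": {"method": "POST", "url": "RelatedPerson"},
            },
            {
                "resource": {
                    "resourceType": "ServiceRequest",
                    "status": "active",
                    "intent": "order",
                    "subject": {"reference": "Patient?identifier=FISCALCODE"},
                    # The server rewrites this temporary reference with the
                    # id it assigns to the RelatedPerson entry above.
                    "requester": {"reference": related_person_uri},
                },
                "request": {"method": "POST", "url": "ServiceRequest"},
            },
        ],
    }

    # Placeholder base URL; a transaction is POSTed to the server root.
    resp = requests.post("https://example.com/fhir", json=bundle)
    print(resp.status_code)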
This response is not intended as a complete answer to your question, and it comes from a US perspective; however, you may find the perspective useful.
Gotcha with identifier queries
"reference" : "Patient?identifier=FISCALCODE"
As written, ?identifier=FISCALCODE will query the FISCALCODE value against all code systems. I think what you want is to specify a code system, e.g. ?identifier=<CodeSystem>|<FiscalCode>
This is a common gotcha that's buried in the FHIR search documentation.
You'll either have to reference an existing code system, e.g. one from an Italy-specific implementation guide (analogous to US Core) that defines the fiscal-code identifier system, or author your own.
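In code, that means passing the system and value together in the identifier parameter. A minimal sketch; the base URL, the system URI, and the fiscal code value are hypothetical placeholders:

    import requests

    base = "https://example.com/fhir"
    # Hypothetical system URI for Italian fiscal codes - use whatever
    # your implementation guide or local CodeSystem actually defines.
    system = "http://example.org/fhir/sid/codice-fiscale"

    resp = requests.get(f"{base}/Patient",
                        params={"identifier": f"{system}|RSSMRA80A01H501U"})
    resp.raise_for_status()
    print(resp.json().get("total"))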
Which FHIR integration paradigm are you using?
Before diving into the createMethod vs Bundle question, I think it'd be useful to step back and pick an overall FHIR integration approach.
In my opinion, there are three major approaches:
1. Load data into an existing stand-alone FHIR server
Challenge: drift between the data loaded into the FHIR server and other data warehouses.
2. FHIR server queries a non-FHIR API
Challenge: duplication between the FHIR API and the non-FHIR API.
NB: In the limiting case, there is no data stored in the FHIR server proper. Adding to the confusion, some will call this implementation a "FHIR gateway" instead of a "FHIR server."
3. FHIR server queries a staging database for FHIR data
Challenge: must write data access for each FHIR resource and each data element.
In the future, there may be a fourth approach where one uses the FHIR mapping language in real-time from an intermediate source model to multiple targets.
Your "CRUD Bundle method" is more in-line with POSTing data to a stand-alone FHIR server, whereas your "createMyMethodName" is more in-line with writing DAOs (Data Access Objects) to an external database.
In the limit where you don't need to maintain synchrony between the FHIR server and source data systems, importing data into a stand-alone FHIR server is much less work.
In the limit where you already have mappings to an intermediate data model (in the US, many large service providers will have mappings to either the USCDI or the Common Clinical Dataset), you'll have an easier lift writing CRUD in the FHIR server against an existing database.
For a more in-depth discussion, there was a FHIR integration patterns talk at FHIR Dev Days 2018, starting at Slide 21. Note that the author assumes a familiarity with architectural patterns such as the facade pattern.
Select a stand-alone server or library
Unless you have a compelling requirement or are a large company, it's advisable to use an existing open-source stand-alone server or library implementation. The three most popular are:
HAPI-FHIR (Java)
Microsoft (.NET)
IBM (Java)
If taking the stand-alone option, popular commercial FHIR servers include:
Microsoft (hosted in Azure)
Smile CDR (commercial version of HAPI-FHIR)
Firely Vonk

Backends For Frontends (BFFs) or API Gateway

In a micro-services architecture we can have:
1. A single API gateway providing a single API for all clients.
2. A single API gateway providing an API for each kind of client.
3. A per-client API gateway providing each client with an API, which is the BFF pattern.
Netflix uses the second style (see Inside the Netflix API Redesign). We can surely say they have created a smart piece of middleware in their architecture that takes on multiple responsibilities.
But how much load can this single API back end handle? It seems it could easily become a bottleneck.
So my question is: what are the benefits of choosing a single API to handle requests for more than 1000 clients, instead of creating an API gateway designed specifically for one type of client? Aren't they facing many challenges in managing and maintaining this complex piece?
It all depends on where your end users are. In the case of Netflix, they have different types of clients: web, mobile, streaming sticks, Blu-ray players, what not. Web clients are updated to the latest version all the time, mobile clients eventually, while a Blu-ray player with a pre-installed app, for example, may never get updated.
You have to version your APIs accordingly for each platform and maintain them based on the client update cycle for backward compatibility. If you have too many variations in a single API, it will be hard to maintain; it is easier to write an API for each type of client. But unless you have a real need for #3 and enough resources to develop for each type of client, I wouldn't jump into it, as you would have to maintain many variations of the API for the same purpose.
I would start small with #1.
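To illustrate the difference, here's a minimal sketch of style #2 in Python/Flask: one gateway process exposing a separate API per kind of client, each shaping the same backend data differently. The backend call is a hypothetical stand-in:

    from flask import Flask, jsonify

    app = Flask(__name__)

    def fetch_catalog():
        # Hypothetical call to an internal service; a real gateway would
        # make an HTTP or gRPC request here.
        return [{"id": 1, "title": "Example", "synopsis": "...",
                 "artwork": "big.png"}]

    @app.route("/api/web/catalog")
    def web_catalog():
        # Web clients get the full payload.
        return jsonify(fetch_catalog())

    @app.route("/api/mobile/catalog")
    def mobile_catalog():
        # Mobile clients get a trimmed payload to save bandwidth.
        items = [{"id": i["id"], "title": i["title"]} for i in fetch_catalog()]
        return jsonify(items)

    if __name__ == "__main__":
        app.run()

The BFF pattern (#3) takes the same idea one step further: each of those route groups becomes its own deployable service, owned by the corresponding client team.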

Organizing large Web API solution

Good day,
I will begin developing a Web API solution for a multi-company organization. I'm hoping to make all useful data available to any company across the organization.
Given that I expect there to be a lot of growth with this solution, I want to ensure that it's organized properly from the start.
I want to organize various services by company, and then again by application or function.
So, with regards to the URL, should I target a structure like:
/company1/application1/serviceOperation1
or is there some way to leverage namespaces:
/company2.billing/serviceOperation2
Is it possible to create a separate Web API project in my solution for each company? Is there any value in doing so?
Hope we're not getting too subjective, but the examples I have seen have a smaller scope, and I really see my solution eventually exposing a lot of Web API services.
Thanks,
Chris
Before writing a line of code, I would look at how the information is to be secured, deployed, and versioned, and at the culture of the company.
Will the same security mechanisms (protocols, certificates, modes, etc.) be shared across all companies and divisions?
If they are shared then there is a case for keeping them in the same solution
Will the services cause differing amounts of load and be deployed onto multiple servers with different patching schedules?
If the services are going onto different servers then they should probably be split to match
Will the deployment and subsequent versioning schedule be independent for each service or are all services always deployed together?
If they are versioned independently then you would probably split the solution accordingly
How often does the company restructure, and does it keep its applications when it does?
If the company is constantly restructuring, you would probably want to split the services by application. If the company is somewhat stable and focused on changing application capabilities, then you would probably want to split the services by division function (accounts, legal, human resources, etc.)
As for the URL structure to access all this, it should flow naturally from the answers above. Hope this helps.
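As a rough illustration of the first URL style (/company1/application1/serviceOperation1), here's a sketch using Python/Flask blueprints; in ASP.NET Web API the equivalent would be route prefixes or separate projects, and all names here are hypothetical:

    from flask import Blueprint, Flask, jsonify

    app = Flask(__name__)

    # One blueprint per company/application pair keeps routes grouped,
    # and lets you later split a group into its own deployable project.
    billing = Blueprint("company1_billing", __name__,
                        url_prefix="/company1/billing")

    @billing.route("/invoices")
    def list_invoices():
        return jsonify([])  # placeholder service operation

    app.register_blueprint(billing)

    if __name__ == "__main__":
        app.run()

Whether each group lives in one solution or in a separate project then becomes a deployment and versioning decision, per the points above.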

Modern reporting solutions for distributed data systems

We've built a SAAS solution, which has a Frontend in PHP/MySQL. The solution uses our in-house "Backend" API to manage user transactions (financial-ish type of stuff). So basically, some of our data is in the "Frontend" database, while all transactional data is in the "Backend" database.
When it comes to reporting, the Frontend requests transactional reports from the Backend, augments them with Frontend data (user attributes, etc.), and draws the report. Creating a new report is usually slow and cumbersome, and the reports lack robust features like sorting and filtering. This is partly because there is no single data source for all the info. Also, we are constantly being asked to provide "ad hoc" reporting capabilities - the type of thing that is complex, and has the potential to bring a server to its knees if you aren't careful.
I think we're at the point where we need to invest in a Reporting system, which would be responsible for combining data dumps from Frontend/Backend, and would allow a non-developer to create new reports. One thing that would be important to us is to provide as seamless of an interface as possible to the reports via our Frontend. That might mean the Reporting system exposes web widgets, or perhaps has a web interface that can be accessed with SSO between our system and the Reporting system. In a nutshell, we aren't looking for a dinosaur, we need something modern. Hosted solutions are preferred, but we'd consider something we need to run ourselves. Looking for advice. Thanks!
EDIT: A hosted solution might not work for us. We are located in Canada, and many customers have policies about having data reside in the US (Patriot Act).
Have a look at the myDBR reporting solution. Reports are built using stored procedures, so anyone familiar with SQL will be able to create reports. There is also a built-in wizard to get you started quickly. It is also very easy to link reports to each other, allowing for easy drill-down-style reports.
The solution is very reasonably priced at 129 EUR (~170 USD) and can be installed in minutes on any standard web server (PHP being the only requirement).
myDBR can be easily integrated into your existing web pages via the built-in SSO and styled via CSS to match your site's overall look and feel.
