Resource Identifiers between two FHIR servers - hl7-fhir

Our scenario: an EHR system is integrating with a device sensor partner using FHIR. Both companies will have independent FHIR servers. Each has different Patient and Organization records with their own identifiers. The preference is that the sensor FHIR server keep the mapping of EHR identifiers to its own internal identifiers for these resources.
The EHR wants to assign a Patient to a Device with the sensor FHIR server.
Step 1: First, the EHR would GET, from the sensor FHIR server, the list of Device resources for a given Organization where a Patient is not currently assigned, e.g.
/api/Device?organization.identifier=xyz&patient:missing=true
Here I would assume the Organization identifier is that of the EHR system since the EHR system doesn't have knowledge of the sensor system Organization identifier at this point.
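A sketch of how the EHR client might construct that search URL. The base URL and the organization identifier value are illustrative, and this only builds the URL; it does not make the HTTP call:

```python
from urllib.parse import urlencode

# Hypothetical base URL for the sensor partner's FHIR server.
SENSOR_BASE = "http://sensor-server.com/api"

def unassigned_devices_url(org_identifier):
    """Build a Device search URL for devices owned by an organization
    (matched by business identifier) that have no patient assigned."""
    params = {
        "organization.identifier": org_identifier,  # chained search on the owning Organization
        "patient:missing": "true",                  # :missing modifier selects unassigned devices
    }
    return f"{SENSOR_BASE}/Device?{urlencode(params)}"

url = unassigned_devices_url("xyz")
```

Note that `urlencode` percent-encodes the `:` in the `patient:missing` modifier, which FHIR servers accept.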
The reply to this call would be a bundle of devices:
... snip ...
"owner": {
"reference": "http://sensor-server.com/api/Organization/3"
},
... snip ...
Question 2: Would the owner Organization reference have the identifier from the search or the internal/logical ID as it's known by the sensor FHIR server as in the snippet above?
Step 2: The clinician of the EHR system chooses a Device from the list to assign it to a Patient in the EHR system
Step 3: The EHR system will now issue a PUT /api/Device/{id} request back to the sensor FHIR server to assign a Patient resource to a Device resource, e.g.
{
"resourceType": "Device",
"owner": {
"reference": "http://sensor-server.com/api/Organization/3"
},
"id": "b4994c31f906",
"patient": {
"reference": "https://ehr-server.com/api/Patient/4754475"
},
"identifier": [
{
"use": "official",
"system": "bluetooth",
"value": "b4:99:4c:31:f9:06",
"label": "Bluetooth address"
}
]
}
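The body above could be assembled like so. This is a minimal sketch using the server URLs from the example; it only builds the request payload, not the HTTP PUT itself:

```python
import json

# Illustrative server base URLs taken from the example above.
SENSOR_BASE = "http://sensor-server.com/api"
EHR_BASE = "https://ehr-server.com/api"

def device_assignment_payload(device_id, sensor_org_id, ehr_patient_id):
    """Assemble the Device resource body for PUT /Device/{id},
    pointing 'patient' at the EHR server's Patient resource."""
    return {
        "resourceType": "Device",
        "id": device_id,
        "owner": {"reference": f"{SENSOR_BASE}/Organization/{sensor_org_id}"},
        "patient": {"reference": f"{EHR_BASE}/Patient/{ehr_patient_id}"},
    }

body = json.dumps(device_assignment_payload("b4994c31f906", "3", "4754475"))
```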
Question 3: What resource URI/identifier should be used for the Patient resource? I would assume it is that of the EHR system since the EHR system doesn't have knowledge of the sensor system Patient identifier. Notice however, that the Organization reference is to a URI in the sensor FHIR server while the Patient reference is a URI to the EHR system - this smells funny.
Step 4: The EHR can issue a GET /api/Device/{id} on the sensor FHIR server and get back the Device resource, e.g.
{
"resourceType": "Device",
"owner": {
"reference": "http://sensor-server.com/api/Organization/3"
},
"id": "b4994c31f906",
"patient": {
"reference": "https://sensor-server.com/api/Patient/abcdefg"
},
"identifier": [
{
"use": "official",
"system": "bluetooth",
"value": "b4:99:4c:31:f9:06",
"label": "Bluetooth address"
}
]
}
Question 4: Would we expect to see a reference to the Patient containing the absolute URI to the EHR FHIR server (as it was on the PUT in Step 3), or would/could the sensor FHIR server have modified that to return a reference to a resource in its FHIR server using its internal logical ID?

I didn't see a Question 1, so I'll presume it's the "assume" sentence in front of your first example. If the EHR is querying the device sensor server and the organizations on the device sensor server include the business identifier known by the EHR, then that's reasonable. You would need some sort of business process to ensure that occurs though.
Question 2: The device owner element would be using a resource reference, which means it's pointing to the "id" element of the target organization. Think of resource ids as primary keys. They're typically assigned by the server that's storing the data, though in some architectures they can be set by the client (who creates the record using PUT instead of POST). In any event, you can't count on them being meaningful business identifiers - and according to most data storage best practices, they generally shouldn't be. And if, as I expect, your scenario involves multiple EHR clients potentially talking to the "device" server, the resource id couldn't possibly align with the business ids of all of the EHRs. (That's a long way of saying no, 'xyz' probably won't be '3'.)
Question 3: If the EHR has its own server, the EHR client could update the device on the "sensor" server to point to a URL on the EHR server. Whether that's appropriate or not depends on your architecture. If you want other EHRs to recognize the patient, then you'd probably want the "sensor" server to host patients too and for the EHR to look up the patient by business id and then reference the "sensor" server's URL. If not, then pointing to the EHR server's URL is fine.
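If the sensor server does host patients, the look-up-by-business-identifier step might look like the following sketch. The base URL and the identifier system URI are assumptions for illustration:

```python
from urllib.parse import urlencode

SENSOR_BASE = "http://sensor-server.com/api"  # illustrative base URL

def patient_search_url(system, value):
    """Search the sensor server for a Patient by business identifier
    (FHIR token search: system|value). The EHR would then reference
    the sensor server's Patient/{id} from the matched bundle entry."""
    token = f"{system}|{value}"
    return f"{SENSOR_BASE}/Patient?{urlencode({'identifier': token})}"

# Hypothetical identifier system for the EHR's medical record numbers.
url = patient_search_url("https://ehr-server.com/mrn", "4754475")
```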
Question 4: When you do a GET, it's normal to receive back the same data you specified on the PUT. It's legal for the server to change the data, including possibly updating references, but that's likely to confuse a lot of client systems, so it's not generally recommended or typical.

Related

enableAutoTierToHotFromCool Does not move from cool to hot

I have some Azure Storage Accounts (StorageV2) located in West Europe. All blobs uploaded are by default in the Hot tier and I have this lifecycle rule defined on them:
{
"rules": [
{
"enabled": true,
"name": "moveToCool",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"enableAutoTierToHotFromCool": true,
"tierToCool": {
"daysAfterLastAccessTimeGreaterThan": 1
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
]
}
}
}
]
}
Somehow the uploaded blobs are moved to cool, but then after I access them again, in the portal they still appear under the Cool tier. Any idea why? (I have waited more than 24 hours for the rule to take effect.)
Some more questions about: "enableAutoTierToHotFromCool": true:
does it depend on the blob size? (For example, if some blobs were moved to cool and are then accessed simultaneously, is the time for a 1 GiB blob to move back to hot the same as for a 10 KiB blob?)
does it depend on the number of blobs that are accessed? (Is there a queue, so that if multiple cool blobs are accessed at the same time, the requests are served in queue order?)
The enableAutoTierToHotFromCool property is a Boolean value that indicates whether a blob should automatically be tiered from cool back to hot if it is accessed again after being tiered to cool.
It takes up to 48 hours for a new policy to apply, and enableAutoTierToHotFromCool depends on neither the size of the blob nor the number of blobs.
If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, refer to the Exceptions section in Configure firewalls and virtual networks.
A lifecycle management policy must be read or written in full; partial updates are not supported. So try writing the policy with a prefixMatch filter:
"prefixMatch": [
"containerName/log"
]
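Assembled into the policy from the question, the rule with the prefixMatch filter added might look like this (the container and prefix names are placeholders to replace with your own):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "moveToCool",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "enableAutoTierToHotFromCool": true,
            "tierToCool": { "daysAfterLastAccessTimeGreaterThan": 1 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "containerName/log" ]
        }
      }
    }
  ]
}
```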
For more details, refer to the Azure documentation on lifecycle management policies.

JSON-LD multiple types

I am currently doing some JSON-LD. I am quite new to this (and to coding in general). I am trying to figure out how I could use different types in one script, as you can see below. I can't work out what I am doing wrong or what I should change to make it work. Thanks
<script type="application/ld+json">
{
"#context": "https://schema.org",
"#type": "Course",
"name": "MSc in IT- Web Communication Design",
"coursePrerequisites": "The following bachelor degree programmes from the University of Southern Denmark and from other universities provide access to the Master’s degree in Web Communication Design: A relevant professional bachelor's degree, e.g. web developer, software developer, business language and IT-based marketing communication, school teacher, nurse, educator, social worker.",
"occupationalCredentialAwarded": "As a student of the MSc in IT – Web Communication Design you will gain specialised skills in web-based communication and knowledge management. Your choice of elective courses, your projects, your thesis as well as your bachelor background qualify you to work with: Web development, digitalisation, web design, digital skills development, social media, etc.",
"description":"Master of Science in IT Web Communication Design. A multi-disciplinary graduate programme that combines IT, communication and organisation. We emphasise the interaction between humans and information technology and combine research-based knowledge with challenges from practice."
},
"provider": {
"#type": "Organization",
"name": "University of Southern Denmark",
"department": "Institute for Design and Communication",
"address": "Universitetsparken 1, 6000 Kolding, Denmark",
"telephone": "+45 65 50 10 00"
},
{
"#context": "http://schema.org",
"#type": "EducationalOccupationalCredential",
"programPrerequisites": "You are expected to have basic knowledge of HTML and CSS before you commence the programme. This may be from courses in your Bachelor's, but it is also possible to obtain this knowledge through online tutorials, e.g. w3schools.com."
}
</script>
Here's a version that validates:
<script type="application/ld+json">{
"#context": "https://schema.org",
"#type": "Course",
"name": "MSc in IT- Web Communication Design",
"coursePrerequisites": "You are expected to have basic knowledge of HTML and CSS before you commence the programme. This may be from courses in your Bachelor's, but it is also possible to obtain this knowledge through online tutorials, e.g. w3schools.com.",
"occupationalCredentialAwarded": "As a student of the MSc in IT – Web Communication Design you will gain specialised skills in web-based communication and knowledge management. Your choice of elective courses, your projects, your thesis as well as your bachelor background qualify you to work with: Web development, digitalisation, web design, digital skills development, social media, etc.",
"description": "Master of Science in IT Web Communication Design. A multi-disciplinary graduate programme that combines IT, communication and organisation. We emphasise the interaction between humans and information technology and combine research-based knowledge with challenges from practice.",
"provider": {
"#type": "Organization",
"name": "University of Southern Denmark",
"department": "Institute for Design and Communication",
"address": "Universitetsparken 1, 6000 Kolding, Denmark",
"telephone": "+45 65 50 10 00"
}
}</script>
A JSON-LD script expects one top-level object, not a list of objects. To get around that you can use @graph; my changes mean there is only one top-level object anyhow.
This is because you want to connect your information. The organization is the provider of the course, so that information should be in the course object.
I wasn't sure about your EducationalOccupationalCredential. I'm guessing coursePrerequisites is closer to what you want.
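If you do need multiple independent top-level objects, the @graph approach mentioned above looks roughly like this (a minimal sketch; the property values are trimmed for brevity):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Course",
      "name": "MSc in IT- Web Communication Design"
    },
    {
      "@type": "EducationalOccupationalCredential",
      "programPrerequisites": "Basic knowledge of HTML and CSS."
    }
  ]
}
</script>
```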

Any way to monitor Nifi Processors? Any Utility Dashboard?

If I have developed a NiFi flow and a support person wants to view the current state: which processor is currently running, which processors have already run, and which have completed?
I mean, is there any dashboard-style utility provided by NiFi to monitor these activities?
You can use the reporting tasks and NiFi itself, or a new NiFi instance; that is what I chose.
To do that you must do the following:
Open the reporting task menu
And add the desired reporting tasks
And configure it properly
Then create a flow to manage the reporting data
In my case I am putting the information into Elasticsearch.
There are numerous ways to monitor NiFi flows and status. The status bar along the top of the UI shows running/stopped/invalid processor counts, and cluster status, thread count, etc. The global menu at the top right has options for monitoring JVM usage, flowfiles processed/in/out, CPU, etc.
Each individual processor will show a status icon for running/stopped/invalid/disabled, and can be right-clicked for the same JVM usage, flowfile status, etc. graphs as the global view, but for the individual processor. There are also some Reporting Tasks provided by default to integrate with external monitoring systems, and custom reporting tasks can be written for any other desired visualization or monitoring dashboard.
NiFi doesn’t have the concept of batch/job processing, so processors aren’t “complete”.
1. Built-in monitoring in Apache NiFi
Bulletin Board
The bulletin board shows the latest ERROR and WARNING bulletins generated by NiFi processors in real time. To access it, go to the drop-down menu at the top right and select the Bulletin Board option. It refreshes automatically, and a user can disable that. A user can navigate to the actual processor by double-clicking the error, and can filter the bulletins by the following:
by message
by name
by id
by group id
Data provenance UI
To monitor the events occurring on any specific processor, or throughout NiFi, a user can access the data provenance UI from the same menu as the bulletin board. Events in the data provenance repository can be filtered by the following fields:
by component name
by component type
by type
NiFi Summary UI
The Apache NiFi summary can also be accessed from the same menu as the bulletin board. This UI contains information about all the components of that particular NiFi instance or cluster. Components can be filtered by name, by type, or by URI. There are different tabs for different component types. The following components can be monitored in the NiFi summary UI:
Processors
Input ports
Output ports
Remote process groups
Connections
Process groups
In this UI, there is a link at the bottom right hand side named system diagnostics to check the JVM statistics.
2. Reporting Tasks
Apache NiFi provides multiple reporting tasks to support external monitoring systems like Ambari, Grafana, etc. A developer can create a custom reporting task, or configure the built-in ones to send NiFi's metrics to external monitoring systems. The following list shows the reporting tasks offered by NiFi 1.7.1.
Reporting tasks:
AmbariReportingTask - to set up the Ambari Metrics Service for NiFi.
ControllerStatusReportingTask - to report the information from the NiFi summary UI for the last 5 minutes.
MonitorDiskUsage - to report and warn about the disk usage of a specific directory.
MonitorMemory - to monitor the amount of Java heap used in a JVM memory pool.
SiteToSiteBulletinReportingTask - to report the errors and warnings in bulletins using the Site-to-Site protocol.
SiteToSiteProvenanceReportingTask - to report the NiFi data provenance events using the Site-to-Site protocol.
3. NiFi API
There is an API endpoint named system-diagnostics, which can be used to monitor NiFi stats in any custom-developed application.
Request
http://localhost:8080/nifi-api/system-diagnostics
Response
{
"systemDiagnostics": {
"aggregateSnapshot": {
"totalNonHeap": "183.89 MB",
"totalNonHeapBytes": 192819200,
"usedNonHeap": "173.47 MB",
"usedNonHeapBytes": 181894560,
"freeNonHeap": "10.42 MB",
"freeNonHeapBytes": 10924640,
"maxNonHeap": "-1 bytes",
"maxNonHeapBytes": -1,
"totalHeap": "512 MB",
"totalHeapBytes": 536870912,
"usedHeap": "273.37 MB",
"usedHeapBytes": 286652264,
"freeHeap": "238.63 MB",
"freeHeapBytes": 250218648,
"maxHeap": "512 MB",
"maxHeapBytes": 536870912,
"heapUtilization": "53.0%",
"availableProcessors": 4,
"processorLoadAverage": -1,
"totalThreads": 71,
"daemonThreads": 31,
"uptime": "17:30:35.277",
"flowFileRepositoryStorageUsage": {
"freeSpace": "286.93 GB",
"totalSpace": "464.78 GB",
"usedSpace": "177.85 GB",
"freeSpaceBytes": 308090789888,
"totalSpaceBytes": 499057160192,
"usedSpaceBytes": 190966370304,
"utilization": "38.0%"
},
"contentRepositoryStorageUsage": [
{
"identifier": "default",
"freeSpace": "286.93 GB",
"totalSpace": "464.78 GB",
"usedSpace": "177.85 GB",
"freeSpaceBytes": 308090789888,
"totalSpaceBytes": 499057160192,
"usedSpaceBytes": 190966370304,
"utilization": "38.0%"
}
],
"provenanceRepositoryStorageUsage": [
{
"identifier": "default",
"freeSpace": "286.93 GB",
"totalSpace": "464.78 GB",
"usedSpace": "177.85 GB",
"freeSpaceBytes": 308090789888,
"totalSpaceBytes": 499057160192,
"usedSpaceBytes": 190966370304,
"utilization": "38.0%"
}
],
"garbageCollection": [
{
"name": "G1 Young Generation",
"collectionCount": 344,
"collectionTime": "00:00:06.239",
"collectionMillis": 6239
},
{
"name": "G1 Old Generation",
"collectionCount": 0,
"collectionTime": "00:00:00.000",
"collectionMillis": 0
}
],
"statsLastRefreshed": "09:30:20 SGT",
"versionInfo": {
"niFiVersion": "1.7.1",
"javaVendor": "Oracle Corporation",
"javaVersion": "1.8.0_151",
"osName": "Windows 7",
"osVersion": "6.1",
"osArchitecture": "amd64",
"buildTag": "nifi-1.7.1-RC1",
"buildTimestamp": "07/12/2018 12:54:43 SGT"
}
}
}
}
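A small sketch of pulling a few fields out of that response in a client. The string below is a trimmed stand-in for the body returned by the endpoint, so no network call is needed:

```python
import json

# Trimmed stand-in for the /nifi-api/system-diagnostics response body,
# keeping only the fields this sketch reads.
response_body = """
{
  "systemDiagnostics": {
    "aggregateSnapshot": {
      "heapUtilization": "53.0%",
      "totalThreads": 71,
      "uptime": "17:30:35.277"
    }
  }
}
"""

# Drill into the aggregate snapshot and report a one-line summary.
snapshot = json.loads(response_body)["systemDiagnostics"]["aggregateSnapshot"]
print(f"heap={snapshot['heapUtilization']} threads={snapshot['totalThreads']} up={snapshot['uptime']}")
```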
You can also use the nifi-api for monitoring. It returns detailed information about each process group, controller service, or processor.
You can use MonitoFi. It is an open-source tool that is highly configurable, uses the nifi-api to collect stats about NiFi processors, and stores them in InfluxDB. It also comes with Grafana dashboards and alerting functionality.
http://www.monitofi.com
or
https://github.com/microsoft/MonitoFi

Why isn't the Google QPX Express API returning results for all airlines?

I enabled access to the Google QPX Express API to do some analytics on the prices of Delta's tickets and Fare Classes. But the response seems to only include flights from a limited set of airlines.
For example, the following request
{
"request": {
"passengers": {
"adultCount": 1
},
"slice": [
{
"origin": "JFK",
"destination": "SFO",
"date": "2015-02-15",
"maxStops": 0
}
],
"solutions": 500
}
}
only returns flights for AS (Alaska Airlines), US (US Air), VX (Virgin America), B6 (JetBlue), and UA (United Airlines).
If I add "permittedCarriers": [DL], then I get an empty response. Likewise, I get an empty response if I leave out permittedCarriers and look for flights between Delta hubs (e.g., "origin": "ATL", "destination": "MSP").
The documentation suggests that QPX Express is supposed to have most airline tickets available. Is there something wrong with my request? Why am I not seeing any results for Delta?
I received a response from Google's QPX Express help team about missing data for Delta. The response was that:
Delta's data, as well as American Airlines' data, is not included in QPX Express search results as a default. Access to their data requires approval by those carriers.
After informing him that my plans to use the data were for research purposes, he responded:
American and Delta restrict access to their pricing and availability to companies which they approve, which are primarily organizations driving the sale of airline tickets. Unfortunately, requests for access are only being reviewed for companies that plan to use the API for commercial purposes.

Steam trading cards API or achievements api

Is there an API to get a user's steam trading cards?
I'm not very familiar with steam but it doesn't seem to be on this page.
https://developer.valvesoftware.com/wiki/Steam_Web_API#GetPlayerSummaries_.28v0001.29
There's an achievements API would that get me the trading cards info as well?
There is not an API for the trading cards (yet). You can, however, still find them. It does depend on the user's privacy settings though. I went into more detail on this question and believe it will help you out.
Achievements can be pulled via the GetPlayerAchievements API call using the following format:
http://api.steampowered.com/ISteamUserStats/GetPlayerAchievements/v0001/?appid=<APPID>&key=<APIKEY>&steamid=<PROFILEID>&l=<LANG>
APPID is the application ID the achievements are associated with (ie. Team Fortress 2 is 440)
APIKEY is the API key Valve assigned you
PROFILEID is the 64-bit player ID given to you when you sign up on Steam
LANG is the language you wish to return the descriptions in (this parameter is optional; omitting it removes the name and description fields from the results). en is English.
In the response is a listing of all of the achievements in the game.
{
"apiname": "TF_MVM_PYRO_BOMB_RESET",
"achieved": 0,
"name": "Hard Reset",
"description": "As a Pyro, reset the bomb 3 times in a single wave."
},
{
"apiname": "TF_MVM_ENGINEER_ESCAPE_SENTRY_BUSTER",
"achieved": 1,
"name": "Real Steal",
"description": "As an Engineer, escape with your sentry as a sentry buster is about to detonate."
},
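A sketch of a client building that request URL and tallying the achieved flags from the response. The API key and IDs are placeholders, and the achievements list is a trimmed stand-in for the parsed response, so nothing here hits the network:

```python
from urllib.parse import urlencode

BASE = "http://api.steampowered.com/ISteamUserStats/GetPlayerAchievements/v0001/"

def achievements_url(appid, apikey, steamid, lang="en"):
    """Build the GetPlayerAchievements request URL
    (note the '&' required before the l= parameter)."""
    return BASE + "?" + urlencode(
        {"appid": appid, "key": apikey, "steamid": steamid, "l": lang}
    )

# Trimmed stand-in for the parsed achievements list from the response above.
achievements = [
    {"apiname": "TF_MVM_PYRO_BOMB_RESET", "achieved": 0},
    {"apiname": "TF_MVM_ENGINEER_ESCAPE_SENTRY_BUSTER", "achieved": 1},
]

# 'achieved' is 0/1, so summing it counts the unlocked achievements.
unlocked = sum(a["achieved"] for a in achievements)
```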
