According to https://www.hl7.org/FHIR/2015Sep/auditevent.html, the AuditEvent resource is based on the IHE-ATNA audit record definitions, originally from RFC 3881 and now managed by DICOM (see DICOM Part 15 Annex A5).
DICOM Part 15 Annex A5 defines a few DICOM extension nodes, listed below:
SOPClass, Accession, MPPS, NumberOfInstances, ParticipantObjectContainsStudy.
Where can I map this information in the FHIR AuditEvent?
These parts of the DICOM 'Audit Trail Message' standard were not seen as fitting the 80% principle of FHIR.
In most cases these can be nicely encoded as an identifier in AuditEvent.object.identifier. This is the more general approach.
You could define an extension.
Are you trying to create a general-purpose translation, or do you have these elements in practice? The experience of those participating in the efforts indicated that these are not used in practice, or would be better encoded simply as AuditEvent.object.identifier.
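For illustration, here is a rough DSTU2-style XML sketch of carrying a DICOM Accession Number in AuditEvent.object.identifier. The identifier type code ACSN ("Accession ID") is a real code from the HL7 v2 identifier-type table; the accession value is a placeholder and the required AuditEvent elements are elided:

```xml
<AuditEvent xmlns="http://hl7.org/fhir">
  <!-- sketch only: event, participant and source elements elided -->
  <object>
    <identifier>
      <type>
        <coding>
          <system value="http://hl7.org/fhir/v2/0203"/>
          <code value="ACSN"/> <!-- Accession ID -->
        </coding>
      </type>
      <value value="A20150917001"/> <!-- placeholder accession number -->
    </identifier>
  </object>
</AuditEvent>
```

The remaining DICOM nodes (SOPClass, MPPS, NumberOfInstances, ParticipantObjectContainsStudy) could be handled the same way where an identifier fits, or via an extension where it does not.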
I am interested in visualizing melodic contours of polyphonic music with Processing. It is still unclear to me, though, what the most convenient format for imported data (pitch and onset/duration) would be: tabular (e.g. Humdrum), XML (e.g. MEI, MusicXML), or JSON? Maybe another format?
Any suggestions/thoughts on this would be really helpful! Thanks.
Using MIDI files would be optimal, because of the combination of these four reasons:
1. MIDI is widely used. You can export a .midi file from practically any score editor, plus you can create your own by recording the input from a MIDI instrument.
2. You can already find .midi files of iconic polyphonic music on the web (Bach's counterpoints, Renaissance vocal music, etc.).
3. It contains just music/playback information; it doesn't contain notation information like MusicXML does. So if you just want to see pitches and note positions/durations (like in this video), .midi will contain just what you need.
4. You can use the Java MIDI package (javax.sound.midi) in Processing, and it already contains everything you need to read MIDI files.
While other formats might satisfy reasons 1, 2, 3, or 4 individually, only MIDI satisfies all of them; a small sketch of reason 4 is below.
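Here is a minimal Java sketch using javax.sound.midi (the same API is callable from a Processing sketch). The file name is hypothetical; it prints the pitch and onset tick of every note:

```java
import javax.sound.midi.*;
import java.io.File;

public class MidiNotes {
    public static void main(String[] args) throws Exception {
        // Parse the MIDI file into a Sequence (file name is made up)
        Sequence sequence = MidiSystem.getSequence(new File("fugue.mid"));
        for (Track track : sequence.getTracks()) {
            for (int i = 0; i < track.size(); i++) {
                MidiEvent event = track.get(i);
                MidiMessage message = event.getMessage();
                if (message instanceof ShortMessage) {
                    ShortMessage sm = (ShortMessage) message;
                    // NOTE_ON with velocity > 0 marks a note onset
                    if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                        System.out.println("pitch=" + sm.getData1()
                                + " tick=" + event.getTick());
                    }
                }
            }
        }
    }
}
```

Durations can then be recovered by pairing each NOTE_ON with the matching NOTE_OFF (or NOTE_ON with velocity 0) for the same pitch.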
The best answer I can give you is that you should put together a simple hello world program that tests out each format and see which one you like the best.
In the end, you're the one that has to deal with the code, so only you can really decide on the best format.
We are currently in the process of evaluating FHIR for use as part of our medical record infrastructure. For the EHR data (allergies, visits, Rx, etc.), HL7 FHIR seems to have an appropriate mapping.
However, a lot of the data that we deal with is related to personal fitness (think Fitbit or Apple HealthKit):
Active exercise (aerobic or workout): quantity, energy, heart-rate
Routine activities such as daily steps or water consumption
Sleep patterns/quality (an odd case of overlapping states within the same timespan)
Other user-provided: emotional rating, eating activity, women's health, UV
While there is the Observation resource, it still seems best suited (!) to the EHR domain. In particular, the user fitness data is not collected during a visit and is not human-verified.
The goal is to find a "standardized FHIR way" to model this sort of data.
Use an Observation (?) with Extensions? Profiles? Domain-specific rules?
FHIR allows extraordinary flexibility, but each extension/profile may increase the cost of being able to exchange the resource directly later.
An explanation of the appropriate use of a FHIR resource, including when to extend, when to use profiles/tags, and when to encode differentiation via coded values, would be useful.
Define a new/custom Resource type?
FHIR DSTU2 does not define a way to create a new resource type. Wanting to do so may indicate that the role of resources (logical concept vs. implementation interface?) is not understood.
Don't use FHIR at all? Don't use FHIR except on summary interchanges?
It could also be the case that FHIR is not suitable for our messaging format. But would it be any "worse" to go FHIRa <-> FHIRb than x <-> FHIRc when dealing with external interoperability?
The FHIR Registry did not seem to contain any user-fitness-specific Observation profiles, and none of the Proposed Resources seem to add appropriate resource refinements.
At the end of the day, it would be nice to be able to claim that user fitness data can be exchanged as a FHIR stream with minimal or no translation, i.e. in a "standard manner".
Certainly the intent is to use Observation, and there are lots of projects already doing this.
There's no need for extensions; it's just a straightforward use. Note that this, "In particular the user fitness data is not collected during a visit and is not human-verified", doesn't matter. There's lots of EHR data of dubious provenance...
You just need to use the right codes, and bingo, it all works. I've provided a bit more detail in the answer here:
http://www.healthintersections.com.au/?p=2487
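As a sketch of what "the right codes" can look like, here is a rough DSTU2-style Observation for a daily step count. The LOINC code 41950-7 ("Number of steps in 24 hour Measured") is a real code; the patient reference and value are placeholders:

```xml
<Observation xmlns="http://hl7.org/fhir">
  <status value="final"/>
  <code>
    <coding>
      <system value="http://loinc.org"/>
      <code value="41950-7"/> <!-- Number of steps in 24 hour Measured -->
    </coding>
  </code>
  <subject>
    <reference value="Patient/example"/> <!-- placeholder patient -->
  </subject>
  <valueQuantity>
    <value value="9500"/>
    <unit value="steps"/>
  </valueQuantity>
</Observation>
```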
I am going to implement a generic HMIS with a true implementation of HL7. I have studied all the advantages and disadvantages of both versions of HL7, i.e. v2 and v3. But I am still confused about which version is better to implement: v2 for its stability, or v3 for its plug-and-play compatibility. I need your opinion.
HL7 is the organization, but it is also a set of interoperability standards. That means it is not a function in your system that operates on its own; it is the way your system communicates with other systems. So the interface that you need to implement in your system (HL7v2, HL7v3, or HL7 FHIR) is actually dictated by your counterparts.
For example, if you are in the US, most likely you'll end up with HL7v2 for messaging, HL7v3 CDA for documents (better known as the separate C-CDA standard), and HL7 FHIR for SMART initiatives. (Let's assume we are not talking about IHE profiles with the "v3" suffix.) For Canada and the UK it will be much the same, with the only difference that these countries use both HL7v2 and HL7v3 for messaging.
I would like to answer your question based on Implementation and Data consumption.
HL7v2 is pipe-delimited, v3 is XML, and FHIR comes in JSON and XML flavors. Before discussing advantages and disadvantages, it is essential to understand how the end system consumes data and what provisions it has; based on that, you can proceed further.
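To give a feel for the difference, here is a minimal, purely illustrative HL7v2 ADT fragment (all application, facility, and patient values are made up); in FHIR, roughly the same demographics would travel as a Patient resource in JSON or XML:

```
MSH|^~\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|20150917120000||ADT^A01|MSG00001|P|2.3.1
PID|1||12345^^^HOSP^MR||DOE^JOHN||19800101|M
```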
If the question is about how efficiently all patient data can be captured in a message format, I will go with both v2 and v3. V3 is much more standardized and gives more specifications and descriptions. V2 also has HL7-specific standards; if you think your specific message format (ADT/ORU/DFT) lacks the features to capture something, you can use a Z-segment or NTE. The V3 CDA standards make sure (up to what I have used) that most information is covered by the specification itself.
For example, consider the CDA standards: based on the needs, CDA can come in its own flavor. Per the HL7 standards there are separate C-CDA document types for Progress Notes, Procedure Notes, Transition of Care, Diagnostic Imaging Reports, and so on.
I am doing research on the HL7 Version 3 messaging standard. I was told that HL7 version 2 implementations don't really support multimedia data processing (images, videos, etc.). However, this blog post: http://www.hl7standards.com/blog/2006/10/18/how-do-i-send-a-binary-file-inside-of-an-hl7-message/ states that the ED (encapsulated data) data type already exists in the version 2 standard. I even found the specification for the ED data type in chapter 2 of the HL7 v2.3.1 standard. So it is possible to send image data in HL7 v2 messages.
Also, the processing is the same: there can be a reference to the multimedia data (i.e. a URL), or there can be base64-encoded data inline.
I am aware of the fact that both the sending system and the receiving system have to support the ED data type, so there is the possibility that HL7 v2 implementations don't support it. But other than that, is there really a difference?
Thank you!
PS: Of course I'm not talking about the main difference, the model-driven methodology of HL7 v3. My scope is only the processing of multimedia data.
I used to work for a large hospital group in the middleware department, where we transferred ORU messages with embedded AND linked (URL) PDFs inside HL7 v2.3.1 or v2.2 messages (I can't remember which). For the binary content, we used the OBX-5 field.
So yes, HL7 V2.x should support this.
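For example, an OBX segment carrying a base64-encoded PDF via the ED data type might look roughly like this (the observation identifier and the truncated base64 payload are illustrative):

```
OBX|1|ED|PDF^Report^L||^AP^PDF^Base64^JVBERi0xLjQK...||||||F
```

The five ED components in OBX-5 are source application, type of data, data subtype, encoding, and the data itself.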
However, you have to be careful, since each country has its "own" implementation of HL7, and even individual hospitals "misuse" one field or another for their own purposes.
In HL7 v2.5 the OBX-5 length is variable; e.g. you can use an ED datatype to put in binary data of size 65536 bytes (64 KB), so it can hold small images. But for multimedia messages I recommend using the DICOM protocol.
In version 2.2 the OBX-5 field is defined as "Observation Results", string data up to a maximum length of 65 bytes. It also says it can be repeated up to two times. That doesn't sound like you could fit much binary data in there.
I am in the process of selecting an image format that will be used as the storage format for all in-house textures.
The format will be used as a source format from which compressed textures for different platforms and configurations will be generated, and so it needs to cover all possible texture types (2D, cube, volumetric, varying numbers of mip-maps, floating-point pixel formats, etc.) and be completely lossless.
In addition the format has to be able to keep a bit of metadata.
Currently a custom format is used for this, but a commonly available format would be easier for the artists to work with, since it's viewable in most image editors.
I have thought of using DDS, but this format does not support metadata as far as I can see.
All suggestions appreciated!
With your requirements you should stay with your self-made format. I don't know of any image format besides DDS that supports volumetric and cube textures. Unfortunately, DDS does not support metadata.
The closest thing you can find is TIFF. It does not directly support cube maps or volumetric textures, but it supports any number of sub-images. That way you could re-use the sub-images as slices or cube sides.
TIFF also has very good support for custom metadata. The libtiff image reading/writing library works pretty well. It looks a bit archaic if you come from an OO background, but it gets its job done.
Nils
When peeking inside various games' resources, I found out that most of them store textures (I don't know whether they're compressed or not) in TGA.
TIFF would probably be your closest bet for a format that supports arbitrary metadata and multiple frames, but I think you are better off keeping the assets (in this case, images) separate from how they are converted and utilized in your engine.
Keep images in 32-bit PNG format, and put type and meta information in XML, as sketched below. That keeps your data human-viewable, readable, and editable. Obscure custom formats are for engines, not people.
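For instance, a simple sidecar file next to each PNG could carry the type information. The schema here is entirely made up:

```xml
<!-- bricks.png.xml: hypothetical sidecar metadata for bricks.png -->
<texture source="bricks.png">
  <type>cubemap</type> <!-- e.g. 2d, cubemap, or volume -->
  <pixelFormat>rgba8</pixelFormat>
  <generateMips>true</generateMips>
</texture>
```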
Stick with whatever your artists work with.
If you are a Windows/Mac shop and use Photoshop, stick with .psd. If you are a Unix shop and use GIMP, stick with .xcf. These formats will store layers and all the stuff your artists need and are used to.
Since your artists will be creating loads of assets, make their lives as easy as possible, even if it means writing some extra code.
Put the metadata (whatever it may be) somewhere "alongside" the images if the native format (psd/xcf) doesn't support it. For stuff like cube maps and mipmaps (if not generated by the converter), stick to naming guidelines, or guidelines on how to put them into one file. For the volumetric stuff, just stick with the native format of whatever tool you use to create it.
While writing custom formats for the target platform is usually a good idea, writing custom formats for artists results in mayhem...
My experience with DDS is that it is a poorly documented and difficult format to work with, and it offers few advantages. It is generally simpler to just store a master file for each image class that has references to the source images that make it up (i.e. 6 faces for a cube map, an arbitrary number of slices for a volume texture) as well as any other useful metadata. It's always going to be a good idea to keep the metadata in a separate file (or in a database), as you do not want to be loading large numbers of images when carrying out searches, populating browsers, etc. It also makes sense to separate your source image format (tiff, tga, jpeg, dds, ...) from your "meta-format" (cube, volume, ...), since you may well find that you need to use lossy compression to support HDR formats or very large source volume data.
Have you tried PNG? http://java.sun.com/javase/6/docs/api/javax/imageio/metadata/doc-files/png_metadata.html
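Building on that link, here is a rough Java sketch that writes a custom key/value pair into a PNG tEXt chunk through javax.imageio; the keyword, value, and file name are made up:

```java
import javax.imageio.*;
import javax.imageio.metadata.*;
import javax.imageio.stream.ImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.File;

public class PngMeta {
    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);

        ImageWriter writer = ImageIO.getImageWritersByFormatName("png").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        IIOMetadata meta = writer.getDefaultImageMetadata(
                ImageTypeSpecifier.createFromRenderedImage(img), param);

        // Build a tEXt chunk holding a made-up key/value pair
        IIOMetadataNode entry = new IIOMetadataNode("tEXtEntry");
        entry.setAttribute("keyword", "TextureType");
        entry.setAttribute("value", "cubemap+mips");
        IIOMetadataNode text = new IIOMetadataNode("tEXt");
        text.appendChild(entry);
        IIOMetadataNode root = new IIOMetadataNode("javax_imageio_png_1.0");
        root.appendChild(text);
        meta.mergeTree("javax_imageio_png_1.0", root);

        // Write the image together with the merged metadata
        try (ImageOutputStream out =
                ImageIO.createImageOutputStream(new File("texture.png"))) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(img, null, meta), param);
        }
        writer.dispose();
    }
}
```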
As an alternative solution, maybe spend some time writing a plugin for a free image editor to support your file format? I've never done it before, so I don't know the work involved, but there are boatloads of example code out there for you.