How to embed metadata within an MPEG-2 transport stream

I want to embed metadata about a particular stream within an MPEG-2 transport stream. Which is the best field within the transport stream to embed this information in? Can I embed this information within the adaptation field of the MPEG-2 TS header?
Thank you
Guru

Under the MPEG-2 Systems standard there are potentially two ways to do this.
One can define a user private table: see Table 2-26 (table_id 0x40-0xFE) and section 2.4.4.10, Syntax of the private section.
Or one can define a user private stream: see Table 2-29 (stream_id under the PES packet, values 0x80 to 0xFF).
Refer to ISO/IEC 13818-1, MPEG-2 Systems.
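For concreteness, here is a minimal Python sketch (my own illustration, not something prescribed by the standard) of the first option: packing a small metadata payload into a "short form" private section with a table_id from the user-private range, then wrapping that section in a single TS packet. The PID 0x0100 and table_id 0x40 are arbitrary example values, and the sketch assumes the whole section fits into one 188-byte packet.

import struct

def build_private_section(table_id, payload):
    # "Short form" private section: section_syntax_indicator = 0, so everything
    # after the 12-bit private_section_length is private_data_byte.
    # table_id should be in the user-private range 0x40-0xFE.
    header = struct.pack(">BH", table_id, 0x3000 | (len(payload) & 0x0FFF))
    return header + payload

def build_ts_packet(pid, section):
    # One 188-byte TS packet carrying the section (assumes it fits).
    payload = bytes([0x00]) + section              # pointer_field = 0
    header = struct.pack(">BHB",
                         0x47,                     # sync byte
                         0x4000 | (pid & 0x1FFF),  # payload_unit_start_indicator = 1
                         0x10)                     # payload only, continuity_counter = 0
    return (header + payload).ljust(188, b"\xff")  # stuff with 0xFF

pkt = build_ts_packet(0x0100, build_private_section(0x40, b"my-metadata"))

The receiver would simply filter on the chosen PID and parse the section back out; how the private payload itself is structured is entirely up to you.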

Related

How are similar fields that are optionally written distinguished by a FlatBuffers reader?

The FlatBuffers documentation mentions that fields are optional in the data:
Each field is optional: It does not have to appear in the wire
representation, and you can choose to omit fields for each individual
object.
I am a bit confused about how FlatBuffers differentiates between two similar fields if one of them is not written.
For example:
table Monster {
  hp:short;
  hpNew:short;
}
Here, if I write only hpNew in the data file, how will the reader know whether this is hpNew or hp?
A Medium article explains that a table in memory starts with a reference to a virtual table (vtable), which tells us where to find each property.
If a field was not written (it is optional), the vtable will have that field's offset marked as 0.
PS: this seems to be the reason tables have a higher cost than structs, and also why FlatBuffers is better than Cap'n Proto (which doesn't support this).
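A small sketch of what this looks like in practice, assuming a hypothetical flatc-generated Python module (MyGame/Monster.py) for the Monster table above:

import flatbuffers
from MyGame import Monster  # hypothetical flatc --python output for the schema above

builder = flatbuffers.Builder(0)
Monster.MonsterStart(builder)
Monster.MonsterAddHpNew(builder, 120)   # write only hpNew, omit hp
builder.Finish(Monster.MonsterEnd(builder))

m = Monster.Monster.GetRootAsMonster(builder.Output(), 0)
print(m.Hp())     # 0   -> the vtable slot for hp is 0, so the schema default is returned
print(m.HpNew())  # 120 -> the vtable slot for hpNew points at the stored value

The reader never guesses from the raw bytes; each accessor looks up its field's slot in the vtable, and a zero offset means "not present, return the default".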

How are Namespace ID, Universal ID and Universal ID Type used in HL7?

In HL7 messages the HD data type consists of Namespace ID, Universal ID and Universal ID Type. What do these different entities specify and how are they used? I mainly want to know what the Namespace ID is in the scope of an organization and when it is used.
HD is most commonly used as part of other data types like CX (patient IDs, for example) or XCN (provider name + ID). Usually it gets labeled as "Assigning Authority" or "Assigning Facility".
The best example I can think of is a provider's NPI number. NPI is issued by CMS/NPPES, so that is what the HD data type is trying to encode. The Namespace ID might just be NPI, or whatever the two parties agree to use to identify that the number is an NPI number. NPI has an OID as well. So any of the following would be valid HD uses:
Just namespace ID:
NPI
All three:
NPI&2.16.840.1.113883.4.6&ISO
Just the Universal parts:
&2.16.840.1.113883.4.6&ISO
Or some other local code instead of the ISO code:
&NPI&HEALTHSYSTEM123
Part of the reason that there is a Namespace ID at all is backwards compatibility. It's probably best to send all three if you can, but expect most systems to only look at and process the Namespace ID.
From Chapter 2.A.33 of the HL7 v2.7 spec:
The HD is designed to be a more powerful and more general replacement for the application identifier of HL7 versions 2.1 and 2.2. It adds two additional components, the universal ID and the universal ID type, to the former application ID (which is renamed more generically to be the namespace ID).
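As a rough illustration, here is a small Python sketch (my own, not from the spec) that splits an HD value on the default sub-component separator '&' into its three parts; the dictionary keys are just for readability:

def parse_hd(value):
    # Missing components come back as empty strings.
    parts = (value.split("&") + ["", "", ""])[:3]
    return {"namespace_id": parts[0],
            "universal_id": parts[1],
            "universal_id_type": parts[2]}

parse_hd("NPI")                            # namespace ID only
parse_hd("&2.16.840.1.113883.4.6&ISO")     # just the universal parts
parse_hd("NPI&2.16.840.1.113883.4.6&ISO")  # all three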

Searching through the protocol buffer file

I'm new to protocol buffers and I was wondering whether it is possible to search a protocol buffers binary file and read the data back in a structured format. For example, if a message in my .proto file has 4 fields, I would like to serialize the message, write multiple messages into a file, and then search for a particular field in the file. If I find the field, I would like to read back the message in the same structured format as it was written. Is this possible with protocol buffers? If so, any sample code or examples would be very helpful. Thank you
You should treat the protobuf library as a serialization protocol, not an all-in-one library that supports complex operations (such as querying, indexing, or picking out particular data). Google has various libraries on top of the open-sourced portion of protobuf to do so, but they are not released as open source, as they are tied to Google's unique infrastructure. That being said, what you want is certainly possible, yet you need to write some code.
Anyhow, some of your requirements are:
one file contains multiple serialized binaries.
search for a particular field in each serialized binary and extract that chunk.
There are several ways to achieve them.
The most popular way for serial read/write is for the file to contain a series of [size, type, serialization output] records. That is, each serialized output is prefixed by its size and type (either 4/8 bytes or variable-length) to help with reading and parsing. So you just repeat this procedure: 1) read size and type, 2) read the binary with the given size, 3) parse it with the given type, 4) go to 1). If you use a union type, or the whole file shares the same type, you may skip the type. You cannot drop the size, as there is no way to know where one output ends by itself. If you want random read/write, another kind of data structure is necessary.
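A minimal Python sketch of that procedure, assuming a hypothetical protoc-generated module person_pb2 (say, from a .proto containing message Person { required string name = 1; optional string email = 3; }). The type prefix is skipped here because the whole file holds one message type:

import struct
import person_pb2  # hypothetical protoc-generated module

def write_messages(path, messages):
    # Each record is [4-byte little-endian size][serialized message bytes].
    with open(path, "wb") as f:
        for msg in messages:
            data = msg.SerializeToString()
            f.write(struct.pack("<I", len(data)))
            f.write(data)

def read_messages(path, msg_type):
    # Repeat: 1) read the size, 2) read that many bytes, 3) parse, 4) go to 1).
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if not header:
                break
            (size,) = struct.unpack("<I", header)
            msg = msg_type()
            msg.ParseFromString(f.read(size))
            yield msg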
'Search field' in a binary file is more tricky. One way is to read/parse the outputs one by one and check the existence of the field with HasField(). It is the most obvious, slow, yet straightforward way to do so. If you want to search for a field by number (say, you want to find 'optional string email = 3;') by scanning for its binary encoding (like 0x1A: field number 3, wire type 2), that is not possible. In a serialized binary stream, field information is stored as merely a number. Without the exact context (the .proto schema or the binary file's structure), the number alone doesn't mean anything. There is no guarantee that 0x1A comes from field information, or from a field of another message type, or is actually the number 26, or part of another number, etc. That is, you need to maintain that information yourself. You may create another file or database with the necessary information to fetch a particular message (like the location of the serialized output that has a given field).
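Continuing the sketch above, the obvious-but-slow scan would look like this (HasField works here because email is an optional field in a proto2 message):

# Read every message back and keep only the ones where 'email' was actually set.
for person in read_messages("people.bin", person_pb2.Person):
    if person.HasField("email"):
        print(person.name, person.email)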
Long story short, what you ask for is beyond what the open-sourced protobuf library itself does, yet you can write it to fit your requirements.
I hope this is what you are looking for:
http://temk.github.io/protobuf-utils/
This is a command-line utility for searching within a protobuf file.

Extract images from HL7 files

Given an HL7 file where I know the TXA segment contains the byte code of an image, how can I extract that image?
I know my question might be blurry, but those are the details I have.
EDIT: The TXA segment is as follows:
TXA|1|25^PathologyResultsReport|8^HTML|||||||||||||||||||908^מעבדת^פתולוגיה^^^^^^^^^^^^20110710084900|||PCFET0NUWVBFIGh0bWwgUFVCTElDICItLy9XM0MvL0RURCBYSFRNTCAxLjAgU3RyaWN0Ly9FTiIgImh0dHA6Ly93d3cudzMub3JnL1RSL3hodG1sMS9EVEQveGh0bWwxLXN0cmljdC5kdGQiPg0KPGh0bWw+PGhlYWQ+PG1ld...
+PGJyLz48L3RkPjwvdHI+DQo8dHI+PHRkPg0KPC90ZD48L3RyPg0KPC90Ym9keT4NCjwvdGFibGU+DQo8L3RkPjxTb2ZUb3ZOZXdDb2x1bW4gLz48L3RyPjxTb2ZUb3ZOZXdMaW5lIC8+DQo8L3Rib2R5Pg0KPC90YWJsZT4NCjwvYm9keT4NCjwvaHRtbD4NCg==|
Thanks in advance
From reading the documentation it appears that images are stored in this form:
OBX||TX|11490-0^^LN||^IM^TIFF^Base64^
SUkqANQAAABXQU5HIFRJRkYgAQC8AAAAVGl0bGU6AEF1dGhvcjoAU3ViamVjdDoAS2V5d29yZHM6~
AENvbW1lbnRzOgAAAFQAaQB0AGwAZQA6AAAAAABBAHUAdABoAG8AcgA6AAAAAABTAHUAYgBqAGUA~
YwB0ADoAAAAAAEsAZQB5AHcAbwByAGQAcwA6AAAAAABDAG8AbQBtAGUAbgB0AHMAOgAAAAAAAAAA~
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAASAP4ABAABAAAAAAAAAAAB~
(681 lines omitted)
1qqQS/cFpaSVeD1QP1/SX1VJfpPSfXr+tIOKrN2aSrB8OHoH1kfz2tnPLpB/6WkksJ0w5G6WKVNe~
vSisJQdhLdQjODpbznVXXDMPdBNhVtBNpOqqtkY60qYoJxQK17cUoS0v4ijYztCapqqYUKmIUJhJ~
sKqoIO2opiqr7lupIMFBBhNQmtOIzG4naS7XsQuDBLFOP/gAgAgAAKMHAACcBgAACRcAALcYAAC4~
EwAA5RoAALQXAADyBAAAnAMAAD8LAADbEQAA5CgAAJtBAABTVQAAOHAAAOyHAAA=|||||||F
This looks like a simple structure, where the image data is Base64 encoded and stored as a long stream. You know it's an image because of ^IM, and you know the image type because of ^TIFF.
More specifically:
When an image is sent, OBX-2 must contain the value ED which stands for encapsulated data. The components of OBX-5 must be as described below.
The first component, source application, must be null.
Component 2, type of data, must contain IM, indicating image data.
Component 3, data subtype, must contain TIFF.
Component 4, encoding, must contain Base64.
Base64 encoding of non-structured data (in standard HL7), normally in an OBX segment but possibly anywhere, is the norm. Older systems may have a 32K or 64K byte limit, and when that happens the data will be spread over multiple segments.
The target system will potentially have to concatenate multiple segments first and then decode the Base64 encoding.
The target system must know what the expected data type is so that it can be properly displayed or further decoded/interpreted.
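As a rough sketch (Python, my own illustration rather than anything from the HL7 spec), the extraction boils down to: find the OBX segment, take component 5 of OBX-5, and Base64-decode it. It assumes the whole encoded payload sits in one OBX segment; if an older system split it across several segments, you would concatenate those chunks before decoding.

import base64

def extract_image(hl7_message, out_path="image.tiff"):
    # Segments are separated by carriage returns, fields by '|', components by '^'.
    for segment in hl7_message.split("\r"):
        fields = segment.split("|")
        if fields[0] != "OBX" or len(fields) < 6:
            continue
        components = fields[5].split("^")
        # components: [source application, "IM", "TIFF", "Base64", <encoded data>]
        if len(components) >= 5 and components[1] == "IM" and components[3] == "Base64":
            # Strip line-wrap '~' marks and whitespace before decoding.
            encoded = "".join(components[4].split()).replace("~", "")
            with open(out_path, "wb") as f:
                f.write(base64.b64decode(encoded))
            return out_path
    return None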
This would be a great question on our new StackExchange site for IT Healthcare: http://area51.stackexchange.com/proposals/51758/healthcare-it

Creating a custom security encoding/decoding transform

I'd like to create a Base16 encoder and decoder for Lion's new Security.framework to complement kSecBase32Encoding and kSecBase64Encoding. Apple's documentation shows how to write a custom transform (Caesar cipher) using SecTransformRegister. As far as I can tell, custom transforms registered this way have to operate symmetrically on the data and can't be used to encode and decode data differently. Does anyone know if writing a custom encoder/decoder is possible, and if so, how?
I don't see a way to tie custom encoders into SecEncodeTransformCreate(), which is what kSecBase32Encoding and the others are based on. But it's easy to create a transform that accepts an "encode" bool and makes use of that to decide whether to encode or decode. In the CaesarTransform example, they attach an attribute called key with SecTransformSetAttribute(); you'd do the same thing, but with a bool encode.
And of course you could just create an encoding transform and a decoding transform.
