How are similar fields that are optionally written distinguished by a FlatBuffers reader?

The FlatBuffers documentation mentions that fields are optional in the data:

Each field is optional: It does not have to appear in the wire representation, and you can choose to omit fields for each individual object.
I am a bit confused about how FlatBuffers differentiates between two similar fields if one of them is not written.
For example:
table Monster {
hp:short;
hpNew:short;
}
Here, if I write only hpNew in the data file, how will the reader know whether this is hpNew or hp?

A Medium article explains that a table in memory starts with a reference to a virtual table (vtable), which tells us where to find each of its fields.
If a field is not written (it is optional), the vtable marks its offset as 0, and the reader falls back to the field's default value.
PS: this seems to be the reason tables have a higher cost than structs, and also an advantage FlatBuffers has over Cap'n Proto (which doesn't support this).
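To make that concrete, here is a minimal Python sketch that builds the Monster above with only hpNew written, using the flatbuffers runtime's low-level Builder calls (the same calls flatc-generated code emits), and then decodes the vtable by hand. The field ids (hp = 0, hpNew = 1) follow declaration order in the schema:

import struct
import flatbuffers

# Build a Monster-like table by hand, writing only field id 1 (hpNew).
b = flatbuffers.Builder(0)
b.StartObject(2)               # a table with 2 field slots: hp (0), hpNew (1)
b.PrependInt16Slot(1, 150, 0)  # write hpNew = 150; hp is never written
root = b.EndObject()
b.Finish(root)
buf = bytes(b.Output())

# Decode by hand to show how the reader tells the fields apart.
table_pos = struct.unpack_from('<I', buf, 0)[0]          # root uoffset
soffset   = struct.unpack_from('<i', buf, table_pos)[0]  # table -> vtable
vtable    = table_pos - soffset
# vtable layout: [vtable size][table size][slot for id 0][slot for id 1]...
hp_off    = struct.unpack_from('<H', buf, vtable + 4)[0]
hpnew_off = struct.unpack_from('<H', buf, vtable + 6)[0]

print(hp_off)     # 0 -> hp was never written; the reader returns the default
print(hpnew_off)  # nonzero -> hpNew is present at table_pos + hpnew_off
print(struct.unpack_from('<h', buf, table_pos + hpnew_off)[0])  # 150

So the reader never confuses hp with hpNew: each field id owns a fixed slot in the vtable, and an absent field simply has 0 in its slot.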

Related

Validating protobuf using json schema

Are there any examples / references to see how protobuf data can be validated using json schema?
Apologies if I'm starting off with something too basic...
Protobuf data can be validated using protobuf deserialisers; if data is parsed by the parser that was generated for the message (and is part of the class representing that message), then it's valid data. To generate that parser / class, you'd have started with a protobuf schema and compiled that with protoc.
Generally speaking, I'd say that wanting to validate such data against a json schema is possibly not a good idea. The point is that having a second, json schema for the same data gives you "two versions of the truth", which is generally a bad idea. Which one is right: the .proto schema, or the json schema? If I edit one, have I accurately edited the other?
JSON Can Do More Than Protobuf
I can see why you may want to check such data against a json schema. In a json schema you can define things like value and size constraints that cannot be expressed in a protobuf schema. For example, a message field "bearing" might have a valid range of 0 to 359 in the application. There is no way to express such a constraint in protobuf, but if it is expressed in a json schema used to validate json data, the validator will object if "bearing" is set to 412.
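For example, a sketch of that constraint using the third-party Python jsonschema package (my choice of validator; not something protobuf provides):

import jsonschema

schema = {
    "type": "object",
    "properties": {
        "bearing": {"type": "integer", "minimum": 0, "maximum": 359},
    },
    "required": ["bearing"],
}

jsonschema.validate({"bearing": 45}, schema)   # fine
try:
    jsonschema.validate({"bearing": 412}, schema)
except jsonschema.ValidationError as e:
    print(e.message)  # "412 is greater than the maximum of 359"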
So, why not generate code from the json schema? I have tried (some time ago - I'm out of date) code generators for languages like C# using json schema as input, but found the result unsatisfactory (the code generators I tried didn't want to implement all the things in my json schema, e.g. unions). Things may have got a lot better since then.
Is there a Better Solution?
If this is indeed the kind of thing you need to do, then it's likely that choosing protobuf is not ideal for the purpose (due to the lack of constraints in protobuf schema). The question then is, what are the alternatives?
In my experience, if you want to stick to the concept of starting with a schema and generating code, the best I've ever used is ASN.1 (where "best" assumes you're willing to pay for good commercial ASN.1 tools from companies like Objective Systems or Nokalva - I've been a customer of both).
These days, ASN.1 can even serialise to json (or xml in several flavours, or other text and packed/unpacked binary data formats). The ASN.1 schema language does have constraints on sizes of lists and values of fields. There is an official translation between ASN.1 schema and XML schema (XSD), with the better ASN.1 tools able to do that translation. There may now be a defined translation between ASN.1 and json schema too (I don't know), plus tools to do it.
The point of that is, with translation tools, one can then say that the ASN.1 schema and XSD (or json) schema are "one single truth" - one being automatically generated from the other which was hand written.
A Good Halfway House?
I notice (from a quick search) that there are various git* projects purporting to translate between protobuf and json schema, which if satisfactory means that your json and protocol buffer schema can be automatically translated between one and the other (which means that my 2nd para above is junk!).
Unless something has happened recently, those json/protobuf schema translations are going to be limited or disappointing. ASN.1, XSD and json schema are broadly similar in terms of what their syntaxes can express (including size and value constraints), so translation between them doesn't necessarily lose information. However, the syntax of protobuf schema is a lot more limited than that of json schema, so a translation from json schema to protobuf might lose the very information that you want to enforce.
The good news though would be that the protobuf schema would still be a "form of the truth" having been translated from the json schema. If you were using protobuf to generate json data instead of protobuf binary format data, the "original form of the truth" (the json schema) can be used to validate the protobuf generated json, with constraints on value and size still intact. That would be a good result!
Good luck!

Why does protobuf's FieldMask use field names instead of field numbers?

In the docs for FieldMask the paths use the field names (e.g., foo.bar.buzz), which means renaming the message field names can result in a breaking change.
Why doesn't FieldMask use the field numbers to define the path?
Something like 1.3.1?
You may want to consider filing an issue on the GitHub protocolbuffers repo for a definitive answer from the code's authors.
Your proposal seems logical. Using names may be a historical artifact. There's a possibly relevant comment on an issue thread in that repo:
https://github.com/protocolbuffers/protobuf/issues/3793#issuecomment-339734117
"You are right that if you use FieldMasks then you can't safely rename fields. But for that matter, if you use the JSON format or text format then you have the same issue that field names are significant and can't be changed easily. Changing field names really only works if you use the binary format only and avoid FieldMasks."
The answer to your question lies in the fact that FieldMasks are a convention/utility developed on top of the proto3 schema definition language, not a feature of it (and that utility is not present in all of the language bindings).
While you're right in your observation that it can break easily (as schemas tend to evolve and change), you need to consider this design choice from a user-friendliness point of view:
If you're building an API and want to allow the user to select the set of fields present in the response payload (the common use case for field masks), it's much more convenient to let them do that using field paths rather than binary field indices, as the latter would force the user of the gRPC/protocol generated code to be "aware" of the schema. That's not always desirable when providing an API as generated code packages.
While implementing this as a proto schema feature can allow the user to have the best of both worlds (specify field paths, have them encoded as binary indices) for binary encoding, it would also:
Complicate code generation requirements
Still be an issue for plain text encoding.
So, you can understand why it was left as an “external utility”.
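For what it's worth, the name-based paths are visible directly in the runtime API. A minimal Python sketch (the field names are placeholders from the question):

from google.protobuf import field_mask_pb2

mask = field_mask_pb2.FieldMask(paths=["foo.bar.buzz"])
print(mask.paths)           # ['foo.bar.buzz'] -- names, not numbers like 1.3.1
print(mask.ToJsonString())  # "foo.bar.buzz"  -- the same names travel in JSON

# Renaming `buzz` in the .proto silently invalidates every mask stored or
# issued with the old name -- the breaking change the question describes.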

HL7 FHIR mark resources as anonymized

I am trying to map an existing domain into HL7 FHIR.
So far it has been pretty easy to find FHIR resources that more or less represent the same data and can be used for that purpose. But now I am running into a problem that I am not sure how to solve.
The existing domain allows data to be anonymized depending on the user's access level. E.g. a patient's name or address might be removed and marked as anonymized. Other data will be pseudonymised: for example, a birthdate in 1980 will be replaced with 01.01.1980, and an age of 37 will be replaced with a category of 30-40.
So I am unsure how to integrate that into the FHIR domain. I was thinking I could create an extension holding a boolean, indicating whether a value was anonymized, and always replace or remove the original value. This might work, but I will run into big problems when the anonymized value is of a different type than the original value (e.g. an age is replaced by a range of values).
Is that even a valid approach? I thought this might be a common problem, but I could not find any examples where people described methods of how to mark data as altered. Unfortunately the documentation at http://build.fhir.org/extensibility-registry.html does not contain anything that would help my case.
You can use security labels for this purpose (Resource.meta.security). Take a look at REDACTED and SUBSETTED in the security label value set: https://www.hl7.org/fhir/valueset-security-labels.html
If you need to convey a data type other than the one allowed by the resource (e.g. wanting to convey a range rather than a birthdate), you'd need to use an extension. (Note that dates are valid even if you only include the year.)
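For illustration, a minimal sketch of what that could look like on a Patient resource (shown as a Python dict mirroring the FHIR JSON; the code-system URL is my reading of the value set linked above):

patient = {
    "resourceType": "Patient",
    "meta": {
        "security": [
            # Both codes come from the v3-ObservationValue code system,
            # per the security-labels value set linked above.
            {"system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
             "code": "REDACTED"},   # parts of this resource were removed
            {"system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
             "code": "SUBSETTED"},  # this is a filtered view, not the full record
        ]
    },
    "birthDate": "1980",  # year-only precision is a valid FHIR date
}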

Globally unique option field numbers for protobufs

In the protobuf documentation (https://developers.google.com/protocol-buffers/docs/proto#customoptions) it says this about custom options:
One last thing: Since custom options are extensions, they must be assigned field numbers like any other field or extension. In the examples above, we have used field numbers in the range 50000-99999. This range is reserved for internal use within individual organizations, so you can use numbers in this range freely for in-house applications. If you intend to use custom options in public applications, however, then it is important that you make sure that your field numbers are globally unique. To obtain globally unique field numbers, please send a request to protobuf-global-extension-registry@google.com. Simply provide your project name (e.g. Objective-C plugin) and your project website (if available). Usually you only need one extension number.
Why do the options field numbers have to be globally unique for public applications? In what way can collisions be a problem?
Basically, because you wouldn't know whether the data you get is correct.
The protobuf binary wire format only stores the field numbers and the payload (which is itself, for complex types, just field numbers and sub-payloads). There is no name data. So: when you store and retrieve an extension field, all you're saying is "fetch field {field number}, interpret it as {type}". If two different systems have extended the same data using the same field number, then you don't have any way of knowing whether the data you're fetching was actually in that format.
Normally this isn't a problem, as it is rare to conflict like this on the same data; but custom options are different! I'm a library author; I might want to add a custom option that my schema parsing tools recognize, by extending (say) MessageOptions. MessageOptions is the extension point for DescriptorProto, which is to say: it's where option (foo) = "bar"; inside a message goes.
To do that, I need to assign a number for foo. I choose 5000 arbitrarily (MessageOptions defines extensions 1000 to max;, so that's fine). All is good. My tooling works.
Unknown to me, another library author has chosen to do something similar and has also used 5000. Once the schema is compiled (by protoc or similar), all I have is numbers. If I ask for the data from field 5000, I don't know whether I'm getting my extension, or the other one. The meaning is lost. OK, at a push I could also check the dependency list on the FileDescriptorProto, but ... that's hit and miss.
I don't know whether the presence of value 1 in field 5000 is:
option (.mystuff.someext) = 1;
vs
option (.anotherlib.whatever) = -1; // stored as sint32
vs
option (.yetanother.library.option) = true;
If those extensions all have number 5000, they appear identically on the wire.
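A small plain-Python sketch of that ambiguity (no protobuf runtime required); it hand-encodes field 5000 with varint payload 1 and shows the three readings above:

def varint(n):
    # Encode a non-negative int as a protobuf varint.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

tag = varint((5000 << 3) | 0)  # field 5000, wire type 0 (varint)
payload = tag + varint(1)      # ... carrying the value 1
print(payload.hex())           # c0b80201 -- just numbers, no names anywhere

# The same varint 1 decodes three different ways depending on the schema:
print(1)                       # as int32:           someext  =  1
print((1 >> 1) ^ -(1 & 1))     # as sint32 (zigzag): whatever = -1
print(bool(1))                 # as bool:            option   = true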

Searching through the protocol buffer file

I'm new to protocol buffers and I was wondering whether it is possible to search a protocol buffers binary file and read the data in a structured format. For example, if a message in my .proto file has 4 fields, I would like to serialize the message, write multiple messages into a file, and then search for a particular field in the file. If I find the field, I would like to read back the message in the same structured format as it was written. Is this possible with protocol buffers? If so, any sample code or examples would be very helpful. Thank you.
You should treat the protobuf library as one serialization protocol, not as an all-in-one library that supports complex operations (such as querying, indexing, or picking out particular data). Google has various libraries on top of the open-sourced portion of protobuf to do so, but they are not released as open source, as they are tied to its unique infrastructure. That being said, what you want is certainly possible, yet you need to write some code.
Anyhow, some of your requirements are:
one file contains various serialized binaries.
search for a particular field in each serialized binary and extract that chunk.
There are several ways to achieve them.
The most popular way to do serial reads/writes is for the file to contain a series of [size, type, serialized output] records. That is, each serialized output is prefixed by its size and type (either 4/8 bytes or variable-length) to help with reading and parsing. So you just repeat this procedure: 1) read size and type, 2) read the binary with the given size, 3) parse it with the given type, 4) go to 1). If you use a union type, or one file shares the same type, you may skip the type. You cannot drop the size, as there is no way to know where an output ends from the output itself. If you want random reads/writes, another type of data structure is necessary.
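A minimal Python sketch of that framing for the single-message-type case (so the type tag is skipped); MyMessage is a hypothetical stand-in for any protoc-generated class:

import struct

def write_messages(path, messages):
    # [size][payload] records: 4-byte little-endian size, then the bytes.
    with open(path, "wb") as f:
        for msg in messages:
            payload = msg.SerializeToString()
            f.write(struct.pack("<I", len(payload)))
            f.write(payload)

def read_messages(path, message_cls):
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if not header:              # clean EOF between records
                return
            (size,) = struct.unpack("<I", header)
            msg = message_cls()
            msg.ParseFromString(f.read(size))
            yield msg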
Searching for a field in a binary file is trickier. One way is to read and parse the outputs one by one and check for the presence of the field with HasField(). It is the most obvious, slow, yet straightforward way to do so (see the sketch below). If you want to search for a field by number (say, you want to search for optional string email = 3;), and thus search for a binary blob (like 0x1A: field number 3, wire type 2), it is not possible. In a serialized binary stream, field information is saved as a mere number. Without the exact context (the .proto schema or the binary file's structure), the number alone doesn't mean anything: there is no guarantee that 0x1A is field information at all, rather than field information from another message type, the actual number 26, or part of another number, etc. That is, you need to maintain that information yourself. You may create another file or database with the necessary information to fetch a particular message (like the location of the serialized output that carries a given field).
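A sketch of that slow scan, reusing read_messages() from the sketch above; note that HasField() only works for fields with explicit presence (message fields, proto2/proto3 optional):

def find_with_field(path, message_cls, field_name):
    for msg in read_messages(path, message_cls):
        if msg.HasField(field_name):   # presence check, no index needed
            yield msg

# e.g. every record that actually carries `email`:
# hits = list(find_with_field("data.bin", MyMessage, "email"))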
Long story short, what you ask for is beyond what the open-sourced protobuf library itself does, yet you can build it to your requirements.
I hope, this is what you are looking for:
http://temk.github.io/protobuf-utils/
This is a command-line utility for searching within a protobuf file.
