serialize an array of strings and null values - protocol-buffers

I'm using protobuf to serialize JSON from an API for a Flutter app.
However, I'm having an issue where I need to serialize this list, for example:
"value_array": [ "",
"",
null
]
If I use the usual:
repeated string value_array = 6;
I get an exception while parsing the JSON.
Sadly, I can't have the JSON changed on the API side. Even worse, I can't just manually remove the null from the JSON before parsing, as this element appears in many different API calls.
PS. I don't need to differentiate the empty string from null; I just want to avoid the exception.
Thanks in advance for any help.

Protobuf has a very opinionated view on JSON, and not all JSON concepts map cleanly to protobuf concepts; for example, protobuf has no notion of null.
It might be fine and reasonable to use the protobuf JSON variant if you're always talking protobuf-to-protobuf and want readability (hence text over binary), but if you're working with an external (non-protobuf) JSON tool, honestly: don't use protobuf. Use any relevant JSON-specific tool for your platform - it will do a better job of handling the JSON and supporting your needs. You can always re-map that data to your protobuf model after you have deserialized it, if you need to.
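As a sketch of that re-map approach (shown in Go rather than the asker's Dart; the payload struct and field handling are assumptions based on the snippet in the question): decode with an ordinary JSON library, collapse null to "", then copy the clean slice into the generated protobuf message.

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical struct for the question's payload; the real response
// will carry more fields.
type payload struct {
	ValueArray []*string `json:"value_array"` // pointers, so null survives decoding
}

func main() {
	raw := []byte(`{"value_array": ["", "", null]}`)

	var p payload
	if err := json.Unmarshal(raw, &p); err != nil {
		panic(err)
	}

	// Collapse null to "" (the asker doesn't need to tell them apart),
	// then copy the clean slice into the generated protobuf message.
	values := make([]string, 0, len(p.ValueArray))
	for _, s := range p.ValueArray {
		if s == nil {
			values = append(values, "")
		} else {
			values = append(values, *s)
		}
	}
	fmt.Println(len(values), values) // 3 [  ]
}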

Related

Is there a way that we can maintain a JSON in FHIR?

I want to store a JSON document in FHIR. The JSON can be anything; for example, it can be something like:
{
  "assignee": "PCC OFF SHORE",
  "dueby": "30-03-1991",
  "description": "This will be assigned to PCC off shore"
}
You can store any arbitrary data as a Binary resource or in an Attachment data type (e.g. in DocumentReference or in an extension). Technically, you could also put it in a string data type, but that wouldn't be terribly appropriate, as strings are not expected to be parseable.
Certain FHIR resources accept elements of type value[x]. That type accepts variations like valueString or valueCode. For your case, valueBase64Binary may be a good alternative.
Read: https://www.hl7.org/fhir/extensibility.html#extension
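As an illustrative sketch (the extension URL is hypothetical, and the base64 value is the question's JSON encoded, truncated here for brevity), an extension carrying the raw JSON as valueBase64Binary could look like:
{
  "extension": [{
    "url": "http://example.org/fhir/StructureDefinition/raw-task",
    "valueBase64Binary": "eyJhc3NpZ25lZSI6..."
  }]
}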

Updating data from protobuf

I'm building a microservice system with multiple disconnected components, and I'm currently trying to work out how to know which fields on an object should be updated based on the protobuf data provided.
The flow is this:
The client sends a JSON-request to an API.
The API translates the JSON-data into a protobuf struct, which is then sent along to the microservice responsible for handling it.
The microservice receives the data from the API and performs some action on it; in this case, I'm trying to change a single value in a MySQL table, such as a client's email address.
Now, the problem I have is that since protobuf (understandably) doesn't allow pointers, the protobuf object will contain zero-values for everything not provided. This means that if a customer wants to update their email address, I can't know if they also set IncludeInMailLists to false - or if it was simply not provided (having its zero-value) and shouldn't change.
The question is: how will I - from the protobuf object - know if a value is expressly set to 0, or just not provided?
My current solution is pretty much having a special UpdateCustomer object which also has an array of Fields specifying which fields the microservice should care about, but it feels like a bad solution.
Someone must have solved this better already. How should I implement it?
Protobuf's field masks are one way:
https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.FieldMask
https://github.com/golang/protobuf/issues/225
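A minimal sketch of the field-mask approach in a .proto file (the message and field names are illustrative, loosely based on the question, not an exact API):

syntax = "proto3";

import "google/protobuf/field_mask.proto";

message Customer {
  int32 id = 1;
  string email = 2;
  bool include_in_mail_lists = 3;
}

message UpdateCustomerRequest {
  Customer customer = 1;                     // the new values
  google.protobuf.FieldMask update_mask = 2; // e.g. paths: ["email"]
}

The microservice then only touches the columns named in update_mask.paths.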
But if you are using gRPC then there's a (sort of) built-in way:
gRPC wrappers
Since proto3 (protobuf v3) there's been no distinction between a primitive that is not set and a primitive that's been set to the "zero value" (false, 0, "", etc).
Instead you can use objects - in protobuf's language, a "message" - as objects can be nil/null. You've not mentioned what language you are working in, but hopefully these examples make sense.
Given an RPC service such as:
import "google/protobuf/wrappers.proto";
service Users {
rpc UpdateUser(UpdateUserRequest) returns (UpdateUserResponse)
}
message UpdateUserRequest {
int32 user_id = 1;
google.protobuf.StringValue email = 2;
}
message UpdateUserResponse {}
Note the import "google/protobuf/wrappers.proto"; is important. It gives you access to the Google protobuf wrapper types (defined in wrappers.proto in the protobuf source). These are message types whose presence you can test for.
gRPC-generated code in Java gives you methods such as .hasEmail(), which returns true if the value is present. The getter on an unset value will still return the zero value. I think the Go version uses pointers that you can test for nil instead of an explicit hasX() method.
More info / discussion in the GitHub issue linked above.
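To make the presence check concrete, here's a sketch in Go (the hand-written struct stands in for what protoc would generate from the service above; real code would use the generated type):

package main

import (
	"fmt"

	"google.golang.org/protobuf/types/known/wrapperspb"
)

// Stand-in for the protoc-generated struct for UpdateUserRequest.
type UpdateUserRequest struct {
	UserId int32
	Email  *wrapperspb.StringValue
}

func apply(req *UpdateUserRequest) {
	if req.Email == nil {
		// Field absent: the client didn't send it, leave the column alone.
		fmt.Println("email not provided: leave unchanged")
		return
	}
	// Present but possibly "": the client explicitly set it.
	fmt.Printf("update email to %q\n", req.Email.GetValue())
}

func main() {
	apply(&UpdateUserRequest{UserId: 1})                                      // unset
	apply(&UpdateUserRequest{UserId: 1, Email: wrapperspb.String("a@b.io")}) // set
}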

creating JSON payload schemas to fit JavaScript D3

I have some neo4j graphs and I want to export their information as JSON that's compatible with JavaScript's D3.
I found a fairly reasonable tutorial on doing so at this link:
https://neo4j.com/developer/example-project/
However, the one thing I don't understand is how the following data was generated:
// JSON object for whole graph viz (nodes, links - arrays)
curl http://localhost:8080/graph[?limit=50]
{"nodes":
[{"title":"Apollo 13","label":"movie"},{"title":"Kevin Bacon","label":"actor"},
{"title":"Tom Hanks","label":"actor"},{"title":"Gary Sinise","label":"actor"},
{"title":"Ed Harris","label":"actor"},{"title":"Bill Paxton","label":"actor"}],
"links":
[{"source":1,"target":0},{"source":2,"target":0},{"source":3,"target":0},
{"source":4,"target":0},{"source":5,"target":0}]}
I don’t understand how the JSON payload above is generated.
All of my neo4j graphs are exported in a neo4j JSON (which is a more complex payload structure than the one above). Which is alright but I specifically want to generate the code shown above. A curl command is just going to fetch existing data, so at the very least I need existing data formatted properly which I don’t have.
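That payload isn't something neo4j emits directly; in the tutorial, the application code behind the /graph endpoint builds it from query results. A minimal sketch of that transformation (in Go, with a hard-coded stand-in for the Cypher results):

package main

import (
	"encoding/json"
	"fmt"
)

type node struct {
	Title string `json:"title"`
	Label string `json:"label"`
}

type link struct {
	Source int `json:"source"`
	Target int `json:"target"`
}

type graph struct {
	Nodes []node `json:"nodes"`
	Links []link `json:"links"`
}

func main() {
	// Stand-in for the query results the endpoint iterates over
	// (movie title plus cast list).
	movie := "Apollo 13"
	cast := []string{"Kevin Bacon", "Tom Hanks", "Gary Sinise", "Ed Harris", "Bill Paxton"}

	g := graph{Nodes: []node{{Title: movie, Label: "movie"}}}
	for _, actor := range cast {
		g.Nodes = append(g.Nodes, node{Title: actor, Label: "actor"})
		// D3's force layout links nodes by array index here.
		g.Links = append(g.Links, link{Source: len(g.Nodes) - 1, Target: 0})
	}

	out, _ := json.Marshal(g)
	fmt.Println(string(out)) // matches the payload shown in the question
}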

Why does Elasticsearch favor JSON?

I'm a beginner with Elasticsearch. One feature I found is that Elasticsearch documents are expressed in JSON. I googled for a while but could not find the reason for that.
Can someone help explain why JSON and not XML or another format?
It is because a JSON document has a key-value structure, which helps Elasticsearch index on the basis of keys. With XML, a lot of effort would be required just to parse the data, whereas with JSON, Elasticsearch can directly index the required data by key.
Basically, there are two main standard ways to transport data between a server and a client: XML and JSON. Older services use XML as well as JSON to transfer data, as most of their long-standing consumers are tied to XML parsers, but recent services use JSON as a standard, mainly because of the simplicity that comes with it. JSON parsers are easy to build and use, while XML parsers need to be customized per field. Although there are some great libraries for parsing an XML response, like the SAX parser in Java, it's still not that straightforward. Also, JSON can be used directly in JavaScript. I hope I have answered your question.
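To illustrate the parsing difference described above, a small Go comparison (the documents are hypothetical): JSON decodes into a generic map with no schema declared, while encoding/xml wants a struct tailored to the fields.

package main

import (
	"encoding/json"
	"encoding/xml"
	"fmt"
)

func main() {
	// JSON: a generic map works with no schema declared up front.
	var doc map[string]any
	if err := json.Unmarshal([]byte(`{"user":"kim","age":33}`), &doc); err != nil {
		panic(err)
	}
	fmt.Println(doc["user"]) // kim

	// XML: encoding/xml needs a struct tailored to the fields.
	type user struct {
		Name string `xml:"name"`
		Age  int    `xml:"age"`
	}
	var u user
	if err := xml.Unmarshal([]byte(`<user><name>kim</name><age>33</age></user>`), &u); err != nil {
		panic(err)
	}
	fmt.Println(u.Name) // kim
}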

Decode a YAML serialized object

I have serialized an object in YAML and sent it to a remote worker.
The worker doesn't have the object definition, so I get a YAML::Object.
How can I access the fields inside it?
A text field looks like it is base64 encoded; how can I decode it? (No, decode64 doesn't work.)
You can pass the object as something "known between both sides" (like an OpenStruct or hash), or give the class definition to the client.
It would be interesting to have a serialization format that also serialized the class and its methods... I'll have to think about that one...
try c["bar"]
you can also see all the provided keys using c.keys
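The same idea sketched in Go with gopkg.in/yaml.v3 (the payload is hypothetical, and this mirrors the c["bar"] / c.keys suggestion rather than the Ruby API): decode into a generic map instead of a typed object, list the keys, and base64-decode a field explicitly where needed.

package main

import (
	"encoding/base64"
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	// Decode into a generic map instead of a typed object, then read
	// fields by key.
	raw := []byte("foo: 1\nbar: aGVsbG8=\n") // hypothetical payload
	var c map[string]any
	if err := yaml.Unmarshal(raw, &c); err != nil {
		panic(err)
	}
	for k := range c {
		fmt.Println("key:", k)
	}
	// If a string field is base64, decode it explicitly.
	if s, ok := c["bar"].(string); ok {
		if b, err := base64.StdEncoding.DecodeString(s); err == nil {
			fmt.Println("bar decoded:", string(b)) // hello
		}
	}
}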
