Best way to model gRPC messages - protocol-buffers

I would like to model messages for bidirectional streaming. In both directions I expect different types of messages, and I am unsure what the better practice would be. These are the two ideas so far:
message MyMessage {
  MessageType type = 1;
  string payload = 2;
}
In this case I would have an enum that defines which type of message it is, and a JSON payload that will be serialized and deserialized into models on both the client and server side. The second approach is:
message MyMessage {
  oneof type {
    A typeA = 1;
    B typeB = 2;
    C typeC = 3;
  }
}
In the second example a oneof is defined so that only one of the message types can be set. On both sides, a switch must be made over each of the cases (A, B, C, or none set).

If you know all of the possible types ahead of time, using oneof would be the way to go here as you have described.
The major reason for using protocol buffers is the schema definition. With the schema definition, you get types, code generation, safe schema evolution, efficient encoding, etc. With the oneof approach, you will get these benefits for your nested payload.
I would not recommend the string payload approach, since a string payload removes many of the benefits of using protocol buffers. Also, even though you don't need a switch statement to deserialize the payload, you'll likely need a switch statement at some point to make use of the data (unless you're just forwarding it on to some other system).
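As a sketch of what that dispatch looks like in Go, assuming code generated from the oneof schema above (the package alias pb and the handler functions are hypothetical):

// Dispatch on the oneof cases of MyMessage; protoc-gen-go generates one
// wrapper type per case (MyMessage_TypeA, MyMessage_TypeB, ...).
func handle(msg *pb.MyMessage) {
    switch t := msg.GetType().(type) {
    case *pb.MyMessage_TypeA:
        handleA(t.TypeA) // t.TypeA is a *pb.A
    case *pb.MyMessage_TypeB:
        handleB(t.TypeB)
    case *pb.MyMessage_TypeC:
        handleC(t.TypeC)
    case nil:
        // none of the oneof cases was set
    }
}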
Alternative option
If you don't know the possible types ahead of time, you can use the Any type to embed an arbitrary protocol buffer message into your message.
import "google/protobuf/any.proto";
message MyMessage {
  google.protobuf.Any payload = 1;
}
The Any message is basically your string payload approach, but using bytes instead of string.
message Any {
  string type_url = 1;
  bytes value = 2;
}
Using Any has the following advantages over string payload:
Encourages the use of protocol buffers to define the dynamic payload contents
Tooling in the protocol buffer library for each language for packing and unpacking messages into and out of the Any type (sketched below in Go)
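For example, a minimal sketch of that tooling in Go with the anypb package (the concrete message type pb.Foo is a placeholder):

import (
    "fmt"

    "google.golang.org/protobuf/types/known/anypb"
)

// Pack a message into Any and unpack it again.
func roundTrip(in *pb.Foo) (*pb.Foo, error) {
    payload, err := anypb.New(in) // fills in type_url and value
    if err != nil {
        return nil, err
    }
    out := &pb.Foo{}
    if !payload.MessageIs(out) { // compares against the embedded type_url
        return nil, fmt.Errorf("unexpected type %s", payload.GetTypeUrl())
    }
    return out, payload.UnmarshalTo(out)
}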
For more information on Any, see:
https://developers.google.com/protocol-buffers/docs/proto3#any
https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto

Related

How to handle a change in the interpretation of a field in a protobuf message?

If a field stores a specific value and is interpreted in a specific manner, is it possible to change this interpretation in a backwards compatible way?
Let's say I have a field that stores values of different data types.
The most generic case is to store it as a byte array and let the apps encode and decode it to the correct data type.
Common cases for data types are integers and strings, so support for those types is present.
Using a oneof structure this looks as follows:
message Foo
{
  ...
  oneof value
  {
    uint32 integer = 1;
    string text = 2;
    bytes data = 3;
  }
}
Applications that want to store an IP prefix in the value field have to use the generic data field and do the encoding and decoding correctly.
If I now want to add support for IP prefixes to the Foo message itself, so the apps don't have to deal with the encoding and decoding anymore, I could add a new field with an IpPrefix datatype to the oneof structure:
message Foo
{
  ...
  oneof value
  {
    uint32 integer = 1;
    string text = 2;
    bytes data = 3;
    IpPrefix ip_prefix = 4;
  }
}
Even though this makes life easier for the apps, I believe it breaks backwards compatibility.
If a sending app has support for the new field, it will put its IP prefix value in the ip_prefix field.
But if a receiving app does not support this new field yet, it will ignore it.
It will look for the IP prefix value in the data field, as it always did, but it won't find it there.
So the receiving app can no longer read the IP prefix value correctly.
Is there a way to make this scenario somehow backwards compatible?
PS: I realize this is a somewhat vague and perhaps unrealistic example. The exact case I need it for is the representation of RADIUS attributes in a protobuf message. These attributes are in essence a byte array that is sent over the network, but the bytes in the array have meaning and could be stored as different fields in the protobuf message. A basic attribute consists of a Type field and a Value field, where the value can be a string, integer, IP address... From time to time new datatypes (even complex ones) are added, and I would like to be able to add them in a backwards-compatible way.
There are two ways to go about this:
1. Enforce an update schedule, readers before writers
Add the new type of field to the .proto definition, but document that it should not be used except for testing and reception. Document that all readers of the message must support both the old and the new field by a specific milestone/date, after which the writers can start using it. Eventually you can deprecate the old field and new readers don't need to support it anymore.
2. Have both fields during the transition period
message Foo
{
  ...
  oneof value
  {
    uint32 integer = 1;
    string text = 2;
    bytes data = 3;
  }
  IpPrefix ip_prefix = 4;
}
Document that writers should set both data and ip_prefix during the transition period. The readers can start using ip_prefix as soon as writers have added support, and can optionally fall back to data.
Later, you can deprecate data and move ip_prefix inside the oneof without breaking compatibility.
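A sketch of the transition period in Go (the generated names Foo_Data, GetIpPrefix, and the encodePrefix/decodePrefix helpers are assumptions standing in for the app's legacy byte encoding):

// Writer side during the transition: populate both representations.
func writeFoo(prefix *pb.IpPrefix) *pb.Foo {
    return &pb.Foo{
        Value:    &pb.Foo_Data{Data: encodePrefix(prefix)}, // legacy oneof case
        IpPrefix: prefix,                                   // new typed field
    }
}

// Reader side: prefer the typed field, fall back to the legacy bytes.
func readFoo(foo *pb.Foo) *pb.IpPrefix {
    if p := foo.GetIpPrefix(); p != nil {
        return p
    }
    return decodePrefix(foo.GetData())
}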

Protobuf message / enum type rename and wire compatibility?

Is it (practically) possible to change the type name of a protobuf message type (or enum) without breaking communications?
Obviously the code using it would need to be adapted and re-compiled. The question is whether old clients that use the same structure, but the old names, would continue to work.
Example, based on a real file:
test.proto:
syntax = "proto3";
package test;
// ...
message TestMsgA {
  message TestMsgB { // should be called TestMsgZZZ going forward
    // ...
    enum TestMsgBEnum { // should be called TestMsgZZZEnum going forward
      // ...
    }
    TestMsgBEnum foo = 1;
    // ...
  }
  repeated TestMsgB bar = 1;
  // ...
}
Does the on-the-wire format of the protobuf payload change in any way if type or enum names are changed?
If you're talking about the binary format, then no: names don't matter and will not impact your ability to load data. For enums, only the integer value is stored in the payload; for fields, only the field number is stored.
Obviously if you swap two names, confusion could happen, but it should load as long as the structure matches.
If you're talking about the JSON format, then it may matter: proto3 JSON encodes enum values by name, and google.protobuf.Any embeds the fully-qualified message type name in its type_url.

Support multiple schemas in Proto3

I am creating a proto3 schema that allows multiple/arbitrary data in an incoming JSON object. I would like to convert the incoming JSON object in one shot.
For example:
{"key1":"value",
"key2": { //schema A}
}
I also want to support schema B for key2 in a different request:
{"key1":"value",
"key2": { //schema B}
}
I tried a couple of different approaches, such as oneof, but oneof requires different field names; since I am using the same key2, it does not work for me in this case.
Here is the schema:
message IncomingRequest {
  string key1 = 1;
  // google.protobuf.Any key2 = 2; --> not working
  oneof message {
    A payload = 2;
    B payload = 3; // --> duplicate field name
  }
}
Anyone know how to achieve this?
Two ways I can think of:
Message types for each request
If you know, based on the request (e.g. the HTTP URL called), whether it should be schema A or B, I recommend creating separate message types for each request. This might result in more proto types to define, but they will be simple to use in the actual code you write to consume the payloads.
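A sketch of that first option (the request names are illustrative):

// One request type per endpoint, each with a concrete schema for key2.
message IncomingRequestA {
  string key1 = 1;
  A key2 = 2;
}

message IncomingRequestB {
  string key1 = 1;
  B key2 = 2;
}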
Struct type
If you really want/have to reuse the same message type you can use the Struct proto type to encode/decode any JSON structure.
import "google/protobuf/struct.proto";

message IncomingRequest {
  string key1 = 1;
  google.protobuf.Struct key2 = 2;
}
Although from the proto type definition it doesn't look like it does what you want, protobuf encoders/decoders handle this type in a special way to give you the desired behavior.
The issue with this option is that what you gain in flexibility in the proto, you lose in expressiveness in the generated code, as you have to do a lot of edge-case checking to see whether a specific value/type is set.
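A minimal sketch of that checking in Go, using the structpb package (the field name "user" is made up for the example):

import "google.golang.org/protobuf/types/known/structpb"

// Every value in a Struct carries a dynamic kind that must be inspected.
func readKey2(req *pb.IncomingRequest) {
    s := req.GetKey2() // *structpb.Struct, may be nil
    if s == nil {
        return
    }
    if v, ok := s.Fields["user"]; ok {
        switch k := v.GetKind().(type) {
        case *structpb.Value_StringValue:
            _ = k.StringValue
        case *structpb.Value_NumberValue:
            _ = k.NumberValue // JSON numbers always arrive as float64
        default:
            // nulls, bools, nested structs, and lists each need their own case
        }
    }
}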

With Protocol Buffers, is it safe to move enum from inside message to outside message?

I've run into a use case where I'd like to move an enum declared inside a protocol buffer message outside the message, so that other messages can use the same enum.
I.e., I'm wondering if there are any issues moving from this:
message Message {
  enum Enum {
    VALUE1 = 1;
    VALUE2 = 2;
  }
  optional Enum enum_value = 1;
}
to this
enum Enum {
  VALUE1 = 1;
  VALUE2 = 2;
}

message Message {
  optional Enum enum_value = 1;
}
Would this cause any issues de-serializing data created with the first protocol buffer definition into the second?
It doesn't change the serialization data at all - the location / name of the enums are irrelevant for the actual data, since it just stores the integer value.
What might change is how some languages consume the enum, i.e. how they qualify it. Is it X.Y.Foo, X.Foo, or just Foo. Note that since enums follow C++ naming/scoping rules, some things (such as conflicts) aren't an issue: but it may impact some languages as consumers.
So: if you're the only consumer of the .proto, you're absolutely fine here. If you have shared the .proto with other people, it may be problematic to change it unless they are happy to update their code to match any new qualification requirements.
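In Go, for instance, the generated identifiers change while the wire bytes stay identical (a sketch; exact names depend on the generator conventions):

// Before the move: the nested enum's values are qualified by the message.
var before pb.Message_Enum = pb.Message_VALUE1

// After the move: the top-level enum stands on its own.
var after pb.Enum = pb.Enum_VALUE1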

Protobuf: Nesting a message of arbitrary type

In short, is there a way to define a protobuf Message that contains another Message of arbitrary type? Something like:
message OuterMsg {
  required int32 type = 1;
  required Message nestedMsg = 2; // any sort of message can go here
}
I suspect that there's a way to do this because in the various protobuf-implementations, compiled messages extend from a common Message base class.
Otherwise I guess I have to create a common base Message for all sorts of messages like this:
message BaseNestedMessage {
  extensions 1 to max;
}
and then do
message OuterMessage {
  required int32 type = 1;
  required BaseNestedMessage nestedMsg = 2;
}
Is this the only way to achieve this?
The most popular way to do this is to make optional fields for each message type:
message UnionMessage
{
  optional MsgType1 msg1 = 1;
  optional MsgType2 msg2 = 2;
  optional MsgType3 msg3 = 3;
}
This technique is also described in the official Google documentation, and is well-supported across implementations:
https://developers.google.com/protocol-buffers/docs/techniques#union
Not directly, basically; protocol buffers very much wants to know the structure in advance, and the type of the message is not included on the wire. The common Message base-class is an implementation detail for providing common plumbing code - the protocol buffers specification does not include inheritance.
There are, therefore, limited options:
use different field-numbers per message-type
serialize the message separately, include it as a bytes field, and convey the "what is this?" information separately (presumably a discriminator/enumeration; sketched below)
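The second option could be sketched like this (the names are illustrative):

message Envelope {
  required int32 type = 1;    // discriminator: which message type follows
  required bytes payload = 2; // the separately-serialized message
}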
I should also note that some implementations may provide more support for this; protobuf-net (C# / .NET) supports (separately) both inheritance and dynamic message-types (i.e. what you have above), but that is primarily intended for use only from that one library to that one library. Because this is all in addition to the specification (remaining 100% valid in terms of the wire format), it may be unnecessarily messy to interpret such data from other implementations.
As an alternative to multiple optional fields, the oneof keyword can be used since v2.6 of Protocol Buffers:
message UnionMessage {
  oneof data {
    string a = 1;
    bytes b = 2;
    int32 c = 3;
  }
}
