How to use version control with gRPC Protocol Buffers

I have a protocol buffer file in my gRPC server, and SayHello is defined under package hello.v1:
syntax = "proto3";
package hello.v1;
service GreetService {
rpc SayHello(SayHelloRequest) returns (SayHelloResponse) {}
}
message SayHelloRequest {
string name = 1;
}
message SayHelloResponse {
}
The service may change over time, like this:
service GreetService {
  rpc SayHello(SayHelloRequest) returns (SayHelloResponse) {}
}

message SayHelloRequest {
  string name = 1;
  uint32 age = 2;
}

message SayHelloResponse {
  string reply_message = 1;
}
But users want a non-breaking service, so I want to minimize the impact on them. My question is: how can I keep both versions of SayHello, so that clients can call them through different namespaces in C++ or different packages in Go?

Generally speaking, you should have a single source of truth for your protobuf files. One solution is using a monorepo, i.e. housing both your client and server in the same monorepo. However, this approach is uncommon outside of certain large companies.
Organizations with multiple repos tend to create a separate repo for all of their protobufs, which is then included in other repos via submodules or pulled down over the network at build time.
Regardless, it is always possible to introduce a breaking change by modifying the proto, even if there is a single source of truth. You might consider running Buf as a presubmit test on all code changes to detect if a breaking change has been introduced.
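As for keeping both versions of SayHello callable side by side, one common approach is to leave hello.v1 untouched and publish the evolved service under a new package such as hello.v2, generating stubs for both. Below is a minimal Go client sketch; the generated import paths example.com/gen/hello/v1 and example.com/gen/hello/v2 are hypothetical placeholders for whatever your go_package options produce.

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    // Hypothetical import paths for the generated stubs of both proto packages.
    hellov1 "example.com/gen/hello/v1"
    hellov2 "example.com/gen/hello/v2"
)

func main() {
    conn, err := grpc.Dial("localhost:50051",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Existing clients keep calling the unchanged v1 service.
    v1 := hellov1.NewGreetServiceClient(conn)
    if _, err := v1.SayHello(context.Background(),
        &hellov1.SayHelloRequest{Name: "alice"}); err != nil {
        log.Fatal(err)
    }

    // New clients migrate to the v2 service at their own pace.
    v2 := hellov2.NewGreetServiceClient(conn)
    resp, err := v2.SayHello(context.Background(),
        &hellov2.SayHelloRequest{Name: "alice", Age: 30})
    if err != nil {
        log.Fatal(err)
    }
    log.Println(resp.GetReplyMessage())
}

Because the fully qualified service names differ (hello.v1.GreetService vs. hello.v2.GreetService), a single server can register and serve both implementations during the migration.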

Related

gRPC Proto Buff URL/URI type

Is there a way to use a Url or Uri data type inside a gRPC message? And if not, what is the best way to do this?
I am using gRPC and Protocol Buffers for this, and I run a backend Go app that triggers popup notifications displayed in my Flutter app. Each notification has a link that takes the user to a webpage when clicked.
Right now I am using a String for this, like so:
message NotificationResponse {
  string title = 1;
  string url = 2;
}
I can't seem to find a way to use Url/Uri as a type. Is there such a thing?
Storing the URL in a string is a perfectly viable solution. But if you would like to reduce the payload size a little and make instantiation a little safer, you could factor out the scheme part of the URL (e.g. https://).
To do that, you could define the following:
message Url {
  enum Schema {
    UNSPECIFIED = 0;
    HTTP = 1;
    HTTPS = 2;
    // more if needed
  }
  Schema schema = 1;
  string rest = 2;
}
and then you can use it in your message like this:
message NotificationResponse {
  string title = 1;
  Url url = 2;
}
This excludes the unnecessary characters from the string and reduces the payload size (it drops http(s) and ://), and enums are serialized more efficiently than strings.
It also makes instantiation safer, because you can restrict at least the schemes that you and other developers can use (in my example, no ftp, or you could even restrict it to https only for security).
This has one inconvenience, though: you will have to concatenate the URL back together in your client code. In my opinion this is worth the effort, since the concatenation is trivial (convert the enum value to its text value and add ://).
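The question's client is Flutter, but as a language-neutral illustration, here is a rough Go sketch of that concatenation, assuming a hypothetical generated package pb that contains the Url message:

package urlutil

import (
    "fmt"
    "strings"

    pb "example.com/gen/notifications" // hypothetical generated package
)

// BuildURL turns the structured Url message back into a plain string,
// e.g. schema = HTTPS, rest = "example.com/page" -> "https://example.com/page".
func BuildURL(u *pb.Url) string {
    scheme := strings.ToLower(u.GetSchema().String()) // enum value name, e.g. "HTTPS"
    return fmt.Sprintf("%s://%s", scheme, u.GetRest())
}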
Note: doing the same enum trick for the domain suffix (.com, .net, ...) would not be as trivial and would force you to store the path and the host in different fields (not worth it, since it increases the payload).
Let me know if you need more help.

gRPC client-side streaming with initial arguments

I'm creating a client-side streaming method whose proto definition should look similar to this:
service DataService {
  rpc Send(stream SendRequest) returns (SendResponse) {}
}

message SendRequest {
  string id = 1;
  bytes data = 2;
}

message SendResponse {
}
The problem here is that the ID is sent with each streaming message even though it is needed only once. What is your recommendation and the most optimal approach for such a use case?
One hacky approach would be to set the ID only once, within the first message, and leave it blank after that. But this API is supposed to be used by third parties, and a method definition like the one above doesn't explain that well.
I don't think something like this is supported either:
service DataService {
  rpc Send(InitialSendRequest, stream DataOnlyRequest) returns (SendResponse) {}
}
I'm currently considering a SendRequest message like the one below, but I will have to check how optimal it is compared to the first case with respect to proto marshaling:
message SendRequest {
  oneof request {
    string id = 1;
    bytes data = 2;
  }
}
Your approach of using a oneof, with the fields clearly documented to say that id is expected only in the first message on the stream and that server implementations will terminate the stream if id is set on subsequent messages, sounds good to me.
This pattern is used in grpc-lb-v1; even though the gRPC team is moving away from grpc-lb-v1, the pattern itself is a commonly used one.
I'm not very sure about its implications with respect to proto marshaling. That might be a question for the protobuf team.
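As an illustration of that documented pattern, here is a minimal Go client sketch; the generated package import path pb and the surrounding setup are assumptions:

package main

import (
    "context"

    pb "example.com/gen/dataservice" // hypothetical generated package
)

// send streams the id once in the first message, then only data chunks.
func send(ctx context.Context, client pb.DataServiceClient, id string, chunks [][]byte) error {
    stream, err := client.Send(ctx)
    if err != nil {
        return err
    }
    // The first message on the stream carries only the id.
    if err := stream.Send(&pb.SendRequest{
        Request: &pb.SendRequest_Id{Id: id},
    }); err != nil {
        return err
    }
    // Every subsequent message carries only data; a server implementing the
    // documented contract would reject an id set here.
    for _, chunk := range chunks {
        if err := stream.Send(&pb.SendRequest{
            Request: &pb.SendRequest_Data{Data: chunk},
        }); err != nil {
            return err
        }
    }
    _, err = stream.CloseAndRecv()
    return err
}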
Hope this helps.

Protocol Buffers between two different languages

We are using Go and .NET Core for our microservices infrastructure, and the services communicate with each other.
All the data passed between the services is based on protobuf definitions that we have created.
Here is an example of one of our proto files:
syntax = "proto3";
package Protos;
option csharp_namespace = "Protos";
option go_package="Protos";
message EventMessage {
string actionType = 1;
string payload = 2;
bool auditIsActive = 3;
}
The Go side is working well: the service generates the content as needed and sends it to the SQS queue. Once that happens, the .NET Core service receives the data and tries to parse it.
Here are the contents of the SQS message example:
{"#type":"type.googleapis.com/Protos.EventMessage","actionType":"PushPayload","payload":"<<INTERNAL>>"}
But we are getting an exception saying the wire type is invalid, as shown below:
Google.Protobuf.InvalidProtocolBufferException: Protocol message contained a tag with an invalid wire type.
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(CodedInputStream input)
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(CodedInputStream input)
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(UnknownFieldSet unknownFields, CodedInputStream input)
at Protos.EventMessage.MergeFrom(CodedInputStream input) in /Users/maordavidzon/projects/github_connector/GithubConnector/GithubConnector/obj/Debug/netcoreapp3.0/EventMessage.cs:line 232
at Google.Protobuf.MessageExtensions.MergeFrom(IMessage message, Byte[] data, Boolean discardUnknownFields, ExtensionRegistry registry)
at Google.Protobuf.MessageParser`1.ParseFrom(Byte[] data)
The proto file is exactly the same in both services.
Are there any options or properties we might be missing?
It looks like you're using the JSON format rather than the binary format. In that case, you want ParseJson(string json), not ParseFrom(byte[] data).
Note: the binary format is more efficient, if that matters to you. It also has better support across protobuf libraries / tools.
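To make that difference concrete on the Go producer side, here is a small sketch contrasting the two formats; the generated package import path is an assumption:

package main

import (
    "fmt"

    "google.golang.org/protobuf/encoding/protojson"
    "google.golang.org/protobuf/proto"

    pb "example.com/gen/protos" // hypothetical generated package for Protos
)

func main() {
    msg := &pb.EventMessage{ActionType: "PushPayload", Payload: "...", AuditIsActive: true}

    // Text JSON, like the SQS message shown above; parse it with ParseJson on the .NET side.
    jsonBytes, _ := protojson.Marshal(msg)

    // Compact binary; parse it with ParseFrom(byte[]) on the .NET side.
    binBytes, _ := proto.Marshal(msg)

    fmt.Println(string(jsonBytes), len(binBytes))
}

Whichever format the Go side actually writes to SQS is the one the .NET side has to parse with the matching API.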
Basically, there are two possible scenarios: either the proto files generated for .NET and Go are not from the same version, or your data has been corrupted while being transferred between the Go and .NET applications.
Protobuf is a binary protocol; check whether you have any HTTP filter or anything else that can change the incoming or outgoing stream of bytes.

How to handle weird API flow with implicit create step in custom terraform provider

Most Terraform providers demand a predefined flow: Create/Read/Update/Delete/Exists.
I am in a weird situation developing a provider against an API where this behavior diverges a bit.
There are two kinds of resources, Host and Scope. A host can have many scopes. Scopes are updated with configurations.
This generally fits well into the Terraform flow; a full CRUDE flow is possible, except in one case.
When a new Host is made, it automatically has a default scope attached to it. It is always there, cannot be deleted, etc.
I can't figure out how to have my provider handle this gracefully. I would want Terraform to treat it like any other resource, but it doesn't have an explicit CREATE/DELETE, only READ/UPDATE/EXISTS, whereas every other scope attached to the host would have CREATE/DELETE.
Importing is not an option due to density; requiring an import for every host would render the entire thing pointless.
I originally was going to attempt to split Scopes and Configurations into separate resources so one could be fulfilled by the Host (the host providing the scope ID for a configuration, while other configurations get their scope IDs from a scope resource).
However, this approach falls apart because the API for both is the same, unless I wanted to add the abstraction of creating an empty scope and then applying a configuration against it, which may not be fully supported. It would essentially be two resources controlling one underlying resource, which could lead to dramatic conflicts.
A paraphrased example of a configuration I thought about implementing:
resource "host" "test_integrations" {
name = "test.integrations.domain.com"
account_hash = "${local.integrationAccountHash}"
services = [40]
}
resource "configuration" "test_integrations_root_configuration" {
name = "root"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations.root_scope_id}"
hostnames = ["test.integrations.domain.com"]
}
resource "scope" "test_integrations_other" {
account_hash = "${local.integrationAccountHash}"
host_hash = "${host.test_integrations.id}"
path = "/non/root/path"
name = "Some Other URI Path"
}
resource "configuration" "test_integrations_other_configuration" {
name = "other"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations_other.id}"
}
In this example flow, a configuration resource and a scope resource unfortunately point to the same underlying resource, which I worry would cause conflicts or confusion about who is responsible for what, and it dramatically confuses the create/delete lifecycle.
But I can't figure out how the Terraform lifecycle would allow for a resource that only does UPDATE/READ/EXISTS if, say, a flag was given (and how state would handle that).
An alternative would be to just have a Configuration resource, but then if it were the root configuration it would need to skip create/delete, as it is inherently tied to the host.
Ideally I'd be able to handle this situation gracefully. I am trying to avoid including the root scope/configuration in the host definition as it would create a split in how they are written and handled.
The documentation for providers implies you can use a resource AS a schema object in another resource, but does not explain how or why. If it works the way I imagine, it might be possible to create a resource that is only used to inject into the host, but I don't know whether that is how it works, or how to accomplish it if so.
I believe I have tentatively found a solution after asking some folks on the Gophers Slack.
Using the AWS provider's default VPC resource as a reference, I can "clone" the resource into one with a custom Create/Delete lifecycle.
Loose Example:
func defaultResourceConfiguration() *schema.Resource {
    drc := resourceConfiguration()
    drc.Create = resourceDefaultConfigurationCreate
    drc.Delete = resourceDefaultConfigurationDelete
    return drc
}

func resourceDefaultConfigurationCreate(d *schema.ResourceData, m interface{}) error {
    // Double-check it exists, then update the resource instead of creating it.
    return resourceConfigurationUpdate(d, m)
}

func resourceDefaultConfigurationDelete(d *schema.ResourceData, m interface{}) error {
    log.Printf("[WARN] Cannot destroy Default Scope Configuration. Terraform will remove this resource from the state file, however resources may remain.")
    return nil
}
This should allow me to provide an identical resource that is designed to interact with the already existing one created by its parent host.
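To wire it up, I would register the cloned resource next to the regular one in the provider's resource map, roughly like this (untested, and the resource type names are made up):

package provider

import (
    "github.com/hashicorp/terraform/helper/schema" // SDK path at the time; newer providers use terraform-plugin-sdk
)

func Provider() *schema.Provider {
    return &schema.Provider{
        ResourcesMap: map[string]*schema.Resource{
            // Regular scope configurations with a full CRUD lifecycle.
            "example_configuration": resourceConfiguration(),
            // The default configuration created implicitly by its host:
            // same schema, but Create adopts/updates and Delete is a no-op.
            "example_default_configuration": defaultResourceConfiguration(),
        },
    }
}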

How to send a list of objects in Haxe between client and server?

I am trying to write an online message board in Haxe (OpenFL). There are lots of server/client examples online. But I am new to this area and I do not understand any of them. What is the easiest way to send a list of objects between server and client? Could you guys give an example?
You could use JSON.
You can put this in your OpenFL project (client):
var myData = [1, 2, 3, 4, 5];
var http = new haxe.Http("server.php");
http.addParameter("myData", haxe.Json.stringify(myData));
http.onData = function(resultData) {
    trace('the data was sent to the server, this is the response: ' + resultData);
}
http.request(true);
If you have a server.php file, you can access the data like this:
$myData = json_decode($_POST["myData"]);
If the server returns JSON data that needs to be read in the client, then in Haxe you need to call haxe.Json.parse(resultData).
EDIT: I'm still not sure whether the user's problem is really about sending "a list of objects"; see the comment on the question...
The easiest way is to use Haxe Serialization, either with Haxe Remoting or with your own protocol on top of TCP/UDP. The choice of protocol depends on whether you already have something built and whether you will be calling functions or simply getting/posting data.
In either case, haxe.Serializer/Unserializer will give you a format to transmit most (if not all) Haxe objects from client to server with minimal code.
See the following minimal example (from the manual) on how to use the serialization APIs. The format is string based and specified.
import haxe.Serializer;
import haxe.Unserializer;

class Main {
    static function main() {
        var serializer = new Serializer();
        serializer.serialize("foo");
        serializer.serialize(12);
        var s = serializer.toString();
        trace(s); // y3:fooi12

        var unserializer = new Unserializer(s);
        trace(unserializer.unserialize()); // foo
        trace(unserializer.unserialize()); // 12
    }
}
Finally, you could also use other serialization formats like JSON (with haxe.Json.stringify/parse) or XML, but they wouldn't be so convenient if you're dealing with enums, class instances or other data not fully supported by these formats.
