How to create a protocol buffer for a client server in Ballerina? - protocol-buffers

We wish to build a repository of functions that a developer can assemble to build a complex program.
There can exist several versions of a function, each with its own metadata. This function metadata includes the developer's full name and email address, the language the function is implemented in, and a set of keywords related to the functionality fulfilled by the function.
The versions of a function can be represented as a directed acyclic graph.
To manage the repository, we use a gRPC-based remote invocation service that allows a client to interact with a server and execute the following operations:
add_new_fn: to add either a brand new function or a new version to an existing function;
add_fns: to add multiple functions streamed by the client (note that multiple versions of a function are not allowed);
delete_fn: to delete a function (this might require reordering the versions of the function);
show_fn: to view a specific version of a function;
show_all_fns: to view all versions of a function (the versions are streamed back by the server);
show_all_with_criteria: to view all latest versions of functions implemented in a given language or related to a set of keywords (bi-directional streaming).

Assuming you just want the API defined:
syntax = "proto3";

package foo;

import "google/protobuf/any.proto";

// Convention is to name RPC method messages after the method
// Provides flexibility in evolving them distinctly too
service FnRepository {
  rpc AddFn(AddFnPackageRequest) returns (AddFnPackageResponse) {}
  rpc AddFns(stream AddFnPackageRequest) returns (stream AddFnPackageResponse) {}
  rpc DeleteFn(DelFnPackageRequest) returns (DelFnPackageResponse) {}
  ...
}
// Wrapping the payload in a request message provides extensibility
message AddFnPackageRequest {
  FnPackage fn_package = 1;
}

// Unique ID returned here
// Would likely be used to delete
message AddFnPackageResponse {
  string id = 1;
}

message Developer {
  string name = 1;
  string email = 2;
}

// Enum assumes a predefined list of languages
message Fn {
  string name = 1;
  Signature signature = 2;
  enum Language {
    GOLANG = 0;
    RUST = 1;
  }
  Language language = 3;
}

// Conceptually cleaner to define subtypes for e.g. Developer
// This does disconnect e.g. Fn from Developer
message FnPackage {
  Fn fn = 1;
  Developer developer = 2;
  Metadata metadata = 3;
}

message Metadata {
  Developer developer = 1;
  repeated string keywords = 2;
}

// Any is a way to provide polymorphism
// Since Fn signatures are arbitrary types
message Signature {
  google.protobuf.Any params = 1;
  google.protobuf.Any return = 2;
}
...
This is written entirely here and so there's no warranty that it will actually compile 😃
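The question mentions Ballerina, but as a language-neutral illustration of how the API above might be consumed, here is a minimal sketch of a Go client calling the unary AddFn method. It assumes the file above was compiled with protoc-gen-go / protoc-gen-go-grpc into a hypothetical package foopb, so identifiers like NewFnRepositoryClient are assumptions, not part of the answer.

// Sketch only: foopb is a hypothetical generated package for the .proto above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	foopb "example.com/foo/foopb" // hypothetical generated package
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := foopb.NewFnRepositoryClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// AddFn is a simple unary call; the server returns the new function's ID,
	// which a later DeleteFn call could use.
	resp, err := client.AddFn(ctx, &foopb.AddFnPackageRequest{
		FnPackage: &foopb.FnPackage{
			Fn: &foopb.Fn{Name: "quicksort", Language: foopb.Fn_RUST},
		},
	})
	if err != nil {
		log.Fatalf("AddFn: %v", err)
	}
	log.Printf("stored function with id %s", resp.GetId())
}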

Related

Is there known way to add new syntax features to Protobuf?

Protobuf provides the service keyword, which defines the RPC interface of an application.
I also want to use the concept of an entity, which is a part of a service (one service contains multiple entities). Each entity type has its own unique identifier, which makes it possible to address different entities within a service.
I would like to use a proto like this:
message UserReq {
  string username = 1;
  string password = 2;
}

message RegReq {
  uint8 result_code = 1;
}

message RemoteEntityInterface {
  MyEntity entity = 1;
}

message GiveItemResult {
  uint8 result_code = 1;
}

service MyService {
  rpc RegisterUser (UserReq) returns (RegReq) {}
  rpc Login(UserReq) returns (RemoteEntityInterface) {}
}

entity MyEntity {
  rpc GiveItem (GiveItemReq) returns (GiveItemResult) {}
}
As you can see in the example, I used entity, a keyword unknown to protobuf; it means that MyService can return an interface to some remote object (MyEntity) via the Login remote method.
What are the ways to do this (maybe by writing a plugin, or is there a known way to modify the protobuf source code)? Or are there more flexible solutions than protobuf?
I would also like to use multiple parameters per RPC; Java-like attributes on rpc, service and entity; and a data model for an entity (variables/fields) in order to add real-time replication support from an entity to another service.
I think this would be very flexible for services in game development.
The only official way to extend .proto syntax is to define custom options.
For example, you could have something like:
import "google/protobuf/descriptor.proto";

extend google.protobuf.ServiceOptions {
  optional bool is_entity = 123456;
}

service MyEntity {
  option (is_entity) = true;
  rpc GiveItem (GiveItemReq) returns (GiveItemResult) {}
}
The default code generator will not do anything special with this option, but you can access it from your own code and from a protoc plugin if you write one.
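To illustrate that last point, here is a minimal sketch of reading the option from Go, assuming the extension above was compiled into a hypothetical package mypb (so the identifier mypb.E_IsEntity is an assumption):

// Sketch only: mypb is a hypothetical package generated from a .proto file
// containing the is_entity extension shown above.
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/descriptorpb"

	mypb "example.com/mypb" // hypothetical generated package
)

// isEntity reports whether a service descriptor carries option (is_entity) = true.
// A protoc plugin would call this on each ServiceDescriptorProto it receives.
func isEntity(sd *descriptorpb.ServiceDescriptorProto) bool {
	opts := sd.GetOptions()
	if opts == nil {
		return false
	}
	v, ok := proto.GetExtension(opts, mypb.E_IsEntity).(bool)
	return ok && v
}

func main() {
	// Build a descriptor by hand just to demonstrate the round trip.
	sd := &descriptorpb.ServiceDescriptorProto{
		Name:    proto.String("MyEntity"),
		Options: &descriptorpb.ServiceOptions{},
	}
	proto.SetExtension(sd.Options, mypb.E_IsEntity, true)
	fmt.Println(isEntity(sd)) // prints: true
}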

Build proto3 gRPC API with overloaded endpoints

I am fairly new to API building, so this may be a broader question than I originally posed.
I am creating an API in Golang (using protobuf 3 and gRPC) that has two similar endpoints:
GET /project/genres
GET /project/{id}
The problem is that when I run curl localhost:8080/project/genres, the pattern matching results in the /project/{id} endpoint getting called with genres as the id. Is there some simple way around this, or do I have to build something into the server code to call the proper function based on the type?
I also tried flipping the ordering of these definitions, just in case the pattern matching had some order of operations that I didn't know about, but this didn't make a difference.
Here are the definitions in my proto file:
import "google/api/annotations.proto";

message EmptyRequest { }

message ProjectRequest {
  string id = 1;
}

message GenreResponse {
  int32 id = 1;
  string name = 2;
}

message ProjectResponse {
  int32 id = 1;
  string name = 2;
}

service ProjectService {
  rpc GetProject(ProjectRequest) returns (ProjectResponse) {
    option (google.api.http) = {
      get: "/v1/project/{id}"
    };
  }
  rpc GetGenres(EmptyRequest) returns (GenreResponse) {
    option (google.api.http) = {
      get: "/v1/project/genres"
    };
  }
}
I was able to specify a check in the URL path template to get around this issue:
rpc GetGenres(EmptyRequest) returns (GenreResponse) {
  option (google.api.http) = {
    get: "/v1/project/{id=genres}"
  };
}
This seems to have fixed the problem. I don't know if there are other solutions, or if this is the right way to do this, but I'm happy to accept other answers if something better comes in.

Will changing a message type to a similar one break backward compatibility?

I want to protect my application from future backward-compatibility issues. Right now I have this version of test.proto:
syntax = "proto3";

service TestApi {
  rpc DeleteFoo(DeleteFooIn) returns (BoolResult) {}
  rpc DeleteBar(DeleteBarIn) returns (BoolResult) {}
}

message DeleteFooIn {
  int32 id = 1;
}

message DeleteBarIn {
  int32 id = 1;
}

message BoolResult {
  bool result = 1;
}
I'm interested in the case where I later want to change the result message of DeleteBar() to a message like "DeleteBarOut":
syntax = "proto3";

service TestApi {
  rpc DeleteFoo(DeleteFooIn) returns (BoolResult) {}
  rpc DeleteBar(DeleteBarIn) returns (DeleteBarOut) {}
}

message DeleteFooIn {
  int32 id = 1;
}

message DeleteBarIn {
  int32 id = 1;
}

message DeleteBarOut {
  reserved 1;
  string time = 2;
}

message BoolResult {
  bool result = 1;
}
The question is about on-wire backward compatibility with the old .proto. Can I change the name of the result message from "BoolResult" to "DeleteBarOut"?
Or should I keep the old message name and edit the field list of "BoolResult" instead? But then how do I shield DeleteFoo() from any changes in that solution?
When making a breaking change to an API like this, it is a common practice to support both versions while transitioning. In order to do this, you would need to add a version field to the request message and then in your request handler, route the message to different backends based on which version is specified. Once there is no more traffic going to the v1 backend you can make a hard cutover to v2 and stop supporting v1.
Unfortunately, if you just change the RPC definition without versioning, it is impossible to avoid a version incompatibility between the server and the client. The other option of course is to add a new RPC endpoint rather than modifying an existing one.
In general if you are making breaking API changes you're going to have an unpleasant time.
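To make the incompatibility concrete, here is a small sketch, assuming the old and new schemas have been compiled into hypothetical Go packages oldpb and newpb. Message names never appear on the wire, so the rename by itself is harmless; the breaking part is that field 1 (bool result) disappeared and field 2 (string time) took its place, so an old client silently gets the default value:

// Sketch only: oldpb and newpb are hypothetical packages generated from the
// old and new versions of test.proto shown above.
package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"

	newpb "example.com/new/pb" // hypothetical
	oldpb "example.com/old/pb" // hypothetical
)

func main() {
	// The upgraded server now answers with the new message.
	raw, err := proto.Marshal(&newpb.DeleteBarOut{Time: "2024-01-01T00:00:00Z"})
	if err != nil {
		log.Fatal(err)
	}

	// An old client still parses the bytes as BoolResult: no error, but
	// "result" silently falls back to false and "time" is kept only as an
	// unknown field.
	var old oldpb.BoolResult
	if err := proto.Unmarshal(raw, &old); err != nil {
		log.Fatal(err)
	}
	fmt.Println(old.GetResult()) // prints: false
}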

Protobuf: how do I get the right order of a method's options?

I am writing a protoc plugin in Go that should generate documentation for our gRPC services, and I am currently struggling to determine the right order of a method's options.
First, here is what the protobuf looks like:
syntax = "proto3";

option go_package = "sample";

package sample;

import "common/extensions.proto";

message SimpleMessage {
  // Id represents the message identifier.
  string id = 1;
  int64 num = 2;
}

message Response {
  int32 code = 1;
}

enum ErrorCodes {
  RESERVED = 0;
  OK = 200;
  ERROR = 6000;
  PANIC = 6001;
}

service EchoService {
  rpc Echo (SimpleMessage) returns (Response) {
    // common.grpc_status is an extension defined elsewhere;
    // these are the possible return statuses
    option (common.grpc_status) = {
      status: "OK"
      status: "ERROR"
      status: "PANIC" // Every status string must be one of the ErrorCodes items
    };
    option (common.middlewares) = {
      middleware: "csrf"
      middleware: "session"
    };
  }
}
As you can see, there are two options here. The problem is that protoc doesn't bind positions directly to tokens. It leaves this information in a special section from which it can be recovered using so-called "paths", and these paths rely on order, while the options themselves are hidden and can only be retrieved using the proto.GetExtension function, which doesn't report the option index either.
I need this token location information to report errors. Is there any way to get the option index or something equivalent?
I am thinking about using a standalone parser just to recover the right order, but this feels somewhat awkward. I hope there's a better way.
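For orientation, the "special section" mentioned above is the file's SourceCodeInfo, which a plugin can inspect directly. Below is a rough sketch, in Go, of one way to list the locations recorded under a method's options; it assumes the descriptors were produced with source info (as in a plugin's CodeGeneratorRequest), and the helper name optionLocations is illustrative, not an existing API.

// Rough sketch: walk SourceCodeInfo to find the locations recorded under
// file.service[svcIdx].method[mthIdx].options. The path tail (beyond the
// prefix) contains the field/extension numbers of the individual options,
// and the span gives line/column positions usable for error reporting.
package main

import (
	"fmt"

	"google.golang.org/protobuf/types/descriptorpb"
)

// Field numbers used in paths: FileDescriptorProto.service = 6,
// ServiceDescriptorProto.method = 2, MethodDescriptorProto.options = 4.
func optionLocations(fd *descriptorpb.FileDescriptorProto, svcIdx, mthIdx int32) []*descriptorpb.SourceCodeInfo_Location {
	prefix := []int32{6, svcIdx, 2, mthIdx, 4}
	var out []*descriptorpb.SourceCodeInfo_Location
	for _, loc := range fd.GetSourceCodeInfo().GetLocation() {
		path := loc.GetPath()
		if len(path) < len(prefix) {
			continue
		}
		matches := true
		for i, want := range prefix {
			if path[i] != want {
				matches = false
				break
			}
		}
		if matches {
			out = append(out, loc)
		}
	}
	return out
}

func main() {
	// In a real plugin, fd comes from the CodeGeneratorRequest's proto_file list.
	var fd *descriptorpb.FileDescriptorProto
	for _, loc := range optionLocations(fd, 0, 0) {
		// Span is [startLine, startCol, endLine, endCol] (or three elements
		// when the location fits on one line), all zero-based.
		fmt.Println(loc.GetPath(), loc.GetSpan())
	}
}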

How to provide read access to implementers of a protocol while others have write access in Swift

I'd like to provide read access for certain properties to all classes/structs that implement a protocol while client classes of the protocol are allowed read+write access. Is there a way to do this in Swift?
protocol WheelsProtocol {
    var count: Int { get set }
}

struct Car: WheelsProtocol {
    var count: Int = 0

    func checkTirePressure() {
        // Here, we will iterate over the count of wheels but we should
        // not allow the number of wheels to be changed
    }
}

struct CarFactory {
    var wheels: WheelsProtocol

    init(wheels: WheelsProtocol) {
        self.wheels = wheels
    }

    mutating func configureVehicle() {
        self.wheels.count = 4
    }
}
What about a protocol for car makers and a different one for cars, something like:
protocol MakesCars {
    var wheelCount: Int { get set }
}

protocol HasWheels {
    var wheelCount: Int { get }
}

struct Car: HasWheels {
    let wheelCount: Int

    init(wheelCount: Int) {
        self.wheelCount = wheelCount
    }

    func checkTirePressure() {
        // Here, we will iterate over the count of wheels but we should
        // not allow the number of wheels to be changed
        wheelCount = 5 // COMPILER ERROR
    }
}

struct CarFactory: MakesCars {
    ...
}
The key is that you have to define a read-only property in the protocol as a var with { get } but in the object that adopts that protocol you have to put let and set it in an initializer.
There is not a way to limit access to a particular method in the way you're describing. From the documentation on Access Control:
Swift provides three different access levels for entities within your code. These access levels are relative to the source file in which an entity is defined, and also relative to the module that source file belongs to.

Public access enables entities to be used within any source file from their defining module, and also in a source file from another module that imports the defining module. You typically use public access when specifying the public interface to a framework.

Internal access enables entities to be used within any source file from their defining module, but not in any source file outside of that module. You typically use internal access when defining an app’s or a framework’s internal structure.

Private access restricts the use of an entity to its own defining source file. Use private access to hide the implementation details of a specific piece of functionality.
To accomplish this, you would need to create a separate module (i.e., a framework), give the writes internal scope, and give the reads public scope.
