How to wrap a proto message in a Go model

I am currently working on moving our REST-based Go service to gRPC, using protobuf. It's a huge service with a lot of APIs, already in production, so I don't want to make so many changes that I ruin the existing system.
So I want to use my Go models as the source of truth, and to generate the .proto messages I think I can manage with this - Generate proto file from golang struct
Now my APIs also expect requests and responses according to the defined Go models. I will change them to use the .proto models for the request and response, but when a request/response is passed I want to wrap it in my Go models, so the rest of the code doesn't need any changes.
In that case, if the request is small I can simply copy all the fields into my Go model, but for big requests or nested models it's a big problem.
1) Am I doing this the right way?
2) If no, what's the right way?
3) If yes, how can I copy big proto messages to my Go models, and vice versa for the response?

If you want to use the Go models as the source of truth, why do you want to use the .proto-generated ones for the REST request/response? Is it because you'd like to use proteus service generation (and share the code between REST and gRPC)?
Usually, if you wanted to migrate from REST to gRPC, the most common way would probably be to use grpc-gateway (note that since around 1.10.x you can use it in-process without resorting to the reverse proxy), but this would be "gRPC-first", where you derive REST from gRPC, while it seems you want "REST-first", since your REST APIs are already in production. In fact, for this reason grpc-gateway probably wouldn't be totally suitable, because it could generate slightly different endpoints from your existing ones. It depends on how much you can afford to break backward compatibility (maybe you could generate a "v2" set of APIs and keep the old "v1" around for a little while, giving existing clients time to migrate).
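As for your question 3), if you do end up keeping both representations, the usual pattern is a pair of conversion functions next to each Go model. A minimal sketch, assuming a hypothetical generated package pb with a User message (your real messages will differ):

```go
package model

import (
	pb "example.com/myservice/gen/userpb" // hypothetical generated package
)

type User struct {
	ID    int64
	Name  string
	Email string
}

// UserFromProto copies the generated message into the existing Go model,
// so the rest of the codebase keeps working with model.User.
func UserFromProto(p *pb.User) *User {
	if p == nil {
		return nil
	}
	return &User{
		ID:    p.GetId(),
		Name:  p.GetName(),
		Email: p.GetEmail(),
	}
}

// ToProto converts the model back into the generated message for the
// gRPC response. Nested models get their own FromProto/ToProto pairs,
// which the parent converters call field by field.
func (u *User) ToProto() *pb.User {
	if u == nil {
		return nil
	}
	return &pb.User{Id: u.ID, Name: u.Name, Email: u.Email}
}
```

For deeply nested messages this gets boilerplate-heavy, so teams often generate these mappers or use a reflection-based copier, but hand-written converters remain the most common approach and the easiest to debug.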


Do you need copies of protobufs in both client and server in web applications?

I'm not sure if this is the right forum to post this question, but I'm trying to learn gRPC/protobufs in the context of a web application. I am building the UI in Flutter and the backend in Go with MongoDB. I was able to get a simple Go service running and query it using Kreya; however, my question now is: how do I integrate the UI with the backend? In order to make the Kreya call, I needed to import the protobufs. Do I need to maintain identical protobufs in both the frontend and backend? Meaning, do I literally have to copy all of my protobufs in the backend into my UI codebase and compile locally there as well? This seems like a nightmare to maintain, as now the protobufs have to be maintained in two places, as opposed to one.
What is the best way to maintain the protobufs?
Yes, but think of the protos as a shared contract between your clients and servers.
The protos define the interface by which the client is able to communicate with the server. In order for this to be effective, the client and server need to implement the same interface.
One way to do this is to store your protos in a repo that you share with any clients and servers that implement them. This provides a single source of truth for the protos. I also generate copies of the protos compiled (with protoc) to the languages I will use, e.g. Golang, Dart etc., in this shared protos repo, and import them from the repo where needed.
Then, in your case, the client imports the Dart-generated sources and the Golang server imports the Golang-generated sources from the shared repo.
Alternatively, your client and your server could protoc-compile the appropriate sources when they need them, on the fly, usually as part of an automated build process.
Try not to duplicate the protos across clients and servers; it will make it challenging to maintain consistency, since you'd have to ensure every copy remains synchronized.
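To make the shared-repo idea concrete, here's a minimal sketch in Go. All repo paths and generated names here are hypothetical:

```go
package server

import (
	"google.golang.org/grpc"

	// Hypothetical shared repo that holds user/user.proto plus
	// checked-in generated sources, e.g.:
	//   gen/go/user/    (from protoc --go_out --go-grpc_out)
	//   gen/dart/user/  (from protoc --dart_out)
	userpb "github.com/acme/protos/gen/go/user"
)

// NewServer wires a service implementation to the generated interface.
// The Flutter client imports the matching Dart sources from
// gen/dart/user, so both sides are built from the single copy of the
// .proto files.
func NewServer(impl userpb.UserServiceServer) *grpc.Server {
	s := grpc.NewServer()
	userpb.RegisterUserServiceServer(s, impl) // generated registration helper
	return s
}
```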

Consuming a gRPC service using Go

I plan to use gRPC as an inter-service synchronous communication protocol.
There are lots of different services, and I have generated a pb.go file with all the relevant code for client and server using protoc with the Go gRPC plugin.
Now I'm trying to figure out the best way or the common way of consuming this service from another service.
Here is what I have so far:
Option 1
use the .proto file from the service (download it)
run the protoc compiler and generate the ...pb.go file for the consumer to use
Option 2
because the ...pb.go is already generated on the gRPC service side to implement the server, and my client is another service written in Go, I can expose this as a sub-module (another .mod file in a subdirectory)
use go get github.com/usr/my-cool-grpc-service/client
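For concreteness, consuming the sub-module from another Go service would look something like this (the module path is from above; the generated names are hypothetical):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Nested module published from the service repo; the consumer never
	// runs protoc, it just `go get`s the generated client.
	svc "github.com/usr/my-cool-grpc-service/client"
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := svc.NewMyCoolServiceClient(conn) // hypothetical generated constructor
	if _, err := client.Ping(context.Background(), &svc.PingRequest{}); err != nil {
		log.Fatal(err)
	}
}
```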
Option 2 seems more appealing to me because it makes the consumption of a service very easy and available to all other services that may require it.
On the other hand, I know that the .proto file is the contract that can generate clients for many different languages and should be used as the source of truth.
I fear that by choosing Option 2 I may be unaware of pitfalls with regards to backwards compatibility or other topics.
So, what is the idiomatic way of consuming a gRPC service?

gRPC and schema evolution guarantees?

I am evaluating gRPC. On the subject of compatibility with schema evolution, I did find that protocol buffers, which gRPC exchanges for data serialization, have a format such that different evolutions of data in protobuf format can stay compatible, as long as the schema evolution allows it.
But that does not tell me whether two iterations of a gRPC client/server will be able to exchange a command that did not change in the schema, regardless of changes in the rest of the schema.
Does gRPC guarantee that an older generated version of client or server code will always be able to launch/answer a command that has not changed in the schema file, with more recent schema-generated code on the other side, regardless of the rest of the schema? (Assuming no other breaking changes, like a non-backward-compatible gRPC version change.)
gRPC has compatibility for a method as long as:
the proto package, service name, and method name are unchanged
the proto request and response messages are still compatible
the cardinality (unary vs streaming) of the request and response message remains unchanged
New services and methods can be added at will and not impact compatibility of preexisting methods. This doesn't get discussed much because those restrictions are mostly what people would expect.
There is actually some wiggle room in changing cardinality as unary is encoded the same as streaming on-the-wire, but it's generally just better to assume you can't change cardinality and add a new method instead.
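The message-compatibility rule is easy to see at the protobuf level: unknown fields are skipped (and, in current libraries, preserved) on decode. A sketch, with pbv1/pbv2 standing in for hypothetical packages generated from the old and new versions of the same .proto:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"

	pbv1 "example.com/gen/v1" // message GetUserRequest { string user_id = 1; }
	pbv2 "example.com/gen/v2" // same message, plus: bool include_profile = 2;
)

func main() {
	// Bytes produced by the newer schema...
	newBytes, err := proto.Marshal(&pbv2.GetUserRequest{
		UserId:         "42",
		IncludeProfile: true, // tag 2 is unknown to the v1 code
	})
	if err != nil {
		panic(err)
	}

	// ...still decode cleanly with the older generated code; the
	// unchanged field round-trips and the new field is skipped.
	var old pbv1.GetUserRequest
	if err := proto.Unmarshal(newBytes, &old); err != nil {
		panic(err)
	}
	fmt.Println(old.GetUserId()) // "42"
}
```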
This topic is discussed in the Modifying gRPC Services over Time talk (PDF slides and YouTube links are available). I'll note that the slides are not intended to stand alone.

Management layer above Thrift

Thrift sounds awesome, but I can't find some basic stuff I'm used to in RPC frameworks (such as HttpServlet). Examples of the things I can't find: session management, filtering, upload/download progress.
I understand that the missing stuff might be a management layer on top of Thrift. If so, is there any example of such a layer? Perhaps AOP (Aspect-Oriented)?
I can't imagine such a layer that compiles to all languages, and that's what I'm missing. Taking session management as an example: there might be several clients that all need to do some authentication and pass the session_id upon each RPC. I would expect a similar API for all languages for doing so.
Does anyone know of a management layer for Thrift?
So thrift itself is not going to help you out a lot here.
I have had similar desires, and have a few suggestions:
1. Put your management objects into the IDL
Simply add an api token or common transfer data struct as a parameter to all of your service methods. Set it as parameter id 15 so that it will always be the last parameter, even if you add others in the middle.
As the first step in your handler you can validate/store/do whatever with the extra data.
This has the advantage that it is valid in any platform that thrift supports.
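In Go, for example, the handler side of this idea might look like the following sketch (the RequestContext struct stands in for a hypothetical IDL struct declared as parameter 15 on every method):

```go
package handler

import (
	"context"
	"errors"
)

// RequestContext mirrors a hypothetical Thrift IDL struct:
//
//   struct RequestContext { 1: string sessionId, 2: string authToken }
//   service UserService { User getUser(1: i64 id, 15: RequestContext rc) }
type RequestContext struct {
	SessionID string
	AuthToken string
}

type User struct{ ID int64 }

type userHandler struct {
	sessions map[string]string // sessionID -> expected token
}

// GetUser shows the first step of every handler: validate the common
// transfer data before doing any real work.
func (h *userHandler) GetUser(ctx context.Context, id int64, rc *RequestContext) (*User, error) {
	if rc == nil || h.sessions[rc.SessionID] != rc.AuthToken {
		return nil, errors.New("invalid session")
	}
	return &User{ID: id}, nil
}
```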
2. Use thrift over http
If you use HTTP as your transport, you can include whatever data you want as HTTP headers, and the thrift content as the body.
This will often require a custom HTTP client for every platform you use to inject the data, and a custom handler on the server to read the data, but neither of those is prohibitively difficult.
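A sketch of the server half in Go, using only the standard library: a plain HTTP middleware pulls the management data out of the headers (names hypothetical) and stashes it in the request context before the Thrift processor handles the body:

```go
package middleware

import (
	"context"
	"net/http"
)

type sessionKey struct{}

// WithSession extracts management data from HTTP headers and makes it
// available to the Thrift handler via the request context.
func WithSession(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sid := r.Header.Get("X-Session-Id") // hypothetical header name
		if sid == "" {
			http.Error(w, "missing session", http.StatusUnauthorized)
			return
		}
		ctx := context.WithValue(r.Context(), sessionKey{}, sid)
		next.ServeHTTP(w, r.WithContext(ctx)) // Thrift-over-HTTP handler runs next
	})
}
```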
3. Hack the protocol
It is possible to create your own custom protocol that wraps another protocol and injects custom data. Take a look at how the multiplexed protocol works in the thrift library for most languages (the C# implementation, for example): it sends the method name across the wire as service:method. The multiplexed processor unwraps this encoding and passes it on to the appropriate processor.
I have used a similar method to encode arbitrary key/value pairs (like http headers) inside the method name.
The downside to this is that you need to write a more complicated extension for each platform you will be using - but only once. How this works varies a bit from language to language, but it is generally simple enough once you figure it out.
These are just a few ideas I have had, and I am sure there are others. The nice thing about thrift is how the individual components are decoupled from each other. If you have special needs you can swap any of them out as you need to to add specific functionality.

Should I build a REST backend for GWT application

I am planning a new application and have been experimenting with GWT as a possible frontend. The design question I am facing is this.
Should I use
Option A: GWT-RPC and build the app quickly
Option B: Build a REST backend using Spring MVC 3.0 with all the great @Controller, @Service, @Repository annotations, and build a client-side library to talk to the backend using the GWT overlay features and the GWT RequestBuilder?
I am interested in all the pros and cons, and in people's experiences with this type of design.
Ask yourself the question: "Will I need to reuse the server-side interface with a non-GWT front-end?"
If the answer is "no, I'll just have a GWT client": You can use GWT-RPC, and take advantage of the fact that you can use your Java objects both on the server and the client-side. This can also make the communication a bit more efficient, at least when used with <inherits name="com.google.gwt.user.RemoteServiceObfuscateTypeNames" />, which shortens the type names to small numeric values. You'll also get the advantage of better error handling (using Exceptions), type safety, etc.
If the answer is "yes, I'll make my service accessible to multiple kinds of front-ends": You can use REST with JSON (or XML), which can also be understood by non-GWT clients. In addition to switching clients, this would also allow you to switch to a different server implementation (maybe non-Java) in the future more easily. The disadvantage is that you'll probably have to write wrappers (JavaScript Overlay Types) or transformation code on the GWT client side to build nice Java objects from the JSON objects. You'll have to be especially careful when you deploy a new version of the service, which brings us back to the lack of type safety.
The third option of course would be to build both. I'd choose this option if the public REST interface should be different from the GWT-RPC interface anyway - maybe providing just a subset of easy-to-use services.
You can do both if you also use the RestyGWT project. It will make calling REST-based JSON resources as easy as using GWT-RPC. Plus, you can typically reuse the same request/response DTOs from the server side on the client side.
We ran into the same issue when we created the Spiffy UI Framework. We chose REST and I would never go back. I'd even say GWT-RPC is a GWT Anti-pattern.
REST is a good idea even if you never intend to expose your REST endpoints. Creating a REST API will make your UI faster, your API better, and your entire application more maintainable.
I would say build a REST backend. In my last project we started by developing with GWT-RPC for the first few months because we wanted fast bootstrapping. Later on, when we needed the REST API, the refactoring was so expensive that we ended up with two backend APIs (REST and RPC).
If you build a proper REST backend, plus deserialization infrastructure on the client side (to transform the JSON/XML into GWT Java objects), then the benefit of RPC is almost nothing.
Another sometimes-forgotten advantage of the REST approach is that it's more natural to the browser running the client; RPC is a proprietary protocol where all the requests use POST. You can benefit from client-side caching when reading resources in the standard way.
Answering ams' comments:
Regarding the RPC protocol: the last time I "sniffed" it using Firebug it didn't look like JSON, so I don't know about that. Though even if it is JSON-based, it still uses only the HTTP POST method to communicate with the server, so my point about caching still stands: the browser won't cache POST requests.
Regarding the retrospective and what could have been done better: writing the RPC service in a resource-oriented architecture could make later porting to REST easier. Remember that in REST one usually exposes resources with the basic CRUD operations; if you focus on that approach when writing the RPC service, then you should be fine.
The REST architectural style promotes inspectable messages (which aids debugging and security), API evolution, multiple platforms, simple interfaces, failure recovery, high scalability, and (optionally) extensible systems via code on demand. It trades per-interaction performance for overall network efficiency. It reduces the server's control over consistent application behavior.
The "RPC style" (as we speak of it in opposition to REST) promotes platform uniformity, interface variability, code generation (and thereby the ability to pretend the network doesn't exist, but see the Fallacies), and customized interactions. It trades overall network efficiency for high per-interaction performance. It increases the server's control over consistent application behavior.
If your application desires the former qualities, use the REST style. If it desires the latter, use the RPC style.
If you're planning on using Hibernate/JPA on the server side and sending the resulting POJOs with relational data in them to the client (i.e. an Employee object with a collection of Phones), definitely go with the REST implementation.
I started my GWT project a month ago using GWT-RPC. All was well until I tried to serialize an object from the underlying DB with a one-to-many relationship in it, and got the dreaded:
com.google.gwt.user.client.rpc.SerializationException: Type 'org.hibernate.collection.PersistentList' was not included in the set of types which can be serialized by this SerializationPolicy
If you encounter this and want to stay with GWT RPC you will have to use something like:
GWT RequestFactory (www.gwtproject.org/doc/latest/DevGuideRequestFactory.html) - which forces you to write 3+ classes/interfaces per POJO you want to share with the client. OUCH!
Gilead (sourceforge.net/projects/gilead/) - which appears to be a dead project.
I'm now using RestyGWT. The switch was fairly painless, and my POJOs serialize without issue.
I would say that this depends on the scope of your total application. If your backend should be used by other clients, or needs to be extensible, then create a separate module using REST. If the backend is to be used by only this client, then go for the GWT-RPC solution.