Consuming a gRPC service using Go

I plan to use gRPC as a synchronous inter-service communication protocol.
There are many different services, and I have generated a pb.go file with all the relevant client and server code using protoc with the Go gRPC plugin.
Now I'm trying to figure out the best, or the most common, way of consuming this service from another service.
Here is what I have so far:
Option 1
use the .proto file from the service (download it)
run the protoc compiler and generate the ...pb.go file for the consumer to use (sketched below)
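For example, the consumer could keep the downloaded .proto next to a go:generate directive and regenerate on demand. A minimal sketch, assuming the contract is saved as my_cool_service.proto and the standard protoc-gen-go/protoc-gen-go-grpc plugins are installed (all names here are hypothetical):

```go
// Package mycoolservicepb holds the consumer-side generated code.
package mycoolservicepb

// Regenerate the *.pb.go files from the downloaded contract with `go generate ./...`.
//go:generate protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative my_cool_service.proto
```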
Option 2
because the ...pb.go file is already generated on the gRPC service side to implement the server, and my client is another service also written in Go, I can expose the generated code as a submodule (a separate go.mod file in a subdirectory)
consumers then run go get github.com/usr/my-cool-grpc-service/client (usage sketched below)
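Consuming the published submodule would then look roughly like the following. This is only a sketch: the service, method, and message names are hypothetical, and credentials are reduced to insecure ones for brevity.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Generated client code exposed as a submodule by the service repo
	// (hypothetical import path, matching the go get above).
	client "github.com/usr/my-cool-grpc-service/client"
)

func main() {
	// Dial the service endpoint.
	conn, err := grpc.Dial("my-cool-grpc-service:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// NewMyCoolServiceClient and DoSomething are hypothetical generated names.
	c := client.NewMyCoolServiceClient(conn)
	resp, err := c.DoSomething(ctx, &client.DoSomethingRequest{})
	if err != nil {
		log.Fatalf("DoSomething: %v", err)
	}
	log.Printf("response: %v", resp)
}
```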
Option 2 seems more appealing to me because it makes consuming the service very easy and immediately available to every other service that may require it.
On the other hand, I know that the .proto file is the contract from which clients can be generated for many different languages, and that it should be used as the source of truth.
I fear that by choosing Option 2 I may hit pitfalls I'm unaware of, with regard to backwards compatibility or anything else.
So, what is the idiomatic way of consuming a gRPC service?

Related

One-click OpenAPI/Swagger build architecture for backend server code

I use Swagger to generate both client and server API code. On my frontend (React/TS/Axios), integrating the generated code is very easy: I make a change to my spec, consume the new version through NPM, and the new API methods are immediately available.
On the server side, however, the process is a lot more janky. I am using Java, and I have to copy and paste specific directories over (such as the routes, data types, etc.), and a lot of the generated code doesn't play nicely with my existing application (in terms of imports, etc.). I am having the same experience with a Flask instance that I have.
I know that comparing client to server is apples to oranges, but is there a better way to construct a build architecture so that I don't have to go through this error-prone, tedious process every time? Any pointers here?

Can I generate a gRPC stub file by referring to an external URL?

I started learning gRPC/protobuf last week, and I want to find the best architectures for microservices. One idea is to keep the IDL in a separate repository, so that any service can generate stub files without copying/pasting .proto files from another service. Is that possible?
IIRC, protoc does not support referencing protos via URL, which is unfortunate, as it's a reasonable requirement. It's possible that language-specific implementations of the code generation do enable this.
I recommend you publish a project's protos (and possibly cache the code protoc generates from them) in a separate (proto) repo. This facilitates reuse and independent versioning, and it encourages cross-language use.
If the protos are bundled into, e.g., a repo that includes a Golang server implementation, it's more difficult to clone just the protos in order to generate, e.g., a Python client.
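In Go, for instance, a consumer could clone the separate proto repo and point protoc at it via a go:generate directive. A sketch, where the repo location and file names are hypothetical:

```go
// Package greeterpb holds Go code generated from the shared proto repo.
package greeterpb

// ../shared-protos is assumed to be a local clone of the separate proto repo.
//go:generate protoc -I ../shared-protos --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative greeter.proto
```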

Do you need copies of protobufs in both client and server in web applications?

I'm not sure if this is the right forum to post this question, but I'm trying to learn gRPC/protobufs in the context of a web application. I am building the UI in Flutter and the backend in Go with MongoDB. I was able to get a simple Go service running and to query it using Kreya; however, my question now is: how do I integrate the UI with the backend? In order to make the Kreya call, I needed to import the protobufs. Do I need to maintain identical protobufs in both the frontend and the backend? Meaning, do I literally have to copy all of the protobufs in the backend into my UI codebase and compile locally there as well? This seems like a nightmare to maintain, as the protobufs now have to be maintained in two places instead of one.
What is the best way to maintain the protobufs?
Yes, but think of the protos as a shared contract between your clients and servers.
The protos define the interface by which the client is able to communicate with the server. In order for this to be effective, the client and server need to implement the same interface.
One way to do this is to store your protos in a repo that you share with any clients and servers that implement it. This provides a single source of truth for the protos. I also keep copies of the protos compiled (with protoc) to the languages I will use, e.g. Golang, Dart, etc., in this shared protos repo, and import them from the repo where needed.
Then, in your case, the client imports the Dart-generated sources and the Golang server imports the Golang-generated sources from the shared repo.
Alternatively, your client and your server could protoc compile appropriate sources when they need them, on-the-fly, usually as part of an automated build process.
Try not to duplicate the protos across clients and servers, because it will make consistency hard to maintain: it will be challenging to ensure every copy remains synchronized.
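In Go terms, the server imports the generated sources from the shared repo and implements the generated service interface. A minimal sketch, with hypothetical repo, service, and message names:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	// Go sources generated from the shared protos repo (hypothetical path).
	pb "github.com/example/shared-protos/gen/go/greeter"
)

// server implements the generated GreeterServer interface.
type server struct {
	pb.UnimplementedGreeterServer
}

func (s *server) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloReply, error) {
	return &pb.HelloReply{Message: "Hello, " + req.GetName()}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &server{})
	if err := s.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```

The Flutter client would import the Dart-generated sources from the same repo, so both sides stay pinned to one version of the contract.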

Convert reusable error-handling flow into a connector/component in Mule 4

I'm using Mule 4.2.2 Runtime. We use the error handling generated by APIkit, customized according to customer requirements, and it is quite standard across all the upcoming APIs.
I'm thinking of converting this into a connector so that it appears as a component/connector in the palette and can be reused across all the APIs instead of being copy-pasted every time.
Something like REST Connect for API specifications, which automatically converts to a connector as soon as it is published in Exchange (https://help.mulesoft.com/s/article/How-to-generate-a-connector-for-a-REST-API-for-Mule-3-x-and-4-x).
Is there any option like the above for publishing a common Mule flow so that it converts to a component/connector?
If not, which way best suits my scenario:
1) using the SDK
https://dzone.com/articles/mulesoft-custom-connector-using-mule-sdk-for-mule (or)
2) creating a JAR as mentioned in this page
https://www.linkedin.com/pulse/flow-reusability-mule-4-nagaraju-kshathriya
Please suggest which one is the best and easiest way in this case. Thanks in advance.
Using the Mule SDK (option 1) is useful for creating a connector or module in Java. Your question wasn't fully clear about what you want to encapsulate in a connector; I understand that what you want is to share parts of a flow as a connector in the palette, which is different. The XML SDK seems more in line with that. You will need to make some changes to encapsulate the flow elements, as described in the documentation. That's actually very similar to how REST Connect works.
The method described in (2) is for importing XML flows from a JAR file, but the method described by that link is actually incorrect for Mule 4. The right way to share flows through a library is the one described at https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that this method doesn't create a connector that can be used from the Anypoint Studio palette.
From personal experience: use a common flow, put it in a repository, and include it as a dependency in the POM file. An even better solution: include it as a flow in the domain app and use it along with your shared HTTPS connector.
I wrote a lot of Java-based custom components. I liked them a lot and was proud of them, but the transition from Mule 3 to Mule 4 killed most of them. Even in Mule 4, MuleSoft periodically makes changes that render components incompatible with the runtime.

How to wrap a proto message in a Go model

I am currently working on moving our REST-API-based Go service to gRPC, using protobuf. It's a huge service with a lot of APIs, and it's already in production, so I don't want to make so many changes that I ruin the existing system.
So I want to use my Go models as the source of truth; to generate the .proto messages, I think I can manage with this: Generate proto file from golang struct.
My APIs also expect the request and response according to the defined Go models. I will change them to use the .proto models for request and response, but when a request/response is passed, I want to wrap it in my Go models so that the rest of the code doesn't need any changes.
If the request is small, I can simply copy all the fields into my Go model, but for big requests or nested models it's a big problem.
1) Am I doing this the right way?
2) If not, what's the right way?
3) If yes, how can I copy big proto messages into my Go models, and vice versa for responses (see the sketch below)?
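To make the manual copy concrete for a small message, here is a sketch with hypothetical types, where pb is the protoc-generated package and User is an existing Go model:

```go
package model

// pb is the hypothetical protoc-generated package for this service.
import pb "github.com/example/my-service/gen/go/userpb"

// User is the existing Go model used throughout the codebase.
type User struct {
	ID    string
	Name  string
	Email string
}

// userFromProto copies the generated message into the domain model so the
// rest of the code can keep working against User unchanged.
func userFromProto(p *pb.User) User {
	return User{
		ID:    p.GetId(),
		Name:  p.GetName(),
		Email: p.GetEmail(),
	}
}

// userToProto converts the domain model back for responses.
func userToProto(u User) *pb.User {
	return &pb.User{
		Id:    u.ID,
		Name:  u.Name,
		Email: u.Email,
	}
}
```

For deeply nested messages, this per-field copying is exactly what becomes painful, which is the crux of question 3.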
If you want to use the Go models as the source of truth, why do you want to use the .proto-generated ones for the REST request/response? Is it because you'd like to use proteus service generation (and share the code between REST and gRPC)?
Usually, if you wanted to migrate from REST to gRPC, the most common way would probably be to use grpc-gateway (note that since around 1.10.x you can use it in-process, without resorting to the reverse proxy), but this would be "gRPC-first", where you derive REST from that, while it seems you want "REST-first", since your REST APIs are already in production. In fact, for this reason grpc-gateway probably wouldn't be totally suitable, because it could generate endpoints slightly different from your existing ones. It depends on how much you can afford to break backward compatibility (maybe you could generate a "v2" set of APIs and keep the old "v1" around for a while, giving existing clients time to migrate).
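For reference, the in-process gateway usage mentioned above looks roughly like this. A sketch assuming grpc-gateway v2 and a hypothetical Greeter service whose gateway stubs were generated with protoc-gen-grpc-gateway into the same Go package as the gRPC stubs:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"

	// Gateway stubs generated by protoc-gen-grpc-gateway (hypothetical path).
	gw "github.com/example/my-service/gen/go/greeter"
)

// greeterServer is a placeholder for the real GreeterServer implementation.
type greeterServer struct {
	gw.UnimplementedGreeterServer
}

func main() {
	ctx := context.Background()
	mux := runtime.NewServeMux()

	// In-process registration: the generated REST handlers call the gRPC
	// server implementation directly, with no reverse proxy in between.
	if err := gw.RegisterGreeterHandlerServer(ctx, mux, &greeterServer{}); err != nil {
		log.Fatalf("register: %v", err)
	}
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```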
