multiple gRPC servers and a unified API

Suppose we have multiple gRPC servers that provide different services. We would like to have one unified API on the client side that acts like a proxy (i.e. a translator) and calls the appropriate function based on the given request. What do you suggest for implementing such an idea? Should I have a gRPC proxy server (on the client side) that acts as a gRPC server and a client at the same time?
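One way to implement this is exactly what you describe: a gateway process that is a gRPC server toward the caller and a gRPC client toward each backend. Below is a minimal sketch in Go of that shape; the request/response types, client interfaces, and addresses are illustrative stand-ins for what protoc would generate from your .proto files, not a real API.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// Stand-ins for generated request/response types (hypothetical).
type Request struct{ Query string }
type Response struct{ Result string }

// Stand-ins for the generated client stubs of two backend services.
type UserClient interface {
	GetUser(ctx context.Context, in *Request) (*Response, error)
}
type OrderClient interface {
	GetOrder(ctx context.Context, in *Request) (*Response, error)
}

// Gateway is a gRPC server toward clients and holds one gRPC client
// connection per backend; each method routes to the right service.
type Gateway struct {
	users  UserClient
	orders OrderClient
}

func (g *Gateway) GetUser(ctx context.Context, in *Request) (*Response, error) {
	return g.users.GetUser(ctx, in) // forward as-is, or translate here
}

func (g *Gateway) GetOrder(ctx context.Context, in *Request) (*Response, error) {
	return g.orders.GetOrder(ctx, in)
}

// dialBackend opens one long-lived client connection per backend.
func dialBackend(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
}

func main() {
	userConn, err := dialBackend("localhost:50051") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer userConn.Close()
	// gw := &Gateway{users: pb.NewUserClient(userConn), ...}
	// then register gw with grpc.NewServer and serve as usual.
	_ = &Gateway{}
}
```

Each gateway method can forward the request unchanged or translate between a unified request type and the backend-specific one; the routing decision lives entirely in which stub the method calls. For a fully transparent byte-level proxy with no per-method code, grpc-go's grpc.UnknownServiceHandler server option is the usual building block.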

Related

Disambiguating and controlling access to public, internal and hybrid gRPC APIs

I currently have a mobile application that talks to a GraphQL API service, which terminates SSL and then proxies requests to gRPC services. The gRPC services only talk to each other via gRPC.
This system works okay, but writing all of the boilerplate to plumb the gRPC APIs through the GraphQL layer to the client is tedious and error-prone.
I’ve started exploring the idea of talking directly to the backend via gRPC as the tooling has improved substantially over the last few years.
One issue I’m still wondering about, though, is the best way to disambiguate APIs only meant to be called internally by other services from those callable publicly by the native client.
There is also a third category, "hybrid" APIs that can be called either internally or externally.
Examples:
Internal: Sending an SMS via Twilio
Public: Log in to account
Hybrid: Update whether an inbox item is read (both from the app when opening a conversation and on the backend when a message is sent)
One option I thought of is an interceptor that attaches a flag to the request context indicating whether the call is internal or public, and then using that flag in the code to return an error or perform additional validation on public requests (see the sketch after these options).
Another option is creating an API service which is still gRPC but fulfills the same purpose as the GraphQL API service.
A third option is disambiguating public and internal services at an organizational level which might require duplicating some APIs that exist for both.
Are there other options I’m unaware of? How have you tackled this issue?
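The first option might look something like the following Go unary interceptor. The "x-internal" metadata key, the internalOnly method set, and the assumption that your edge proxy strips that header from client traffic are all mine, not anything standard.

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// Methods that must never be reachable from the public edge
// (hypothetical names, e.g. the Twilio-backed SMS API).
var internalOnly = map[string]bool{
	"/sms.SMSService/Send": true,
}

// ctxKeyInternal keys the internal/public flag in the request context.
type ctxKeyInternal struct{}

func visibilityInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	// Assumes the edge proxy strips any client-supplied "x-internal"
	// header and sets it only on service-to-service traffic.
	md, _ := metadata.FromIncomingContext(ctx)
	internal := len(md.Get("x-internal")) > 0

	if internalOnly[info.FullMethod] && !internal {
		return nil, status.Error(codes.PermissionDenied, "internal-only API")
	}
	// Expose the flag so handlers of "hybrid" APIs can add validation.
	return handler(context.WithValue(ctx, ctxKeyInternal{}, internal), req)
}

func main() {
	srv := grpc.NewServer(grpc.UnaryInterceptor(visibilityInterceptor))
	_ = srv // register services and call srv.Serve(lis) as usual
}
```

Hybrid handlers can then read the flag back out of the context to decide how much extra validation to run on public calls.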

Service to intercept client-server websocket communication for purposes of API-untangling

I'm writing a custom client for an existing framework, but in a different language than the supported client. Now I would like to intercept the traffic of the existing client-server connection to get a better grasp of the API internals, which is always quite tedious if you have to extract it from the code alone. I had a look at Postman, but it doesn't seem to allow interception of WebSocket messages (HTTPS appears to be possible through Postman Interceptor). Is there a similar tool for WebSockets?
Basically what I'm looking for is just a relay that forwards every request as-is to the server and vice-versa for the response.
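In the absence of an off-the-shelf tool, such a relay is short to write yourself. Here is a minimal sketch in Go using the gorilla/websocket package; the listen port and upstream URL are placeholders, and it ignores the original request path and headers, which a real setup might need to forward too.

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// Accept any origin; this is a local debugging tool.
	CheckOrigin: func(r *http.Request) bool { return true },
}

const upstream = "wss://example.com/socket" // placeholder target

func relay(w http.ResponseWriter, r *http.Request) {
	client, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer client.Close()

	server, _, err := websocket.DefaultDialer.Dial(upstream, nil)
	if err != nil {
		log.Println("dial upstream:", err)
		return
	}
	defer server.Close()

	// Pump frames in both directions, logging each one for inspection.
	go pump("server->client", server, client)
	pump("client->server", client, server)
}

func pump(dir string, src, dst *websocket.Conn) {
	for {
		mt, msg, err := src.ReadMessage()
		if err != nil {
			log.Println(dir, "read:", err)
			return
		}
		log.Printf("%s: %s", dir, msg)
		if err := dst.WriteMessage(mt, msg); err != nil {
			log.Println(dir, "write:", err)
			return
		}
	}
}

func main() {
	http.HandleFunc("/", relay)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Point the client at ws://localhost:8080 instead of the real server, and every frame in both directions is logged before being forwarded unchanged.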

how to integrate grpc service with graphql?

I have a gRPC service that contains several APIs (getName, getInfo, etc.) and a gRPC endpoint, something like this:
configuration-dev-grpc.kmc-default.us-west-2**.com:443
I created a GraphQL project; how can I connect the GraphQL layer to the gRPC service through that endpoint, or do I need to do it another way?
gRPC and GraphQL are often considered alternatives but, if we consider gRPC as just procedure calls, there's no reason why a GraphQL server could not be implemented against a gRPC client to serve GraphQL clients.
At least one group has a solution:
https://github.com/ysugimoto/grpc-graphql-gateway
If you control the gRPC server, it would possibly be preferable to implement the GraphQL server alongside it, i.e. directly against whatever API it provides. Doing this would avoid the networking between gRPC client and server and the Protobuf (un)marshaling.
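As a concrete picture of the first approach, here is a sketch in Go using the graphql-go library, where each GraphQL resolver delegates to one gRPC call. The GetName types and client interface are hypothetical stand-ins for your generated code; the TLS dial matches the :443 endpoint in the question, with the host name replaced by a placeholder.

```go
package main

import (
	"context"
	"log"

	"github.com/graphql-go/graphql"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// Hypothetical stand-ins for the types protoc would generate.
type GetNameRequest struct{}
type GetNameReply struct{ Name string }
type ConfigClient interface {
	GetName(ctx context.Context, in *GetNameRequest) (*GetNameReply, error)
}

// newSchema builds a GraphQL schema whose resolvers call the gRPC client.
func newSchema(client ConfigClient) (graphql.Schema, error) {
	query := graphql.NewObject(graphql.ObjectConfig{
		Name: "Query",
		Fields: graphql.Fields{
			"name": &graphql.Field{
				Type: graphql.String,
				Resolve: func(p graphql.ResolveParams) (interface{}, error) {
					// One GraphQL field maps onto one gRPC call.
					resp, err := client.GetName(p.Context, &GetNameRequest{})
					if err != nil {
						return nil, err
					}
					return resp.Name, nil
				},
			},
		},
	})
	return graphql.NewSchema(graphql.SchemaConfig{Query: query})
}

func main() {
	// The endpoint terminates TLS on :443, so dial with TLS credentials
	// (system roots). The host below is a placeholder for your endpoint.
	conn, err := grpc.Dial("configuration-dev-grpc.example.com:443",
		grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// client := pb.NewConfigurationClient(conn) // from generated code
	// schema, _ := newSchema(client)            // then serve GraphQL over it
}
```

The pattern generalizes: every field's Resolve function wraps one stub method, which is the kind of boilerplate the linked grpc-graphql-gateway project aims to generate for you.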

Client-side load balancing in practice seems to be almost the same as server-side load balancing. Is that so?

In server-side load balancing, the clients call an intermediate server, which then decides which instance of the actual server (or microservice) to call.
In client-side load balancing, too, the clients call an intermediate server (the API gateway, Zuul for instance, configured with a load balancer such as Ribbon and a naming server such as Eureka), which then decides which instance of the microservice to call.
Unless we include the API gateway as part of the client, the client still doesn't know the IP address of the exact server to which it should send the request. That seems to me a lot like server-side load balancing. Is there something I'm missing?
(Including the API gateway as part of the client seems weird, since it's usually deployed on a different server from the client.)
In client-side load balancing, the client does the heavy lifting of discovery and connection to the origin server. The client may query a registry (Eureka, Consul, maybe dynamic DNS) to discover the end destination, and the registry will dole out a valid origin. The communication is then direct, client to server, with no middleman.
In server-side load balancing, the client is dumb and makes a call to a predetermined address (usually DNS or a static IP). That device then proxies the connection (at the TCP or protocol level) to an origin server chosen via a lookup, heartbeats, etc.
I've seen benefits in client-side routing in that, as long as you have IP connectivity between client and server, the infrastructure work to add new services, locations, products, apps, etc. is trivial. As long as the new server can "register" with the registry and the client has IP access to the server, it just works, and IT does not have to be involved in rolling out your new service.
The drawback is that it makes the client a little heavier, it requires direct IP access from client to server, and it may be confusing for traditional IT folks and auditors. Each client needs to be aware of the registry and have code to make the lookup calls (or use a sidecar/sidekick process).
I've seen this in practice when a group started to transition their apps to a Docker environment: they were able to run the Docker-based apps alongside the non-Docker versions at the same time without having to get IT involved, and to do a lot of experimentation and testing quickly and autonomously.
If you have autonomous teams, are highly advanced on the DevOps spectrum, and have a lot of trust in your teams, client-side routing and load balancing may be a good experience for you.
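For what client-side balancing looks like with no gateway in the request path, here is the gRPC-Go version: the client itself resolves all backend addresses (via DNS here, though a Eureka- or Consul-backed resolver plugs in the same way) and round-robins across them. The service name is a placeholder.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// The dns resolver returns every A record for the name, and the
	// round_robin policy spreads RPCs across those addresses directly.
	conn, err := grpc.Dial(
		"dns:///my-service.internal:50051", // placeholder service name
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// RPCs made on conn are now balanced client-side, with no proxy.
}
```

There is no middleman at request time; adding a backend is just a matter of it appearing in the resolver's answer.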

One Web API calls the other Web APIs

I have 3 Web API servers which have the same functionality. I am going to add another Web API server that will be used only as a proxy, so all clients, from anywhere and on any device, will call the Web API proxy server, and the proxy server will randomly forward the client requests to the other 3 Web API servers.
I am doing this way because:
There are a lot of client requests per minute, and I cannot use only one Web API server.
If one server dies, clients can still send requests to the other servers. (I need at least one web server responding to the clients.)
The Question is:
What is the best way to implement the Web API Proxy server?
Is there a better way to handle a high volume of client requests?
I need at least one web server to respond to clients even if 2 of my 3 servers are dead.
Please give me some links or documents that can help me.
Thanks
Sounds like you need a reverse proxy. Apache HTTP Server and NGINX can both be configured to act as a load-balancing reverse proxy.
NGINX documentation: http://nginx.com/resources/admin-guide/reverse-proxy/
Apache HTTP Server documentation: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
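If you would rather keep it in application code than run NGINX or Apache, the same random fan-out fits in a few lines with Go's standard-library reverse proxy. The backend addresses below are placeholders, and this sketch does no health checking, so a dead backend would still receive its share of requests.

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

func main() {
	// The three real Web API servers (placeholder addresses).
	backends := []*url.URL{
		mustParse("http://10.0.0.1:8080"),
		mustParse("http://10.0.0.2:8080"),
		mustParse("http://10.0.0.3:8080"),
	}

	proxy := &httputil.ReverseProxy{
		// Director rewrites each incoming request to target a
		// randomly chosen backend, as the question describes.
		Director: func(req *http.Request) {
			target := backends[rand.Intn(len(backends))]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}
	log.Fatal(http.ListenAndServe(":80", proxy))
}
```

A real deployment would add health checks (or lean on NGINX's upstream module, which supports them) so that traffic stops flowing to dead servers.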
What you are describing is called load balancing, and Azure (which it seems you are using, judging from your comments) provides it out of the box for both Cloud Services and Websites. You should create as many instances as you like under the same cloud service and open a specific port (which will be load-balanced) under the cloud service endpoints.
