How to get the gRPC server IP from a Go client

I use docker-compose or Kubernetes to deploy my gRPC servers, and I want to get the server's IP address in the Go client. Does the gRPC library provide a method to get the server-side IP?
BTW, the scenario here is that I want to log the server IP to check whether Nginx, Envoy, and other L7 load balancers are making correct routing decisions.

It's an interesting question, since you already know the gRPC server address before creating the client:
conn, err := grpc.Dial(*serverAddr)
if err != nil {
    ...
}
defer conn.Close()
client := pb.NewRouteGuideClient(conn)
The standard gRPC library won't give you the end server's IP here: the client can only learn the address of the peer it connected to (grpc-go's grpc.Peer call option), and behind Nginx or Envoy that is the proxy, not the backend. If you use external load balancers/proxies, you can instead add the end server's IP to the metadata on the server side:
md := metadata.Pairs(
    "serverIP", "127.0.0.1",
)
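For completeness, here's a minimal sketch of that idea in grpc-go, using a unary server interceptor so individual handlers stay untouched. The "server-ip" header key and the hard-coded address are invented for this example; real code would discover its own IP (e.g. from the pod environment):

package main

import (
    "context"
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/metadata"
)

// serverIPInterceptor attaches this instance's IP to the response headers
// of every unary RPC, so clients can log which backend served the call.
func serverIPInterceptor(localIP string) grpc.UnaryServerInterceptor {
    return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler) (interface{}, error) {
        grpc.SetHeader(ctx, metadata.Pairs("server-ip", localIP))
        return handler(ctx, req)
    }
}

func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    // "10.0.0.7" is a stand-in for this server's own IP.
    s := grpc.NewServer(grpc.UnaryInterceptor(serverIPInterceptor("10.0.0.7")))
    // pb.RegisterRouteGuideServer(s, &routeGuideServer{}) // register your service here
    log.Fatal(s.Serve(lis))
}

On the client, pass grpc.Header(&md) as a call option when invoking the RPC and read md.Get("server-ip") after it returns; since this is ordinary response metadata, it passes through Nginx/Envoy unchanged.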
The LB distributes the RPC call to one of the available backend servers that implement the actual logic for serving the call. The LB keeps track of load on each backend and implements algorithms for distributing load fairly. The clients themselves do not know about the backend servers.
Read more: https://grpc.io/blog/loadbalancing
Remote Procedure Call (RPC) is a higher-level abstraction; you shouldn't be dealing with networking details through RPC.
If you're trying to implement client-side load balancing, you should look at gRPC's own load-balancing support (https://github.com/grpc/grpc/blob/master/doc/load-balancing.md), which is a special protocol you have to implement.

You can define a custom picker, e.g. a round-robin one.
The picker builder receives a map[balancer.SubConn]SubConnInfo of ready connections, so when you pick a SubConn you can look up its SubConnInfo, which includes the resolved backend address.
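Here's a minimal sketch of such a picker using grpc-go's balancer/base helpers; the policy name "logging_round_robin" is invented for this example:

package loggingbalancer

import (
    "log"
    "sync/atomic"

    "google.golang.org/grpc/balancer"
    "google.golang.org/grpc/balancer/base"
)

// loggingPickerBuilder builds a round-robin picker that remembers each ready
// SubConn's resolved address so it can be logged on every pick.
type loggingPickerBuilder struct{}

func (*loggingPickerBuilder) Build(info base.PickerBuildInfo) balancer.Picker {
    var scs []balancer.SubConn
    addrs := make(map[balancer.SubConn]string)
    for sc, scInfo := range info.ReadySCs {
        scs = append(scs, sc)
        addrs[sc] = scInfo.Address.Addr // the backend address behind this SubConn
    }
    return &loggingPicker{subConns: scs, addrs: addrs}
}

type loggingPicker struct {
    subConns []balancer.SubConn
    addrs    map[balancer.SubConn]string
    next     uint32
}

func (p *loggingPicker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
    if len(p.subConns) == 0 {
        return balancer.PickResult{}, balancer.ErrNoSubConnAvailable
    }
    i := atomic.AddUint32(&p.next, 1) % uint32(len(p.subConns))
    sc := p.subConns[i]
    log.Printf("routing RPC to backend %s", p.addrs[sc])
    return balancer.PickResult{SubConn: sc}, nil
}

func init() {
    // Clients select this policy by name through their service config.
    balancer.Register(base.NewBalancerBuilder("logging_round_robin", &loggingPickerBuilder{}, base.Config{}))
}

A client would then opt in with something like grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"logging_round_robin"}`), after which every pick logs the backend address it routed to.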

Related

Google Cloud HTTP(S) load balancer does not cancel connection with backend

I have a Google Kubernetes Engine cluster with several pods exposed through NodePorts, and everything is exposed via an Ingress, which creates an HTTP load balancer (LB). I am using a custom domain with a Google-managed SSL certificate for the LB.
My backend is an HTTP server written in Go, using its "net/http" package. It uses a self-signed certificate for mTLS with the LB (Google's HTTP LB accepts any certificate for mTLS).
Everything works fine except in one case: when a client creates an HTTP 1.1 connection with the LB and then cancels the request. This cancels the connection between the client and the LB, but the LB holds an open connection with my backend until the server's timeout.
My use case requires requests to stay open even for hours, so my server has huge timeout values. The business logic inside the request correctly uses the request's Context and takes into account whether the request has been canceled by the client.
Everything works as expected if the client makes an HTTP2 request and cancels it, i.e. the whole connection down to my backend is canceled.
Here is an example Go handler that simulates a cancelable long-running task:
func handleLongRunningTask(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    t := time.Now()
    select {
    case <-ctx.Done():
        log.Println("request canceled")
    case <-time.After(30 * time.Second):
        log.Println("request finished")
    }
    log.Printf("here after: %v\n", time.Since(t))
    w.WriteHeader(http.StatusOK)
}
The <-ctx.Done() case never fires for canceled HTTP 1.1 requests.
For easy testing I am using curl and Ctrl+C; this works:
curl -v --http2 'https://example.com/long_running_task'
and this does not:
curl -v --http1.1 'https://example.com/long_running_task'
It does not matter whether the NodePort is HTTPS or HTTP2; the LB behaves exactly the same with respect to requests canceled by clients.
I tried compiling the server with Go 1.14.4 and 1.13.12 and the results are the same.
Is this a bug in Kubernetes, Ingress, Google Kubernetes Engine, Google's HTTP Load Balancer, Go's HTTP server? Or is it something with HTTP 1.1 that I am missing? What can be wrong and how can I fix this?
...it is not possible to know the HTTP version in the backend, so I cannot simply reject all HTTP 1.1 requests: the LB always uses the same HTTP version when communicating with its backends, no matter the client's HTTP version.
From your description it looks like the issue might be between the GFE (Google Front End) and the backends, since the GFE may hold connections open for reuse.
My take is that you're seeing this variation between protocol versions because of how each handles connection persistence.
For HTTP2, the connection stays open until one of the parties sends a termination signal, and the earliest one takes precedence. For HTTP 1.1, however, the connection may be kept open until an explicit Connection header specifies the termination:
An HTTP/1.1 server MAY assume that a HTTP/1.1 client intends to maintain a persistent connection unless a Connection header including the connection-token "close" was sent in the request. If the server chooses to close the connection immediately after sending the response, it SHOULD send a Connection header including the connection-token close.
This might explain why HTTP1.1 follows the same timeout configuration as the LB and HTTP2 doesn't.
I'd recommend trying to actively send termination headers whenever you want to terminate a connection. An example taken from GitHub:
func (m *MyHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    log.Printf("HTTP request from %s", r.RemoteAddr)
    // Add this header to force the connection to close after serving the request.
    w.Header().Add("Connection", "close")
    fmt.Fprintf(w, "%s", m.hostname)
}
Additionally, there seem to be some success stories from switching clusters to be VPC-native, since that takes kube-proxy's connection management out of the equation.
Finally, it might be that you're in a very specific situation that is worth evaluating separately. You might want to send reproduction steps to the GKE team using Issue Tracker.
I hope this helps.

How to connect to a deployed gRPC server by IP address and port

I want to connect to a deployed gRPC server at a given IP address and port, like 192.168.0.1:50032. I have tried many things; I know gRPC recommends using a generated gRPC client, but I would also like to know whether I can post requests via Postman or some other tool against the gRPC server's interface. Any suggestions?
conn, err := grpc.Dial("192.168.0.1:50032")
if err != nil {
...
}
Here's a basic tutorial you should follow.
Basically, you're not able to post a gRPC request via Postman, because gRPC messages are binary (protobuf-serialized), while Postman is designed to work only with plain HTTP requests. You would have to deploy some kind of proxy in front of your service in order to use Postman.
From my point of view, it's much easier to just write a client that fits your needs. The greater part of the job is already done by protoc-gen-grpc, since it generates the client API; you just need to build a request and send it.
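For instance, a hand-written client can be this small. This is a sketch: the pb import path, Greeter service, and SayHello method are placeholders for whatever your protoc invocation generated:

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"

    pb "example.com/yourapp/proto" // placeholder: your generated package
)

func main() {
    // WithInsecure is fine against a plaintext dev server; use real credentials in production.
    conn, err := grpc.Dial("192.168.0.1:50032", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()

    client := pb.NewGreeterClient(conn) // generated constructor
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    resp, err := client.SayHello(ctx, &pb.HelloRequest{Name: "world"})
    if err != nil {
        log.Fatalf("RPC failed: %v", err)
    }
    log.Printf("response: %v", resp)
}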
You can't use an HTTP client to send requests against an HTTP/2 server, but you can do it with any of the available h2 client tools, for example the gRPC command-line tool: https://github.com/grpc/grpc/blob/master/doc/command_line_tool.md.
@Eli Bendersky: setting up the client side answered my question as well. This is the code I used:
conn, err := grpc.Dial("192.168.0.1:50032", grpc.WithInsecure())
Thank you for your help.

Client-side load balancing in practice seems to be almost the same as server-side load balancing. Is that so?

In server-side load balancing, the clients call an intermediate server, which then decides which instance of the actual server (or microservice) to call.
In client-side load balancing too, the clients call an intermediate server (an API gateway such as Zuul, configured with a load balancer such as Ribbon and a naming server such as Eureka), which then decides which instance of the microservice to call.
Unless we count the API gateway as part of the client, the client still doesn't know the IP address of the exact server it should send the request to. That seems a lot like server-side load balancing to me. Is there something I'm missing?
(Including the API gateway as part of the client seems weird, since it's usually deployed on a different server from the client.)
In client-side load balancing, the client does the heavy lifting of discovering and connecting to the origin server. The client may consult a registry (Eureka, Consul, maybe DDNS) to discover the end destination, and the registry will dole out a valid origin. The communication is direct, client to server, without a middleman.
In server-side load balancing, the client is dumb and makes a call to a predetermined address (usually DNS or a static IP). That device then proxies the connection (at the TCP or protocol level) to an origin server chosen based on a lookup, heartbeats, etc.
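A rough Go sketch of the client-side flavor (the registry endpoint and its JSON shape are invented for illustration): the client asks the registry for an instance, then talks to it directly.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

type instance struct {
    Host string `json:"host"`
    Port int    `json:"port"`
}

// lookup asks a registry (Eureka/Consul-like; this endpoint is made up)
// which instance of a service to use.
func lookup(service string) (instance, error) {
    var inst instance
    resp, err := http.Get("http://registry.local:8500/v1/pick/" + service)
    if err != nil {
        return inst, err
    }
    defer resp.Body.Close()
    err = json.NewDecoder(resp.Body).Decode(&inst)
    return inst, err
}

func main() {
    inst, err := lookup("orders")
    if err != nil {
        panic(err)
    }
    // Direct call, no middleman: the client itself chose the backend.
    resp, err := http.Get(fmt.Sprintf("http://%s:%d/orders/42", inst.Host, inst.Port))
    if err != nil {
        panic(err)
    }
    resp.Body.Close()
    fmt.Println("status:", resp.Status)
}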
I've seen benefits of client-side routing in that, as long as you have IP connectivity between client and server, the infrastructure work to add new services, locations, products, apps, etc. is trivial. As long as the new server can register with the registry and the client has IP access to the server, it just works, and IT does not have to be involved in rolling out your new service.
The drawback is that it makes the client a little heavier, it requires direct IP access from client to server, and it may be confusing to traditional IT folks and auditors. Each client needs to be aware of the registry and have code to make the calls (or use a sidecar/sidekick).
I've seen it in practice where a group started transitioning their apps to a Docker environment: they were able to run their Docker-based apps alongside the non-Docker versions at the same time without having to get IT involved, and to do a lot of experimentation and testing quickly and autonomously.
If you have autonomous teams, are highly advanced on the devops spectrum, and have a lot of trust with your teams, Client Side routing and load balancing may be a good experience for you.

WebSockets and Load Balancing, a bottleneck?

Suppose there are a bunch of systems that act as WebSocket drones, with a load balancer in front of them. When a WebSocket request comes into the LB, it chooses a WebSocket drone and the WebSocket is established. (I use an AWS ELB in TCP mode, SSL-terminated at the ELB.)
Question:
Does the established WebSocket now go through the LB, or does the LB hand the WebSocket request off to a drone so that there is a direct link between the client and that WebSocket drone?
If the WebSocket connection goes through the LB, this would make the LB a huge bottleneck.
Removing the LB and handing clients the direct IP of a WebSocket drone would circumvent this bottleneck, but requires building that logic myself, which I'm planning to do (depending on the answers to this question).
So are my thoughts on how this works correct?
AWS ELB as LB
After looking at the possible duplicate suggested by Pavel K, I conclude that the WebSocket connection goes through the AWS ELB, as in:
Browser <--WebSocket--> LB <--WebSocket--> WebSocketServer
This makes the ELB a bottleneck; what I would have wanted is:
Browser <--WebSocket--> WebSocketServer
Where the ELB is only used to give the client a hostname/IP of an available WebSocketServer.
DNS as LB
The above problem could be circumvented by balancing at the DNS level, as explained in the possible duplicate: DNS hands out the IP of an available WebSocketServer whenever ws.myapp.com is requested.
The downside is that this requires constantly updating DNS as WebSocketServers come up and go down (which becomes even more of a problem if your app is elastic).
Custom LB
Another option is to create a custom LB that constantly monitors the WebSocketServers and returns the IP of an available one when a client asks, as sketched below.
The downside is that the client needs to perform a separate (AJAX) request to get the IP of an available WebSocketServer, whereas with the AWS ELB the load balancing happens implicitly.
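A bare-bones Go sketch of that custom LB (hostnames and the /ws-endpoint route are invented; the monitoring loop that keeps the healthy list fresh is omitted):

package main

import (
    "encoding/json"
    "math/rand"
    "net/http"
    "sync"
)

var (
    mu      sync.RWMutex
    healthy = []string{"ws1.example.com:8080", "ws2.example.com:8080"} // maintained by a monitor loop (not shown)
)

// wsEndpoint hands the client the address of one healthy WebSocketServer;
// the client then opens the WebSocket straight to that address, so this
// process never sits on the data path.
func wsEndpoint(w http.ResponseWriter, r *http.Request) {
    mu.RLock()
    defer mu.RUnlock()
    if len(healthy) == 0 {
        http.Error(w, "no WebSocketServer available", http.StatusServiceUnavailable)
        return
    }
    json.NewEncoder(w).Encode(map[string]string{
        "addr": healthy[rand.Intn(len(healthy))],
    })
}

func main() {
    http.HandleFunc("/ws-endpoint", wsEndpoint)
    http.ListenAndServe(":9000", nil)
}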
Conclusion
Choosing the lesser evil...

Getting (non-HTTP) Client IP with load-balancer

Say I want to run something like the Nyan Cat telnet server (http://miku.acm.uiuc.edu/) and I need to handle 10,000 concurrent connections in total. I have 10 servers in addition to a load balancer; each server can handle 1,000 concurrent connections, and I want the load balancer to randomly divide the traffic among the 10 servers.
From what I've read, it's fairly simple for a load balancer to pass an HTTP request (along with the client IP) to the backend server, perhaps with FastCGI or with an X- header.
What would be the simplest way for the load balancer to pass the client IP to the backend server in this case with a simple TCP server? Would a hardware load balancer be needed, or are there ways to do this simply through software?
In other words, is there a uniform way to pass the client IP when load balancing non-HTTP traffic, the same way Google gets the client IP when it load-balances its Google Talk XMPP server or its Gmail IMAP server?
This isn't for anything specific; I'm just curious whether and how it can be done. Thanks in advance!
The simplest way is for the load balancer to make itself completely invisible and pass the connection on with the source and destination IP addresses unmolested. For this to work, the same IP address must be assigned (as a loopback address, not on a physical interface) on all 10 servers, and that is the IP address the clients connect to. Internet traffic to that IP address has to be routed to the load balancer, and the load balancer must be the default gateway for the servers.
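With that setup the backend sees the client's real address directly, because the source IP was never rewritten. A minimal Go sketch of a TCP server reading it (the port is arbitrary):

package main

import (
    "log"
    "net"
)

func main() {
    ln, err := net.Listen("tcp", ":2323")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        go func(c net.Conn) {
            defer c.Close()
            // RemoteAddr is the true client IP, since the LB did not rewrite the source.
            log.Printf("client connected from %s", c.RemoteAddr())
            c.Write([]byte("nyan\n"))
        }(conn)
    }
}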
