I have a bunch of gRPC microservices that use self-signed certs. I add authentication info to the gRPC channel, which is then used to identify endpoints and provide the right services.
Now I want to migrate to Istio mTLS.
In phase one, I got Istio to BYPASS all gRPC connections, and my services work as they do now.
In phase two, I want to hand off TLS to Istio, but I am stuck on how to pass the authentication information to gRPC.
How do you handle auth in an Istio mTLS setup?
gRPC can support other authentication mechanisms. Has anyone used these to inject Istio auth info into gRPC? Any other suggestions on how you implemented this in your setup?
I am using Go, in case that is useful for providing any additional information.
Thanks
One way of doing this is using grpc.WithInsecure(); this way you don't have to add certificates to your services, since the istio-proxy containers in your pods will TLS-terminate any incoming connections.
Client side:
conn, _ := grpc.Dial("localhost:50051", grpc.WithInsecure())
Server side:
s := grpc.NewServer()
lis, _ := net.Listen("tcp", "localhost:50051")
// error handling omitted
s.Serve(lis)
If you still need to use TLS for on-prem deployments, etc., you can simply use a configuration option to specify this, such as:
var conn *grpc.ClientConn
var err error
// error handling omitted do not copy paste
if config.IstioEnabled {
conn, err = grpc.Dial("localhost:50051", grpc.WithInsecure())
} else {
creds, _ := credentials.NewClientTLSFromFile(certFile, "")
conn, err = grpc.Dial("localhost:50051", grpc.WithTransportCredentials(creds))
}
Reference.
I resolved this by generating a JWT for my requests and injecting the token using an interceptor. Took inspiration from "GRPC interceptor for authorization with jwt".
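For reference, a minimal sketch of such a client-side interceptor in Go, assuming a hypothetical signToken() helper that mints the JWT; it attaches the token as a bearer credential in the outgoing metadata:
import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/metadata"
)

// jwtUnaryInterceptor injects a bearer token into the outgoing metadata of every
// unary call. signToken is a placeholder for however the JWT is generated.
func jwtUnaryInterceptor(signToken func() (string, error)) grpc.UnaryClientInterceptor {
    return func(ctx context.Context, method string, req, reply interface{},
        cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
        token, err := signToken()
        if err != nil {
            return err
        }
        // Attach the JWT so the server (or Istio) can authorize the call.
        ctx = metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+token)
        return invoker(ctx, method, req, reply, cc, opts...)
    }
}
It is wired in with grpc.Dial(addr, grpc.WithInsecure(), grpc.WithUnaryInterceptor(jwtUnaryInterceptor(signToken))). On the server side, the token can be validated in a matching server interceptor, or Istio can verify it before it reaches the service via a RequestAuthentication policy.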
Related
To comply with company guidelines requiring HTTP/2 only, how can I establish a new TLS session over the underlying connection of an HTTP/2 request?
I have successfully used the Hijack method on the http.ResponseWriter to retrieve the conn object from an HTTP/1.1 request, and with the help of the conn object and a TLS configuration, I have established a TLS server. My goal is to establish a TLS server in a similar manner, but with incoming HTTP/2 requests.
Code snippet for an HTTP/1.1 request:
hijacker, ok := w.(http.Hijacker)
if !ok {
return nil, errors.New("hijacking not supported")
}
clientConn, _, err := hijacker.Hijack()
if err != nil {
return nil, errors.New("hijacking failed")
}
tlsConn := tls.Server(clientConn, &tlsConfig)
tlsConn.Write([]byte("hello there"))
We are trying to set up a secure server that can handle different certificates based on the hostname. To do this, Service A sends information about the certificate to Service B over HTTP/1.1. Service B uses the certificate information to create a new secure (TLS) server. However, if the incoming request is using HTTP/2, this process is not possible, as HTTP/2 does not support retrieving the underlying connection.
The reason for this setup is to allow Service A, which acts as a proxy, to communicate with Service B in a secure manner. The client's original request to Service A may not trust Service B's certificate, so Service A sends the host information to Service B first, allowing B to create a secure server using the proper certificate. Service A then forwards the client's request to this new secure server, which is still using the same underlying connection.
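For illustration, the limitation surfaces like this: the http.Hijacker assertion only succeeds for HTTP/1.x response writers, so with an HTTP/2 request a handler can only detect the protocol and bail out (a sketch; the helper name is illustrative):
import (
    "crypto/tls"
    "errors"
    "net"
    "net/http"
)

// takeOverConn is an illustrative helper, not part of the original code.
func takeOverConn(w http.ResponseWriter, r *http.Request, tlsConfig *tls.Config) (net.Conn, error) {
    if r.ProtoMajor >= 2 {
        // Go's HTTP/2 ResponseWriter does not implement http.Hijacker,
        // so the raw connection is not reachable from the handler.
        return nil, errors.New("cannot hijack an HTTP/2 request")
    }
    hj, ok := w.(http.Hijacker)
    if !ok {
        return nil, errors.New("hijacking not supported")
    }
    clientConn, _, err := hj.Hijack()
    if err != nil {
        return nil, err
    }
    // tls.Server wraps the hijacked connection; the handshake runs on first read/write.
    return tls.Server(clientConn, tlsConfig), nil
}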
I have a Google Kubernetes Engine cluster with several pods exposed via NodePorts, and everything is exposed through an Ingress, which creates an HTTP load balancer (LB). I am using a custom domain with a Google-managed SSL certificate for the LB.
My backend is an HTTP server written in Go, using its "net/http" package. It uses a self-signed certificate for mTLS with the LB (Google's HTTP LB accepts any certificate for mTLS).
Everything works fine, except for one case: when a client creates an HTTP/1.1 connection with the LB and then cancels the request. This cancels the connection between the client and the LB, but the LB holds an open connection with my backend until the server's timeout.
My use case requires requests to stay open even for hours, so my server has huge timeout values. The business logic inside the request correctly uses the request's Context and takes into account whether the request is canceled by the client.
Everything works as expected if the client makes an HTTP/2 request and cancels it, i.e. the whole connection down to my backend is canceled.
Here is an example Go handler that simulates a cancelable long-running task:
func handleLongRunningTask(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
t := time.Now()
select {
case <-ctx.Done():
log.Println("request canceled")
case <-time.After(30 * time.Second):
log.Println("request finished")
}
log.Printf("here after: %v\n", time.Since(t))
w.WriteHeader(http.StatusOK)
}
The case <-ctx.Done(): branch is never taken for canceled HTTP/1.1 requests.
For easy testing I am using curl and Ctrl+C; this works:
curl -v --http2 'https://example.com/long_running_task'
and this does not:
curl -v --http1.1 'https://example.com/long_running_task'
It does not matter whether the NodePort is HTTPS or HTTP2; the LB has exactly the same behaviour regarding requests canceled by clients.
I tried compiling the server with Go 1.14.4 and 1.13.12 and the results are the same.
Is this a bug in Kubernetes, Ingress, Google Kubernetes Engine, Google's HTTP Load Balancer, Go's HTTP server? Or is it something with HTTP 1.1 that I am missing? What can be wrong and how can I fix this?
...it is not possible to know the HTTP version in the backend, so I could reject all HTTP/1.1 requests. The LB always uses the same HTTP version when communicating with its backends, no matter the client's HTTP version.
From your description it looks like the issue might be between the GFE and the backends, since the GFE might hold the connections open for reuse.
My take is that you're seeing this variation between protocol versions because of how both handle connection persistence.
For HTTP/2, the connection will stay open until one of the parties sends a termination signal, and the earliest one takes preference. But for HTTP/1.1, it might be prolonged until an explicit Connection header is sent specifying the termination:
An HTTP/1.1 server MAY assume that a HTTP/1.1 client intends to maintain a persistent connection unless a Connection header including the connection-token "close" was sent in the request. If the server chooses to close the connection immediately after sending the response, it SHOULD send a Connection header including the connection-token close.
This might explain why HTTP1.1 follows the same timeout configuration as the LB and HTTP2 doesn't.
I'd recommend trying to actively send termination headers whenever you want to terminate a connection. An example taken from GitHub:
func (m *MyHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
log.Printf("HTTP request from %s", r.RemoteAddr)
// Add this header to force to close the connection after serving the request.
w.Header().Add("Connection", "close")
fmt.Fprintf(w, "%s", m.hostname)
}
Additionally, there seem to be some success stories from switching your cluster to be VPC-native, as it takes kube-proxy connection management out of the equation.
Finally, it might be that you're in a very specific situation that is worth being evaluated separately. You might want to try to send some reproduction steps to the GKE team using Issue Tracker.
I hope this helps.
I have a server that redirects to server:443 when connecting to server:80. I have a grpc client that is connecting to server:80 with
clientConn, err = grpc.Dial("server:80", grpc.WithTransportCredentials(credentials.NewTLS(config)))
It's throwing a "tls: first record does not look like a TLS handshake" error. Is there a way to make the client follow the redirects?
The gRPC client does not handle 302 redirects. See https://github.com/grpc/grpc-java/issues/5330 - this is for Java, but it also applies to Go.
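Since the redirect will never be followed, the practical fix is to dial the TLS port directly. A minimal sketch, assuming server:443 is the gRPC endpoint behind TLS:
creds := credentials.NewTLS(config) // same *tls.Config as in the question
clientConn, err := grpc.Dial("server:443", grpc.WithTransportCredentials(creds))
if err != nil {
    // handle the dial error
}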
I want to connect to a deployed gRPC server at a given IP address and port, like 192.168.0.1:50032. I tried many things; as far as I checked, the gRPC recommendation is to use a gRPC client, but I want to try posting via Postman or any other interface to the gRPC server. Any suggestions?
conn, err := grpc.Dial("192.168.0.1:50032")
if err != nil {
...
}
Here's a basic tutorial you should follow.
Basically, you're not able to post a gRPC request via Postman, because gRPC messages are binary (protobuf-serialized), while Postman is designed to work only with plain HTTP requests. You'll have to deploy some kind of proxy in front of your service in order to use Postman.
From my point of view, it's much easier just to write your own client that fits your needs. The greatest part of the job is already done by protoc-gen-grpc, because it generates the client API; you just need to build a request and send it.
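As a rough sketch of that approach, assuming a helloworld-style Greeter service generated into a pb package (the service and message names here are illustrative, not from the question):
conn, err := grpc.Dial("192.168.0.1:50032", grpc.WithInsecure())
if err != nil {
    log.Fatalf("did not connect: %v", err)
}
defer conn.Close()

client := pb.NewGreeterClient(conn)
resp, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: "test"})
if err != nil {
    log.Fatalf("request failed: %v", err)
}
log.Printf("response: %s", resp.Message)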
You can't use an HTTP client to send requests against an HTTP/2 server, but you can do it with any of the available h2 client tools. For example: https://github.com/grpc/grpc/blob/master/doc/command_line_tool.md.
@Eli Bendersky: Setting up the client side answered my question too. This is the code I used:
conn, err := grpc.Dial("192.168.0.1:50032", grpc.WithInsecure())
Thank you for your help.
Considering the example from the Go gRPC code base:
func main() {
// Set up a connection to the server.
conn, err := grpc.Dial(address, grpc.WithInsecure())
if err != nil {
log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
c := pb.NewGreeterClient(conn)
// Contact the server and print out its response.
name := defaultName
if len(os.Args) > 1 {
name = os.Args[1]
}
r, err := c.SayHello(context.Background(), &pb.HelloRequest{Name: name})
if err != nil {
log.Fatalf("could not greet: %v", err)
}
log.Printf("Greeting: %s", r.Message)
}
When consuming a gRPC service from another service, what should the scope of the connection (conn) be? I assume it should have affinity with the scope of the request being handled by the consumer service, but I have yet to find any documentation around this. Should I be using a connection pool here?
E.g.:
gRPC consumer service receives request
establish connection to gRPC service (either directly or via pool)
make n requests to gRPC service
close gRPC connection (or release back to the pool)
From experience, gRPC client connections should be re-used for the lifetime of the client application, as they are safe for concurrent use. Furthermore, one of the key features of gRPC is rapid response from remote procedure calls, which would not be achieved if you had to reconnect on every request received.
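A minimal sketch of that pattern (the service name and pb package are illustrative): dial once at startup, share the *grpc.ClientConn across request handlers, and close it only on shutdown:
// Dial once at application startup; *grpc.ClientConn is safe for concurrent use
// and multiplexes all calls over a single HTTP/2 connection.
conn, err := grpc.Dial("greeter-service:50051", grpc.WithInsecure())
if err != nil {
    log.Fatalf("did not connect: %v", err)
}
defer conn.Close() // close on shutdown, not per request

client := pb.NewGreeterClient(conn)

// Reuse the same client for every incoming request the consumer service handles.
http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
    resp, err := client.SayHello(r.Context(), &pb.HelloRequest{Name: "world"})
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }
    fmt.Fprintln(w, resp.Message)
})
log.Fatal(http.ListenAndServe(":8080", nil))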
Nonetheless, it is highly recommended to use some kind of gRPC load balancing along with these persistent connections. Otherwise, a lot of the load may end up on a few long-lived gRPC client-server connections. Load balancing options include:
A gRPC connection pool on client side combined with a server side TCP (Layer 4) load balancer. This will create a pool of client connections initially, and re-use this pool of connections for subsequent gRPC requests. This is the easier route to implement in my opinion. See Pooling gRPC Connections for an example of grpc connection pooling on grpc client side which uses the grpc-go-pool library.
An HTTP/2 (Layer 7) load balancer with gRPC support for load balancing requests. See gRPC Load Balancing, which gives an overview of the different gRPC load balancing options. nginx recently added support for gRPC load balancing.
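As a small illustration of keeping connections persistent while still spreading load, grpc-go also ships a client-side round_robin policy that holds one connection per resolved address and distributes calls across them. A hedged sketch, where the DNS name is illustrative and assumed to resolve to all backend pods:
conn, err := grpc.Dial(
    "dns:///greeter-service.default.svc.cluster.local:50051", // illustrative name; must resolve to multiple backends
    grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"round_robin"}`),
    grpc.WithInsecure(),
)
if err != nil {
    log.Fatalf("did not connect: %v", err)
}
defer conn.Close()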