Establishing a new TLS Server with incoming HTTP2 Requests - go

To comply with company guidelines requiring HTTP/2 only, how can I establish a new TLS server on the underlying connection of an incoming HTTP/2 request?
I have successfully used the Hijack method on the http.ResponseWriter to retrieve the underlying conn from an HTTP/1.1 request, and with that conn and a TLS configuration I have established a TLS server. My goal is to establish a TLS server in the same manner, but for incoming HTTP/2 requests.
Code snippet for the HTTP/1.1 case:
hijacker, ok := w.(http.Hijacker)
if !ok {
    return nil, errors.New("hijacking not supported")
}
clientConn, _, err := hijacker.Hijack()
if err != nil {
    return nil, errors.New("hijacking failed")
}
tlsConn := tls.Server(clientConn, &tlsConfig)
tlsConn.Write([]byte("hello there"))
We are trying to set up a secure server that can serve different certificates based on the hostname. To do this, Service A sends information about the certificate to Service B over HTTP/1.1, and Service B uses that information to create a new TLS server. However, if the incoming request uses HTTP/2, this does not work, because Go's HTTP/2 server does not allow hijacking the underlying connection.
The reason for this setup is to allow Service A, which acts as a proxy, to communicate with Service B securely. The client's original request to Service A may not trust Service B's certificate, so Service A sends the host information to Service B first, allowing B to create a TLS server using the proper certificate. Service A then forwards the client's request to this new TLS server over the same underlying connection.
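For reference, here is a minimal sketch of the HTTP/1.x-only path (the handler name and the tlsConfig placeholder are assumptions for illustration). ResponseWriters for HTTP/2 requests intentionally do not implement http.Hijacker, which is why the type assertion below can only succeed for HTTP/1.x:
package hijack // illustrative package name

import (
    "crypto/tls"
    "log"
    "net/http"
)

// tlsConfig stands in for whatever per-hostname certificate configuration
// Service B builds; it is a placeholder for this sketch.
var tlsConfig tls.Config

func handle(w http.ResponseWriter, r *http.Request) {
    // ResponseWriters for HTTP/2 requests do not implement http.Hijacker,
    // so the raw net.Conn is only reachable for HTTP/1.x requests.
    if r.ProtoMajor >= 2 {
        http.Error(w, "cannot hijack an HTTP/2 stream", http.StatusHTTPVersionNotSupported)
        return
    }
    hijacker, ok := w.(http.Hijacker)
    if !ok {
        http.Error(w, "hijacking not supported", http.StatusInternalServerError)
        return
    }
    clientConn, _, err := hijacker.Hijack()
    if err != nil {
        log.Printf("hijack failed: %v", err)
        return
    }
    defer clientConn.Close()

    // Same approach as the HTTP/1.1 snippet: start a TLS server on the raw conn.
    tlsConn := tls.Server(clientConn, &tlsConfig)
    if _, err := tlsConn.Write([]byte("hello there")); err != nil {
        log.Printf("write failed: %v", err)
    }
}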

Related

How to send Client Hello over http proxy tunnel

I have a client which establishes a TLS connection to a backend service.
There are two kinds of scenarios that I encounter.
Direct network: client--->server
In this environment, the client connects directly to the server, as in the code below.
var d tls.Dialer
//...
d.Config = &tls.Config{
    //...
}
//...
c1, err := d.Dial("tcp", addr)
Proxy network: client--->proxy--->server
In this environment, the client is behind an HTTP proxy and needs to use the proxy's HTTP tunnel to forward traffic between the client and the server.
I use golang.org/x/net/proxy in the client to connect to the proxy; since it is an HTTP proxy, the client uses a net.Dialer to reach the proxy over TCP.
dialer, err := proxy.FromURL(proxyURL, &net.Dialer{
    Timeout:   TCP_CONNECT_TIMEOUT,
    KeepAlive: TCP_KEEPALIVE_TIMEOUT,
})
c2, err := dialer.Dial("tcp", addr)
In case 1, the client starts a TLS connection; in the network traffic I can see the client open a TCP connection and, after the 3-way handshake, send a Client Hello to the server.
In case 2, the client first connects to the HTTP proxy over TCP (e.g. 10.0.0.1:8080), then sends CONNECT to the proxy, and the proxy returns Connection Established; however, the client does NOT send a Client Hello to the server.
For case 2, how and where in the client should I implement sending the Client Hello?
Thanks in advance.
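For reference, one possible shape of the case 2 flow, hedged as a sketch rather than a drop-in fix: after the CONNECT tunnel is established, wrapping the tunneled connection with tls.Client and calling Handshake is what sends the Client Hello. The function and parameter names below (dialViaHTTPProxy, proxyAddr, targetAddr, cfg) are placeholders, and cfg is assumed to carry the ServerName of the target host:
package proxyclient // illustrative package name

import (
    "bufio"
    "crypto/tls"
    "fmt"
    "net"
    "strings"
)

// dialViaHTTPProxy opens a CONNECT tunnel through an HTTP proxy and then
// performs the TLS handshake over it; the Client Hello goes out at step 3.
func dialViaHTTPProxy(proxyAddr, targetAddr string, cfg *tls.Config) (net.Conn, error) {
    // 1. Plain TCP to the HTTP proxy.
    conn, err := net.Dial("tcp", proxyAddr)
    if err != nil {
        return nil, err
    }

    // 2. Ask the proxy to open a tunnel to the target.
    fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", targetAddr, targetAddr)
    br := bufio.NewReader(conn)
    status, err := br.ReadString('\n')
    if err != nil {
        conn.Close()
        return nil, err
    }
    // A real client would parse the full response; this sketch only checks
    // the status line and drains the headers up to the blank line.
    if !strings.Contains(status, "200") {
        conn.Close()
        return nil, fmt.Errorf("proxy refused CONNECT: %s", strings.TrimSpace(status))
    }
    for {
        line, err := br.ReadString('\n')
        if err != nil {
            conn.Close()
            return nil, err
        }
        if line == "\r\n" {
            break
        }
    }

    // 3. TLS runs over the tunneled connection; Handshake sends the Client Hello.
    tlsConn := tls.Client(conn, cfg)
    if err := tlsConn.Handshake(); err != nil {
        conn.Close()
        return nil, err
    }
    return tlsConn, nil
}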

HTTPS over Socks5 server implementation

I am trying to implement a Socks5 server that could relay both HTTP and HTTPS traffic.
As RFC 1928 describes, the following steps must be taken to establish a connection and forward the data:
Client sends a greeting message to the proxy.
Client & proxy authentication (assuming it is successful).
Client sends a request to the proxy to connect to the destination.
The proxy connects to the destination and sends back a response to the client to indicate a successful open tunnel.
The proxy reads the data from the client and forwards it to the destination.
The proxy reads the data from the destination and forwards it to the client.
So far, the proxy works as it should. It is able to relay HTTP traffic using its basic data forwarding mechanism. However, any request from the client to an HTTPS website will be aborted because of SSL/TLS encryption.
Is there another sequence/steps that should be followed to be able to handle SSL/TLS (HTTPS) traffic?
The sequence you have described is correct, even for HTTPS. When the client wants to send a request to an HTTPS server through a proxy, it asks the proxy to connect to the target server's HTTPS port. Once the tunnel is established, the client negotiates a TLS handshake with the target server, then sends an (encrypted) HTTP request and receives an (encrypted) HTTP response. The tunnel is just a passthrough of raw bytes: the proxy has no concept of any encryption between the client and server. It doesn't care what the bytes represent; its job is just to pass them along as-is.
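To make that passthrough concrete, here is a minimal sketch of the relay step in Go (clientConn/destConn are placeholders and error handling is trimmed; a real server would also apply deadlines):
package socksrelay // illustrative package name

import (
    "io"
    "net"
)

// relay copies bytes in both directions until either side closes. The proxy
// never looks inside the payload, so TLS records (and anything else) pass
// through untouched.
func relay(clientConn, destConn net.Conn) {
    done := make(chan struct{}, 2)

    go func() {
        io.Copy(destConn, clientConn) // client -> destination
        done <- struct{}{}
    }()
    go func() {
        io.Copy(clientConn, destConn) // destination -> client
        done <- struct{}{}
    }()

    <-done // one direction finished (connection closed or errored)...
    clientConn.Close()
    destConn.Close()
    <-done // ...closing both unblocks the other copy
}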

grpc server ruby with TLS/SSL

I am trying to implement a secure gRPC TLS connection between a ruby client and a ruby server. I am unable to figure out how to configure the server to use the secure connection.
In production, our server is implemented in Go. However, we have been unable to connect to it from Ruby with anything other than an insecure connection. I have been tasked with creating a reference TLS connection to show that a secure connection from a Ruby client will work.
I have the gRPC quickstart greeter example working for Ruby as an insecure connection.
In the gRPC authentication documentation the Go example replaces this
s := grpc.NewServer()
with this
creds, _ := credentials.NewServerTLSFromFile(certFile, keyFile)
s := grpc.NewServer(grpc.Creds(creds))
For Ruby, there is this in the quickstart greeter app:
s = GRPC::RpcServer.new
but I have been unable to find how to create a secure server.
The requirements include that we must have the server validate the client's public key as trusted in order to allow access to the server. (The client will also need to trust the server's public key to validate the server.)
I've not used Ruby w/ gRPC but am familiar with the Golang SDK.
See here for what appears to be a Ruby gRPC server w/ TLS:
https://developers.google.com/maps-booking/legacy/booking-server-code-samples/gRPC-v0-legacy/partner-api-ruby
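Since the production server in the question is written in Go, here is a hedged sketch of the Go side of a mutual-TLS setup that validates the client's certificate (the file paths and port are placeholders; the Ruby client would need to present a certificate issued by a CA in this pool, and to trust the server's certificate in turn):
package main

import (
    "crypto/tls"
    "crypto/x509"
    "log"
    "net"
    "os"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
)

func main() {
    // Server certificate and key (placeholder paths).
    cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
    if err != nil {
        log.Fatalf("load server key pair: %v", err)
    }

    // CA pool used to verify client certificates (mutual TLS).
    caPEM, err := os.ReadFile("ca.crt")
    if err != nil {
        log.Fatalf("read CA cert: %v", err)
    }
    caPool := x509.NewCertPool()
    if !caPool.AppendCertsFromPEM(caPEM) {
        log.Fatal("failed to add CA cert to pool")
    }

    creds := credentials.NewTLS(&tls.Config{
        Certificates: []tls.Certificate{cert},
        ClientAuth:   tls.RequireAndVerifyClientCert, // server validates the client
        ClientCAs:    caPool,
    })

    s := grpc.NewServer(grpc.Creds(creds))
    // pb.RegisterGreeterServer(s, &server{}) // register services here

    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("listen: %v", err)
    }
    if err := s.Serve(lis); err != nil {
        log.Fatalf("serve: %v", err)
    }
}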

Google Cloud HTTP(S) load balancer does not cancel connection with backend

I have a Google Kubernetes Engine cluster with several pods exposed via NodePorts, and everything is exposed via an Ingress, which creates an HTTP load balancer (LB). I am using a custom domain with a Google-managed SSL certificate for the LB.
My backend is an HTTP server written in Go, using its "net/http" package. It uses a self-signed certificate for mTLS with the LB (Google's HTTP LB accepts any certificate for mTLS).
Everything works fine, except for one case: when a client creates an HTTP 1.1 connection with the LB and then cancels the request. This cancels the connection between the client and the LB, but the LB holds its connection with my backend open until the server's timeout.
My use case requires requests to stay open, sometimes for hours, so my server has huge timeout values. The business logic inside the request correctly uses the request's Context and takes into account whether the request has been canceled by the client.
Everything works as expected if the client makes an HTTP2 request and cancels it i.e. the whole connection down to my backend is canceled.
Here is an example Go handler that simulates a cancelable long-running task:
func handleLongRunningTask(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    t := time.Now()
    select {
    case <-ctx.Done():
        log.Println("request canceled")
    case <-time.After(30 * time.Second):
        log.Println("request finished")
    }
    log.Printf("here after: %v\n", time.Since(t))
    w.WriteHeader(http.StatusOK)
}
The case <-ctx.Done(): is never called for canceled HTTP 1.1 requests.
For easy testing I am using curl and Ctrl+C; this works:
curl -v --http2 'https://example.com/long_running_task'
and this does not:
curl -v --http1.1 'https://example.com/long_running_task'
It does not matter whether the NodePort is HTTPS or HTTP2; the LB has exactly the same behaviour regarding requests canceled by clients.
I tried compiling the server with Go 1.14.4 and 1.13.12 and the results are the same.
Is this a bug in Kubernetes, Ingress, Google Kubernetes Engine, Google's HTTP Load Balancer, Go's HTTP server? Or is it something with HTTP 1.1 that I am missing? What can be wrong and how can I fix this?
...it is not possible to know the HTTP version in the backend, so I could reject all HTTP 1.1 requests. LB is always using the same HTTP version when communicating with its backends, no matter the client's HTTP version.
From your description it looks like the issue might be between the GFE and the backends, since GFE might hold the connections for reuse.
My take is that you're seeing this variation between protocol versions because of how each handles connection persistence.
For HTTP2, the connection stays open until one of the parties sends a termination signal, and the earliest one takes precedence. For HTTP1.1, the connection may be kept open until an explicit Connection header specifying termination is sent:
An HTTP/1.1 server MAY assume that a HTTP/1.1 client intends to maintain a persistent connection unless a Connection header including the connection-token "close" was sent in the request. If the server chooses to close the connection immediately after sending the response, it SHOULD send a Connection header including the connection-token close.
This might explain why HTTP1.1 follows the same timeout configuration as the LB and HTTP2 doesn't.
I'd recommend trying to actively send termination headers whenever you want to terminate a connection. An example taken from Github:
func (m *MyHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    log.Printf("HTTP request from %s", r.RemoteAddr)
    // Add this header to force to close the connection after serving the request.
    w.Header().Add("Connection", "close")
    fmt.Fprintf(w, "%s", m.hostname)
}
Additionally, there seem to be some success stories from switching the cluster to be VPC-native, as that takes kube-proxy connection management out of the equation.
Finally, it might be that you're in a very specific situation that is worth evaluating separately. You might want to send reproduction steps to the GKE team using Issue Tracker.
I hope this helps.

Go gRPC Client Connection Scope and Pooling

Considering the example from the Go gRPC code base:
func main() {
    // Set up a connection to the server.
    conn, err := grpc.Dial(address, grpc.WithInsecure())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()
    c := pb.NewGreeterClient(conn)
    // Contact the server and print out its response.
    name := defaultName
    if len(os.Args) > 1 {
        name = os.Args[1]
    }
    r, err := c.SayHello(context.Background(), &pb.HelloRequest{Name: name})
    if err != nil {
        log.Fatalf("could not greet: %v", err)
    }
    log.Printf("Greeting: %s", r.Message)
}
When consuming a gRPC service from another service, what should the scope of the connection (conn) be? I assume it should have affinity with the scope of the request being handled by the consumer service, but I have yet to find any documentation around this. Should I be using a connection pool here?
E.g.
gRPC consumer service receives request
establish connection to gRPC service (either directly or via pool)
make n requests to gRPC service
close gRPC connection (or release back to the pool)
From experience, gRPC client connections should be re-used for the lifetime of the client application, as they are safe for concurrent use. Furthermore, one of the key features of gRPC is rapid response from remote procedure calls, which would not be achieved if you had to reconnect on every request received.
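As a minimal sketch of that reuse pattern (the target address, the pb import path, and the HTTP endpoint are placeholders): dial once at startup and share the generated client across request handlers.
package main

import (
    "context"
    "log"
    "net/http"
    "time"

    "google.golang.org/grpc"

    pb "example.com/helloworld" // placeholder for the generated gRPC package
)

// Shared gRPC client; safe for concurrent use by all handlers.
var greeter pb.GreeterClient

func main() {
    // Dial once at startup; the connection is reused for every incoming request.
    conn, err := grpc.Dial("greeter-service:50051", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()
    greeter = pb.NewGreeterClient(conn)

    http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
        defer cancel()
        // Each HTTP request reuses the same underlying gRPC connection.
        resp, err := greeter.SayHello(ctx, &pb.HelloRequest{Name: "world"})
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        w.Write([]byte(resp.Message))
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}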
Nonetheless, it is highly recommended to use some kind of gRPC load balancing along with these persistent connections. Otherwise, a lot of the load may end up on a few long-lived gRPC client-server connections. Load balancing options include:
A gRPC connection pool on the client side combined with a server-side TCP (Layer 4) load balancer. This creates a pool of client connections initially and re-uses that pool for subsequent gRPC requests. This is the easier route to implement, in my opinion. See Pooling gRPC Connections for an example of gRPC connection pooling on the client side which uses the grpc-go-pool library.
An HTTP/2 (Layer 7) load balancer with gRPC support for load balancing requests. See gRPC Load Balancing, which gives an overview of different gRPC load balancing options. nginx recently added support for gRPC load balancing.
