Golang goroutine-safe http client with different timeouts?

Suppose I have the following function:
func SendRequest(c *Client, timeout time.Duration) {
	if timeout > 0 {
		c.Timeout = timeout
	} else {
		c.Timeout = defaultTimeout
	}
	...
}
I want to allow multiple goroutines to call this function (i.e. share the same HTTP client), but written this way it clearly can't guarantee goroutine safety. (Mutating the timeout of the client that was passed in is also a bit odd...)
I'm not sure what the best way to do this is. Should I use a different client for each timeout? Should I use a mutex? Or, in general, how do I share an HTTP client across goroutines that need different timeouts?
Thanks!

You need to use different Clients. Even if you protect your function with a mutex, you can't protect the Client's own internal reads of the field, so another goroutine could change the timeout while a request is in flight.
Multiple Clients can still share the same Transport; they will all use the DefaultTransport if you don't specify one.
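A minimal sketch of that approach: one shared *http.Transport (the connection pool and other expensive state), and a cheap per-call Client built around it, so SendRequest no longer mutates anything shared. newClientWithTimeout, defaultTimeout, and the changed SendRequest signature are illustrative, not part of the original code:
package httpclient

import (
	"net/http"
	"time"
)

var sharedTransport = &http.Transport{
	MaxIdleConnsPerHost: 32, // tune the pool for your workload
}

const defaultTimeout = 10 * time.Second

// newClientWithTimeout returns a Client that reuses the shared Transport;
// http.Client itself is a small struct, so creating one per call is cheap.
func newClientWithTimeout(timeout time.Duration) *http.Client {
	if timeout <= 0 {
		timeout = defaultTimeout
	}
	return &http.Client{
		Transport: sharedTransport,
		Timeout:   timeout,
	}
}

func SendRequest(url string, timeout time.Duration) error {
	c := newClientWithTimeout(timeout) // safe to call from many goroutines
	resp, err := c.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// ... read resp.Body ...
	return nil
}
An alternative is to keep a single shared Client and put the per-request deadline on the request itself with context.WithTimeout and http.NewRequestWithContext, which avoids touching the Client at all.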

Related

How to disable HTTP/2 in Golang's standard http.Client, or avoid tons of INTERNAL_ERRORs from Stream ID=N?

I want to send a fairly large number (several thousand) of HTTP requests ASAP, without putting too much load on the CDN (the URL is https:, and ALPN selects HTTP/2 during the TLS phase). Staggering (i.e. time-shifting) the requests is an option, but I don't want to wait TOO long (I want to minimize both errors and total round-trip time), and I'm not being rate limited by the server at the scale I'm operating at yet.
The problem I'm seeing originates from h2_bundle.go and specifically in either writeFrame or onWriteTimeout when about 500-1k requests are in-flight, which manifests during io.Copy(fileWriter, response.Body) as:
http2ErrCodeInternal = "INTERNAL_ERROR" // also IDs a Stream number
// ^ then io.Copy observes the reader encountering "unexpected EOF"
I'm fine sticking with HTTP/1.x for now, but I would love an explanation re: what's going on. Clearly, people DO use Go to make a lot of round-trips happen per unit time, but most advice I can find is from the perspective of the server, not clients. I've already tried specifying all the relevant time-outs I can find, and cranking up connection pool max sizes.
Here's my best guess at what's going on:
The rate of requests is overwhelming a queue of connections or some other resource in the HTTP/2 internals. Maybe this is fixable in general, or possible to fine-tune for my specific use case, but the fastest way past this kind of problem is to rely on HTTP/1.1 entirely, as well as implement a limited retry + rate-limiting mechanism.
As an aside, I am now using a single retry and a rate.Limiter from https://pkg.go.dev/golang.org/x/time/rate#Limiter, in addition to the "ugly hack" of disabling HTTP/2, so that outbound requests can send an initial "burst" of M requests and then "leak more gradually" at a given rate of N/sec. Ultimately, the errors from h2_bundle.go are just too ugly for end users to parse. An EOF, expected or unexpected, should result in the client "giving it another try" or two, which is more pragmatic anyway.
As per the docs, the easiest way to disable h2 in Go's http.Client at runtime is env GODEBUG=http2client=0 ... which I can also achieve in other ways as well. Especially important to understand is that the "next protocol" is pre-negotiated "early" during TLS, so Go's http.Transport must manage that configuration along with a cache/memo to provide its functionality in a performant way. Therefore, use your own httpClient to .Do(req) (and don't forget to give your Request a context.Context so that it's easy to cancel) using a custom http.RoundTripper for Transport. Here's some example code:
type forwardRoundTripper struct {
	rt http.RoundTripper
}

func (my *forwardRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
	return my.rt.RoundTrip(r) // adjust URLs, or transport as necessary per-request
}

// httpTransport is the http.RoundTripper given to a Client as Transport
// (don't forget to set up a reasonable Timeout and other behavior as desired)
var httpTransport = &forwardRoundTripper{rt: http.DefaultTransport}

func h2Disabled(rt *http.Transport) *http.Transport {
	log.Println("--- only using HTTP/1.x ...")
	rt.ForceAttemptHTTP2 = false // not good enough on its own
	// at least one of the following is ALSO required:
	if rt.TLSClientConfig == nil {
		rt.TLSClientConfig = &tls.Config{MinVersion: tls.VersionTLS12}
	}
	rt.TLSClientConfig.NextProtos = []string{"http/1.1"}
	// need to Clone() or replace the TLSClientConfig if a request already occurred
	// - Why? Because the first time the transport is used, it caches certain structures.
	// (if you do this replacement, don't forget to set a minimum TLS version)
	rt.TLSHandshakeTimeout = longTimeout // not related to h2, but necessary for stability
	rt.TLSNextProto = make(map[string]func(authority string, c *tls.Conn) http.RoundTripper)
	// ^ some sources seem to think this is necessary, but not in all cases
	// (it WILL be required if an "h2" key is already present in this map)
	return rt
}

func init() {
	h2ok := ...
	if t, ok := httpTransport.rt.(*http.Transport); ok && !h2ok {
		httpTransport.rt = h2Disabled(t.Clone())
	}
	// tweak rate limits here
}
This lets me make the volume of requests that I need to OR get more-reasonable errors in edge cases.
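For completeness, here is a rough sketch of the rate-limited, single-retry loop described above; the 50/sec rate, burst of 200, and the fetchWithRetry helper are illustrative choices, not taken from the original code:
package fetcher

import (
	"context"
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// ~50 requests/sec steady state, with an initial burst of 200.
var limiter = rate.NewLimiter(rate.Limit(50), 200)

// fetchWithRetry waits for a limiter token, issues the request, and retries
// once on failure. The caller must close the returned response body.
func fetchWithRetry(ctx context.Context, client *http.Client, url string) (*http.Response, error) {
	var lastErr error
	for attempt := 0; attempt < 2; attempt++ { // original try + one retry
		if err := limiter.Wait(ctx); err != nil {
			return nil, err
		}
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		time.Sleep(250 * time.Millisecond) // small pause before the retry
	}
	return nil, lastErr
}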

How to deal with back pressure in GO GRPC?

I have a scenario where clients connect to a server via gRPC, and I would like to implement backpressure: accept many simultaneous requests (say, 10,000), but have only 50 simultaneous workers executing them (this is inspired by the Apache Tomcat NIO connector's behaviour). I would also like the communication to be asynchronous, in a reactive manner: the client sends the request but does not wait on it; the server sends the response back later, and the client then executes some callback registered for it.
How can I do that in Go gRPC? Should I use streams? Is there an example?
The Go gRPC API is a synchronous API; that is how Go usually works. You block (typically in a loop) until an event happens, and then you proceed to handle it. With respect to having more simultaneous workers executing requests, gRPC doesn't control that on the client side: at the application layer above gRPC you can start more goroutines, each executing requests. The server side already starts a goroutine for each accepted connection, and for each stream on that connection, so there is already inherent concurrency on the server side.
Note that there are no threads in Go; Go uses goroutines.
The behavior described is already built into the gRPC server. For example, see this option:
// NumStreamWorkers returns a ServerOption that sets the number of worker
// goroutines that should be used to process incoming streams. Setting this to
// zero (default) will disable workers and spawn a new goroutine for each
// stream.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func NumStreamWorkers(numServerWorkers uint32) ServerOption {
	// TODO: If/when this API gets stabilized (i.e. stream workers become the
	// only way streams are processed), change the behavior of the zero value to
	// a sane default. Preliminary experiments suggest that a value equal to the
	// number of CPUs available is most performant; requires thorough testing.
	return newFuncServerOption(func(o *serverOptions) {
		o.numServerWorkers = numServerWorkers
	})
}
The workers are initialized at some point:
// initServerWorkers creates worker goroutines and channels to process incoming
// connections to reduce the time spent overall on runtime.morestack.
func (s *Server) initServerWorkers() {
	s.serverWorkerChannels = make([]chan *serverWorkerData, s.opts.numServerWorkers)
	for i := uint32(0); i < s.opts.numServerWorkers; i++ {
		s.serverWorkerChannels[i] = make(chan *serverWorkerData)
		go s.serverWorker(s.serverWorkerChannels[i])
	}
}
I suggest you read the server code yourself, to learn more.
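For concreteness, here is a hedged sketch of using that (experimental) option on the server, plus bounding in-flight calls on the client with a buffered-channel semaphore. The registration line and callRPC are placeholders for your generated code, not real APIs from this answer:
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	// Experimental option: 50 worker goroutines shared by all incoming streams.
	srv := grpc.NewServer(grpc.NumStreamWorkers(50))
	// pb.RegisterYourServiceServer(srv, &yourServer{}) // placeholder: register your service
	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}

// On the client side, a buffered channel works as a semaphore to cap
// concurrent RPCs at 50 (callRPC stands in for a generated client call):
//
//	sem := make(chan struct{}, 50)
//	var wg sync.WaitGroup
//	for _, req := range requests {
//		sem <- struct{}{} // blocks while 50 calls are already in flight
//		wg.Add(1)
//		go func(r Request) {
//			defer func() { <-sem; wg.Done() }()
//			callRPC(ctx, r)
//		}(req)
//	}
//	wg.Wait()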

Is this example tcp socket programming sequence of events safe?

I plan on having two services.
HTTP REST service written in Ruby
JSON RPC service written in Go
The Ruby service will open a TCP socket connection to a Go JSON RPC service. It'll do this for each incoming HTTP request it receives. It will send some data over the socket to the Go service and that service will subsequently send back the corresponding data back down the socket.
Go code
The Go service would look something like this (simplified):
srv := new(service.App) // this would expose a Process method
rpc.Register(srv)

listener, err := net.Listen("tcp", ":8080")
if err != nil {
	// handle error
}

for {
	conn, err := listener.Accept()
	if err != nil {
		// handle error
	}
	go jsonrpc.ServeConn(conn)
}
Notice we serve the incoming connection using a goroutine, so we can handle requests concurrently.
Ruby code
Below is a simple snippet of Ruby code that demonstrates (in theory) the way I would send data to the Go service:
require "socket"
require "json"
socket = TCPSocket.new "localhost", "8080"
b = {
  :method => "App.Process",
  :params => [{ :Config => JSON.generate({ :foo => :bar }) }],
  :id => "0"
}
socket.write(JSON.dump(b))
response = JSON.load socket.readline
My concern is: will this be a safe sequence of events?
I'm not asking whether this will be 'thread safe', because I'm not worried about manipulating shared memory across goroutines. I'm more concerned with whether my Ruby HTTP service will get back the data it's expecting.
If I have two parallel requests coming into my HTTP service (or maybe the Ruby app is hosted behind a load balancer, so different instances of the HTTP service are handling multiple requests), then I could have instance A send the message Foo to the Go service while instance B sends the message Bar.
The business logic inside the Go service will return different responses depending on its input so I want to be sure that Ruby instance A gets back the correct response for Foo, and B gets back the correct response for Bar.
I assume a socket connection works more like a queue, in that if instance A makes a request to the Go service first and then B does, but B's response is ready sooner for whatever reason, then the Go service will write B's response to the socket and instance A of the Ruby app could end up reading the wrong data (that's just one possible interleaving; I could also get lucky and have instance B read its data before instance A does).
Solutions?
I'm not sure there is a simple solution to this problem, unless I drop the TCP socket/RPC approach and rely on standard HTTP in the Go service instead. But I wanted the performance and lower overhead of TCP.
I'm worried the design could get more complicated by maybe having to implement an external queue as a way of synchronising the responses with the Ruby service.
It may be that, because my Ruby service is fundamentally synchronous (HTTP request/response), I have no option but to switch to HTTP for the Go service.
But wanted to double check with the community first just in case I'm missing something obvious.
Yes, this is safe if you create a new connection every time.
That said, there are latent issues with your approach:
TCP connections are rather expensive to establish, so you probably want to re-use connections with a connection pool
If you make too many simultaneous requests you will exhaust ports/open file descriptors which will cause your program to crash
You don't have any timeouts in place, so it's possible to end up with orphaned TCP connections which never complete (either because of something bad on the Go side, or network problems)
I think you'd be better off using HTTP (despite the overhead) since libraries are already written to cope with these problems. HTTP is also much more debuggable since you can just curl an endpoint to test it.
Personally I'd probably go with gRPC.
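On the timeout point specifically, here is a small sketch of the question's accept loop with a deadline set on each connection, so orphaned connections eventually error out instead of hanging; the 30-second figure and the App stub are illustrative:
package main

import (
	"log"
	"net"
	"net/rpc"
	"net/rpc/jsonrpc"
	"time"
)

// App stands in for the question's service.App; Process is its RPC method.
type App struct{}

type Args struct{ Config string }

func (a *App) Process(args *Args, reply *string) error {
	*reply = "ok"
	return nil
}

func main() {
	rpc.Register(new(App))
	listener, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := listener.Accept()
		if err != nil {
			log.Println("accept:", err)
			continue
		}
		// Any read or write after this deadline fails, which unblocks ServeConn
		// and lets the goroutine exit instead of hanging forever.
		conn.SetDeadline(time.Now().Add(30 * time.Second))
		go jsonrpc.ServeConn(conn)
	}
}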

concurrent relaying of data between multiple clients

I am currently working on an application relaying data sent from a mobile phone, via a server, to a browser using WebSockets. I am writing the server in Go, and there is a one-to-one relation between mobile phones and browsers, as shown in the illustration below.
[illustration omitted: each mobile phone is paired with exactly one browser through the server]
However, I want multiple sessions to work simultaneously.
I have read that Go provides a concurrency model that follows the principle "share memory by communicating", using goroutines and channels. I would prefer using that principle rather than locks via the sync.Mutex primitive.
Nevertheless, I have not been able to map this information to my issue and wanted to ask you if you could suggest a solution.
I had a problem similar to yours: I needed multiple connections, each sending data to the others through multiple servers.
I went with the WAMP protocol:
WAMP is an open standard WebSocket subprotocol that provides two application messaging patterns in one unified protocol:
Remote Procedure Calls + Publish & Subscribe.
You can also take a look at a project of mine which is written in go and uses the protocol at hand: github.com/neutrinoapp/neutrino
There's nothing wrong with using a mutex in Go. Here's a solution using a mutex.
Declare a map of endpoints. I assume that a string key is sufficient to identify an endpoint:
type endpoint struct {
	c *websocket.Conn
	sync.Mutex // protects writes to c
}

var (
	endpoints   = map[string]*endpoint{}
	endpointsMu sync.Mutex // protects endpoints
)

func addEndpoint(key string, c *websocket.Conn) {
	endpointsMu.Lock()
	endpoints[key] = &endpoint{c: c}
	endpointsMu.Unlock()
}

func removeEndpoint(key string) {
	endpointsMu.Lock()
	delete(endpoints, key)
	endpointsMu.Unlock()
}

func sendToEndpoint(key string, message []byte) error {
	endpointsMu.Lock()
	e := endpoints[key]
	endpointsMu.Unlock()
	if e == nil {
		return errors.New("no endpoint")
	}
	e.Lock()
	defer e.Unlock()
	return e.c.WriteMessage(websocket.TextMessage, message)
}
Add the connection to the map with addEndpoint when the client connects. Remove the connection from the map with removeEndpoint when closing the connection. Send messages to a named endpoint with sendToEndpoint.
The Gorilla chat example can be adapted to solve this problem. Change the hub map to connections map[string]*connection, update the channels to carry a type with the connection and key, and change the broadcast loop to send to a single connection.
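Since the question prefers "share memory by communicating", here is a rough sketch of that adaptation, with the map owned by a single hub goroutine and all access going through channels; the type and field names are illustrative, not the chat example's actual code:
package relay

import "github.com/gorilla/websocket"

type registration struct {
	key  string
	conn *websocket.Conn
}

type outbound struct {
	key     string
	message []byte
}

type hub struct {
	register   chan registration
	unregister chan string
	send       chan outbound
}

// run owns the endpoints map; because only this goroutine touches it,
// no mutex is needed, and writes to each connection are serialized here.
func (h *hub) run() {
	endpoints := make(map[string]*websocket.Conn)
	for {
		select {
		case r := <-h.register:
			endpoints[r.key] = r.conn
		case key := <-h.unregister:
			delete(endpoints, key)
		case m := <-h.send:
			if c, ok := endpoints[m.key]; ok {
				c.WriteMessage(websocket.TextMessage, m.message) // error handling omitted for brevity
			}
		}
	}
}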

How can I orchestrate concurrent request-response flow?

I'm new to concurrent programming, and have no idea what concepts to start with, so please be gentle.
I am writing a webservice as a front-end to a TCP server. This server listens to the port I give it, and returns the response to the TCP connection for each request.
Here is why I'm writing a web-service front-end for this server:
The server can handle one request at a time, and I'm trying to make it able to process several inputs concurrently by launching multiple processes and giving each a different port to listen on. For example, I want to launch 30 instances and tell them to listen on ports 20000-20029.
Our team uses PHP, and PHP does not have the capacity to launch server instances and maintain them concurrently, so I'm trying to write an API they can just send HTTP requests to.
So, here is the structure I have thought of.
I will have a main() function. This function launches the processes concurrently, then starts an HTTP server on port 80 and listens.
I have an http.Handler that adds the content of a request to a channel.
I will have goroutines, one per server instance, each running an infinite loop.
The code for the function mentioned in item three would be something like this:
func handleRequest(queue chan string) {
	for {
		request := <-queue
		conn, err := connectToServer()
		err = sendRequestToServer(conn)
		response, err := readResponseFromServer(conn)
	}
}
So, my http.Handler can simply do something like queue <- request to add the request to the queue, and handleRequest, which has been blocked waiting for the channel to have something to receive, will get the request and continue on. When done, the loop iteration finishes, execution comes back to request := <-queue, and the same thing continues.
My problem starts in the http.Handler. It makes perfect sense to put requests in a channel, because multiple goroutines are all listening to it. However, how can these goroutines return the result to my http.Handler?
One way is to use a channel, let's call it responseQueue, that all of these goroutines would then write to. The problem is that when a response is added to the channel, I don't know which request it belongs to. In other words, when multiple http.Handlers send requests, each executing handler will not know which response the current message in the channel belongs to.
Is there a best practice, or a pattern, for sending data to a goroutine from another goroutine and receiving the data back?
Create a per request response channel and include it in the value sent to the worker. The handler receives from the channel. The worker sends the result to the channel.
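A minimal sketch of that pattern; the job type, the worker count, and the echo stand-in for the backend call are illustrative:
package main

import (
	"fmt"
	"net/http"
)

type job struct {
	payload string
	reply   chan string // one reply channel per request
}

var queue = make(chan job)

// worker owns one backend instance and answers one job at a time.
func worker(queue chan job) {
	for j := range queue {
		// connect to the backend TCP server here; an echo stands in for that
		j.reply <- "processed: " + j.payload
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	reply := make(chan string, 1)
	queue <- job{payload: r.URL.Query().Get("q"), reply: reply}
	fmt.Fprintln(w, <-reply) // block until a worker answers this specific request
}

func main() {
	for i := 0; i < 30; i++ { // one worker per backend instance (ports 20000-20029)
		go worker(queue)
	}
	http.HandleFunc("/", handler)
	http.ListenAndServe(":80", nil)
}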
