I am looking at the properties on WebSocket. Specifically, I want to handle connection errors, but I am a little perturbed by the fact that I can only set the listener property after connection establishment has been started. Is there any risk that I might miss, for example, an onerror event triggered by the connection attempt (e.g. no network interface)?
Looking at the spec, it sounds like the connection attempt happens in parallel/asynchronously, and I know it will be slow compared with executing a couple of extra lines of code, even though my connection is LAN-based. My concern is that I am calling the JS functions from WASM compiled down from Go, and I know there is some performance overhead in these calls across the language boundary. Should I be concerned?
example code from MDN:
// Create WebSocket connection.
const socket = new WebSocket('ws://localhost:8080');

// Connection opened
socket.addEventListener('open', function (event) {
    socket.send('Hello Server!');
});
what I am doing:
webSocketCon := js.Global().Get("WebSocket")
ws := webSocketCon.New(url) // url is a string defined elsewhere
handlerError := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
	// code to handle error
	return nil
})
ws.Set("onerror", handlerError)
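For what it's worth, here is a self-contained sketch of the above (the URL and the logging are placeholders). The browser dispatches WebSocket events as tasks on the JS event loop, so handlers attached synchronously after New (before this goroutine yields back to the event loop, e.g. by blocking on a channel) should not be able to miss an error fired by the connection attempt. The cross-boundary calls from Go have overhead, but they still execute synchronously on the same JS thread, so the ordering is unaffected:
package main

import "syscall/js"

func main() {
	ws := js.Global().Get("WebSocket").New("ws://localhost:8080")

	onError := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
		js.Global().Get("console").Call("log", "websocket error:", args[0])
		return nil
	})
	ws.Set("onerror", onError) // call onError.Release() once it is no longer needed

	onOpen := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
		ws.Call("send", "Hello Server!")
		return nil
	})
	ws.Set("onopen", onOpen)

	<-make(chan struct{}) // keep the Go program alive so the callbacks can fire
}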
I’m currently maintaining a few HTTP APIs based on the standard library and gorilla/mux, running in Kubernetes (GKE).
We’ve adopted http.TimeoutHandler as our “standard” way to get consistent timeout error management.
A typical endpoint implementation will use the following “chain”:
MonitoringMiddleware => TimeoutMiddleware => … => handler
so that we can monitor a few key metrics per endpoint.
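For illustration, the wiring might look like this (MonitoringMiddleware, endpointHandler, and the values are assumed names, not our real code):
mux.Handle("/endpoint", MonitoringMiddleware(
	http.TimeoutHandler(endpointHandler, 5*time.Second, "processing timed out"),
))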
One of our APIs is typically used in a “fire and forget” mode, meaning that clients push some data and do not care about the API response. We are facing the issue that:
the Go standard HTTP server will cancel a request context when the client connection is no longer active (godoc)
the TimeoutHandler will return a “timeout” response whenever the request context is done (see code)
This means that we are not processing requests to completion when the client disconnects, which is not what we want, so I’m looking for solutions.
The only discussion I could find that somewhat relates to my issue is https://github.com/golang/go/issues/18527; however, the workaround suggested there,
"your application can ignore the Handler's Request.Context()",
would mean that the monitoring middleware would not report the "proper" status: the handler would perform the request processing in its own goroutine while the TimeoutHandler enforced the status, and observability would be broken.
For now, I’m not considering removing our middlewares, as they’re helpful for consistency across our APIs, both in terms of behaviour and observability. My conclusion so far is that I need to “fork” the TimeoutHandler and use a custom context for when a handler should not depend on whether the client waits for the response.
The gist of my current idea is to have:
type TimeoutHandler struct {
	handler http.Handler
	body    string
	dt      time.Duration

	// BaseContext optionally specifies a function that returns the
	// base context used to control the request processing.
	// If BaseContext is nil, the default is req.Context().
	// If non-nil, it must return a non-nil context.
	BaseContext func(*http.Request) context.Context
}

func (h *TimeoutHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	reqCtx := r.Context()
	if h.BaseContext != nil {
		reqCtx = h.BaseContext(r)
	}
	ctx, cancelCtx := context.WithTimeout(reqCtx, h.dt)
	defer cancelCtx()
	r = r.WithContext(ctx)
	// ... as in net/http's timeoutHandler: run h.handler in a goroutine
	// writing to a timeoutWriter tw, then select on the outcomes:
	case <-reqCtx.Done():
		tw.mu.Lock()
		defer tw.mu.Unlock()
		w.WriteHeader(499) // write a status for monitoring;
		// no need to write a body since no client is listening.
	case <-ctx.Done():
		tw.mu.Lock()
		defer tw.mu.Unlock()
		w.WriteHeader(http.StatusServiceUnavailable)
		io.WriteString(w, h.errorBody())
		tw.timedOut = true
	}
The middleware BaseContext callback would return context.Background() for requests to the “fire and forget” endpoint.
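A hedged sketch of configuring the fork for that endpoint (the handler name, body, and duration are illustrative):
th := &TimeoutHandler{
	handler: fireAndForgetHandler,
	body:    "processing timed out",
	dt:      30 * time.Second,
	BaseContext: func(r *http.Request) context.Context {
		// Detach processing from the client connection entirely.
		// Note: on Go 1.21+, context.WithoutCancel(r.Context()) would keep
		// request-scoped values while removing cancelation.
		return context.Background()
	},
}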
One thing I don’t like is that in doing so I’m losing any context keys written upstream, so this new middleware would have strong usage constraints. Overall, I feel like this is more complex than it should be.
Am I completely missing something obvious?
Any feedback on API instrumentation (maybe our middlewares are an antipattern) /fire and forget implementations would be welcomed!
EDIT: as most comments point out that a request whose client does not wait for the response has unspecified behavior, I checked for more information on the typical clients for which this happens.
From our logs, this happens for user agents that seem to be mobile devices. I can imagine that their connections are much less stable, so the problem will likely not disappear.
I would therefore not conclude that I shouldn't look for a solution, since this is currently creating false-positive alerts.
When serving a bidirectional stream in gRPC in golang, the canonical stream handler looks something like this:
func (s *MyServer) MyBidiRPC(stream somepb.MyServer_MyBidiServer) error {
	for {
		data, err := stream.Recv()
		if err == io.EOF {
			return nil // clean close
		}
		if err != nil {
			return err // some other error
		}
		_ = data // do things with data here
	}
}
Specifically, when the handler for the bidi RPC returns, that is the signal to consider the server side closed.
This is a synchronous programming model -- the server stays blocked inside this goroutine (created by the grpc library) while waiting for messages from the client.
Now, I would like to unblock this Recv() call (which ends up calling RecvMsg() on an underlying grpc.ServerStream) and return/close the stream, because the server process has decided that it is done with this client.
Unfortunately, I can find no obvious way to do this:
There's no Close() or CloseSend() or CloseRecv() or Shutdown()-like function on the bidi server interface generated for my service
The context inside the stream, which I can get at with stream.Context(), doesn't expose a user-accessible cancel function
I can't find a way to pass in a context on the "starting side" for a new connection accepted by the grpc.Server, where I could inject my own cancel function
I could close the entire grpc.Server by calling Stop(), but that's not what I want to do -- only this particular client connection (grpc.ServerStream) should be finished.
I could send a message to the client that makes the client in turn shut down the connection. However, this doesn't work if the client has fallen off the network, which would have to be solved with a timeout, and the timeout has to be pretty long to be generally robust. I want it now because I'm impatient and, more importantly, because at scale, dangling unresponsive clients can be a high cost.
I could (perhaps) dig through the grpc.ServerStream with reflection until I find the transportStream, and then dig the cancel function out of that and call it. Or dig through the stream.Context() with reflection and make my own cancel function reference to call. Neither of these seems well advised for future maintainers.
But surely these can't be the only options? Deciding that a particular client no longer needs to be connected is not magic space-alien science. How do I close this stream such that the Recv() call un-blocks, from the server process side, without involving a round-trip to the client?
Unfortunately I don't think there is a great way to do what you are asking. Depending on your goal, I think you have two options:
Run Recv in a goroutine and return from the bidi handler when you need it to return. This will cancel the stream's context and unblock Recv. It is obviously suboptimal, as it requires care: you now have code executing outside the scope of the handler's execution. It is, however, the closest answer I can seem to find; a sketch follows below.
If you are trying to mitigate the impact of misbehaving clients by instituting timeouts, you might be able to offload that work to the framework with KeepaliveEnforcementPolicy and/or KeepaliveParams. This is probably preferable if it aligns with the reason you want to close the connection, but otherwise isn't of much use.
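A rough sketch of the first option (names are illustrative: somepb is your generated package, and s.quit is an assumed channel the server closes when it wants to drop clients). Returning from the handler cancels the stream's context, which unblocks the Recv running in the goroutine:
import (
	"io"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func (s *MyServer) MyBidiRPC(stream somepb.MyServer_MyBidiServer) error {
	recvErr := make(chan error, 1) // buffered so the goroutine never leaks
	go func() {
		for {
			data, err := stream.Recv()
			if err != nil {
				recvErr <- err
				return
			}
			_ = data // do things with data here
		}
	}()
	select {
	case err := <-recvErr:
		if err == io.EOF {
			return nil // clean close
		}
		return err
	case <-s.quit:
		// Note: the goroutine above may still touch stream briefly after
		// we return; this is the care mentioned above.
		return status.Error(codes.Aborted, "server is done with this client")
	}
}
And a sketch of the second option, letting the transport drop unresponsive clients on its own (durations are placeholders; import google.golang.org/grpc/keepalive):
srv := grpc.NewServer(
	grpc.KeepaliveParams(keepalive.ServerParameters{
		Time:    2 * time.Minute,  // ping a client after 2m of inactivity
		Timeout: 20 * time.Second, // drop it if the ping is not acked in 20s
	}),
)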
I'm writing a TCP server which simultaneously accepts multiple connections from mobile devices and some WiFi devices (IoT). The connections need to be maintained once established, with a 30-second timeout if no heartbeat is received. So it is something like the following:
// clientsMap map[string]net.Conn
func someFunction() {
	conn, err := s.listener.Accept()
	// I store the conn in clientsMap
	// so I can access it; for brevity not
	// shown here, then:
	go serve(conn)
}

func serve(conn net.Conn) {
	timeoutDuration := 30 * time.Second
	for {
		// reset the deadline so each heartbeat gets a fresh 30s window
		conn.SetReadDeadline(time.Now().Add(timeoutDuration))
		msgBuffer := make([]byte, 2048)
		msgBufferLen, err := conn.Read(msgBuffer)
		// do something with the stuff
	}
}
So there is one goroutine for each client. And each client, once connected to the server, is pending on the read. The server then processes the stuff read.
The problem is that I sometimes need to read things off one client, and then pass data to another (Between a mobile device and a WiFi device). I have stored the connections in clientsMap. So I can always access that. But since each client is handled by one goroutine, shall I be passing the data from one client to another by using a channel? But if the goroutine is blocked waiting for a pending read, how do I make it also wait for data from a channel? Or shall I just obtain the connection for the other party from the clientsMap and write to it?
The documentation for net.Conn clearly states:
Multiple goroutines may invoke methods on a Conn simultaneously.
So yes, it is okay to simply Write to the connections. You should take care to issue a single Write call per message you want to send; if you call Write more than once per message, you risk interleaving the bytes of messages written concurrently from different goroutines. This implies calling Write directly and not via some other API (in other words, don't wrap the connection). For instance, the following would not be safe:
json.NewEncoder(conn).Encode(myValue) // use json.Marshal(myValue) instead
io.Copy(conn, src) // use io.ReadAll(src) instead
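For example, a hedged sketch of a safe send helper; the newline framing is an assumption about your wire format:
// sendJSON marshals v and writes it with a single Write call, so concurrent
// senders cannot interleave partial messages.
func sendJSON(conn net.Conn, v interface{}) error {
	buf, err := json.Marshal(v)
	if err != nil {
		return err
	}
	buf = append(buf, '\n')
	_, err = conn.Write(buf)
	return err
}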
I plan on having two services.
HTTP REST service written in Ruby
JSON RPC service written in Go
The Ruby service will open a TCP socket connection to a Go JSON RPC service. It'll do this for each incoming HTTP request it receives. It will send some data over the socket to the Go service, and that service will subsequently send the corresponding data back down the socket.
Go code
The Go service would look something like this (simplified):
srv := new(service.App) // this would expose a Process method
rpc.Register(srv)

listener, err := net.Listen("tcp", ":8080")
if err != nil {
	// handle error
}

for {
	conn, err := listener.Accept()
	if err != nil {
		// handle error
	}
	go jsonrpc.ServeConn(conn)
}
Notice we serve the incoming connection using a goroutine, so we can handle requests concurrently.
Ruby code
Below is a simple snippet of Ruby code that demonstrates (in theory) the way I would send data to the Go service:
require "socket"
require "json"
socket = TCPSocket.new "localhost", "8080"
b = {
:method => "App.Process",
:params => [{ :Config => JSON.generate({ :foo => :bar }) }],
:id => "0"
}
socket.write(JSON.dump(b))
response = JSON.load socket.readline
My concern is: will this be a safe sequence of events?
I'm not asking if this will be 'thread safe', because I'm not worried about manipulating shared memory across the goroutines. I'm more concerned about whether my Ruby HTTP service will get back the data it's expecting.
If I have two parallel requests coming into my HTTP Service (or maybe the Ruby app is hosted behind a load balancer and so different instances of the HTTP service is handling multiple requests), then I could have instance A send the message Foo to the Go service; while instance B sends the message Bar.
The business logic inside the Go service will return different responses depending on its input so I want to be sure that Ruby instance A gets back the correct response for Foo, and B gets back the correct response for Bar.
I assume a socket connection is more like a queue: if instance A makes a request to the Go service first and then B does, but B's request is quicker to process for whatever reason, then the Go service will write B's response to the socket first, and instance A of the Ruby app could end up reading the wrong socket data (this is just one possible scenario; I could get lucky and have instance B read its data before instance A does).
Solutions?
I'm not sure if there is a simple solution to this problem, unless I don't use a TCP socket or RPC and instead rely on standard HTTP in the Go service. But I wanted the performance and lower overhead of TCP.
I'm worried the design could get more complicated by maybe having to implement an external queue as a way of synchronising the responses with the Ruby service.
It may be that, because the nature of my Ruby service is fundamentally synchronous (HTTP request/response), I have no option but to switch to HTTP for the Go service.
But wanted to double check with the community first just in case I'm missing something obvious.
Yes, this is safe if you create a new connection every time.
That said, there are latent issues with your approach:
TCP connections are rather expensive to establish, so you probably want to re-use connections with a connection pool
If you make too many simultaneous requests you will exhaust ports/open file descriptors which will cause your program to crash
You don't have any timeouts in place, so it's possible to end up with orphaned TCP connections which never complete (either because of something bad on the Go side, or network problems)
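On the timeouts point, a minimal sketch of one mitigation on the Go side, assuming a flat per-connection deadline is acceptable (the 30s budget is a placeholder):
conn, err := listener.Accept()
if err != nil {
	// handle error
}
// Every read and write on this connection fails once the deadline passes,
// so a dead peer cannot hold it open forever.
conn.SetDeadline(time.Now().Add(30 * time.Second))
go jsonrpc.ServeConn(conn)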
I think you'd be better off using HTTP (despite the overhead) since libraries are already written to cope with these problems. HTTP is also much more debuggable since you can just curl an endpoint to test it.
Personally I'd probably go with gRPC.
I am currently working on an application relaying data sent from a mobile phone via a server to a browser using WebSockets. I am writing the server in Go, and I have a one-to-one relation between the mobile phones and the browsers, as shown in the following illustration.
[Illustration: each mobile phone is paired one-to-one with a browser, with the server relaying between them.]
However, I want multiple sessions to work simultaneously.
I have read that go provides concurrency models that follow the principle "share memory by communicating" using goroutines and channels. I would prefer using the mentioned principle rather than locks using the sync.Mutex primitive.
Nevertheless, I have not been able to map this information to my issue and wanted to ask you if you could suggest a solution.
I had a problem similar to yours: I needed multiple connections, each sending data to the others through multiple servers.
I went with the WAMP protocol
WAMP is an open standard WebSocket subprotocol that provides two application messaging patterns in one unified protocol:
Remote Procedure Calls + Publish & Subscribe.
You can also take a look at a project of mine which is written in Go and uses the protocol at hand: github.com/neutrinoapp/neutrino
There's nothing wrong with using a mutex in Go. Here's a solution using a mutex.
Declare a map of endpoints. I assume that a string key is sufficient to identify an endpoint:
type endpoint struct {
	c *websocket.Conn
	sync.Mutex // protects writes to c
}

var (
	endpoints   = map[string]*endpoint{}
	endpointsMu sync.Mutex // protects endpoints
)

func addEndpoint(key string, c *websocket.Conn) {
	endpointsMu.Lock()
	endpoints[key] = &endpoint{c: c}
	endpointsMu.Unlock()
}

func removeEndpoint(key string) {
	endpointsMu.Lock()
	delete(endpoints, key)
	endpointsMu.Unlock()
}

func sendToEndpoint(key string, message []byte) error {
	endpointsMu.Lock()
	e := endpoints[key]
	endpointsMu.Unlock()
	if e == nil {
		return errors.New("no endpoint")
	}
	e.Lock()
	defer e.Unlock()
	return e.c.WriteMessage(websocket.TextMessage, message)
}
Add the connection to the map with addEndpoint when the client connects. Remove the connection from the map with removeEndpoint when closing the connection. Send messages to a named endpoint with sendToEndpoint.
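Hypothetical usage in the relay: read on one side of a pair and forward by key (peerKey is an assumed helper mapping a key to its partner):
func relay(key string, c *websocket.Conn) {
	defer removeEndpoint(key)
	for {
		_, msg, err := c.ReadMessage()
		if err != nil {
			return // connection closed or errored
		}
		if err := sendToEndpoint(peerKey(key), msg); err != nil {
			// partner not connected; drop (or buffer) the message
		}
	}
}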
The Gorilla chat example can be adapted to solve this problem: change the hub map to connections map[string]*connection, update the channels to send a type carrying the connection and key, and change the broadcast loop to send to a single connection. A sketch of that adaptation follows below.
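A rough sketch of the adapted hub, loosely following the chat example's shapes (all names here are assumptions, not the example's exact code):
type message struct {
	key  string // destination endpoint
	data []byte
}

type connection struct {
	ws   *websocket.Conn
	send chan []byte // drained by a per-connection write goroutine
}

type hub struct {
	connections map[string]*connection
	forward     chan message
}

func (h *hub) run() {
	for m := range h.forward {
		if c, ok := h.connections[m.key]; ok {
			c.send <- m.data
		}
	}
}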