How is it possible to close a websocket connection and pass it a message / code?
The docs only define func (ws *Conn) Close() error, without any arguments.
I would like to receive the event from JavaScript like this:
websocket.onclose = function(event) {
    console.log(event);
};
I am using golang.org/x/net/websocket
With the Gorilla websocket package, send a close message before closing the connection:
cm := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "add your message here")
if err := c.WriteMessage(websocket.CloseMessage, cm); err != nil {
    // handle error
}
c.Close()
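If you also want the close handshake to complete before tearing down, one common pattern with the Gorilla package is to read until the peer's close frame arrives; a minimal sketch (the one-second grace period is an arbitrary choice):

// closeGracefully sends a close frame, waits briefly for the peer to
// echo it, then drops the connection.
func closeGracefully(c *websocket.Conn) {
    cm := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "add your message here")
    if err := c.WriteMessage(websocket.CloseMessage, cm); err != nil {
        log.Println(err)
    }
    c.SetReadDeadline(time.Now().Add(time.Second)) // arbitrary grace period
    c.ReadMessage()                                // unblocks with a *websocket.CloseError when the peer responds
    c.Close()
}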
It is not possible to specify the close message with the golang.org/x/net/websocket package.
Related
Using the following protocol buffer code:
syntax = "proto3";
package pb;
message SimpleRequest {
  int64 number = 1;
}
message SimpleResponse {
  int64 doubled = 1;
}
// All the calls in this service perform the action of doubling a number.
// The streams will continuously send the next double, e.g. 1, 2, 4, 8, 16.
service Test {
  // This RPC streams from the server only.
  rpc Downstream(SimpleRequest) returns (stream SimpleResponse);
}
I'm able to successfully open a stream, and continuously get the next doubled number from the server.
My Go code for running this looks like:
ctxDownstream, cancel := context.WithCancel(ctx)
downstream, err := testClient.Downstream(ctxDownstream, &pb.SimpleRequest{Number: 1})
for {
    responseDownstream, err := downstream.Recv()
    if err != io.EOF {
        println(fmt.Sprintf("downstream response: %d, error: %v", responseDownstream.Doubled, err))
        if responseDownstream.Doubled >= 32 {
            break
        }
    }
}
cancel() // !!This is not a graceful shutdown
println(fmt.Sprintf("%v", downstream.Trailer()))
The problem I'm having is that using a context cancellation means my downstream.Trailer() response is empty. Is there a way to gracefully close this connection from the client side and receive downstream.Trailer()?
Note: if I close the downstream connection from the server side, my trailers are populated. But I have no way of instructing my server side to close this particular stream. So there must be a way to gracefully close a stream client side.
Thanks.
As requested, some server code:
func (b *binding) Downstream(req *pb.SimpleRequest, stream pb.Test_DownstreamServer) error {
    request := req
    r := make(chan *pb.SimpleResponse)
    e := make(chan error)
    ticker := time.NewTicker(200 * time.Millisecond)
    defer func() { ticker.Stop(); close(r); close(e) }()
    go func() {
        defer func() { recover() }()
        for {
            select {
            case <-ticker.C:
                response, err := b.Endpoint(stream.Context(), request)
                if err != nil {
                    e <- err
                }
                r <- response
            }
        }
    }()
    for {
        select {
        case err := <-e:
            return err
        case response := <-r:
            if err := stream.Send(response); err != nil {
                return err
            }
            request.Number = response.Doubled
        case <-stream.Context().Done():
            return nil
        }
    }
}
You will still need to populate the trailer with some information. I use the grpc.StreamServerInterceptor to do this.
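A minimal sketch of such an interceptor; the served-by trailer key is just illustrative:

// trailerInterceptor runs the handler, then attaches trailer metadata
// that is sent along with the final status of the stream.
func trailerInterceptor(srv interface{}, ss grpc.ServerStream,
    info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
    err := handler(srv, ss)
    ss.SetTrailer(metadata.Pairs("served-by", info.FullMethod))
    return err
}

// registered when constructing the server:
// s := grpc.NewServer(grpc.StreamInterceptor(trailerInterceptor))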
According to the grpc-go documentation:
Trailer returns the trailer metadata from the server, if there is any.
It must only be called after stream.CloseAndRecv has returned, or
stream.Recv has returned a non-nil error (including io.EOF).
So if you want to read the trailer in the client, try something like this:
ctxDownstream, cancel := context.WithCancel(ctx)
defer cancel()
for {
    ...
    // on error or EOF
    break
}
println(fmt.Sprintf("%v", downstream.Trailer()))
Break from the infinite loop when there is an error and print the trailer. cancel will be called at the end of the function, as it is deferred.
I can't find a reference that explains it clearly, but this doesn't appear to be possible.
On the wire, grpc-status is followed by the trailer metadata when the call completes normally (i.e. the server exits the call).
When the client cancels the call, neither of these is sent.
It seems that gRPC treats call cancellation as a quick abort of the RPC, not much different from the socket being dropped.
Adding a "cancel message" via request streaming works; the server can pick this up and cancel the stream from its end and trailers will still get sent:
message SimpleRequest {
  oneof RequestType {
    int64 number = 1;
    bool cancel = 2;
  }
}
....
rpc Downstream(stream SimpleRequest) returns (stream SimpleResponse);
Although this does add a bit of complication to the code.
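A rough client-side sketch under this scheme (generated names follow the usual protoc-gen-go oneof conventions; error handling elided): send the number, read doubles, then send cancel and drain the stream until io.EOF so the trailers arrive:

downstream, _ := testClient.Downstream(ctx)
downstream.Send(&pb.SimpleRequest{RequestType: &pb.SimpleRequest_Number{Number: 1}})
// ... receive doubled numbers as before, then ask the server to finish:
downstream.Send(&pb.SimpleRequest{RequestType: &pb.SimpleRequest_Cancel{Cancel: true}})
downstream.CloseSend()
for {
    if _, err := downstream.Recv(); err != nil {
        break // io.EOF once the server returns; trailers are now populated
    }
}
println(fmt.Sprintf("%v", downstream.Trailer()))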
I have two projects in Go, a server and a client.
The problem is that when I send a control message from the server, I can't get it by type on the client side.
A few server code examples:
send PingMessage:
ws.SetWriteDeadline(time.Now().Add(10 * time.Second))
ws.WriteMessage(websocket.PingMessage, new_msg)
send CloseMessage:
ws.WriteControl(websocket.CloseMessage,
    websocket.FormatCloseMessage(websocket.CloseNormalClosure, "socket close"),
    time.Now().Add(3*time.Second))
client side:
for {
    t, socketMsg, err := ws.ReadMessage()
    if websocket.IsUnexpectedCloseError(err) {
        webSock.keepLive()
    }
    switch t {
    case websocket.CloseNormalClosure:
        webSock.keepLive()
    case websocket.PingMessage:
        log.Warn("get ping!!!")
    case websocket.TextMessage:
        SocketChannel <- socketMsg
    }
}
For example, the CloseNormalClosure message I can get only with:
if websocket.IsCloseError(err, websocket.CloseNormalClosure) {
    log.Warn("CloseNormalClosure message")
}
But the PingMessage I can't get by type:
case websocket.PingMessage:
    log.Warn("get ping!!!")
Could you help me, please? What am I doing wrong?
The documentation says:
Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer.
Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer.
Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong.
Write the code above as:
ws.SetPingHandler(func(s string) error {
    log.Warn("get ping!!!")
    return nil
})
for {
    t, socketMsg, err := ws.ReadMessage()
    switch {
    case websocket.IsCloseError(err, websocket.CloseNormalClosure):
        webSock.keepLive()
    case websocket.IsUnexpectedCloseError(err):
        webSock.keepLive()
    case t == websocket.TextMessage:
        SocketChannel <- socketMsg
    }
}
Most applications break out of the receive loop on any error. A more typical approach is to write the code above as:
for {
    _, socketMsg, err := ws.ReadMessage()
    if err != nil {
        break
    }
    SocketChannel <- socketMsg
}
I am using grpc-go.
I have an RPC which looks roughly like this:
service MyService {
  // Operation 1
  rpc Operation1(OperationRequest) returns (OperationResponse) {
    option (google.api.http) = {
      post: "/apiver/myser/oper1"
      body: "*"
    };
  }
}
The client connects using the grpc.Dial() method.
When a client connects, the server does some bookkeeping; when the client disconnects, the bookkeeping needs to be removed.
Is there any callback that can be registered which can be used to know that the client has closed the session?
Based on your code, it's a unary RPC call: the client connects to the server once, sends a request, and gets a response. The client will wait for the response until it times out.
In server-side streaming, you can detect a client disconnect from the
<-grpc.ServerStream.Context().Done()
signal.
With the above, you can implement your own channel in a goroutine to build your logic. Use a select statement like this:
select {
case <-srv.Context().Done():
    return
case res := <-<YOUR OWN CHANNEL, WITH RECEIVED REQUEST OR YOUR RESPONSE>:
    ....
}
I provide some detailed code here
In client streaming, besides the above signal, you can check whether the server can still receive messages:
req, err := grpc.ServerStream.Recv()
if err == io.EOF {
    break
} else if err != nil {
    return err
}
Assuming that the server is implemented in Go, there's an API on the *grpc.ClientConn that reports state changes in the connection.
func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool
https://godoc.org/google.golang.org/grpc#ClientConn.WaitForStateChange
These are the docs on each of the connectivity.State values:
https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md
If you need to expose a channel that you can listen to for the client closing the connection then you could do something like this:
func connectionOnState(ctx context.Context, conn *grpc.ClientConn, states ...connectivity.State) <-chan struct{} {
    done := make(chan struct{})
    go func() {
        // any return from this func will close the channel
        defer close(done)
        // continue checking for state change
        // until one of break states is found
        for {
            change := conn.WaitForStateChange(ctx, conn.GetState())
            if !change {
                // ctx is done, return
                // something upstream is cancelling
                return
            }
            currentState := conn.GetState()
            for _, s := range states {
                if currentState == s {
                    // matches one of the states passed
                    // return, closing the done channel
                    return
                }
            }
        }
    }()
    return done
}
If you only want to consider connections that are shutting down or shutdown, then you could call it like so:
// any receives from shutdownCh will mean the state Shutdown
shutdownCh := connectionOnState(ctx, conn, connectivity.Shutdown)
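A goroutine can then block on that channel to run the per-client cleanup; removeBookkeeping and clientID are hypothetical stand-ins for your own logic:

go func() {
    <-shutdownCh
    removeBookkeeping(clientID) // hypothetical cleanup for this client
}()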
As mentioned in the linked GitHub issue, you can do it like this:
err = stream.Context().Err()
if err != nil {
    break
}
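In context, that check sits inside the server's send loop; a sketch, where nextResponse is a placeholder for however you produce messages:

for {
    if err := stream.Context().Err(); err != nil {
        // the client cancelled or disconnected; stop sending
        return nil
    }
    if err := stream.Send(nextResponse()); err != nil {
        return err
    }
}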
I am trying out the gorilla websocket library to get a feel of how websockets work with Go. But I keep getting this error message when I hit the refresh button in the browser.
When I reload the web page that I am using to test the websocket, I get these error messages on the Go console:
2015/09/18 19:04:41 websocket: close 1001
2015/09/18 19:04:41 http: response.Write on hijacked connection
The first one is the status code for "going away". I am assuming it is because when I hit refresh, the page goes away from the websocket connection, so that makes sense to me.
But then I get an error message that I don't understand: the hijacked one. Why do I get it, and what does it mean?
I am running my code on localhost:8080 on a Windows machine.
The code I am using:
func wsHandler(w http.ResponseWriter, r *http.Request, _ httprouter.Params) error {
    conn, err := websocket.Upgrade(w, r, nil, 1024, 1024)
    if err != nil {
        return err
    }
    defer conn.Close()
    for {
        _, msg, err := conn.ReadMessage()
        if err != nil {
            return err
        }
        log.Println(string(msg))
    }
    return nil
}
Client side:
var conn = new WebSocket("ws://localhost:8080/api/messages/websocket");
conn.onclose = function (e) {
    console.log("onclose fired");
};
conn.onopen = function (e) {
    console.log("onopen fired");
};
conn.onmessage = function (e) {
    console.log(e.data);
};
setTimeout(function () {
    conn.send("foo!");
}, 1500);
When I load the page the first time, only foo! is output to the console. So all in all, after loading the page once and then reloading it twice, I get output like this:
2015/09/18 19:04:39 foo!
2015/09/18 19:04:41 websocket: close 1001
2015/09/18 19:04:41 http: response.Write on hijacked connection
2015/09/18 19:04:43 foo!
2015/09/18 19:04:44 websocket: close 1001
2015/09/18 19:04:44 http: response.Write on hijacked connection
2015/09/18 19:04:46 foo!
What does this mean? Am I doing something wrong?
The websocket.Upgrade function hijacks the underlying network connection from the http.ResponseWriter.
The ResponseWriter.Write method logs this message when Write is called after the connection has been hijacked from the writer.
It looks like the router or middleware used by the application is writing to the connection after the websocket handler returns.
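If that wrapper writes an HTTP error response whenever the handler returns a non-nil error, one way to silence the message is to stop propagating errors once the connection has been upgraded, since at that point nothing more can be written to the ResponseWriter; a sketch:

func wsHandler(w http.ResponseWriter, r *http.Request, _ httprouter.Params) error {
    conn, err := websocket.Upgrade(w, r, nil, 1024, 1024)
    if err != nil {
        return err // not hijacked yet; an HTTP error response is still possible
    }
    defer conn.Close()
    for {
        _, msg, err := conn.ReadMessage()
        if err != nil {
            log.Println(err)
            return nil // hijacked: report nothing upstream
        }
        log.Println(string(msg))
    }
}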
I'm trying to implement a websocket proxy server for GlassFish. If I try to connect more than one client, I get this error:
ReadMessage Failed: websocket: close 1007 Illegal UTF-8 Sequence.
I'm sure the GlassFish server is sending the right data, because the same server works properly with another proxy server implemented in node.js.
func GlassFishHandler(conn *websocket.Conn) {
    defer conn.Close()
    conn.SetReadDeadline(time.Now().Add(1000 * time.Second))
    conn.SetWriteDeadline(time.Now().Add(1000 * time.Second))
    fmt.Println("WS-GOLANG PROXY SERVER: Connected to GlassFish")
    for {
        messageType, reader, err := conn.NextReader()
        if err != nil {
            fmt.Println("ReadMessage Failed: ", err) // <- error here
        } else {
            message, err := ioutil.ReadAll(reader)
            if err == nil && messageType == websocket.TextMessage {
                var dat map[string]interface{}
                if err := json.Unmarshal(message, &dat); err != nil {
                    panic(err)
                }
                // get client destination id
                clientId := dat["target"].(string)
                fmt.Println("Msg from GlassFish for Client: ", dat)
                // pass through
                clients[clientId].WriteMessage(websocket.TextMessage, message)
            }
        }
    }
}
Summing up my comments as an answer:
When you are writing to the client, you are taking the clientId from the GlassFish message, fetching the client from a map, and then writing to it - basically clients[clientId].WriteMessage(...).
While your map access can be thread-safe, writing is not, as this can be seen as:
// map access - can be safe if you're using a concurrent map
client := clients[clientId]
// writing to a client, not protected at all
client.WriteMessage(...)
So what's probably happening is that two separate goroutines are writing to the same client at the same time. You should protect your client from it by adding a mutex in the WriteMessage method implementation.
BTW actually instead of protecting this method with a mutex, a better, more "go-ish" approach would be to use a channel to write the message, and a goroutine per client that consumes from the channel and writes to the actual socket.
So in the client struct I'd do something like this:
type message struct {
    msgtype string
    msg     string
}

type client struct {
    ...
    msgqueue chan *message
}

func (c *client) WriteMessage(messageType, messageText string) {
    // I'm simplifying here, but you get the idea
    c.msgqueue <- &message{msgtype: messageType, msg: messageText}
}

func (c *client) writeLoop() {
    go func() {
        for msg := range c.msgqueue {
            c.actuallyWriteMessage(msg)
        }
    }()
}
and when creating a new client instance, just launch the write loop.
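For example, with a hypothetical constructor (the buffer size is arbitrary):

func newClient() *client {
    c := &client{msgqueue: make(chan *message, 16)}
    c.writeLoop() // starts the consumer goroutine and returns immediately
    return c
}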