Gorilla websocket error: close 1007 Illegal UTF-8 Sequence - go

I'm trying to implement a websocket proxy server for GlassFish. If I try to connect more than one client, I get this error:
ReadMessage Failed: websocket: close 1007 Illegal UTF-8 Sequence.
I'm sure the GlassFish server is sending the right data, because the same server works properly with another proxy server implemented in node.js.
func GlassFishHandler(conn *websocket.Conn) {
    defer conn.Close()
    conn.SetReadDeadline(time.Now().Add(1000 * time.Second))
    conn.SetWriteDeadline(time.Now().Add(1000 * time.Second))
    fmt.Println("WS-GOLANG PROXY SERVER: Connected to GlassFish")
    for {
        messageType, reader, err := conn.NextReader()
        if err != nil {
            fmt.Println("ReadMessage Failed: ", err) // <- error here
        } else {
            message, err := ioutil.ReadAll(reader)
            if err == nil && messageType == websocket.TextMessage {
                var dat map[string]interface{}
                if err := json.Unmarshal(message, &dat); err != nil {
                    panic(err)
                }
                // get client destination id
                clientId := dat["target"].(string)
                fmt.Println("Msg from GlassFish for Client: ", dat)
                // pass through
                clients[clientId].WriteMessage(websocket.TextMessage, message)
            }
        }
    }
}

Summing up my comments as an answer:
When you are writing to the client, you are taking the clientId from the GlassFish message, fetching the client from a map, and then writing to it - basically clients[clientId].WriteMessage(...).
While your map access can be thread-safe, the write is not, as the call can be seen as:
// map access - can be safe if you're using a concurrent map
client := clients[clientId]
// writing to a client, not protected at all
client.WriteMessage(...)
So what's probably happening is that two separate goroutines are writing to the same client at the same time. You should protect your client from this by adding a mutex in the WriteMessage method implementation.
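A minimal sketch of that mutex approach (assuming a small client wrapper around the gorilla connection; the struct and field names here are illustrative):
// assumes: import ("sync"; "github.com/gorilla/websocket")
type client struct {
    conn *websocket.Conn
    mu   sync.Mutex // serializes writes to the connection
}

// WriteMessage lets many goroutines call it safely; only one
// touches the underlying connection at a time.
func (c *client) WriteMessage(messageType int, data []byte) error {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.conn.WriteMessage(messageType, data)
}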
BTW, instead of protecting this method with a mutex, a better, more "go-ish" approach would be to use a channel to write the message, and a goroutine per client that consumes from the channel and writes to the actual socket.
So in the client struct I'd do something like this:
type message struct {
    msgtype string
    msg     string
}

type client struct {
    ...
    msgqueue chan *message
}

func (c *client) WriteMessage(messageType, messageText string) {
    // I'm simplifying here, but you get the idea
    c.msgqueue <- &message{msgtype: messageType, msg: messageText}
}

func (c *client) writeLoop() {
    go func() {
        for msg := range c.msgqueue {
            c.actuallyWriteMessage(msg)
        }
    }()
}
And when creating a new client instance, just launch the write loop.
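Something like this, for example (a sketch; the conn field and the buffer size are assumptions, not part of the answer's struct):
func newClient(conn *websocket.Conn) *client {
    c := &client{
        conn:     conn,
        msgqueue: make(chan *message, 16), // small buffer to absorb bursts
    }
    c.writeLoop() // launches the single writer goroutine
    return c
}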

Related

How to release a websocket and redis gateway server resource in golang?

I have a gateway server which can push messages to the client side over a websocket. When a new client connects to my server, I generate a cid for it. I then also subscribe to a Redis channel named with the cid; if any message is published to this channel, my server pushes it to the client side. For now, all units are working fine, but when I run a benchmark test with thor, it crashes. I find that DeliverMessage has an issue: it never exits, since it has an infinite loop. But since Redis needs to subscribe to something, I don't know how to avoid the loop.
func (h *Hub) DeliverMessage(pool *redis.Pool) {
    conn := pool.Get()
    defer conn.Close()
    var gPubSubConn *redis.PubSubConn
    gPubSubConn = &redis.PubSubConn{Conn: conn}
    defer gPubSubConn.Close()
    for {
        switch v := gPubSubConn.Receive().(type) {
        case redis.Message:
            // fmt.Printf("Channel=%q | Data=%s\n", v.Channel, string(v.Data))
            h.Push(string(v.Data))
        case redis.Subscription:
            fmt.Printf("Subscription message: %s : %s %d\n", v.Channel, v.Kind, v.Count)
        case error:
            fmt.Println("Error pub/sub, delivery has stopped", v)
            panic("Error pub/sub")
        }
    }
}
In the main function, I call the above function as:
go h.DeliverMessage(pool)
But when I test it with a huge number of connections, I get an error like:
ERR max number of clients reached
So I changed the Redis pool size by changing MaxIdle:
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     5000,
        IdleTimeout: 240 * time.Second,
        Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}
But it still doesn't work, so I'd like to know whether there is any good way to kill a goroutine after the websocket has disconnected from my server, in the select case below:
case client := <-h.Unregister:
    if _, ok := h.Clients[client]; ok {
        delete(h.Clients, client)
        delete(h.Connections, client.CID)
        close(client.Send)
        if err := gPubSubConn.Unsubscribe(client.CID); err != nil {
            panic(err)
        }
        // TODO kill the subscribe goroutine once the client side has disconnected ...
    }
But how do I identify that goroutine? Can I do it the Unix way, like kill -9 <PID>?
Look at the example here.
You can make your goroutine exit by having a return statement inside your switch case in DeliverMessage, once you're not receiving anything more. I'm guessing you'd want to return in case error (or, as seen in the example, case 0), and your goroutine will exit. Or, if I'm misunderstanding things and case client := <-h.Unregister: is inside DeliverMessage, just return there.
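For instance, a sketch of DeliverMessage's loop with the exit paths added (same shape as the question's code, only the error and empty-subscription cases now return):
for {
    switch v := gPubSubConn.Receive().(type) {
    case redis.Message:
        h.Push(string(v.Data))
    case redis.Subscription:
        if v.Count == 0 {
            // no channels left subscribed; nothing more will arrive
            return
        }
    case error:
        fmt.Println("Error pub/sub, delivery has stopped:", v)
        return // ends this goroutine instead of crashing the process
    }
}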
You're also closing your connection twice: defer gPubSubConn.Close() simply calls conn.Close(), so you don't need defer conn.Close().
Also take a look at the Pool documentation and what all the parameters actually do. If you want to handle many connections, set MaxActive to 0: "When zero, there is no limit on the number of connections in the pool." (And do you actually want the idle timeout?)
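A sketch of a pool tuned that way (the numbers are illustrative assumptions):
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:   100, // connections kept open while idle
        MaxActive: 0,   // zero means no limit on connections in the pool
        Dial:      func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}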
Actually, I had the wrong design architecture. Let me explain what I want to do:
1. A client can connect to my websocket server.
2. The server has several HTTP handlers, and the admin can post data via a handler; the structure of the data looks like:
{
    "cid": "something",
    "body": {
    }
}
Since I have several Nodes running to serve our clients, and Nginx can dispatch each request from the admin to a totally different Node, while only one Node holds the connection for the cid "something", I need to publish this data to Redis; whichever Node gets the data will send the message to the client side.
3. Look up the NodeID that I am going to publish to, given a cid.
// redis code & golang
NodeID, err := conn.Do("HGET", "NODE_MAP", cid)
4. Now I can publish any message from the admin to the NodeID we got in step 3.
// redis code & golang
NodeID, err := conn.Do("PUBLISH", NodeID, data)
5. Time to show the core code related to this question. I subscribe to a channel whose name is the NodeID, like the following:
go func() {
    for {
        switch v := gPubSubConn.Receive().(type) {
        case redis.Message:
            fmt.Println("Got a message", v.Data)
            h.Broadcast <- v.Data
            pipeline <- v.Data
        case error:
            panic(v)
        }
    }
}()
6. To manage your websockets, you also need a goroutine, like the following:
go func() {
    for {
        select {
        case client := <-h.Register:
            h.Clients[client] = true
            cid := client.CID
            h.Connections[cid] = client
            body := "something"
            client.Send <- []byte(body) // greeting
        case client := <-h.Unregister:
            if _, ok := h.Clients[client]; ok {
                delete(h.Clients, client)
                delete(h.Connections, client.CID)
                close(client.Send)
            }
        case message := <-h.Broadcast:
            fmt.Println("message is", message)
        }
    }
}()
The last thing is managing a Redis pool. You don't really need a connection pool right now, since we only have two goroutines and one main process:
func newPool(addr string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     100,
        IdleTimeout: 240 * time.Second,
        Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
    }
}

var (
    pool        *redis.Pool
    redisServer = flag.String("redisServer", ":6379", "")
)

pool = newPool(*redisServer)
conn := pool.Get()
defer conn.Close()

Syncing websocket loops with channels in Golang

I'm facing a dilemma here trying to keep certain websockets in sync for a given user. Here's the basic setup:
type msg struct {
    Key   string
    Value string
}

type connStruct struct {
    //...
    ConnRoutineChans []*chan string
    LoggedIn         bool
    Login            string
    //...
    Sockets []*websocket.Conn
}

var (
    //...
    /* LIST OF CONNECTED USERS AND THEIR IP ADDRESSES */
    guestMap sync.Map
)
func main() {
    post("Started...")
    rand.Seed(time.Now().UTC().UnixNano())
    http.HandleFunc("/wss", wsHandler)
    panic(http.ListenAndServeTLS("...", "...", "...", nil))
}

func wsHandler(w http.ResponseWriter, r *http.Request) {
    if r.Header.Get("Origin")+":8080" != "https://...:8080" {
        http.Error(w, "Origin not allowed", 403)
        fmt.Println("Client origin not allowed! (https://" + r.Host + ")")
        fmt.Println("r.Header Origin: " + r.Header.Get("Origin"))
        return
    }
    ///
    conn, err := websocket.Upgrade(w, r, w.Header(), 1024, 1024)
    if err != nil {
        http.Error(w, "Could not open websocket connection", http.StatusBadRequest)
        fmt.Println("Could not open websocket connection with client!")
    }
    //ADD CONNECTION TO guestMap IF CONNECTION IS nil
    var authString string = /*gets device identity*/;
    var authChan chan string = make(chan string)
    authValue, authOK := guestMap.Load(authString)
    if !authOK {
        // NO SESSION, CREATE A NEW ONE
        newSession := getSession()
        //defer newSession.Close();
        guestMap.Store(authString, connStruct{LoggedIn: false,
            ConnRoutineChans: []*chan string{&authChan},
            Login:            "",
            Sockets:          []*websocket.Conn{conn},
            /* .... */})
    } else {
        //SESSION STARTED, ADD NEW SOCKET TO Sockets
        var tempConn connStruct = authValue.(connStruct)
        tempConn.Sockets = append(tempConn.Sockets, conn)
        tempConn.ConnRoutineChans = append(tempConn.ConnRoutineChans, &authChan)
        guestMap.Store(authString, tempConn)
    }
    //
    go echo(conn, authString, &authChan)
}
func echo(conn *websocket.Conn, authString string, authChan *chan string) {
    var message msg
    //TEST CHANNEL
    authValue, _ := guestMap.Load(authString)
    go sendToChans(authValue.(connStruct).ConnRoutineChans, "sup dude?")
    fmt.Println("got past send...")
    for {
        select {
        case val := <-*authChan:
            // use value of channel
            fmt.Println("AuthChan for user #"+strconv.Itoa(myConnNumb)+" spat out: ", val)
        default:
            // if channels are empty, this is executed
        }
        readError := conn.ReadJSON(&message)
        fmt.Println("got past readJson...")
        if readError != nil || message.Key == "" {
            //DISCONNECT USER
            //.....
            return
        }
        //
        _key, _value := chief(message.Key, message.Value, conn, browserAndOS, authString)
        if writeError := conn.WriteJSON(_key + "|" + _value); writeError != nil {
            //...
            return
        }
        fmt.Println("got past writeJson...")
    }
}

func sendToChans(chans []*chan string, message string) {
    for i := 0; i < len(chans); i++ {
        *chans[i] <- message
    }
}
I know, a big block of code, eh? And I commented out most of it...
Anyway, if you've ever used a websocket, most of it should be quite familiar:
1) func wsHandler() fires every time a user connects. It makes an entry in guestMap (for each unique device that connects) which holds a connStruct, which in turn holds a list of channels: ConnRoutineChans []*chan string. This all gets passed to:
2) echo(), which is a goroutine that constantly runs for each websocket connection. Here I'm just testing out sending a message to the other running goroutines, but it seems my for loop isn't actually firing constantly. It only fires when the websocket receives a message from the open tab/window it's connected to. (If anyone can clarify this mechanic, I'd love to know why it's not looping constantly?)
3) For each window or tab that the user has open on a given device, there is a websocket and a channel stored in arrays. I want to be able to send a message to all the channels in the array (essentially the other goroutines for open tabs/windows on that device) and receive the message in those other goroutines to change some variables set in the constantly running goroutine.
What I have right now works only for the very first connection on a device, and (of course) it sends "sup dude?" to itself, since it's the only channel in the array at the time. Then if you open a new tab (or even many), the message doesn't get sent to anyone at all! Strange... Then when I close all the tabs (and my commented-out logic removes the device entry from guestMap) and start up a new device session, still only the first connection gets its own message.
I already have a method for sending a message to all the other websockets on a device, but sending to a goroutine seems to be a little more tricky than I thought.
To answer my own question:
First, I've switched from a sync.Map to a normal map. Second, so that nobody reads and writes to it at the same time, I've made a channel that you go through to do any read/write operation on the map. I've been trying my best to keep the data access and manipulation quick to execute so the channel doesn't get crowded too easily. Here's a small example of that:
package main

import (
    "fmt"
)

var (
    guestMap           map[string]*guestStruct = make(map[string]*guestStruct)
    guestMapActionChan                         = make(chan actionStruct)
)

type actionStruct struct {
    Action     func([]interface{}) []interface{}
    Params     []interface{}
    ReturnChan chan []interface{}
}

type guestStruct struct {
    Name string
    Numb int
}

func main() {
    // make chan listener
    go guestMapActionChanListener(guestMapActionChan)
    // some guest logs in...
    newGuest := guestStruct{Name: "Larry Josher", Numb: 1337}
    // add to the map
    addRetChan := make(chan []interface{})
    guestMapActionChan <- actionStruct{
        Action:     guestMapAdd,
        Params:     []interface{}{&newGuest},
        ReturnChan: addRetChan,
    }
    addReturned := <-addRetChan
    fmt.Println(addReturned)
    fmt.Println("Also, numb was changed by listener to:", newGuest.Numb)
    // Same kind of thing for removing, except (of course) there's
    // a lot more logic to a real-life application.
}

func guestMapActionChanListener(c chan actionStruct) {
    for {
        value := <-c
        returned := value.Action(value.Params)
        value.ReturnChan <- returned
        close(value.ReturnChan)
    }
}

func guestMapAdd(params []interface{}) []interface{} {
    // .. do some parameter verification checks
    theStruct := params[0].(*guestStruct)
    name := theStruct.Name
    theStruct.Numb = 75
    guestMap[name] = theStruct
    return []interface{}{"Added '" + name + "' to the guestMap"}
}
For communication between connections, I just have each socket loop hold onto its guestStruct, and have more guestMapActionChan functions that take care of distributing data to the other guests' guestStructs.
Now, I'm not going to mark this as the correct answer unless I get some better suggestions on how to do something like this the right way. But for now this works and should guarantee no races for reading/writing to the map.
Edit: The correct approach should really have been to just use a sync.Mutex like I do in the (mostly) finished project GopherGameServer.
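For reference, a minimal sketch of that sync.Mutex approach (the guard and helper names are illustrative, not taken from GopherGameServer):
// assumes: import "sync"
var (
    guestMap   = make(map[string]*guestStruct)
    guestMapMu sync.Mutex // guards guestMap
)

func addGuest(g *guestStruct) {
    guestMapMu.Lock()
    defer guestMapMu.Unlock()
    guestMap[g.Name] = g
}

func getGuest(name string) (*guestStruct, bool) {
    guestMapMu.Lock()
    defer guestMapMu.Unlock()
    g, ok := guestMap[name]
    return g, ok
}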

Graceful shutdown of gRPC downstream

Using the following protocol buffer definition:
syntax = "proto3";
package pb;
message SimpleRequest {
int64 number = 1;
}
message SimpleResponse {
int64 doubled = 1;
}
// All the calls in this serivce preform the action of doubling a number.
// The streams will continuously send the next double, eg. 1, 2, 4, 8, 16.
service Test {
// This RPC streams from the server only.
rpc Downstream(SimpleRequest) returns (stream SimpleResponse);
}
I'm able to successfully open a stream, and continuously get the next doubled number from the server.
My Go code for running this looks like:
ctxDownstream, cancel := context.WithCancel(ctx)
downstream, err := testClient.Downstream(ctxDownstream, &pb.SimpleRequest{Number: 1})
for {
    responseDownstream, err := downstream.Recv()
    if err != io.EOF {
        println(fmt.Sprintf("downstream response: %d, error: %v", responseDownstream.Doubled, err))
        if responseDownstream.Doubled >= 32 {
            break
        }
    }
}
cancel() // !!This is not a graceful shutdown
println(fmt.Sprintf("%v", downstream.Trailer()))
The problem I'm having is that using a context cancellation means my downstream.Trailer() response is empty. Is there a way to gracefully close this connection from the client side and receive downstream.Trailer()?
Note: if I close the downstream connection from the server side, my trailers are populated. But I have no way of instructing my server side to close this particular stream. So there must be a way to gracefully close a stream client-side.
Thanks.
As requested, some server code:
func (b *binding) Downstream(req *pb.SimpleRequest, stream pb.Test_DownstreamServer) error {
    request := req
    r := make(chan *pb.SimpleResponse)
    e := make(chan error)
    ticker := time.NewTicker(200 * time.Millisecond)
    defer func() { ticker.Stop(); close(r); close(e) }()
    go func() {
        defer func() { recover() }()
        for {
            select {
            case <-ticker.C:
                response, err := b.Endpoint(stream.Context(), request)
                if err != nil {
                    e <- err
                }
                r <- response
            }
        }
    }()
    for {
        select {
        case err := <-e:
            return err
        case response := <-r:
            if err := stream.Send(response); err != nil {
                return err
            }
            request.Number = response.Doubled
        case <-stream.Context().Done():
            return nil
        }
    }
}
You will still need to populate the trailer with some information; I use a grpc.StreamServerInterceptor to do this.
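A sketch of what such an interceptor could look like (the trailer key and value are assumptions, not from the original answer):
// assumes: import ("google.golang.org/grpc"; "google.golang.org/grpc/metadata")
func trailerInterceptor(srv interface{}, ss grpc.ServerStream,
    info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
    err := handler(srv, ss)
    // attach trailer metadata before the stream completes
    ss.SetTrailer(metadata.Pairs("x-reason", "stream-complete"))
    return err
}

// registered when building the server:
// grpc.NewServer(grpc.StreamInterceptor(trailerInterceptor))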
According to the gRPC Go documentation:
Trailer returns the trailer metadata from the server, if there is any.
It must only be called after stream.CloseAndRecv has returned, or
stream.Recv has returned a non-nil error (including io.EOF).
So if you want to read the trailer in the client, try something like this:
ctxDownstream, cancel := context.WithCancel(ctx)
defer cancel()
for {
    ...
    // on error or EOF
    break
}
println(fmt.Sprintf("%v", downstream.Trailer()))
Break from the infinite loop when there is an error (or EOF) and print the trailer. cancel will be called at the end of the function, as it is deferred.
I can't find a reference that explains it clearly, but this doesn't appear to be possible.
On the wire, grpc-status is followed by the trailer metadata when the call completes normally (i.e. the server exits the call).
When the client cancels the call, neither of these is sent.
It seems that gRPC treats call cancellation as a quick abort of the RPC, not much different from the socket being dropped.
Adding a "cancel message" via request streaming works; the server can pick this up and cancel the stream from its end, and the trailers will still get sent:
message SimpleRequest {
    oneof RequestType {
        int64 number = 1;
        bool cancel = 2;
    }
}

....

rpc Downstream(stream SimpleRequest) returns (stream SimpleResponse);
Although this does add a bit of complication to the code.
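For illustration, the client side could then look roughly like this (a sketch under the modified proto; the generated oneof wrapper names follow protoc conventions but are assumptions here):
// Downstream is now bidirectional, so the client gets a send side too.
downstream, err := testClient.Downstream(ctx)
if err != nil {
    log.Fatal(err)
}
// ... read doubled values via downstream.Recv() as before ...

// ask the server to end the stream from its side
err = downstream.Send(&pb.SimpleRequest{
    RequestType: &pb.SimpleRequest_Cancel{Cancel: true},
})
// drain until the server closes; after a non-nil Recv error
// the trailers are available
for {
    if _, err := downstream.Recv(); err != nil {
        break
    }
}
fmt.Println(downstream.Trailer())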

grpc go : how to know in server side, when client closes the connection

I am using grpc go.
I have an RPC which looks roughly like this:
service MyService {
    // Operation 1
    rpc Operation1(OperationRequest) returns (OperationResponse) {
        option (google.api.http) = {
            post: "/apiver/myser/oper1"
            body: "*"
        };
    }
The client connects using the grpc.Dial() method.
When a client connects, the server does some bookkeeping. When the client disconnects, the bookkeeping needs to be removed.
Is there any callback that can be registered that can be used to know that the client has closed the session?
Based on your code, it's a unary RPC call: the client connects to the server only once, sends a request, and gets a response. The client will wait for the response until timeout.
In server-side streaming, you can detect the client disconnect from the
<-stream.Context().Done()
signal.
With that, you can implement your own channel in a goroutine to build your logic. Use a select statement like:
select {
case <-srv.Context().Done():
    return
case res := <-<YOUR OWN CHANNEL, WITH RECEIVED REQUEST OR YOUR RESPONSE>:
    ....
}
I provide some detailed code here
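For illustration, a minimal server-streaming handler shaped that way (the service, message, and channel names are assumptions):
func (s *server) Subscribe(req *pb.SubscribeRequest, stream pb.MyService_SubscribeServer) error {
    updates := make(chan *pb.Update) // fed by your own publisher logic
    for {
        select {
        case <-stream.Context().Done():
            // client disconnected or call cancelled: clean up and exit
            return stream.Context().Err()
        case u := <-updates:
            if err := stream.Send(u); err != nil {
                return err
            }
        }
    }
}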
In client streaming, besides the above signal, you can check whether the server can still receive messages:
req, err := stream.Recv()
if err == io.EOF {
    break
} else if err != nil {
    return err
}
Assuming that the server is implemented in Go, there's an API on the *grpc.ClientConn that reports state changes in the connection.
func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool
https://godoc.org/google.golang.org/grpc#ClientConn.WaitForStateChange
These are the docs on each of the connectivity.State values:
https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md
If you need to expose a channel that you can listen to for the client closing the connection then you could do something like this:
func connectionOnState(ctx context.Context, conn *grpc.ClientConn, states ...connectivity.State) <-chan struct{} {
    done := make(chan struct{})
    go func() {
        // any return from this func will close the channel
        defer close(done)
        // continue checking for state change
        // until one of break states is found
        for {
            change := conn.WaitForStateChange(ctx, conn.GetState())
            if !change {
                // ctx is done, return
                // something upstream is cancelling
                return
            }
            currentState := conn.GetState()
            for _, s := range states {
                if currentState == s {
                    // matches one of the states passed
                    // return, closing the done channel
                    return
                }
            }
        }
    }()
    return done
}
If you only want to consider connections that are shutting down or shut down, then you could call it like so:
// any receives from shutdownCh will mean the state Shutdown
shutdownCh := connectionOnState(ctx, conn, connectivity.Shutdown)
As in the GitHub issue (link), you can do it like this:
err = stream.Context().Err()
if err != nil {
    break
}

RPC from both client and server in Go

Is it actually possible to do RPC calls from a server to a client with the net/rpc package in Go? If not, is there a better solution out there?
I am currently using Thrift (thrift4go) for server->client and client->server RPC functionality. By default, Thrift does only client->server calls, just like net/rpc. As I also required server->client communication, I did some research and found bidi-thrift. Bidi-thrift explains how to connect a Java server and a Java client to have bidirectional Thrift communication.
What bidi-thrift does, and its limitations
A TCP connection has an incoming and an outgoing communication line (RX and TX). The idea of bidi-thrift is to split RX and TX and provide these to a server (processor) and a client (remote) on both the client application and the server application. I found this to be hard to do in Go. Also, this way no "response" is possible (the response line is in use). Therefore, all methods in the services must be "oneway void" (fire and forget; a call gives no result).
The solution
I changed the idea of bidi-thrift and made the client open two connections to the server, A and B. The first connection (A) is used to perform client -> server communication (where the client makes the calls, as usual). The second connection (B) is 'hijacked' and connected to a server (processor) on the client, while it is connected to a client (remote) on the server. I've got this working with a Go server and a Java client. It works very well: it's fast and reliable (just like normal Thrift).
Some source... The B connection (server -> client) is set up like this:
Go server
// factories
framedTransportFactory := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory())
protocolFactory := thrift.NewTBinaryProtocolFactoryDefault()
// create socket listener
addr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:9091")
if err != nil {
log.Print("Error resolving address: ", err.Error(), "\n")
return
}
serverTransport, err := thrift.NewTServerSocketAddr(addr)
if err != nil {
log.Print("Error creating server socket: ", err.Error(), "\n")
return
}
// Start the server to listen for connections
log.Print("Starting the server for B communication (server->client) on ", addr, "\n")
err = serverTransport.Listen()
if err != nil {
log.Print("Error during B server: ", err.Error(), "\n")
return //err
}
// Accept new connections and handle those
for {
transport, err := serverTransport.Accept()
if err != nil {
return //err
}
if transport != nil {
// Each transport is handled in a goroutine so the server is availiable again.
go func() {
useTransport := framedTransportFactory.GetTransport(transport)
client := worldclient.NewWorldClientClientFactory(useTransport, protocolFactory)
// Thats it!
// Lets do something with the connction
result, err := client.Hello()
if err != nil {
log.Printf("Errror when calling Hello on client: %s\n", err)
}
// client.CallSomething()
}()
}
}
Java client
// preparations for B connection
TTransportFactory transportFactory = new TTransportFactory();
TProtocolFactory protocolFactory = new TBinaryProtocol.Factory();
YourServiceProcessor processor = new YourService.Processor<YourServiceProcessor>(new YourServiceProcessor(this));

/* Create thrift connection for B calls (server -> client) */
try {
    // create the transport
    final TTransport transport = new TSocket("127.0.0.1", 9091);
    // open the transport
    transport.open();
    // add framing to the transport layer
    final TTransport framedTransport = new TFramedTransport(transportFactory.getTransport(transport));
    // connect framed transports to protocols
    final TProtocol protocol = protocolFactory.getProtocol(framedTransport);
    // let the processor handle the requests in new Thread
    new Thread() {
        public void run() {
            try {
                while (processor.process(protocol, protocol)) {}
            } catch (TException e) {
                e.printStackTrace();
            } catch (NullPointerException e) {
                e.printStackTrace();
            }
        }
    }.start();
} catch (Exception e) {
    e.printStackTrace();
}
I came across rpc2, which implements it. An example:
Server.go
// server.go
package main

import (
    "fmt"
    "net"

    "github.com/cenkalti/rpc2"
)

type Args struct{ A, B int }
type Reply int

func main() {
    srv := rpc2.NewServer()
    srv.Handle("add", func(client *rpc2.Client, args *Args, reply *Reply) error {
        // Reversed call (server to client)
        var rep Reply
        client.Call("mult", Args{2, 3}, &rep)
        fmt.Println("mult result:", rep)
        *reply = Reply(args.A + args.B)
        return nil
    })
    lis, _ := net.Listen("tcp", "127.0.0.1:5000")
    srv.Accept(lis)
}
Client.go
// client.go
package main

import (
    "fmt"
    "net"

    "github.com/cenkalti/rpc2"
)

type Args struct{ A, B int }
type Reply int

func main() {
    conn, _ := net.Dial("tcp", "127.0.0.1:5000")
    clt := rpc2.NewClient(conn)
    clt.Handle("mult", func(client *rpc2.Client, args *Args, reply *Reply) error {
        *reply = Reply(args.A * args.B)
        return nil
    })
    go clt.Run()
    var rep Reply
    clt.Call("add", Args{5, 2}, &rep)
    fmt.Println("add result:", rep)
}
RPC is a (remote) service. Whenever some computer requests a remote service, it is acting as a client asking the server to provide the service. Within this "definition", the concept of a server calling a client's RPC has no well-defined meaning.
