Can't Revel save a session in a WebSocket controller? - websocket

I want to create a goroutine for each user to send websocket data sequentially, so I wrote the code below.
func (c User) List(ws *websocket.Conn) revel.Result {
    disconnect := make(chan bool)
    if c.Session["connected"] != "true" {
        c.Session["connected"] = "true"
        go func() {
            for {
                select {
                case <-ticker.C:
                    if websocket.JSON.Send(ws, &map[string]interface{}{"hoge": "fuga"}) != nil {
                        c.Session["connected"] = "false"
                        disconnect <- true
                    }
                }
            }
        }()
    }
    <-disconnect
    return nil
}
However, this code creates a new goroutine on every access.
So I tried the code below.
func (c App) WebSocket(ws *websocket.Conn) revel.Result {
    fmt.Println(c.Session)
    c.Session["connected"] = "true"
    return nil
}
https://gist.github.com/uzimith/0066e863a0809d4a91ec
The output is this:
map[]
map[]
Can't Revel save a session in a WebSocket controller?
I think we need a Session.Save method, don't you?

I figured it out.
Revel uses a cookie to store session data: the values are kept in a cookie with the key "REVEL_SESSION", so Revel persists the session by sending a Set-Cookie HTTP header with the response. However, we are using the websocket protocol, and after the handshake there is no further HTTP response to carry that header, so the cookie can never be updated.
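Since the cookie can only be written on an ordinary HTTP response, one workaround is to keep the "connected" flag on the server instead of in the session. Below is a minimal sketch, assuming an older Revel where c.Session behaves like a map[string]string (as in the question) and that some per-user key such as "uid" was stored in the session during a normal HTTP login; the map, mutex, and key name are illustrative and not part of Revel's API, and import paths vary by Revel version:
package controllers

import (
    "sync"

    "github.com/revel/revel"
    "golang.org/x/net/websocket"
)

type User struct {
    *revel.Controller
}

var (
    mu        sync.Mutex
    connected = map[string]bool{} // keyed by a per-user session value
)

func (c User) List(ws *websocket.Conn) revel.Result {
    // Reading the session still works here, because the cookie is sent
    // with the websocket handshake request; only writing back is lost.
    id := c.Session["uid"]
    mu.Lock()
    already := connected[id]
    connected[id] = true
    mu.Unlock()
    if already {
        return nil // a goroutine is already pushing data for this user
    }
    defer func() {
        mu.Lock()
        delete(connected, id)
        mu.Unlock()
    }()
    // ... push data on ws in a loop, as in the original code ...
    return nil
}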

Related

When is the right time to run AutoMigrate with GORM?

Most Go/GORM examples I've seen show AutoMigrate being called immediately after opening the database connection, including the GORM documentation here. For API services, this would be an expensive/unwanted call with every API request. So I assume that, for API services, AutoMigrate should be removed from the regular flow and handled separately. Is my understanding correct?
From the GORM documentation:
...
db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
if err != nil {
panic("failed to connect database")
}
// Migrate the schema
db.AutoMigrate(&Product{})
...
It wouldn't happen on every API request. Not even close. It'd happen every time the application is started, so basically: connect to the DB in main, and run AutoMigrate there. Pass the connection as a dependency to your handlers/service packages/wherever you need them. The HTTP handler can just access it there.
Basically this:
package main

import (
    "fmt"
    "net/http"
    "os"

    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
    // plus your own foo and handlers packages
)

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        fmt.Printf("Failed to connect to DB: %v", err)
        os.Exit(1)
    }
    // see below how this is handled
    fRepo := foo.New(db) // all repos here
    fRepo.Migrate()      // this handles migrations
    // create request handlers
    fHandler := handlers.NewFoo(fRepo) // migrations have already been handled
    mux := http.NewServeMux()
    mux.HandleFunc("/foo/list", fHandler.List) // set up handlers
    // start server etc...
}
Have the code that interacts with the DB in some package like this:
package foo

import "gorm.io/gorm"

// Stuff is whatever model this repo manages
type Stuff struct {
    gorm.Model
    // fields...
}

// Connection is the DB connection interface as you use it
// (the subset of *gorm.DB methods this repo needs)
type Connection interface {
    Create(value interface{}) *gorm.DB
    Find(dest interface{}, conds ...interface{}) *gorm.DB
    AutoMigrate(dst ...interface{}) error
}

type Foo struct {
    db Connection
}

func New(db Connection) *Foo {
    return &Foo{
        db: db,
    }
}

func (f *Foo) Migrate() {
    f.db.AutoMigrate(&Stuff{}) // all types this repo deals with go here
}

func (f *Foo) GetAll() ([]Stuff, error) {
    ret := []Stuff{}
    res := f.db.Find(&ret)
    return ret, res.Error
}
Then have your handlers structured in a sensible way, and provide them with the repository (aka foo package stuff):
package handlers

import (
    "io"
    "net/http"
    // plus an import of your foo package for foo.Stuff
)

type FooRepo interface {
    GetAll() ([]foo.Stuff, error)
}

type FooHandler struct {
    repo FooRepo
}

func NewFoo(repo FooRepo) *FooHandler {
    return &FooHandler{
        repo: repo,
    }
}

func (f *FooHandler) List(res http.ResponseWriter, req *http.Request) {
    all, err := f.repo.GetAll()
    if err != nil {
        res.WriteHeader(http.StatusInternalServerError)
        io.WriteString(res, err.Error())
        return
    }
    // write response as needed, e.g. encode `all` to JSON
    _ = all
}
Whenever you deploy an updated version of your application, the main function will call AutoMigrate, and the application will handle requests without constantly re-connecting to the DB or attempting to handle migrations time and time again.
I don't know why you'd think that your application would have to run through the setup for each request, especially given that your main function (or some function you call from main) explicitly creates an HTTP server, and listens on a specific port for requests. The DB connection and subsequent migrations should be handled before you start listening for requests. It's not part of handling requests, ever...

Can't access certain session data with Gin sessions?

I'm new to Gin and Gin sessions, and I have a weird problem I cannot explain: it seems that session data I write in my controller can't be accessed from my middleware.
Allow me to demonstrate:
I do the following in my controller:
func OidcCallBack(c *gin.Context) {
    ...
    // store user info and redirect to original location
    userJsonData, err := json.Marshal(*user)
    if err != nil {
        log.Errorf("error encoding user data: %s", err.Error())
    }
    session.Set(SessionUserInfo, string(userJsonData))
    session.Save()
}
In my middleware, I want to read this session data:
func OidcMiddleware() gin.HandlerFunc {
    return func(c *gin.Context) {
        // get our session variables
        session := sessions.Default(c)
        // some test code, this works btw
        session.Set("bla", "test")
        session.Save()
        blavar := session.Get("bla")
        // get our userinfo
        userinfo := session.Get(controllers.SessionUserInfo) // this results in nil
        log.Debug(blavar)
        if userinfo == nil {
            log.Debugf("filter incoming url: %s", c.Request.URL.String())
            session.Set(controllers.SessionRedirectTarget, c.Request.URL.String())
            session.Save()
            c.Redirect(http.StatusFound, controllers.AuthLogin)
            c.Abort()
        }
        c.Next()
    }
}
When I look in my debugger, I notice that the data set in the controller ended up in a different place in session.store.MemStore.cache.data:
session data set in the middleware ends up in data[0], and the data set in the controller ends up in data[1].
Can someone explain what I need to do to make sure all the data ends up in the same spot, and also why this happens? I can't find any info on this in the readme file.

Syncing websocket loops with channels in Golang

I'm facing a dilemma here trying to keep certain websockets in sync for a given user. Here's the basic setup:
type msg struct {
Key string
Value string
}
type connStruct struct {
//...
ConnRoutineChans []*chan string
LoggedIn bool
Login string
//...
Sockets []*websocket.Conn
}
var (
//...
/* LIST OF CONNECTED USERS AND THEIR IP ADDRESSES */
guestMap sync.Map
)
func main() {
post("Started...")
rand.Seed(time.Now().UTC().UnixNano())
http.HandleFunc("/wss", wsHandler)
panic(http.ListenAndServeTLS("...", "...", "...", nil))
}
func wsHandler(w http.ResponseWriter, r *http.Request) {
if r.Header.Get("Origin")+":8080" != "https://...:8080" {
http.Error(w, "Origin not allowed", 403)
fmt.Println("Client origin not allowed! (https://"+r.Host+")")
fmt.Println("r.Header Origin: "+r.Header.Get("Origin"))
return
}
///
conn, err := websocket.Upgrade(w, r, w.Header(), 1024, 1024)
if err != nil {
http.Error(w, "Could not open websocket connection", http.StatusBadRequest)
fmt.Println("Could not open websocket connection with client!")
}
//ADD CONNECTION TO guestMap IF CONNECTION IS nil
var authString string = /*gets device identity*/;
var authChan chan string = make(chan string);
authValue, authOK := guestMap.Load(authString);
if !authOK {
// NO SESSION, CREATE A NEW ONE
newSession = getSession();
//defer newSession.Close();
guestMap.Store(authString, connStruct{ LoggedIn: false,
ConnRoutineChans: []*chan string{&authChan},
Login: "",
Sockets: []*websocket.Conn{conn}
/* .... */ });
}else{
//SESSION STARTED, ADD NEW SOCKET TO Sockets
var tempConn connStruct = authValue.(connStruct);
tempConn.Sockets = append(tempConn.Sockets, conn);
tempConn.ConnRoutineChans = append(tempConn.ConnRoutineChans, &authChan)
guestMap.Store(authString, tempConn);
}
//
go echo(conn, authString, &authChan);
}
func echo(conn *websocket.Conn, authString string, authChan *chan string) {
var message msg;
//TEST CHANNEL
authValue, _ := guestMap.Load(authString);
go sendToChans(authValue.(connStruct).ConnRoutineChans, "sup dude?")
fmt.Println("got past send...");
for true {
select {
case val := <-*authChan:
// use value of channel
fmt.Println("AuthChan for user #"+strconv.Itoa(myConnNumb)+" spat out: ", val)
default:
// if channels are empty, this is executed
}
readError := conn.ReadJSON(&message)
fmt.Println("got past readJson...");
if readError != nil || message.Key == "" {
//DISCONNECT USER
//.....
return
}
//
_key, _value := chief(message.Key, message.Value, &*conn, browserAndOS, authString)
if writeError := conn.WriteJSON(_key + "|" + _value); writeError != nil {
//...
return
}
fmt.Println("got past writeJson...");
}
}
func sendToChans(chans []*chan string, message string){
for i := 0; i < len(chans); i++ {
*chans[i] <- message
}
}
I know, a big block of code eh? And I commented out most of it...
Anyway, if you've ever used a websocket most of it should be quite familiar:
1) func wsHandler() fires every time a user connects. It makes an entry in guestMap (for each unique device that connects) which holds a connStruct which holds a list of channels: ConnRoutineChans []*chan string. This all gets passed to:
2) echo(), which is a goroutine that constantly runs for each websocket connection. Here I'm just testing out sending a message to other running goroutines, but it seems my for loop isn't actually constantly firing. It only fires when the websocket receives a message from the open tab/window it's connected to. (If anyone can clarify this mechanic, I'd love to know why it's not looping constantly?)
3) For each window or tab that the user has open on a given device there is a websocket and a channel stored in arrays. I want to be able to send a message to all the channels in the array (essentially the other goroutines for open tabs/windows on that device) and receive the message in the other goroutines to change some variables set in the constantly running goroutine.
What I have right now works only for the very first connection on a device, and (of course) it sends "sup dude?" to itself since it's the only channel in the array at the time. Then if you open a new tab (or even many), the message doesn't get sent to anyone at all! Strange?... Then when I close all the tabs (and my commented-out logic removes the device item from guestMap) and start up a new device session, still only the first connection gets its own message.
I already have a method for sending a message to all the other websockets on a device, but sending to a goroutine seems to be a little more tricky than I thought.
To answer my own question:
First, I've switched from a sync.Map to a normal map. Secondly, to make sure nothing reads or writes it at the same time, I've made a channel that every read/write operation on the map goes through. I've been trying my best to keep my data access and manipulation quick to execute so the channel doesn't get crowded too easily. Here's a small example of that:
package main

import (
    "fmt"
)

var (
    guestMap           = make(map[string]*guestStruct)
    guestMapActionChan = make(chan actionStruct)
)

type actionStruct struct {
    Action     func([]interface{}) []interface{}
    Params     []interface{}
    ReturnChan chan []interface{}
}

type guestStruct struct {
    Name string
    Numb int
}

func main() {
    // make chan listener
    go guestMapActionChanListener(guestMapActionChan)
    // some guest logs in...
    newGuest := guestStruct{Name: "Larry Josher", Numb: 1337}
    // add to the map
    addRetChan := make(chan []interface{})
    guestMapActionChan <- actionStruct{
        Action:     guestMapAdd,
        Params:     []interface{}{&newGuest},
        ReturnChan: addRetChan,
    }
    addReturned := <-addRetChan
    fmt.Println(addReturned)
    fmt.Println("Also, numb was changed by listener to:", newGuest.Numb)
    // Same kind of thing for removing, except (of course) there's
    // a lot more logic to a real-life application.
}

func guestMapActionChanListener(c chan actionStruct) {
    for {
        value := <-c
        returned := value.Action(value.Params)
        value.ReturnChan <- returned
        close(value.ReturnChan)
    }
}

func guestMapAdd(params []interface{}) []interface{} {
    // .. do some parameter verification checks
    theStruct := params[0].(*guestStruct)
    name := theStruct.Name
    theStruct.Numb = 75
    guestMap[name] = theStruct
    return []interface{}{"Added '" + name + "' to the guestMap"}
}
For communication between connections, I just have each socket loop hold onto its guestStruct, and more guestMapActionChan functions that take care of distributing data to other guests' guestStructs.
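For illustration, a hypothetical action in the same style that pushes a message to every guest — guestMapBroadcast and the Msgs field are made up here and are not part of the code above:
// Hypothetical: assumes guestStruct also has a buffered channel field,
// e.g. `Msgs chan string`, that each socket loop drains.
func guestMapBroadcast(params []interface{}) []interface{} {
    message := params[0].(string)
    for _, g := range guestMap {
        select {
        case g.Msgs <- message: // deliver if the guest's buffer has room
        default: // skip this guest if it isn't keeping up
        }
    }
    return []interface{}{"broadcast sent"}
}
It would be invoked through guestMapActionChan exactly like guestMapAdd, so the map is still only ever touched from the listener goroutine.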
Now, I'm not going to mark this as the correct answer unless I get some better suggestions as to how to do something like this the right way. But for now this is working and should guarantee no races for reading/writing to the map.
Edit: The correct approach should really have been to just use a sync.Mutex like I do in the (mostly) finished project GopherGameServer
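As a rough sketch of that mutex approach (same guestStruct as above, plus import "sync"; this is only an illustration, not code from GopherGameServer):
var (
    guestMapMux sync.Mutex
    guestMap    = make(map[string]*guestStruct)
)

func addGuest(g *guestStruct) {
    guestMapMux.Lock()
    defer guestMapMux.Unlock()
    guestMap[g.Name] = g
}

func getGuest(name string) (*guestStruct, bool) {
    guestMapMux.Lock()
    defer guestMapMux.Unlock()
    g, ok := guestMap[name]
    return g, ok
}
Every read or write of the map goes through a helper that holds the lock, which removes the need for the action channel and listener goroutine.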

grpc go: how to know on the server side when the client closes the connection

I am using grpc-go.
I have an RPC which looks roughly like this:
service MyService {
    // Operation 1
    rpc Operation1(OperationRequest) returns (OperationResponse) {
        option (google.api.http) = {
            post: "/apiver/myser/oper1"
            body: "*"
        };
    }
The client connects by using the grpc.Dial() method.
When a client connects, the server does some bookkeeping. When the client disconnects, that bookkeeping needs to be removed.
Is there any callback that can be registered which can be used to know that the client has closed the session?
Based on your code, it's a unary RPC call: the client connects to the server once, sends a request, and gets a response. The client will wait for the response until it times out.
In server-side streaming, you can detect the client disconnecting from the <-grpc.ServerStream.Context().Done() signal.
With the above, you can implement your own channel in a goroutine to build your logic. Use a select statement like:
select {
case <-srv.Context().Done():
    return
case res := <-<YOUR OWN CHANNEL, WITH A RECEIVED REQUEST OR YOUR RESPONSE>:
    ....
}
I provide some detailed code here
In client streaming, besides the above signal, you can check whether the server can still receive messages:
req, err := stream.Recv()
if err == io.EOF {
    break
} else if err != nil {
    return err
}
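Putting the server-streaming case together, here is a minimal sketch; the request, server, and stream type names are placeholders for whatever your generated code provides, and s.updates is a hypothetical channel of things to send:
// Hypothetical handler for a server-streaming RPC.
func (s *myServer) Operation1Stream(req *OperationRequest, stream MyService_Operation1StreamServer) error {
    for {
        select {
        case <-stream.Context().Done():
            // The client closed the connection or the RPC was cancelled:
            // remove your bookkeeping here.
            return stream.Context().Err()
        case update := <-s.updates: // your own channel of outgoing messages
            if err := stream.Send(update); err != nil {
                return err
            }
        }
    }
}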
Assuming that the server is implemented in go, there's an API on the *grpc.ClientConn that reports state changes in the connection.
func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool
https://godoc.org/google.golang.org/grpc#ClientConn.WaitForStateChange
These are the docs on each of the connectivity.State
https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md
If you need to expose a channel that you can listen to for the client closing the connection then you could do something like this:
func connectionOnState(ctx context.Context, conn *grpc.ClientConn, states ...connectivity.State) <-chan struct{} {
    done := make(chan struct{})
    go func() {
        // any return from this func will close the channel
        defer close(done)
        // continue checking for state change
        // until one of break states is found
        for {
            change := conn.WaitForStateChange(ctx, conn.GetState())
            if !change {
                // ctx is done, return
                // something upstream is cancelling
                return
            }
            currentState := conn.GetState()
            for _, s := range states {
                if currentState == s {
                    // matches one of the states passed
                    // return, closing the done channel
                    return
                }
            }
        }
    }()
    return done
}
If you only want to consider connections that are shutting down or shutdown, then you could call it like so:
// any receives from shutdownCh will mean the state Shutdown
shutdownCh := connectionOnState(ctx, conn, connectivity.Shutdown)
As per the GitHub issue (link), you can do it like this:
err = stream.Context().Err()
if err != nil {
    break
}

Gorilla websocket error: close 1007 Illegal UTF-8 Sequence

I'm trying to implement a websocket proxy server for GlassFish. If I try to connect more than one client, I get the error:
ReadMessage Failed: websocket: close 1007 Illegal UTF-8 Sequence.
I'm sure the GlassFish server is sending the right data, because the same server works properly with another proxy server implemented in node.js.
func GlassFishHandler(conn *websocket.Conn) {
    defer conn.Close()
    conn.SetReadDeadline(time.Now().Add(1000 * time.Second))
    conn.SetWriteDeadline(time.Now().Add(1000 * time.Second))
    fmt.Println("WS-GOLANG PROXY SERVER: Connected to GlassFish")
    for {
        messageType, reader, err := conn.NextReader()
        if err != nil {
            fmt.Println("ReadMessage Failed: ", err) // <- error here
        } else {
            message, err := ioutil.ReadAll(reader)
            if err == nil && messageType == websocket.TextMessage {
                var dat map[string]interface{}
                if err := json.Unmarshal(message, &dat); err != nil {
                    panic(err)
                }
                // get client destination id
                clientId := dat["target"].(string)
                fmt.Println("Msg from GlassFish for Client: ", dat)
                // pass through
                clients[clientId].WriteMessage(websocket.TextMessage, message)
            }
        }
    }
}
Summing up my comments as an answer:
When you are writing to the client, you are taking the clientId from the GlassFish message, fetching the client from a map, and then writing to it - basically clients[clientId].WriteMessage(...).
While your map access can be thread-safe, the write to the client is not, as this can be seen as:
// map access - can be safe if you're using a concurrent map
client := clients[clientId]
// writing to a client, not protected at all
client.WriteMessage(...)
So what's probably happening is that two separate goroutines are writing to the same client at the same time. You should protect your client from it by adding a mutex in the WriteMessage method implementation.
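For example, a minimal sketch of that mutex-protected write, assuming each entry in clients is a small struct wrapping the gorilla *websocket.Conn (the struct and field names are illustrative):
type client struct {
    conn *websocket.Conn
    mu   sync.Mutex
}

func (c *client) WriteMessage(messageType int, data []byte) error {
    // serialize all writes to the underlying connection
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.conn.WriteMessage(messageType, data)
}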
BTW actually instead of protecting this method with a mutex, a better, more "go-ish" approach would be to use a channel to write the message, and a goroutine per client that consumes from the channel and writes to the actual socket.
So in the client struct I'd do something like this:
type message struct {
    msgtype string
    msg     string
}

type client struct {
    ...
    msgqueue chan *message
}

func (c *client) WriteMessage(messageType, messageText string) {
    // I'm simplifying here, but you get the idea
    c.msgqueue <- &message{msgtype: messageType, msg: messageText}
}

func (c *client) writeLoop() {
    go func() {
        for msg := range c.msgqueue {
            c.actuallyWriteMessage(msg)
        }
    }()
}
And when creating a new client instance, just launch the write loop.
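For instance, a hypothetical constructor in that spirit (newClient, the conn field, and the buffer size are illustrative, not from the answer above):
func newClient(conn *websocket.Conn) *client {
    c := &client{
        conn:     conn,
        msgqueue: make(chan *message, 16), // small buffer so senders rarely block
    }
    c.writeLoop() // writeLoop starts its own goroutine and returns immediately
    return c
}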

Resources