My goal is to broadcast a message from a goroutine to multiple HTTP URL handlers. To do that, I am trying to register these HTTP URL handlers with the code below in main.go:
type webSocketHandler func(http.ResponseWriter, *http.Request)

type threadSafeSlice struct {
    sync.Mutex
    handlers []*webSocketHandler
}

var sliceOfHandlers threadSafeSlice

func (slice *threadSafeSlice) push(handle *webSocketHandler) { // register
    slice.Lock()
    defer slice.Unlock()
    slice.handlers = append(slice.handlers, handle)
}
where forwardMsgToClient() is the HTTP URL handler that needs to be registered, and the broadcastMessage() goroutine should broadcast messages to multiple forwardMsgToClient() handlers, as in the code below:
func main() {
    go broadcastMessage()
    http.HandleFunc("/websocket", forwardMsgToClient)
    http.ListenAndServe(":3000", nil)
}

func forwardMsgToClient(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    for {
        // Forward message to the client upon receiving a msg from publisher
    }
}
All the above code is in main.go
But the problem is that the forwardMsgToClient() goroutine gets spawned for the respective client only after the rw, e := l.Accept() call in ../go/src/net/http/server.go.
The reason to register (push()) the HTTP URL handler function (forwardMsgToClient()) is so that the broadcastMessage() goroutine knows how many channels to create for all the HTTP URL handlers, and can delete a channel when the handler function (forwardMsgToClient()) is unregistered.
I'm a bit nervous that we might need to modify /go/src/net/http/server.go to achieve this goal.
How can I register (push()) the HTTP URL handler function forwardMsgToClient() into sliceOfHandlers.handlers?
To broadcast a message to all connected websocket clients, do the following:
Add the connection to a collection on upgrade.
Remove the connection from the collection when the connection is closed.
Broadcast by iterating through the collection.
A simple approach is:
type Clients struct {
    sync.Mutex
    m map[*websocket.Conn]struct{}
}

var clients = Clients{m: map[*websocket.Conn]struct{}{}}

func (cs *Clients) add(c *websocket.Conn) {
    cs.Lock()
    cs.m[c] = struct{}{}
    cs.Unlock()
}

func (cs *Clients) remove(c *websocket.Conn) {
    cs.Lock()
    delete(cs.m, c)
    cs.Unlock()
}

func (cs *Clients) broadcast(message []byte) {
    cs.Lock()
    defer cs.Unlock()
    for c := range cs.m {
        c.WriteMessage(websocket.TextMessage, message)
    }
}
The handler adds and removes connections from the collection as follows:
func forwardMsgToClient(w http.ResponseWriter, r *http.Request) {
    c, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        // handle error
        return
    }
    defer c.Close()

    clients.add(c)
    defer clients.remove(c)

    // Read the connection as required by the package.
    for {
        if _, _, err := c.NextReader(); err != nil {
            break
        }
    }
}
To send a message to all connected clients, call clients.broadcast(message).
This simple approach is not production ready for a couple of reasons: it does not handle the error returned from WriteMessage, and the broadcast can block on a stuck client.
For a more robust solution, see the Gorilla chat example hub. The hub interposes a channel between the broadcaster and each connection, thus allowing the hub to broadcast without blocking. The go broadcastMessage() in the question corresponds to go hub.run() in the Gorilla example. The forwardMsgToClient handler in the question would create a *client and send it to the hub's register channel on upgrade, and send that *client to the hub's unregister channel on disconnect. The *client has a channel that is pumped to the connection.
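As a rough sketch only (not the Gorilla code itself), and assuming the gorilla/websocket package, the hub idea looks roughly like this; the register, unregister and broadcast channel names are chosen here just for illustration:

type client struct {
    conn *websocket.Conn
    send chan []byte // messages pumped to this connection by a write loop
}

var (
    register   = make(chan *client)
    unregister = make(chan *client)
    broadcast  = make(chan []byte)
)

// broadcastMessage owns the set of clients; no mutex is needed because
// the map is only touched from this goroutine.
func broadcastMessage() {
    clients := map[*client]bool{}
    for {
        select {
        case c := <-register:
            clients[c] = true
        case c := <-unregister:
            if clients[c] {
                delete(clients, c)
                close(c.send)
            }
        case msg := <-broadcast:
            for c := range clients {
                select {
                case c.send <- msg:
                default: // slow client: drop it instead of blocking the hub
                    delete(clients, c)
                    close(c.send)
                }
            }
        }
    }
}

In forwardMsgToClient you would then, after a successful Upgrade, send &client{conn: conn, send: make(chan []byte, 256)} to register, start a goroutine that ranges over send and writes each message to conn, and send the client to unregister when the read loop returns.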
Related
I have different Clients, each with its own goroutine that reads messages from a websocket.
Each Client's struct holds a Service struct that processes messages and updates the DB (Postgres) accordingly.
Once the processing is finished, it sends the output to a central Hub struct via a channel.
My question is: is the ProcessMessage func of Service run concurrently?
Meaning, are the message processing, DB interactions, etc. done concurrently in the example posted here, or does each processing block all the rest?
Main:
func main() {
    ...
    // delivery channel (service --> hub)
    delivery := make(chan []byte)

    // hub
    hub := connection.NewHub(delivery)
    go hub.Run()

    // service
    service := chat.NewService(delivery, storage, cache)
}
Then for each connecting client, I create a Client struct, passing the Service and the Hub to it.
Also, each Client's reading from a websocket is done in a goroutine:
func ServeWs(hub *Hub, service chat.Service, w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println(err)
        return
    }

    client := &Client{hub: hub, conn: conn, service: service, send: make(chan []byte, 256)}
    client.hub.register <- client

    go client.writePump()
    go client.readPump()
}
Client:
func (c *Client) readPump() {
    for {
        _, message, err := c.conn.ReadMessage()
        ...
        ...
        message = bytes.TrimSpace(bytes.Replace(message, newline, space, -1))
        ...
        c.service.ProcessMessage(message)
    }
}
Service:
func (s *service) ProcessMessage(incoming protocol.Incoming) {
    // Is this blocking other clients, or is it executed concurrently for each Client?

    // all the logic with postgres, redis, etc...
    result, ok := s.storage.InsertMessage(...)
    ...
    ...
    s.delivery <- result // sends the output to the Hub via the delivery channel
}
I'm using a channel to pass messages from an HTTP handler:
package server

import (
    "bytes"
    "errors"
    "io/ioutil"
    "log"
    "net/http"
)

type Server struct{}

func (s Server) Listen() chan interface{} {
    ch := make(chan interface{})
    http.HandleFunc("/", handle(ch))
    go http.ListenAndServe(":8080", nil)
    return ch
}

func handle(ch chan interface{}) func(http.ResponseWriter, *http.Request) {
    return func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        b, err := ioutil.ReadAll(r.Body)
        defer r.Body.Close()
        if err != nil {
            ch <- errors.New(string(500))
            return
        }
        w.Write([]byte("Hello World"))
        log.Print("about to pass to handler channel")
        ch <- bytes.NewBuffer(b)
        log.Print("passed to handler channel")
    }
}
When I make a request to the server running on port 8080, the thread blocks on this line:
ch <- bytes.NewBuffer(b)
Why is this happening? If you notice, I'm running the listener in a goroutine. I also figured that HTTP handlers happen in a separate thread. If I delete the above line, the thread becomes unblocked and the program works as expected. What am I doing wrong?
To clarify, I want to be able to pass the body of a POST request to a channel. Help.
UPDATE:
I'm reading from the channel on the main thread:
listenerChan := n.Listen()
go SendRequest("POST", "http://localhost:8080", []byte("hello"))
for listenedMsg := range listenerChan {
    log.Print("listened message>>>> ", listenedMsg)
}
But the thread still blocks on the same line. For clarification, there is nothing wrong with how I'm sending the request. If I remove the channel send line above, the thread doesn't block.
Because the channel is unbuffered, the send operation blocks until someone is ready to receive from it. Making the channel buffered will only defer the blocking, so you always need some reading goroutine.
Update to your update: the control flow of the program would go like this:
1. Server starts listening.
2. main sends the request and waits for the response.
3. Server receives the request and tries to write to the channel.
4. main reads from the channel.
Step 4 can happen only after step 2, which is blocked by step 3, which in turn is blocked because step 4 has not happened yet. A classic deadlock.
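As a rough illustration only: breaking any link in that cycle removes the deadlock. One arrangement, reusing the question's Server and SendRequest, is to drain the channel in its own goroutine before the request is made (shutdown handling omitted for brevity):

func main() {
    n := new(Server)
    listenerChan := n.Listen()

    // Step 4 is now always possible: a receiver is running before the request.
    go func() {
        for listenedMsg := range listenerChan {
            log.Print("listened message>>>> ", listenedMsg)
        }
    }()

    // The synchronous request can complete because the handler's channel
    // send always has a ready receiver.
    SendRequest("POST", "http://localhost:8080", []byte("hello"))
}

Your own update (running SendRequest in a goroutine while main ranges over the channel) breaks the same link from the other side, which is why the full example further below works.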
I think #bereal gave a good explanation about using an unbuffered or synchronous channel.
Another way to make things work is to make the channel buffered by changing the line that creates the channel to:
ch := make(chan interface{}, 1) // added the 1
This will prevent the function from being blocked.
I added the missing parts to your code and ran it; everything works well. I don't see any blocking. Here's the code:
package main

import (
    "bytes"
    "errors"
    "io/ioutil"
    "log"
    "net/http"
    "time"
)

type Server struct{}

func (s *Server) Listen() chan interface{} {
    ch := make(chan interface{})
    http.HandleFunc("/", handle(ch))
    go http.ListenAndServe(":8080", nil)
    return ch
}

func handle(ch chan interface{}) func(http.ResponseWriter, *http.Request) {
    return func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        b, err := ioutil.ReadAll(r.Body)
        defer r.Body.Close()
        if err != nil {
            ch <- errors.New(string(500))
            return
        }
        w.Write([]byte("Hello World"))
        log.Print("about to pass to handler channel")
        ch <- bytes.NewBuffer(b)
        log.Print("passed to handler channel")
    }
}

// SendRequest sends a request
func SendRequest(method string, url string, data []byte) {
    tr := &http.Transport{
        MaxIdleConns:       10,
        IdleConnTimeout:    30 * time.Second,
        DisableCompression: true,
    }
    client := &http.Client{Transport: tr}
    reader := bytes.NewReader(data)
    req, err := http.NewRequest(method, url, reader)
    if err != nil {
        panic(err)
    }
    client.Do(req)
}

func main() {
    n := new(Server)
    listenerChan := n.Listen()
    go SendRequest("POST", "http://localhost:8080", []byte("hello"))
    for listenedMsg := range listenerChan {
        log.Print("listened message>>>> ", listenedMsg)
    }
}
And the output is:
2018/06/28 17:22:10 about to pass to handler channel
2018/06/28 17:22:10 passed to handler channel
2018/06/28 17:22:10 listened message>>>> hello
I want to create one-to-one chat in the Revel framework, but it gives an error. At first I worked through the Revel chat demo, but refreshing the page did not work, so I tried this method, and I don't know how to handle a single chat.
Here is the error:
app server.go:2848: http: panic serving 127.0.0.1:50420: interface conversion: interface is nil, not io.Writer goroutine 166 [running]: net/http.(*conn).serve.func1(0xc4201d03c0)
My Go code below is where I handle the ws route. Single-user chat needs a DB connection too; I'm using Postgres for it.
package main

import (
    "log"
    "net/http"

    "github.com/gorilla/websocket"
)

var clients = make(map[*websocket.Conn]bool) // connected clients
var broadcast = make(chan Message)           // broadcast channel

// Configure the upgrader
var upgrader = websocket.Upgrader{
    CheckOrigin: func(r *http.Request) bool {
        return true
    },
}

// Define our message object
type Message struct {
    Email    string `json:"email"`
    Username string `json:"username"`
    Message  string `json:"message"`
    Created  string `json:"created"`
}

func main() {
    // Create a simple file server
    fs := http.FileServer(http.Dir("public"))
    http.Handle("/", fs)

    // Configure websocket route
    http.HandleFunc("/ws", handleConnections)

    // Start listening for incoming chat messages
    go handleMessages()

    // Start the server on localhost port 8090 and log any errors
    log.Println("http server started on :8090")
    err := http.ListenAndServe(":8090", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}

func handleConnections(w http.ResponseWriter, r *http.Request) {
    // Upgrade initial GET request to a websocket
    ws, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Fatal(err)
    }
    // Make sure we close the connection when the function returns
    defer ws.Close()

    // Register our new client
    clients[ws] = true

    for {
        var msg Message
        // Read in a new message as JSON and map it to a Message object
        err := ws.ReadJSON(&msg)
        if err != nil {
            log.Printf("error: %v", err)
            delete(clients, ws)
            break
        }
        // Send the newly received message to the broadcast channel
        broadcast <- msg
    }
}

func handleMessages() {
    for {
        // Grab the next message from the broadcast channel
        msg := <-broadcast
        // Send it out to every client that is currently connected
        for client := range clients {
            err := client.WriteJSON(msg)
            if err != nil {
                log.Printf("error: %v", err)
                client.Close()
                delete(clients, client)
            }
        }
    }
}
I think you are using the gorilla/websocket API incorrectly. You copied the echo example which, being a basic demo, is expected to handle a single ws connection only. Start with the chat example instead. In particular, pay attention to the fact that serveWs is a non-blocking call while your handleConnections is blocking, i.e. it never returns. Take a look here for a full-featured example of gorilla/websocket API use:
https://github.com/tinode/chat/blob/master/server/wshandler.go
As correctly pointed out by Cerise L, you most certainly have a race on your clients map, although I think it's unlikely to produce a panic. I think the most likely source of the panic is a call to Upgrade on a closed HTTP connection. It's impossible to say exactly because you did not post the full output of the panic.
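If you want to keep the flat map for now rather than adopting the hub, a minimal sketch of removing the data race is to guard every access to clients with a mutex (this assumes a sync import; the helper names addClient, removeClient and broadcastTo are made up for illustration):

var (
    clientsMu sync.Mutex
    clients   = make(map[*websocket.Conn]bool) // connected clients
)

func addClient(ws *websocket.Conn) {
    clientsMu.Lock()
    clients[ws] = true
    clientsMu.Unlock()
}

func removeClient(ws *websocket.Conn) {
    clientsMu.Lock()
    delete(clients, ws)
    clientsMu.Unlock()
}

func broadcastTo(msg Message) {
    clientsMu.Lock()
    defer clientsMu.Unlock()
    for client := range clients {
        if err := client.WriteJSON(msg); err != nil {
            log.Printf("error: %v", err)
            client.Close()
            delete(clients, client)
        }
    }
}

handleConnections would then call addClient(ws) and removeClient(ws) instead of touching the map directly, and handleMessages would call broadcastTo(msg). The chat example's hub avoids the mutex entirely by confining the map to a single goroutine, which is the more idiomatic fix.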
I've created a simple websocket that publishes a JSON stream. It works fine most of the time, except for a few cases where I think that, while looping through the clients to send them a message, it gets hung up on a client that has disconnected abnormally. What measure can I add to this code to mitigate this?
Client.go
import (
    "github.com/gorilla/websocket"
)

type client struct {
    socket *websocket.Conn
    send   chan *Message
}

func (c *client) read() {
    defer c.socket.Close()
    for {
        _, _, err := c.socket.ReadMessage()
        if err != nil {
            log.Info("Websocket: %s", err)
            break
        }
    }
}

func (c *client) write() {
    defer c.socket.Close()
    for msg := range c.send {
        err := c.socket.WriteJSON(msg)
        if err != nil {
            break
        }
    }
}
Stream.go
import (
    "net/http"

    "github.com/gorilla/websocket"
)

const (
    socketBufferSize  = 1024
    messageBufferSize = 256
)

var upgrader = &websocket.Upgrader{
    ReadBufferSize:  socketBufferSize,
    WriteBufferSize: socketBufferSize,
}

type Stream struct {
    Send    chan *Message
    join    chan *client
    leave   chan *client
    clients map[*client]bool
}

func (s *Stream) Run() {
    for {
        select {
        case client := <-s.join: // joining
            s.clients[client] = true
        case client := <-s.leave: // leaving
            delete(s.clients, client)
            close(client.send)
        case msg := <-s.Send: // send message to all clients
            for client := range s.clients {
                client.send <- msg
            }
        }
    }
}

func (s *Stream) ServeHTTP(w http.ResponseWriter, res *http.Request) {
    socket, err := upgrader.Upgrade(w, res, nil)
    if err != nil {
        log.Error(err)
        return
    }
    defer func() {
        socket.Close()
    }()

    client := &client{
        socket: socket,
        send:   make(chan *Message, messageBufferSize),
    }

    s.join <- client
    defer func() { s.leave <- client }()

    go client.write()
    client.read()
}
See the Gorilla Chat Application for an example of how to avoid blocking on a client.
The key parts are:
Use a buffered channel for sending to the client. Your application is already doing this.
Send to the client using select/default to avoid blocking. Assume that the client is blocked on write when the client cannot immediately receive a message. Close the client's channel in this situation to cause the client's write loop to exit.
Write with a deadline, so a dead peer cannot stall the write loop. A sketch of the last two points follows below.
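As a rough sketch of those two points, reusing the question's Stream and client types and assuming a time import alongside gorilla/websocket (this is not the exact Gorilla chat code):

func (s *Stream) Run() {
    for {
        select {
        case client := <-s.join:
            s.clients[client] = true
        case client := <-s.leave:
            if _, ok := s.clients[client]; ok {
                delete(s.clients, client)
                close(client.send)
            }
        case msg := <-s.Send:
            for client := range s.clients {
                select {
                case client.send <- msg: // buffered send succeeds immediately
                default:
                    // Buffer full: assume the client is stuck and drop it.
                    delete(s.clients, client)
                    close(client.send)
                }
            }
        }
    }
}

func (c *client) write() {
    defer c.socket.Close()
    for msg := range c.send {
        // Write with a deadline so a dead peer cannot stall this loop forever.
        c.socket.SetWriteDeadline(time.Now().Add(10 * time.Second))
        if err := c.socket.WriteJSON(msg); err != nil {
            break
        }
    }
}

Closing client.send ends the range loop in write, which closes the socket; that in turn makes read return and the handler unregister the client, so a stuck client is fully torn down.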
I am creating a streaming API similar to the Twitter firehose/streaming API.
As far as I can gather, this is based on HTTP connections that are kept open, and when the backend gets data it writes it to the chunked HTTP connection. It seems that any code I write closes the HTTP connection as soon as anything connects.
Is there a way to keep this open at all?
func startHTTP(pathPrefix string) {
    log.Println("Starting HTTPS Server")
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Wait here until a write happens to w
        // Or we timeout, we can reset this timeout after each write
    })
    log.Print("HTTPS listening on :5556")
    log.Fatal(http.ListenAndServeTLS(":5556", pathPrefix+".crt", pathPrefix+".key", nil))
}
When you want to send an HTTP response to the client not immediately but after some event, it's called long polling.
Here's a simple example of long polling with request cancellation on client disconnect:
package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

func longOperation(ctx context.Context, ch chan<- string) {
    // Simulate long operation.
    // Change it to more than 10 seconds to get server timeout.
    select {
    case <-time.After(time.Second * 3):
        ch <- "Successful result."
    case <-ctx.Done():
        close(ch)
    }
}

func handler(w http.ResponseWriter, _ *http.Request) {
    notifier, ok := w.(http.CloseNotifier)
    if !ok {
        panic("Expected http.ResponseWriter to be an http.CloseNotifier")
    }

    ctx, cancel := context.WithCancel(context.Background())
    ch := make(chan string)
    go longOperation(ctx, ch)

    select {
    case result := <-ch:
        fmt.Fprint(w, result)
        cancel()
        return
    case <-time.After(time.Second * 10):
        fmt.Fprint(w, "Server is busy.")
    case <-notifier.CloseNotify():
        fmt.Println("Client has disconnected.")
    }

    cancel()
    <-ch
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe("localhost:8080", nil)
}
URLs:
anonymous struct and empty struct.
Send a chunked HTTP response from a Go server.
Go Concurrency Patterns: Context.
Gists:
Golang long polling example.
Golang long polling example with request cancellation.
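The question itself asked for a firehose-style connection that stays open while the server pushes data as it arrives, rather than a single delayed response. As a rough complement to the "Send a chunked HTTP response from a Go server" link above, here is a minimal sketch using http.Flusher; the ticker is just a stand-in for a real data source:

package main

import (
    "fmt"
    "net/http"
    "time"
)

func streamHandler(w http.ResponseWriter, r *http.Request) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }

    // Stand-in for a real data source (e.g. a channel fed by the backend).
    events := time.NewTicker(time.Second)
    defer events.Stop()

    for {
        select {
        case t := <-events.C:
            fmt.Fprintf(w, "%s\n", t.Format(time.RFC3339))
            flusher.Flush() // push this chunk to the client immediately
        case <-r.Context().Done():
            return // client disconnected; the handler ends and the connection closes
        }
    }
}

func main() {
    http.HandleFunc("/stream", streamHandler)
    http.ListenAndServe(":5556", nil)
}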