Stream events to potentially slow clients - go

Given an (infinite) stream of events, a server program must stream the events to several clients. For example:
for _, event := range events {
    for _, client := range clients {
        client.write(event) // blocking operation
    }
}
However, if a client is slow, it can throttle the other clients. Therefore, for each client, we can add a channel (and a client-specific goroutine consuming that channel):
for _, event := range events {
    for _, client := range clients {
        client.writer <- event // writer channel is consumed by a per-client goroutine
    }
}
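A minimal sketch of what that per-client consumer might look like; the Event and Client types here are placeholders assumed for illustration, not part of the original code:
type Event string

type Client struct {
    writer chan Event // buffered, e.g. make(chan Event, 64)
}

// write stands in for the slow, blocking network write from the first snippet.
func (c *Client) write(e Event) {
    // ... blocking I/O ...
}

// consume drains the client's channel in its own goroutine, so a slow
// client no longer stalls the broadcast loop above.
func (c *Client) consume() {
    for event := range c.writer {
        c.write(event)
    }
}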
As long as the buffered channel is not full, this works. However, if the channel gets full, it'll block again. I can think of the following options:
Drop the event on channel full, close the channel, force the client to reconnect (sketched below). This requires re-negotiating the stream position, or a complete re-stream. The forced reset will potentially waste resources, making the slow client even slower.
Add a channel that goes the other way, making the client signal the server that it is ready to receive. This requires the server to keep track of stream positions for each client (perhaps that is not too bad?). It seems to involve more synchronization than necessary in the nominal (not-slow) case, increasing latency.
Something else? What's the idiomatic go way to do this?
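For reference, the first option boils down to a non-blocking send; a hedged sketch, reusing the placeholder Client type from above (removeClient is a hypothetical cleanup callback):
// Sketch of option 1: non-blocking send that drops a client whose buffer is full.
func trySend(c *Client, event Event, removeClient func(*Client)) {
    select {
    case c.writer <- event:
        // delivered without blocking
    default:
        // buffer full: close the channel so the consumer goroutine exits,
        // forcing the client to reconnect and re-negotiate its position
        close(c.writer)
        removeClient(c)
    }
}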
EDIT: Here's a simple, good-looking, but broken solution: track the stream position in the client object, and send the outstanding events:
for _, event := range events {
    persisted_events = append(persisted_events, event)
    for _, client := range clients {
        for _, event := range persisted_events[client.last_event:] {
            select {
            case client.writer <- event:
                client.last_event++
            default:
                break // note: this only exits the select, not the inner loop
            }
        }
    }
}
This allows slow clients to catch up without starving the others. It does not require a disconnect. However, it is also broken: if the stream of events stops while a client is catching up, it is possible that the slow client gets stuck, as the main loop is waiting for new events. Adding a ticker that triggers the loop periodically is not efficient. Requiring the client to notify the main loop that it is now idle is complex, and potentially doubles the number of events.

The best setup I found so far:
for {
    select {
    case event, ok := <-events:
        if !ok {
            return
        }
        persisted_events = append(persisted_events, event)
    case <-ticker.C:
        // wake up periodically so catching-up clients are not stuck
    }
    for _, client := range clients {
        select {
        case client.writer <- persisted_events[client.num_events:]:
            client.num_events = len(persisted_events)
        default:
        }
    }
}
This is a slight variation of the broken example in the question. There are two key differences:
The writer channel takes a slice of events (instead of a single event)
There's a ticker that wakes the loop periodically, even if there are no new events
Combined, these achieve the following: even if a slow client misses the last event(s) before a long silent period, its final catch-up is delayed by at most one ticker period, rather than one ticker period per missed event. Therefore, reducing the ticker period can cap the tail latency. Not ideal in terms of performance, but simple (in terms of concurrency).
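For completeness, the consuming side of this final design could look like the sketch below. It reuses the placeholder Client type from earlier and assumes the channel carries []Event with a buffer of one, so the main loop's non-blocking send succeeds whenever the client is idle:
// Sketch: the per-client goroutine now receives a batch (slice) of
// outstanding events instead of single events.
func (c *Client) consumeBatches(batches <-chan []Event) {
    for batch := range batches {
        for _, event := range batch {
            c.write(event) // slow, blocking write
        }
    }
}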

Related

When is it necessary to drain timer channel

I have a loop that reads from a channel (indirectly receiving from a socket) but must also send a ping regularly if there is no other traffic. So I create a time.Timer on every loop iteration like this:
var timer *time.Timer
for {
    timer = time.NewTimer(pingFrequency)
    select {
    case message := <-ch:
        ....
    case <-timer.C:
        ping()
    }
    _ = timer.Stop()
}
The docs for the Stop method imply you should drain the channel if it returns false, but is that really necessary, since the previous value of timer is no longer used and will be released by the GC? I.e. even if the old timer.C has an unread value, it does not matter, since nothing is using the channel any longer.
The documentation for t.Stop() says:
To ensure the channel is empty after a call to Stop,
check the return value and drain the channel.
My question is: Why would you need to ensure the channel is empty since nothing is using it and it will be freed by the GC?
Please note that I am aware of the proper way to stop a timer in the general case.
if !timer.Stop() {
    select {
    case <-timer.C:
    default:
    }
}
This specific question has not been answered on SO or elsewhere (apart from the comment from @JimB below).

Go Concurrency Circular Logic

I'm just getting into concurrency in Go and trying to create a dispatch goroutine that will send jobs to a worker pool listening on the jobchan channel. If a message comes into my dispatch function via the dispatchchan channel and my other goroutines are busy, the message is appended onto the stack slice in the dispatcher, and the dispatcher will try to send again later when a worker becomes available and/or no more messages are received on the dispatchchan. This is because the dispatchchan and the jobchan are unbuffered, and the goroutines the workers are running will append other messages to the dispatcher up to a certain point; I don't want the workers blocked waiting on the dispatcher, creating a deadlock. Here's the dispatcher code I've come up with so far:
func dispatch() {
    var stack []string
    acount := 0
    for {
        select {
        case d := <-dispatchchan:
            stack = append(stack, d)
        case c := <-mw:
            acount = acount + c
        case jobchan <- stack[0]:
            if len(stack) > 1 {
                stack[0] = stack[len(stack)-1]
                stack = stack[:len(stack)-1]
            } else {
                stack = nil
            }
        default:
            if acount == 0 && len(stack) == 0 {
                close(jobchan)
                close(dispatchchan)
                close(mw)
                wg.Done()
                return
            }
        }
    }
}
Complete example at https://play.golang.wiki/p/X6kXVNUn5N7
The mw channel is a buffered channel the same length as the number of worker goroutines. It acts as a semaphore for the worker pool. If a worker routine is doing [m]eaningful [w]ork it throws int 1 on the mw channel, and when it finishes its work and goes back into the for loop listening to the jobchan it throws int -1 on the mw. This way the dispatcher knows if there's any work being done by the worker pool, or if the pool is idle. If the pool is idle and there are no more messages on the stack, then the dispatcher closes the channels and returns control to the main func.
This is all good, but the issue I have is that the stack itself could be zero-length, so in the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out-of-bounds error. What I'm trying to figure out is how to ensure that when I hit that case, stack[0] actually has a value in it. I don't want that case to send an empty string to the jobchan.
Any help is greatly appreciated. If there's a more idiomatic concurrency pattern I should consider, I'd love to hear about it. I'm not 100% sold on this solution but this is the farthest I've gotten so far.
This is all good, but the issue I have is that the stack itself could be zero-length, so in the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out-of-bounds error.
I can't reproduce it with your playground link, but it's believable, because at least one worker goroutine might have been ready to receive on that channel.
My output has been Msgcnt: 0, which is also easily explained, because a worker goroutine might not have been ready to receive on jobchan when dispatch() runs its select. The order of these operations is not defined.
trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel
A channel needs no dispatcher. A channel is the dispatcher.
If a message comes into my dispatch function via the dispatchchan channel and my other go routines are busy, the message is [...] will [...] send again later when a worker becomes available, [...] or no more messages are received on the dispatchchan.
With a few creative edits, it was easy to turn that into something close to the definition of a buffered channel. It can be read from immediately, or it can take up to some "limit" of messages that can't be immediately dispatched. You do define limit, though it's not used elsewhere within your code.
In any function, defining a variable you don't read will result in a compile-time error like limit declared but not used. This stricture improves code quality and helps identify typos. But at package scope, you've gotten away with defining the unused limit as a "global" and thus avoided a useful error - you haven't limited anything.
Don't use globals. Use passed parameters to define scope, because the definition of scope is tantamount to functional concurrency as expressed with the go keyword. Pass the relevant channels defined in local scope to functions defined at package scope so that you can easily track their relationships. And use directional channels to enforce the producer/consumer relationship between your functions. More on this later.
Going back to "limit", it makes sense to limit the quantity of jobs you're queueing because all resources are limited, and accepting more messages than you have any expectation of processing requires more durable storage than process memory provides. If you don't feel obligated to fulfill those requests no matter what, don't accept "too many" of them in the first place.
So then, what function do dispatchchan and dispatch() serve? To store a limited number of pending requests, if any, before they can be processed, and then to send them to the next available worker? That's exactly what a buffered channel is for.
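In sketch form (the names here are mine, not from your code, and it assumes a "sync" import), the entire dispatcher collapses to a buffered channel plus a WaitGroup:
// runPool shows the shape of the idea: the buffered channel is the queue
// and the dispatcher at once. process is a stand-in for the real work.
func runPool(jobs []string, limit, workers int, process func(string)) {
    jobchan := make(chan string, limit) // holds up to `limit` pending jobs

    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobchan {
                process(job) // each worker receives straight from the channel
            }
        }()
    }

    for _, job := range jobs {
        jobchan <- job // blocks only once `limit` jobs are already queued
    }
    close(jobchan) // the producer alone closes; no circular logic
    wg.Wait()
}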
Circular Logic
Who "knows" when your program is done? main() provides the initial input, but you close all 3 channels in `dispatch():
close(jobchan)
close(dispatchchan)
close(mw)
Your workers write to their own job queue so only when the workers are done writing to it can the incoming job queue be closed. However, individual workers also don't know when to close the jobs queue because other workers are writing to it. Nobody knows when your algorithm is done. There's your circular logic.
The mw channel is a buffered channel the same length as the number of worker go routines. It acts as a semaphore for the worker pool.
There's a race condition here. Consider the case where all n workers have just received the last n jobs. They've each read from jobchan and they're checking the value of ok. dispatcher proceeds to run its select. Nobody is writing to dispatchchan or reading from jobchan right now, so the default case is immediately matched. len(stack) is 0 and there's no current job, so dispatcher closes all channels, including mw. At some point thereafter, a worker tries to write to a closed channel and panics.
So finally I'm ready to provide some code, but I have one more problem: I don't have a clear problem statement to write code around.
I'm just getting into concurrency in Go and trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel.
Channels between goroutines are like the teeth that synchronize gears. But to what end do the gears turn? You're not trying to keep time, nor construct a wind-up toy. Your gears could be made to turn, but what would success look like? Their turning?
Let's try to define a more specific use case for channels: given an arbitrarily long set of durations coming in as strings on standard input, sleep that many seconds in one of n workers. So that we actually have a result to return, we'll say each worker will return the start and end time the duration was run for.
So that it can run in the playground, I'll simulate standard input with a hard-coded byte buffer.
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os"
    "strings"
    "sync"
    "time"
)

type SleepResult struct {
    worker_id int
    duration  time.Duration
    start     time.Time
    end       time.Time
}

func main() {
    var num_workers = 2

    workchan := make(chan time.Duration)
    resultschan := make(chan SleepResult)

    var wg sync.WaitGroup
    var resultswg sync.WaitGroup

    resultswg.Add(1)
    go results(&resultswg, resultschan)

    for i := 0; i < num_workers; i++ {
        wg.Add(1)
        go worker(i, &wg, workchan, resultschan)
    }

    // playground doesn't have stdin
    var input = bytes.NewBufferString(
        strings.Join([]string{
            "3ms",
            "1 seconds",
            "3600ms",
            "300 ms",
            "5s",
            "0.05min"}, "\n") + "\n")

    var scanner = bufio.NewScanner(input)
    for scanner.Scan() {
        text := scanner.Text()
        if dur, err := time.ParseDuration(text); err != nil {
            fmt.Fprintln(os.Stderr, "Invalid duration", text)
        } else {
            workchan <- dur
        }
    }

    close(workchan) // we know when our inputs are done
    wg.Wait()       // and when our jobs are done
    close(resultschan)
    resultswg.Wait()
}

func results(wg *sync.WaitGroup, resultschan <-chan SleepResult) {
    for res := range resultschan {
        fmt.Printf("Worker %d: %s : %s => %s\n",
            res.worker_id, res.duration,
            res.start.Format(time.RFC3339Nano), res.end.Format(time.RFC3339Nano))
    }
    wg.Done()
}

func worker(id int, wg *sync.WaitGroup, jobchan <-chan time.Duration, resultschan chan<- SleepResult) {
    var res = SleepResult{worker_id: id}
    for dur := range jobchan {
        res.duration = dur
        res.start = time.Now()
        time.Sleep(res.duration)
        res.end = time.Now()
        resultschan <- res
    }
    wg.Done()
}
Here I use 2 wait groups, one for the workers, one for the results. This makes sure I'm done writing all the results before main() ends. I keep my functions simple by having each function do exactly one thing at a time: main reads inputs, parses durations from them, and sends them off to the next worker. The results function collects results and prints them to standard output. The worker does the sleeping, reading from jobchan and writing to resultschan.
workchan can be buffered (or not, as in this case); it doesn't matter because the input will be read at the rate it can be processed. We can buffer as much input as we want, but we can't buffer an infinite amount. I've set channel sizes as big as 1e6 - but a million is a lot less than infinite. For my use case, I don't need to do any buffering at all.
main knows when the input is done and can close workchan. main also knows when jobs are done (wg.Wait()) and can close the results channel. Closing these channels is an important signal to the worker and results goroutines - they can distinguish between a channel that is empty and a channel that is guaranteed not to have any new additions.
for job := range jobchan {...} is shorthand for your more verbose:
for {
    job, ok := <-jobchan
    if !ok {
        wg.Done()
        return
    }
    ...
}
Note that this code creates 2 workers, but it could create 20 or 2000, or even 1. The program functions regardless of how many workers are in the pool. It can handle any volume of input (though interminable input of course leads to an interminable program). It does not create a cyclic loop of output to input. If your use case requires jobs to create more jobs, that's a more challenging scenario that can typically be avoided with careful planning.
I hope this gives you some good ideas about how you can better use concurrency in your Go applications.
https://play.golang.wiki/p/cZuI9YXypxI

Go reset a timer.NewTimer within select loop

I have a scenario in which I'm processing events on a channel, and one of those events is a heartbeat which needs to occur within a certain timeframe. Events which are not heartbeats will continue consuming the timer, however whenever the heartbeat is received I want to reset the timer. The obvious way to do this would be by using a time.NewTimer.
For example:
func main() {
    to := time.NewTimer(3200 * time.Millisecond)
    for {
        select {
        case event, ok := <-c: // c is the event channel, defined elsewhere
            if !ok {
                return
            } else if event.Msg == "heartbeat" {
                to.Reset(3200 * time.Millisecond)
            }
        case <-to.C: // remediation path
            fmt.Println("do some stuff ...")
            return
        }
    }
}
Note that a time.Ticker won't work here as the remediation should only be triggered if the heartbeat hasn't been received, not every time.
The above solution works in the handful of low volume tests I've tried it on, however I came across a Github issue indicating that resetting a Timer which has not fired is a no-no. Additionally the documentation states:
Reset should be invoked only on stopped or expired timers with drained channels. If a program has already received a value from t.C, the timer is known to have expired and the channel drained, so t.Reset can be used directly. If a program has not yet received a value from t.C, however, the timer must be stopped and—if Stop reports that the timer expired before being stopped—the channel explicitly drained:
if !t.Stop() {
    <-t.C
}
t.Reset(d)
This gives me pause, as it seems to describe exactly what I'm attempting to do. I'm resetting the Timer whenever the heartbeat is received, prior to it having fired. I'm not experienced enough with Go yet to digest the whole post, but it certainly seems like I may be headed down a dangerous path.
One other solution I thought of is to simply replace the Timer with a new one whenever the heartbeat occurs, e.g.:
} else if event.Msg == "heartbeat" {
    to = time.NewTimer(3200 * time.Millisecond)
}
At first I was worried that the rebinding to = time.NewTimer(3200 * time.Millisecond) wouldn't be visible within the select:
For all the cases in the statement, the channel operands of receive operations and the channel and right-hand-side expressions of send statements are evaluated exactly once, in source order, upon entering the "select" statement. The result is a set of channels to receive from or send to, and the corresponding values to send.
But in this particular case since we are inside a loop, I would expect that upon each iteration we re-enter select and therefore the new binding should be visible. Is that a fair assumption?
I realize there are similar questions out there, and I've tried to read the relevant posts/documentation, but I am new to Go and just want to be sure I'm understanding things correctly here.
So my questions are:
Is my use of timer.Reset() unsafe, or are the cases mentioned in the Github issue highlighting other problems which are not applicable here? Is the explanation in the docs cryptic or do I just need more experience with Go?
If it is unsafe, is my second proposed solution acceptable (rebinding the timer on each iteration)?
ADDENDUM
Upon further reading, most of the pitfalls outlined in the issues are describing scenarios in which the timer has already fired (placing a result on the channel), and subsequent to that firing some other process attempts to Reset it. For this narrow case, I understand the need to test with !t.Stop() since a false return of Stop would indicate the timer has already fired, and as such must be drained prior to calling Reset.
What I still do not understand, is why it is necessary to call t.Stop() prior to t.Reset(), when the Timer has yet to fire. None of the examples go into that as far as I can tell.
What I still do not understand, is why it is necessary to call t.Stop() prior to t.Reset(), when the Timer has yet to fire.
The "when the Timer has yet to fire" bit is critical here. The timer fires within a separate go routine (part of the runtime) and this can happen at any time. You have no way of knowing whether the timer has fired at the time you call to.Reset(3200 * time.Millisecond) (it may even fire while that function is running!).
Here is an example that demonstrates this and is somewhat similar to what you are attempting (based on this):
func main() {
    eventC := make(chan struct{}, 1)
    go keepaliveLoop(eventC)

    // Reset the timer 1000 (approx) times; once every millisecond (approx).
    // This should prevent the timer from firing (because that only happens after 2 ms)
    for i := 0; i < 1000; i++ {
        time.Sleep(time.Millisecond)

        // Don't block if there is already a reset request
        select {
        case eventC <- struct{}{}:
        default:
        }
    }
}

func keepaliveLoop(eventC chan struct{}) {
    to := time.NewTimer(2 * time.Millisecond)
    for {
        select {
        case <-eventC:
            // if event.Msg == "heartbeat"...
            time.Sleep(3 * time.Millisecond) // simulate reset work (delay could be partly due to whatever is triggering the reset)
            to.Reset(2 * time.Millisecond)
        case <-to.C:
            panic("this should never happen")
        }
    }
}
Try it in the playground.
This may appear contrived due to the time.Sleep(3 * time.Millisecond), but that is just included to consistently demonstrate the issue. Your code may work 99.9% of the time, but there is always the possibility that both the event and timer channels will fire before the select is run (in which case one of the ready cases is chosen at random) or while the code in the case event, ok := <-c: block is running (including while Reset() is in progress). The result of this happening would be unexpected calls of the remediate code (which may not be a big issue).
Fortunately solving the issue is relatively easy (following the advice in the documentation):
time.Sleep(3 * time.Millisecond) // simulate reset work (delay could be partly due to whatever is triggering the reset)
if !to.Stop() {
    <-to.C
}
to.Reset(2 * time.Millisecond)
Try this in the playground.
This works because to.Stop returns "true if the call stops the timer, false if the timer has already expired or been stopped". Note that things get a bit more complicated if the timer is used in multiple goroutines: "This cannot be done concurrent to other receives from the Timer's channel or other calls to the Timer's Stop method". But this is not the case in your use-case.
Is my use of timer.Reset() unsafe, or are the cases mentioned in the Github issue highlighting other problems which are not applicable here?
Yes - it is unsafe. However the impact is fairly low. The event arriving and the timer triggering would need to happen almost concurrently and, in that case, running the remediate code might not be a big issue. Note that the fix is fairly simple (as per the docs).
If it is unsafe, is my second proposed solution acceptable (rebinding the timer on each iteration).
Your second proposed solution also works (but note that the garbage collector cannot free the timer until after it has fired, or been stopped, which may cause issues if you are creating timers rapidly).
Note: Re the suggestion from @JotaSantos
Another thing that could be done is to add a select when draining <-to.C (on the Stop "if") with a default clause. That would prevent the pause.
See this comment for details of why this may not be a good approach (it's also unnecessary in your situation).
I've faced a similar issue. After reading a lot of information, I came up with a solution that goes along these lines:
package main

import (
    "fmt"
    "time"
)

func main() {
    const timeout = 2 * time.Second

    // Prepare a timer that is stopped and ready to be reset.
    // Stop will never realistically return false here, because the
    // timer was just created and cannot have fired yet. Thus there's
    // no need to drain timer.C.
    timer := time.NewTimer(timeout)
    timer.Stop()

    // Make sure to stop the timer when we return.
    defer timer.Stop()

    // This variable is needed because we need to track if we can safely reset the timer
    // in a loop. Calling timer.Stop() will return false on every iteration, but we can only
    // drain the timer.C once, otherwise it will deadlock.
    var timerSet bool

    c := make(chan time.Time)

    // Simulate events that come in every second
    // and every 5th event delays so that timer can fire.
    go func() {
        var i int
        ticker := time.NewTicker(1 * time.Second)
        defer ticker.Stop()
        for t := range ticker.C {
            i++
            if i%5 == 0 {
                fmt.Println("Sleeping")
                time.Sleep(3 * time.Second)
            }
            c <- t
            if i == 20 {
                break
            }
        }
        close(c)
    }()

    for {
        select {
        case t, ok := <-c:
            if !ok {
                fmt.Println("Closed channel")
                return
            }
            fmt.Println("Got event", t, timerSet)

            // We got an event, and timer was already set.
            // We need to stop the timer and drain the channel if needed,
            // so that we can safely reset it later.
            if timerSet {
                if !timer.Stop() {
                    <-timer.C
                }
                timerSet = false
            }

            // If timer was not set, or it was stopped before, it's safe to reset it.
            if !timerSet {
                timerSet = true
                timer.Reset(timeout)
            }
        case remediate := <-timer.C:
            fmt.Println("Timeout", remediate)
            // It's important to store that timer is not set anymore.
            timerSet = false
        }
    }
}
Link to playground: https://play.golang.org/p/0QlujZngEGg

Is there any way to prevent default golang program finish

I have a server working with websocket connections and a database. Some users can connect via sockets, so I need to increment their "online" count in the db, and at the moment of their disconnection I also decrement their "online" field in the db. But in case the server breaks down, I keep a local replica map[string]int of users online. So I need to postpone the server shutdown until it completes a database request that decrements all users' "online" counts according to my replica variable, because in that case the socket connections don't send the usual "close" event.
I have found a package github.com/xlab/closer that handles some system calls and can perform an action before the program finishes, but my database request doesn't work this way (code below).
func main() {
    ...
    // trying to handle program finish event
    closer.Bind(cleanupSocketConnections(&pageHandler))
    ...
}

// function that handles program finish event
func cleanupSocketConnections(p *controllers.PageHandler) func() {
    return func() {
        p.PageService.ResetOnlineUsers()
    }
}

// this map[string]int contains key=userId, value=count of socket connections
type PageService struct {
    Users map[string]int
}

func (p *PageService) ResetOnlineUsers() {
    for userId, count := range p.Users {
        // decrease online of every user in program variable
        InfoService{}.DecreaseInfoOnline(userId, count)
    }
}
Maybe I am using it incorrectly, or maybe there is a better way to postpone the default program finish?
First of all, executing tasks when the server "breaks down", as you said, is quite complicated, because "breaking down" can mean a lot of things, and nothing can guarantee that cleanup functions execute when something goes really wrong in your server.
From an engineering point of view (if setting users offline on breakdown is so important), the best approach would be a secondary service, on another server, that receives user connection/disconnection events plus periodic pings; if it receives no updates within a set timeout, the service considers your server down and proceeds to set every user offline.
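As a sketch of that idea (the names, the timeout, and the setAllOffline callback are assumptions, not a definitive implementation; it needs a "time" import):
// Hypothetical watchdog: pings arrive from the main server; if none
// arrives within the timeout, every user is set offline.
func watchdog(pings <-chan struct{}, timeout time.Duration, setAllOffline func()) {
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    for {
        select {
        case _, ok := <-pings:
            if !ok {
                return // ping channel closed: clean shutdown
            }
            // got a ping in time: stop, drain if needed, and reset
            if !timer.Stop() {
                <-timer.C
            }
            timer.Reset(timeout)
        case <-timer.C:
            setAllOffline() // no ping within timeout: assume the server died
            return
        }
    }
}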
Back to your question, using defer and waiting for termination signals should cover 99% of cases. I commented the code to explain the logic.
// AllUsersOffline is called when the program is terminated. It takes a *sync.Once
// to make sure this function is performed only one time, since it might be called
// from different goroutines.
func AllUsersOffline(once *sync.Once) {
    once.Do(func() {
        fmt.Print("setting all users offline...")
        // logic to set all users offline
    })
}

// CatchSigs catches termination signals and executes the f function at the end
func CatchSigs(f func()) {
    cSig := make(chan os.Signal, 1)
    // watch for these signals; these are the termination signals in GNU =>
    // https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
    // (note that SIGKILL cannot be caught or handled, so it is not listed)
    signal.Notify(cSig, syscall.SIGTERM, syscall.SIGINT, syscall.SIGQUIT, syscall.SIGHUP)
    // wait for them
    sig := <-cSig
    fmt.Printf("received signal: %s\n", sig)
    // execute f
    f()
}

func main() {
    /* code */

    // the once is used to make sure AllUsersOffline is performed ONE TIME.
    usersOfflineOnce := &sync.Once{}

    // catch termination signals
    go CatchSigs(func() {
        // when a termination signal is caught execute the AllUsersOffline function
        AllUsersOffline(usersOfflineOnce)
    })

    // deferred functions are called even in case of panic events, although
    // execution is not to be taken for granted (OOM errors etc.)
    defer AllUsersOffline(usersOfflineOnce)

    /* code */

    // run server
    err := server.Run()
    if err != nil {
        // error logic here
    }

    // bla bla bla
}
I think that you need to look at goroutines and channels.
Here something (maybe) useful:
https://nathanleclaire.com/blog/2014/02/15/how-to-wait-for-all-goroutines-to-finish-executing-before-continuing/
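For instance, a minimal sketch of that pattern, waiting on a cleanup goroutine before main returns (the cleanup body is just a placeholder):
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        // placeholder for the cleanup work, e.g. decrementing "online" counts
        fmt.Println("resetting online users...")
    }()

    wg.Wait() // main does not return until the cleanup goroutine finishes
}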

How do I properly post a message to a simple Go Chatserver using REST API

I am currently building a simple chat server that supports posting messages through a REST API.
For example:
```
curl -X POST -H "Content-Type: application/json" --data '{"user":"alex", "text":"this is a message"}' http://localhost:8081/message
{
  "ok": true
}
```
Right now, I'm just storing the messages in an array of messages. I'm pretty sure this is an inefficient way. So is there a simple, better way to get and store the messages using goroutines and channels that will make it thread-safe?
Here is what I currently have:
type Message struct {
    Text      string
    User      string
    Timestamp time.Time
}

var Messages = []Message{}

func messagePost(c http.ResponseWriter, req *http.Request) {
    decoder := json.NewDecoder(req.Body)
    var m Message
    err := decoder.Decode(&m)
    if err != nil {
        panic(err)
    }
    if m.Timestamp == (time.Time{}) {
        m.Timestamp = time.Now()
    }
    addUser(m.User)
    Messages = append(Messages, m)
}
Thanks!
It could be made thread-safe using a mutex, as @ThunderCat suggested, but I think this does not add concurrency. If two or more requests are made simultaneously, one will have to wait for the other to complete first, slowing the server down.
Adding Concurrency: You can make it faster and handle more concurrent requests by using a queue (which is a Go channel) and a worker that listens on that channel - it'll be a simple implementation. Every time a message comes in through a POST request, you add it to the queue (this is instantaneous and the HTTP response can be sent immediately). In another goroutine, you detect that a message has been added to the queue, take it out, and append it to your Messages slice. While you're appending to Messages, the HTTP requests don't have to wait.
Note: You can make it even better by having multiple goroutines listen on the queue, but we can leave that for later.
This is roughly how the code will look:
type Message struct {
    Text      string
    User      string
    Timestamp time.Time
}

var Messages = []Message{}

// messageQueue is the queue that holds new messages until they are processed
var messageQueue chan Message

func init() { // need the init function to initialize the channel and the listeners
    // initialize the queue, choosing the buffer size as 8
    // (number of messages the channel can hold at once)
    messageQueue = make(chan Message, 8)

    // start a goroutine that listens on the queue/channel for new messages
    go listenForMessages()
}

func listenForMessages() {
    // whenever we detect a message in the queue, append it to Messages
    for m := range messageQueue {
        Messages = append(Messages, m)
    }
}

func messagePost(c http.ResponseWriter, req *http.Request) {
    decoder := json.NewDecoder(req.Body)
    var m Message
    err := decoder.Decode(&m)
    if err != nil {
        panic(err)
    }
    if m.Timestamp == (time.Time{}) {
        m.Timestamp = time.Now()
    }
    addUser(m.User)
    // add the message to the channel; it'll only wait if the channel is full
    messageQueue <- m
}
Storing Messages: As other users have suggested, storing messages in memory may not be the right choice, since the messages won't persist if the application is restarted. If you're working on a small proof-of-concept project and don't want to figure out the DB, you could save the Messages variable as a flat file on the server and read from it every time the application starts (note: this should not be done on a production system; for that you should set up a database). But yes, a database should be the way to go.
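As a rough sketch of that flat-file idea (the file path and helper names are assumptions, it reuses the Message type above, and it is still not production-grade):
import (
    "encoding/json"
    "errors"
    "os"
)

// saveMessages persists the in-memory slice as JSON.
func saveMessages(path string, msgs []Message) error {
    data, err := json.Marshal(msgs)
    if err != nil {
        return err
    }
    return os.WriteFile(path, data, 0644)
}

// loadMessages restores them on startup; a missing file just means no history yet.
func loadMessages(path string) ([]Message, error) {
    data, err := os.ReadFile(path)
    if errors.Is(err, os.ErrNotExist) {
        return nil, nil
    }
    if err != nil {
        return nil, err
    }
    var msgs []Message
    return msgs, json.Unmarshal(data, &msgs)
}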
Use a mutex to make the program threadsafe.
var Messages = []Message{}
var messageMu sync.Mutex

...

messageMu.Lock()
Messages = append(Messages, m)
messageMu.Unlock()
There's no need to use channels and goroutines to make the program threadsafe.
A database is probably a better choice for storing messages than the in-memory slice used in the question. Asking how to use a database to implement a chat program is too broad a question for SO.
