PUB/SUB pattern in ZeroMQ not working - go

I am trying to implement a very basic PUB/SUB pattern using ZeroMQ. I would like to have an always-active server broadcasting messages (as a publisher) to all clients, without caring which clients are connected.
If a client connects to this server as a subscriber, it should receive the messages.
However, I cannot get a message through using PUB/SUB.
In Python it would be:
# publisher (server.py)
import zmq

ctx = zmq.Context()
publisher = ctx.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9091')
while True:
    publisher.send_string("test")
and
# subscriber (client.py)
import zmq

ctx = zmq.Context()
subscriber = ctx.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9091')
while True:
    msg = subscriber.recv_string()
    print(msg)
Or in golang:
package main

import (
	"github.com/pebbe/zmq4"
	"log"
	"time"
)

func Listen(subscriber *zmq4.Socket) {
	for {
		s, err := subscriber.Recv(0)
		if err != nil {
			log.Println(err)
			continue
		}
		log.Println("rec", s)
	}
}

func main() {
	publisher, _ := zmq4.NewSocket(zmq4.PUB)
	defer publisher.Close()
	publisher.Bind("tcp://*:9090")

	subscriber, _ := zmq4.NewSocket(zmq4.SUB)
	defer subscriber.Close()
	subscriber.Connect("tcp://127.0.0.1:9090")

	go Listen(subscriber)

	for _ = range time.Tick(time.Second) {
		publisher.Send("test", 0)
		log.Println("send", "test")
	}
}
Did I misunderstand this pattern, or do I need to send a particular signal from the client to the server when connecting? I am interested in the Go version and only use the Python version for testing.

Did I misunderstand this pattern? Yes, fortunately, you did.
ZeroMQ archetypes were defined so as to represent a certain behaviour. A PUSH-archetype AccessPoint pushes every message "through" all the so-far-established communication channels; a PULL-er AccessPoint pulls anything that has arrived down the line(s) into "its hands"; a PUB-lisher AccessPoint publishes; a SUB-scriber AccessPoint subscribes, so as to receive just the messages that match its topic-filter(s), and no others.
Such an archetype "specification" helps build the ZeroMQ smart messaging / signalling infrastructure for our ease of use in distributed-systems architectures.
# subscriber (client.py)
import zmq

ctx = zmq.Context()
subscriber = ctx.socket( zmq.SUB )
subscriber.connect( 'tcp://127.0.0.1:9091' )
subscriber.setsockopt( zmq.LINGER, 0 )      # ALWAYS:
subscriber.setsockopt( zmq.SUBSCRIBE, "" )  # OTHERWISE NOTHING GETS DELIVERED
while True:
    msg = subscriber.recv_string()          # MAY USE .poll() + zmq.NOBLOCK
    print( msg )
subscriber, _ := zmq4.NewSocket( zmq4.SUB )
subscriber.Connect( "tcp://127.0.0.1:9090" )
subscriber.SetSubscribe( filter ) // SET: <topic-filter> ( "" matches all messages )
subscriber.SetLinger( 0 )         // SAFETY FIRST: PREVENT DEADLOCK ON .Close()
defer subscriber.Close()          // NOW MAY BE SAFELY DEFERRED
...
msg, _ := subscriber.Recv( 0 )
As defined, a freshly instantiated SUB-side AccessPoint object has literally zero chance to know which messages are the "right" ones that ought to be delivered and which are not.
Without this initial piece of knowledge, the ZeroMQ designers had a principal choice: either be archetype-policy consistent and let the PUB-side AccessPoint distribute all .send()-acquired messages only to those SUB-side AccessPoints that have explicitly requested any such message via the zmq.SUBSCRIBE mechanics, or deliver everything sent from the PUB to all so-far-undecided SUBs as well.
The former was a consistent and professional design step by the ZeroMQ authors.
The latter would actually mean violating ZeroMQ's own RFC specification.
The latter choice would be something like this: if one has just moved into a new apartment, one would hardly expect to find all newspapers and magazines delivered to one's new mailbox from the next morning on, would one? But if one subscribes to the Boston Globe, the very next morning the fresh issue will be at the doorstep, and it will keep arriving there until one cancels the subscription, or the newspaper goes bankrupt, or a lack of paper rolls prevents the printing shop from delivering in due time and fashion, or a traffic jam in the Big Dig tunnel holds up the local delivery some particular day.
All this is natural and compatible with the Archetype-policy.
Intermezzo: Golang already has bindings to many different API versions
Technology purists will object here that early API releases (up until about v3.2+) actually did technically transport all message payloads from the PUB to all SUBs, as this simplified the PUB-side workload envelope, but it increased the transport-class data flow and deferred the topic-filter processing to the SUB side's resources. Yet all this was hidden from user code behind the API's horizon of abstraction, so apart from a need to scale resources properly, it was transparent to the user. More recent API versions reversed the role and let the topic-filter processing happen on the PUB side. Nevertheless, in both cases the ZeroMQ RFC-specification policy is implemented in such a manner that the SUB side will never deliver (through the .recv() interface) a single message that does not match a valid, explicit SUB-side subscription.
As long as a SUB side has not yet explicitly set any zmq.SUBSCRIBE-instructed topic-filter, it cannot and will not deliver anything (which is both natural and fully consistent with the ZeroMQ RFC archetype-policy defined for the SUB-type AccessPoint).
The Best Next Step:
Always, at the very least, read the ZeroMQ API documentation, where all details are professionally specified; that way one at least gets a first glimpse of the breadth of the smart messaging / signaling framework.
This will not let anyone start on a green field and fully build one's own complex mental concept and in-depth understanding of how all the things work internally, which is obviously not any API documentation's ambition. Yet it will help anyone refresh or recall all the configurable details once one has mastered the ZeroMQ internal architecture, as detailed in the source referred to in the next paragraph.
Plus, for anyone who is indeed interested in distributed systems or in ZeroMQ per se, it is worth one's time and effort to read Pieter HINTJENS' book "Code Connected, Volume 1" (freely available in PDF), plus any of his other books on his rich software-engineering experience, because his many insights into modern computing may, and will, inspire a lot.
edit:
MWE in Go
package main

import (
	"github.com/pebbe/zmq4"
	"log"
	"time"
)

func Listen(subscriber *zmq4.Socket) {
	for {
		s, err := subscriber.Recv(0)
		if err != nil {
			log.Println(err)
			continue
		}
		log.Println("rec", s)
	}
}

func main() {
	publisher, _ := zmq4.NewSocket(zmq4.PUB)
	publisher.SetLinger(0)
	defer publisher.Close()
	publisher.Bind("tcp://127.0.0.1:9092")

	subscriber, _ := zmq4.NewSocket(zmq4.SUB)
	subscriber.SetLinger(0)
	defer subscriber.Close()
	subscriber.Connect("tcp://127.0.0.1:9092")
	subscriber.SetSubscribe("")

	go Listen(subscriber)

	for _ = range time.Tick(time.Second) {
		publisher.Send("test", 0)
		log.Println("send", "test")
	}
}

Related

Go Concurrency Circular Logic

I'm just getting into concurrency in Go and trying to create a dispatch goroutine that will send jobs to a worker pool listening on the jobchan channel. If a message comes into my dispatch function via the dispatchchan channel and my other goroutines are busy, the message is appended onto the stack slice in the dispatcher, and the dispatcher will try to send again later when a worker becomes available and/or no more messages are received on the dispatchchan. This is because the dispatchchan and the jobchan are unbuffered, and the goroutines the workers are running will append other messages to the dispatcher up to a certain point; I don't want the workers blocked waiting on the dispatcher and creating a deadlock. Here's the dispatcher code I've come up with so far:
func dispatch() {
	var stack []string
	acount := 0
	for {
		select {
		case d := <-dispatchchan:
			stack = append(stack, d)
		case c := <-mw:
			acount = acount + c
		case jobchan <- stack[0]:
			if len(stack) > 1 {
				stack[0] = stack[len(stack)-1]
				stack = stack[:len(stack)-1]
			} else {
				stack = nil
			}
		default:
			if acount == 0 && len(stack) == 0 {
				close(jobchan)
				close(dispatchchan)
				close(mw)
				wg.Done()
				return
			}
		}
	}
}
Complete example at https://play.golang.wiki/p/X6kXVNUn5N7
The mw channel is a buffered channel the same length as the number of worker goroutines. It acts as a semaphore for the worker pool. If a worker goroutine is doing [m]eaningful [w]ork, it throws int 1 on the mw channel, and when it finishes its work and goes back into the for loop listening to the jobchan, it throws int -1 on the mw. This way the dispatcher knows if there's any work being done by the worker pool, or if the pool is idle. If the pool is idle and there are no more messages on the stack, then the dispatcher closes the channels and returns control to the main func.
This is all good, but the issue I have is that the stack itself could be zero-length, so in the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out-of-bounds error. What I'm trying to figure out is how to ensure that when I hit that case, stack[0] actually has a value in it. I don't want that case to send an empty string to the jobchan.
Any help is greatly appreciated. If there's a more idiomatic concurrency pattern I should consider, I'd love to hear about it. I'm not 100% sold on this solution, but this is the farthest I've gotten so far.
This is all good, but the issue I have is that the stack itself could be zero-length, so in the case where I attempt to send stack[0] to the jobchan, if the stack is empty, I get an out-of-bounds error.
I can't reproduce it with your playground link, but it's believable, because at least one worker goroutine might have been ready to receive on that channel.
My output has been Msgcnt: 0, which is also easily explained, because a worker goroutine might not have been ready to receive on jobchan when dispatch() runs its select. The order of these operations is not defined.
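As an aside, the immediate panic can be avoided with the nil-channel idiom: a send on a nil channel blocks forever, so a select case can be disabled whenever there is nothing to send. A sketch of that guard (illustrative only; the redesign below is the better fix):

for {
	// Disable the send case while the stack is empty: out stays nil,
	// and a send on a nil channel can never be selected.
	var out chan string
	var next string
	if len(stack) > 0 {
		out = jobchan
		next = stack[0]
	}
	select {
	case d := <-dispatchchan:
		stack = append(stack, d)
	case out <- next:
		stack = stack[1:]
	}
}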
trying to create a dispatch goroutine that will send jobs to a worker pool listening on the jobchan channel
A channel needs no dispatcher. A channel is the dispatcher.
If a message comes into my dispatch function via the dispatchchan channel and my other goroutines are busy, the message is [...] will [...] send again later when a worker becomes available, [...] or no more messages are received on the dispatchchan.
With a few creative edits, it was easy to turn that into something close to the definition of a buffered channel. It can be read from immediately, or it can take up to some "limit" of messages that can't be immediately dispatched. You do define limit, though it's not used elsewhere within your code.
In any function, defining a variable you don't read will result in a compile-time error like limit declared but not used. This stricture improves code quality and helps identify typos. But at package scope, you've gotten away with defining the unused limit as a "global" and thus avoided a useful error: you haven't limited anything.
Don't use globals. Use passed parameters to define scope, because the definition of scope is tantamount to functional concurrency as expressed with the go keyword. Pass the relevant channels defined in local scope to functions defined at package scope so that you can easily track their relationships. And use directional channels to enforce the producer/consumer relationship between your functions. More on this later.
Going back to "limit", it makes sense to limit the quantity of jobs you're queueing because all resources are limited, and accepting more messages than you have any expectation of processing requires more durable storage than process memory provides. If you don't feel obligated to fulfill those requests no matter what, don't accept "too many" of them in the first place.
So then, what purpose do dispatchchan and dispatch() serve? To store a limited number of pending requests, if any, before they can be processed, and then to send them to the next available worker? That's exactly what a buffered channel is for.
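A minimal sketch of that replacement (the capacity and the process helper are illustrative assumptions, not names from your code):

// The buffered channel replaces dispatch() entirely: it stores up to
// `limit` pending jobs and hands each one to the next free worker.
jobchan := make(chan string, limit)

// Producers block only once `limit` jobs are already pending.
jobchan <- "some job"

// Each worker receives straight from the shared queue.
go func() {
	for job := range jobchan {
		process(job) // process is a placeholder for the real work
	}
}()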
Circular Logic
Who "knows" when your program is done? main() provides the initial input, but you close all 3 channels in `dispatch():
close(jobchan)
close(dispatchchan)
close(mw)
Your workers write to their own job queue so only when the workers are done writing to it can the incoming job queue be closed. However, individual workers also don't know when to close the jobs queue because other workers are writing to it. Nobody knows when your algorithm is done. There's your circular logic.
The mw channel is a buffered channel the same length as the number of worker goroutines. It acts as a semaphore for the worker pool.
There's a race condition here. Consider the case where all n workers have just received the last n jobs. They've each read from jobchan and they're checking the value of ok. dispatcher proceeds to run its select. Nobody is writing to dispatchchan or reading from jobchan right now, so the default case is immediately matched. len(stack) is 0 and there's no current job, so dispatcher closes all channels, including mw. At some point thereafter, a worker tries to write to a closed channel and panics.
So finally I'm ready to provide some code, but I have one more problem: I don't have a clear problem statement to write code around.
I'm just getting into concurrency in Go and trying to create a dispatch goroutine that will send jobs to a worker pool listening on the jobchan channel.
Channels between goroutines are like the teeth that synchronize gears. But to what end do the gears turn? You're not trying to keep time, nor construct a wind-up toy. Your gears could be made to turn, but what would success look like? Their turning?
Let's try to define a more specific use case for channels: given an arbitrarily long set of durations coming in as strings on standard input*, sleep that many seconds in one of n workers. So that we actually have a result to return, we'll say each worker will return the start and end time the duration was run for.
So that it can run in the playground, I'll simulate standard input with a hard-coded byte buffer.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os"
	"strings"
	"sync"
	"time"
)

type SleepResult struct {
	worker_id int
	duration  time.Duration
	start     time.Time
	end       time.Time
}

func main() {
	var num_workers = 2

	workchan := make(chan time.Duration)
	resultschan := make(chan SleepResult)

	var wg sync.WaitGroup
	var resultswg sync.WaitGroup

	resultswg.Add(1)
	go results(&resultswg, resultschan)

	for i := 0; i < num_workers; i++ {
		wg.Add(1)
		go worker(i, &wg, workchan, resultschan)
	}

	// playground doesn't have stdin
	var input = bytes.NewBufferString(
		strings.Join([]string{
			"3ms",
			"1 seconds",
			"3600ms",
			"300 ms",
			"5s",
			"0.05min"}, "\n") + "\n")

	var scanner = bufio.NewScanner(input)
	for scanner.Scan() {
		text := scanner.Text()
		if dur, err := time.ParseDuration(text); err != nil {
			fmt.Fprintln(os.Stderr, "Invalid duration", text)
		} else {
			workchan <- dur
		}
	}

	close(workchan) // we know when our inputs are done
	wg.Wait()       // and when our jobs are done
	close(resultschan)
	resultswg.Wait()
}

func results(wg *sync.WaitGroup, resultschan <-chan SleepResult) {
	for res := range resultschan {
		fmt.Printf("Worker %d: %s : %s => %s\n",
			res.worker_id, res.duration,
			res.start.Format(time.RFC3339Nano), res.end.Format(time.RFC3339Nano))
	}
	wg.Done()
}

func worker(id int, wg *sync.WaitGroup, jobchan <-chan time.Duration, resultschan chan<- SleepResult) {
	var res = SleepResult{worker_id: id}
	for dur := range jobchan {
		res.duration = dur
		res.start = time.Now()
		time.Sleep(res.duration)
		res.end = time.Now()
		resultschan <- res
	}
	wg.Done()
}
Here I use 2 wait groups, one for the workers, one for the results. This makes sure I'm done writing all the results before main() ends. I keep my functions simple by having each function do exactly one thing at a time: main reads inputs, parses durations from them, and sends them off to the next worker. The results function collects results and prints them to standard output. The worker does the sleeping, reading from jobchan and writing to resultschan.
workchan can be buffered (or not, as in this case); it doesn't matter because the input will be read at the rate it can be processed. We can buffer as much input as we want, but we can't buffer an infinite amount. I've set channel sizes as big as 1e6 - but a million is a lot less than infinite. For my use case, I don't need to do any buffering at all.
main knows when the input is done and can close workchan. main also knows when the jobs are done (wg.Wait()) and can close the results channel. Closing these channels is an important signal to the worker and results goroutines: they can distinguish between a channel that is empty and a channel that is guaranteed not to have any new additions.
for job := range jobchan {...} is shorthand for your more verbose:
for {
	job, ok := <-jobchan
	if !ok {
		wg.Done()
		return
	}
	...
}
Note that this code creates 2 workers, but it could create 20 or 2000, or even 1. The program functions regardless of how many workers are in the pool. It can handle any volume of input (though interminable input of course leads to an interminable program). It does not create a cyclic loop of output to input. If your use case requires jobs to create more jobs, that's a more challenging scenario that can typically be avoided with careful planning.
I hope this gives you some good ideas about how you can better use concurrency in your Go applications.
https://play.golang.wiki/p/cZuI9YXypxI

Attempting to acquire a lock with a deadline in golang?

How can one only attempt to acquire a mutex-like lock in go, either aborting immediately (like TryLock does in other implementations) or by observing some form of deadline (basically LockBefore)?
I can think of 2 situations right now where this would be greatly helpful and where I'm looking for some sort of solution. The first one is: a CPU-heavy service which receives latency sensitive requests (e.g. a web service). In this case you would want to do something like the RPCService example below. It is possible to implement it as a worker queue (with channels and stuff), but in that case it becomes more difficult to gauge and utilize all available CPU. It is also possible to just accept that by the time you acquire the lock your code may already be over deadline, but that is not ideal as it wastes some amount of resources and means we can't do things like a "degraded ad-hoc response".
/* Example 1: LockBefore() for latency-sensitive code. */
func (s *RPCService) DoTheThing(ctx context.Context, ...) ... {
	if s.someObj[req.Parameter].mtx.LockBefore(ctx.Deadline()) {
		defer s.someObj[req.Parameter].mtx.Unlock()
		... expensive computation based on internal state ...
	} else {
		return s.cheapCachedResponse[req.Parameter]
	}
}
Another case is when you have a bunch of objects which should be touched, but which may be locked, and where touching them should complete within a certain amount of time (e.g. updating some stats). In this case you could also either use LockBefore() or some form of TryLock(), see the Stats example below.
/* Example 2: TryLock() for updating stats. */
func (s *StatsObject) updateObjStats(key, value interface{}) {
	if s.someObj[key].TryLock() {
		defer s.someObj[key].Unlock()
		... update stats ...
		... fill in s.cheapCachedResponse ...
	}
}

func (s *StatsObject) UpdateStats() {
	s.someObj.Range(s.updateObjStats)
}
For ease of use, let's assume that in the above case we're talking about the same s.someObj. Any object may be blocked by DoTheThing() operations for a long time, which means we would want to skip it in updateObjStats. Also, we would want to make sure that we return the cheap response in DoTheThing() in case we can't acquire a lock in time.
Unfortunately, sync.Mutex only and exclusively has the functions Lock() and Unlock(). There is no way to potentially acquire a lock. Is there some easy way to do this instead? Am I approaching this class of problems from an entirely wrong angle, and is there a different, more "go"ish way to solve them? Or will I have to implement my own Mutex library if I want to solve these? I am aware of issue 6123 which seems to suggest that there is no such thing and that the way I'm approaching these problems is entirely un-go-ish.
Use a channel with a buffer size of one as a mutex.
l := make(chan struct{}, 1)

Lock:

	l <- struct{}{}

Unlock:

	<-l

Try lock:

	select {
	case l <- struct{}{}:
		// lock acquired
		<-l
	default:
		// lock not acquired
	}

Try with timeout:

	select {
	case l <- struct{}{}:
		// lock acquired
		<-l
	case <-time.After(time.Minute):
		// lock not acquired
	}
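The same shape covers the LockBefore(ctx.Deadline()) case from the question: select on the context's Done channel instead of a timer (a sketch, assuming ctx carries the request deadline):

	select {
	case l <- struct{}{}:
		// lock acquired: do the expensive computation, then unlock
		<-l
	case <-ctx.Done():
		// deadline exceeded or request canceled: fall back to the cheap response
	}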
I think you're asking several different things here:
Does this facility exist in the standard library? No, it doesn't. You can probably find implementations elsewhere; this is possible to implement using the standard library (atomics, for example).
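For illustration, a non-blocking TryLock takes only a few lines with sync/atomic (a sketch; TryMutex is a made-up name, and note that Go 1.18 later added TryLock to sync.Mutex itself):

import "sync/atomic"

// TryMutex is an illustrative try-only lock, not a standard library type.
type TryMutex struct {
	locked int32 // 0 = unlocked, 1 = locked
}

// TryLock reports whether the lock was acquired, without ever blocking.
func (m *TryMutex) TryLock() bool {
	return atomic.CompareAndSwapInt32(&m.locked, 0, 1)
}

func (m *TryMutex) Unlock() {
	atomic.StoreInt32(&m.locked, 0)
}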
Why doesn't this facility exist in the standard library: the issue you mentioned in the question is one discussion. There are also several discussions on the go-nuts mailing list with several Go core developers contributing: link 1, link 2. And it's easy to find other discussions by googling.
How can I design my program such that I won't need this?
The answer to (3) is more nuanced and depends on your exact issue. Your question already says:

It is possible to implement it as a worker queue (with channels and stuff), but in that case it becomes more difficult to gauge and utilize all available CPU

...without providing details on why it would be more difficult to utilize all CPUs that way, as opposed to checking a mutex's lock state.
In Go you usually want channels whenever the locking schemes become non-trivial. It shouldn't be slower, and it should be much more maintainable.
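For instance, the stats-update example from the question could drop per-object locks entirely by letting a single goroutine own the state and serializing access through a channel (a sketch with illustrative names, not the question's actual types):

type statsUpdate struct {
	key   string
	value int
}

// statsOwner is the only goroutine that touches the map, so no lock is
// needed; producers that must not wait can drop updates via select/default.
func statsOwner(updates <-chan statsUpdate) {
	stats := make(map[string]int)
	for u := range updates {
		stats[u.key] += u.value
	}
}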
How about this package: https://github.com/viney-shih/go-lock ? It uses channels and a semaphore (golang.org/x/sync/semaphore) to solve your problem.
go-lock implements TryLock, TryLockWithTimeout and TryLockWithContext functions in addition to Lock and Unlock. It provides the flexibility to control the resources.
Examples:
package main

import (
	"context"
	"fmt"
	"time"

	lock "github.com/viney-shih/go-lock"
)

func main() {
	casMut := lock.NewCASMutex()
	casMut.Lock()
	defer casMut.Unlock()

	// TryLock without blocking
	fmt.Println("Return", casMut.TryLock()) // Return false

	// TryLockWithTimeout without blocking
	fmt.Println("Return", casMut.TryLockWithTimeout(50*time.Millisecond)) // Return false

	// TryLockWithContext without blocking
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	fmt.Println("Return", casMut.TryLockWithContext(ctx)) // Return false

	// Output:
	// Return false
	// Return false
	// Return false
}
PMutex from package https://github.com/myfantasy/mfs
PMutex implements RTryLock(ctx context.Context) and TryLock(ctx context.Context)
// ctx - some context
ctx := context.Background()

mx := mfs.PMutex{}

isLocked := mx.TryLock(ctx)
if isLocked {
	// DO Something
	mx.Unlock()
} else {
	// DO Something else
}

Is it safe to hide sending to channel behind function call

I have a struct called Hub with a Run() method which is executed in its own goroutine. This method sequentially handles incoming messages. Messages arrive concurrently from multiple producers (separate goroutines). Of course I use a channel to accomplish this task. But now I want to hide the Hub behind an interface to be able to choose between its implementations. So, using a channel as a plain field of Hub isn't appropriate.
package main

import (
	"fmt"
	"time"
)

type Hub struct {
	msgs chan string
}

func (h *Hub) Run() {
	for {
		msg, hasMore := <-h.msgs
		if !hasMore {
			return
		}
		fmt.Println("hub: msg received", msg)
	}
}

func (h *Hub) SendMsg(msg string) {
	h.msgs <- msg
}

func send(h *Hub, prefix string) {
	for i := 0; i < 5; i++ {
		fmt.Println("main: sending msg")
		h.SendMsg(fmt.Sprintf("%s %d", prefix, i))
	}
}

func main() {
	h := &Hub{make(chan string)}
	go h.Run()
	for i := 0; i < 10; i++ {
		go send(h, fmt.Sprintf("msg sender #%d", i))
	}
	time.Sleep(time.Second)
}
So I've introduced the Hub.SendMsg(msg string) function that just performs h.msgs <- msg and which I can add to the HubInterface. And as a Go newbie I wonder: is it safe from the concurrency perspective? And if so, is it a common approach in Go?
Playground here.
Channel send semantics do not change when you move the send into a method. Andrew's answer points out that the channel needs to be created with make to send successfully, but that was always true, whether or not the send is inside a method.
If you are concerned about making sure callers can't accidentally wind up with invalid Hub instances with a nil channel, one approach is to make the struct type private (hub) and have a NewHub() function that returns a fully initialized hub wrapped in your interface type. Since the struct is private, code in other packages can't try to initialize it with an incomplete struct literal (or any struct literal).
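A sketch of that constructor approach (HubInterface is the name used in the question; the unexported hub type is the assumption here):

type HubInterface interface {
	Run()
	SendMsg(msg string)
}

// hub is unexported, so code in other packages cannot build one with a
// struct literal and end up with a nil msgs channel.
type hub struct {
	msgs chan string
}

func NewHub() HubInterface {
	return &hub{msgs: make(chan string)}
}

func (h *hub) Run()               { /* as in the question's Hub.Run */ }
func (h *hub) SendMsg(msg string) { h.msgs <- msg }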
That said, it's often possible to create invalid or nonsense values in Go and that's accepted: net.IP("HELLO THERE BOB") is valid syntax, or net.IP{}. So if you think it's better to expose your Hub type, go ahead.
Easy answer
Yes
Better answer
No
Channels are great for emitting data from unknown goroutines, and they do so safely. However, I would recommend being careful with a few parts. In the listed example, the channel is created during construction of the struct by the consumer of the API (and not by the Hub itself).
Say the consumer creates the Hub like the following: &Hub{}. Perfectly valid... apart from the fact that every invocation of SendMsg() will block forever. Luckily you placed those in their own goroutines. So you're still fine, right? Wrong. You are now leaking goroutines. Seems fine... until you run this for a period of time. Go encourages you to have valid zero values. In this case &Hub{} is not valid.
Ensuring SendMsg() won't block could be achieved via a select with a default case, however you then have to decide what to do when you hit that case (e.g. throw data away). The channel could block for more reasons than a bad setup, too. Say later you do more than simply print the data after reading from the channel: what if the read gets very slow, or blocks on IO? You then start pushing back on the producers.
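A sketch of that non-blocking variant (TrySendMsg is an illustrative name, not part of the question's API):

// TrySendMsg attempts the send without blocking and reports whether the
// hub accepted the message; on false, the caller decides what to drop.
func (h *Hub) TrySendMsg(msg string) bool {
	select {
	case h.msgs <- msg:
		return true
	default:
		return false
	}
}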
Ultimately, channels allow you to not think much about concurrency... However, if this is something high-throughput, then you have quite a bit to consider. If it is production code, then you need to understand that your API here involves SendMsg() blocking.

Distribute the same keyword to multiple goroutines

I have something like this mock (code below) which distributes the same keyword out to multiple goroutines. The goroutines all take different amounts of time doing things with the keyword, but they can operate independently of each other, so they don't need any synchronization. The solution given below to distribute clearly does synchronize the goroutines, though.
I just want to toss this idea out there to see how other people would deal with this type of distribution, as I assume it is fairly common and someone else has thought about it before.
Here are some other solutions I have thought up and why they seem kinda meh to me:

- One goroutine for each keyword: each time a new keyword comes in, spawn a goroutine to handle the distribution.
- Give the keyword a bitmask or something for each goroutine to update: this way, once all of the workers have touched the keyword, it can be deleted and we can move on.
- Give each worker its own stack to work off of: this seems kinda appealing, just give each worker a stack to add each keyword to, but we would eventually run into a problem of a ton of memory being taken up, since it is planned to run for so long.
The problem with all of these is that my code is supposed to run for a long time, unwatched, and that would lead to either a huge build-up of keywords or goroutines due to the lazy worker taking longer than the others. It almost seems like it'd be nice to give each worker its own Amazon SQS queue, or to implement something similar to that myself.
EDIT:
Store the keyword outside the program
I just thought of doing it this way instead: I could perhaps just store the keyword outside the program until all the workers grab it, and then delete it once it has been used up. This sits OK with me, actually; I don't have a problem with using up disk space.
Anyway here is an example of the approach that waits for all to finish:
package main

import (
	"flag"
	"fmt"
	"math/rand"
	"os"
	"os/signal"
	"strconv"
	"time"
)

var (
	shutdown chan struct{}
	count    = flag.Int("count", 5, "number to run")
)

type sleepingWorker struct {
	name  string
	sleep time.Duration
	ch    chan int
}

func NewQuicky(n string) sleepingWorker {
	var rq sleepingWorker
	rq.name = n
	rq.ch = make(chan int)
	rq.sleep = time.Duration(rand.Intn(5)) * time.Second
	return rq
}

func (r sleepingWorker) Work() {
	for {
		fmt.Println(r.name, "is about to sleep, number:", <-r.ch)
		time.Sleep(r.sleep)
	}
}

func NewLazy() sleepingWorker {
	var rq sleepingWorker
	rq.name = "Lazy slow worker"
	rq.ch = make(chan int)
	rq.sleep = 20 * time.Second
	return rq
}

func distribute(gen chan int, workers ...sleepingWorker) {
	for kw := range gen {
		for _, w := range workers {
			fmt.Println("sending keyword to:", w.name)
			select {
			case <-shutdown:
				return
			case w.ch <- kw:
				fmt.Println("keyword sent to:", w.name)
			}
		}
	}
}

func main() {
	flag.Parse()
	shutdown = make(chan struct{})
	go func() {
		c := make(chan os.Signal, 1)
		signal.Notify(c, os.Interrupt)
		<-c
		close(shutdown)
	}()

	x := make([]sleepingWorker, *count)
	for i := 0; i < (*count)-1; i++ {
		x[i] = NewQuicky(strconv.Itoa(i))
		go x[i].Work()
	}
	x[(*count)-1] = NewLazy()
	go x[(*count)-1].Work()

	gen := make(chan int)
	go distribute(gen, x...)

	go func() {
		i := 0
		for {
			i++
			select {
			case <-shutdown:
				return
			case gen <- i:
			}
		}
	}()

	<-shutdown
	os.Exit(0)
}
Let's assume I understand the problem correctly:
There's not too much you can do about it, I'm afraid. You have limited resources (assuming all resources are limited), so if data is written to your input faster than you process it, some synchronisation will be needed. In the end the whole process will run as fast as the slowest worker anyway.
If you really need data from the workers available as soon as possible, the best you can do is add some kind of buffering. But the buffer must be limited in size (even if you run in the cloud, it would be limited by your wallet), so assuming a never-ending torrent of input, it will only postpone the choke until some point in the future, when you will start seeing the "synchronisation" again.
All the ideas you presented in your question are based on buffering the data. Even if you run a routine for every keyword-worker pair, this will buffer one element in every routine, and unless you implement a limit on the total number of routines, you'll run out of memory. And even if you always left some room for the quickest worker to spawn a new routine, the input queue wouldn't be able to deliver new items, as it would be choked on the slowest worker.
Buffering would solve your problem if, on average, your input is slower than the processing time but you have occasional spikes. If your buffer is big enough, you can then accommodate the increase in throughput, and maybe your quickest worker won't notice a thing.
Solution?
As Go comes with buffered channels, this is the easiest thing to implement (also suggested by icza in a comment). Just give each worker a buffer. If you know which worker is the slowest, you can give it a bigger buffer. In this scenario you're limited by the memory of your machine.
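A sketch of that change, reusing the question's sleepingWorker and NewQuicky (the size parameter and values are arbitrary assumptions):

// Buffered queues decouple the workers: distribute() now only blocks
// on a worker whose own queue is completely full.
func NewQuickyBuffered(n string, size int) sleepingWorker {
	rq := NewQuicky(n)
	rq.ch = make(chan int, size) // buffered instead of unbuffered
	return rq
}

The known-slow worker would get the biggest buffer, e.g. a NewLazy variant created with make(chan int, 10000), trading memory for slack.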
If you're not happy with the single-machine memory limit, then yes, per one of your ideas, you can "simply" store the buffer (queue) for each worker on the hard drive. But this is also limited and just postpones the blocking scenario until later. This is essentially the same as your Amazon SQS proposal (you could keep the buffer in the cloud, but you need to either limit it reasonably or prepare for the bill).
The final note: depending on the system you're building, it might not be a good idea to buffer items on such a massive scale and let the backlog for the slower workers build up. It's often not desirable to have a worker hours, days, or weeks behind the input flow, and this is what would happen with an infinite buffer. The real answer then would be: improve your slowest worker to process things faster (and add some buffering to improve the experience).

Go amqp method to list all currently declared queues?

I'm using streadway/amqp to tie rabbitmq into our alert system. I need a method that can return a list of all the currently declared queues (exchanges would be nice too!) so that I can go through and get all the message counts.
I'm digging through the api documentation here...
http://godoc.org/github.com/streadway/amqp#Queue
...but I don't seem to be finding what I'm looking for. We're currently using a bash call to 'rabbitmqctl list_queues', but that's a kludgy way to get this information: it requires a custom sudo setting and fires off hundreds of log entries a day to the secure log.
edit: by "method" I mean 'a way to get this piece of information' as opposed to an actual API call; a call would be great, but I don't believe one exists.
Answered my own question. There isn't a way! The amqp spec doesn't define a standard way of finding this out, which seems like a glaring oversight to me. However, since my backend is rabbitmq with the management plugin, I can make a call to that to get this information.
from https://stackoverflow.com/a/21286370/5076297 (in python, I'll just have to translate this and probably also figure out the call to get vhosts):
import requests
def rest_queue_list(user='guest', password='guest', host='localhost', port=15672, virtual_host=None):
url = 'http://%s:%s/api/queues/%s' % (host, port, virtual_host or '')
response = requests.get(url, auth=(user, password))
queues = [q['name'] for q in response.json()]
return queues
edit: In golang (this was a headache to figure out as I haven't done anything with structures in years)
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	type Queue struct {
		Name  string `json:"name"`
		VHost string `json:"vhost"`
	}

	manager := "http://127.0.0.1:15672/api/queues/"

	client := &http.Client{}
	req, _ := http.NewRequest("GET", manager, nil)
	req.SetBasicAuth("guest", "guest")
	resp, _ := client.Do(req)

	value := make([]Queue, 0)
	json.NewDecoder(resp.Body).Decode(&value)
	fmt.Println(value)
}
Output looks like this (I have two queues named hello and test)
[{hello /} {test /}]
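For the vhost lookup mentioned above, the management plugin exposes a similar endpoint, /api/vhosts, which decodes the same way (a sketch reusing the client and guest credentials from the example):

type VHost struct {
	Name string `json:"name"`
}

req, _ := http.NewRequest("GET", "http://127.0.0.1:15672/api/vhosts", nil)
req.SetBasicAuth("guest", "guest")
resp, _ := client.Do(req)

vhosts := make([]VHost, 0)
json.NewDecoder(resp.Body).Decode(&vhosts)
fmt.Println(vhosts) // e.g. [{/}] when only the default vhost exists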
