Execute jobs concurrently in sequential order - go

"What?" you ask, "That title doesn't make any sense."
Consider the following:
Jobs with different IDs may be processed asynchronously, but jobs with the same ID must be processed synchronously, in the order they were queued.
My current implementation creates a goroutine to handle the jobs for each specific ID and looks something like this:
func FanOut() chan<- *Job {
    channel := make(chan *Job)
    routines := make(map[string]chan<- *Job)
    go func() {
        for j := range channel {
            r, found := routines[j.id]
            if !found {
                r = Routine()
                routines[j.id] = r
            }
            r <- j
        }
    }()
    return channel
}
This appears to work well (in current testing), but creating thousands of goroutines might not be the best approach. Additionally, the fan-out code blocks unless a buffered channel is used.
Rather than a collection of goroutines (above), I'm considering a collection of sync.Mutex values. The idea would be to have a pool of goroutines which must first acquire the mutex corresponding to the job ID.
Are there any existing Go patterns suited to handling these requirements?
Is there a better approach?

Create a channel for each ID - perhaps a slice of channels or a map (indexed by ID). Each channel would have a go-routine that processes the jobs for that ID in order. Simple.
I wouldn't worry about creating too many goroutines. And I wouldn't use mutexes: without getting into too much detail, using channels and goroutines ensures each job is only processed by one goroutine at a time and avoids the possibility of data races.
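As a rough sketch of that idea (my own illustration, not the asker's code; the Process method and the buffer size are assumptions):

// Sketch of per-ID dispatch: jobs with the same ID go to the same channel,
// and one goroutine drains each channel in FIFO order.
func FanOutPerID() chan<- *Job {
    in := make(chan *Job)
    go func() {
        perID := make(map[string]chan *Job)
        for j := range in {
            ch, ok := perID[j.id]
            if !ok {
                // First job for this ID: create its channel and its worker.
                ch = make(chan *Job, 16) // small buffer so the dispatcher rarely blocks
                perID[j.id] = ch
                go func(jobs <-chan *Job) {
                    for job := range jobs {
                        job.Process() // jobs for one ID run strictly in order
                    }
                }(ch)
            }
            ch <- j
        }
        // Input channel closed: close every per-ID channel so the workers exit.
        for _, ch := range perID {
            close(ch)
        }
    }()
    return in
}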
BTW I only added this as an answer as I am not permitted to add comments (yet?).

Related

Golang Concurrency Code Review of Codewalk

I'm trying to understand best practices for Golang concurrency. I read O'Reilly's book on Go's concurrency and then came back to the Golang Codewalks, specifically this example:
https://golang.org/doc/codewalk/sharemem/
This is the code I was hoping to review with you in order to learn a little bit more about Go. My first impression is that this code breaks some best practices. This is of course my (very) inexperienced opinion, and I wanted to discuss it and gain some insight into the process. This isn't about who's right or wrong; please be nice, I just want to share my views and get some feedback on them. Maybe this discussion will help other people see why I'm wrong and teach them something.
I'm fully aware that the purpose of this code is to teach beginners, not to be perfect code.
Issue 1 - No Goroutine cleanup logic
func main() {
    // Create our input and output channels.
    pending, complete := make(chan *Resource), make(chan *Resource)
    // Launch the StateMonitor.
    status := StateMonitor(statusInterval)
    // Launch some Poller goroutines.
    for i := 0; i < numPollers; i++ {
        go Poller(pending, complete, status)
    }
    // Send some Resources to the pending queue.
    go func() {
        for _, url := range urls {
            pending <- &Resource{url: url}
        }
    }()
    for r := range complete {
        go r.Sleep(pending)
    }
}
The main function has no way to clean up the goroutines, which means that if this were part of a library, they would be leaked.
Issue 2 - Writers aren't spawning the channels
I read that, as a best practice, the logic to create, write to and clean up a channel should be controlled by a single entity (or group of entities). The reason behind this is that writers will panic when writing to a closed channel. So it is best for the writer(s) to create the channel, write to it and control when it should be closed. If there are multiple writers, they can be synced with a WaitGroup.
func StateMonitor(updateInterval time.Duration) chan<- State {
    updates := make(chan State)
    urlStatus := make(map[string]string)
    ticker := time.NewTicker(updateInterval)
    go func() {
        for {
            select {
            case <-ticker.C:
                logState(urlStatus)
            case s := <-updates:
                urlStatus[s.url] = s.status
            }
        }
    }()
    return updates
}
This function shouldn't be in charge of creating the updates channel because it is the reader of the channel, not the writer. The writer of this channel should create it and pass it to this function, basically saying to the function "I will pass updates to you via this channel". But instead, this function is creating a channel, and it isn't clear who is responsible for cleaning it up.
Issue 3 - Writing to a channel asynchronously
This function:
func (r *Resource) Sleep(done chan<- *Resource) {
    time.Sleep(pollInterval + errTimeout*time.Duration(r.errCount))
    done <- r
}
Is being referenced here:
for r := range complete {
    go r.Sleep(pending)
}
And it seems like an awful idea. When this channel is closed, we'll have a goroutine sleeping somewhere out of our reach waiting to write to that channel. Let's say this goroutine sleeps for an hour; when it wakes up, it will try to write to a channel that was closed in the cleanup process. This is another example of why the writers of the channels should be in charge of the cleanup process. Here we have a writer who is completely free and unaware of when the channel was closed.
Please
If I missed any issues from that code (related to concurrency), please list them. It doesn't have to be an objective issue; if you would have designed the code differently for any reason, I'm also interested in learning about that.
Biggest lesson from this code
For me, the biggest lesson I take from reviewing this code is that the cleanup of channels and the writing to them have to be synchronized. They have to be in the same for{} loop or at least communicate somehow (maybe via other channels or primitives) to avoid writing to a closed channel.
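One way to make that synchronization concrete (a sketch of my own, not code from the codewalk; produceAll and the int payload are placeholders) is to let the writers own the channel and close it only after a sync.WaitGroup reports that they have all finished:

// Sketch: the writers create and own the channel, a WaitGroup tracks them,
// and the channel is closed exactly once, after every writer has returned.
func produceAll(batches [][]int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, batch := range batches {
        wg.Add(1)
        go func(batch []int) {
            defer wg.Done()
            for _, v := range batch {
                out <- v // safe: out cannot be closed while any writer is alive
            }
        }(batch)
    }
    go func() {
        wg.Wait()  // every writer has finished
        close(out) // so closing here can never race with a send
    }()
    return out
}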
It is the main function, so there is no need to clean up. When main returns, the program exits. If this weren't main, you would be correct.
There is no best practice that fits all use cases. The code you show here is a very common pattern: the function creates a goroutine and returns a channel so that others can communicate with that goroutine. There is no rule that governs how channels must be created. There is no way to terminate that goroutine, though. One use case this pattern fits well is reading a large result set from a database: the channel allows streaming data as it is read from the database. In that case there are usually other means of terminating the goroutine, like passing a context.
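As a rough sketch of that context-based termination (rowStream and its int payload are placeholders, not a real API), the producing goroutine checks ctx.Done() on every send so the caller can stop it:

// Sketch: a producer goroutine streams values on a channel and stops
// when the caller cancels the context. The int payload stands in for
// whatever rows the database read would yield.
func rowStream(ctx context.Context) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out) // the writer owns and closes the channel
        for i := 0; ; i++ {
            select {
            case out <- i:
            case <-ctx.Done():
                return // caller cancelled: stop producing and clean up
            }
        }
    }()
    return out
}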
Again, there are no hard rules on how channels should be created/closed. A channel can be left open, and it will be garbage collected when it is no longer used. If the use case demands so, the channel can be left open indefinitely, and the scenario you worry about will never happen.
As for your question about this code being part of a library: yes, it would be poor practice to spawn goroutines with no cleanup inside a library function. If those goroutines carry out documented behaviour of the library, it's problematic that the caller doesn't know when that behaviour is going to happen. If you have any behaviour that is typically "fire and forget", it should be the caller who chooses when to forget about it. For example:
func doAfter5Minutes(f func()) {
    go func() {
        time.Sleep(5 * time.Minute)
        f()
        log.Println("done!")
    }()
}
Makes sense, right? When you call the function, it does something 5 minutes later. The problem is that it's easy to misuse this function like this:
// do the important task every 5 minutes
for {
    doAfter5Minutes(importantTaskFunction)
}
At first glance, this might seem fine. We're doing the important task every 5 minutes, right? In reality, we're spawning many goroutines very quickly, probably consuming all available memory before they start dropping off.
We could implement some kind of callback or channel to signal when the task is done, but really, the function should be simplified like so:
func doAfter5Minutes(f func()) {
    time.Sleep(5 * time.Minute)
    f()
    log.Println("done!")
}
Now the caller has the choice of how to use it:
// call synchronously
doAfter5Minutes(importantTaskFunction)

// fire and forget
go doAfter5Minutes(importantTaskFunction)
This function arguably should also be changed. As you say, the writer should effectively own the channel, as they should be the one closing it. The fact that this channel-reading function insists on creating the channel it reads from actually coerces it into the poor "fire and forget" pattern mentioned above. Notice how the function needs to read from the channel, but it also needs to return the channel before reading. It therefore had to put the reading behaviour in a new, unmanaged goroutine to allow itself to return the channel right away.
func StateMonitor(updates chan State, updateInterval time.Duration) {
    urlStatus := make(map[string]string)
    ticker := time.NewTicker(updateInterval)
    defer ticker.Stop() // not stopping the ticker is also a resource leak
    for {
        select {
        case <-ticker.C:
            logState(urlStatus)
        case s := <-updates:
            urlStatus[s.url] = s.status
        }
    }
}
Notice that the function is now simpler, more flexible and synchronous. The only thing that the previous version really accomplishes, is that it (mostly) guarantees that each instance of StateMonitor will have a channel all to itself, and you won't have a situation where multiple monitors are competing for reads on the same channel. While this may help you avoid a certain class of bugs, it also makes the function a lot less flexible and more likely to have resource leaks.
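For illustration, a sketch of how the caller might now use it (reusing the codewalk's State, urls and statusInterval; the exact setup is an assumption): the caller creates the channel, decides that the monitor runs concurrently, and stays responsible for coordinating its writers and any shutdown.

func main() {
    updates := make(chan State)               // the caller creates and owns the channel
    go StateMonitor(updates, statusInterval)  // the caller decides it runs in a goroutine
    for _, url := range urls {
        updates <- State{url, "pending"}      // writers are coordinated by the caller
    }
    // main knows when its writers are done, so it is the one place where
    // closing (or simply abandoning) the updates channel can be decided safely.
}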
I'm not sure I really understand this example, but the golden rule for channel closing is that the writer should always be responsible for closing the channel. Keep this rule in mind, and notice a few points about this code:
The Sleep method writes r to its done channel (which is pending at the call site)
The Sleep method is executed concurrently, with no way of tracking how many instances are running, what state they are in, etc.
Based on these points alone, we can say that there probably isn't anywhere in the program where it would be safe to close that channel, because there's seemingly no way of knowing whether it will be used again.

Cancellation pattern in Golang

Here is a quote from 50 Shades Of Go: Traps, Gotchas and Common mistakes:
You can also use a special cancellation channel to interrupt the workers.
func First(query string, replicas ...Search) Result {
    c := make(chan Result)
    done := make(chan struct{})
    defer close(done)
    searchReplica := func(i int) {
        select {
        case c <- replicas[i](query):
        case <-done:
        }
    }
    for i := range replicas {
        go searchReplica(i)
    }
    return <-c
}
As far as I understand, it means that we use the done channel to interrupt the workers ahead of time, without waiting for full execution (in our case, execution of replicas[i](query)). Therefore, we can receive a result from the fastest worker ("First Wins" pattern) and then cancel the work in all the other workers to save resources.
On the other hand, according to the specification:
For all the cases in the statement, the channel operands of receive operations and the channel and right-hand-side expressions of send statements are evaluated exactly once, in source order, upon entering the "select" statement.
As far as I understand, this means we cannot interrupt the workers: in any case, every worker will evaluate replicas[i](query), and only then select the case <-done and finish its execution.
Could you please point out the mistake in my reasoning?
Your reasoning is correct, the wording on the site is not completely clear. What this "construct" achieves is that the goroutines will not be left hanging forever, but once the searches finish, the goroutines will end properly. Nothing more is happening there.
In general, you can't interrupt any goroutine from the outside, the goroutine itself has to support some kind of termination (e.g. shutdown channel, context.Context etc.). See cancel a blocking operation in Go.
So yes, in the example you posted, all searches will be launched concurrently, the result of the fastest one will be returned as it arrives, and the rest of the goroutines will continue to run until their search finishes.
What happens to the rest? Their results will be discarded (the case <-done will be chosen, as the unbuffered channel c cannot hold any elements and nobody receives from it any more).
You can verify this in this Go Playground example.
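If you want the remaining searches to actually stop early rather than merely have their results discarded, the replicas themselves have to accept a cancellation signal. A rough sketch (assuming a hypothetical context-aware Search signature, which is not the one from the quoted example):

// Sketch: a variant of First where each replica takes a context and is
// expected to return early once it is cancelled.
type Search func(ctx context.Context, query string) Result

func First(query string, replicas ...Search) Result {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // tell the losing replicas to stop as soon as First returns
    c := make(chan Result, len(replicas)) // buffered so the losers never block on send
    for _, r := range replicas {
        go func(r Search) {
            c <- r(ctx, query)
        }(r)
    }
    return <-c
}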

Golang channel output order

func main() {
    messages := make(chan string)
    go func() { messages <- "hello" }()
    go func() { messages <- "ping" }()
    msg := <-messages
    msg2 := <-messages
    fmt.Println(msg)
    fmt.Println(msg2)
}
The above code consistently prints "ping" and then "hello" on my terminal.
I am confused about the order in which this prints, so I was wondering if I could get some clarification on my thinking.
I understand that unbuffered channels are blocking while waiting for both a sender and a receiver. So in the above case, when these two goroutines are executed, there isn't, in either case, a receiver yet. So I am guessing that both goroutines block until a receiver is available on the channel.
Now... I would assume that "hello" is first attempted on the channel but has to wait; at the same time, "ping" tries, but again has to wait. Then
msg := <-messages
shows up, so I would assume that at that stage, the program will arbitrarily pick one of the waiting goroutines and allow it to send its message over into the channel, since msg is ready to receive.
However, it seems that no matter how many times I run the program, it is always msg that gets assigned "ping" and msg2 that gets assigned "hello", which gives the impression that "ping" always gets priority to send first (to msg). Why is that?
It's not about the order of reading the channel but about the order of goroutine execution, which is not guaranteed.
Try to Println from the goroutines where you write to the channel (before and after writing), and I think you will see the writes happen in the same order as the reads.
In the Go spec, channel ordering is described as follows:
Channels act as first-in-first-out queues. For example, if one goroutine sends values on a channel and a second goroutine receives them, the values are received in the order sent.
It will print whichever value is available first to be received on the other end.
If you want to synchronize them, use different channels or add WaitGroups (a small WaitGroup sketch follows after the playground link below).
package main

import (
    "fmt"
)

func main() {
    messages1 := make(chan string)
    messages2 := make(chan string)
    go func(<-chan string) {
        messages2 <- "ping"
    }(messages2)
    go func(<-chan string) {
        messages1 <- "hello"
    }(messages1)
    fmt.Println(<-messages1)
    fmt.Println(<-messages2)
}
As you can see, using different channels you can receive the values in whatever order you choose.
Go playground
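The answer also mentions WaitGroups; here is a minimal sketch (my own variation, not from the linked playground) that keeps a single channel but forces "hello" to be sent before "ping":

package main

import (
    "fmt"
    "sync"
)

func main() {
    messages := make(chan string)
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        messages <- "hello" // completes only once the receiver has taken it
        wg.Done()
    }()
    go func() {
        wg.Wait() // don't send "ping" until "hello" has been delivered
        messages <- "ping"
    }()
    fmt.Println(<-messages)
    fmt.Println(<-messages)
}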
I just went through this same thing. See my post here: Golang channels, order of execution
Like you, I saw a pattern that was counter-intuitive, in a place where there actually shouldn't be a pattern. Once you launch a goroutine, you have launched a separate thread of execution, and basically all bets are off at that point regarding the order in which the threads will execute their steps. But if there were going to be an order, logic tells us the first one launched would be executed first.
In actual fact, if you recompile that program each time, the results will vary. That's what I found when I started compiling/running it on my local computer. In order to make the results random, I had to "dirty" the file, by adding and removing a space for instance. Then the compiler would re-compile the program, and then I would get a random order of execution. But when compiled in the go sandbox, the result was always the same.
When you use the sandbox, the results are apparently cached. I couldn't get the order to change in the sandbox by using insignificant changes. The only way I got it to change was to issue a time.Sleep(1) command between the launching of the go statements. Then, the first one launched would be the first one executed every time. I still don't think I'd bet my life on that continuing to happen though because they are separate threads of execution and there are no guarantees.
The bottom line is that I was seeing a deterministic result where there should be no determinism. That's what threw me. I was fully cleared up when I found that the results really are random in a normal environment. The sandbox is a great tool to have, but it's not a normal environment. Compile and run your code locally and you will see the varying results you would expect.
I was confused when I first met this, but now it is clear to me. The cause is not the channel but the goroutines.
As The Go Memory Model mentions, there is no guarantee about when goroutines run or exit, so when you create two goroutines, you cannot be sure they run in order.
So if you want the printing to follow FIFO order, you can change your code like this:
func main() {
    messages := make(chan string)
    go func() {
        messages <- "hello"
        messages <- "ping"
    }()
    //go func() { messages <- "ping" }()
    msg := <-messages
    msg2 := <-messages
    fmt.Println(msg)
    fmt.Println(msg2)
}

How are Go channels implemented?

After (briefly) reviewing the Go language spec, effective Go, and the Go memory model, I'm still a little unclear as to how Go channels work under the hood.
What kind of structure are they? They act kind of like a thread-safe queue/array.
Does their implementation depend on the architecture?
The source file for channels is (from your go source code root) in /src/pkg/runtime/chan.go.
hchan is the central data structure for a channel, with send and receive linked lists (holding a pointer to their goroutine and the data element) and a closed flag. There's a Lock embedded structure that is defined in runtime2.go and that serves as a mutex (futex) or semaphore depending on the OS. The locking implementation is in lock_futex.go (Linux/Dragonfly/Some BSD) or lock_sema.go (Windows/OSX/Plan9/Some BSD), based on the build tags.
Channel operations are all implemented in this chan.go file, so you can see the makechan, send and receive operations, as well as the select construct, close, len and cap built-ins.
For a great in-depth explanation on the inner workings of channels, you have to read Go channels on steroids by Dmitry Vyukov himself (Go core dev, goroutines, scheduler and channels among other things).
Here is a good talk that describes roughly how channels are implemented:
https://youtu.be/KBZlN0izeiY
Talk description:
GopherCon 2017: Kavya Joshi - Understanding Channels
Channels provide a simple mechanism for goroutines to communicate, and a powerful construct to build sophisticated concurrency patterns. We will delve into the inner workings of channels and channel operations, including how they're supported by the runtime scheduler and memory management systems.
You asked two questions:
What kind of structure are they?
Channels in Go are indeed "kind of like a thread-safe queue"; to be more precise, channels in Go have the following properties:
goroutine-safe
Provide FIFO semantics
Can store and pass values between goroutines
Cause goroutines to block and unblock
Every time you create a channel, an hchan struct is allocated on the heap, and a pointer to that hchan is returned; the channel value is that pointer, which is how goroutines can share it.
The first two properties described above are implemented similarly to a queue with a lock.
The elements that the channel can pass to different go-routines are implemented as a circular queue (ring buffer) with indices in the hchan struct, the indices account for the position of elements in the buffer.
Circular queue:
qcount uint // total data in the queue
dataqsiz uint // size of the circular queue
buf unsafe.Pointer // points to an array of dataqsiz elements
And the indices:
sendx uint // send index
recvx uint // receive index
Every time a goroutine needs to access the channel structure and modify its state, it holds the lock, e.g. to copy elements to/from the buffer or to update a list or an index. Some operations are optimized to be lock-free, but this is out of scope for this answer.
The blocking and unblocking of goroutines is achieved using two queues (linked lists) that hold the blocked goroutines:
recvq waitq // list of recv waiters
sendq waitq // list of send waiters
Every time a goroutine wants to add a task to a full channel (buffer is full), or to take a task from an empty channel (buffer is empty), a pseudo-goroutine sudog struct is allocated, and the goroutine adds the sudog as a node to the send or receive waiters list accordingly. The goroutine then updates the runtime scheduler using special calls that indicate when it should be taken out of execution (gopark) or made ready to run (goready).
Note that this is a very simplified explanation that hides some complexities.
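To make the first two properties more concrete, here is a toy, user-level approximation of a buffered channel using the sync package (a sketch only: the real hchan also manages the waiter lists, sudogs and scheduler interaction described above, and this toy has no unbuffered mode):

// Sketch: a lock-protected ring buffer with FIFO semantics, roughly
// mirroring hchan's buf/sendx/recvx/qcount fields. Blocking is done
// with sync.Cond instead of the runtime's gopark/goready machinery.
type toyChan struct {
    mu       sync.Mutex
    notFull  *sync.Cond
    notEmpty *sync.Cond
    buf      []interface{}
    sendx    int // next slot to write
    recvx    int // next slot to read
    qcount   int // number of elements currently buffered
}

func newToyChan(size int) *toyChan { // size must be >= 1 in this toy version
    c := &toyChan{buf: make([]interface{}, size)}
    c.notFull = sync.NewCond(&c.mu)
    c.notEmpty = sync.NewCond(&c.mu)
    return c
}

func (c *toyChan) send(v interface{}) {
    c.mu.Lock()
    for c.qcount == len(c.buf) {
        c.notFull.Wait() // buffer full: block until a receiver makes room
    }
    c.buf[c.sendx] = v
    c.sendx = (c.sendx + 1) % len(c.buf)
    c.qcount++
    c.notEmpty.Signal()
    c.mu.Unlock()
}

func (c *toyChan) recv() interface{} {
    c.mu.Lock()
    for c.qcount == 0 {
        c.notEmpty.Wait() // buffer empty: block until a sender adds a value
    }
    v := c.buf[c.recvx]
    c.recvx = (c.recvx + 1) % len(c.buf)
    c.qcount--
    c.notFull.Signal()
    c.mu.Unlock()
    return v
}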
Does their implementation depend on the architecture?
Besides the lock implementation, which is OS-specific as @mna already explained, I'm not aware of any architecture-specific constraints, optimizations or differences.
A simpler way to look at channels is this: sometimes you want to hold a program up while waiting for a condition to complete. This is typically used to prevent a race condition, where one thread might not finish before another, and something your later thread or code depends on has not completed yet.
An example: you have a goroutine that retrieves some data from a database or another server and places it into a variable, slice or map, and for some reason it gets delayed. Then you have code that uses that variable, but since it hasn't been initialised, or hasn't received its data yet, the program fails.
So a simple way to look at it in code is as follows:
package main

import "fmt"

var doneA = make(chan bool)
var doneB = make(chan bool)
var doneC = make(chan bool)

func init() { // this runs when your program starts
    go func() {
        doneA <- true // signal that A is done
    }()
}

func initB() { // blocking
    go func() {
        a := <-doneA // will wait here until doneA receives a value
        // do something here
        fmt.Print(a)
        doneB <- true // state that you finished
    }()
}

func initC() {
    go func() {
        <-doneB // still blocking, but we don't care about the value
        // some code here
        doneC <- true // indicate this function finished
    }()
}

func main() {
    initB()
    initC()
    <-doneC // wait for the chain to complete before main returns
}
So I hope this helps. It's not the selected answer above, but I believe it should help to remove the mystery. I wonder if I should make a question and self-answer it?

What's the point of one-way channels in Go?

I'm learning Go and so far very impressed with it. I've read all the online docs at golang.org and am halfway through Chrisnall's "The Go Programming Language Phrasebook". I get the concept of channels and think that they will be extremely useful. However, I must have missed something important along the way, as I can't see the point to one-way channels.
If I'm interpreting them correctly, a read-only channel can only be received on and a write-only channel can only be transmitted on, so why have a channel that you can send to and never receive on? Can they be cast from one "direction" to the other? If so, again, what's the point if there's no actual constraint? Are they nothing more than a hint to client code of the channel's purpose?
A channel can be made read-only to whoever receives it, while the sender still has a two-way channel to which they can write. For example:
func F() <-chan int {
    // Create a regular, two-way channel.
    c := make(chan int)
    go func() {
        defer close(c)
        // Do stuff
        c <- 123
    }()
    // Returning it implicitly converts it to read-only,
    // as per the function return type.
    return c
}
Whoever calls F(), receives a channel from which they can only read.
This is mostly useful to avoid potential misuse of a channel at compile time. Because read/write-only channels are distinct types, the compiler can use its existing type-checking mechanisms to ensure the caller does not try to write stuff into a channel it has no business writing to.
I think the main motivation for read-only channels is to prevent corruption and panics of the channel. Imagine if you could write to the channel returned by time.After. This could mess up a lot of code.
Also, panics can occur if you:
close a channel more than once
write to a closed channel
These operations are compile-time errors for read-only channels, but they can cause nasty race conditions when multiple go-routines can write/close a channel.
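For example (a small sketch just to illustrate the compile-time protection), sends and closes on a receive-only channel are rejected by the compiler:

// Sketch: the compiler rejects writes and closes on a receive-only channel.
func misuse(c <-chan int) {
    // c <- 1    // compile error: send to receive-only channel
    // close(c)  // compile error: close of receive-only channel
    _ = <-c // receiving is the only operation allowed
}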
One way of getting around this is to never close channels and let them be garbage collected. However, close is not just for cleanup, but it actually has use when the channel is ranged over:
func consumeAll(c <-chan bool) {
    for b := range c {
        ...
    }
}
If the channel is never closed, this loop will never end. If multiple go-routines are writing to a channel, then there's a lot of book-keeping that has to go on with deciding which one will close the channel.
Since you cannot close a read-only channel, this makes it easier to write correct code. As @jimt pointed out in his comment, you cannot convert a read-only channel to a writable channel, so you're guaranteed that only the parts of the code with access to the writable version of a channel can close/write to it.
Edit:
As for having multiple readers, this is completely fine, as long as you account for it. This is especially useful when used in a producer/consumer model. For example, say you have a TCP server that just accepts connections and writes them to a queue for worker threads:
func produce(l *net.TCPListener, c chan<- net.Conn) {
    for {
        conn, _ := l.Accept()
        c <- conn
    }
}

func consume(c <-chan net.Conn) {
    for conn := range c {
        // do something with conn
    }
}

func main() {
    c := make(chan net.Conn, 10)
    for i := 0; i < 10; i++ {
        go consume(c)
    }
    addr := net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 3000}
    l, _ := net.ListenTCP("tcp", &addr)
    produce(l, c)
}
Likely your connection handling will take longer than accepting a new connection, so you want to have lots of consumers with a single producer. Multiple producers are more difficult (because you need to coordinate who closes the channel), but you can add some kind of semaphore-style channel around the channel send.
Go channels are modelled on Hoare's Communicating Sequential Processes, a process algebra for concurrency that is oriented around event flows between communicating actors (small 'a'). As such, channels have a direction because they have a send end and a receive end, i.e. a producer of events and a consumer of events. A similar model is used in Occam and Limbo also.
This is important - it would be hard to reason about deadlock issues if a channel-end could arbitrarily be re-used as both sender and receiver at different times.

Resources