I want to compute the inverse element of a prime in modular arithmetic.
To speed things up, I start a few goroutines, each of which tries to find the element in a certain range. When the first one finds the element, it sends it to the main goroutine, and at this point I want to terminate the program. So I call close on a signalling channel in the main goroutine, but I don't know whether the goroutines will finish their execution (I guess not). A few questions arise:
1) Is this a bad style, should I have something like a WaitGroup?
2) Is there a more idiomatic way to do this computation?
package main

import "fmt"

const (
	Procs = 8
	P     = 1000099
	Base  = 1<<31 - 1
)

func compute(start, end uint64, finished chan struct{}, output chan uint64) {
	for i := start; i < end; i++ {
		select {
		case <-finished:
			return
		default:
		}
		if i*P%Base == 1 {
			output <- i
		}
	}
}

func main() {
	finished := make(chan struct{})
	output := make(chan uint64)

	for i := uint64(0); i < Procs; i++ {
		start := i * (Base / Procs)
		end := (i + 1) * (Base / Procs)
		go compute(start, end, finished, output)
	}

	fmt.Println(<-output)
	close(finished)
}
Is there a more idiomatic way to do this computation?
You don't actually need a loop to compute this.
If you use the GCD function from math/big in the standard library, you get back numbers x and y such that:
x*P + y*Base = 1
This means that x is the answer you want, because x*P = 1 (mod Base):
package main

import (
	"fmt"
	"math/big"
)

const (
	P    = 1000099
	Base = 1<<31 - 1
)

func main() {
	bigP := big.NewInt(P)
	bigBase := big.NewInt(Base)

	// Compute the inverse of bigP modulo bigBase.
	bigGcd := big.NewInt(0)
	bigX := big.NewInt(0)
	bigGcd.GCD(bigX, nil, bigP, bigBase)

	// x*bigP + y*bigBase = 1
	// => x*bigP = 1 (mod bigBase)
	fmt.Println(bigX)
}
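Note that the x returned by GCD can be negative (it satisfies x*P + y*Base = 1, not 0 <= x < Base). If you want the result already normalized into [0, Base), math/big also provides ModInverse, which does the whole computation for you. A minimal sketch:

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// ModInverse sets its receiver to the inverse of P modulo Base,
	// already normalized into [0, Base).
	inv := new(big.Int).ModInverse(big.NewInt(1000099), big.NewInt(1<<31-1))
	fmt.Println(inv)
}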
Is this a bad style, should I have something like a WaitGroup?
A wait group solves a different problem.
In general, to be a responsible Go citizen here and ensure your code runs and tidies up after itself, you may need to do a combination of:
Signal to the spawned goroutines to stop their calculations when the result of the computation has been found elsewhere.
Ensure a synchronous process waits for the goroutines to stop before returning. This is not mandatory if they properly respond to the signal in #1, but if you don't wait, there will be no guarantee they have terminated before the parent goroutine continues.
In your example program, which performs this task and then quits, there is strictly no need to do either. As this comment indicates, your program's main method terminates upon a satisfactory answer being found, at which point the program will end, any goroutines will be summarily terminated, and the operating system will tidy up any consumed resources. Waiting for goroutines to stop is unnecessary.
However, if you wrapped this code up into a library or it became part of a long running "inverse prime calculation" service, it would be desirable to tidy up the goroutines you spawned to avoid wasting cycles unnecessarily. Additionally, in general, you may have other scenarios in which goroutines store state, hold handles to external resources, or hold handles to internal objects which you risk leaking if not properly tidied away – it is desirable to properly close these.
Communicating the requirement to stop working
There are several approaches to communicate this. I don't claim this is an exhaustive list! (Please do suggest other general-purpose methods in the comments or by proposing edits to the post.)
Using a special channel
Signal the child goroutines by closing a special "shutdown" channel reserved for the purpose. This exploits the channel axiom:
A receive from a closed channel returns the zero value immediately
On receiving from the shutdown channel, the goroutine should immediately arrange to tidy any local state and return from the function. Your earlier question had example code which implemented this; a version of the pattern is:
func myGoRoutine(shutdownChan <-chan struct{}) {
	select {
	case <-shutdownChan:
		// Tidy-up behaviour goes here.
		return
		// You may choose to listen on other channels here to implement
		// the primary behaviour of the goroutine.
	}
}

func main() {
	shutdownChan := make(chan struct{})
	go myGoRoutine(shutdownChan)

	// Some time later...
	close(shutdownChan)
}
In this instance, the shutdown logic is wasted because the main() method returns immediately after the call to close. This races with the shutdown of the goroutine, so we should assume it will not properly execute its tidy-up behaviour. Point 2 addresses ways to fix this.
Using a context
The context package provides the option to create a context which can be cancelled. On cancellation, a channel exposed by the context's Done() method will be closed, which signals time to return from the goroutine.
This approach is approximately the same as the previous method, with the exception of neater encapsulation and the availability of a context to pass to downstream calls in your goroutine to cancel nested calls where desired. Example:
func myGoRoutine(ctx context.Context) {
	select {
	case <-ctx.Done():
		// Tidy-up behaviour goes here.
		return
		// Put the real behaviour of the goroutine here.
	}
}

func main() {
	// Get a context (or use an existing one if you are provided with
	// one outside a main method).
	ctx := context.Background()

	// Create a derived context with a cancellation method.
	ctx, cancel := context.WithCancel(ctx)

	go myGoRoutine(ctx)

	// Later, when ready to quit:
	cancel()
}
This has the same bug as the other case in that the main method will not wait for the child goroutines to quit before returning.
Waiting (or "join"ing) for child goroutines to stop
The code which closes the shutdown channel or closes the context in the above examples will not wait for child goroutines to stop working before continuing. This may be acceptable in some instances, while in others you may require the guarantee that goroutines have stopped before continuing.
sync.WaitGroup can be used to implement this requirement. The documentation is comprehensive. A wait group is a counter which should be incremented using its Add method on starting a goroutine and decremented using its Done method when a goroutine completes. Code can wait for the counter to return to zero by calling its Wait method, which blocks until the condition is true. All calls to Add must occur before a call to Wait.
Example code:
func main() {
	var wg sync.WaitGroup

	// Increment the WaitGroup with the number of goroutines we're
	// spawning.
	wg.Add(1)

	// It is common to wrap a goroutine in a function which performs
	// the decrement on the WaitGroup once the called function returns,
	// to avoid passing references to this control logic to the
	// downstream consumer.
	go func() {
		// TODO: implement a method to communicate shutdown.
		callMyFunction()
		wg.Done()
	}()

	// Indicate shutdown, e.g. by closing a channel or cancelling a
	// context.

	// Wait for the goroutines to stop.
	wg.Wait()
}
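Putting the two together for the original program, a minimal sketch (reusing the P, Base, and Procs constants from the question, and a buffered output channel so the winning send cannot block) might look like:

package main

import (
	"context"
	"fmt"
	"sync"
)

const (
	Procs = 8
	P     = 1000099
	Base  = 1<<31 - 1
)

func compute(ctx context.Context, start, end uint64, output chan<- uint64) {
	for i := start; i < end; i++ {
		select {
		case <-ctx.Done():
			return // another worker won, or the caller gave up
		default:
		}
		if i*P%Base == 1 {
			output <- i // buffered, so this cannot block
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	output := make(chan uint64, Procs) // room for every worker's answer

	var wg sync.WaitGroup
	for i := uint64(0); i < Procs; i++ {
		wg.Add(1)
		go func(start, end uint64) {
			defer wg.Done()
			compute(ctx, start, end, output)
		}(i*(Base/Procs), (i+1)*(Base/Procs))
	}

	fmt.Println(<-output)
	cancel()  // signal the remaining workers to stop
	wg.Wait() // wait until they have all returned
}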
Is there a more idiomatic way to do this computation?
This algorithm is certainly parallelizable through goroutines in the manner you have defined. As the work is CPU-bound, limiting the number of goroutines to the number of available CPUs makes sense (in the absence of other work on the machine) to get the most from the available compute resource.
See peterSO's answer for a bug fix.
Related
I'm just getting into concurrency in Go and trying to create a dispatch goroutine that will send jobs to a worker pool listening on the jobchan channel. If a message comes into my dispatch function via the dispatchchan channel and my other goroutines are busy, the message is appended onto the stack slice in the dispatcher, and the dispatcher will try to send it again later when a worker becomes available and/or no more messages are received on the dispatchchan. This is because the dispatchchan and the jobchan are unbuffered, and the goroutine the workers are running will append other messages to the dispatcher up to a certain point; I don't want the workers blocked waiting on the dispatcher and creating a deadlock. Here's the dispatcher code I've come up with so far:
func dispatch() {
	var stack []string
	acount := 0
	for {
		select {
		case d := <-dispatchchan:
			stack = append(stack, d)
		case c := <-mw:
			acount = acount + c
		case jobchan <- stack[0]:
			if len(stack) > 1 {
				stack[0] = stack[len(stack)-1]
				stack = stack[:len(stack)-1]
			} else {
				stack = nil
			}
		default:
			if acount == 0 && len(stack) == 0 {
				close(jobchan)
				close(dispatchchan)
				close(mw)
				wg.Done()
				return
			}
		}
	}
}
Complete example at https://play.golang.wiki/p/X6kXVNUn5N7
The mw channel is a buffered channel with the same length as the number of worker goroutines. It acts as a semaphore for the worker pool. If a worker routine is doing [m]eaningful [w]ork, it throws int 1 on the mw channel, and when it finishes its work and goes back into the for loop listening to the jobchan, it throws int -1 on the mw. This way the dispatcher knows whether there's any work being done by the worker pool, or whether the pool is idle. If the pool is idle and there are no more messages on the stack, then the dispatcher closes the channels and returns control to the main func.
This is all good, but the issue I have is that the stack itself could be zero length, so in the case where I attempt to send stack[0] to the jobchan, I get an out-of-bounds error if the stack is empty. What I'm trying to figure out is how to ensure that when I hit that case, stack[0] has a value in it. I don't want that case to send an empty string to the jobchan.
Any help is greatly appreciated. If there's a more idiomatic concurrency pattern I should consider, I'd love to hear about it. I'm not 100% sold on this solution, but this is the farthest I've gotten so far.
This is all good, but the issue I have is that the stack itself could be zero length, so in the case where I attempt to send stack[0] to the jobchan, I get an out-of-bounds error if the stack is empty.
I can't reproduce it with your playground link, but it's believable, because at least one gofunc worker might have been ready to receive on that channel.
My output has been Msgcnt: 0, which is also easily explained, because gofunc might not have been ready to receive on jobchan when dispatch() runs its select. The order of these operations is not defined.
trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel
A channel needs no dispatcher. A channel is the dispatcher.
If a message comes into my dispatch function via the dispatchchan channel and my other go routines are busy, the message is [...] will [...] send again later when a worker becomes available, [...] or no more messages are received on the dispatchchan.
With a few creative edits, it was easy to turn that into something close to the definition of a buffered channel. It can be read from immediately, or it can take up to some "limit" of messages that can't be immediately dispatched. You do define limit, though it's not used elsewhere within your code.
In any function, defining a variable you don't read will result in a compile-time error like "limit declared but not used". This stricture improves code quality and helps identify typos. But at package scope, you've gotten away with defining the unused limit as a "global" and thus avoided a useful error - you haven't limited anything.
Don't use globals. Use passed parameters to define scope, because the definition of scope is tantamount to functional concurrency as expressed with the go keyword. Pass the relevant channels defined in local scope to functions defined at package scope so that you can easily track their relationships. And use directional channels to enforce the producer/consumer relationship between your functions. More on this later.
Going back to "limit", it makes sense to limit the quantity of jobs you're queueing because all resources are limited, and accepting more messages than you have any expectation of processing requires more durable storage than process memory provides. If you don't feel obligated to fulfill those requests no matter what, don't accept "too many" of them in the first place.
So then, what function has dispatchchan and dispatch()? To store a limited number of pending requests, if any, before they can be processed, and then to send them to the next available worker? That's exactly what a buffered channel is for.
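In other words, a single declaration can replace the whole dispatch() function. A sketch, reusing the limit constant the question defines but never uses:

// A buffered channel is the dispatcher: it accepts up to limit pending
// jobs, and hands each one to whichever worker receives next.
jobchan := make(chan string, limit)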
Circular Logic
Who "knows" when your program is done? main() provides the initial input, but you close all 3 channels in `dispatch():
close(jobchan)
close(dispatchchan)
close(mw)
Your workers write to their own job queue so only when the workers are done writing to it can the incoming job queue be closed. However, individual workers also don't know when to close the jobs queue because other workers are writing to it. Nobody knows when your algorithm is done. There's your circular logic.
The mw channel is a buffered channel the same length as the number of worker go routines. It acts as a semaphore for the worker pool.
There's a race condition here. Consider the case where all n workers have just received the last n jobs. They've each read from jobchan and they're checking the value of ok. dispatcher proceeds to run its select. Nobody is writing to dispatchchan or reading from jobchan right now, so the default case is immediately matched. len(stack) is 0 and there's no current job, so dispatcher closes all channels, including mw. At some point thereafter, a worker tries to write to a closed channel and panics.
So finally I'm ready to provide some code, but I have one more problem: I don't have a clear problem statement to write code around.
I'm just getting into concurrency in Go and trying to create a dispatch go routine that will send jobs to a worker pool listening on the jobchan channel.
Channels between goroutines are like the teeth that synchronize gears. But to what end do the gears turn? You're not trying to keep time, nor construct a wind-up toy. Your gears could be made to turn, but what would success look like? Their turning?
Let's try to define a more specific use case for channels: given an arbitrarily long set of durations coming in as strings on standard input, sleep that many seconds in one of n workers. So that we actually have a result to return, we'll say each worker will return the start and end time the duration was run for.
So that it can run in the playground, I'll simulate standard input with a hard-coded byte buffer.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os"
	"strings"
	"sync"
	"time"
)

type SleepResult struct {
	worker_id int
	duration  time.Duration
	start     time.Time
	end       time.Time
}

func main() {
	var num_workers = 2
	workchan := make(chan time.Duration)
	resultschan := make(chan SleepResult)
	var wg sync.WaitGroup
	var resultswg sync.WaitGroup

	resultswg.Add(1)
	go results(&resultswg, resultschan)

	for i := 0; i < num_workers; i++ {
		wg.Add(1)
		go worker(i, &wg, workchan, resultschan)
	}

	// The playground doesn't have stdin, so simulate it.
	var input = bytes.NewBufferString(
		strings.Join([]string{
			"3ms",
			"1 seconds",
			"3600ms",
			"300 ms",
			"5s",
			"0.05min"}, "\n") + "\n")

	var scanner = bufio.NewScanner(input)
	for scanner.Scan() {
		text := scanner.Text()
		if dur, err := time.ParseDuration(text); err != nil {
			fmt.Fprintln(os.Stderr, "Invalid duration", text)
		} else {
			workchan <- dur
		}
	}

	close(workchan) // we know when our inputs are done
	wg.Wait()       // and when our jobs are done
	close(resultschan)
	resultswg.Wait()
}

func results(wg *sync.WaitGroup, resultschan <-chan SleepResult) {
	for res := range resultschan {
		fmt.Printf("Worker %d: %s : %s => %s\n",
			res.worker_id, res.duration,
			res.start.Format(time.RFC3339Nano), res.end.Format(time.RFC3339Nano))
	}
	wg.Done()
}

func worker(id int, wg *sync.WaitGroup, jobchan <-chan time.Duration, resultschan chan<- SleepResult) {
	var res = SleepResult{worker_id: id}
	for dur := range jobchan {
		res.duration = dur
		res.start = time.Now()
		time.Sleep(res.duration)
		res.end = time.Now()
		resultschan <- res
	}
	wg.Done()
}
Here I use 2 wait groups, one for the workers and one for the results. This makes sure I'm done writing all the results before main() ends. I keep my functions simple by having each function do exactly one thing at a time: main reads inputs, parses durations from them, and sends them off to the next available worker. The results function collects results and prints them to standard output. The worker does the sleeping, reading from jobchan and writing to resultschan.
workchan can be buffered (or not, as in this case); it doesn't matter because the input will be read at the rate it can be processed. We can buffer as much input as we want, but we can't buffer an infinite amount. I've set channel sizes as big as 1e6, but a million is a lot less than infinite. For my use case, I don't need to do any buffering at all.
main knows when the input is done and can close the workchan. main also knows when the jobs are done (wg.Wait()) and can close the results channel. Closing these channels is an important signal to the worker and results goroutines: it lets them distinguish between a channel that is empty and a channel that is guaranteed not to have any new additions.
for job := range jobchan {...} is shorthand for your more verbose:
for {
	job, ok := <-jobchan
	if !ok {
		wg.Done()
		return
	}
	...
}
Note that this code creates 2 workers, but it could create 20 or 2000, or even 1. The program functions regardless of how many workers are in the pool. It can handle any volume of input (though interminable input of course leads to an interminable program). It does not create a cyclic loop of output to input. If your use case requires jobs to create more jobs, that's a more challenging scenario that can typically be avoided with careful planning.
I hope this gives you some good ideas about how you can better use concurrency in your Go applications.
https://play.golang.wiki/p/cZuI9YXypxI
I can't figure out how to cancel a task, via context semantics, if it takes too much time to compute in the same thread of execution.
I use this example as a reference point:
https://golang.org/src/context/context_test.go
The goal here is to call doWork; if doWork takes too much time to compute, GetValueWithDeadline should return 0 after a timeout, or return 0 if the caller (here, main) cancels the wait, or otherwise return the value produced within the given time window.
The same scenario can be handled in different ways (a separate goroutine that sleeps, wakes up, and checks the value; a condition on a mutex; etc.), but I really want to understand the correct way to use context.
I understand the channel semantics, but here I can't achieve the desired effect: the call to doWork falls under the default case and sleeps.
package main

import (
	"context"
	"fmt"
	"log"
	"math/rand"
	"sync"
	"time"
)

type Server struct {
	lock sync.Mutex
}

func NewServer() *Server {
	s := new(Server)
	return s
}

func (s *Server) doWork() int {
	s.lock.Lock()
	defer s.lock.Unlock()
	r := rand.Intn(100)
	log.Printf("Going to nap for %d", r)
	time.Sleep(time.Duration(r) * time.Millisecond)
	return r
}

// I took this example from here, and it's very unclear where doWork is executed:
// https://golang.org/src/context/context_test.go
func (s *Server) GetValueWithDeadline(ctx context.Context) int {
	val := 0
	select {
	case <-time.After(150 * time.Millisecond):
		fmt.Println("overslept")
		return 0
	case <-ctx.Done():
		fmt.Println(ctx.Err())
		return 0
	default:
		val = s.doWork()
	}
	return val
}

func main() {
	rand.Seed(time.Now().UTC().UnixNano())
	s := NewServer()
	for i := 0; i < 10; i++ {
		d := time.Now().Add(50 * time.Millisecond)
		ctx, cancel := context.WithDeadline(context.Background(), d)
		log.Print(s.GetValueWithDeadline(ctx))
		cancel()
	}
}
Thank you
There are multiple problems with your approach.
What problem contexts solve
First, the primary reason contexts were invented in Go is that they provide a unified approach to cancellation of a set of tasks.
To explain this concept using a simple example, consider a client request to some server; to simplify further, let it be an HTTP request.
The client connects to the server, sends some data telling the server what to do to fulfill the request and then waits for the server to respond.
Let's now suppose the request requires elaborate and time-consuming processing on the server — for instance, suppose it needs to perform multiple complex queries to multiple remote database engines, do multiple HTTP requests to external services and then process the acquired results to actually produce the data the client wants.
So the client starts its request and the server goes on with all those requests.
To hide latency of individual tasks the server has to perform to fulfill the request, it runs them in separate goroutines.
Once each goroutine completes the assigned task, it communicates its result (and/or an error) back to the goroutine which handles the client's request, and so on.
Now suppose that the client fails to wait for the response to its request for whatever reason — a network outage, an explicit timeout in the client's software, the user kills the app which initiated the request, and so on. There are lots of possibilities.
As you can see, there's little sense for the server to continue spending resources to finish the tasks which were logically bound to the now-dead request: there's no one to hear back the result anyway.
So it makes sense to reap those tasks once we know the request is not going to be completed, and that's where contexts come into play: you can associate each incoming request with a single context and then either pass that context itself to any goroutine spawned to carry out a single task required to fulfill the request, or derive another context from it and pass that instead.
Then, as soon as you cancel the "root" context, that signal is propagated through the whole tree of contexts derived from the root one.
Now each goroutine which was given a context might "listen" on it to be notified when that cancellation signal is sent, and once the goroutine notices it, it can drop whatever it was busy doing and exit.
In terms of the actual context.Context type, that signal is called "done" — as in "we're done doing whatever that context is associated with" — and that's why a goroutine which wants to know it should stop doing its work listens on a special channel returned by the context's method called Done.
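As a small illustration of that propagation (a sketch of mine, not from the question's code): cancelling a root context closes the Done channel of every context derived from it.

root, cancelRoot := context.WithCancel(context.Background())
child, cancelChild := context.WithCancel(root)
defer cancelChild() // good practice; releases the child's resources

go func() {
	<-child.Done() // unblocks as soon as root (or child) is cancelled
	fmt.Println("child cancelled:", child.Err())
}()

cancelRoot() // propagates down to child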
Back to your example
To make it work, you'd do something like:
func (s *Server) doWork(ctx context.Context) int {
	s.lock.Lock()
	defer s.lock.Unlock()
	r := rand.Intn(100)
	log.Printf("Going to nap for %d", r)
	select {
	case <-time.After(time.Duration(r) * time.Millisecond):
		return r
	case <-ctx.Done():
		return -1
	}
}

func (s *Server) GetValueWithTimeout(ctx context.Context, maxTime time.Duration) int {
	d := time.Now().Add(maxTime)
	ctx, cancel := context.WithDeadline(ctx, d)
	defer cancel()
	return s.doWork(ctx)
}

func main() {
	const maxTime = 50 * time.Millisecond
	rand.Seed(time.Now().UTC().UnixNano())
	s := NewServer()
	for i := 0; i < 10; i++ {
		v := s.GetValueWithTimeout(context.Background(), maxTime)
		log.Print(v)
	}
}
(Playground).
So what happens here?
The GetValueWithTimeout method accepts the maximum time it should take the doWork method to produce a value, calculates the deadline, derives from the context passed to the method a new context which cancels itself once the deadline passes, and calls doWork with the new context object.
The doWork method arms its own timer to go off after a random time interval and then listens on both the context and the timer.
This is the critical point: code which performs some unit of work that is supposed to be cancellable must actively check, by itself, whether the context has become "done".
So, in our toy example, either the doWork's own timer fires first or the deadline of the generated context gets reached first; whatever happens first, makes the select statement unblock and proceed.
Note that if your "do the work" code were more involved (actually doing something instead of sleeping), you would most probably need to check the context's status periodically, usually after performing individual bits of that work.
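For example, here is a hedged sketch of such periodic checking, where computeChunk is a hypothetical unit of real work:

func (s *Server) doRealWork(ctx context.Context) (int, error) {
	total := 0
	for i := 0; i < 100; i++ {
		select {
		case <-ctx.Done():
			return 0, ctx.Err() // abandon the partial result
		default:
		}
		total += computeChunk(i) // hypothetical unit of work
	}
	return total, nil
}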
For some reason, when I remove the fmt.Printlns then the code is blocking.
I've got no idea why it happens. All I want to do is to implement a simple concurrency limiter...
I've never experienced such a weird thing. It's as if fmt flushes the variables or something and makes it work.
Also, when I use a regular function instead of a goroutine then it works too.
Here's the code:
package main

import "fmt"

type ConcurrencyLimit struct {
	active int
	Limit  int
}

func (c *ConcurrencyLimit) Block() {
	for {
		fmt.Println(c.active, c.Limit)
		// If we should block
		if c.active == c.Limit {
			continue
		}
		c.active++
		break
	}
}

func (c *ConcurrencyLimit) Decrease() int {
	fmt.Println("decrease")
	if c.active > 0 {
		c.active--
	}
	return c.active
}

func main() {
	c := ConcurrencyLimit{Limit: 1}
	c.Block()
	go func() {
		c.Decrease()
	}()
	c.Block()
}
Clarification: even though I've accepted @kaedys's answer (here), a solution was also provided by @Kaveh Shahbazian (here).
You're not giving c.Decrease() a chance to run. c.Block() runs an infinite for loop, but it never blocks in that for loop, just calling continue over and over on every iteration. The main thread spins at 100% usage endlessly.
However, when you add an fmt.Print() call, that makes a syscall, which allows the other goroutine to run.
This post has details on how exactly goroutines yield or are pre-empted. Note, however, that it's slightly out of date, as entering a function now has a random chance to yield that thread to another goroutine, to prevent similar style flooding of threads.
As others have pointed out, Block() will never yield; a goroutine is not a thread. You could use Gosched() in the runtime package to force a yield -- but note that spinning this way in Block() is a pretty terrible idea.
There are much better ways to do concurrency limiting. See http://jmoiron.net/blog/limiting-concurrency-in-go/ for one example
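For illustration only, the runtime.Gosched workaround might look like the sketch below; it lets other goroutines run, but it still spins and, as another answer explains, still races on c.active:

func (c *ConcurrencyLimit) Block() {
	for {
		if c.active == c.Limit {
			runtime.Gosched() // yield the thread so other goroutines can run
			continue
		}
		c.active++
		break
	}
}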
What you are looking for is called a semaphore. You can apply this pattern using channels
http://www.golangpatterns.info/concurrency/semaphores
The idea is that you create a buffered channel of a desired length. Then you make callers acquire the resource by putting a value into the channel and reading it back out when they want to free the resource. Doing so creates proper synchronization points in your program so that the Go scheduler runs correctly.
What you are doing now is spinning the CPU and blocking the Go scheduler. Whether this hangs depends on how many CPUs you have available, the version of Go, and the value of GOMAXPROCS. Given the right combination, there may not be another available thread to service other goroutines while you infinitely spin that particular thread.
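A minimal sketch of that channel-as-semaphore pattern (the type and method names here are mine, not from the linked page):

package main

type Semaphore chan struct{}

func (s Semaphore) Acquire() { s <- struct{}{} } // blocks once all slots are taken
func (s Semaphore) Release() { <-s }

func main() {
	sem := make(Semaphore, 1) // capacity plays the role of Limit
	sem.Acquire()
	go func() {
		defer sem.Release()
		// do the rate-limited work here
	}()
	sem.Acquire() // a real synchronization point: blocks until Release
}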
While other answers pretty much covered the reason (not giving the goroutine a chance to run), and I'm not sure what you intend to achieve here, note that you are also mutating a value concurrently without proper synchronization. A rewrite of the above code with synchronization considered would be:
type ConcurrencyLimit struct {
	active int
	Limit  int
	cond   *sync.Cond
}

func (c *ConcurrencyLimit) Block() {
	c.cond.L.Lock()
	for c.active == c.Limit {
		c.cond.Wait()
	}
	c.active++
	c.cond.L.Unlock()
	c.cond.Signal()
}

func (c *ConcurrencyLimit) Decrease() int {
	defer c.cond.Signal()
	c.cond.L.Lock()
	defer c.cond.L.Unlock()
	fmt.Println("decrease")
	if c.active > 0 {
		c.active--
	}
	return c.active
}

func main() {
	c := ConcurrencyLimit{
		Limit: 1,
		cond:  &sync.Cond{L: &sync.Mutex{}},
	}
	c.Block()
	go func() {
		c.Decrease()
	}()
	c.Block()
	fmt.Println(c.active, c.Limit)
}
sync.Cond is a synchronization utility designed for times when you want to check, concurrently, whether a condition is met, while other workers are mutating the data behind the condition.
The Lock and Unlock functions work as we expect from a lock. When we are done with checking or mutating, we can call Signal to awake one goroutine (or call Broadcast to awake more than one), so the goroutine knows that is free to act upon the data (or check a condition).
The only part that may seem unusual is the Wait function. It is actually very simple: it is like calling Unlock and instantly calling Lock again, except that Wait does not try to lock again unless triggered by a Signal (or Broadcast) from another goroutine, such as a worker that is mutating the data (of the condition).
var x int
done := false
go func() { x = f(...); done = true }
while done == false { }
This is a Go code piece. My friend told me this is UB code. Why?
As explained in "Why does this program terminate on my system but not on playground?"
The Go Memory Model does not guarantee that the value written to x in the goroutine will ever be observed by the main program.
A similarly erroneous program is given as an example in the section on goroutine destruction.
The Go Memory Model also specifically calls out busy waiting without synchronization as an incorrect idiom in this section.
(in your case, there is no guarantee that the value written to done in the goroutine will ever be observed by the main program)
Here, you need to do some kind of synchronization in the goroutine in order to guarantee that done = true happens before one of the iterations of the for loop in main.
The "while" (non-existent in Go) should be replaced by, for instance, a channel you block on (waiting for a communication)
for {
	<-c
}
Based on a channel (c := make(chan bool)) created in main, and closed (close(c)) in the goroutine.
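For completeness, a runnable sketch of that channel-based fix (f here is a stand-in for whatever the real function is):

package main

import "fmt"

func f() int { return 42 } // stand-in for the real f(...)

func main() {
	var x int
	done := make(chan bool)
	go func() {
		x = f()
		close(done)
	}()
	<-done // the close happens before this receive completes
	fmt.Println(x)
}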
The sync package provides other means to wait for a goroutine to end before exiting main.
See for instance Golang Example Wait until all the background goroutine finish:
var w sync.WaitGroup
w.Add(1)
go func() {
	// do something
	w.Done()
}()
w.Wait()
The below code gives a compilation error saying "unexpected go":

x := go doSomething(arg)

func doSomething(arg int) int {
	...
	return my_int_value
}
I know I can fetch the return value if I call the function normally, i.e. without using a goroutine, or I can use channels, etc.
My question is: why is it not possible to fetch a return value like this from a goroutine?
Why is it not possible to fetch a return value from a goroutine assigning it to a variable?
Running a goroutine (asynchronously) and fetching the return value from a function are essentially contradictory actions. When you say go you mean "do it asynchronously", or even simpler: "Go on! Don't wait for the function execution to finish". But when you assign a function's return value to a variable, you are expecting to have this value in the variable. So when you write x := go doSomething(arg) you are saying: "Go on, don't wait for the function! Wait-wait-wait! I need the returned value to be accessible in the x var right in the next line below!"
Channels
The most natural way to fetch a value from a goroutine is channels. Channels are the pipes that connect concurrent goroutines. You can send values into channels from one goroutine and receive those values in another goroutine or in a synchronous function. You can easily obtain a value from a goroutine, without breaking concurrency, using select:
func main() {
	c1 := make(chan string)
	c2 := make(chan string)

	go func() {
		time.Sleep(time.Second * 1)
		c1 <- "one"
	}()
	go func() {
		time.Sleep(time.Second * 2)
		c2 <- "two"
	}()

	// Await both of these values simultaneously,
	// printing each one as it arrives.
	for i := 0; i < 2; i++ {
		select {
		case msg1 := <-c1:
			fmt.Println("received", msg1)
		case msg2 := <-c2:
			fmt.Println("received", msg2)
		}
	}
}
The example is taken from Go By Example
CSP & message-passing
Go is largely based on CSP theory. The naive description from above could be precisely outlined in terms of CSP (although I believe that is out of the scope of the question). I strongly recommend familiarizing yourself with CSP theory, at least because it is RAD. These short quotations give a direction of thinking:
As its name suggests, CSP allows the description of systems in terms of component processes that operate independently, and interact with each other solely through message-passing communication.
In computer science, message passing sends a message to a process and relies on the process and the supporting infrastructure to select and invoke the actual code to run. Message passing differs from conventional programming where a process, subroutine, or function is directly invoked by name.
The strict answer is that you can do that. It's just probably not a good idea. Here's code that would do that:
var x int
go func() {
	x = doSomething()
}()
This will spawn off a new goroutine which will calculate doSomething() and then assign the result to x. The problem is: how are you going to use x from the original goroutine? You probably want to make sure the spawned goroutine is done with it so that you don't have a race condition. But if you want to do that, you'll need a way to communicate with the goroutine, and if you've got a way to do that, why not just use it to send the value back?
The idea of the go keyword is that you run the doSomething function asynchronously, and continue the current goroutine without waiting for the result, kind of like executing a command in a Bash shell with an '&' after it. If you want to do
x := doSomething(arg)
// Now do something with x
then you need the current goroutine to block until doSomething finishes. So why not just call doSomething in the current goroutine? There are other options (like, doSomething could post a result to a channel, which the current goroutine receives values from) but simply calling doSomething and assigning the result to a variable is obviously simpler.
It's a design choice by Go creators. There's a whole lot of abstractions/APIs to represent the value of async I/O operations - promise, future, async/await, callback, observable, etc. These abstractions/APIs are inherently tied to the unit of scheduling - coroutines - and these abstractions/APIs dictate how coroutines (or more precisely the return value of async I/O represented by them) can be composed.
Go chose message passing (aka channels) as the abstraction/API to represent the return value of async I/O operations. And of course, goroutines and channels give you a composable tool to implement async I/O operations.
Why not use a channel to write into?
chanRes := make(chan int, 1)
go doSomething(arg, chanRes)
// blocks here, or you can use some other sync mechanism (do something else) and wait
x := <-chanRes

func doSomething(arg int, out chan<- int) {
	...
	out <- my_int_value
}