Golang goroutine synchronization with channels

I have the following program where the HTTP server is created using gorilla/mux.
When any request comes in, it starts goroutine 1. During processing, I start another goroutine 2.
I want goroutine 1 to wait for goroutine 2's response. How can I do that?
How can I ensure that only goroutine 2 delivers its response to goroutine 1?
Similarly, GR3 may create GR4, and GR3 should wait for GR4 only.
GR = goroutine
SERVER
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "strconv"
    "time"

    "github.com/gorilla/mux"
)

type Post struct {
    ID    string `json:"id"`
    Title string `json:"title"`
    Body  string `json:"body"`
}

var posts []Post
var i = 0

func getPosts(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    i++
    fmt.Println(i)
    ch := make(chan int)
    go getTitle(ch, i)
    p := Post{
        ID: "123",
    }
    // Wait for getTitle result and update variable p with title
    s := <-ch
    p.Title = strconv.Itoa(s) + strconv.Itoa(i)
    json.NewEncoder(w).Encode(p)
}

func main() {
    router := mux.NewRouter()
    posts = append(posts, Post{ID: "1", Title: "My first post", Body: "This is the content of my first post"})
    router.HandleFunc("/posts", getPosts).Methods("GET")
    http.ListenAndServe(":9999", router)
}

func getTitle(resultCh chan int, m int) {
    time.Sleep(2 * time.Second)
    resultCh <- m
}
CLIENT
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func main() {
    for i := 0; i < 100; i++ {
        go main2()
    }
    time.Sleep(200 * time.Second)
}

func main2() {
    url := "http://localhost:9999/posts"
    method := "GET"

    client := &http.Client{}
    req, err := http.NewRequest(method, url, nil)
    if err != nil {
        fmt.Println(err)
    }
    res, err := client.Do(req)
    defer res.Body.Close()
    body, err := ioutil.ReadAll(res.Body)
    fmt.Println(string(body))
}
RESULT ACTUAL
{"id":"123","title":"25115","body":""}
{"id":"123","title":"23115","body":""}
{"id":"123","title":"31115","body":""}
{"id":"123","title":"44115","body":""}
{"id":"123","title":"105115","body":""}
{"id":"123","title":"109115","body":""}
{"id":"123","title":"103115","body":""}
{"id":"123","title":"115115","body":""}
{"id":"123","title":"115115","body":""}
{"id":"123","title":"115115","body":""}
RESULT EXPECTED
{"id":"123","title":"112112","body":""}
{"id":"123","title":"113113","body":""}
{"id":"123","title":"115115","body":""}
{"id":"123","title":"116116","body":""}
{"id":"123","title":"117117","body":""}

There are a few ways to do this; a simple way is to use channels.
Change the getTitle func to this:
func getTitle(resultCh chan string) {
    time.Sleep(2 * time.Second)
    resultCh <- "Game Of Thrones"
}
and getPosts will use it like this:
func getPosts(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    ch := make(chan string)
    go getTitle(ch)
    s := <-ch // this will wait until getTitle sends data to the channel
    p := Post{
        ID: s,
    }
    json.NewEncoder(w).Encode(p)
}
I suspect you are new to Go; this is basic channel usage. Check more details here: Channels

So the problem you're having is that you haven't really learned how to deal with concurrent code yet (not a dig, I was there once). Most of this doesn't center on channels; the channels are working correctly, as @kojan's answer explains. Where things go awry is with the i variable. Firstly, you have to understand that i is not being mutated atomically, so if your client requests arrive in parallel you can mess up the number:
C1:        C2:
i == 6     i == 6
i++        i++
i == 7     i == 7
Two increments in software become one increment in actuality, because i++ is really three operations: load, increment, store.
The second problem you have is that i is not a pointer, so when you pass i to your goroutine you're making a copy. The copy of i in the goroutine is sent back on the channel and becomes the first number in your concatenated string, which you can watch increment. However, the i left behind, which is used in the tail of the string, has continued to be incremented by successive client invocations.
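For illustration, here is a minimal sketch of one way to fix both issues: the counter is incremented atomically and copied into a request-local variable, so both halves of the title use the same number, which is what the expected output shows. (gorilla/mux is dropped here purely to keep the sketch self-contained; the handler works the same way behind a mux router.)
package main

import (
    "encoding/json"
    "net/http"
    "strconv"
    "sync/atomic"
    "time"
)

type Post struct {
    ID    string `json:"id"`
    Title string `json:"title"`
    Body  string `json:"body"`
}

var counter int64 // shared request counter, only touched via sync/atomic

func getTitle(resultCh chan int, m int) {
    time.Sleep(2 * time.Second)
    resultCh <- m
}

func getPosts(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")

    // Atomically increment and take a request-local copy, so the value used
    // for this response cannot be changed by parallel requests.
    n := int(atomic.AddInt64(&counter, 1))

    ch := make(chan int)
    go getTitle(ch, n)

    s := <-ch // each request has its own channel, so it waits for its own goroutine only

    p := Post{ID: "123", Title: strconv.Itoa(s) + strconv.Itoa(n)}
    json.NewEncoder(w).Encode(p)
}

func main() {
    http.HandleFunc("/posts", getPosts)
    http.ListenAndServe(":9999", nil)
}
A sync.Mutex around i would work equally well; the key point is that the handler must take a stable copy of the value it intends to use twice.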

Related

Running a maximum of two go routines continuously forever

I'm trying to run a function concurrently. It makes a call to my DB that may take 2-10 seconds. I would like it to continue on to the next routine once it has finished, even if the other one is still processing, but I only ever want it to be processing a max of 2 at a time. I want this to happen indefinitely. I feel like I'm almost there, but the WaitGroup forces both routines to complete before continuing to another iteration.
const ROUTINES = 2

for {
    var wg sync.WaitGroup

    _, err := db.Exec(`Random DB Call`)
    if err != nil {
        panic(err)
    }

    ch := createRoutines(db, &wg)
    wg.Add(ROUTINES)
    for i := 1; i <= ROUTINES; i++ {
        ch <- i
        time.Sleep(2 * time.Second)
    }
    close(ch)
    wg.Wait()
}

func createRoutines(db *sqlx.DB, wg *sync.WaitGroup) chan int {
    var ch = make(chan int, 5)
    for i := 0; i < ROUTINES; i++ {
        go func(db *sqlx.DB) {
            defer wg.Done()
            for {
                _, ok := <-ch
                if !ok {
                    return
                }
                doStuff(db)
            }
        }(db)
    }
    return ch
}
If you only need to have n goroutines running at the same time, you can have a buffered channel of size n and use it to block creating new goroutines when there is no space left, something like this:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    const ROUTINES = 2
    rand.Seed(time.Now().UnixNano())

    stopper := make(chan struct{}, ROUTINES)

    var counter int
    for {
        counter++
        stopper <- struct{}{}
        go func(c int) {
            fmt.Println("+ Starting goroutine", c)
            time.Sleep(time.Duration(rand.Intn(3)) * time.Second)
            fmt.Println("- Stopping goroutine", c)
            <-stopper
        }(counter)
    }
}
In this example you can see that only ROUTINES goroutines run at a time, each living 0, 1, or 2 seconds. In the output you can also see how every time one goroutine ends another one starts.
This adds an external dependency, but consider this implementation:
package main

import (
    "context"
    "database/sql"
    "log"

    "github.com/MicahParks/ctxerrpool"
)

func main() {
    // Create a pool of 2 workers for database queries. Log any errors.
    databasePool := ctxerrpool.New(2, func(_ ctxerrpool.Pool, err error) {
        log.Printf("Failed to execute database query.\nError: %s", err.Error())
    })

    // Get a list of queries to execute.
    queries := []string{
        "SELECT first_name, last_name FROM customers",
        "SELECT price FROM inventory WHERE sku='1234'",
        "other queries...",
    }

    // TODO Make a database connection.
    var db *sql.DB

    for _, query := range queries {
        // Intentionally shadow the looped variable for scope.
        query := query

        // Perform the query on a worker. If no worker is ready, it will block until one is.
        databasePool.AddWorkItem(context.TODO(), func(workCtx context.Context) (err error) {
            _, err = db.ExecContext(workCtx, query)
            return err
        })
    }

    // Wait for all workers to finish.
    databasePool.Wait()
}

How do I select on os.Stdin and http in Go?

Say I want to accept an animal. The user can either set the type of animal at the What type of animal? prompt on a terminal, or she can go to http://localhost:1234/animal?type=kitten
Whichever she does, the terminal will read What type of animal? kitten (assuming she chose a kitten) and the program will then prompt the user on the terminal (and only the terminal) What is the kitten's name?
My thought was to use channels with goroutines, but since both goroutines will be stuck in a function call (Scan() for the terminal, ListenAndServe() for HTTP), I'm not clear how to stop the goroutine that is still in a function call once the input is received. The normal method of selecting on channels won't work.
In C I'd just do a select(2) on the two relevant file descriptors (fd 0 and the fd from listen()) but I don't think that's how to do it in Go.
Note: I answered this down below. If you're downvoting the question, I'd be curious to know why, as I would have found the answer I came up with useful. If you're downvoting the answer, I'd really be interested in knowing why. If it's non-idiomatic Go, or if it has other issues I'm missing, I'd really like to fix them.
OK, I figured out a solution for what I was trying to do.
This might not be the most idiomatic Go, though. If the channel carried a struct with the input source and the string, I could probably drop the select and the webinput channel entirely (a sketch of that idea follows the code below).
package main

import (
    "bufio"
    "context"
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

func getHTTPAnimal(input chan string) *http.Server {
    srv := &http.Server{Addr: ":1234"}
    http.HandleFunc("/animal", func(w http.ResponseWriter, r *http.Request) {
        animal, ok := r.URL.Query()["type"]
        if !ok || len(animal[0]) <= 0 {
            io.WriteString(w, "Animal not understood.\n")
            return
        }
        io.WriteString(w, "Animal understood.\n")
        input <- string(animal[0])
    })

    go func() {
        if err := srv.ListenAndServe(); err != nil {
            if err.Error() != "http: Server closed" {
                fmt.Printf("getHTTPAnimal error with ListenAndServe: %s", err)
            }
        }
    }()
    return srv
}

func getTerminalInput(input chan string) {
    scanner := bufio.NewScanner(os.Stdin)
    for {
        scanner.Scan()
        input <- scanner.Text()
    }
}

func main() {
    input := make(chan string)
    webinput := make(chan string)

    srv := getHTTPAnimal(webinput)

    fmt.Print("Enter animal type: ")
    go getTerminalInput(input)

    var animal string
    select {
    case ta := <-input:
        animal = ta
    case webta := <-webinput:
        animal = webta
        fmt.Printf("%s\n", animal)
    }

    ctx, _ := context.WithTimeout(context.Background(), 5*time.Second)
    srv.Shutdown(ctx)
    close(webinput)

    fmt.Printf("Enter animal name: ")
    name := <-input
    fmt.Printf("Congrats on getting %s the half-%s\n", name, animal)
}
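For completeness, here is a minimal sketch of that struct-based variant: both input sources send a tagged message on a single channel, so main only does one receive and no select over two channels is needed. The InputMsg type and the startHTTP/startTerminal names are made up for this sketch; they are not from the original code.
package main

import (
    "bufio"
    "context"
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

// InputMsg tags each piece of input with where it came from.
type InputMsg struct {
    Source string // "terminal" or "web"
    Value  string
}

func startHTTP(input chan<- InputMsg) *http.Server {
    srv := &http.Server{Addr: ":1234"}
    http.HandleFunc("/animal", func(w http.ResponseWriter, r *http.Request) {
        animal := r.URL.Query().Get("type")
        if animal == "" {
            io.WriteString(w, "Animal not understood.\n")
            return
        }
        io.WriteString(w, "Animal understood.\n")
        input <- InputMsg{Source: "web", Value: animal}
    })
    go srv.ListenAndServe() // error ignored in this sketch
    return srv
}

func startTerminal(input chan<- InputMsg) {
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        input <- InputMsg{Source: "terminal", Value: scanner.Text()}
    }
}

func main() {
    input := make(chan InputMsg)
    srv := startHTTP(input)

    fmt.Print("Enter animal type: ")
    go startTerminal(input)

    // A single receive replaces the two-channel select.
    msg := <-input
    animal := msg.Value
    if msg.Source == "web" {
        fmt.Println(animal) // echo it, since it didn't come from the terminal
    }

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    srv.Shutdown(ctx)

    fmt.Print("Enter animal name: ")
    name := (<-input).Value
    fmt.Printf("Congrats on getting %s the half-%s\n", name, animal)
}
The trade-off is that the HTTP handler and the scanner now have to agree on the message type, but the shutdown logic stays the same.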
You can stop a goroutine by closing the channel it reads from.
Example:
package main

import "sync"

func main() {
    var wg sync.WaitGroup
    wg.Add(1)

    ch := make(chan int)

    go func() {
        for {
            foo, ok := <-ch
            if !ok {
                println("done")
                wg.Done()
                return
            }
            println(foo)
        }
    }()

    ch <- 1
    ch <- 2
    ch <- 3
    close(ch)
    wg.Wait()
}
There are a couple of ways to stop a goroutine:
1) by closing the channel it reads from:
msgCh := make(chan string)
go func() {
    for msg := range msgCh { // the loop ends when msgCh is closed
        log.Println(msg)
    }
}()
msgCh <- "String"
close(msgCh)
2) by passing some sort of kill switch:
msgCh := make(chan string)
killSwitch := make(chan struct{})
go func() {
    for {
        select {
        case msg := <-msgCh:
            log.Println(msg)
        case <-killSwitch:
            return
        }
    }
}()
msgCh <- "String"
close(killSwitch)
With approach 2 you avoid writing to a closed channel, which would cause a panic. Also, the close signal propagates as a fan-out: every place that receives from that channel will observe it.
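As a small illustration of that fan-out behaviour (a sketch only, with illustrative names): closing a single channel releases every goroutine waiting on it.
package main

import (
    "fmt"
    "sync"
)

func main() {
    killSwitch := make(chan struct{})
    var wg sync.WaitGroup

    // Several workers all wait on the same kill switch.
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            <-killSwitch // receiving from a closed channel returns immediately
            fmt.Println("worker", id, "got the stop signal")
        }(i)
    }

    close(killSwitch) // a single close releases all three workers
    wg.Wait()
}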

How to exit from my go code using go routines and term ui

I recently started learning Go and I am really impressed with all the features. I have been playing with goroutines and term-ui and am facing some trouble. I am trying to exit this code from the console after I run it, but it just doesn't respond. If I run it without the goroutine, it does respond to my q key press event.
Any help is appreciated.
My code
package main

import (
    "fmt"
    "strconv"
    "time"

    "github.com/gizak/termui"
)

func getData(ch chan string) {
    i := 0
    for {
        ch <- strconv.Itoa(i)
        i++
        time.Sleep(time.Second)
        if i == 20 {
            break
        }
    }
}

func Display(ch chan string) {
    err := termui.Init()
    if err != nil {
        panic(err)
    }
    defer termui.Close()

    termui.Handle("/sys/kbd/q", func(termui.Event) {
        fmt.Println("q captured")
        termui.Close()
        termui.StopLoop()
    })

    for elem := range ch {
        par := termui.NewPar(elem)
        par.Height = 5
        par.Width = 37
        par.Y = 4
        par.BorderLabel = "term ui example with chan"
        par.BorderFg = termui.ColorYellow
        termui.Render(par)
    }
}

func main() {
    ch := make(chan string)
    go getData(ch)
    Display(ch)
}
This is possibly the answer you are looking for. First off, you aren't using termui correctly: you need to call its Loop function to start the event loop so that it can actually start listening for the q key. Loop is called last because it essentially takes control of the main goroutine from then on, until StopLoop is called and it quits.
In order to stop the goroutines, it is common to have a "stop" channel. Usually it is a chan struct{}, to save memory, because you never have to put anything in it. Wherever you want a goroutine to be able to stop (or do something else), you use a select statement over the channels it works with. The stop channel normally blocks, but close()ing it makes a receive on it succeed immediately, so that case of the select becomes ready and the goroutine can bail out. That is why we close() it in the q keyboard handler.
package main

import (
    "fmt"
    "strconv"
    "time"

    "github.com/gizak/termui"
)

func getData(ch chan string, stop chan struct{}) {
    i := 0
    for {
        select {
        case <-stop:
            return // a plain break here would only leave the select, not the loop
        case ch <- strconv.Itoa(i):
        }
        i++
        time.Sleep(time.Second)
        if i == 20 {
            break
        }
    }
}

func Display(ch chan string, stop chan struct{}) {
    for {
        var elem string
        select {
        case <-stop:
            return
        case elem = <-ch:
        }
        par := termui.NewPar(elem)
        par.Height = 5
        par.Width = 37
        par.Y = 4
        par.BorderLabel = "term ui example with chan"
        par.BorderFg = termui.ColorYellow
        termui.Render(par)
    }
}

func main() {
    ch := make(chan string)
    stop := make(chan struct{})

    err := termui.Init()
    if err != nil {
        panic(err)
    }
    defer termui.Close()

    termui.Handle("/sys/kbd/q", func(termui.Event) {
        fmt.Println("q captured")
        close(stop)
        termui.StopLoop()
    })

    go getData(ch, stop)
    go Display(ch, stop)

    termui.Loop()
}

Idiomatic goroutine termination and error handling

I have a simple concurrency use case in Go, and I cannot figure out an elegant solution to my problem.
I want to write a method fetchAll that queries an unspecified number of resources from remote servers in parallel. If any of the fetches fails, I want to return that first error immediately.
My initial implementation leaks goroutines:
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

func fetchAll() error {
    wg := sync.WaitGroup{}
    errs := make(chan error)
    leaks := make(map[int]struct{})
    defer fmt.Println("these goroutines leaked:", leaks)

    // run all the http requests in parallel
    for i := 0; i < 4; i++ {
        leaks[i] = struct{}{}
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            defer delete(leaks, i)

            // pretend this does an http request and returns an error
            time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
            errs <- fmt.Errorf("goroutine %d's error returned", i)
        }(i)
    }

    // wait until all the fetches are done and close the error
    // channel so the loop below terminates
    go func() {
        wg.Wait()
        close(errs)
    }()

    // return the first error
    for err := range errs {
        if err != nil {
            return err
        }
    }

    return nil
}

func main() {
    fmt.Println(fetchAll())
}
Playground: https://play.golang.org/p/Be93J514R5
I know from reading https://blog.golang.org/pipelines that I can create a signal channel to clean up the other goroutines. Alternatively, I could probably use context to accomplish it. But it seems like such a simple use case should have a simpler solution that I'm missing.
Using errgroup makes this even simpler. It automatically waits for all the supplied goroutines to complete successfully, or cancels all those remaining in the case of any one routine returning an error (in which case that error is the one bubbled back up to the caller).
package main

import (
    "context"
    "fmt"
    "math/rand"
    "time"

    "golang.org/x/sync/errgroup"
)

func fetchAll(ctx context.Context) error {
    errs, ctx := errgroup.WithContext(ctx)

    // run all the http requests in parallel
    for i := 0; i < 4; i++ {
        errs.Go(func() error {
            // pretend this does an http request and returns an error
            time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
            return fmt.Errorf("error in go routine, bailing")
        })
    }

    // Wait for completion and return the first error (if any)
    return errs.Wait()
}

func main() {
    fmt.Println(fetchAll(context.Background()))
}
All but one of your goroutines are leaked, because they're still waiting to send to the errs channel - you never finish the for-range that empties it. You're also leaking the goroutine whose job is to close the errs channel, because the waitgroup is never finished.
(Also, as Andy pointed out, deleting from a map is not thread-safe, so that would need protection from a mutex.)
However, I don't think maps, mutexes, waitgroups, contexts etc. are even necessary here. I'd rewrite the whole thing to just use basic channel operations, something like the following:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func fetchAll() error {
    var N = 4
    quit := make(chan bool)
    errc := make(chan error)
    done := make(chan error)

    for i := 0; i < N; i++ {
        go func(i int) {
            // dummy fetch
            time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
            err := error(nil)
            if rand.Intn(2) == 0 {
                err = fmt.Errorf("goroutine %d's error returned", i)
            }
            ch := done // we'll send to done if nil error and to errc otherwise
            if err != nil {
                ch = errc
            }
            select {
            case ch <- err:
                return
            case <-quit:
                return
            }
        }(i)
    }

    count := 0
    for {
        select {
        case err := <-errc:
            close(quit)
            return err
        case <-done:
            count++
            if count == N {
                return nil // got all N signals, so there was no error
            }
        }
    }
}

func main() {
    rand.Seed(time.Now().UnixNano())
    fmt.Println(fetchAll())
}
Playground link: https://play.golang.org/p/mxGhSYYkOb
EDIT: There indeed was a silly mistake, thanks for pointing it out. I fixed the code above (I think...). I also added some randomness for added Realism™.
Also, I'd like to stress that there really are multiple ways to approach this problem, and my solution is but one way. Ultimately it comes down to personal taste, but in general, you want to strive towards "idiomatic" code - and towards a style that feels natural and easy to understand for you.
Here's a more complete example using errgroup, as suggested by joth. It shows processing of successful data and will exit on the first error.
https://play.golang.org/p/rU1v-Mp2ijo
package main

import (
    "context"
    "fmt"
    "math/rand"
    "time"

    "golang.org/x/sync/errgroup"
)

func fetchAll() error {
    g, ctx := errgroup.WithContext(context.Background())
    results := make(chan int)

    for i := 0; i < 4; i++ {
        current := i
        g.Go(func() error {
            // Simulate delay with random errors.
            time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
            if rand.Intn(2) == 0 {
                return fmt.Errorf("goroutine %d's error returned", current)
            }
            // Pass processed data to channel, or receive a context completion.
            select {
            case results <- current:
                return nil
            // Close out if another error occurs.
            case <-ctx.Done():
                return ctx.Err()
            }
        })
    }

    // Elegant way to close out the channel when the first error occurs or
    // when processing is successful.
    go func() {
        g.Wait()
        close(results)
    }()

    for result := range results {
        fmt.Println("processed", result)
    }

    // Wait for all fetches to complete.
    return g.Wait()
}

func main() {
    fmt.Println(fetchAll())
}
As long as each goroutine completes, you won't leak anything. You should create the error channel as buffered with the buffer size equal to the number of goroutines so that the send operations on the channel won't block. Each goroutine should always send something on the channel when it finishes, whether it succeeds or fails. The loop at the bottom can then just iterate for the number of goroutines and return if it gets a non-nil error. You don't need the WaitGroup or the other goroutine that closes the channel.
I think the reason it appears that goroutines are leaking is that you return when you get the first error, so some of them are still running.
By the way, maps are not goroutine safe. If you share a map among goroutines and some of them are making changes to the map, you need to protect it with a mutex.
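For illustration, here is a minimal sketch of that buffered-channel approach; the sleep and the random error stand in for the real HTTP request.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func fetchAll() error {
    const n = 4

    // Buffered so every goroutine can send its result without blocking,
    // even after we have already returned with an earlier error.
    errs := make(chan error, n)

    for i := 0; i < n; i++ {
        go func(i int) {
            // Stand-in for the real HTTP request.
            time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
            if rand.Intn(2) == 0 {
                errs <- fmt.Errorf("goroutine %d's error returned", i)
                return
            }
            errs <- nil // always send something, success or failure
        }(i)
    }

    // Collect exactly n results; return on the first non-nil error.
    for i := 0; i < n; i++ {
        if err := <-errs; err != nil {
            return err
        }
    }
    return nil
}

func main() {
    rand.Seed(time.Now().UnixNano())
    fmt.Println(fetchAll())
}
Because the channel has room for every result, the remaining goroutines still finish and send without blocking, so nothing leaks even though the caller returned early.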
This answer adds the ability to collect the responses into doneData:
package main

import (
    "fmt"
    "math/rand"
    "os"
    "strconv"
)

var doneData []string // responses

func fetchAll(n int, doneCh chan bool, errCh chan error) {
    partialDoneCh := make(chan string)
    for i := 0; i < n; i++ {
        go func(i int) {
            if r := rand.Intn(100); r != 0 && r%10 == 0 {
                // simulate an error
                errCh <- fmt.Errorf("e33or for reqno=" + strconv.Itoa(r))
            } else {
                partialDoneCh <- strconv.Itoa(i)
            }
        }(i)
    }

    // mutation of doneData
    for d := range partialDoneCh {
        doneData = append(doneData, d)
        if len(doneData) == n {
            close(partialDoneCh)
            doneCh <- true
        }
    }
}

func main() {
    // rand.Seed(1)
    var n int
    var e error
    if len(os.Args) > 1 {
        if n, e = strconv.Atoi(os.Args[1]); e != nil {
            panic(e)
        }
    } else {
        n = 5
    }

    doneCh := make(chan bool)
    errCh := make(chan error)

    go fetchAll(n, doneCh, errCh)
    fmt.Println("main: end")

    select {
    case <-doneCh:
        fmt.Println("success:", doneData)
    case e := <-errCh:
        fmt.Println("failure:", e, doneData)
    }
}
Execute using go run filename.go 50, where 50 is n, i.e. the amount of parallelism.

GO language: fatal error: all goroutines are asleep - deadlock

The code below works fine with hard-coded JSON data; however, it doesn't work when I read the JSON data from a file. I'm getting a fatal error: all goroutines are asleep - deadlock error when using sync.WaitGroup.
WORKING EXAMPLE WITH HARD-CODED JSON DATA:
package main

import (
    "bytes"
    "fmt"
    "os/exec"
    "time"
)

func connect(host string) {
    cmd := exec.Command("ssh", host, "uptime")
    var out bytes.Buffer
    cmd.Stdout = &out
    err := cmd.Run()
    if err != nil {
        fmt.Println(err)
    }
    fmt.Printf("%s: %q\n", host, out.String())
    time.Sleep(time.Second * 2)
    fmt.Printf("%s: DONE\n", host)
}

func listener(c chan string) {
    for {
        host := <-c
        go connect(host)
    }
}

func main() {
    hosts := [2]string{"user1#111.79.154.111", "user2#111.79.190.222"}

    var c chan string = make(chan string)
    go listener(c)
    for i := 0; i < len(hosts); i++ {
        c <- hosts[i]
    }

    var input string
    fmt.Scanln(&input)
}
OUTPUT:
user#user-VirtualBox:~/go$ go run channel.go
user1#111.79.154.111: " 09:46:40 up 86 days, 18:16, 0 users, load average: 5"
user2#111.79.190.222: " 09:46:40 up 86 days, 17:27, 1 user, load average: 9"
user1#111.79.154.111: DONE
user2#111.79.190.222: DONE
NOT WORKING - EXAMPLE WITH READING JSON DATA FILE:
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "os"
    "os/exec"
    "sync"
    "time"
)

func connect(host string) {
    cmd := exec.Command("ssh", host, "uptime")
    var out bytes.Buffer
    cmd.Stdout = &out
    err := cmd.Run()
    if err != nil {
        fmt.Println(err)
    }
    fmt.Printf("%s: %q\n", host, out.String())
    time.Sleep(time.Second * 2)
    fmt.Printf("%s: DONE\n", host)
}

func listener(c chan string) {
    for {
        host := <-c
        go connect(host)
    }
}

type Content struct {
    Username string `json:"username"`
    Ip       string `json:"ip"`
}

func main() {
    var wg sync.WaitGroup
    var source []Content
    var hosts []string

    data := json.NewDecoder(os.Stdin)
    data.Decode(&source)

    for _, value := range source {
        hosts = append(hosts, value.Username+"#"+value.Ip)
    }

    var c chan string = make(chan string)
    go listener(c)
    for i := 0; i < len(hosts); i++ {
        wg.Add(1)
        c <- hosts[i]
        defer wg.Done()
    }

    var input string
    fmt.Scanln(&input)
    wg.Wait()
}
OUTPUT
user#user-VirtualBox:~/go$ go run deploy.go < hosts.txt
user1#111.79.154.111: " 09:46:40 up 86 days, 18:16, 0 users, load average: 5"
user2#111.79.190.222: " 09:46:40 up 86 days, 17:27, 1 user, load average: 9"
user1#111.79.154.111 : DONE
user2#111.79.190.222: DONE
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc210000068)
/usr/lib/go/src/pkg/runtime/sema.goc:199 +0x30
sync.(*WaitGroup).Wait(0xc210047020)
/usr/lib/go/src/pkg/sync/waitgroup.go:127 +0x14b
main.main()
/home/user/go/deploy.go:64 +0x45a
goroutine 3 [chan receive]:
main.listener(0xc210038060)
/home/user/go/deploy.go:28 +0x30
created by main.main
/home/user/go/deploy.go:53 +0x30b
exit status 2
user#user-VirtualBox:~/go$
HOSTS.TXT
[
    {
        "username": "user1",
        "ip": "111.79.154.111"
    },
    {
        "username": "user2",
        "ip": "111.79.190.222"
    }
]
A Go program ends when the main function ends.
From the language specification
Program execution begins by initializing the main package and then invoking the function main. When that function invocation returns, the program exits. It does not wait for other (non-main) goroutines to complete.
Therefore, you need to wait for your goroutines to finish. The common solution for this is to use a sync.WaitGroup.
The simplest possible code to synchronize goroutine:
package main

import "fmt"
import "sync"

var wg sync.WaitGroup // 1

func routine() {
    defer wg.Done() // 3
    fmt.Println("routine finished")
}

func main() {
    wg.Add(1)    // 2
    go routine() // *
    wg.Wait()    // 4
    fmt.Println("main finished")
}
And for synchronizing multiple goroutines
package main

import "fmt"
import "sync"

var wg sync.WaitGroup // 1

func routine(i int) {
    defer wg.Done() // 3
    fmt.Printf("routine %v finished\n", i)
}

func main() {
    for i := 0; i < 10; i++ {
        wg.Add(1)     // 2
        go routine(i) // *
    }
    wg.Wait() // 4
    fmt.Println("main finished")
}
WaitGroup usage in order of execution:
1. Declaration of the global variable. Making it global is the easiest way to make it visible to all functions and methods.
2. Increasing the counter. This must be done in the main goroutine, because there is no guarantee that a newly started goroutine will execute before step 4, due to the memory model guarantees.
3. Decreasing the counter. This must be done at the exit of the goroutine. Using a deferred call, we make sure that it will be called whenever the function ends, no matter how it ends.
4. Waiting for the counter to reach 0. This must be done in the main goroutine to prevent the program from exiting early.
* The actual parameters are evaluated before starting the new goroutine. Thus you need to evaluate them explicitly before wg.Add(1), so that code which might panic does not leave the counter increased.
Use
param := f(x)
wg.Add(1)
go g(param)
instead of
wg.Add(1)
go g(f(x))
Thanks for the very nice and detailed explanation, Grzegorz Żur.
One thing that I want to point out is that typically the func that needs to run in its own goroutine won't be in main(), so we would have something like this:
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

// VERY IMPORTANT to declare this globally, otherwise one would hit
// "fatal error: all goroutines are asleep - deadlock!"
var wg sync.WaitGroup

func doSomething(seconds int) {
    defer wg.Done()
    // cured cancer
    time.Sleep(time.Duration(seconds) * time.Second)
}

func main() {
    r := rand.New(rand.NewSource(time.Now().UnixNano()))
    randTime := r.Intn(10)

    wg.Add(1)
    go doSomething(randTime)
    wg.Wait()
    fmt.Println("Waiting for all threads to finish")
}
The thing that I want to point out is that the global declaration of wg is crucial for all goroutines to finish before main() does.
Try this code snippet:
package main

import (
    "bytes"
    "fmt"
    "os/exec"
    "sync"
    "time"
)

func connect(host string, wg *sync.WaitGroup) {
    defer wg.Done()
    cmd := exec.Command("ssh", host, "uptime")
    var out bytes.Buffer
    cmd.Stdout = &out
    err := cmd.Run()
    if err != nil {
        fmt.Println(err)
    }
    fmt.Printf("%s: %q\n", host, out.String())
    time.Sleep(time.Second * 2)
    fmt.Printf("%s: DONE\n", host)
}

func listener(c chan string, wg *sync.WaitGroup) {
    for {
        host, ok := <-c
        // check whether the channel has been closed
        if !ok {
            break
        }
        go connect(host, wg)
    }
}

func main() {
    var wg sync.WaitGroup

    hosts := [2]string{"user1#111.79.154.111", "user2#111.79.190.222"}

    var c chan string = make(chan string)
    go listener(c, &wg)
    for i := 0; i < len(hosts); i++ {
        wg.Add(1)
        c <- hosts[i]
    }
    close(c)

    var input string
    fmt.Scanln(&input)
    wg.Wait()
}
