This is my snippet of code that runs all the workers:
for w := 1; w <= *totalworker; w++ {
wg.Add(1)
go worker(w, jobs, results, dir, &wg)
}
And this is my worker:
defer wg.Done()
for j := range jobs {
filename := j[0][4] + ".csv"
fmt.Printf("\nWorker %d starting a job\n", id)
//results <- j //to show The result of jobs, unnecessary
fmt.Printf("\nWorker %d Creating %s.csv\n", id, j[0][4])
CreateFile(dir, &filename, j)
fmt.Printf("\nWorker %d Finished creating %s.csv on %s\n", id, j[0][4], *dir)
fmt.Printf("\nWorker %d finished a job\n", id)
}
}
When I run it without the WaitGroup it only creates a few of the files I need, but it shows the progress as it runs: worker 1 does a job, worker 2 does a job, and so on, right up to the end of the program.
With the WaitGroup it creates all the files I need, but it shows no progress while running; the program just ends, and all the "Worker 1 did a job, Worker 2 did a job" output only appears at the end of the program.
Is there anything I can do with this WaitGroup so that it shows each print as it happens?
You need to create channels so that each goroutine can listen for the previous one to complete. In this example I have 20 goroutines; they process some logic at the same time, but report in the original order:
package channel
import (
"fmt"
"sync"
"time"
)
func Tests() {
c := make(map[int]chan bool)
var wg sync.WaitGroup
// total go routine
loop := 20
// stop in step
stop := 11
for i := 1; i <= loop; i++ {
// init channel
c[i] = make(chan bool)
}
for i := 1; i <= loop; i++ {
wg.Add(1)
go func(c map[int]chan bool, i int) {
defer wg.Done()
// logic step
fmt.Println("Process Logic step ", i)
if i == 1 {
fmt.Println("Sending message first ", i)
c[i] <- true // send now notify to next step
} else {
select {
case channel := <-c[i-1]:
defer close(c[i-1])
if channel == true {
// send now
fmt.Println("Sending message ", i)
// not sent
if i < loop { // fix deadlock when the channel has no reader
if i == stop && stop > 0 {
c[i] <- false // stop in step i
} else {
c[i] <- true // send now notify to next step
}
}
} else {
// not send
if i < loop { // fix deadlock when the channel has no reader
c[i] <- false
}
}
}
}
}(c, i)
}
wg.Wait()
fmt.Println("go here ")
//time.Sleep(3 * time.Second)
fmt.Println("End")
}
In this example, we have a worker pool. The idea here is to simulate a clean shutdown of all goroutines based on a condition.
Goroutines get spun up based on the worker count. Each goroutine reads from the channel, does some work and sends the output to the outputChannel.
The main goroutine reads this output and prints it. To simulate a stop condition, the doneChannel is closed. The expected outcome is that the select inside each goroutine picks this up and executes return, which in turn calls the deferred println. The actual output is that it never gets called and main exits.
I'm not sure what the reason behind this is.
package main
import (
"log"
"time"
)
const jobs = 100
const workers = 1
var timeout = time.After(5 * time.Second)
func main() {
doneChannel := make(chan interface{})
outputChannel := make(chan int)
numberStream := generator()
for i := 1; i <= workers; i++ {
go worker(doneChannel, numberStream, outputChannel)
}
// listen for output
loop:
for {
select {
case i := <-outputChannel:
log.Println(i)
case <-timeout:
// before you timeout cleanup go routines
break loop
}
}
close(doneChannel)
time.Sleep(5 * time.Second)
log.Println("main exited")
}
func generator() <-chan int {
defer log.Println("generator completed !")
c := make(chan int)
go func() {
for i := 1; i <= jobs; i++ {
c <- i
}
defer close(c)
}()
return c
}
func worker(done <-chan interface{}, c <-chan int, output chan<- int) {
// this will be a go routine
// Do some work and send results to output Channel.
// Incase if the done channel is called kill the go routine.
defer log.Println("go routines exited")
for {
select {
case <-done:
log.Println("here")
return
case i := <-c:
time.Sleep(1 * time.Second) // worker delay
output <- i * 100
}
}
}
When your main loop finishes on the timeout, your program simply continues and:
closes the done channel
prints the message
exits
There is nothing making main wait for any goroutine to process the signal from this channel.
If you add a small sleep you will see some of the messages.
In real scenarios we use a WaitGroup to be sure all goroutines finish properly.
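As a minimal sketch of that idea (my own illustration, not the original code; the worker bodies, timings and the draining step are assumed), each worker signals a WaitGroup when it returns, main keeps draining the output channel after closing done, and wg.Wait() holds main open until every worker's deferred log line has run:

package main

import (
	"log"
	"sync"
	"time"
)

func main() {
	done := make(chan struct{})
	output := make(chan int)
	var wg sync.WaitGroup

	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			defer log.Printf("worker %d exited", id)
			for {
				select {
				case <-done:
					return
				case output <- id * 100:
					time.Sleep(100 * time.Millisecond) // simulate the per-job work
				}
			}
		}(i)
	}

	timeout := time.After(1 * time.Second)
loop:
	for {
		select {
		case v := <-output:
			log.Println(v)
		case <-timeout:
			break loop
		}
	}

	close(done)
	// Keep draining output so a worker blocked on a send can still reach its done case.
	go func() {
		for range output {
		}
	}()
	wg.Wait() // main now really waits until every worker has returned
	close(output)
	log.Println("main exited")
}

The extra drain goroutine matters here: without it a worker blocked on a send to output would never reach its done case, which is likely why the deferred log never appears in the original program even with the sleep.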
I have the following piece of code. I'm trying to run three goroutines at the same time, never exceeding three. This works as expected, but the code is supposed to be updating a table in the DB.
So the first routine processes the first 50 rows, the second the next 50, the third the next 50, and then it repeats. I don't want two routines processing the same rows at the same time, but because of how long the update takes this happens almost every time.
To solve this, I started flagging the rows with a new boolean column, processing. I set it to true for all rows about to be updated when the routine starts, and sleep the script for 6 seconds to allow the flag to be updated.
This works for a random amount of time, but every now and then I'll see two or three jobs processing the same rows again. I feel the method I'm using to prevent duplicate updates is a bit janky, and I was wondering if there is a better way.
stopper := make(chan struct{}, 3)
var counter int
for {
counter++
stopper <- struct{}{}
go func(db *sqlx.DB, c int) {
fmt.Println("start")
updateTables(db)
fmt.Println("stop"b)
<-stopper
}(db, counter)
time.Sleep(6 * time.Second)
}
In updateTables:
var ids []string
err := sqlx.Select(db, &data, `select * from table_data where processing = false `)
if err != nil {
panic(err)
}
for _, row:= range data{
ids = append(ids, row.Id)
}
if len(data) == 0 {
return
}
for _, row:= range data{
_, err = db.Exec(`update table_data set processing = true where id = $1`, row.Id)
if err != nil {
panic(err)
}
}
// Additional row processing
I think there's a misunderstanding about how to approach goroutines in this case.
Goroutines doing this kind of work should be treated like worker threads, using channels as the communication method between the main goroutine (which does the synchronization) and the worker goroutines (which do the actual job).
package main
import (
"log"
"sync"
"time"
)
type record struct {
id int
}
func main() {
const WORKER_COUNT = 10
recordschan := make(chan record)
var wg sync.WaitGroup
for k := 0; k < WORKER_COUNT; k++ {
wg.Add(1)
// Create the worker which will be doing the updates
go func(workerID int) {
defer wg.Done() // Marking the worker as done
for record := range recordschan {
updateRecord(record)
log.Printf("req %d processed by worker %d", record.id, workerID)
}
}(k)
}
// Feeding the records channel
for _, record := range fetchRecords() {
recordschan <- record
}
// Closing our channel as we're not using it anymore
close(recordschan)
// Waiting for all the go routines to finish
wg.Wait()
log.Println("we're done!")
}
func fetchRecords() []record {
result := []record{}
for k := 0; k < 100; k++ {
result = append(result, record{k})
}
return result
}
func updateRecord(req record) {
time.Sleep(200 * time.Millisecond)
}
You can even buffer things in the main goroutine if you need to hand out the rows in batches of 50 at a time.
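As a rough sketch of that batching idea (the batch size, the channel of slices, and the stand-in helpers are mine, not from the answer above), the main goroutine can slice the records into chunks of 50 and hand each worker a whole batch, so no two workers ever touch the same rows:

package main

import (
	"log"
	"sync"
	"time"
)

type record struct {
	id int
}

func main() {
	const workerCount = 3
	const batchSize = 50

	batches := make(chan []record)
	var wg sync.WaitGroup

	for k := 0; k < workerCount; k++ {
		wg.Add(1)
		go func(workerID int) {
			defer wg.Done()
			for batch := range batches {
				for _, r := range batch {
					updateRecord(r)
				}
				log.Printf("worker %d processed %d records starting at id %d", workerID, len(batch), batch[0].id)
			}
		}(k)
	}

	records := fetchRecords()
	// Cut the record list into chunks of batchSize and feed them to the workers,
	// so each batch of rows is only ever handled by a single goroutine.
	for start := 0; start < len(records); start += batchSize {
		end := start + batchSize
		if end > len(records) {
			end = len(records)
		}
		batches <- records[start:end]
	}
	close(batches)
	wg.Wait()
}

func fetchRecords() []record {
	result := []record{}
	for k := 0; k < 200; k++ {
		result = append(result, record{k})
	}
	return result
}

func updateRecord(r record) {
	time.Sleep(10 * time.Millisecond) // stand-in for the real DB update
}

If you want the main goroutine to queue a few batches ahead without blocking, a buffered channel (make(chan []record, workerCount)) would do that.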
Why is there a deadlock after printing all the values?
What I understand:
On the receiving side, the channel keeps waiting, which blocks the main goroutine. I also tried it with a WaitGroup, but that doesn't work.
package main
import (
"fmt"
//"sync"
)
//output from 10 20 30 ... - 100
func main() {
//wg := sync.WaitGroup{}
done := make(chan int)
for i := 1; i <= 10; i++ {
//wg.Add(1)
go func(i int) {
done <- i * 10
}(i)
// close(done)
}
// close(done)
//wg.Wait()
// for item := range done{
// fmt.Println(item)}
for {
if value, ok := <-done; ok {
fmt.Println("received is ", value)
} else {
return
//os.Exit(1)
}
}
}
As per the answer from @aureliar, value, ok := <-done will block until a value is received on the channel or the channel is closed (and, once your goroutines complete, neither of these happens). From your question and the comments in your code it looks like you were close to working out how to solve this by waiting for the goroutines to complete and closing the channel.
Because you know the number of goroutines in advance (and each goroutine always sends one value on the channel) there is a simple solution (playground):
func main() {
noOfGoRoutines := 10
done := make(chan int)
for i := 1; i <= noOfGoRoutines; i++ {
go func(i int) {
done <- i * 10
}(i)
}
for noOfGoRoutines > 0 {
value := <-done
fmt.Println("received is ", value)
noOfGoRoutines--
}
}
Things get a bit more complicated when you don't know how many values will be received in advance. In that case closing the channel is a good way of letting the receiver know you have finished. In your case this means closing the channel after the goroutines have completed (this is important because sending to a closed channel leads to a panic).
To do this using a WaitGroup you will need three functions:
wg.Add(delta int) to set the counter. The call you have commented out is fine but an alternative is to call wg.Add(10) before entering the loop.
wg.Done() "decrements the WaitGroup counter by one" - you need to call this before each goroutine ends (it's common to defer the call). This was missing from your code.
wg.Wait() "blocks until the WaitGroup counter is zero".
With the above in place you can safely call close(done) knowing that nothing further will be sent to the channel. However there is a complication - if you just add this code after your first loop you will hit another deadlock because your goroutines will all block at done <- i * 10. This happens because main will be blocked at wg.Wait() meaning nothing is receiving from the channel and as per the spec:
If the capacity is zero or absent, the channel is unbuffered and communication succeeds only when both a sender and receiver are ready.
This can be solved by waiting/closing within another goroutine.
You can try this in the playground.
package main
import (
"fmt"
"sync"
)
//output from 10 20 30 ... - 100
func main() {
wg := sync.WaitGroup{}
done := make(chan int)
for i := 1; i <= 10; i++ {
wg.Add(1)
go func(i int) {
done <- i * 10
wg.Done()
}(i)
}
go func() {
wg.Wait()
close(done)
}()
for value := range done {
fmt.Println("received is ", value)
}
fmt.Println("channel closed")
/* I have simplified your loop but this would also work
for {
if value, ok := <-done; ok {
fmt.Println("received is ", value)
} else {
fmt.Println("channel closed")
return
}
}
*/
}
Because you never close the channel. So the value, ok := <-done part is always waiting for the 11th value that will never come.
Replacing this part should do the trick:
for i := 1; i <= 10; i++ {
//wg.Add(1)
go func(i int) {
done <- i * 10
}(i)
// close(done)
}
new:
go func(){
for i := 1; i <= 10; i++ {
done <- i * 10
}
close(done)
}()
I've been experimenting with goroutines and channels, and I wanted to test the WaitGroup feature. Here I'm trying to execute an HTTP flood job, where the parent thread spawns a lot of goroutines which make infinite requests unless they receive a stop message:
func (hf *HTTPFlood) Run() {
childrenStop := make(chan int, hf.ConcurrentCalls)
stop := false
totalRequests := 0
requestsChan := make(chan int)
totalErrors := 0
errorsChan := make(chan int)
var wg sync.WaitGroup
for i := 0; i < hf.ConcurrentCalls; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-childrenStop:
fmt.Printf("stop child\n")
return
default:
_, err := Request(hf.Victim.String())
requestsChan <- 1
if err != nil {
errorsChan <- 1
}
}
}
}()
}
timeout := time.NewTimer(time.Duration(MaximumJobTime) * time.Second)
for !stop {
select {
case req := <- requestsChan:
totalRequests += req
case err := <- errorsChan:
totalErrors += err
case <- timeout.C:
fmt.Printf("%s timed up\n", hf.Victim.String())
for i := 0; i < hf.ConcurrentCalls; i++ {
childrenStop <- 1
}
close(childrenStop)
stop = true
break
}
}
fmt.Printf("waiting\n")
wg.Wait()
fmt.Printf("after wait\n")
close(requestsChan)
close(errorsChan)
fmt.Printf("end\n")
}
Once the timeout is fired, the parent thread successfully exits the loop and reaches the Wait instruction, but even though the childrenStop channel is filled, the child goroutines never seem to receive anything on it.
What am I missing?
EDIT:
So the issue obviously was how the channels and their sends/receives were managed.
First of all, the childrenStop channel was closed before all the children had received the message. The channel should be closed after the Wait.
On the other hand, since nothing was reading from requestsChan or errorsChan once the parent thread sent the stop signal, most of the children stayed blocked sending on those two channels. I tried to keep reading in the parent thread, outside the loop just before the Wait, but that didn't work, so I switched the implementation to atomic counters, which seem to be a more suitable way to handle this specific use case.
func (hf *HTTPFlood) Run() {
childrenStop := make(chan int, hf.ConcurrentCalls)
var totalReq uint64
var totalErrors uint64
var wg sync.WaitGroup
for i := 0; i < hf.ConcurrentCalls; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-childrenStop:
fmt.Printf("stop child\n")
return
default:
_, err := Request(hf.Victim.String())
atomic.AddUint64(&totalReq, 1)
if err != nil {
atomic.AddUint64(&totalErrors, 1)
}
}
}
}()
}
timeout := time.NewTimer(time.Duration(MaximumJobTime) * time.Second)
<- timeout.C
fmt.Printf("%s timed up\n", hf.Victim.String())
for i := 0; i < hf.ConcurrentCalls; i++ {
childrenStop <- 1
}
fmt.Printf("waiting\n")
wg.Wait()
fmt.Printf("after wait\n")
close(childrenStop)
fmt.Printf("end\n")
}
Your goroutines can be blocked at requestsChan <- 1.
case <- timeout.C:
fmt.Printf("%s timed up\n", hf.Victim.String())
for i := 0; i < hf.ConcurrentCalls; i++ {
childrenStop <- 1
}
close(childrenStop)
stop = true
break
Here you are sending a value on childrenStop and expecting the goroutines to receive it. But while you are sending the childrenStop signal, your routines could already have sent something on requestsChan, and since you break out of the loop after sending the stop signals, there is no longer anyone listening on requestsChan.
You can verify this by printing something just before and after requestsChan <- 1 and watching the behaviour.
A send on an unbuffered channel blocks until someone is receiving on the other end.
Here is a possible modification.
package main
import (
"fmt"
"time"
)
func main() {
requestsChan := make(chan int)
done := make(chan chan bool)
for i := 0; i < 5; i++ {
go func(it int) {
for {
select {
case c := <-done:
c <- true
return
default:
requestsChan <- it
}
}
}(i)
}
max := time.NewTimer(1 * time.Millisecond)
allChildrenDone := make(chan bool)
childrenDone := 0
childDone := make(chan bool)
go func() {
for {
select {
case i := <-requestsChan:
fmt.Printf("received %d;", i)
case <-max.C:
fmt.Println("\nTimeup")
for i := 0; i < 5; i++ {
go func() {
done <- childDone
fmt.Println("sent done")
}()
}
case <-childDone:
childrenDone++
fmt.Println("child done ", childrenDone)
if childrenDone == 5 {
allChildrenDone <- true
return
}
}
}
}()
fmt.Println("Waiting")
<-allChildrenDone
}
The thing to note here is that I'm sending the stop signal from separate goroutines, so that the loop can keep running while I wait for all the children to exit cleanly.
Please watch this talk by Rob Pike which covers these details clearly.
[Edit]: The previous code would have left a goroutine still running after exit.
I have a goroutine which can generate an infinite number of values (each more suitable than the last), but it takes progressively longer to find each value. I'm trying to find a way to add a time limit, say 10 seconds, after which my function does something with the best value received so far.
This is my current "solution", using a channel and timer:
// the goroutine which runs infinitely
// (or at least a very long time for high values of depth)
func runSearch(depth int, ch chan int) {
for i := 1; i <= depth; i++ {
fmt.Printf("Searching to depth %v\n", i)
ch <- search(i)
}
}
// consumes progressively better values until the channel is closed
func awaitBestResult(ch chan int) {
var best int
for result := range ch {
best = result
}
// do something with best result here
}
// run both consumer and producer
func main() {
timer := time.NewTimer(time.Millisecond * 2000)
ch := make(chan int)
go runSearch(1000, ch)
go awaitBestResult(ch)
<-timer.C
close(ch)
}
This mostly works - the best result is processed after the timer ends and the channel is closed. However, I then get a panic (panic: send on closed channel) from the runSearch goroutine, since the channel has been closed by the main function.
How can I stop the first goroutine from running after the timer has completed? Any help is much appreciated.
You need to ensure that the goroutine knows when it is done processing, so that it doesn't attempt to write to a closed channel, and panic.
This sounds like a perfect case for the context package:
func runSearch(ctx context.Context, depth int, ch chan int) {
for i := 1; i <= depth; i++ {
select {
case <-ctx.Done():
// Context cancelled, return
return
default:
}
fmt.Printf("Searching to depth %v\n", i)
ch <- search(i)
}
}
Then in main():
// run both consumer and producer
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
ch := make(chan int)
go runSearch(ctx, 1000, ch)
go awaitBestResult(ch)
<-ctx.Done() // wait for the timeout before closing the channel
close(ch)
}
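One remaining subtlety, as a side note of my own rather than part of the answer above: if search(i) is slow, runSearch can still be blocked in its send at the moment main closes the channel and panic. Letting the producer close the channel avoids that entirely, since only the sender ever closes it. A small self-contained variation (the stand-in search below is assumed):

package main

import (
	"context"
	"fmt"
	"time"
)

// stand-in for the real search; assume it gets slower as i grows
func search(i int) int {
	time.Sleep(time.Duration(i) * time.Millisecond)
	return i
}

func runSearch(ctx context.Context, depth int, ch chan int) {
	defer close(ch) // only the sender closes the channel, so no send can hit a closed channel
	for i := 1; i <= depth; i++ {
		result := search(i)
		select {
		case <-ctx.Done():
			return // timed out, stop searching
		case ch <- result:
		}
	}
}

func awaitBestResult(ch chan int) {
	var best int
	for result := range ch {
		best = result
	}
	fmt.Println("best result:", best) // do something with the best result here
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	ch := make(chan int)
	go runSearch(ctx, 1000, ch)
	awaitBestResult(ch) // returns once runSearch closes the channel
}

With this shape, main doesn't need its own close(ch) at all; it simply runs the consumer and returns once the producer has finished or the context has expired.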
You are getting a panic because your sending goroutine runSearch apparently outlives the timer and tries to send a value on the channel which has already been closed by your main goroutine. You need a way to signal the sending goroutine not to send any more values once the timer has lapsed and before you close the channel in main. On the other hand, if your search finishes sooner, you also need to tell main to move on. You can use one channel and synchronize so that there are no race conditions. And finally, you need to know when your consumer has processed all the data before you can exit main.
Here's something which may help.
package main
import (
"fmt"
"sync"
"time"
)
var mu sync.Mutex //To protect the stopped variable which will decide if a value is to be sent on the signalling channel
var stopped bool
func search(i int) int {
time.Sleep(1 * time.Millisecond)
return (i + 1)
}
// (or at least a very long time for high values of depth)
func runSearch(depth int, ch chan int, stopSearch chan bool) {
for i := 1; i <= depth; i++ {
fmt.Printf("Searching to depth %v\n", i)
n := search(i)
select {
case <-stopSearch:
fmt.Println("Timer over! Searched till ", i)
return
default:
}
ch <- n
fmt.Printf("Sent depth %v result for processing\n", i)
}
mu.Lock() //To avoid race condition with timer also being
//completed at the same time as execution of this code
if stopped == false {
stopped = true
stopSearch <- true
fmt.Println("Search completed")
}
mu.Unlock()
}
// consumes progressively better values until the channel is closed
func awaitBestResult(ch chan int, doneProcessing chan bool) {
var best int
for result := range ch {
best = result
}
fmt.Println("Best result ", best)
// do something with best result here
//and communicate to main when you are done processing the result
doneProcessing <- true
}
func main() {
doneProcessing := make(chan bool)
stopSearch := make(chan bool)
// timer := time.NewTimer(time.Millisecond * 2000)
timer := time.NewTimer(time.Millisecond * 12)
ch := make(chan int)
go runSearch(1000, ch, stopSearch)
go awaitBestResult(ch, doneProcessing)
select {
case <-timer.C:
//If at the same time runsearch is also completed and trying to send a value !
//So we hold a lock before sending value on the channel
mu.Lock()
if stopped == false {
stopped = true
stopSearch <- true
fmt.Println("Timer expired")
}
mu.Unlock()
case <-stopSearch:
fmt.Println("runsearch goroutine completed")
}
close(ch)
//Wait for your consumer to complete processing
<-doneProcessing
//Safe to exit now
}
On the playground. Change the timer value to observe both scenarios.