How can I use GOMAXPROCS? The code below sets GOMAXPROCS, but more workers are spawned than that setting. I expect 2 processes but 5 still run.
package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

func worker(i int, waiter chan struct{}, wg *sync.WaitGroup) {
    defer func(waiter chan struct{}, wg *sync.WaitGroup) {
        fmt.Printf("worker %d done\n", i)
        wg.Done()
        <-waiter
    }(waiter, wg)
    fmt.Printf("worker %d starting\n", i)
    time.Sleep(time.Second)
}

func main() {
    runtime.GOMAXPROCS(2)
    var concurrency = 5
    var items = 10
    waiter := make(chan struct{}, concurrency)
    var wg sync.WaitGroup
    for i := 0; i < items; i++ {
        wg.Add(1)
        waiter <- struct{}{}
        go worker(i, waiter, &wg)
    }
    wg.Wait()
}
Go has three concepts for what C/C++ programmers think of as a thread: G, P, M.
M = actual thread
G = Goroutines (i.e., the code in your program)
P = Processor
There is no Go API for limiting the number of Ms. There is no API for limiting the number of Gs - a new one gets created every time go func(...) is called. The GOMAXPROCS thing is there to limit Ps.
Each P is used to track the runtime state of some running Goroutine.
You should think of GOMAXPROCS as the peak number of Ms devoted to running Goroutines. (There are other Ms that don't run Goroutines, but handle garbage collection tasks and serve as template threads for creating new Ms as needed etc. Some Ms are devoted to holding runtime state while some Go code is blocked inside a system call.)
So, in terms of the code in your program, GOMAXPROCS is a constraint on how parallel its Go code execution can be. When a running Goroutine reaches a point where it becomes blocked, it is parked and its P is used to resume execution of some other Goroutine that is not blocked.
In your example, all the workers are Gs. The waiter channel's capacity of 5 is what admits five of them at a time; GOMAXPROCS(2) only caps how many execute simultaneously. A worker sleeping in time.Sleep is parked and doesn't occupy a P at all, which is why you still see five workers "running".
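If the goal is at most two workers in flight at a time, the knob to turn is the capacity of the semaphore channel, not GOMAXPROCS. A minimal sketch of that change (same worker shape as above; one way among several):

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    const maxWorkers = 2 // at most 2 workers in flight at once
    sem := make(chan struct{}, maxWorkers)
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        sem <- struct{}{} // blocks while maxWorkers slots are taken
        go func(i int) {
            defer wg.Done()
            defer func() { <-sem }() // free the slot when done
            fmt.Printf("worker %d starting\n", i)
            time.Sleep(time.Second)
            fmt.Printf("worker %d done\n", i)
        }(i)
    }
    wg.Wait()
}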
Related
Without frameworks like RxGo, how could I accomplish the following in Go?
Three goroutines of different running times are in a context: short, medium, and long. These goroutines are long-running jobs like reading or uploading a large file.
If any one of the three goroutines cancels (on error), the other two cancel as well.
Run code in all three goroutines when cancelled (like panic & recover). These are actions like closing the large file or notifying that the upload has failed.
Updated attempt:
package main

import (
    "context"
    "fmt"
    "math/rand"
    "sync"
    "time"
)

func randInt(min int, max int) int {
    return min + rand.Intn(max-min)
}

func main() {
    ctx := context.Background()
    ctx, cancel := context.WithCancel(ctx)
    defer cancel() // release the context even if no operation fails
    rand.Seed(time.Now().UnixNano())
    wg := &sync.WaitGroup{}
    errworker := make(chan int, 4)
    work := func(i int, ec chan int, c context.Context, cx context.CancelFunc) {
        defer wg.Done()
        // Buffered, so the inner goroutine can always finish (and not leak)
        // even when the select below has already taken the Done branch.
        workchan := make(chan struct{}, 1)
        go func() {
            interval := time.Duration(randInt(1, 10000)) * time.Millisecond
            fmt.Printf("Operation %d started: will take %s\n", i, interval)
            time.Sleep(interval)
            if randInt(0, 100) > 80 {
                fmt.Printf("Operation %d failed!\n", i)
                ec <- i
                cx()
            }
            workchan <- struct{}{}
        }()
        select {
        case <-workchan:
            fmt.Printf("Operation %d done\n", i)
        case <-c.Done():
            fmt.Printf("Operation %d halted\n", i)
        }
    }
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go work(i, errworker, ctx, cancel)
    }
    wg.Wait()
    close(errworker)
    for e := range errworker {
        fmt.Printf("Error in worker %d\n", e)
    }
}
(Edit) Playground: https://go.dev/play/p/CNACYe43Dh3
Updated attempt playground: https://go.dev/play/p/5lzdERwqG8o
Is there a simple, elegant, ELI5 solution to this problem? I can't help but notice that with channels I may need child contexts with child goroutines to listen for the cancellation signal, since once a goroutine enters a long-running job there is no way (from what I've tried) to stop it other than cancelling the context altogether. A controlled way to terminate the goroutines early is what I'm after.
(Edit)
In my updated attempt I simply placed the actual work in an inner goroutine while the outer goroutine listens for context cancellation. In this attempt I decided to use an error channel (although in theory only one error will come through) to collect and report errors at the end.
Are there any caveats or blind spots to this approach? Feel free to correct me on implementation and approach, the goal is to have a controlled way to terminate goroutines early.
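For comparison (not part of the attempt above): the one-fails-all-cancel behavior described here is exactly what the golang.org/x/sync/errgroup package provides. errgroup.WithContext derives a context that is cancelled as soon as any function passed to Go returns an error. A minimal sketch:

package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/sync/errgroup"
)

func main() {
    g, ctx := errgroup.WithContext(context.Background())
    for i := 0; i < 3; i++ {
        i := i // capture the loop variable (pre-Go 1.22)
        g.Go(func() error {
            select {
            case <-time.After(time.Duration(i+1) * time.Second):
                if i == 1 { // simulate the medium job failing
                    return fmt.Errorf("operation %d failed", i)
                }
                fmt.Printf("operation %d done\n", i)
                return nil
            case <-ctx.Done(): // another job failed: cleanup goes here
                fmt.Printf("operation %d halted\n", i)
                return ctx.Err()
            }
        })
    }
    // Wait returns the first non-nil error, after all functions return.
    if err := g.Wait(); err != nil {
        fmt.Println("first error:", err)
    }
}

The same caveat applies either way: the long-running work must check ctx.Done() at interruptible points, because Go offers no way to kill a goroutine from the outside.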
Thanks!
I'm trying to build a generic pipeline library using worker pools. I created an interface for a source, a pipe, and a sink. The pipe's job is to receive data from an input channel, process it, and put the result onto an output channel. Here is its intended behavior:
Receive data from an input channel.
Delegate the data to an available worker.
The worker sends the result to the output channel.
Close the output channel once all workers are finished.
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    var wg sync.WaitGroup
    out = make(chan interface{}, 100)
    go func() {
        for i := 1; i <= 100; i++ {
            go p.work(in, out, &wg)
        }
        wg.Wait()
        close(out)
    }()
    return
}

func (p *pipe) work(jobs <-chan interface{}, out chan<- interface{}, wg *sync.WaitGroup) {
    for j := range jobs {
        func(j Job) {
            defer wg.Done()
            wg.Add(1)
            res := doSomethingWith(j)
            out <- res
        }(j)
    }
}
However, running it may either exit without processing all of the inputs, or panic with a send on closed channel message. Running it with the -race flag reports a data race between close(out) and out <- res.
Here's what I think might be happening. Once a number of workers have finished their jobs, there is a split second where wg's counter reaches zero. Hence, wg.Wait() returns and the program proceeds to close(out). Meanwhile, the jobs channel isn't finished producing data, meaning some workers are still running in other goroutines. Since the out channel is already closed, sending to it results in a panic.
Should the wait group be placed somewhere else? Or is there a better way to wait for all workers to finish?
It's not clear why you want one worker per job, but if you do, you can restructure your outer loop setup (see untested code below). This kind of obviates the need for worker pools in the first place.
Always, though, do a wg.Add before spinning off any worker. Right here, you are spinning off exactly 100 workers:
var wg sync.WaitGroup
out = make(chan interface{}, 100)
go func() {
    for i := 1; i <= 100; i++ {
        go p.work(in, out, &wg)
    }
    wg.Wait()
    close(out)
}()
You could therefore do this:
var wg sync.WaitGroup
out = make(chan interface{}, 100)
go func() {
    wg.Add(100) // ADDED - count the 100 workers
    for i := 1; i <= 100; i++ {
        go p.work(in, out, &wg)
    }
    wg.Wait()
    close(out)
}()
Note that you can now move wg itself down into the goroutine that spins off the workers. This can make things cleaner, if you give up on the notion of having each worker spin off jobs as new goroutines. But if each worker is going to spin off another goroutine, that worker itself must also use wg.Add, like this:
for j := range jobs {
    wg.Add(1) // ADDED - count the spun-off goroutines
    go func(j Job) {
        res := doSomethingWith(j)
        out <- res
        wg.Done() // MOVED (for illustration only, can defer as before)
    }(j)
}
wg.Done() // ADDED - our work in `p.work` is now done
That is, each anonymous function is another user of the channel, so increment the users-of-channel count (wg.Add(1)) before spinning off a new goroutine. When you have finished reading the input channel jobs, call wg.Done() (perhaps via an earlier defer, but I showed it at the end here).
The key to thinking about this is that wg counts the number of active goroutines that could, at this point, write to the channel. It only goes to zero when no goroutines intend to write any more. That makes it safe to close the channel.
Consider using this rather simpler (but untested) version:
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    out = make(chan interface{})
    var wg sync.WaitGroup
    go func() {
        defer close(out)
        for j := range in {
            wg.Add(1)
            go func(j Job) {
                res := doSomethingWith(j)
                out <- res
                wg.Done()
            }(j)
        }
        wg.Wait()
    }()
    return out
}
You now have one goroutine that is reading the in channel as fast as it can, spinning off jobs as it goes. You'll get one goroutine per incoming job, except when they finish their work early. There is no pool, just one worker per job (same as your code except that we knock out the pools that aren't doing anything useful).
Or, since there are only some number of CPUs available, spin off some number of goroutines as you did before at the start, but have each one run one job to completion, and deliver its result, then go back to reading the next job:
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    out = make(chan interface{})
    go func() {
        defer close(out)
        var wg sync.WaitGroup
        ncpu := runtime.NumCPU() // or something fancier if you like
        wg.Add(ncpu)
        for i := 0; i < ncpu; i++ {
            go func() {
                defer wg.Done()
                for j := range in {
                    out <- doSomethingWith(j)
                }
            }()
        }
        wg.Wait()
    }()
    return out
}
By using runtime.NumCPU() we get only as many workers reading jobs as there are CPUs to run them. That is the pool, and each of its workers handles one job at a time.
There's generally no need to buffer the output channel, if the output-channel readers are well-structured (i.e., don't cause the pipeline to constipate). If they're not, the depth of buffering here limits how many jobs you can "work ahead" of whoever is consuming the results. Set it based on how useful it is to do this "working ahead"—not necessarily the number of CPUs, or the number of expected jobs, or whatever.
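For instance, in Process above (the depth of 8 here is arbitrary, purely for illustration):

// Workers may complete up to 8 results before the consumer reads any;
// an unbuffered channel would keep them fully in lock-step with it.
out = make(chan interface{}, 8)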
It's possible that the jobs are being completed just as fast as they're being sent. In this case the WaitGroup counter will be floating near zero even while there are many more items to process.
One fix for this is to add one to the WaitGroup before sending the jobs and decrement that one after sending them all, effectively counting the sender as one of the "jobs". In this case, though, it's better to do the wg.Add where the workers are spawned, one Add per worker with a matching Done when that worker's loop ends:
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    var wg sync.WaitGroup
    out = make(chan interface{}, 100)
    go func() {
        for i := 1; i <= 100; i++ {
            wg.Add(1)
            go p.work(in, out, &wg)
        }
        wg.Wait()
        close(out)
    }()
    return
}

func (p *pipe) work(jobs <-chan interface{}, out chan<- interface{}, wg *sync.WaitGroup) {
    defer wg.Done() // one Done per worker, matching the Add above
    for j := range jobs {
        func(j Job) {
            res := doSomethingWith(j)
            out <- res
        }(j)
    }
}
One thing I notice in the code is that a function is invoked for each job while, at the same time, each worker drains the jobs channel in a loop until it is closed. It doesn't seem necessary to do both, as the sketch below shows.
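A sketch of that simplification (assuming the same pipe, Job, and doSomethingWith definitions as in the question; Process keeps one wg.Add per worker as in the previous snippet):

func (p *pipe) work(jobs <-chan interface{}, out chan<- interface{}, wg *sync.WaitGroup) {
    defer wg.Done() // one Done per worker
    for j := range jobs {
        out <- doSomethingWith(j) // no per-job function needed
    }
}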
I'm having difficulty getting Go concurrency to work correctly. I'm working with data loaded from an XML data source. Once I load the data into memory, I loop through the XML elements and perform an operation. The code prior to the concurrency addition has been tested and is functional, and I don't believe it has any influence on the concurrency addition. I have two failed attempts at concurrency implementations, both with different outputs. I used locking because I don't want to enter a race condition.
For this implementation, it never enters the goroutine.
var mu sync.Mutex

// length is 197K
for i := 0; i < len(listings.Listings); i++ {
    go func() {
        mu.Lock()
        // code execution (tested prior to adding concurrency and locking)
        mu.Unlock()
    }()
}
For this implementation using wait groups, a runtime out-of-memory error occurs:
var mu sync.Mutex
var wg sync.WaitGroup

// length is 197K
for i := 0; i < len(listings.Listings); i++ {
    wg.Add(1)
    go func() {
        mu.Lock()
        // code execution (tested prior to adding concurrency, locking and wait group)
        wg.Done()
        mu.Unlock()
    }()
}
wg.Wait()
I'm not really sure what's going on and could use some assistance.
You don't need a Mutex here if you want to make this concurrent: while it is locked, only one goroutine does work at a time. Note also that in your first attempt nothing waits for the goroutines, so the program can move on (and exit) before any of them get scheduled.
197K goroutines is a lot (each goroutine needs its own stack), so try a lower number of goroutines. You can accomplish this by creating N goroutines, each of them receiving from the same channel.
https://play.golang.org/p/s4e0YyHdyPq
package main

import (
    "fmt"
    "sync"
)

type Listing struct{}

func main() {
    var (
        wg          sync.WaitGroup
        concurrency = 100
    )
    c := make(chan Listing)
    wg.Add(concurrency)
    for i := 0; i < concurrency; i++ {
        go func(ci <-chan Listing) {
            for l := range ci {
                // code, l is a single Listing
                fmt.Printf("%v", l)
            }
            wg.Done()
        }(c)
    }
    // replace with your var
    listings := []Listing{Listing{}}
    for _, l := range listings {
        c <- l
    }
    close(c)
    wg.Wait()
}
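One design note on the snippet above: because c is unbuffered, each c <- l send blocks until one of the N workers is free, so the feeding loop naturally throttles itself to the pool's pace. Buffering c would only let the feeder run ahead; it would not add parallelism.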
How do I deal with a situation where an undetected deadlock occurs when reading the results of an unknown number of tasks from a channel in a complex program, e.g. a web server?
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    rand.Seed(time.Now().UTC().UnixNano())
    results := make(chan int, 100)

    // we can't know how many tasks there will be
    for i := 0; i < rand.Intn(1<<8)+1<<8; i++ {
        go func(i int) {
            time.Sleep(time.Second)
            results <- i
        }(i)
    }

    // we can't close the channel here,
    // because it is still being written to
    //close(results)

    // something else is going on in other goroutines (think web server),
    // therefore a deadlock won't be detected
    go func() {
        for {
            time.Sleep(time.Second)
        }
    }()

    for j := range results {
        fmt.Println(j)
        // we just get stuck here
    }
}
In simpler programs Go detects the deadlock and fails with a useful error. Most examples either fetch a known number of results or write to the channel sequentially.
The trick is to use sync.WaitGroup and wait for the tasks to finish in a non-blocking way.
var wg sync.WaitGroup

// we can't know how many tasks there will be
for i := 0; i < rand.Intn(1<<8)+1<<8; i++ {
    wg.Add(1)
    go func(i int) {
        time.Sleep(time.Second)
        results <- i
        wg.Done()
    }(i)
}

// wait for all tasks to finish in another goroutine
go func() {
    wg.Wait()
    close(results)
}()

// execution continues here, so you can range over results as before
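Assembled into a complete program (the same pieces as the question plus the fix, for reference):

package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

func main() {
    rand.Seed(time.Now().UTC().UnixNano())
    results := make(chan int, 100)
    var wg sync.WaitGroup

    // we still can't know how many tasks there will be
    for i := 0; i < rand.Intn(1<<8)+1<<8; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            time.Sleep(time.Second)
            results <- i
        }(i)
    }

    // closer goroutine: once every task has sent its result,
    // close the channel so the range below terminates cleanly
    go func() {
        wg.Wait()
        close(results)
    }()

    for j := range results {
        fmt.Println(j)
    }
}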
See also: Go Concurrency Patterns: Pipelines and cancellation - The Go Blog
I'm trying to write my first web spider in Go. Its task is to crawl domains (and inspect their HTML) from the provided database query. The idea is to have no third-party dependencies (e.g. a message queue), or as few as possible, yet it has to be performant enough to crawl 5 million domains per day. I have approximately 150 million domains I need to check every month.
The very basic version is below. It runs in an "infinite loop", as in theory the crawl process would be endless.
func crawl(n time.Duration) {
    var wg sync.WaitGroup
    runtime.GOMAXPROCS(runtime.NumCPU())
    for range time.Tick(n * time.Second) {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // do the expensive work here - query db, crawl domain, inspect html
        }()
    }
    wg.Wait()
}

func main() {
    go crawl(1)
    select {}
}
Running this code on 4 CPU cores at the moment means it can perform at most 345,600 requests during 24 hours ((60 * 60 * 24) * 4) with the given threshold of 1s. At least that's my understanding :-) If my thinking is correct, then I will need to come up with a solution that is 14x faster to meet the daily requirement.
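(For scale: 5,000,000 requests over 86,400 seconds is roughly 58 requests per second, so with an average crawl time of about one second you would need on the order of 60 requests in flight at once. Since the work is network-bound, that number is largely independent of the core count.)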
I would appreciate your advice on making the crawler faster, but without resorting to a complicated stack setup or buying a server with more CPU cores.
Why have the timing component at all?
Just create a channel that you feed URLs to, then spawn N goroutines that loop over that channel and do the work.
Then just tweak the value of N until your CPU/memory is capped at around 90% utilization (to accommodate fluctuations in site response times).
something like this (on Play):
package main

import (
    "fmt"
    "sync"
)

var numWorkers = 10

func crawler(urls chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for u := range urls {
        fmt.Println(u)
    }
}

func main() {
    ch := make(chan string)
    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go crawler(ch, &wg)
    }
    ch <- "http://ibm.com"
    ch <- "http://google.com"
    close(ch)
    wg.Wait()
    fmt.Println("All Done")
}
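To keep the crawl running continuously off the database, the feed itself can come from a loop over query batches. A sketch under that assumption, where fetchNextDomains is a hypothetical stand-in for your DB query:

package main

import (
    "fmt"
    "sync"
)

// fetchNextDomains is hypothetical: it stands in for the poster's DB
// query and returns an empty slice when there is nothing left to crawl.
func fetchNextDomains() []string {
    return nil
}

func main() {
    ch := make(chan string)
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ { // tune N as described above
        wg.Add(1)
        go func() {
            defer wg.Done()
            for u := range ch {
                fmt.Println("crawl:", u) // the expensive work goes here
            }
        }()
    }
    // Producer: feed batches from the database until it runs dry.
    for {
        batch := fetchNextDomains()
        if len(batch) == 0 {
            break
        }
        for _, u := range batch {
            ch <- u // blocks until a worker is free
        }
    }
    close(ch)
    wg.Wait()
}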