Golang order output of goroutines

I have 16 goroutines which return output, which is typically a struct:
struct output{
index int,
description string,
}
Now all these 16 goroutines run in parallel, and the total number of output structs from all of them is expected to be around a million. I have used Go's basic sorting, but it is very expensive. Could someone suggest an approach to order the output by index, so that I can write the "description" field to a file in index order?
For instance, if a goroutine gives output {2, "Hello"}, {9, "Hey"}, {4, "Hola"}, my output file should contain
Hello
Hola
Hey
All these goroutines run in parallel and I have no control over the order of execution, hence I am passing the index to order the output at the end.

One thing to note before getting into the answer: your example code will not compile. To define a struct type in Go, you would need to change the syntax to:
type output struct {
    index       int
    description string
}
In terms of a potential solution to your problem: if you already reliably have unique indexes as well as the expected count of the result set, you should not have to do any sorting at all. Instead, synchronize the goroutines over a channel and insert each output into a pre-allocated slice at its respective index. You can then iterate over that slice to write the contents to a file. For example:
ch := make(chan output)   //each goroutine will write to this channel
wg := new(sync.WaitGroup) //wait group to sync all goroutines

//execute 16 goroutines
for i := 0; i < 16; i++ {
    wg.Add(1)
    go worker(ch, wg) //each worker func is expected to call wg.Done() when completing its portion of work
}

//create a "quit" channel that will be used to signal to the select statement below that your goroutines are all done
quit := make(chan bool)
go func() {
    wg.Wait()
    quit <- true
}()

//initialize a slice with length and capacity of 1 million, the expected result size mentioned in your question
sorted := make([]string, 1000000)

//use the for/select pattern to sync the results from your 16 goroutines and insert them into the sorted slice
for {
    select {
    case out := <-ch:
        //this is not robust - check notes below the example
        sorted[out.index] = out.description
    case <-quit:
        //implement a function you could pass the sorted slice to that will write the results
        // Ex: writeToFile(sorted)
        return
    }
}
A couple of notes on this solution: it depends on you knowing the size of the expected result set. If you do not know that size, then in the select statement you will need to check whether the index read from ch exceeds the length of the sorted slice, and allocate additional space before inserting, or your program will crash with an out-of-bounds error.
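If the count is unknown, one option (a sketch, not part of the original answer) is a small helper that grows the slice on demand; in the select case you would then write sorted = insertAt(sorted, out.index, out.description):

// insertAt places desc at index i, growing the slice as needed.
// It returns the (possibly reallocated) slice.
func insertAt(sorted []string, i int, desc string) []string {
    if i >= len(sorted) {
        grown := make([]string, i+1, 2*(i+1)) // extra capacity to limit reallocations
        copy(grown, sorted)
        sorted = grown
    }
    sorted[i] = desc
    return sorted
}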

You could use the module Ordered-concurrently to merge your inputs and then print them in order.
https://github.com/tejzpr/ordered-concurrently
Example - https://play.golang.org/p/hkcIuRHj63h
package main

import (
    concurrently "github.com/tejzpr/ordered-concurrently/v2"
    "log"
    "math/rand"
    "time"
)

type loadWorker int

// The work that needs to be performed
// The input type should implement the WorkFunction interface
func (w loadWorker) Run() interface{} {
    time.Sleep(time.Millisecond * time.Duration(rand.Intn(10)))
    return w
}

func main() {
    max := 10
    inputChan := make(chan concurrently.WorkFunction)
    output := concurrently.Process(inputChan, &concurrently.Options{PoolSize: 10, OutChannelBuffer: 10})
    go func() {
        for work := 0; work < max; work++ {
            inputChan <- loadWorker(work)
        }
        close(inputChan)
    }()
    for out := range output {
        log.Println(out.Value)
    }
}
Disclaimer: I'm the module creator

Related

Issue with goroutine and Waitgroup

I am iterating in a loop and calling a goroutine on an anonymous function, adding to a waitgroup on each iteration. I pass a string to the same anonymous function and append the value to slice a. Since I am looping 10,000 times, the length of the slice is expected to be 10,000, but I see random numbers. I am not sure what the issue is. Can anyone help me fix this problem?
Here is my code snippet
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg = new(sync.WaitGroup)
    var a []string
    for i := 0; i <= 10000; i++ {
        wg.Add(1)
        go func(s string) {
            a = append(a, s)
            wg.Done()
        }("MaxPayne")
    }
    wg.Wait()
    fmt.Println(len(a))
}
Notice how, when appending to a slice, you may actually create a new slice and then assign it back to the slice variable. So you have uncontrolled concurrent writes to the variable a. Concurrent writes to the same value are not safe in Go (or in most languages). To make them safe, you can serialize the writes with a mutex.
Try:
var lock sync.Mutex
var a []string
and
lock.Lock()
a = append(a, s)
lock.Unlock()
For more information about how a mutex works, see the tour and the sync package.
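For completeness, here is a minimal runnable version combining those snippets with the loop from the question (the bound is changed to i < 10000 so the expected length is exactly 10,000):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    var lock sync.Mutex
    var a []string
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        go func(s string) {
            defer wg.Done()
            lock.Lock()
            a = append(a, s) // only one goroutine appends at a time
            lock.Unlock()
        }("MaxPayne")
    }
    wg.Wait()
    fmt.Println(len(a)) // 10000
}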
Here is a pattern to achieve a similar result, but without needing a mutex and still being safe.
package main

import (
    "fmt"
    "sync"
)

func main() {
    const sliceSize = 10000
    var wg = new(sync.WaitGroup)
    var a = make([]string, sliceSize)
    for i := 0; i < sliceSize; i++ {
        wg.Add(1)
        go func(s string, index int) {
            a[index] = s
            wg.Done()
        }("MaxPayne", i)
    }
    wg.Wait()
}
This isn't exactly the same as your other program, but here's what it does:
1. Create a slice that already has the desired size of 10,000 (each element is an empty string at this point).
2. For each number 0...9999, create a new goroutine that is given a specific index to write a specific string into.
3. After all goroutines have exited and the waitgroup is done waiting, we know that each index of the slice has successfully been filled.
The memory access is now safe even without a mutex, because each goroutine is only writing to its respective index (and each goroutine gets a unique index). Therefore, none of these concurrent memory writes conflict with each other. After initially creating the slice with the desired size, the variable a itself doesn't need to be assigned to again, so the original memory race is eliminated.
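If you want to see the difference for yourself, the race detector will flag the append-based version and stay quiet on this one (assuming the file is saved as main.go):
$ go run -race main.go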

length of slice vary while already using waitgroup

I have a hard time understanding concurrency/parallelism. In my code I made a loop of 5 cycles. Inside the loop I added wg.Add(1), so in total I have 5 Adds. Here's the code:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var list []int
    wg := sync.WaitGroup{}
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(c *[]int, i int) {
            *c = append(*c, i)
            wg.Done()
        }(&list, i)
    }
    wg.Wait()
    fmt.Println(len(list))
}
The main func waits until all the goroutines finish, but when I try to print the length of the slice I get random results, e.g. 1, 3, etc. Is there something missing for it to get the expected result, i.e. 5?
Is there something missing for it to get the expected result, i.e. 5?
Yes, proper synchronization. If multiple goroutines access the same variable where at least one of them is a write, you need explicit synchronization.
Your example can be "secured" with a single mutex:
var list []int
wg := sync.WaitGroup{}
mu := &sync.Mutex{} // A mutex
for i := 0; i < 5; i++ {
    wg.Add(1)
    go func(c *[]int, i int) {
        mu.Lock() // Must lock before accessing the shared resource
        *c = append(*c, i)
        mu.Unlock() // Unlock when we're done with it
        wg.Done()
    }(&list, i)
}
wg.Wait()
fmt.Println(len(list))
This will always print 5.
Note: the same slice is read at the end to print its length, yet we are not using the mutex there. This is because the use of the waitgroup ensures that we can only get to that point after all goroutines that modify it have completed their job, so a data race cannot occur there. But in general both reads and writes have to be synchronized.
See possible duplicates:
go routine not collecting all objects from channel
Server instances with multiple users
Why does this code cause data race?
How safe are Golang maps for concurrent Read/Write operations?
golang struct concurrent read and write without Lock is also running ok?
See related questions:
Can I concurrently write different slice elements
If I am using channels properly should I need to use mutexes?
Is it safe to read a function pointer concurrently without a lock?
Concurrent access to maps with 'range' in Go

memory pooling and buffered channel with multiple goroutines

I'm creating a program which creates random bson.M documents and inserts them into a database.
The main goroutine generates the documents and pushes them to a buffered channel. At the same time, two goroutines fetch the documents from the channel and insert them into the database.
This process takes a lot of memory and puts too much pressure on the garbage collector, so I'm trying to implement a memory pool to limit the number of allocations.
Here is what I have so far:
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"

    "gopkg.in/mgo.v2/bson"
)

type List struct {
    L []bson.M
}

func main() {
    var rndSrc = rand.NewSource(time.Now().UnixNano())

    pool := sync.Pool{
        New: func() interface{} {
            l := make([]bson.M, 1000)
            for i := range l {
                m := bson.M{}
                l[i] = m
            }
            return &List{L: l}
        },
    }

    // buffered channel to store generated bson.M docs
    var record = make(chan List, 3)

    // start workers to insert docs in database
    for i := 0; i < 2; i++ {
        go func() {
            for r := range record {
                fmt.Printf("first: %v\n", r.L[0])
                // do the insert etc.
            }
        }()
    }

    // feed the channel
    for i := 0; i < 100; i++ {
        // get an object from the pool instead of creating a new one
        list := pool.Get().(*List)
        // regenerate the documents
        for j := range list.L {
            list.L[j]["key1"] = rndSrc.Int63()
        }
        // push the docs to the channel, and return them to the pool
        record <- *list
        pool.Put(list)
    }
}
But it looks like one List is used 4 times before being regenerated:
> go run test.go
first: map[key1:943279487605002381 key2:4444061964749643436]
first: map[key1:943279487605002381 key2:4444061964749643436]
first: map[key1:943279487605002381 key2:4444061964749643436]
first: map[key1:943279487605002381 key2:4444061964749643436]
first: map[key1:8767993090152084935 key2:8807650676784718781]
...
Why isn't the list regenerated each time ? How can I fix this ?
The problem is that you have created a buffered channel with var record = make(chan List, 3). Hence this code:
record <- *list
pool.Put(list)
may return immediately, and the entry will be placed back into the pool before it has been consumed. Hence the underlying slice will likely be modified in another loop iteration before your consumer has had a chance to consume it. Although you are sending List as a value, remember that the []bson.M slice header points to an allocated backing array and will still be pointing to the same memory when you send a new List value. That is why you are seeing the duplicate output.
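A tiny standalone illustration of that aliasing (hypothetical values, not from the original program): copying a struct that contains a slice copies only the slice header, so both copies share the same backing array.

package main

import "fmt"

type List struct{ L []int }

func main() {
    a := List{L: []int{1, 2, 3}}
    b := a              // copies the struct value, but b.L shares a's backing array
    b.L[0] = 99
    fmt.Println(a.L[0]) // prints 99: the "copy" still sees the same memory
}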
To fix, modify your channel to send a List pointer, make(chan *List, 3), and change your consumer to put the entry back in the pool once finished, e.g.:
for r := range record {
    fmt.Printf("first: %v\n", r.L[0])
    // do the insert etc.
    pool.Put(r) // even if an error occurs
}
Your producer should then send the pointer, with the pool.Put removed, i.e.:
record <- list
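Putting it together, one possible shape of the corrected program (a sketch: the WaitGroup and close(record) are additions so the example exits cleanly; the actual insert logic is still elided as in the question):

package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"

    "gopkg.in/mgo.v2/bson"
)

type List struct {
    L []bson.M
}

func main() {
    var rndSrc = rand.NewSource(time.Now().UnixNano())

    pool := sync.Pool{
        New: func() interface{} {
            l := make([]bson.M, 1000)
            for i := range l {
                l[i] = bson.M{}
            }
            return &List{L: l}
        },
    }

    // the channel now carries pointers, so producer and consumer see the same List
    record := make(chan *List, 3)

    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for r := range record {
                fmt.Printf("first: %v\n", r.L[0])
                // do the insert etc.
                pool.Put(r) // return to the pool only after the docs are consumed
            }
        }()
    }

    for i := 0; i < 100; i++ {
        list := pool.Get().(*List)
        for j := range list.L {
            list.L[j]["key1"] = rndSrc.Int63()
        }
        record <- list // no pool.Put here
    }
    close(record)
    wg.Wait()
}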

How to search a huge slice of maps[string]string concurrently

I need to search a huge slice of map[string]string. My thought was that this is a good chance to use Go's channels and goroutines.
The plan was to divide the slice into parts and search them in parallel.
But I was kind of shocked that my parallel version timed out while the search of the whole slice did the trick.
I am not sure what I am doing wrong. Below is the code I used to test the concept. The real code would involve more complexity.
//Search for a given term
//This function gets the data that needs to be searched
//and the search term, and it returns the matched maps.
//The data is pretty simple; each map contains { key: someText }
func Search(data []map[string]string, term string) []map[string]string {
    set := []map[string]string{}
    for _, v := range data {
        if v["key"] == term {
            set = append(set, v)
        }
    }
    return set
}
So this works pretty well to search the slice of maps for a given SearchTerm.
Now I thought that if my slice had around 20K entries, I would like to do the search in parallel.
// All searches all records concurrently
// Has the same function signature as the Search function,
// but the main task is to fan out the slice in 5 parts and search
// in parallel
func All(data []map[string]string, term string) []map[string]string {
    countOfSlices := 5
    part := len(data) / countOfSlices
    fmt.Printf("Size of the data:%v\n", len(data))
    fmt.Printf("Fragment Size:%v\n", part)
    timeout := time.After(60000 * time.Millisecond)
    c := make(chan []map[string]string)
    for i := 0; i < countOfSlices; i++ {
        // Fragments of the array passed on to the search method
        go func() { c <- Search(data[(part*i):(part*(i+1))], term) }()
    }
    result := []map[string]string{}
    for i := 0; i < part-1; i++ {
        select {
        case records := <-c:
            result = append(result, records...)
        case <-timeout:
            fmt.Println("timed out!")
            return result
        }
    }
    return result
}
Here are my tests:
I have a function to generate my test data and 2 tests.
func GenerateTestData(search string) ([]map[string]string, int) {
    rand.Seed(time.Now().UTC().UnixNano())
    strin := []string{"String One", "This", "String Two", "String Three", "String Four", "String Five"}
    var matchCount int
    numOfRecords := 20000
    set := []map[string]string{}
    for i := 0; i < numOfRecords; i++ {
        p := rand.Intn(len(strin))
        s := strin[p]
        if s == search {
            matchCount++
        }
        set = append(set, map[string]string{"key": s})
    }
    return set, matchCount
}
The 2 tests: The first just traverses the slice and the second searches in parallel
func TestSearchItem(t *testing.T) {
    tests := []struct {
        InSearchTerm string
        Fn           func(data []map[string]string, term string) []map[string]string
    }{
        {
            InSearchTerm: "This",
            Fn:           Search,
        },
        {
            InSearchTerm: "This",
            Fn:           All,
        },
    }
    for i, test := range tests {
        startTime := time.Now()
        data, expectedMatchCount := GenerateTestData(test.InSearchTerm)
        result := test.Fn(data, test.InSearchTerm)
        fmt.Printf("Test: [%v]:\nTime: %v \n\n", i+1, time.Since(startTime))
        assert.Equal(t, len(result), expectedMatchCount, "expected: %v to be: %v", len(result), expectedMatchCount)
    }
}
It would be great if someone could explain to me why my parallel code is so slow. What is wrong with the code, what am I missing here, and what is the recommended way to search huge in-memory slices (50K+ entries)?
This looks like just a simple typo. The problem is that you divide your original big slice into 5 pieces (countOfSlices), and you properly launch 5 goroutines to search each part:
for i := 0; i < countOfSlices; i++ {
    // Fragments of the array passed on to the search method
    go func() { c <- Search(data[(part*i):(part*(i+1))], term) }()
}
This means you should expect 5 results, but you don't: you wait for part-1 (here 4000-1 = 3999) results:
for i := 0; i < part-1; i++ {
    select {
    case records := <-c:
        result = append(result, records...)
    case <-timeout:
        fmt.Println("timed out!")
        return result
    }
}
Obviously, if you only launched 5 goroutines, each of which delivers a single result, you can only expect as many (5). And since your loop waits for a lot more (which will never come), it times out as expected.
Change the condition to this:
for i := 0; i < countOfSlices; i++ {
    // ...
}
Concurrency is not parallelism. Go is a massively concurrent language, not a parallel one. Even on a multicore machine you will pay for data exchange between CPUs when accessing your shared slice from computation threads. You can take advantage of concurrency by searching just the first match, for example, or by doing something with the results (say printing them, writing them to some Writer, or sorting them) while you continue to search.
func Search(data []map[string]string, term string, ch chan map[string]string) {
    for _, v := range data {
        if v["key"] == term {
            ch <- v
        }
    }
}

func main() {
    ...
    go Search(datapart1, term, ch)
    go Search(datapart2, term, ch)
    go Search(datapart3, term, ch)
    ...
    for vv := range ch {
        fmt.Println(vv) //do something with each match concurrently
    }
    ...
}
The recommended way to search a huge slice would be to keep it sorted, or to build a binary tree. There is no magic.
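To illustrate the sorted-data idea, here is a small sketch (not from the question) that builds a sorted index of the keys once and then answers lookups with a binary search via sort.Search:

package main

import (
    "fmt"
    "sort"
)

func main() {
    data := []map[string]string{
        {"key": "Hola"}, {"key": "Hello"}, {"key": "Hey"},
    }

    // build an index of keys once, sorted, remembering the original positions
    type entry struct {
        key string
        pos int
    }
    index := make([]entry, len(data))
    for i, m := range data {
        index[i] = entry{key: m["key"], pos: i}
    }
    sort.Slice(index, func(i, j int) bool { return index[i].key < index[j].key })

    // each lookup is now O(log n) instead of O(n)
    term := "Hey"
    i := sort.Search(len(index), func(i int) bool { return index[i].key >= term })
    for ; i < len(index) && index[i].key == term; i++ {
        fmt.Println(data[index[i].pos])
    }
}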
There are two problems: as icza notes, you never finish the select loop because you need to use countOfSlices, and your call to Search will also not get the data you want, because the slice fragment is computed inside the go func() using the loop variable i; compute the fragment outside the closure and pass it in before launching the goroutine.
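A sketch of All with both fixes applied (it assumes the Search function and imports from the question, and like the original it ignores any remainder when len(data) is not evenly divisible):

// All fans the search out over countOfSlices goroutines and merges the results.
func All(data []map[string]string, term string) []map[string]string {
    countOfSlices := 5
    part := len(data) / countOfSlices
    timeout := time.After(60 * time.Second)
    c := make(chan []map[string]string)
    for i := 0; i < countOfSlices; i++ {
        fragment := data[part*i : part*(i+1)] // computed before the goroutine starts
        go func(frag []map[string]string) {
            c <- Search(frag, term)
        }(fragment)
    }
    result := []map[string]string{}
    for i := 0; i < countOfSlices; i++ { // one result per goroutine
        select {
        case records := <-c:
            result = append(result, records...)
        case <-timeout:
            fmt.Println("timed out!")
            return result
        }
    }
    return result
}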
You might find that it still isn't faster to do this particular work in parallel with such simple data (perhaps with more complex data on a machine with lots of cores it would be worthwhile).
Be sure when testing that you try swapping the order of your test runs - you might be surprised by the results! Also try the benchmarking tools available in the testing package, which run your code many times and average the results; this might help you get a better idea of whether the fan-out speeds things up.
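A minimal benchmark sketch (assuming Search, All, and GenerateTestData live in the same package; the package name here is arbitrary), which you would run with go test -bench .:

package search

import "testing"

func BenchmarkSearch(b *testing.B) {
    data, _ := GenerateTestData("This")
    b.ResetTimer() // exclude data generation from the measurement
    for i := 0; i < b.N; i++ {
        Search(data, "This")
    }
}

func BenchmarkAll(b *testing.B) {
    data, _ := GenerateTestData("This")
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        All(data, "This")
    }
}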

what can create huge overhead of goroutines?

For an assignment we are using Go, and one of the things we are going to do is parse a UniProt database file line-by-line to collect UniProt records.
I prefer not to share too much code, but I have a working code snippet that parses such a file (2.5 GB) correctly in 48 s (measured using the time Go package). It parses the file iteratively, adding lines to a record until a record end signal is reached (a full record), at which point metadata for the record is created. Then the record string is nulled, and a new record is collected line-by-line. Then I thought I would try to use goroutines.
I got some tips from Stack Overflow before, and to the original code I simply added a function to handle everything concerning the metadata creation.
So, the code does the following:
1. create an empty record,
2. iterate the file and add lines to the record,
3. if a record stop signal is found (now we have a full record), give it to a goroutine to create the metadata,
4. null the record string and continue from 2).
I also added a sync.WaitGroup to make sure that I waited (at the end) for each routine to finish. I thought that this would actually lower the time spent on parsing the database file, as it would continue to parse while the goroutines worked on each record. However, the code seems to run for more than 20 minutes, indicating that something is wrong or the overhead has gone crazy. Any suggestions?
package main

import (
    "bufio"
    "crypto/sha1"
    "fmt"
    "io"
    "log"
    "os"
    "strings"
    "sync"
    "time"
)

type producer struct {
    parser uniprot
}

type unit struct {
    tag string
}

type uniprot struct {
    filenames     []string
    recordUnits   chan unit
    recordStrings map[string]string
}

func main() {
    p := producer{parser: uniprot{}}
    p.parser.recordUnits = make(chan unit, 1000000)
    p.parser.recordStrings = make(map[string]string)
    p.parser.collectRecords(os.Args[1])
}

func (u *uniprot) collectRecords(name string) {
    fmt.Println("file to open ", name)
    t0 := time.Now()
    wg := new(sync.WaitGroup)
    record := []string{}
    file, err := os.Open(name)
    errorCheck(err)
    scanner := bufio.NewScanner(file)
    for scanner.Scan() { //Scan the file
        retText := scanner.Text()
        if strings.HasPrefix(retText, "//") {
            wg.Add(1)
            go u.handleRecord(record, wg)
            record = []string{}
        } else {
            record = append(record, retText)
        }
    }
    file.Close()
    wg.Wait()
    t1 := time.Now()
    fmt.Println(t1.Sub(t0))
}

func (u *uniprot) handleRecord(record []string, wg *sync.WaitGroup) {
    defer wg.Done()
    recString := strings.Join(record, "\n")
    t := hashfunc(recString)
    u.recordUnits <- unit{tag: t}
    u.recordStrings[t] = recString
}

func hashfunc(record string) (hashtag string) {
    hash := sha1.New()
    io.WriteString(hash, record)
    hashtag = string(hash.Sum(nil))
    return
}

func errorCheck(err error) {
    if err != nil {
        log.Fatal(err)
    }
}
First of all: your code is not thread-safe, mainly because you're accessing a hashmap concurrently. Maps are not safe for concurrent use in Go and need to be locked. The faulty line in your code:
u.recordStrings[t] = recString
As this will blow up when you're running Go with GOMAXPROCS > 1, I'm assuming that you're not doing that. Make sure you're running your application with GOMAXPROCS=2 or higher to achieve parallelism. The default value is 1, so your code runs on a single OS thread which, of course, can't be scheduled on two CPUs or CPU cores simultaneously. Example:
$ GOMAXPROCS=2 go run udb.go uniprot_sprot_viruses.dat
At last: pull the values from the channel, or otherwise your program will not terminate. You're creating a deadlock if the number of goroutines exceeds your limit. I tested with a 76 MiB file of data; you said your file was about 2.5 GB. I have 16347 entries. Assuming linear growth, your file will exceed 1e6 records, so there are not enough slots in the channel and your program will deadlock, giving no result while accumulating goroutines which don't run, only to fail (miserably) at the end.
So the solution should be to add a goroutine which pulls the values from the channel and does something with them.
As a side note: If you're worried about performance, do not use strings as they're always copied. Use []byte instead.
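A minimal sketch of such a consumer, built on the types from the question (locking recordStrings is a separate fix that is still needed):

// startConsumer drains recordUnits in the background and reports when it is done.
// Close u.recordUnits after wg.Wait() so the loop below can finish.
func (u *uniprot) startConsumer() <-chan struct{} {
    done := make(chan struct{})
    go func() {
        for un := range u.recordUnits {
            // do something with each unit here (index it, write it out, ...)
            _ = un.tag
        }
        close(done)
    }()
    return done
}

In collectRecords you would call done := u.startConsumer() before scanning, and after wg.Wait() call close(u.recordUnits) followed by <-done.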