I want to make Reader.Read concurrent with channel communication.
So I came up with two ways to run it:
1:
type ReturnRead struct {
n int
err error
}
type ReadGoSt struct {
Returnc <-chan ReturnRead
Nextc chan struct{}
}
func (st *ReadGoSt) Close() {
defer func() {
recover()
}()
close(st.Nextc)
}
func ReadGo(r io.Reader, b []byte) *ReadGoSt {
returnc := make(chan ReturnRead)
nextc := make(chan struct{})
go func() {
for range nextc {
n, err := r.Read(b)
returnc <- ReturnRead{n, err}
if err != nil {
return
}
}
}()
return &ReadGoSt{returnc, nextc}
}
2:
func ReadGo(r io.Reader, b []byte) <-chan ReturnRead {
returnc := make(chan ReturnRead)
go func() {
n, err := r.Read(b)
returnc <- ReturnRead{n, err}
}()
return returnc
}
I think code 2 creates too much overhead.
Which is the better code, 1 or 2?
Code 1 is better and probably faster; code 2 will only ever read once. But I don't think either solution is the best: you should loop over the read and send back only the bytes that were actually read.
Something like : http://play.golang.org/p/zRPXOtdgWD
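I can't say exactly what that playground snippet contains, but a minimal sketch of the idea (the ReadResult type and the chunkSize parameter are my own naming, not from the original) might look like:

// ReadResult carries one chunk read from the underlying reader.
type ReadResult struct {
    Data []byte
    Err  error
}

// ReadGo loops over r.Read in a goroutine and sends back only the bytes
// that were actually read. The channel is closed when the reader is
// exhausted or fails, so the caller can simply range over it.
func ReadGo(r io.Reader, chunkSize int) <-chan ReadResult {
    out := make(chan ReadResult)
    go func() {
        defer close(out)
        for {
            buf := make([]byte, chunkSize)
            n, err := r.Read(buf)
            if n > 0 {
                out <- ReadResult{Data: buf[:n]}
            }
            if err != nil {
                if err != io.EOF {
                    out <- ReadResult{Err: err}
                }
                return
            }
        }
    }()
    return out
}

The caller then just does for res := range ReadGo(r, 4096) { ... } and stops when the channel closes.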
I am having an issue using a WaitGroup with a buffered channel. The problem is that the WaitGroup completes before the channel is fully read, so the channel is only half read and the loop breaks partway through.
func main() {
var wg sync.WaitGroup
var err error
start := time.Now()
students := make([]studentDetails, 0)
studentCh := make(chan studentDetail, 10000)
errorCh := make(chan error, 1)
wg.Add(1)
go s.getDetailStudents(rCtx, studentCh , errorCh, &wg, s.Link, false)
go func(ch chan studentDetail, e chan error) {
LOOP:
for {
select {
case p, ok := <-ch:
if ok {
L.Printf("Links %s: [%s]\n", p.title, p.link)
students = append(students, p)
} else {
L.Print("Closed channel")
break LOOP
}
case err = <-e:
if err != nil {
break
}
}
}
}(studentCh, errorCh)
wg.Wait()
close(studentCh)
close(errorCh)
L.Warnln("closed: all wait-groups completed!")
L.Warnf("total items fetched: %d", len(students))
elapsed := time.Since(start)
L.Warnf("operation took %s", elapsed)
}
The problem is that this function is recursive: it makes an HTTP call to fetch students and then makes more calls depending on a condition.
func (s Student) getDetailStudents(rCtx context.Context, content chan<- studentDetail, errorCh chan<- error, wg *sync.WaitGroup, url string, subSection bool) {
util.MustNotNil(rCtx)
L := logger.GetLogger(rCtx)
defer func() {
L.Println("Closing all waitgroup!")
wg.Done()
}()
wc := getWC()
httpClient := wc.Registry.MustHTTPClient()
res, err := httpClient.Get(url)
if err != nil {
L.Fatal(err)
}
defer res.Body.Close()
if res.StatusCode != 200 {
L.Errorf("status code error: %d %s", res.StatusCode, res.Status)
errorCh <- errors.New("service_status_code")
return
}
// parse response and return error if found some through errorCh as done above.
// decide page subSection based on response if it is more.
if !subSection {
wg.Add(1)
go s.getDetailStudents(rCtx, content, errorCh, wg, link, true)
// L.Warnf("total pages found %d", pageSub.Length()+1)
}
// Find students from response list and parse each Student
students := s.parseStudentItemList(rCtx, item)
for _, student := range students {
content <- student
}
L.Warnf("Calling HTTP Service for %q with total %d record", url, elementsSub.Length())
}
Variable names have been changed to avoid exposing the original code base.
The problem is that only a random subset of the students gets read before the WaitGroup completes. I expect execution to hold until all students are read; in case of an error it should break as soon as the error is encountered.
You need to know when the receiving goroutine completes. The WaitGroup does that for the generating goroutine. So, you can use two waitgroups:
var wgReader sync.WaitGroup
wg.Add(1)
go s.getDetailStudents(rCtx, studentCh, errorCh, &wg, s.Link, false)
wgReader.Add(1)
go func(ch chan studentDetail, e chan error) {
defer wgReader.Done()
...
}(studentCh, errorCh)
wg.Wait()
close(studentCh)
close(errorCh)
wgReader.Wait() // Wait for the readers to complete
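A self-contained sketch of this two-WaitGroup shape, with a stand-in producer and plain ints instead of the question's student details:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg, wgReader sync.WaitGroup
    ch := make(chan int, 100)

    // Producer side, tracked by wg.
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < 5; i++ {
            ch <- i
        }
    }()

    // Reader side, tracked by wgReader.
    wgReader.Add(1)
    go func() {
        defer wgReader.Done()
        for v := range ch {
            fmt.Println("read", v)
        }
    }()

    wg.Wait()       // all producers are done, nothing will be sent anymore
    close(ch)       // lets the reader's range loop finish (it drains the buffer first)
    wgReader.Wait() // main does not exit until the reader has consumed everything
}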
Since you are using buffered channels, you can retrieve the remaining values after closing the channel. You will also need a mechanism to prevent your main function from exiting too early while the reader is still doing work, as #Burak Serdar has advised.
I restructured the code to give a working example; it should get the point across.
package main
import (
"context"
"log"
"sync"
"time"
)
type studentDetails struct {
title string
link string
}
func main() {
var wg sync.WaitGroup
var err error
students := make([]studentDetails, 0)
studentCh := make(chan studentDetails, 10000)
errorCh := make(chan error, 1)
start := time.Now()
wg.Add(1)
go getDetailStudents(context.TODO(), studentCh, errorCh, &wg, "http://example.com", false)
done := wrapWait(&wg) // create the wait channel once, so the close below can only fire once
LOOP:
for {
select {
case p, ok := <-studentCh:
if ok {
log.Printf("Links %s: [%s]\n", p.title, p.link)
students = append(students, p)
} else {
log.Println("Draining student channel")
for p := range studentCh {
log.Printf("Links %s: [%s]\n", p.title, p.link)
students = append(students, p)
}
break LOOP
}
case err = <-errorCh:
if err != nil {
break LOOP
}
case <-done:
close(studentCh)
}
}
close(errorCh)
elapsed := time.Since(start)
log.Printf("operation took %s", elapsed)
}
func getDetailStudents(rCtx context.Context, content chan<- studentDetails, errorCh chan<- error, wg *sync.WaitGroup, url string, subSection bool) {
defer func() {
log.Println("Closing")
wg.Done()
}()
if !subSection {
wg.Add(1)
go getDetailStudents(rCtx, content, errorCh, wg, url, true)
// L.Warnf("total pages found %d", pageSub.Length()+1)
}
content <- studentDetails{
title: "title",
link: "link",
}
}
// helper function to allow using WaitGroup in a select
func wrapWait(wg *sync.WaitGroup) <-chan struct{} {
out := make(chan struct{})
go func() {
wg.Wait()
out <- struct{}{}
}()
return out
}
wg.Add(1)
go func(){
defer wg.Done()
// I do not think that you need a recursive function.
// this function is overcomplicated.
s.getDetailStudents(rCtx, studentCh , errorCh, &wg, s.Link, false)
}(...)
wg.Add(1)
go func(ch chan studentDetail, e chan error) {
defer wg.Done()
...
}(...)
wg.Wait()
close(studentCh)
close(errorCh)
This should solve the problem. The s.getDetailStudents function must be simplified; making it recursive does not have any benefit.
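For illustration only, here is one possible non-recursive shape as a self-contained sketch; the studentDetail type, the fetchStudents helper, and the URL are placeholders, not the real code:

package main

import (
    "fmt"
    "sync"
)

type studentDetail struct{ title, link string }

// fetchStudents is a stand-in for the HTTP call plus parsing in the question;
// it returns the students found on a page and any further page URLs to visit.
func fetchStudents(url string) ([]studentDetail, []string, error) {
    return []studentDetail{{title: "student on " + url, link: url}}, nil, nil
}

// getDetailStudents walks pages iteratively with a work queue instead of recursing.
func getDetailStudents(startURL string, content chan<- studentDetail, errorCh chan<- error, wg *sync.WaitGroup) {
    defer wg.Done()
    queue := []string{startURL}
    for len(queue) > 0 {
        url := queue[0]
        queue = queue[1:]
        students, morePages, err := fetchStudents(url)
        if err != nil {
            errorCh <- err
            return
        }
        queue = append(queue, morePages...) // subsections become queue entries, not recursive calls
        for _, st := range students {
            content <- st
        }
    }
}

func main() {
    var wg sync.WaitGroup
    content := make(chan studentDetail, 100)
    errorCh := make(chan error, 1)
    wg.Add(1)
    go getDetailStudents("http://example.com", content, errorCh, &wg)
    go func() { wg.Wait(); close(content) }()
    for st := range content {
        fmt.Println(st.title, st.link)
    }
}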
I'm looking for the simplest code for looping over a dataset in parallel. The requirements are that the number of goroutines is fixed and that they can return an error code. The following is a quick attempt which doesn't work: the loops deadlock because both goroutines end up blocking on the error channel.
package main
import (
"fmt"
"sync"
)
func worker(wg *sync.WaitGroup, intChan chan int, errChan chan error) {
defer wg.Done()
for i := range intChan {
fmt.Printf("Got %d\n", i)
errChan <- nil
}
}
func main() {
ints := []int{1,2,3,4,5,6,7,8,9,10}
intChan := make(chan int)
errChan := make(chan error)
wg := new(sync.WaitGroup)
for i := 0; i < 2; i++ {
wg.Add(1)
go worker(wg, intChan, errChan)
}
for i := range ints {
intChan <- i
}
for range ints {
err := <- errChan
fmt.Printf("Error: %v\n", err)
}
close(intChan)
wg.Wait()
}
What is the simplest pattern for doing this?
Listen for errors in a goroutine:
go func() {
for err:=range errChan {
// Deal with err
}
}()
for i := 0; i < 2; i++ {
wg.Add(1)
go worker(wg, intChan, errChan)
}
for i := range ints {
intChan <- i
}
close(intChan)
wg.Wait()
close(errChan) // Error listener goroutine terminates after this
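Putting the pieces together, a complete version could look like this. It is a sketch: the nil error sends from the question are kept just to show the flow, and a done channel makes sure all errors are printed before main exits.

package main

import (
    "fmt"
    "sync"
)

func worker(wg *sync.WaitGroup, intChan chan int, errChan chan error) {
    defer wg.Done()
    for i := range intChan {
        fmt.Printf("Got %d\n", i)
        errChan <- nil
    }
}

func main() {
    ints := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    intChan := make(chan int)
    errChan := make(chan error)
    wg := new(sync.WaitGroup)

    // Listen for errors in a separate goroutine so workers never block on errChan.
    done := make(chan struct{})
    go func() {
        defer close(done)
        for err := range errChan {
            fmt.Printf("Error: %v\n", err)
        }
    }()

    for i := 0; i < 2; i++ {
        wg.Add(1)
        go worker(wg, intChan, errChan)
    }
    for _, v := range ints {
        intChan <- v
    }
    close(intChan) // workers finish their range loops
    wg.Wait()      // no worker will send on errChan after this
    close(errChan) // the error listener's range loop ends
    <-done         // all errors have been printed
}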
I am playing around with Golang and I created this little app to make several concurrent api calls using goroutines.
While the app works, after the calls complete the app gets stuck, which makes sense because it cannot exit the range c loop, since the channel is never closed.
I am not sure where to better close the channel in this pattern.
package main
import "fmt"
import "net/http"
func main() {
links := []string{
"https://github.com/fabpot",
"https://github.com/andrew",
"https://github.com/taylorotwell",
"https://github.com/egoist",
"https://github.com/HugoGiraudel",
}
checkUrls(links)
}
func checkUrls(urls []string) {
c := make(chan string)
for _, link := range urls {
go checkUrl(link, c)
}
for msg := range c {
fmt.Println(msg)
}
close(c) //this won't get hit
}
func checkUrl(url string, c chan string) {
_, err := http.Get(url)
if err != nil {
c <- "We could not reach:" + url
} else {
c <- "Success reaching the website:" + url
}
}
You close a channel when there are no more values to send, so in this case it's when all checkUrl goroutines have completed.
var wg sync.WaitGroup
func checkUrls(urls []string) {
c := make(chan string)
for _, link := range urls {
wg.Add(1)
go checkUrl(link, c)
}
go func() {
wg.Wait()
close(c)
}()
for msg := range c {
fmt.Println(msg)
}
}
func checkUrl(url string, c chan string) {
defer wg.Done()
_, err := http.Get(url)
if err != nil {
c <- "We could not reach:" + url
} else {
c <- "Success reaching the website:" + url
}
}
(Note that the error from http.Get only reflects connection and protocol errors. It will not contain HTTP server errors; you probably expect those too, seeing how you're checking specific paths and not just hosts.)
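If you also want to treat HTTP error statuses as failures, checkUrl could be extended along these lines (the 400 threshold and the messages are just examples):

func checkUrl(url string, c chan string) {
    defer wg.Done()
    resp, err := http.Get(url)
    if err != nil {
        c <- "We could not reach:" + url
        return
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 400 { // treat 4xx/5xx responses as failures too
        c <- "Server returned an error for " + url + ": " + resp.Status
        return
    }
    c <- "Success reaching the website:" + url
}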
When writing programs in Go using channels and goroutines, always think about who (which function) owns a channel. I prefer the practice of letting the function that owns a channel be the one to close it. If I were to write this, I would do it as shown below.
Note: a better way to handle situations like this is the fan-out, fan-in concurrency pattern; see Go Concurrency Patterns: https://blog.golang.org/pipelines
package main
import "fmt"
import "net/http"
import "sync"
func main() {
links := []string{
"https://github.com/fabpot",
"https://github.com/andrew",
"https://github.com/taylorotwell",
"https://github.com/egoist",
"https://github.com/HugoGiraudel",
}
processURLS(links)
fmt.Println("End of Main")
}
func processURLS(links []string) {
resultsChan := checkUrls(links)
for msg := range resultsChan {
fmt.Println(msg)
}
}
func checkUrls(urls []string) chan string {
outChan := make(chan string)
go func(urls []string) {
defer close(outChan)
var wg sync.WaitGroup
for _, url := range urls {
wg.Add(1)
go checkUrl(&wg, url, outChan)
}
wg.Wait()
}(urls)
return outChan
}
func checkUrl(wg *sync.WaitGroup, url string, c chan string) {
defer wg.Done()
_, err := http.Get(url)
if err != nil {
c <- "We could not reach:" + url
} else {
c <- "Success reaching the website:" + url
}
}
In a function definition, if a channel is an argument without a direction, does it have to send or receive something?
func makeRequest(url string, ch chan<- string, results chan<- string) {
start := time.Now()
resp, err := http.Get(url)
defer resp.Body.Close()
if err != nil {
fmt.Printf("%v", err)
}
resp, err = http.Post(url, "text/plain", bytes.NewBuffer([]byte("Hey")))
defer resp.Body.Close()
secs := time.Since(start).Seconds()
if err != nil {
fmt.Printf("%v", err)
}
// Cannot move past this.
ch <- fmt.Sprintf("%f", secs)
results <- <- ch
}
func MakeRequestHelper(url string, ch chan string, results chan string, iterations int) {
for i := 0; i < iterations; i++ {
makeRequest(url, ch, results)
}
for i := 0; i < iterations; i++ {
fmt.Println(<-ch)
}
}
func main() {
args := os.Args[1:]
threadString := args[0]
iterationString := args[1]
url := args[2]
threads, err := strconv.Atoi(threadString)
if err != nil {
fmt.Printf("%v", err)
}
iterations, err := strconv.Atoi(iterationString)
if err != nil {
fmt.Printf("%v", err)
}
channels := make([]chan string, 100)
for i := range channels {
channels[i] = make(chan string)
}
// results aggregate all the things received by channels in all goroutines
results := make(chan string, iterations*threads)
for i := 0; i < threads; i++ {
go MakeRequestHelper(url, channels[i], results, iterations)
}
resultSlice := make([]string, threads*iterations)
for i := 0; i < threads*iterations; i++ {
resultSlice[i] = <-results
}
}
In the above code, ch <- or <-results seems to block every goroutine that executes makeRequest.
I am new to Go's concurrency model. I understand that sending to and receiving from a channel blocks, but I find it difficult to see what is blocking what in this code.
I'm not really sure what you are doing... It seems really convoluted. I suggest you read up on how to use channels.
https://tour.golang.org/concurrency/2
That being said, you have so much going on in your code that it was easier to just gut it down to something a bit simpler (it can be simplified further). I left comments to help you understand the code.
package main
import (
"fmt"
"io/ioutil"
"log"
"net/http"
"sync"
"time"
)
// using structs is a nice way to organize your code
type Worker struct {
wg sync.WaitGroup
semaphore chan struct{}
result chan Result
client http.Client
}
// group the return values so that you don't have to send on many channels
type Result struct {
duration float64
results string
}
// closing your channels will stop the for loop in main
func (w *Worker) Close() {
close(w.semaphore)
close(w.result)
}
func (w *Worker) MakeRequest(url string) {
// a semaphore is a simple way to limit the number of goroutines running at any single point in time
// google them, Go uses them often
w.semaphore <- struct{}{}
defer func() {
w.wg.Done()
<-w.semaphore
}()
start := time.Now()
resp, err := w.client.Get(url)
if err != nil {
log.Println("error", err)
return
}
defer resp.Body.Close()
// I don't have an example where I also need to POST anything, but the point should be clear
// resp, err = http.Post(url, "text/plain", bytes.NewBuffer([]byte("Hey")))
// if err != nil {
// log.Println("error", err)
// return
// }
// defer resp.Body.Close()
secs := time.Since(start).Seconds()
b, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println("error", err)
return
}
w.result <- Result{duration: secs, results: string(b)}
}
func main() {
urls := []string{"https://facebook.com/", "https://twitter.com/", "https://google.com/", "https://youtube.com/", "https://linkedin.com/", "https://wordpress.org/",
"https://instagram.com/", "https://pinterest.com/", "https://wikipedia.org/", "https://wordpress.com/", "https://blogspot.com/", "https://apple.com/",
}
workerNumber := 5
worker := Worker{
semaphore: make(chan struct{}, workerNumber),
result: make(chan Result),
client: http.Client{Timeout: 5 * time.Second},
}
// use a WaitGroup to allow your code to wait for
// all your goroutines to finish
for _, url := range urls {
worker.wg.Add(1)
go worker.MakeRequest(url)
}
// by declaring wait and close in a separate goroutine
// I can get to the for loop below and iterate on the results
// in a non blocking fashion
go func() {
worker.wg.Wait()
worker.Close()
}()
// do something with the results channel
for res := range worker.result {
fmt.Printf("Request took %2.f seconds.\nResults: %s\n\n", res.duration, res.results)
}
}
The channels in channels are nil (no make is executed; you make the slice but not the channels), so any send or receive will block. I'm not sure exactly what you're trying to do here, but that's the basic problem.
See https://golang.org/doc/effective_go.html#channels for an explanation of how channels work.
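As a standalone illustration of that point (this is not the question's code): making the slice only allocates nil channel values, and any send or receive on a nil channel blocks forever until each element is made.

package main

import "fmt"

func main() {
    channels := make([]chan string, 3) // the slice exists, but each element is still a nil channel
    // channels[0] <- "hello"          // this send would block forever

    for i := range channels {
        channels[i] = make(chan string, 1) // now each channel actually exists
    }
    channels[0] <- "hello"
    fmt.Println(<-channels[0])
}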
I have the following function which spins off a given number of goroutines
func (r *Runner) Execute() {
var wg sync.WaitGroup
wg.Add(len(r.pipelines))
for _, p := range r.pipelines {
go executePipeline(p, &wg)
}
wg.Wait()
errs := ....//contains list of errors reported by any/all go routines
}
I was thinking there might be some way with channels, but I can't seem to figure it out.
One way to do this is to use a mutex, if you can make executePipeline return errors:
// ...
for _, p := range r.pipelines {
go func(p pipelineType) {
if err := executePipeline(p, &wg); err != nil {
mu.Lock()
errs = append(errs, err)
mu.Unlock()
}
}(p)
}
To use a channel, you can have a separate goroutine listening for errors:
errCh := make(chan error)
go func() {
for e := range errCh {
errs = append(errs, e)
}
}()
and in the Execute function, make the following changes:
// ...
wg.Add(len(r.pipelines))
for _, p := range r.pipelines {
go func(p pipelineType) {
if err := executePipeline(p, &wg); err != nil {
errCh <- err
}
}(p)
}
wg.Wait()
close(errCh)
You can always use #zerkms' method listed above if the number of goroutines is not high.
Instead of returning an error from executePipeline and using an anonymous function wrapper, you can always make the above changes within the function itself.
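For a fuller picture, here is one way the channel-based Execute could be assembled end to end. This is a sketch: pipelineType is a placeholder, executePipeline is assumed here to take only the pipeline and return an error (the WaitGroup is handled by the wrapper), and the done channel makes sure the collector has finished appending before errs is used.

func (r *Runner) Execute() []error {
    var wg sync.WaitGroup
    var errs []error

    errCh := make(chan error)
    done := make(chan struct{})

    // Collector: the only goroutine touching errs until done is closed.
    go func() {
        defer close(done)
        for e := range errCh {
            errs = append(errs, e)
        }
    }()

    wg.Add(len(r.pipelines))
    for _, p := range r.pipelines {
        go func(p pipelineType) {
            defer wg.Done()
            if err := executePipeline(p); err != nil {
                errCh <- err
            }
        }(p)
    }

    wg.Wait()
    close(errCh) // no more sends; the collector's range loop ends
    <-done       // every error has been appended
    return errs
}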
You can use channels as #Kaveh Shahbazian suggested:
func (r *Runner) Execute() {
pipelineChan := makePipeline(r.pipelines)
for cnt := 0; cnt < len(r.pipelines); cnt++{
//receive from channel
p := <- pipelineChan
//do something with the result
}
}
func makePipeline(pipelines []pipelineType) <-chan pipelineType{
pipelineChan := make(chan pipelineType)
go func(){
for _, p := range pipelines {
go func(p pipelineType){
pipelineChan <- executePipeline(p)
}(p)
}
}()
return pipelineChan
}
Please see this example: https://gist.github.com/steven-ferrer/9b2eeac3eed3f7667e8976f399d0b8ad