Closing a channel after a timeout - go

I have a small Go program that makes a number of requests every tick (1 second). I'm attempting to make these requests concurrently. I want to count and log the number of successful requests made in one tick, and then move on. If requests don't complete in time, I don't want to block the main ticker.
The code below mostly achieves this, but I don't believe I'm closing the channel in concurrentReqs correctly, as any requests that miss the deadline still log with the previous tick. I also believe the ticker in the main function will block waiting for concurrentReqs to finish. I tried moving close(ch) inside the timeout case of my select, but that results in a 'send on closed channel' error.
My understanding is that using contexts with a deadline (probably set in my main ticker) might be a solution for this, but I'm struggling to wrap my head around them, and I wonder if there's something else I can try.
Note: the timeout in concurrentReqs is deliberately low, since I'm testing locally.
package main

import (
    "fmt"
    "net/http"
    "time"
)

type response struct {
    num             int
    statusCode      int
    requestDuration time.Duration
}

func singleRequest(url string, i int, tick int) response {
    start := time.Now()
    client := http.Client{Timeout: 100 * time.Millisecond}
    resp, err := client.Get(url)
    if err != nil {
        // Without this check, a failed request (e.g. a client timeout)
        // would panic below on the nil resp.
        return response{requestDuration: time.Since(start)}
    }
    defer resp.Body.Close()
    fmt.Printf("%d: %d\n", tick, i)
    return response{statusCode: resp.StatusCode, requestDuration: time.Since(start)}
}

func concurrentReqs(url string, reqsPerTick int, tick int) (results []response) {
    ch := make(chan response, reqsPerTick)
    timeout := time.After(20 * time.Millisecond) // deliberately low
    results = make([]response, 0)
    for i := 0; i < reqsPerTick; i++ {
        go func(i, t int) {
            ch <- singleRequest(url, i, t)
        }(i, tick)
    }
    for i := 0; i < reqsPerTick; i++ {
        select {
        case response := <-ch:
            results = append(results, response)
        case <-timeout:
            return
        }
    }
    close(ch)
    return
}

func main() {
    url := "http://end-point.svc/req"
    c := time.Tick(1 * time.Second)
    for next := range c {
        things := concurrentReqs(url, 100, next.Second())
        fmt.Printf("%d: Successful Reqs - %d\n", next.Second(), len(things))
    }
}

I suggest using a context with a timeout for cancellation and timing out. I also think a wait group plus mutex-protected writes to the results slice simplifies things here by eliminating the second loop.
package main

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "sync"
    "time"
)

type response struct {
    num             int
    statusCode      int
    requestDuration time.Duration
}

func singleRequest(ctx context.Context, url string, i int, tick int) (response, error) {
    start := time.Now()
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return response{requestDuration: time.Since(start)}, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return response{requestDuration: time.Since(start)}, err
    }
    defer resp.Body.Close()
    fmt.Printf("%d: %d\n", tick, i)
    return response{statusCode: resp.StatusCode, requestDuration: time.Since(start)}, nil
}

func concurrentReqs(url string, reqsPerTick int, tick int) (results []response) {
    mu := sync.Mutex{}
    results = make([]response, 0)
    ctx, cancel := context.WithTimeout(context.Background(), 20*time.Millisecond)
    defer cancel()
    wg := sync.WaitGroup{}
    for i := 0; i < reqsPerTick; i++ {
        wg.Add(1)
        go func(i, t int) {
            defer wg.Done()
            response, err := singleRequest(ctx, url, i, t)
            if err != nil {
                log.Print(err)
                return
            }
            mu.Lock()
            results = append(results, response)
            mu.Unlock()
        }(i, tick)
    }
    wg.Wait()
    return results
}

func main() {
    url := "http://end-point.svc/req"
    c := time.Tick(1 * time.Second)
    for next := range c {
        // You may want to wrap this in a goroutine to make sure a tick is not
        // skipped; otherwise, if concurrentReqs takes more than a tick for
        // whatever reason, a tick will be skipped.
        things := concurrentReqs(url, 100, next.Second())
        fmt.Printf("%d: Successful Reqs - %d\n", next.Second(), len(things))
    }
}
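Expanding on the comment in main above: a minimal sketch (my addition, assuming the same concurrentReqs as above) of launching each round in its own goroutine so that a slow round never delays the ticker:

for next := range c {
    tick := next.Second()
    go func() {
        // Each round runs independently; a round that overruns its second
        // no longer blocks the next tick (log output may interleave).
        things := concurrentReqs(url, 100, tick)
        fmt.Printf("%d: Successful Reqs - %d\n", tick, len(things))
    }()
}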

Related

Concurrency issues with crawler

I'm trying to build a concurrent crawler based on the Go Tour and some other SO answers on the topic. What I have currently is below, but I think I have two subtle issues here.
Sometimes I get 16 urls in the response and sometimes 17 (see the debug print in main). I know this because when I swap WriteToSlice for Read, 'Read: end, counter = ' is sometimes never reached in Read, and that is always in the runs where I get 16 urls.
I have trouble with the err channel: I get no messages on it, even when I run my main Crawl method with an address like www.golang.org (no valid scheme), where an error should be sent via the err channel.
Concurrency is a really difficult topic; help and advice will be appreciated.
package main

import (
    "fmt"
    "net/http"
    "sync"

    "golang.org/x/net/html"
)

type urlCache struct {
    urls map[string]struct{}
    sync.Mutex
}

func (v *urlCache) Set(url string) bool {
    v.Lock()
    defer v.Unlock()
    _, exist := v.urls[url]
    v.urls[url] = struct{}{}
    return !exist
}

func newURLCache() *urlCache {
    return &urlCache{
        urls: make(map[string]struct{}),
    }
}

type results struct {
    data chan string
    err  chan error
}

func newResults() *results {
    return &results{
        data: make(chan string, 1),
        err:  make(chan error, 1),
    }
}

func (r *results) close() {
    close(r.data)
    close(r.err)
}

func (r *results) WriteToSlice(s *[]string) {
    for {
        select {
        case data := <-r.data:
            *s = append(*s, data)
        case err := <-r.err:
            fmt.Println("e ", err)
        }
    }
}

func (r *results) Read() {
    fmt.Println("Read: start")
    counter := 0
    for c := range r.data {
        fmt.Println(c)
        counter++
    }
    fmt.Println("Read: end, counter = ", counter)
}

func crawl(url string, depth int, wg *sync.WaitGroup, cache *urlCache, res *results) {
    defer wg.Done()
    if depth == 0 || !cache.Set(url) {
        return
    }
    response, err := http.Get(url)
    if err != nil {
        res.err <- err
        return
    }
    defer response.Body.Close()
    node, err := html.Parse(response.Body)
    if err != nil {
        res.err <- err
        return
    }
    urls := grablUrls(response, node)
    res.data <- url
    for _, url := range urls {
        wg.Add(1)
        go crawl(url, depth-1, wg, cache, res)
    }
}

func grablUrls(resp *http.Response, node *html.Node) []string {
    var f func(*html.Node) []string
    var results []string
    f = func(n *html.Node) []string {
        if n.Type == html.ElementNode && n.Data == "a" {
            for _, a := range n.Attr {
                if a.Key != "href" {
                    continue
                }
                link, err := resp.Request.URL.Parse(a.Val)
                if err != nil {
                    continue
                }
                results = append(results, link.String())
            }
        }
        for c := n.FirstChild; c != nil; c = c.NextSibling {
            f(c)
        }
        return results
    }
    res := f(node)
    return res
}

// Crawl ...
func Crawl(url string, depth int) []string {
    wg := &sync.WaitGroup{}
    output := &[]string{}
    visited := newURLCache()
    results := newResults()
    defer results.close()
    wg.Add(1)
    go crawl(url, depth, wg, visited, results)
    go results.WriteToSlice(output)
    // go results.Read()
    wg.Wait()
    return *output
}

func main() {
    r := Crawl("https://www.golang.org", 2)
    // r := Crawl("www.golang.org", 2) // no scheme: an error should be generated and sent via err
    fmt.Println(len(r))
}
Both your questions 1 and 2 are a result of the same bug.
In Crawl() you are not waiting for this goroutine to finish: go results.WriteToSlice(output). When the last crawl() call finishes, the wait group is released, and the output is returned and printed before WriteToSlice is done with the data and err channels. So what happens is this:
1. crawl() finishes, placing data in results.data and results.err.
2. The wait group's Wait() unblocks, causing main() to print the length of the result []string.
3. Only then does WriteToSlice receive the last data (or err) item from the channel.
You need to return from Crawl() not only when the data is done being written to the channel, but also when the channel is done being read in its entirety (including the buffer). A good way to do this is to close channels when you are sure you are done with them. By organizing your code this way, you can block on the goroutine that is draining the channels: instead of using the wait group to release main, you wait until the channels are 100% done.
See this Go by Example page: https://gobyexample.com/closing-channels. Remember that when you close a channel, it can still be read from until the last item is taken. So you can close a buffered channel, and the reader will still get all the items that were queued in it.
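A tiny self-contained illustration of that point (my sketch, not from the original answer):

package main

import "fmt"

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    close(ch) // no further sends are allowed...

    // ...but the two buffered items are still delivered,
    // and range exits once the buffer is drained.
    for v := range ch {
        fmt.Println(v) // prints 1, then 2
    }
}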
There are some structural changes that could make this cleaner, but here is a quick way to fix your program: change Crawl to block on WriteToSlice, close the data channel when the crawl function finishes, and wait for WriteToSlice to finish.
// Crawl ...
func Crawl(url string, depth int) []string {
    wg := &sync.WaitGroup{}
    output := &[]string{}
    visited := newURLCache()
    results := newResults()

    go func() {
        wg.Add(1)
        go crawl(url, depth, wg, visited, results)
        wg.Wait()
        // All data is written; this makes WriteToSlice() unblock
        close(results.data)
    }()

    // This will block until results.data is closed
    results.WriteToSlice(output)
    close(results.err)
    return *output
}
Then, in WriteToSlice, you have to check for the closed channel to exit the for loop:
func (r *results) WriteToSlice(s *[]string) {
    for {
        select {
        case data, open := <-r.data:
            if !open {
                return // All data done
            }
            *s = append(*s, data)
        case err := <-r.err:
            fmt.Println("e ", err)
        }
    }
}
Here is the full code: https://play.golang.org/p/GBpGk-lzrhd (it won't run in the playground, since it makes network requests)

Panic while trying to stop creating more goroutines

I'm trying to parallelize calls to an API to speed things up, but I'm facing a problem where I need to stop spinning up goroutines to call the API if I receive an error from one of the goroutine calls. Since I am closing the channel twice (once in the error-handling path and once when execution is done), I'm getting a panic: close of closed channel error. Is there an elegant way to handle this without the program panicking? Any help would be appreciated!
The following is the pseudo-code snippet.
for i := 0; i < someNumber; i++ {
    go func(num int, q chan<- bool) {
        value, err := callAnAPI()
        if err != nil {
            close(q) // exit from the for-loop
        }
        // process the value here
        wg.Done()
    }(i, quit)
}
close(quit)
To mock my scenario, I have written the following program. Is there any way to exit the for-loop gracefully once the condition (commented out) is satisfied?
package main

import (
    "fmt"
    "sync"
)

func receive(q <-chan bool) {
    for {
        select {
        case <-q:
            return
        }
    }
}

func main() {
    quit := make(chan bool)
    var result []int
    wg := &sync.WaitGroup{}
    wg.Add(10)
    for i := 0; i < 10; i++ {
        go func(num int, q chan<- bool) {
            //if num == 5 {
            //	close(q)
            //}
            // note: appending to a shared slice from many goroutines is itself a data race
            result = append(result, num)
            wg.Done()
        }(i, quit)
    }
    close(quit)
    receive(quit)
    wg.Wait()
    fmt.Printf("Result: %v", result)
}
You can use the context package, which defines the Context type; it carries deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes.
package main

import (
    "context"
    "fmt"
    "sync"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // cancel when we are finished, even without error

    wg := &sync.WaitGroup{}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(num int) {
            defer wg.Done()
            select {
            case <-ctx.Done():
                return // Error occurred somewhere, terminate
            default: // avoid blocking
            }
            // your code here
            // res, err := callAnAPI()
            // if err != nil {
            //     cancel()
            //     return
            // }
            if num == 5 {
                cancel()
                return
            }
            fmt.Println(num)
        }(i)
    }
    wg.Wait()
    fmt.Println(ctx.Err())
}
Try on: Go Playground
You can also take a look at this answer for a more detailed explanation.
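One more alternative worth knowing (my addition, not part of the answer above): if all you need is for the close itself to be safe to call from several goroutines, sync.Once makes it idempotent:

package main

import (
    "fmt"
    "sync"
)

func main() {
    quit := make(chan bool)
    var once sync.Once
    // stop may be called any number of times; the channel closes exactly once.
    stop := func() { once.Do(func() { close(quit) }) }

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(num int) {
            defer wg.Done()
            if num >= 5 { // several goroutines hit the stop condition
                stop() // no "close of closed channel" panic
            }
        }(i)
    }
    wg.Wait()
    stop() // still safe even though the channel is already closed
    fmt.Println("done")
}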

Confusion regarding channel directions and blocking in Go

In a function definition, if a channel is an argument without a direction, does it have to send or receive something?
package main

import (
    "bytes"
    "fmt"
    "net/http"
    "os"
    "strconv"
    "time"
)

func makeRequest(url string, ch chan<- string, results chan<- string) {
    start := time.Now()
    resp, err := http.Get(url)
    defer resp.Body.Close()
    if err != nil {
        fmt.Printf("%v", err)
    }
    resp, err = http.Post(url, "text/plain", bytes.NewBuffer([]byte("Hey")))
    defer resp.Body.Close()
    secs := time.Since(start).Seconds()
    if err != nil {
        fmt.Printf("%v", err)
    }
    // Cannot move past this.
    ch <- fmt.Sprintf("%f", secs)
    results <- <-ch
}

func MakeRequestHelper(url string, ch chan string, results chan string, iterations int) {
    for i := 0; i < iterations; i++ {
        makeRequest(url, ch, results)
    }
    for i := 0; i < iterations; i++ {
        fmt.Println(<-ch)
    }
}

func main() {
    args := os.Args[1:]
    threadString := args[0]
    iterationString := args[1]
    url := args[2]
    threads, err := strconv.Atoi(threadString)
    if err != nil {
        fmt.Printf("%v", err)
    }
    iterations, err := strconv.Atoi(iterationString)
    if err != nil {
        fmt.Printf("%v", err)
    }
    channels := make([]chan string, 100)
    for i := range channels {
        channels[i] = make(chan string)
    }
    // results aggregates all the things received by channels in all goroutines
    results := make(chan string, iterations*threads)
    for i := 0; i < threads; i++ {
        go MakeRequestHelper(url, channels[i], results, iterations)
    }
    resultSlice := make([]string, threads*iterations)
    for i := 0; i < threads*iterations; i++ {
        resultSlice[i] = <-results
    }
}
In the above code,
ch <- ... or <-results
seems to be blocking every goroutine that executes makeRequest.
I am new to Go's concurrency model. I understand that sending to and receiving from a channel blocks, but I find it difficult to tell what is blocking what in this code.
I'm not really sure what you are doing... It seems really convoluted. I suggest you read up on how to use channels:
https://tour.golang.org/concurrency/2
That being said, you have so much going on in your code that it was much easier to gut it down to something a bit simpler (it can be simplified further). I left comments to explain the code.
package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "sync"
    "time"
)

// using structs is a nice way to organize your code
type Worker struct {
    wg        sync.WaitGroup
    semaphore chan struct{}
    result    chan Result
    client    http.Client
}

// group return values so that you don't have to send to many channels
type Result struct {
    duration float64
    results  string
}

// closing your channels will stop the for loop in main
func (w *Worker) Close() {
    close(w.semaphore)
    close(w.result)
}

func (w *Worker) MakeRequest(url string) {
    // a semaphore is a simple way to rate limit the number of goroutines
    // running at any single point in time; google them, Go uses them often
    w.semaphore <- struct{}{}
    defer func() {
        w.wg.Done()
        <-w.semaphore
    }()

    start := time.Now()
    resp, err := w.client.Get(url)
    if err != nil {
        log.Println("error", err)
        return
    }
    defer resp.Body.Close()

    // I don't have any examples where I need to also POST anything,
    // but the point should be made
    // resp, err = http.Post(url, "text/plain", bytes.NewBuffer([]byte("Hey")))
    // if err != nil {
    //     log.Println("error", err)
    //     return
    // }
    // defer resp.Body.Close()

    secs := time.Since(start).Seconds()
    b, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Println("error", err)
        return
    }
    w.result <- Result{duration: secs, results: string(b)}
}

func main() {
    urls := []string{
        "https://facebook.com/", "https://twitter.com/", "https://google.com/",
        "https://youtube.com/", "https://linkedin.com/", "https://wordpress.org/",
        "https://instagram.com/", "https://pinterest.com/", "https://wikipedia.org/",
        "https://wordpress.com/", "https://blogspot.com/", "https://apple.com/",
    }
    workerNumber := 5
    worker := Worker{
        semaphore: make(chan struct{}, workerNumber),
        result:    make(chan Result),
        client:    http.Client{Timeout: 5 * time.Second},
    }
    // use a wait group to allow your code to wait for
    // all your goroutines to finish
    for _, url := range urls {
        worker.wg.Add(1)
        go worker.MakeRequest(url)
    }
    // by running Wait and Close in a separate goroutine,
    // we can get to the for loop below and iterate on the results
    // in a non-blocking fashion
    go func() {
        worker.wg.Wait()
        worker.Close()
    }()
    // do something with the results channel
    for res := range worker.result {
        fmt.Printf("Request took %2.f seconds.\nResults: %s\n\n", res.duration, res.results)
    }
}
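The buffered-channel semaphore used in MakeRequest is a pattern worth isolating; a minimal sketch of it on its own (my illustration, independent of the code above):

package main

import (
    "fmt"
    "sync"
)

func main() {
    sem := make(chan struct{}, 3) // capacity = max goroutines working at once
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            sem <- struct{}{}        // acquire a slot; blocks while 3 are busy
            defer func() { <-sem }() // release the slot
            fmt.Println("working on", n)
        }(i)
    }
    wg.Wait()
}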
The channels in channels are nil (no make is executed; you make the slice but not the channels), so any send or receive will block. I'm not sure exactly what you're trying to do here, but that's the basic problem.
See https://golang.org/doc/effective_go.html#channels for an explanation of how channels work.
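To illustrate that point (my sketch): any send on a nil channel blocks forever, and with no other runnable goroutine the runtime reports a deadlock:

package main

func main() {
    var ch chan string // a nil channel: what you get when make is never called
    ch <- "hello"      // blocks forever: fatal error: all goroutines are asleep - deadlock!
}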

Is there a better way to do parallel programming than this?

I made this script to get the follower counts of "influencers" from Instagram.
The "runtime" number I am getting from it is between 550-750ms.
That's not bad, but I am wondering whether it could be better (I'm a golang noob - I've only been learning it for 3 weeks).
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "sync"
    "time"
)

type user struct {
    User userData `json:"user"`
}

type userData struct {
    Followers count `json:"followed_by"`
}

type count struct {
    Count int `json:"count"`
}

func getFollowerCount(in <-chan string) <-chan int {
    out := make(chan int)
    go func() {
        for un := range in {
            URL := "https://www.instagram.com/" + un + "/?__a=1"
            resp, err := http.Get(URL)
            if err != nil {
                // handle error
                fmt.Println(err)
            }
            defer resp.Body.Close()
            body, err := ioutil.ReadAll(resp.Body)
            var u user
            err = json.Unmarshal(body, &u)
            if err != nil {
                fmt.Println(err)
            }
            // return u.User.Followers.Count
            out <- u.User.Followers.Count
        }
        close(out)
    }()
    return out
}

func merge(cs ...<-chan int) <-chan int {
    var wg sync.WaitGroup
    out := make(chan int)
    output := func(c <-chan int) {
        for n := range c {
            out <- n
        }
        wg.Done()
    }
    wg.Add(len(cs))
    for _, c := range cs {
        go output(c)
    }
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

func gen(users ...string) <-chan string {
    out := make(chan string)
    go func() {
        for _, u := range users {
            out <- u
        }
        close(out)
    }()
    return out
}

func main() {
    start := time.Now()
    fmt.Println("STARTING UP")
    usrs := []string{"kanywest", "kimkardashian", "groovyq", "kendricklamar", "barackobama", "asaprocky", "champagnepapi", "eminem", "drdre", "g_eazy", "skrillex"}
    in := gen(usrs...)
    d1 := getFollowerCount(in)
    d2 := getFollowerCount(in)
    d3 := getFollowerCount(in)
    d4 := getFollowerCount(in)
    d5 := getFollowerCount(in)
    d6 := getFollowerCount(in)
    d7 := getFollowerCount(in)
    d8 := getFollowerCount(in)
    d9 := getFollowerCount(in)
    d10 := getFollowerCount(in)
    for d := range merge(d1, d2, d3, d4, d5, d6, d7, d8, d9, d10) {
        fmt.Println(d)
    }
    elapsed := time.Since(start)
    log.Println("runtime", elapsed)
}
I agree with jeevatkm; there are numerous ways to implement your task and improve it. Some notes:
1. Separate the function that actually does the job (i.e. fetches the result from the remote service) from the function responsible for coordinating all the jobs.
2. It is good practice to propagate an error to the caller rather than consuming (handling) it inside the called function.
3. Since the jobs are done in parallel, the results may be returned in an undetermined order. Thus, besides the follower count, each result should contain other related information, such as the username.
The following implementation may be one alternative:
package main

import (
    "encoding/json"
    "errors"
    "fmt"
    "net/http"
    "sync"
    "time"
)

type user struct {
    User userData `json:"user"`
}

type userData struct {
    Followers count `json:"followed_by"`
}

type count struct {
    Count int `json:"count"`
}

// follower wraps username, count, and error. See (3) above.
type follower struct {
    Username string
    Count    int
    Error    error
}

// GetFollowerCountFunc is a function for
// fetching the follower count of a specific user.
type GetFollowerCountFunc func(string) (int, error)

// Mock function for testing
func mockGetFollowerCountFor(userName string) (int, error) {
    if len(userName) < 9 {
        return -1, errors.New("mocking error in get follower count")
    }
    return 10, nil
}

// Fetch the result from the remote service. See (1) above.
func getFollowerCountFor(userName string) (int, error) {
    URL := "https://www.instagram.com/" + userName + "/?__a=1"
    resp, err := http.Get(URL)
    if err != nil {
        return -1, err
    }
    defer resp.Body.Close()
    var u user
    if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
        return -1, err
    }
    return u.User.Followers.Count, nil
}

// Function that coordinates/distributes the jobs. See (1), (2) above.
func getFollowersAsync(users []string, fn GetFollowerCountFunc) <-chan follower {
    // Allocate a buffered channel for the results,
    // so every worker can send its result without blocking.
    followers := make(chan follower, len(users))
    // A smaller buffer is also valid:
    // followers := make(chan follower, 5)

    // Distribute the jobs in a goroutine (asynchronously).
    go func() {
        var wg sync.WaitGroup
        wg.Add(len(users))
        for _, u := range users {
            // Run a *parallel* worker
            go func(uid string) {
                cnt, err := fn(uid)
                if err != nil {
                    followers <- follower{uid, -1, err}
                } else {
                    followers <- follower{uid, cnt, nil}
                }
                wg.Done()
            }(u)
        }
        // Wait for all workers to finish,
        wg.Wait()
        // then close the channel so the `for ... range` exits gracefully.
        close(followers)
    }()
    // This function returns immediately.
    return followers
}

func main() {
    start := time.Now()
    fmt.Println("STARTING UP")
    usrs := []string{"kanywest", "kimkardashian", "groovyq", "kendricklamar", "barackobama", "asaprocky", "champagnepapi", "eminem", "drdre", "g_eazy", "skrillex"}
    results := getFollowersAsync(usrs, getFollowerCountFor)
    // For TESTING:
    // results := getFollowersAsync(usrs, mockGetFollowerCountFor)
    for r := range results {
        if r.Error != nil {
            fmt.Printf("Error for user '%s' => %v\n", r.Username, r.Error)
        } else {
            fmt.Printf("%s: %d\n", r.Username, r.Count)
        }
    }
    elapsed := time.Since(start)
    fmt.Println("runtime", elapsed)
}
Welcome to Go, happy learning.
You're doing well; there are many ways to improve your program (e.g. using the json decoder directly, fewer channels, etc.). The following is one approach. Execution time is between 352-446ms (take that with a grain of salt, since a network call is involved and it will vary with server response time).
Your updated code:
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "sync"
    "time"
)

type user struct {
    User userData `json:"user"`
}

type userData struct {
    Followers count `json:"followed_by"`
}

type count struct {
    Count int `json:"count"`
}

func getFollowerCount(username string, result chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    reqURL := "https://www.instagram.com/" + username + "/?__a=1"
    resp, err := http.Get(reqURL)
    if err != nil {
        log.Println(err)
        return
    }
    defer resp.Body.Close()
    var u user
    if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
        log.Println(err)
        return
    }
    result <- u.User.Followers.Count
}

func execute(users []string, result chan<- int) {
    wg := &sync.WaitGroup{}
    for _, username := range users {
        wg.Add(1)
        go getFollowerCount(username, result, wg)
    }
    wg.Wait()
    result <- -1
}

func main() {
    start := time.Now()
    fmt.Println("STARTING UP")
    usrs := []string{"kanywest", "kimkardashian", "groovyq", "kendricklamar", "barackobama", "asaprocky", "champagnepapi", "eminem", "drdre", "g_eazy", "skrillex"}
    result := make(chan int)
    go execute(usrs, result)
    for v := range result {
        if v == -1 {
            break
        }
        fmt.Println(v)
    }
    elapsed := time.Since(start)
    fmt.Println("runtime:", elapsed)
}
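A small design note (my addition, not part of the answer above): the -1 sentinel works, but closing the channel after wg.Wait() signals completion without reserving a magic value, and lets the range in main exit on its own. A sketch of execute under that approach:

func execute(users []string, result chan<- int) {
    wg := &sync.WaitGroup{}
    for _, username := range users {
        wg.Add(1)
        go getFollowerCount(username, result, wg)
    }
    wg.Wait()
    close(result) // the for ... range in main exits once the channel is drained
}

The loop in main then shrinks to a plain for v := range result { fmt.Println(v) }.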

Golang program hangs without finishing execution

I have the following Go program:
package main

import (
    "fmt"
    "net/http"
    "time"
)

var urls = []string{
    "http://www.google.com/",
    "http://golang.org/",
    "http://yahoo.com/",
}

type HttpResponse struct {
    url      string
    response *http.Response
    err      error
    status   string
}

func asyncHttpGets(url string, ch chan *HttpResponse) {
    client := http.Client{}
    if url == "http://www.google.com/" {
        time.Sleep(500 * time.Millisecond) // google is down
    }
    fmt.Printf("Fetching %s \n", url)
    resp, err := client.Get(url)
    u := &HttpResponse{url, resp, err, "fetched"}
    ch <- u
    fmt.Println("sent to chan")
}

func main() {
    fmt.Println("start")
    ch := make(chan *HttpResponse, len(urls))
    for _, url := range urls {
        go asyncHttpGets(url, ch)
    }
    for i := range ch {
        fmt.Println(i)
    }
    fmt.Println("Im done")
}
Run it on Playground
However, when I run it, it hangs (i.e. the last part, which ought to print "Im done", doesn't run).
Here's the terminal output:
$ go run get.go
start
Fetching http://yahoo.com/
Fetching http://golang.org/
Fetching http://www.google.com/
sent to chan
&{http://www.google.com/ 0xc820144120 fetched}
sent to chan
&{http://golang.org/ 0xc82008b710 fetched}
sent to chan
&{http://yahoo.com/ 0xc82008b7a0 fetched}
The problem is that ranging over a channel in a for loop will continue forever unless the channel is closed. If you want to read precisely len(urls) values from the channel, you should loop that many times:
for i := 0; i < len(urls); i++ {
    fmt.Println(<-ch)
}
Another good (dirty, devious) trick is to use a sync.WaitGroup: increment it per goroutine, monitor it with Wait, and once it's done, close your channel, allowing the next block of code to run. The reason I offer this approach is that it avoids a static count like len(urls) in the loop, so you can have a dynamic slice that might change.
The reason Wait and close are in their own goroutine is so that your code can reach the for loop and range over your channel.
package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

var urls = []string{
    "http://www.google.com/",
    "http://golang.org/",
    "http://yahoo.com/",
}

type HttpResponse struct {
    url      string
    response *http.Response
    err      error
    status   string
}

func asyncHttpGets(url string, ch chan *HttpResponse, wg *sync.WaitGroup) {
    client := http.Client{}
    if url == "http://www.google.com/" {
        time.Sleep(500 * time.Millisecond) // google is down
    }
    fmt.Printf("Fetching %s \n", url)
    resp, err := client.Get(url)
    u := &HttpResponse{url, resp, err, "fetched"}
    ch <- u
    fmt.Println("sent to chan")
    wg.Done()
}

func main() {
    fmt.Println("start")
    ch := make(chan *HttpResponse, len(urls))
    var wg sync.WaitGroup
    for _, url := range urls {
        wg.Add(1)
        go asyncHttpGets(url, ch, &wg)
    }
    go func() {
        wg.Wait()
        close(ch)
    }()
    for i := range ch {
        fmt.Println(i)
    }
    fmt.Println("Im done")
}
